From 2a6855c1bee93ff5fb6ed06f531d8c671792caac Mon Sep 17 00:00:00 2001 From: Ankit Kala Date: Mon, 24 Jun 2024 21:27:39 +0530 Subject: [PATCH 001/167] Added details for indexing triage meeting (#14518) Signed-off-by: Ankit Kala --- TRIAGING.md | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/TRIAGING.md b/TRIAGING.md index 90842cd8e9393..c7c07a8ce30bd 100644 --- a/TRIAGING.md +++ b/TRIAGING.md @@ -14,7 +14,7 @@ Each meeting we seek to address all new issues. However, should we run out of ti ### How do I join a Triage meeting? - Check the [OpenSearch Meetup Group](https://www.meetup.com/opensearch/) for the latest schedule and details for joining each meeting. Each component area has its own meetup series: [Search](https://www.meetup.com/opensearch/events/300929493/), [Storage](https://www.meetup.com/opensearch/events/299907409/), [Cluster Manager](https://www.meetup.com/opensearch/events/301082218/), and [Core](https://www.meetup.com/opensearch/events/301061009/). + Check the [OpenSearch Meetup Group](https://www.meetup.com/opensearch/) for the latest schedule and details for joining each meeting. Each component area has its own meetup series: [Search](https://www.meetup.com/opensearch/events/300929493/), [Storage](https://www.meetup.com/opensearch/events/299907409/), [Cluster Manager](https://www.meetup.com/opensearch/events/301082218/), [Indexing](https://www.meetup.com/opensearch/events/301734024/), and [Core](https://www.meetup.com/opensearch/events/301061009/). After joining the virtual meeting, you can enable your video / voice to join the discussion. If you do not have a webcam or microphone available, you can still join in via the text chat. @@ -29,9 +29,10 @@ Meeting structure may vary slightly, but the general structure is as follows: 3. **Announcements:** Any announcements will be made at the beginning of the meeting. 4. **Review of New Issues:** We start by reviewing all untriaged issues. 
Each meeting has a label-based search to find relevant issues: - [Search](https://github.com/opensearch-project/OpenSearch/issues?q=is%3Aissue+is%3Aopen+label%3Auntriaged+label%3A%22Search%22%2C%22Search%3ARemote+Search%22%2C%22Search%3AResiliency%22%2C%22Search%3APerformance%22%2C%22Search%3ARelevance%22%2C%22Search%3AAggregations%22%2C%22Search%3AQuery+Capabilities%22%2C%22Search%3AQuery+Insights%22%2C%22Search%3ASearchable+Snapshots%22%2C%22Search%3AUser+Behavior+Insights%22) + - [Indexing](https://github.com/opensearch-project/OpenSearch/issues?q=is%3Aissue+is%3Aopen+label%3Auntriaged+label%3A%22Indexing%3AReplication%22%2C%22Indexing%22%2C%22Indexing%3APerformance%22%2C%22Indexing+%26+Search%22%2C) - [Storage](https://github.com/opensearch-project/OpenSearch/issues?q=is%3Aissue+is%3Aopen+label%3Auntriaged+label%3AStorage%2C%22Storage%3AResiliency%22%2C%22Storage%3APerformance%22%2C%22Storage%3ASnapshots%22%2C%22Storage%3ARemote%22%2C%22Storage%3ADurability%22) - [Cluster Manager](https://github.com/opensearch-project/OpenSearch/issues?q=is%3Aissue+is%3Aopen+label%3Auntriaged+label%3A%22Cluster+Manager%22%2C%22ClusterManager%3ARemoteState%22) - - [Core](https://github.com/opensearch-project/OpenSearch/issues?q=is%3Aissue+is%3Aopen+label%3Auntriaged+-label%3A%22Search%22%2C%22Search%3ARemote+Search%22%2C%22Search%3AResiliency%22%2C%22Search%3APerformance%22%2C%22Search%3ARelevance%22%2C%22Search%3AAggregations%22%2C%22Search%3AQuery+Capabilities%22%2C%22Search%3AQuery+Insights%22%2C%22Search%3ASearchable+Snapshots%22%2C%22Search%3AUser+Behavior+Insights%22%2C%22Storage%22%2C%22Storage%3AResiliency%22%2C%22Storage%3APerformance%22%2C%22Storage%3ASnapshots%22%2C%22Storage%3ARemote%22%2C%22Storage%3ADurability%22%2C%22Cluster+Manager%22%2C%22ClusterManager%3ARemoteState%22) + - [Core](https://github.com/opensearch-project/OpenSearch/issues?q=is%3Aissue+is%3Aopen+label%3Auntriaged+-label%3A%22Search%22%2C%22Search%3ARemote+Search%22%2C%22Search%3AResiliency%22%2C%22Search%3APerformance%22%2C%22Search%3ARelevance%22%2C%22Search%3AAggregations%22%2C%22Search%3AQuery+Capabilities%22%2C%22Search%3AQuery+Insights%22%2C%22Search%3ASearchable+Snapshots%22%2C%22Search%3AUser+Behavior+Insights%22%2C%22Storage%22%2C%22Storage%3AResiliency%22%2C%22Storage%3APerformance%22%2C%22Storage%3ASnapshots%22%2C%22Storage%3ARemote%22%2C%22Storage%3ADurability%22%2C%22Cluster+Manager%22%2C%22ClusterManager%3ARemoteState%22%2C%22Indexing%3AReplication%22%2C%22Indexing%22%2C%22Indexing%3APerformance%22%2C%22Indexing+%26+Search%22) 5. **Attendee Requests:** An opportunity for any meeting member to request consideration of an issue or pull request. 6. **Open Discussion:** Attendees can bring up any topics not already covered by filed issues or pull requests. 7. **Review of Old Untriaged Issues:** Time permitting, each meeting will look at all [untriaged issues older than 14 days](https://github.com/opensearch-project/OpenSearch/issues?q=is%3Aissue+is%3Aopen+label%3Auntriaged+created%3A%3C2024-05-20) to prevent issues from falling through the cracks (note the GitHub API does not allow for relative times, so the date in this search must be updated every meeting). 
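A note before the next patch, on the arithmetic it guards: FsProbe subtracts the unused portion of the reserved file cache from a path's available bytes, and when other files (such as local indices) already occupy the reserved space, that subtraction can drive the reported value negative. Below is a minimal standalone sketch of the clamping idea the patch adds — the class and method names here are illustrative only, not part of FsProbe:

public final class AvailableSpaceSketch {
    // Mirrors the guard introduced in the patch below: subtract the unused
    // file-cache reservation, then clamp at zero so fs stats never go negative.
    static long adjustedAvailable(long available, long fileCacheReserved, long fileCacheUtilized) {
        long adjusted = available - (fileCacheReserved - fileCacheUtilized);
        return Math.max(adjusted, 0L);
    }

    public static void main(String[] args) {
        long gib = 1024L * 1024L * 1024L;
        // 50 GiB free on disk, 100 GiB reserved for the cache, 10 GiB of it used:
        // the naive result is -40 GiB; the clamp reports 0 instead.
        System.out.println(adjustedAvailable(50 * gib, 100 * gib, 10 * gib)); // prints 0
    }
}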
From 1da19d3b5bf4297e286e3fbaacec4704903bfc55 Mon Sep 17 00:00:00 2001 From: panguixin Date: Tue, 25 Jun 2024 00:45:12 +0800 Subject: [PATCH 002/167] Fix fs info reporting negative available size (#11573) * fix fs info reporting negative available size Signed-off-by: panguixin * change log Signed-off-by: panguixin * fix test Signed-off-by: panguixin * fix test Signed-off-by: panguixin * spotless Signed-off-by: panguixin --------- Signed-off-by: panguixin Signed-off-by: Andrew Ross Co-authored-by: Andrew Ross --- CHANGELOG.md | 1 + .../org/opensearch/monitor/fs/FsProbe.java | 4 ++ .../opensearch/monitor/fs/FsProbeTests.java | 41 +++++++++++++++++++ 3 files changed, 46 insertions(+) diff --git a/CHANGELOG.md b/CHANGELOG.md index ba1279b4cf458..3f5c1c01f8dc0 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -41,6 +41,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), - Fixed rest-high-level client searchTemplate & mtermVectors endpoints to have a leading slash ([#14465](https://github.com/opensearch-project/OpenSearch/pull/14465)) - Write shard level metadata blob when snapshotting searchable snapshot indexes ([#13190](https://github.com/opensearch-project/OpenSearch/pull/13190)) - Fix aggs result of NestedAggregator with sub NestedAggregator ([#13324](https://github.com/opensearch-project/OpenSearch/pull/13324)) +- Fix fs info reporting negative available size ([#11573](https://github.com/opensearch-project/OpenSearch/pull/11573)) - Add ListPitInfo::getKeepAlive() getter ([#14495](https://github.com/opensearch-project/OpenSearch/pull/14495)) ### Security diff --git a/server/src/main/java/org/opensearch/monitor/fs/FsProbe.java b/server/src/main/java/org/opensearch/monitor/fs/FsProbe.java index f4731a4a34373..f93cb63ff1f0a 100644 --- a/server/src/main/java/org/opensearch/monitor/fs/FsProbe.java +++ b/server/src/main/java/org/opensearch/monitor/fs/FsProbe.java @@ -82,6 +82,10 @@ public FsInfo stats(FsInfo previous) throws IOException { paths[i].fileCacheReserved = adjustForHugeFilesystems(dataLocations[i].fileCacheReservedSize.getBytes()); paths[i].fileCacheUtilized = adjustForHugeFilesystems(fileCache.usage().usage()); paths[i].available -= (paths[i].fileCacheReserved - paths[i].fileCacheUtilized); + // occurs if reserved file cache space is occupied by other files, like local indices + if (paths[i].available < 0) { + paths[i].available = 0; + } } } FsInfo.IoStats ioStats = null; diff --git a/server/src/test/java/org/opensearch/monitor/fs/FsProbeTests.java b/server/src/test/java/org/opensearch/monitor/fs/FsProbeTests.java index 59a888c665be7..e2e09d5ce63fe 100644 --- a/server/src/test/java/org/opensearch/monitor/fs/FsProbeTests.java +++ b/server/src/test/java/org/opensearch/monitor/fs/FsProbeTests.java @@ -58,6 +58,7 @@ import java.util.function.Function; import java.util.function.Supplier; +import static org.opensearch.monitor.fs.FsProbe.adjustForHugeFilesystems; import static org.hamcrest.CoreMatchers.equalTo; import static org.hamcrest.Matchers.emptyOrNullString; import static org.hamcrest.Matchers.greaterThan; @@ -162,6 +163,46 @@ public void testFsCacheInfo() throws IOException { } } + public void testFsInfoWhenFileCacheOccupied() throws IOException { + Settings settings = Settings.builder().putList("node.roles", "search", "data").build(); + try (NodeEnvironment env = newNodeEnvironment(settings)) { + // Use the total space as reserved space to simulate the situation where the cache space is occupied + final long 
totalSpace = adjustForHugeFilesystems(env.fileCacheNodePath().fileStore.getTotalSpace()); + ByteSizeValue gbByteSizeValue = new ByteSizeValue(totalSpace, ByteSizeUnit.BYTES); + env.fileCacheNodePath().fileCacheReservedSize = gbByteSizeValue; + FileCache fileCache = FileCacheFactory.createConcurrentLRUFileCache( + gbByteSizeValue.getBytes(), + 16, + new NoopCircuitBreaker(CircuitBreaker.REQUEST) + ); + + FsProbe probe = new FsProbe(env, fileCache); + FsInfo stats = probe.stats(null); + assertNotNull(stats); + assertTrue(stats.getTimestamp() > 0L); + FsInfo.Path total = stats.getTotal(); + assertNotNull(total); + assertTrue(total.total > 0L); + assertTrue(total.free > 0L); + assertTrue(total.fileCacheReserved > 0L); + + for (FsInfo.Path path : stats) { + assertNotNull(path); + assertFalse(path.getPath().isEmpty()); + assertFalse(path.getMount().isEmpty()); + assertFalse(path.getType().isEmpty()); + assertTrue(path.total > 0L); + assertTrue(path.free > 0L); + + if (path.fileCacheReserved > 0L) { + assertEquals(0L, path.available); + } else { + assertTrue(path.available > 0L); + } + } + } + } + public void testFsInfoOverflow() throws Exception { final FsInfo.Path pathStats = new FsInfo.Path( "/foo/bar", From 212efd76637bddebf9dac85a0aa5eadebd9456cb Mon Sep 17 00:00:00 2001 From: Rishabh Maurya Date: Mon, 24 Jun 2024 10:35:04 -0700 Subject: [PATCH 003/167] Fix a race condition in Derived Field parsing from search request (#14445) Signed-off-by: Rishabh Maurya --- CHANGELOG.md | 1 + .../mapper/DefaultDerivedFieldResolver.java | 28 ++++++++++++++++++- 2 files changed, 28 insertions(+), 1 deletion(-) diff --git a/CHANGELOG.md b/CHANGELOG.md index 3f5c1c01f8dc0..55728a58eca03 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -9,6 +9,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), - [Remote Store] Rate limiter for remote store low priority uploads ([#14374](https://github.com/opensearch-project/OpenSearch/pull/14374/)) - Apply the date histogram rewrite optimization to range aggregation ([#13865](https://github.com/opensearch-project/OpenSearch/pull/13865)) - [Writable Warm] Add composite directory implementation and integrate it with FileCache ([12782](https://github.com/opensearch-project/OpenSearch/pull/12782)) +- Fix race condition while parsing derived fields from search definition ([14445](https://github.com/opensearch-project/OpenSearch/pull/14445)) ### Dependencies - Bump `org.gradle.test-retry` from 1.5.8 to 1.5.9 ([#13442](https://github.com/opensearch-project/OpenSearch/pull/13442)) diff --git a/server/src/main/java/org/opensearch/index/mapper/DefaultDerivedFieldResolver.java b/server/src/main/java/org/opensearch/index/mapper/DefaultDerivedFieldResolver.java index c577a4117247b..4dd17703b6f55 100644 --- a/server/src/main/java/org/opensearch/index/mapper/DefaultDerivedFieldResolver.java +++ b/server/src/main/java/org/opensearch/index/mapper/DefaultDerivedFieldResolver.java @@ -15,6 +15,8 @@ import org.opensearch.script.Script; import java.io.IOException; +import java.util.ArrayList; +import java.util.Arrays; import java.util.HashMap; import java.util.HashSet; import java.util.List; @@ -189,9 +191,10 @@ private void initDerivedFieldTypes(Map derivedFieldsObject, List private Map getAllDerivedFieldTypeFromObject(Map derivedFieldObject) { Map derivedFieldTypes = new HashMap<>(); + // deep copy of derivedFieldObject is required as DocumentMapperParser modifies the map DocumentMapper documentMapper = 
queryShardContext.getMapperService() .documentMapperParser() - .parse(DerivedFieldMapper.CONTENT_TYPE, derivedFieldObject); + .parse(DerivedFieldMapper.CONTENT_TYPE, (Map) deepCopy(derivedFieldObject)); if (documentMapper != null && documentMapper.mappers() != null) { for (Mapper mapper : documentMapper.mappers()) { if (mapper instanceof DerivedFieldMapper) { @@ -226,4 +229,27 @@ private DerivedFieldType resolveUsingMappings(String name) { } return null; } + + private static Object deepCopy(Object value) { + if (value instanceof Map) { + Map mapValue = (Map) value; + Map copy = new HashMap<>(mapValue.size()); + for (Map.Entry entry : mapValue.entrySet()) { + copy.put(entry.getKey(), deepCopy(entry.getValue())); + } + return copy; + } else if (value instanceof List) { + List listValue = (List) value; + List copy = new ArrayList<>(listValue.size()); + for (Object itemValue : listValue) { + copy.add(deepCopy(itemValue)); + } + return copy; + } else if (value instanceof byte[]) { + byte[] bytes = (byte[]) value; + return Arrays.copyOf(bytes, bytes.length); + } else { + return value; + } + } } From afad5ebd4f1979fc77911bf3c369c74d3b605e3f Mon Sep 17 00:00:00 2001 From: kkewwei Date: Tue, 25 Jun 2024 05:43:34 +0800 Subject: [PATCH 004/167] Fix FuzzyQuery in keyword field when both of index and doc_value are true (#14378) Signed-off-by: kkewwei --- CHANGELOG.md | 1 + .../index/mapper/KeywordFieldMapper.java | 2 +- .../index/mapper/StringFieldType.java | 30 +++++++++++++++++++ .../index/query/FuzzyQueryBuilder.java | 2 +- .../index/mapper/KeywordFieldTypeTests.java | 7 +++-- 5 files changed, 38 insertions(+), 4 deletions(-) diff --git a/CHANGELOG.md b/CHANGELOG.md index 55728a58eca03..cafe9c20e7ff4 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -44,6 +44,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), - Fix aggs result of NestedAggregator with sub NestedAggregator ([#13324](https://github.com/opensearch-project/OpenSearch/pull/13324)) - Fix fs info reporting negative available size ([#11573](https://github.com/opensearch-project/OpenSearch/pull/11573)) - Add ListPitInfo::getKeepAlive() getter ([#14495](https://github.com/opensearch-project/OpenSearch/pull/14495)) +- Fix FuzzyQuery in keyword field will use IndexOrDocValuesQuery when both of index and doc_value are true ([#14378](https://github.com/opensearch-project/OpenSearch/pull/14378)) ### Security diff --git a/server/src/main/java/org/opensearch/index/mapper/KeywordFieldMapper.java b/server/src/main/java/org/opensearch/index/mapper/KeywordFieldMapper.java index 7f6d9231a37fc..2116ac522b705 100644 --- a/server/src/main/java/org/opensearch/index/mapper/KeywordFieldMapper.java +++ b/server/src/main/java/org/opensearch/index/mapper/KeywordFieldMapper.java @@ -549,7 +549,7 @@ public Query fuzzyQuery( ); } if (isSearchable() && hasDocValues()) { - Query indexQuery = super.fuzzyQuery(value, fuzziness, prefixLength, maxExpansions, transpositions, context); + Query indexQuery = super.fuzzyQuery(value, fuzziness, prefixLength, maxExpansions, transpositions, method, context); Query dvQuery = super.fuzzyQuery( value, fuzziness, diff --git a/server/src/main/java/org/opensearch/index/mapper/StringFieldType.java b/server/src/main/java/org/opensearch/index/mapper/StringFieldType.java index fbfca44c3062a..682ccc13f769d 100644 --- a/server/src/main/java/org/opensearch/index/mapper/StringFieldType.java +++ b/server/src/main/java/org/opensearch/index/mapper/StringFieldType.java @@ -55,6 +55,7 @@ 
import java.util.regex.Pattern; import static org.opensearch.search.SearchService.ALLOW_EXPENSIVE_QUERIES; +import static org.apache.lucene.search.FuzzyQuery.defaultRewriteMethod; /** Base class for {@link MappedFieldType} implementations that use the same * representation for internal index terms as the external representation so @@ -102,6 +103,35 @@ public Query fuzzyQuery( ); } + @Override + public Query fuzzyQuery( + Object value, + Fuzziness fuzziness, + int prefixLength, + int maxExpansions, + boolean transpositions, + MultiTermQuery.RewriteMethod method, + QueryShardContext context + ) { + if (!context.allowExpensiveQueries()) { + throw new OpenSearchException( + "[fuzzy] queries cannot be executed when '" + ALLOW_EXPENSIVE_QUERIES.getKey() + "' is set to false." + ); + } + failIfNotIndexed(); + if (method == null) { + method = defaultRewriteMethod(maxExpansions); + } + return new FuzzyQuery( + new Term(name(), indexedValueForSearch(value)), + fuzziness.asDistance(BytesRefs.toString(value)), + prefixLength, + maxExpansions, + transpositions, + method + ); + } + @Override public Query prefixQuery(String value, MultiTermQuery.RewriteMethod method, boolean caseInsensitive, QueryShardContext context) { if (context.allowExpensiveQueries() == false) { diff --git a/server/src/main/java/org/opensearch/index/query/FuzzyQueryBuilder.java b/server/src/main/java/org/opensearch/index/query/FuzzyQueryBuilder.java index a25a426792e31..93c32bbedcef4 100644 --- a/server/src/main/java/org/opensearch/index/query/FuzzyQueryBuilder.java +++ b/server/src/main/java/org/opensearch/index/query/FuzzyQueryBuilder.java @@ -357,7 +357,7 @@ protected Query doToQuery(QueryShardContext context) throws IOException { throw new IllegalStateException("Rewrite first"); } String rewrite = this.rewrite; - Query query = fieldType.fuzzyQuery(value, fuzziness, prefixLength, maxExpansions, transpositions, context); + Query query = fieldType.fuzzyQuery(value, fuzziness, prefixLength, maxExpansions, transpositions, null, context); if (query instanceof MultiTermQuery) { MultiTermQuery.RewriteMethod rewriteMethod = QueryParsers.parseRewriteMethod(rewrite, null, LoggingDeprecationHandler.INSTANCE); QueryParsers.setRewriteMethod((MultiTermQuery) query, rewriteMethod); diff --git a/server/src/test/java/org/opensearch/index/mapper/KeywordFieldTypeTests.java b/server/src/test/java/org/opensearch/index/mapper/KeywordFieldTypeTests.java index 393c448330142..b10035f54a0c0 100644 --- a/server/src/test/java/org/opensearch/index/mapper/KeywordFieldTypeTests.java +++ b/server/src/test/java/org/opensearch/index/mapper/KeywordFieldTypeTests.java @@ -263,8 +263,11 @@ public void testRegexpQuery() { public void testFuzzyQuery() { MappedFieldType ft = new KeywordFieldType("field"); assertEquals( - new FuzzyQuery(new Term("field", "foo"), 2, 1, 50, true), - ft.fuzzyQuery("foo", Fuzziness.fromEdits(2), 1, 50, true, MOCK_QSC) + new IndexOrDocValuesQuery( + new FuzzyQuery(new Term("field", "foo"), 2, 1, 50, true), + new FuzzyQuery(new Term("field", "foo"), 2, 1, 50, true, MultiTermQuery.DOC_VALUES_REWRITE) + ), + ft.fuzzyQuery("foo", Fuzziness.fromEdits(2), 1, 50, true, null, MOCK_QSC) ); Query indexExpected = new FuzzyQuery(new Term("field", "foo"), 2, 1, 50, true); From 0d01d1755e282763e5ea020112617ba25459863c Mon Sep 17 00:00:00 2001 From: bowenlan-amzn Date: Tue, 25 Jun 2024 08:21:33 -0700 Subject: [PATCH 005/167] Fix flaky test in range aggregation yaml test (#14486) Signed-off-by: bowenlan-amzn --- 
.../rest-api-spec/test/search.aggregation/10_histogram.yml | 5 +++++ .../rest-api-spec/test/search.aggregation/230_composite.yml | 6 ++++++ .../test/search.aggregation/330_auto_date_histogram.yml | 5 +++++ .../rest-api-spec/test/search.aggregation/40_range.yml | 6 ++++++ 4 files changed, 22 insertions(+) diff --git a/rest-api-spec/src/main/resources/rest-api-spec/test/search.aggregation/10_histogram.yml b/rest-api-spec/src/main/resources/rest-api-spec/test/search.aggregation/10_histogram.yml index 996c2aae8cfe4..a75b1d0eac793 100644 --- a/rest-api-spec/src/main/resources/rest-api-spec/test/search.aggregation/10_histogram.yml +++ b/rest-api-spec/src/main/resources/rest-api-spec/test/search.aggregation/10_histogram.yml @@ -678,6 +678,11 @@ setup: - '{"index": {}}' - '{"date": "2016-03-01"}' + - do: + indices.forcemerge: + index: test_2 + max_num_segments: 1 + - do: search: index: test_2 diff --git a/rest-api-spec/src/main/resources/rest-api-spec/test/search.aggregation/230_composite.yml b/rest-api-spec/src/main/resources/rest-api-spec/test/search.aggregation/230_composite.yml index 78e2e6858c6ff..ade9eb3eee0dc 100644 --- a/rest-api-spec/src/main/resources/rest-api-spec/test/search.aggregation/230_composite.yml +++ b/rest-api-spec/src/main/resources/rest-api-spec/test/search.aggregation/230_composite.yml @@ -1101,6 +1101,12 @@ setup: - '{"date": "2016-02-01"}' - '{"index": {}}' - '{"date": "2016-03-01"}' + + - do: + indices.forcemerge: + index: test_2 + max_num_segments: 1 + - do: search: index: test_2 diff --git a/rest-api-spec/src/main/resources/rest-api-spec/test/search.aggregation/330_auto_date_histogram.yml b/rest-api-spec/src/main/resources/rest-api-spec/test/search.aggregation/330_auto_date_histogram.yml index fc82517788c91..0897e0bdd894b 100644 --- a/rest-api-spec/src/main/resources/rest-api-spec/test/search.aggregation/330_auto_date_histogram.yml +++ b/rest-api-spec/src/main/resources/rest-api-spec/test/search.aggregation/330_auto_date_histogram.yml @@ -133,6 +133,11 @@ setup: - '{"index": {}}' - '{"date": "2020-03-09", "v": 4}' + - do: + indices.forcemerge: + index: test_profile + max_num_segments: 1 + - do: search: index: test_profile diff --git a/rest-api-spec/src/main/resources/rest-api-spec/test/search.aggregation/40_range.yml b/rest-api-spec/src/main/resources/rest-api-spec/test/search.aggregation/40_range.yml index 2fd926276d0b4..80aad96ce1f6b 100644 --- a/rest-api-spec/src/main/resources/rest-api-spec/test/search.aggregation/40_range.yml +++ b/rest-api-spec/src/main/resources/rest-api-spec/test/search.aggregation/40_range.yml @@ -544,6 +544,7 @@ setup: body: settings: number_of_replicas: 0 + number_of_shards: 1 refresh_interval: -1 mappings: properties: @@ -567,6 +568,11 @@ setup: - '{"index": {}}' - '{"double" : 50}' + - do: + indices.forcemerge: + index: test_profile + max_num_segments: 1 + - do: search: index: test_profile From aa83733c3ffdf6ef1ff98abd9519001e5aa71509 Mon Sep 17 00:00:00 2001 From: Prudhvi Godithi Date: Tue, 25 Jun 2024 11:29:41 -0700 Subject: [PATCH 006/167] Use CODECOV_TOKEN (#14536) Signed-off-by: Prudhvi Godithi --- .github/workflows/gradle-check.yml | 1 + 1 file changed, 1 insertion(+) diff --git a/.github/workflows/gradle-check.yml b/.github/workflows/gradle-check.yml index 2909ee95349ce..89d894403ff1a 100644 --- a/.github/workflows/gradle-check.yml +++ b/.github/workflows/gradle-check.yml @@ -113,6 +113,7 @@ jobs: if: success() uses: codecov/codecov-action@v4 with: + token: ${{ secrets.CODECOV_TOKEN }} files: ./codeCoverage.xml - name: Create 
Comment Success From 563375de28b16870ab42b9fb4260127598d47d91 Mon Sep 17 00:00:00 2001 From: Sagar <99425694+sgup432@users.noreply.github.com> Date: Tue, 25 Jun 2024 12:04:17 -0700 Subject: [PATCH 007/167] [Tiered Caching] Moving query recomputation logic outside of write lock (#14187) * Moving query recompute out of write lock Signed-off-by: Sagar Upadhyaya * [Tiered Caching] Moving query recomputation logic outside of write lock Signed-off-by: Sagar Upadhyaya * Adding java doc for the completable map Signed-off-by: Sagar Upadhyaya * Changes to call future handler only once per key Signed-off-by: Sagar Upadhyaya * Fixing spotless check Signed-off-by: Sagar Upadhyaya * Added changelog Signed-off-by: Sagar Upadhyaya * Addressing comments Signed-off-by: Sagar Upadhyaya * Fixing gradle fail Signed-off-by: Sagar Upadhyaya * Addressing comments to refactor unit test Signed-off-by: Sagar Upadhyaya * minor UT refactor Signed-off-by: Sagar Upadhyaya --------- Signed-off-by: Sagar Upadhyaya Signed-off-by: Sagar <99425694+sgup432@users.noreply.github.com> Co-authored-by: Sagar Upadhyaya --- CHANGELOG.md | 1 + .../common/tier/TieredSpilloverCache.java | 85 ++++- .../tier/TieredSpilloverCacheTests.java | 339 +++++++++++++++++- 3 files changed, 406 insertions(+), 19 deletions(-) diff --git a/CHANGELOG.md b/CHANGELOG.md index cafe9c20e7ff4..f71ba46745ef1 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -26,6 +26,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), - Bump `azure-identity` from 1.11.4 to 1.13.0, Bump `msal4j` from 1.14.3 to 1.15.1, Bump `msal4j-persistence-extension` from 1.2.0 to 1.3.0 ([#14506](https://github.com/opensearch-project/OpenSearch/pull/14506)) ### Changed +- [Tiered Caching] Move query recomputation logic outside write lock ([#14187](https://github.com/opensearch-project/OpenSearch/pull/14187)) - unsignedLongRangeQuery now returns MatchNoDocsQuery if the lower bounds are greater than the upper bounds ([#14416](https://github.com/opensearch-project/OpenSearch/pull/14416)) - Updated the `indices.query.bool.max_clause_count` setting from being static to dynamically updateable ([#13568](https://github.com/opensearch-project/OpenSearch/pull/13568)) - Make the class CommunityIdProcessor final ([#14448](https://github.com/opensearch-project/OpenSearch/pull/14448)) diff --git a/modules/cache-common/src/main/java/org/opensearch/cache/common/tier/TieredSpilloverCache.java b/modules/cache-common/src/main/java/org/opensearch/cache/common/tier/TieredSpilloverCache.java index 63cdbca101f2a..b6d6913a9f8d4 100644 --- a/modules/cache-common/src/main/java/org/opensearch/cache/common/tier/TieredSpilloverCache.java +++ b/modules/cache-common/src/main/java/org/opensearch/cache/common/tier/TieredSpilloverCache.java @@ -8,6 +8,8 @@ package org.opensearch.cache.common.tier; +import org.apache.logging.log4j.LogManager; +import org.apache.logging.log4j.Logger; import org.opensearch.cache.common.policy.TookTimePolicy; import org.opensearch.common.annotation.ExperimentalApi; import org.opensearch.common.cache.CacheType; @@ -35,9 +37,13 @@ import java.util.Map; import java.util.NoSuchElementException; import java.util.Objects; +import java.util.concurrent.CompletableFuture; +import java.util.concurrent.ConcurrentHashMap; +import java.util.concurrent.ExecutionException; import java.util.concurrent.atomic.AtomicBoolean; import java.util.concurrent.locks.ReadWriteLock; import java.util.concurrent.locks.ReentrantReadWriteLock; +import 
java.util.function.BiFunction; import java.util.function.Function; import java.util.function.Predicate; import java.util.function.ToLongBiFunction; @@ -61,6 +67,7 @@ public class TieredSpilloverCache implements ICache { // Used to avoid caching stale entries in lower tiers. private static final List SPILLOVER_REMOVAL_REASONS = List.of(RemovalReason.EVICTED, RemovalReason.CAPACITY); + private static final Logger logger = LogManager.getLogger(TieredSpilloverCache.class); private final ICache diskCache; private final ICache onHeapCache; @@ -86,6 +93,12 @@ public class TieredSpilloverCache implements ICache { private final Map, TierInfo> caches; private final List> policies; + /** + * This map is used to handle concurrent requests for same key in computeIfAbsent() to ensure we load the value + * only once. + */ + Map, CompletableFuture, V>>> completableFutureMap = new ConcurrentHashMap<>(); + TieredSpilloverCache(Builder builder) { Objects.requireNonNull(builder.onHeapCacheFactory, "onHeap cache builder can't be null"); Objects.requireNonNull(builder.diskCacheFactory, "disk cache builder can't be null"); @@ -190,10 +203,7 @@ public V computeIfAbsent(ICacheKey key, LoadAwareCacheLoader, V> // Add the value to the onHeap cache. We are calling computeIfAbsent which does another get inside. // This is needed as there can be many requests for the same key at the same time and we only want to load // the value once. - V value = null; - try (ReleasableLock ignore = writeLock.acquire()) { - value = onHeapCache.computeIfAbsent(key, loader); - } + V value = compute(key, loader); // Handle stats if (loader.isLoaded()) { // The value was just computed and added to the cache by this thread. Register a miss for the heap cache, and the disk cache @@ -222,6 +232,57 @@ public V computeIfAbsent(ICacheKey key, LoadAwareCacheLoader, V> return cacheValueTuple.v1(); } + private V compute(ICacheKey key, LoadAwareCacheLoader, V> loader) throws Exception { + // Only one of the threads will succeed putting a future into map for the same key. + // Rest will fetch existing future and wait on that to complete. + CompletableFuture, V>> future = completableFutureMap.putIfAbsent(key, new CompletableFuture<>()); + // Handler to handle results post processing. Takes a tuple or exception as an input and returns + // the value. Also before returning value, puts the value in cache. + BiFunction, V>, Throwable, Void> handler = (pair, ex) -> { + if (pair != null) { + try (ReleasableLock ignore = writeLock.acquire()) { + onHeapCache.put(pair.v1(), pair.v2()); + } catch (Exception e) { + // TODO: Catch specific exceptions to know whether this resulted from cache or underlying removal + // listeners/stats. Needs better exception handling at underlying layers.For now swallowing + // exception. + logger.warn("Exception occurred while putting item onto heap cache", e); + } + } else { + if (ex != null) { + logger.warn("Exception occurred while trying to compute the value", ex); + } + } + completableFutureMap.remove(key); // Remove key from map as not needed anymore. 
+ return null; + }; + V value = null; + if (future == null) { + future = completableFutureMap.get(key); + future.handle(handler); + try { + value = loader.load(key); + } catch (Exception ex) { + future.completeExceptionally(ex); + throw new ExecutionException(ex); + } + if (value == null) { + NullPointerException npe = new NullPointerException("Loader returned a null value"); + future.completeExceptionally(npe); + throw new ExecutionException(npe); + } else { + future.complete(new Tuple<>(key, value)); + } + } else { + try { + value = future.get().v2(); + } catch (InterruptedException ex) { + throw new IllegalStateException(ex); + } + } + return value; + } + @Override public void invalidate(ICacheKey key) { // We are trying to invalidate the key from all caches though it would be present in only of them. @@ -328,12 +389,22 @@ void handleRemovalFromHeapTier(RemovalNotification, V> notification ICacheKey key = notification.getKey(); boolean wasEvicted = SPILLOVER_REMOVAL_REASONS.contains(notification.getRemovalReason()); boolean countEvictionTowardsTotal = false; // Don't count this eviction towards the cache's total if it ends up in the disk tier - if (caches.get(diskCache).isEnabled() && wasEvicted && evaluatePolicies(notification.getValue())) { + boolean exceptionOccurredOnDiskCachePut = false; + boolean canCacheOnDisk = caches.get(diskCache).isEnabled() && wasEvicted && evaluatePolicies(notification.getValue()); + if (canCacheOnDisk) { try (ReleasableLock ignore = writeLock.acquire()) { diskCache.put(key, notification.getValue()); // spill over to the disk tier and increment its stats + } catch (Exception ex) { + // TODO: Catch specific exceptions. Needs better exception handling. We are just swallowing exception + // in this case as it shouldn't cause upstream request to fail. 
+ logger.warn("Exception occurred while putting item to disk cache", ex); + exceptionOccurredOnDiskCachePut = true; } - updateStatsOnPut(TIER_DIMENSION_VALUE_DISK, key, notification.getValue()); - } else { + if (!exceptionOccurredOnDiskCachePut) { + updateStatsOnPut(TIER_DIMENSION_VALUE_DISK, key, notification.getValue()); + } + } + if (!canCacheOnDisk || exceptionOccurredOnDiskCachePut) { // If the value is not going to the disk cache, send this notification to the TSC's removal listener // as the value is leaving the TSC entirely removalListener.onRemoval(notification); diff --git a/modules/cache-common/src/test/java/org/opensearch/cache/common/tier/TieredSpilloverCacheTests.java b/modules/cache-common/src/test/java/org/opensearch/cache/common/tier/TieredSpilloverCacheTests.java index 54b15f236a418..b9c7bbdb77d3d 100644 --- a/modules/cache-common/src/test/java/org/opensearch/cache/common/tier/TieredSpilloverCacheTests.java +++ b/modules/cache-common/src/test/java/org/opensearch/cache/common/tier/TieredSpilloverCacheTests.java @@ -44,8 +44,12 @@ import java.util.UUID; import java.util.concurrent.CopyOnWriteArrayList; import java.util.concurrent.CountDownLatch; +import java.util.concurrent.ExecutionException; +import java.util.concurrent.ExecutorService; +import java.util.concurrent.Executors; import java.util.concurrent.Phaser; import java.util.concurrent.TimeUnit; +import java.util.concurrent.atomic.AtomicInteger; import java.util.concurrent.atomic.AtomicReference; import java.util.function.Function; import java.util.function.Predicate; @@ -56,6 +60,10 @@ import static org.opensearch.cache.common.tier.TieredSpilloverCacheStatsHolder.TIER_DIMENSION_VALUE_DISK; import static org.opensearch.cache.common.tier.TieredSpilloverCacheStatsHolder.TIER_DIMENSION_VALUE_ON_HEAP; import static org.opensearch.common.cache.store.settings.OpenSearchOnHeapCacheSettings.MAXIMUM_SIZE_IN_BYTES_KEY; +import static org.mockito.ArgumentMatchers.any; +import static org.mockito.Mockito.doThrow; +import static org.mockito.Mockito.mock; +import static org.mockito.Mockito.when; public class TieredSpilloverCacheTests extends OpenSearchTestCase { static final List dimensionNames = List.of("dim1", "dim2", "dim3"); @@ -408,6 +416,7 @@ public void testComputeIfAbsentWithEvictionsFromOnHeapCache() throws Exception { assertEquals(onHeapCacheHit, getHitsForTier(tieredSpilloverCache, TIER_DIMENSION_VALUE_ON_HEAP)); assertEquals(cacheMiss + numOfItems1, getMissesForTier(tieredSpilloverCache, TIER_DIMENSION_VALUE_DISK)); assertEquals(diskCacheHit, getHitsForTier(tieredSpilloverCache, TIER_DIMENSION_VALUE_DISK)); + assertEquals(0, tieredSpilloverCache.completableFutureMap.size()); } public void testComputeIfAbsentWithEvictionsFromTieredCache() throws Exception { @@ -802,7 +811,7 @@ public String load(ICacheKey key) { }; loadAwareCacheLoaderList.add(loadAwareCacheLoader); phaser.arriveAndAwaitAdvance(); - tieredSpilloverCache.computeIfAbsent(key, loadAwareCacheLoader); + assertEquals(value, tieredSpilloverCache.computeIfAbsent(key, loadAwareCacheLoader)); } catch (Exception e) { throw new RuntimeException(e); } @@ -811,7 +820,7 @@ public String load(ICacheKey key) { threads[i].start(); } phaser.arriveAndAwaitAdvance(); - countDownLatch.await(); // Wait for rest of tasks to be cancelled. 
+ countDownLatch.await(); int numberOfTimesKeyLoaded = 0; assertEquals(numberOfSameKeys, loadAwareCacheLoaderList.size()); for (int i = 0; i < loadAwareCacheLoaderList.size(); i++) { @@ -824,6 +833,215 @@ public String load(ICacheKey key) { // We should see only one heap miss, and the rest hits assertEquals(1, getMissesForTier(tieredSpilloverCache, TIER_DIMENSION_VALUE_ON_HEAP)); assertEquals(numberOfSameKeys - 1, getHitsForTier(tieredSpilloverCache, TIER_DIMENSION_VALUE_ON_HEAP)); + assertEquals(0, tieredSpilloverCache.completableFutureMap.size()); + } + + public void testComputIfAbsentConcurrentlyWithMultipleKeys() throws Exception { + int onHeapCacheSize = randomIntBetween(300, 500); + int diskCacheSize = randomIntBetween(600, 700); + int keyValueSize = 50; + + MockCacheRemovalListener removalListener = new MockCacheRemovalListener<>(); + Settings settings = Settings.builder() + .put( + OpenSearchOnHeapCacheSettings.getSettingListForCacheType(CacheType.INDICES_REQUEST_CACHE) + .get(MAXIMUM_SIZE_IN_BYTES_KEY) + .getKey(), + onHeapCacheSize * keyValueSize + "b" + ) + .build(); + + TieredSpilloverCache tieredSpilloverCache = initializeTieredSpilloverCache( + keyValueSize, + diskCacheSize, + removalListener, + settings, + 0 + ); + + int iterations = 10; + int numberOfKeys = 20; + List> iCacheKeyList = new ArrayList<>(); + for (int i = 0; i < numberOfKeys; i++) { + ICacheKey key = getICacheKey(UUID.randomUUID().toString()); + iCacheKeyList.add(key); + } + ExecutorService executorService = Executors.newFixedThreadPool(8); + CountDownLatch countDownLatch = new CountDownLatch(iterations * numberOfKeys); // To wait for all threads to finish. + + List, String>> loadAwareCacheLoaderList = new CopyOnWriteArrayList<>(); + for (int j = 0; j < numberOfKeys; j++) { + int finalJ = j; + for (int i = 0; i < iterations; i++) { + executorService.submit(() -> { + try { + LoadAwareCacheLoader, String> loadAwareCacheLoader = new LoadAwareCacheLoader<>() { + boolean isLoaded = false; + + @Override + public boolean isLoaded() { + return isLoaded; + } + + @Override + public String load(ICacheKey key) { + isLoaded = true; + return iCacheKeyList.get(finalJ).key; + } + }; + loadAwareCacheLoaderList.add(loadAwareCacheLoader); + tieredSpilloverCache.computeIfAbsent(iCacheKeyList.get(finalJ), loadAwareCacheLoader); + } catch (Exception e) { + throw new RuntimeException(e); + } finally { + countDownLatch.countDown(); + } + }); + } + } + countDownLatch.await(); + int numberOfTimesKeyLoaded = 0; + assertEquals(iterations * numberOfKeys, loadAwareCacheLoaderList.size()); + for (int i = 0; i < loadAwareCacheLoaderList.size(); i++) { + LoadAwareCacheLoader, String> loader = loadAwareCacheLoaderList.get(i); + if (loader.isLoaded()) { + numberOfTimesKeyLoaded++; + } + } + assertEquals(numberOfKeys, numberOfTimesKeyLoaded); // It should be loaded only once. 
+ // We should see only one heap miss, and the rest hits + assertEquals(numberOfKeys, getMissesForTier(tieredSpilloverCache, TIER_DIMENSION_VALUE_ON_HEAP)); + assertEquals((iterations * numberOfKeys) - numberOfKeys, getHitsForTier(tieredSpilloverCache, TIER_DIMENSION_VALUE_ON_HEAP)); + assertEquals(0, tieredSpilloverCache.completableFutureMap.size()); + executorService.shutdownNow(); + } + + public void testComputeIfAbsentConcurrentlyAndThrowsException() throws Exception { + LoadAwareCacheLoader, String> loadAwareCacheLoader = new LoadAwareCacheLoader<>() { + boolean isLoaded = false; + + @Override + public boolean isLoaded() { + return isLoaded; + } + + @Override + public String load(ICacheKey key) { + throw new RuntimeException("Testing"); + } + }; + verifyComputeIfAbsentThrowsException(RuntimeException.class, loadAwareCacheLoader, "Testing"); + } + + @SuppressWarnings({ "rawtypes", "unchecked" }) + public void testComputeIfAbsentWithOnHeapCacheThrowingExceptionOnPut() throws Exception { + int onHeapCacheSize = randomIntBetween(100, 300); + int diskCacheSize = randomIntBetween(200, 400); + int keyValueSize = 50; + + MockCacheRemovalListener removalListener = new MockCacheRemovalListener<>(); + Settings settings = Settings.builder() + .put( + OpenSearchOnHeapCacheSettings.getSettingListForCacheType(CacheType.INDICES_REQUEST_CACHE) + .get(MAXIMUM_SIZE_IN_BYTES_KEY) + .getKey(), + onHeapCacheSize * keyValueSize + "b" + ) + .build(); + ICache.Factory onHeapCacheFactory = mock(OpenSearchOnHeapCache.OpenSearchOnHeapCacheFactory.class); + ICache mockOnHeapCache = mock(ICache.class); + when(onHeapCacheFactory.create(any(), any(), any())).thenReturn(mockOnHeapCache); + doThrow(new RuntimeException("Testing")).when(mockOnHeapCache).put(any(), any()); + CacheConfig cacheConfig = getCacheConfig(keyValueSize, settings, removalListener); + ICache.Factory mockDiskCacheFactory = new MockDiskCache.MockDiskCacheFactory(0, diskCacheSize, false); + + TieredSpilloverCache tieredSpilloverCache = getTieredSpilloverCache( + onHeapCacheFactory, + mockDiskCacheFactory, + cacheConfig, + null, + removalListener + ); + String value = ""; + value = tieredSpilloverCache.computeIfAbsent(getICacheKey("test"), new LoadAwareCacheLoader<>() { + @Override + public boolean isLoaded() { + return false; + } + + @Override + public String load(ICacheKey key) { + return "test"; + } + }); + assertEquals("test", value); + assertEquals(0, tieredSpilloverCache.completableFutureMap.size()); + } + + @SuppressWarnings({ "rawtypes", "unchecked" }) + public void testComputeIfAbsentWithDiskCacheThrowingExceptionOnPut() throws Exception { + int onHeapCacheSize = 0; + int keyValueSize = 50; + + MockCacheRemovalListener removalListener = new MockCacheRemovalListener<>(); + Settings settings = Settings.builder() + .put( + OpenSearchOnHeapCacheSettings.getSettingListForCacheType(CacheType.INDICES_REQUEST_CACHE) + .get(MAXIMUM_SIZE_IN_BYTES_KEY) + .getKey(), + onHeapCacheSize * keyValueSize + "b" + ) + .build(); + ICache.Factory onHeapCacheFactory = new OpenSearchOnHeapCache.OpenSearchOnHeapCacheFactory(); + CacheConfig cacheConfig = getCacheConfig(keyValueSize, settings, removalListener); + ICache.Factory mockDiskCacheFactory = mock(MockDiskCache.MockDiskCacheFactory.class); + ICache mockDiskCache = mock(ICache.class); + when(mockDiskCacheFactory.create(any(), any(), any())).thenReturn(mockDiskCache); + doThrow(new RuntimeException("Test")).when(mockDiskCache).put(any(), any()); + + TieredSpilloverCache tieredSpilloverCache = 
getTieredSpilloverCache( + onHeapCacheFactory, + mockDiskCacheFactory, + cacheConfig, + null, + removalListener + ); + + String response = ""; + response = tieredSpilloverCache.computeIfAbsent(getICacheKey("test"), new LoadAwareCacheLoader<>() { + @Override + public boolean isLoaded() { + return false; + } + + @Override + public String load(ICacheKey key) { + return "test"; + } + }); + ImmutableCacheStats diskStats = getStatsSnapshotForTier(tieredSpilloverCache, TIER_DIMENSION_VALUE_DISK); + + assertEquals(0, diskStats.getSizeInBytes()); + assertEquals(1, removalListener.evictionsMetric.count()); + assertEquals("test", response); + assertEquals(0, tieredSpilloverCache.completableFutureMap.size()); + } + + public void testComputeIfAbsentConcurrentlyWithLoaderReturningNull() throws Exception { + LoadAwareCacheLoader, String> loadAwareCacheLoader = new LoadAwareCacheLoader<>() { + boolean isLoaded = false; + + @Override + public boolean isLoaded() { + return isLoaded; + } + + @Override + public String load(ICacheKey key) { + return null; + } + }; + verifyComputeIfAbsentThrowsException(NullPointerException.class, loadAwareCacheLoader, "Loader returned a null value"); } public void testConcurrencyForEvictionFlowFromOnHeapToDiskTier() throws Exception { @@ -1408,6 +1626,26 @@ public boolean isLoaded() { }; } + private TieredSpilloverCache getTieredSpilloverCache( + ICache.Factory onHeapCacheFactory, + ICache.Factory mockDiskCacheFactory, + CacheConfig cacheConfig, + List> policies, + RemovalListener, String> removalListener + ) { + TieredSpilloverCache.Builder builder = new TieredSpilloverCache.Builder().setCacheType( + CacheType.INDICES_REQUEST_CACHE + ) + .setRemovalListener(removalListener) + .setOnHeapCacheFactory(onHeapCacheFactory) + .setDiskCacheFactory(mockDiskCacheFactory) + .setCacheConfig(cacheConfig); + if (policies != null) { + builder.addPolicies(policies); + } + return builder.build(); + } + private TieredSpilloverCache initializeTieredSpilloverCache( int keyValueSize, int diskCacheSize, @@ -1450,17 +1688,34 @@ private TieredSpilloverCache intializeTieredSpilloverCache( .build(); ICache.Factory mockDiskCacheFactory = new MockDiskCache.MockDiskCacheFactory(diskDeliberateDelay, diskCacheSize, false); - TieredSpilloverCache.Builder builder = new TieredSpilloverCache.Builder().setCacheType( - CacheType.INDICES_REQUEST_CACHE - ) + return getTieredSpilloverCache(onHeapCacheFactory, mockDiskCacheFactory, cacheConfig, policies, removalListener); + } + + private CacheConfig getCacheConfig( + int keyValueSize, + Settings settings, + RemovalListener, String> removalListener + ) { + return new CacheConfig.Builder().setKeyType(String.class) + .setKeyType(String.class) + .setWeigher((k, v) -> keyValueSize) + .setSettings(settings) + .setDimensionNames(dimensionNames) .setRemovalListener(removalListener) - .setOnHeapCacheFactory(onHeapCacheFactory) - .setDiskCacheFactory(mockDiskCacheFactory) - .setCacheConfig(cacheConfig); - if (policies != null) { - builder.addPolicies(policies); - } - return builder.build(); + .setKeySerializer(new StringSerializer()) + .setValueSerializer(new StringSerializer()) + .setSettings( + Settings.builder() + .put( + CacheSettings.getConcreteStoreNameSettingForCacheType(CacheType.INDICES_REQUEST_CACHE).getKey(), + TieredSpilloverCache.TieredSpilloverCacheFactory.TIERED_SPILLOVER_CACHE_NAME + ) + .put(FeatureFlags.PLUGGABLE_CACHE, "true") + .put(settings) + .build() + ) + .setClusterSettings(clusterSettings) + .build(); } // Helper functions for extracting tier 
aggregated stats. @@ -1501,6 +1756,66 @@ private ImmutableCacheStats getStatsSnapshotForTier(TieredSpilloverCache t return snapshot; } + private void verifyComputeIfAbsentThrowsException( + Class expectedException, + LoadAwareCacheLoader, String> loader, + String expectedExceptionMessage + ) throws InterruptedException { + int onHeapCacheSize = randomIntBetween(100, 300); + int diskCacheSize = randomIntBetween(200, 400); + int keyValueSize = 50; + + MockCacheRemovalListener removalListener = new MockCacheRemovalListener<>(); + Settings settings = Settings.builder() + .put( + OpenSearchOnHeapCacheSettings.getSettingListForCacheType(CacheType.INDICES_REQUEST_CACHE) + .get(MAXIMUM_SIZE_IN_BYTES_KEY) + .getKey(), + onHeapCacheSize * keyValueSize + "b" + ) + .build(); + + TieredSpilloverCache tieredSpilloverCache = initializeTieredSpilloverCache( + keyValueSize, + diskCacheSize, + removalListener, + settings, + 0 + ); + + int numberOfSameKeys = randomIntBetween(10, onHeapCacheSize - 1); + ICacheKey key = getICacheKey(UUID.randomUUID().toString()); + String value = UUID.randomUUID().toString(); + AtomicInteger exceptionCount = new AtomicInteger(); + + Thread[] threads = new Thread[numberOfSameKeys]; + Phaser phaser = new Phaser(numberOfSameKeys + 1); + CountDownLatch countDownLatch = new CountDownLatch(numberOfSameKeys); // To wait for all threads to finish. + + for (int i = 0; i < numberOfSameKeys; i++) { + threads[i] = new Thread(() -> { + try { + phaser.arriveAndAwaitAdvance(); + tieredSpilloverCache.computeIfAbsent(key, loader); + } catch (Exception e) { + exceptionCount.incrementAndGet(); + assertEquals(ExecutionException.class, e.getClass()); + assertEquals(expectedException, e.getCause().getClass()); + assertEquals(expectedExceptionMessage, e.getCause().getMessage()); + } finally { + countDownLatch.countDown(); + } + }); + threads[i].start(); + } + phaser.arriveAndAwaitAdvance(); + countDownLatch.await(); // Wait for rest of tasks to be cancelled. 
+ + // Verify exception count was equal to number of requests + assertEquals(numberOfSameKeys, exceptionCount.get()); + assertEquals(0, tieredSpilloverCache.completableFutureMap.size()); + } + private ImmutableCacheStats getTotalStatsSnapshot(TieredSpilloverCache tsc) throws IOException { ImmutableCacheStatsHolder cacheStats = tsc.stats(new String[0]); return cacheStats.getStatsForDimensionValues(List.of()); From badf851c4883c8ecd4bf47fbad3ca0d3a608f613 Mon Sep 17 00:00:00 2001 From: kkewwei Date: Wed, 26 Jun 2024 03:12:43 +0800 Subject: [PATCH 008/167] Fix Flaky Test ClusterRerouteIT.testDelayWithALargeAmountOfShards (#14510) Signed-off-by: kkewwei kkewwei@163.com Signed-off-by: kkewwei kkewwei@163.com Signed-off-by: kkewwei --- .../cluster/allocation/ClusterRerouteIT.java | 3 ++- .../opensearch/test/OpenSearchIntegTestCase.java | 16 +++++++++++++++- 2 files changed, 17 insertions(+), 2 deletions(-) diff --git a/server/src/internalClusterTest/java/org/opensearch/cluster/allocation/ClusterRerouteIT.java b/server/src/internalClusterTest/java/org/opensearch/cluster/allocation/ClusterRerouteIT.java index dbcb030d8a4f7..f4b5f112f5785 100644 --- a/server/src/internalClusterTest/java/org/opensearch/cluster/allocation/ClusterRerouteIT.java +++ b/server/src/internalClusterTest/java/org/opensearch/cluster/allocation/ClusterRerouteIT.java @@ -273,7 +273,8 @@ public void testDelayWithALargeAmountOfShards() throws Exception { internalCluster().stopRandomNode(InternalTestCluster.nameFilter(node_1)); // This might run slowly on older hardware - ensureGreen(TimeValue.timeValueMinutes(2)); + // In some case, the shards will be rebalanced back and forth, it seems like a very low probability bug. + ensureGreen(TimeValue.timeValueMinutes(2), false); } private void rerouteWithAllocateLocalGateway(Settings commonSettings) throws Exception { diff --git a/test/framework/src/main/java/org/opensearch/test/OpenSearchIntegTestCase.java b/test/framework/src/main/java/org/opensearch/test/OpenSearchIntegTestCase.java index 71ab56c98312a..ca5ddf21710af 100644 --- a/test/framework/src/main/java/org/opensearch/test/OpenSearchIntegTestCase.java +++ b/test/framework/src/main/java/org/opensearch/test/OpenSearchIntegTestCase.java @@ -864,6 +864,10 @@ public ClusterHealthStatus ensureGreen(TimeValue timeout, String... indices) { return ensureColor(ClusterHealthStatus.GREEN, timeout, false, indices); } + public ClusterHealthStatus ensureGreen(TimeValue timeout, boolean waitForNoRelocatingShards, String... indices) { + return ensureColor(ClusterHealthStatus.GREEN, timeout, waitForNoRelocatingShards, false, indices); + } + /** * Ensures the cluster has a yellow state via the cluster health API. */ @@ -891,6 +895,16 @@ private ClusterHealthStatus ensureColor( TimeValue timeout, boolean waitForNoInitializingShards, String... indices + ) { + return ensureColor(clusterHealthStatus, timeout, true, waitForNoInitializingShards, indices); + } + + private ClusterHealthStatus ensureColor( + ClusterHealthStatus clusterHealthStatus, + TimeValue timeout, + boolean waitForNoRelocatingShards, + boolean waitForNoInitializingShards, + String... 
indices ) { String color = clusterHealthStatus.name().toLowerCase(Locale.ROOT); String method = "ensure" + Strings.capitalize(color); @@ -899,7 +913,7 @@ private ClusterHealthStatus ensureColor( .timeout(timeout) .waitForStatus(clusterHealthStatus) .waitForEvents(Priority.LANGUID) - .waitForNoRelocatingShards(true) + .waitForNoRelocatingShards(waitForNoRelocatingShards) .waitForNoInitializingShards(waitForNoInitializingShards) // We currently often use ensureGreen or ensureYellow to check whether the cluster is back in a good state after shutting down // a node. If the node that is stopped is the cluster-manager node, another node will become cluster-manager and publish a From d320f36ca01ff68f344b54a91a75ef3390824eb7 Mon Sep 17 00:00:00 2001 From: bowenlan-amzn Date: Tue, 25 Jun 2024 12:35:35 -0700 Subject: [PATCH 009/167] Add doc for debugging rest tests (#14491) * add doc for debugging rest tests Signed-off-by: bowenlan-amzn * Update TESTING.md Co-authored-by: Marc Handalian Signed-off-by: bowenlan-amzn * Address comment Signed-off-by: bowenlan-amzn --------- Signed-off-by: bowenlan-amzn Co-authored-by: Marc Handalian --- TESTING.md | 15 ++++++++++++++- 1 file changed, 14 insertions(+), 1 deletion(-) diff --git a/TESTING.md b/TESTING.md index e8416f61be7e1..de7ab3eefe2f8 100644 --- a/TESTING.md +++ b/TESTING.md @@ -17,6 +17,8 @@ OpenSearch uses [jUnit](https://junit.org/junit5/) for testing, it also uses ran - [Miscellaneous](#miscellaneous) - [Running verification tasks](#running-verification-tasks) - [Testing the REST layer](#testing-the-rest-layer) + - [Running REST Tests Against An External Cluster](#running-rest-tests-against-an-external-cluster) + - [Debugging REST Tests](#debugging-rest-tests) - [Testing packaging](#testing-packaging) - [Testing packaging on Windows](#testing-packaging-on-windows) - [Testing VMs are disposable](#testing-vms-are-disposable) @@ -272,7 +274,18 @@ yamlRestTest’s and javaRestTest’s are easy to identify, since they are found If in doubt about which command to use, simply run <gradle path>:check -Note that the REST tests, like all the integration tests, can be run against an external cluster by specifying the `tests.cluster` property, which if present needs to contain a comma separated list of nodes to connect to (e.g. localhost:9300). +## Running REST Tests Against An External Cluster + +Note that the REST tests, like all the integration tests, can be run against an external cluster by specifying the following properties `tests.cluster`, `tests.rest.cluster`, `tests.clustername`. Use a comma separated list of node properties for the multi-node cluster. + +For example : + + ./gradlew :rest-api-spec:yamlRestTest \ + -Dtests.cluster=localhost:9200 -Dtests.rest.cluster=localhost:9200 -Dtests.clustername=opensearch + +## Debugging REST Tests + +You can launch a local OpenSearch cluster in debug mode following [Launching and debugging from an IDE](#launching-and-debugging-from-an-ide), and run your REST tests against that following [Running REST Tests Against An External Cluster](#running-rest-tests-against-an-external-cluster). 
# Testing packaging From 0eb39aec6796d9a576e51b186b1cb2474f16f70a Mon Sep 17 00:00:00 2001 From: Peter Alfonsi Date: Tue, 25 Jun 2024 13:26:54 -0700 Subject: [PATCH 010/167] Fix flaky DefaultCacheStatsHolderTests (#14462) Signed-off-by: Peter Alfonsi Co-authored-by: Peter Alfonsi --- .../stats/DefaultCacheStatsHolderTests.java | 75 +++++++++++-------- 1 file changed, 42 insertions(+), 33 deletions(-) diff --git a/server/src/test/java/org/opensearch/common/cache/stats/DefaultCacheStatsHolderTests.java b/server/src/test/java/org/opensearch/common/cache/stats/DefaultCacheStatsHolderTests.java index c6e8252ddf806..8a59dd9d2d105 100644 --- a/server/src/test/java/org/opensearch/common/cache/stats/DefaultCacheStatsHolderTests.java +++ b/server/src/test/java/org/opensearch/common/cache/stats/DefaultCacheStatsHolderTests.java @@ -127,49 +127,58 @@ public void testCount() throws Exception { } public void testConcurrentRemoval() throws Exception { - List dimensionNames = List.of("dim1", "dim2"); + List dimensionNames = List.of("A", "B"); DefaultCacheStatsHolder cacheStatsHolder = new DefaultCacheStatsHolder(dimensionNames, storeName); // Create stats for the following dimension sets - List> populatedStats = List.of(List.of("A1", "B1"), List.of("A2", "B2"), List.of("A2", "B3")); + List> populatedStats = new ArrayList<>(); + int numAValues = 10; + int numBValues = 2; + for (int indexA = 0; indexA < numAValues; indexA++) { + for (int indexB = 0; indexB < numBValues; indexB++) { + populatedStats.add(List.of("A" + indexA, "B" + indexB)); + } + } for (List dims : populatedStats) { cacheStatsHolder.incrementHits(dims); } - // Remove (A2, B2) and (A1, B1), before re-adding (A2, B2). At the end we should have stats for (A2, B2) but not (A1, B1). - - Thread[] threads = new Thread[3]; - CountDownLatch countDownLatch = new CountDownLatch(3); - threads[0] = new Thread(() -> { - cacheStatsHolder.removeDimensions(List.of("A2", "B2")); - countDownLatch.countDown(); - }); - threads[1] = new Thread(() -> { - cacheStatsHolder.removeDimensions(List.of("A1", "B1")); - countDownLatch.countDown(); - }); - threads[2] = new Thread(() -> { - cacheStatsHolder.incrementMisses(List.of("A2", "B2")); - cacheStatsHolder.incrementMisses(List.of("A2", "B3")); - countDownLatch.countDown(); - }); + // Remove a subset of the dimensions concurrently. + // Remove both (A0, B0), and (A0, B1), so we expect the intermediate node for A0 to be null afterwards. + // For all the others, remove only the B0 value. Then we expect the intermediate nodes for A1 through A9 to be present + // and reflect only the stats for their B1 child. 
+ + Thread[] threads = new Thread[numAValues + 1]; + for (int i = 0; i < numAValues; i++) { + int finalI = i; + threads[i] = new Thread(() -> { cacheStatsHolder.removeDimensions(List.of("A" + finalI, "B0")); }); + } + threads[numAValues] = new Thread(() -> { cacheStatsHolder.removeDimensions(List.of("A0", "B1")); }); for (Thread thread : threads) { thread.start(); - // Add short sleep to ensure threads start their functions in order (so that incrementing doesn't happen before removal) - Thread.sleep(1); } - countDownLatch.await(); - assertNull(getNode(List.of("A1", "B1"), cacheStatsHolder.getStatsRoot())); - assertNull(getNode(List.of("A1"), cacheStatsHolder.getStatsRoot())); - assertNotNull(getNode(List.of("A2", "B2"), cacheStatsHolder.getStatsRoot())); - assertEquals( - new ImmutableCacheStats(0, 1, 0, 0, 0), - getNode(List.of("A2", "B2"), cacheStatsHolder.getStatsRoot()).getImmutableStats() - ); - assertEquals( - new ImmutableCacheStats(1, 1, 0, 0, 0), - getNode(List.of("A2", "B3"), cacheStatsHolder.getStatsRoot()).getImmutableStats() - ); + for (Thread thread : threads) { + thread.join(); + } + + // intermediate node for A0 should be null + assertNull(getNode(List.of("A0"), cacheStatsHolder.getStatsRoot())); + + // leaf nodes for all B0 values should be null since they were removed + for (int indexA = 0; indexA < numAValues; indexA++) { + assertNull(getNode(List.of("A" + indexA, "B0"), cacheStatsHolder.getStatsRoot())); + } + + // leaf nodes for all B1 values, except (A0, B1), should not be null as they weren't removed, + // and the intermediate nodes A1 through A9 shouldn't be null as they have remaining children + for (int indexA = 1; indexA < numAValues; indexA++) { + DefaultCacheStatsHolder.Node b1LeafNode = getNode(List.of("A" + indexA, "B1"), cacheStatsHolder.getStatsRoot()); + assertNotNull(b1LeafNode); + assertEquals(new ImmutableCacheStats(1, 0, 0, 0, 0), b1LeafNode.getImmutableStats()); + DefaultCacheStatsHolder.Node intermediateLevelNode = getNode(List.of("A" + indexA), cacheStatsHolder.getStatsRoot()); + assertNotNull(intermediateLevelNode); + assertEquals(b1LeafNode.getImmutableStats(), intermediateLevelNode.getImmutableStats()); + } } /** From 8839904f2ce12ba1a88d395fb9025877f2d5410f Mon Sep 17 00:00:00 2001 From: "opensearch-trigger-bot[bot]" <98922864+opensearch-trigger-bot[bot]@users.noreply.github.com> Date: Wed, 26 Jun 2024 09:24:04 -0400 Subject: [PATCH 011/167] [AUTO] [main] Add bwc version 2.15.1. 
(#14549) * Add bwc version 2.15.1 Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com> * Fix auto-generated version Signed-off-by: Andrew Ross --------- Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com> Signed-off-by: Andrew Ross Co-authored-by: opensearch-ci-bot <83309141+opensearch-ci-bot@users.noreply.github.com> Co-authored-by: Andrew Ross --- .ci/bwcVersions | 1 + libs/core/src/main/java/org/opensearch/Version.java | 1 + 2 files changed, 2 insertions(+) diff --git a/.ci/bwcVersions b/.ci/bwcVersions index 1f80ed34d6c10..a738eb54e17f6 100644 --- a/.ci/bwcVersions +++ b/.ci/bwcVersions @@ -34,4 +34,5 @@ BWC_VERSION: - "2.14.0" - "2.14.1" - "2.15.0" + - "2.15.1" - "2.16.0" diff --git a/libs/core/src/main/java/org/opensearch/Version.java b/libs/core/src/main/java/org/opensearch/Version.java index 0cb2d4f867c12..da43894863432 100644 --- a/libs/core/src/main/java/org/opensearch/Version.java +++ b/libs/core/src/main/java/org/opensearch/Version.java @@ -105,6 +105,7 @@ public class Version implements Comparable, ToXContentFragment { public static final Version V_2_14_0 = new Version(2140099, org.apache.lucene.util.Version.LUCENE_9_10_0); public static final Version V_2_14_1 = new Version(2140199, org.apache.lucene.util.Version.LUCENE_9_10_0); public static final Version V_2_15_0 = new Version(2150099, org.apache.lucene.util.Version.LUCENE_9_10_0); + public static final Version V_2_15_1 = new Version(2150199, org.apache.lucene.util.Version.LUCENE_9_10_0); public static final Version V_2_16_0 = new Version(2160099, org.apache.lucene.util.Version.LUCENE_9_11_0); public static final Version V_3_0_0 = new Version(3000099, org.apache.lucene.util.Version.LUCENE_9_12_0); public static final Version CURRENT = V_3_0_0; From 729276f7e8f799ca21ace12bb2767869322ff253 Mon Sep 17 00:00:00 2001 From: Sagar <99425694+sgup432@users.noreply.github.com> Date: Wed, 26 Jun 2024 11:14:42 -0700 Subject: [PATCH 012/167] Fix flaky test TieredSpilloverCacheTests.testComputeIfAbsentConcurrently (#14550) * Fix flaky test TieredSpilloverCacheTests.testComputeIfAbsentConcurrently Signed-off-by: Sagar Upadhyaya * Addressing comment Signed-off-by: Sagar Upadhyaya --------- Signed-off-by: Sagar Upadhyaya Signed-off-by: Sagar Upadhyaya Co-authored-by: Sagar Upadhyaya --- .../common/tier/TieredSpilloverCache.java | 21 ++++++++++++------- .../tier/TieredSpilloverCacheTests.java | 4 ++-- 2 files changed, 16 insertions(+), 9 deletions(-) diff --git a/modules/cache-common/src/main/java/org/opensearch/cache/common/tier/TieredSpilloverCache.java b/modules/cache-common/src/main/java/org/opensearch/cache/common/tier/TieredSpilloverCache.java index b6d6913a9f8d4..f69c56808b2a1 100644 --- a/modules/cache-common/src/main/java/org/opensearch/cache/common/tier/TieredSpilloverCache.java +++ b/modules/cache-common/src/main/java/org/opensearch/cache/common/tier/TieredSpilloverCache.java @@ -195,7 +195,16 @@ public V computeIfAbsent(ICacheKey key, LoadAwareCacheLoader, V> // and it only has to be loaded one time, we should report one miss and the rest hits. But, if we do stats in // getValueFromTieredCache(), // we will see all misses. Instead, handle stats in computeIfAbsent(). 
- Tuple cacheValueTuple = getValueFromTieredCache(false).apply(key); + Tuple cacheValueTuple; + CompletableFuture, V>> future = null; + try (ReleasableLock ignore = readLock.acquire()) { + cacheValueTuple = getValueFromTieredCache(false).apply(key); + if (cacheValueTuple == null) { + // Only one of the threads will succeed putting a future into map for the same key. + // Rest will fetch existing future and wait on that to complete. + future = completableFutureMap.putIfAbsent(key, new CompletableFuture<>()); + } + } List heapDimensionValues = statsHolder.getDimensionsWithTierValue(key.dimensions, TIER_DIMENSION_VALUE_ON_HEAP); List diskDimensionValues = statsHolder.getDimensionsWithTierValue(key.dimensions, TIER_DIMENSION_VALUE_DISK); @@ -203,7 +212,7 @@ public V computeIfAbsent(ICacheKey key, LoadAwareCacheLoader, V> // Add the value to the onHeap cache. We are calling computeIfAbsent which does another get inside. // This is needed as there can be many requests for the same key at the same time and we only want to load // the value once. - V value = compute(key, loader); + V value = compute(key, loader, future); // Handle stats if (loader.isLoaded()) { // The value was just computed and added to the cache by this thread. Register a miss for the heap cache, and the disk cache @@ -232,10 +241,8 @@ public V computeIfAbsent(ICacheKey key, LoadAwareCacheLoader, V> return cacheValueTuple.v1(); } - private V compute(ICacheKey key, LoadAwareCacheLoader, V> loader) throws Exception { - // Only one of the threads will succeed putting a future into map for the same key. - // Rest will fetch existing future and wait on that to complete. - CompletableFuture, V>> future = completableFutureMap.putIfAbsent(key, new CompletableFuture<>()); + private V compute(ICacheKey key, LoadAwareCacheLoader, V> loader, CompletableFuture, V>> future) + throws Exception { // Handler to handle results post processing. Takes a tuple or exception as an input and returns // the value. Also before returning value, puts the value in cache. BiFunction, V>, Throwable, Void> handler = (pair, ex) -> { @@ -253,7 +260,7 @@ private V compute(ICacheKey key, LoadAwareCacheLoader, V> loader logger.warn("Exception occurred while trying to compute the value", ex); } } - completableFutureMap.remove(key); // Remove key from map as not needed anymore. + completableFutureMap.remove(key);// Remove key from map as not needed anymore. 
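+ // By this point the handler has already put the computed value into the cache (see the
+ // comment above), so later lookups for this key are served from the cache rather than
+ // from the now-removed future.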
return null; }; V value = null; diff --git a/modules/cache-common/src/test/java/org/opensearch/cache/common/tier/TieredSpilloverCacheTests.java b/modules/cache-common/src/test/java/org/opensearch/cache/common/tier/TieredSpilloverCacheTests.java index b9c7bbdb77d3d..c6440a1e1797f 100644 --- a/modules/cache-common/src/test/java/org/opensearch/cache/common/tier/TieredSpilloverCacheTests.java +++ b/modules/cache-common/src/test/java/org/opensearch/cache/common/tier/TieredSpilloverCacheTests.java @@ -760,7 +760,7 @@ public void testInvalidateAll() throws Exception { } public void testComputeIfAbsentConcurrently() throws Exception { - int onHeapCacheSize = randomIntBetween(100, 300); + int onHeapCacheSize = randomIntBetween(500, 700); int diskCacheSize = randomIntBetween(200, 400); int keyValueSize = 50; @@ -782,7 +782,7 @@ public void testComputeIfAbsentConcurrently() throws Exception { 0 ); - int numberOfSameKeys = randomIntBetween(10, onHeapCacheSize - 1); + int numberOfSameKeys = randomIntBetween(400, onHeapCacheSize - 1); ICacheKey key = getICacheKey(UUID.randomUUID().toString()); String value = UUID.randomUUID().toString();
From a99b4949ec8a16f2f47b9c909f66c262ae5605f8 Mon Sep 17 00:00:00 2001 From: Andrew Ross Date: Wed, 26 Jun 2024 12:48:09 -0700 Subject: [PATCH 013/167] Add allowlist setting for ingest-common processors (#14479) Add a new static setting that lets an operator choose specific ingest processors to enable by name. The behavior is as follows: - If the allowlist setting is not defined, all installed processors are enabled. This is the status quo. - If the allowlist setting is defined as the empty set, then all processors are disabled. - If the allowlist setting contains the names of valid processors, only those processors are enabled. - If the allowlist setting contains a name of a processor that does not exist, then the server will fail to start with an IllegalArgumentException listing which processors were defined in the allowlist but are not installed. - If the allowlist setting is changed between server restarts then any ingest pipeline using a now-disabled processor will fail. This is the same experience if a pipeline used a processor defined by a plugin but then that plugin were to be uninstalled across restarts.
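A minimal sketch of supplying the allowlist, mirroring the unit tests added below (the setting key, the builder call, and the "set" and "date" processor names are all introduced or registered in this patch):

    Settings settings = Settings.builder()
        .putList(IngestCommonModulePlugin.PROCESSORS_ALLOWLIST_SETTING.getKey(), List.of("set", "date"))
        .build();
    // getProcessors(...) now returns only the "set" and "date" factories; an empty list
    // disables every processor, and an unknown name throws when the plugin is initialized.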
Related to #14439 Signed-off-by: Andrew Ross --- CHANGELOG.md | 1 + .../common/IngestCommonModulePlugin.java | 39 ++++++- .../common/IngestCommonModulePluginTests.java | 109 ++++++++++++++++++ 3 files changed, 146 insertions(+), 3 deletions(-) create mode 100644 modules/ingest-common/src/test/java/org/opensearch/ingest/common/IngestCommonModulePluginTests.java diff --git a/CHANGELOG.md b/CHANGELOG.md index f71ba46745ef1..b31784d5ac31c 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -10,6 +10,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), - Apply the date histogram rewrite optimization to range aggregation ([#13865](https://github.com/opensearch-project/OpenSearch/pull/13865)) - [Writable Warm] Add composite directory implementation and integrate it with FileCache ([12782](https://github.com/opensearch-project/OpenSearch/pull/12782)) - Fix race condition while parsing derived fields from search definition ([14445](https://github.com/opensearch-project/OpenSearch/pull/14445)) +- Add allowlist setting for ingest-common processors ([#14439](https://github.com/opensearch-project/OpenSearch/issues/14439)) ### Dependencies - Bump `org.gradle.test-retry` from 1.5.8 to 1.5.9 ([#13442](https://github.com/opensearch-project/OpenSearch/pull/13442)) diff --git a/modules/ingest-common/src/main/java/org/opensearch/ingest/common/IngestCommonModulePlugin.java b/modules/ingest-common/src/main/java/org/opensearch/ingest/common/IngestCommonModulePlugin.java index 162934efa6778..bf9e9b71b8491 100644 --- a/modules/ingest-common/src/main/java/org/opensearch/ingest/common/IngestCommonModulePlugin.java +++ b/modules/ingest-common/src/main/java/org/opensearch/ingest/common/IngestCommonModulePlugin.java @@ -58,10 +58,20 @@ import java.util.HashMap; import java.util.List; import java.util.Map; +import java.util.Set; +import java.util.function.Function; import java.util.function.Supplier; +import java.util.stream.Collectors; public class IngestCommonModulePlugin extends Plugin implements ActionPlugin, IngestPlugin { + static final Setting> PROCESSORS_ALLOWLIST_SETTING = Setting.listSetting( + "ingest.common.processors.allowed", + List.of(), + Function.identity(), + Setting.Property.NodeScope + ); + static final Setting WATCHDOG_INTERVAL = Setting.timeSetting( "ingest.grok.watchdog.interval", TimeValue.timeValueSeconds(1), @@ -77,7 +87,7 @@ public IngestCommonModulePlugin() {} @Override public Map getProcessors(Processor.Parameters parameters) { - Map processors = new HashMap<>(); + final Map processors = new HashMap<>(); processors.put(DateProcessor.TYPE, new DateProcessor.Factory(parameters.scriptService)); processors.put(SetProcessor.TYPE, new SetProcessor.Factory(parameters.scriptService)); processors.put(AppendProcessor.TYPE, new AppendProcessor.Factory(parameters.scriptService)); @@ -110,7 +120,7 @@ public Map getProcessors(Processor.Parameters paramet processors.put(RemoveByPatternProcessor.TYPE, new RemoveByPatternProcessor.Factory()); processors.put(CommunityIdProcessor.TYPE, new CommunityIdProcessor.Factory()); processors.put(FingerprintProcessor.TYPE, new FingerprintProcessor.Factory()); - return Collections.unmodifiableMap(processors); + return filterForAllowlistSetting(parameters.env.settings(), processors); } @Override @@ -133,7 +143,7 @@ public List getRestHandlers( @Override public List> getSettings() { - return Arrays.asList(WATCHDOG_INTERVAL, WATCHDOG_MAX_EXECUTION_TIME); + return Arrays.asList(WATCHDOG_INTERVAL, 
WATCHDOG_MAX_EXECUTION_TIME, PROCESSORS_ALLOWLIST_SETTING); } private static MatcherWatchdog createGrokThreadWatchdog(Processor.Parameters parameters) { @@ -147,4 +157,27 @@ private static MatcherWatchdog createGrokThreadWatchdog(Processor.Parameters par ); } + private Map filterForAllowlistSetting(Settings settings, Map map) { + if (PROCESSORS_ALLOWLIST_SETTING.exists(settings) == false) { + return Map.copyOf(map); + } + final Set allowlist = Set.copyOf(PROCESSORS_ALLOWLIST_SETTING.get(settings)); + // Assert that no unknown processors are defined in the allowlist + final Set unknownAllowlistProcessors = allowlist.stream() + .filter(p -> map.containsKey(p) == false) + .collect(Collectors.toSet()); + if (unknownAllowlistProcessors.isEmpty() == false) { + throw new IllegalArgumentException( + "Processor(s) " + + unknownAllowlistProcessors + + " were defined in [" + + PROCESSORS_ALLOWLIST_SETTING.getKey() + + "] but do not exist" + ); + } + return map.entrySet() + .stream() + .filter(e -> allowlist.contains(e.getKey())) + .collect(Collectors.toUnmodifiableMap(Map.Entry::getKey, Map.Entry::getValue)); + } } diff --git a/modules/ingest-common/src/test/java/org/opensearch/ingest/common/IngestCommonModulePluginTests.java b/modules/ingest-common/src/test/java/org/opensearch/ingest/common/IngestCommonModulePluginTests.java new file mode 100644 index 0000000000000..b0c1e0fdbaa63 --- /dev/null +++ b/modules/ingest-common/src/test/java/org/opensearch/ingest/common/IngestCommonModulePluginTests.java @@ -0,0 +1,109 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +package org.opensearch.ingest.common; + +import org.opensearch.common.settings.Settings; +import org.opensearch.env.TestEnvironment; +import org.opensearch.ingest.Processor; +import org.opensearch.test.OpenSearchTestCase; + +import java.io.IOException; +import java.util.List; +import java.util.Set; + +public class IngestCommonModulePluginTests extends OpenSearchTestCase { + + public void testAllowlist() throws IOException { + runAllowlistTest(List.of()); + runAllowlistTest(List.of("date")); + runAllowlistTest(List.of("set")); + runAllowlistTest(List.of("copy", "date")); + runAllowlistTest(List.of("date", "set", "copy")); + } + + private void runAllowlistTest(List allowlist) throws IOException { + final Settings settings = Settings.builder() + .putList(IngestCommonModulePlugin.PROCESSORS_ALLOWLIST_SETTING.getKey(), allowlist) + .build(); + try (IngestCommonModulePlugin plugin = new IngestCommonModulePlugin()) { + assertEquals(Set.copyOf(allowlist), plugin.getProcessors(createParameters(settings)).keySet()); + } + } + + public void testAllowlistNotSpecified() throws IOException { + final Settings.Builder builder = Settings.builder(); + builder.remove(IngestCommonModulePlugin.PROCESSORS_ALLOWLIST_SETTING.getKey()); + final Settings settings = builder.build(); + try (IngestCommonModulePlugin plugin = new IngestCommonModulePlugin()) { + final Set expected = Set.of( + "append", + "urldecode", + "sort", + "fail", + "trim", + "set", + "fingerprint", + "pipeline", + "json", + "join", + "kv", + "bytes", + "date", + "drop", + "community_id", + "lowercase", + "convert", + "copy", + "gsub", + "dot_expander", + "rename", + "remove_by_pattern", + "html_strip", + "remove", + "csv", + "grok", + "date_index_name", + "foreach", + "script", + "dissect", + "uppercase", + "split" + ); + 
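+ // Deliberately exhaustive: this set names every processor the plugin registers, so adding
+ // a processor to getProcessors() without updating this list fails the test.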
assertEquals(expected, plugin.getProcessors(createParameters(settings)).keySet()); + } + } + + public void testAllowlistHasNonexistentProcessors() throws IOException { + final Settings settings = Settings.builder() + .putList(IngestCommonModulePlugin.PROCESSORS_ALLOWLIST_SETTING.getKey(), List.of("threeve")) + .build(); + try (IngestCommonModulePlugin plugin = new IngestCommonModulePlugin()) { + IllegalArgumentException e = expectThrows( + IllegalArgumentException.class, + () -> plugin.getProcessors(createParameters(settings)) + ); + assertTrue(e.getMessage(), e.getMessage().contains("threeve")); + } + } + + private static Processor.Parameters createParameters(Settings settings) { + return new Processor.Parameters( + TestEnvironment.newEnvironment(Settings.builder().put(settings).put("path.home", "").build()), + null, + null, + null, + () -> 0L, + (a, b) -> null, + null, + null, + $ -> {}, + null + ); + } +} From 2be25bbfe83800e96752aee0acfd01b50d9c0a68 Mon Sep 17 00:00:00 2001 From: panguixin Date: Thu, 27 Jun 2024 07:16:18 +0800 Subject: [PATCH 014/167] Fix file cache initialization (#14004) * fix file cache initialization Signed-off-by: panguixin * changelog Signed-off-by: panguixin * add test Signed-off-by: panguixin --------- Signed-off-by: panguixin --- CHANGELOG.md | 1 + .../snapshots/SearchableSnapshotIT.java | 25 ++++++ .../cluster/node/DiscoveryNode.java | 4 + .../remote/utils/cache/SegmentedCache.java | 4 +- .../org/opensearch/monitor/fs/FsProbe.java | 13 ++- .../main/java/org/opensearch/node/Node.java | 86 ++++++++++++------- .../env/NodeRepurposeCommandTests.java | 2 +- .../java/org/opensearch/node/NodeTests.java | 2 +- .../opensearch/test/InternalTestCluster.java | 10 ++- 9 files changed, 106 insertions(+), 41 deletions(-) diff --git a/CHANGELOG.md b/CHANGELOG.md index b31784d5ac31c..3f8fff1db214a 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -47,6 +47,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), - Fix fs info reporting negative available size ([#11573](https://github.com/opensearch-project/OpenSearch/pull/11573)) - Add ListPitInfo::getKeepAlive() getter ([#14495](https://github.com/opensearch-project/OpenSearch/pull/14495)) - Fix FuzzyQuery in keyword field will use IndexOrDocValuesQuery when both of index and doc_value are true ([#14378](https://github.com/opensearch-project/OpenSearch/pull/14378)) +- Fix file cache initialization ([#14004](https://github.com/opensearch-project/OpenSearch/pull/14004)) ### Security diff --git a/server/src/internalClusterTest/java/org/opensearch/snapshots/SearchableSnapshotIT.java b/server/src/internalClusterTest/java/org/opensearch/snapshots/SearchableSnapshotIT.java index 2440a3c64e956..1c199df4d548e 100644 --- a/server/src/internalClusterTest/java/org/opensearch/snapshots/SearchableSnapshotIT.java +++ b/server/src/internalClusterTest/java/org/opensearch/snapshots/SearchableSnapshotIT.java @@ -28,6 +28,7 @@ import org.opensearch.cluster.block.ClusterBlockException; import org.opensearch.cluster.metadata.IndexMetadata; import org.opensearch.cluster.node.DiscoveryNode; +import org.opensearch.cluster.node.DiscoveryNodeRole; import org.opensearch.cluster.routing.GroupShardsIterator; import org.opensearch.cluster.routing.ShardIterator; import org.opensearch.cluster.routing.ShardRouting; @@ -35,6 +36,7 @@ import org.opensearch.common.Priority; import org.opensearch.common.io.PathUtils; import org.opensearch.common.settings.Settings; +import 
org.opensearch.common.settings.SettingsException; import org.opensearch.common.unit.TimeValue; import org.opensearch.core.common.unit.ByteSizeUnit; import org.opensearch.core.index.Index; @@ -65,10 +67,13 @@ import java.util.stream.StreamSupport; import static org.opensearch.action.admin.cluster.node.stats.NodesStatsRequest.Metric.FS; +import static org.opensearch.common.util.FeatureFlags.TIERED_REMOTE_INDEX; import static org.opensearch.core.common.util.CollectionUtils.iterableAsArrayList; import static org.opensearch.index.store.remote.filecache.FileCacheSettings.DATA_TO_FILE_CACHE_SIZE_RATIO_SETTING; import static org.opensearch.test.NodeRoles.clusterManagerOnlyNode; import static org.opensearch.test.NodeRoles.dataNode; +import static org.opensearch.test.NodeRoles.onlyRole; +import static org.opensearch.test.NodeRoles.onlyRoles; import static org.opensearch.test.hamcrest.OpenSearchAssertions.assertAcked; import static org.hamcrest.Matchers.contains; import static org.hamcrest.Matchers.containsString; @@ -1009,6 +1014,26 @@ public void cleanup() throws Exception { ); } + public void testStartSearchNode() throws Exception { + // test start dedicated search node + internalCluster().startNode(Settings.builder().put(onlyRole(DiscoveryNodeRole.SEARCH_ROLE))); + // test start node without search role + internalCluster().startNode(Settings.builder().put(onlyRole(DiscoveryNodeRole.DATA_ROLE))); + // test start non-dedicated search node with TIERED_REMOTE_INDEX feature enabled + internalCluster().startNode( + Settings.builder() + .put(onlyRoles(Set.of(DiscoveryNodeRole.SEARCH_ROLE, DiscoveryNodeRole.DATA_ROLE))) + .put(TIERED_REMOTE_INDEX, true) + ); + // test start non-dedicated search node + assertThrows( + SettingsException.class, + () -> internalCluster().startNode( + Settings.builder().put(onlyRoles(Set.of(DiscoveryNodeRole.SEARCH_ROLE, DiscoveryNodeRole.DATA_ROLE))) + ) + ); + } + private void assertSearchableSnapshotIndexDirectoryExistence(String nodeName, Index index, boolean exists) throws Exception { final Node node = internalCluster().getInstance(Node.class, nodeName); final ShardId shardId = new ShardId(index, 0); diff --git a/server/src/main/java/org/opensearch/cluster/node/DiscoveryNode.java b/server/src/main/java/org/opensearch/cluster/node/DiscoveryNode.java index 690621c2e7bca..653f81830ed17 100644 --- a/server/src/main/java/org/opensearch/cluster/node/DiscoveryNode.java +++ b/server/src/main/java/org/opensearch/cluster/node/DiscoveryNode.java @@ -130,6 +130,10 @@ public static boolean isSearchNode(Settings settings) { return hasRole(settings, DiscoveryNodeRole.SEARCH_ROLE); } + public static boolean isDedicatedSearchNode(Settings settings) { + return getRolesFromSettings(settings).stream().allMatch(DiscoveryNodeRole.SEARCH_ROLE::equals); + } + private final String nodeName; private final String nodeId; private final String ephemeralId; diff --git a/server/src/main/java/org/opensearch/index/store/remote/utils/cache/SegmentedCache.java b/server/src/main/java/org/opensearch/index/store/remote/utils/cache/SegmentedCache.java index 2ea7ea8dbee12..9ff6ddb1fb667 100644 --- a/server/src/main/java/org/opensearch/index/store/remote/utils/cache/SegmentedCache.java +++ b/server/src/main/java/org/opensearch/index/store/remote/utils/cache/SegmentedCache.java @@ -52,15 +52,15 @@ private static final int ceilingNextPowerOfTwo(int x) { private final Weigher weigher; public SegmentedCache(Builder builder) { - this.capacity = builder.capacity; final int segments = 
ceilingNextPowerOfTwo(builder.concurrencyLevel); this.segmentMask = segments - 1; this.table = newSegmentArray(segments); - this.perSegmentCapacity = (capacity + (segments - 1)) / segments; + this.perSegmentCapacity = (builder.capacity + (segments - 1)) / segments; this.weigher = builder.weigher; for (int i = 0; i < table.length; i++) { table[i] = new LRUCache<>(perSegmentCapacity, builder.listener, builder.weigher); } + this.capacity = perSegmentCapacity * segments; } @SuppressWarnings("unchecked")
diff --git a/server/src/main/java/org/opensearch/monitor/fs/FsProbe.java b/server/src/main/java/org/opensearch/monitor/fs/FsProbe.java index f93cb63ff1f0a..db77ec7628e76 100644 --- a/server/src/main/java/org/opensearch/monitor/fs/FsProbe.java +++ b/server/src/main/java/org/opensearch/monitor/fs/FsProbe.java @@ -81,7 +81,11 @@ public FsInfo stats(FsInfo previous) throws IOException { if (fileCache != null && dataLocations[i].fileCacheReservedSize != ByteSizeValue.ZERO) { paths[i].fileCacheReserved = adjustForHugeFilesystems(dataLocations[i].fileCacheReservedSize.getBytes()); paths[i].fileCacheUtilized = adjustForHugeFilesystems(fileCache.usage().usage()); - paths[i].available -= (paths[i].fileCacheReserved - paths[i].fileCacheUtilized); + // fileCacheFree will be less than zero if the cache is over-subscribed + long fileCacheFree = paths[i].fileCacheReserved - paths[i].fileCacheUtilized; + if (fileCacheFree > 0) { + paths[i].available -= fileCacheFree; + } // occurs if reserved file cache space is occupied by other files, like local indices if (paths[i].available < 0) { paths[i].available = 0; @@ -215,4 +219,11 @@ public static FsInfo.Path getFSInfo(NodePath nodePath) throws IOException { return fsPath; } + public static long getTotalSize(NodePath nodePath) throws IOException { + return adjustForHugeFilesystems(nodePath.fileStore.getTotalSpace()); + } + + public static long getAvailableSize(NodePath nodePath) throws IOException { + return adjustForHugeFilesystems(nodePath.fileStore.getUsableSpace()); + } }
diff --git a/server/src/main/java/org/opensearch/node/Node.java b/server/src/main/java/org/opensearch/node/Node.java index a91dce4ece126..505c9264d62bb 100644 --- a/server/src/main/java/org/opensearch/node/Node.java +++ b/server/src/main/java/org/opensearch/node/Node.java @@ -38,6 +38,7 @@ import org.opensearch.Build; import org.opensearch.ExceptionsHelper; import org.opensearch.OpenSearchException; +import org.opensearch.OpenSearchParseException; import org.opensearch.OpenSearchTimeoutException; import org.opensearch.Version; import org.opensearch.action.ActionModule; @@ -108,6 +109,7 @@ import org.opensearch.common.settings.Settings; import org.opensearch.common.settings.SettingsException; import org.opensearch.common.settings.SettingsModule; +import org.opensearch.common.unit.RatioValue; import org.opensearch.common.unit.TimeValue; import org.opensearch.common.util.BigArrays; import org.opensearch.common.util.FeatureFlags; @@ -176,7 +178,6 @@ import org.opensearch.ingest.IngestService; import org.opensearch.monitor.MonitorService; import org.opensearch.monitor.fs.FsHealthService; -import org.opensearch.monitor.fs.FsInfo; import org.opensearch.monitor.fs.FsProbe; import org.opensearch.monitor.jvm.JvmInfo; import org.opensearch.node.remotestore.RemoteStoreNodeService; @@ -372,9 +373,12 @@ public class Node implements Closeable { } }, Setting.Property.NodeScope); - public static final Setting NODE_SEARCH_CACHE_SIZE_SETTING = Setting.byteSizeSetting( + private static final String ZERO = "0";
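+ // The value may be a ratio ("80%") or an absolute size ("16gb"); see calculateFileCacheSize
+ // below. The "0" default applies to nodes that are neither dedicated search nodes nor running
+ // with the tiered remote index feature, and is rejected with a SettingsException when the
+ // file cache is initialized.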
+ + public static final Setting NODE_SEARCH_CACHE_SIZE_SETTING = new Setting<>( "node.search.cache.size", - ByteSizeValue.ZERO, + s -> (FeatureFlags.isEnabled(FeatureFlags.TIERED_REMOTE_INDEX_SETTING) || DiscoveryNode.isDedicatedSearchNode(s)) ? "80%" : ZERO, + Node::validateFileCacheSize, Property.NodeScope ); @@ -2002,43 +2006,59 @@ DiscoveryNode getNode() { * Initializes the search cache with a defined capacity. * The capacity of the cache is based on user configuration for {@link Node#NODE_SEARCH_CACHE_SIZE_SETTING}. * If the user doesn't configure the cache size, it fails if the node is a data + search node. - * Else it configures the size to 80% of available capacity for a dedicated search node, if not explicitly defined. + * Else it configures the size to 80% of total capacity for a dedicated search node, if not explicitly defined. */ private void initializeFileCache(Settings settings, CircuitBreaker circuitBreaker) throws IOException { boolean isWritableRemoteIndexEnabled = FeatureFlags.isEnabled(FeatureFlags.TIERED_REMOTE_INDEX_SETTING); - if (DiscoveryNode.isSearchNode(settings) || isWritableRemoteIndexEnabled) { - NodeEnvironment.NodePath fileCacheNodePath = nodeEnvironment.fileCacheNodePath(); - long capacity = NODE_SEARCH_CACHE_SIZE_SETTING.get(settings).getBytes(); - FsInfo.Path info = ExceptionsHelper.catchAsRuntimeException(() -> FsProbe.getFSInfo(fileCacheNodePath)); - long availableCapacity = info.getAvailable().getBytes(); - - // Initialize default values for cache if NODE_SEARCH_CACHE_SIZE_SETTING is not set. - if (capacity == 0) { - // If node is not a dedicated search node without configuration, prevent cache initialization - if (!isWritableRemoteIndexEnabled - && DiscoveryNode.getRolesFromSettings(settings) - .stream() - .anyMatch(role -> !DiscoveryNodeRole.SEARCH_ROLE.equals(role))) { - throw new SettingsException( - "Unable to initialize the " - + DiscoveryNodeRole.SEARCH_ROLE.roleName() - + "-" - + DiscoveryNodeRole.DATA_ROLE.roleName() - + " node: Missing value for configuration " - + NODE_SEARCH_CACHE_SIZE_SETTING.getKey() - ); - } else { - capacity = 80 * availableCapacity / 100; - } + if (DiscoveryNode.isSearchNode(settings) == false && isWritableRemoteIndexEnabled == false) { + return; + } + + String capacityRaw = NODE_SEARCH_CACHE_SIZE_SETTING.get(settings); + logger.info("cache size [{}]", capacityRaw); + if (capacityRaw.equals(ZERO)) { + throw new SettingsException( + "Unable to initialize the " + + DiscoveryNodeRole.SEARCH_ROLE.roleName() + + "-" + + DiscoveryNodeRole.DATA_ROLE.roleName() + + " node: Missing value for configuration " + + NODE_SEARCH_CACHE_SIZE_SETTING.getKey() + ); + } + + NodeEnvironment.NodePath fileCacheNodePath = nodeEnvironment.fileCacheNodePath(); + long totalSpace = ExceptionsHelper.catchAsRuntimeException(() -> FsProbe.getTotalSize(fileCacheNodePath)); + long capacity = calculateFileCacheSize(capacityRaw, totalSpace); + if (capacity <= 0 || totalSpace <= capacity) { + throw new SettingsException("Cache size must be larger than zero and less than total capacity"); + } + + this.fileCache = FileCacheFactory.createConcurrentLRUFileCache(capacity, circuitBreaker); + fileCacheNodePath.fileCacheReservedSize = new ByteSizeValue(this.fileCache.capacity(), ByteSizeUnit.BYTES); + List fileCacheDataPaths = collectFileCacheDataPath(fileCacheNodePath); + this.fileCache.restoreFromDirectory(fileCacheDataPaths); + } + + private static long calculateFileCacheSize(String capacityRaw, long totalSpace) { + try { + RatioValue ratioValue = 
RatioValue.parseRatioValue(capacityRaw); + return Math.round(totalSpace * ratioValue.getAsRatio()); + } catch (OpenSearchParseException e) { + try { + return ByteSizeValue.parseBytesSizeValue(capacityRaw, NODE_SEARCH_CACHE_SIZE_SETTING.getKey()).getBytes(); + } catch (OpenSearchParseException ex) { + ex.addSuppressed(e); + throw ex; } - capacity = Math.min(capacity, availableCapacity); - fileCacheNodePath.fileCacheReservedSize = new ByteSizeValue(capacity, ByteSizeUnit.BYTES); - this.fileCache = FileCacheFactory.createConcurrentLRUFileCache(capacity, circuitBreaker); - List fileCacheDataPaths = collectFileCacheDataPath(fileCacheNodePath); - this.fileCache.restoreFromDirectory(fileCacheDataPaths); } } + private static String validateFileCacheSize(String capacityRaw) { + calculateFileCacheSize(capacityRaw, 0L); + return capacityRaw; + } + /** * Returns the {@link FileCache} instance for remote search node * Note: Visible for testing diff --git a/server/src/test/java/org/opensearch/env/NodeRepurposeCommandTests.java b/server/src/test/java/org/opensearch/env/NodeRepurposeCommandTests.java index 2a3525143c01f..d2d6fdc387dfe 100644 --- a/server/src/test/java/org/opensearch/env/NodeRepurposeCommandTests.java +++ b/server/src/test/java/org/opensearch/env/NodeRepurposeCommandTests.java @@ -95,7 +95,7 @@ public void createNodePaths() throws IOException { dataClusterManagerSettings = buildEnvSettings(Settings.EMPTY); Settings defaultSearchSettings = Settings.builder() .put(dataClusterManagerSettings) - .put(NODE_SEARCH_CACHE_SIZE_SETTING.getKey(), new ByteSizeValue(16, ByteSizeUnit.GB)) + .put(NODE_SEARCH_CACHE_SIZE_SETTING.getKey(), new ByteSizeValue(16, ByteSizeUnit.GB).toString()) .build(); searchNoDataNoClusterManagerSettings = onlyRole(dataClusterManagerSettings, DiscoveryNodeRole.SEARCH_ROLE); diff --git a/server/src/test/java/org/opensearch/node/NodeTests.java b/server/src/test/java/org/opensearch/node/NodeTests.java index f44cc352cd330..0093091f61a1c 100644 --- a/server/src/test/java/org/opensearch/node/NodeTests.java +++ b/server/src/test/java/org/opensearch/node/NodeTests.java @@ -380,7 +380,7 @@ public void testCreateWithFileCache() throws Exception { List> plugins = basePlugins(); ByteSizeValue cacheSize = new ByteSizeValue(16, ByteSizeUnit.GB); Settings searchRoleSettingsWithConfig = baseSettings().put(searchRoleSettings) - .put(Node.NODE_SEARCH_CACHE_SIZE_SETTING.getKey(), cacheSize) + .put(Node.NODE_SEARCH_CACHE_SIZE_SETTING.getKey(), cacheSize.toString()) .build(); Settings onlySearchRoleSettings = Settings.builder() .put(searchRoleSettingsWithConfig) diff --git a/test/framework/src/main/java/org/opensearch/test/InternalTestCluster.java b/test/framework/src/main/java/org/opensearch/test/InternalTestCluster.java index ca80c65e58522..ec88002317284 100644 --- a/test/framework/src/main/java/org/opensearch/test/InternalTestCluster.java +++ b/test/framework/src/main/java/org/opensearch/test/InternalTestCluster.java @@ -165,6 +165,7 @@ import static org.opensearch.test.NodeRoles.onlyRoles; import static org.opensearch.test.NodeRoles.removeRoles; import static org.opensearch.test.OpenSearchTestCase.assertBusy; +import static org.opensearch.test.OpenSearchTestCase.randomBoolean; import static org.opensearch.test.OpenSearchTestCase.randomFrom; import static org.hamcrest.Matchers.equalTo; import static org.hamcrest.Matchers.greaterThan; @@ -216,7 +217,8 @@ public final class InternalTestCluster extends TestCluster { nodeAndClient.node.settings() ); - private static final ByteSizeValue 
DEFAULT_SEARCH_CACHE_SIZE = new ByteSizeValue(2, ByteSizeUnit.GB); + private static final String DEFAULT_SEARCH_CACHE_SIZE_BYTES = "2gb"; + private static final String DEFAULT_SEARCH_CACHE_SIZE_PERCENT = "5%"; public static final int DEFAULT_LOW_NUM_CLUSTER_MANAGER_NODES = 1; public static final int DEFAULT_HIGH_NUM_CLUSTER_MANAGER_NODES = 3; @@ -700,8 +702,10 @@ public synchronized void ensureAtLeastNumSearchAndDataNodes(int n) { logger.info("increasing cluster size from {} to {}", size, n); Set searchAndDataRoles = Set.of(DiscoveryNodeRole.DATA_ROLE, DiscoveryNodeRole.SEARCH_ROLE); Settings settings = Settings.builder() - .put(Settings.EMPTY) - .put(Node.NODE_SEARCH_CACHE_SIZE_SETTING.getKey(), DEFAULT_SEARCH_CACHE_SIZE) + .put( + Node.NODE_SEARCH_CACHE_SIZE_SETTING.getKey(), + randomBoolean() ? DEFAULT_SEARCH_CACHE_SIZE_PERCENT : DEFAULT_SEARCH_CACHE_SIZE_BYTES + ) .build(); startNodes(n - size, Settings.builder().put(onlyRoles(settings, searchAndDataRoles)).build()); validateClusterFormed(); From bb9819c3ed4d319e67f683d5fedd23184b39b85f Mon Sep 17 00:00:00 2001 From: Bukhtawar Khan Date: Thu, 27 Jun 2024 07:48:42 +0530 Subject: [PATCH 015/167] Add Ashish Singh as maintainer (#14567) Signed-off-by: Bukhtawar Khan --- .github/CODEOWNERS | 2 +- MAINTAINERS.md | 1 + 2 files changed, 2 insertions(+), 1 deletion(-) diff --git a/.github/CODEOWNERS b/.github/CODEOWNERS index 5a2d08756c49f..8d69e98220b69 100644 --- a/.github/CODEOWNERS +++ b/.github/CODEOWNERS @@ -24,4 +24,4 @@ /.github/ @peternied -/MAINTAINERS.md @anasalkouz @andrross @Bukhtawar @CEHENKLE @dblock @dbwiddis @gbbafna @jed326 @kotwanikunal @mch2 @msfroh @nknize @owaiskazi19 @peternied @reta @Rishikesh1159 @sachinpkale @saratvemulapalli @shwetathareja @sohami @VachaShah +/MAINTAINERS.md @anasalkouz @andrross @ashking94 @Bukhtawar @CEHENKLE @dblock @dbwiddis @gbbafna @jed326 @kotwanikunal @mch2 @msfroh @nknize @owaiskazi19 @peternied @reta @Rishikesh1159 @sachinpkale @saratvemulapalli @shwetathareja @sohami @VachaShah diff --git a/MAINTAINERS.md b/MAINTAINERS.md index 91b57a4cbc74e..3298ceb15463c 100644 --- a/MAINTAINERS.md +++ b/MAINTAINERS.md @@ -9,6 +9,7 @@ This document contains a list of maintainers in this repo. 
See [opensearch-proje | Anas Alkouz | [anasalkouz](https://github.com/anasalkouz) | Amazon | | Andrew Ross | [andrross](https://github.com/andrross) | Amazon | | Andriy Redko | [reta](https://github.com/reta) | Aiven | +| Ashish Singh | [ashking94](https://github.com/ashking94) | Amazon | | Bukhtawar Khan | [Bukhtawar](https://github.com/Bukhtawar) | Amazon | | Charlotte Henkle | [CEHENKLE](https://github.com/CEHENKLE) | Amazon | | Dan Widdis | [dbwiddis](https://github.com/dbwiddis) | Amazon | From 391dee23aa2e8a808821389c379499e91ee2d550 Mon Sep 17 00:00:00 2001 From: Andriy Redko Date: Thu, 27 Jun 2024 12:16:16 -0400 Subject: [PATCH 016/167] Allow @InternalApi annotation on classes not meant to be constructed outside of the OpenSearch core (#14575) Signed-off-by: Andriy Redko --- CHANGELOG.md | 1 + .../ApiAnnotationProcessorTests.java | 13 ++++++++++++ .../processor/InternalApiAnnotated.java | 4 ++-- ...licApiConstructorAnnotatedInternalApi.java | 21 +++++++++++++++++++ 4 files changed, 37 insertions(+), 2 deletions(-) create mode 100644 libs/common/src/test/resources/org/opensearch/common/annotation/processor/PublicApiConstructorAnnotatedInternalApi.java diff --git a/CHANGELOG.md b/CHANGELOG.md index 3f8fff1db214a..e01dfed0e585e 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -31,6 +31,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), - unsignedLongRangeQuery now returns MatchNoDocsQuery if the lower bounds are greater than the upper bounds ([#14416](https://github.com/opensearch-project/OpenSearch/pull/14416)) - Updated the `indices.query.bool.max_clause_count` setting from being static to dynamically updateable ([#13568](https://github.com/opensearch-project/OpenSearch/pull/13568)) - Make the class CommunityIdProcessor final ([#14448](https://github.com/opensearch-project/OpenSearch/pull/14448)) +- Allow @InternalApi annotation on classes not meant to be constructed outside of the OpenSearch core ([#14575](https://github.com/opensearch-project/OpenSearch/pull/14575)) ### Deprecated diff --git a/libs/common/src/test/java/org/opensearch/common/annotation/processor/ApiAnnotationProcessorTests.java b/libs/common/src/test/java/org/opensearch/common/annotation/processor/ApiAnnotationProcessorTests.java index 8d8a4c7895339..52162e3df0c1c 100644 --- a/libs/common/src/test/java/org/opensearch/common/annotation/processor/ApiAnnotationProcessorTests.java +++ b/libs/common/src/test/java/org/opensearch/common/annotation/processor/ApiAnnotationProcessorTests.java @@ -473,4 +473,17 @@ public void testPublicApiWithProtectedInterface() { assertThat(failure.diagnotics(), not(hasItem(matching(Diagnostic.Kind.ERROR)))); } + + /** + * The constructor arguments have relaxed semantics at the moment: those could be not annotated or be annotated as {@link InternalApi} + */ + public void testPublicApiConstructorAnnotatedInternalApi() { + final CompilerResult result = compile("PublicApiConstructorAnnotatedInternalApi.java", "NotAnnotated.java"); + assertThat(result, instanceOf(Failure.class)); + + final Failure failure = (Failure) result; + assertThat(failure.diagnotics(), hasSize(2)); + + assertThat(failure.diagnotics(), not(hasItem(matching(Diagnostic.Kind.ERROR)))); + } } diff --git a/libs/common/src/test/resources/org/opensearch/common/annotation/processor/InternalApiAnnotated.java b/libs/common/src/test/resources/org/opensearch/common/annotation/processor/InternalApiAnnotated.java index 
9996ba8b736aa..b0b542e127285 100644 --- a/libs/common/src/test/resources/org/opensearch/common/annotation/processor/InternalApiAnnotated.java +++ b/libs/common/src/test/resources/org/opensearch/common/annotation/processor/InternalApiAnnotated.java @@ -8,9 +8,9 @@ package org.opensearch.common.annotation.processor; -import org.opensearch.common.annotation.PublicApi; +import org.opensearch.common.annotation.InternalApi; -@PublicApi(since = "1.0.0") +@InternalApi public class InternalApiAnnotated { } diff --git a/libs/common/src/test/resources/org/opensearch/common/annotation/processor/PublicApiConstructorAnnotatedInternalApi.java b/libs/common/src/test/resources/org/opensearch/common/annotation/processor/PublicApiConstructorAnnotatedInternalApi.java new file mode 100644 index 0000000000000..d355a6b770391 --- /dev/null +++ b/libs/common/src/test/resources/org/opensearch/common/annotation/processor/PublicApiConstructorAnnotatedInternalApi.java @@ -0,0 +1,21 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +package org.opensearch.common.annotation.processor; + +import org.opensearch.common.annotation.InternalApi; +import org.opensearch.common.annotation.PublicApi; + +@PublicApi(since = "1.0.0") +public class PublicApiConstructorAnnotatedInternalApi { + /** + * The constructors have relaxed semantics at the moment: those could be not annotated or be annotated as {@link InternalApi} + */ + @InternalApi + public PublicApiConstructorAnnotatedInternalApi(NotAnnotated arg) {} +} From f70fd71c388fe1903cb1b1efa2675f2428199aa8 Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Thu, 27 Jun 2024 12:17:51 -0400 Subject: [PATCH 017/167] Bump com.azure:azure-storage-common from 12.21.2 to 12.25.1 in /plugins/repository-azure (#14517) * Bump com.azure:azure-storage-common in /plugins/repository-azure Bumps [com.azure:azure-storage-common](https://github.com/Azure/azure-sdk-for-java) from 12.21.2 to 12.25.1. - [Release notes](https://github.com/Azure/azure-sdk-for-java/releases) - [Commits](https://github.com/Azure/azure-sdk-for-java/compare/azure-storage-common_12.21.2...azure-storage-blob_12.25.1) --- updated-dependencies: - dependency-name: com.azure:azure-storage-common dependency-type: direct:production update-type: version-update:semver-minor ... 
Signed-off-by: dependabot[bot] * Updating SHAs Signed-off-by: dependabot[bot] * Update changelog Signed-off-by: dependabot[bot] Signed-off-by: Andriy Redko --------- Signed-off-by: dependabot[bot] Signed-off-by: Andriy Redko Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: dependabot[bot] --- CHANGELOG.md | 1 + plugins/repository-azure/build.gradle | 2 +- .../licenses/azure-storage-common-12.21.2.jar.sha1 | 1 - .../licenses/azure-storage-common-12.25.1.jar.sha1 | 1 + .../org/opensearch/repositories/azure/AzureStorageService.java | 3 +++ .../src/main/plugin-metadata/plugin-security.policy | 1 + 6 files changed, 7 insertions(+), 2 deletions(-) delete mode 100644 plugins/repository-azure/licenses/azure-storage-common-12.21.2.jar.sha1 create mode 100644 plugins/repository-azure/licenses/azure-storage-common-12.25.1.jar.sha1 diff --git a/CHANGELOG.md b/CHANGELOG.md index e01dfed0e585e..5a52250906ff6 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -25,6 +25,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), - Bump `com.gradle.develocity` from 3.17.4 to 3.17.5 ([#14397](https://github.com/opensearch-project/OpenSearch/pull/14397)) - Bump `opentelemetry` from 1.36.0 to 1.39.0 ([#14457](https://github.com/opensearch-project/OpenSearch/pull/14457)) - Bump `azure-identity` from 1.11.4 to 1.13.0, Bump `msal4j` from 1.14.3 to 1.15.1, Bump `msal4j-persistence-extension` from 1.2.0 to 1.3.0 ([#14506](https://github.com/opensearch-project/OpenSearch/pull/14506)) +- Bump `com.azure:azure-storage-common` from 12.21.2 to 12.25.1 ([#14517](https://github.com/opensearch-project/OpenSearch/pull/14517)) ### Changed - [Tiered Caching] Move query recomputation logic outside write lock ([#14187](https://github.com/opensearch-project/OpenSearch/pull/14187)) diff --git a/plugins/repository-azure/build.gradle b/plugins/repository-azure/build.gradle index f3aa64316b667..f88d291a8eb4a 100644 --- a/plugins/repository-azure/build.gradle +++ b/plugins/repository-azure/build.gradle @@ -47,7 +47,7 @@ dependencies { api 'com.azure:azure-core:1.49.1' api 'com.azure:azure-json:1.1.0' api 'com.azure:azure-xml:1.0.0' - api 'com.azure:azure-storage-common:12.21.2' + api 'com.azure:azure-storage-common:12.25.1' api 'com.azure:azure-core-http-netty:1.15.1' api "io.netty:netty-codec-dns:${versions.netty}" api "io.netty:netty-codec-socks:${versions.netty}" diff --git a/plugins/repository-azure/licenses/azure-storage-common-12.21.2.jar.sha1 b/plugins/repository-azure/licenses/azure-storage-common-12.21.2.jar.sha1 deleted file mode 100644 index b3c73774764df..0000000000000 --- a/plugins/repository-azure/licenses/azure-storage-common-12.21.2.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -d2676d4fc40a501bd5d0437b8d2bfb9926022bea \ No newline at end of file diff --git a/plugins/repository-azure/licenses/azure-storage-common-12.25.1.jar.sha1 b/plugins/repository-azure/licenses/azure-storage-common-12.25.1.jar.sha1 new file mode 100644 index 0000000000000..822a60d81ca27 --- /dev/null +++ b/plugins/repository-azure/licenses/azure-storage-common-12.25.1.jar.sha1 @@ -0,0 +1 @@ +96e2df76ce9a8fa084ae289bb59295d565f2b8d5 \ No newline at end of file diff --git a/plugins/repository-azure/src/main/java/org/opensearch/repositories/azure/AzureStorageService.java b/plugins/repository-azure/src/main/java/org/opensearch/repositories/azure/AzureStorageService.java index f39ed185d8b35..4f30247f0af08 100644 --- 
a/plugins/repository-azure/src/main/java/org/opensearch/repositories/azure/AzureStorageService.java +++ b/plugins/repository-azure/src/main/java/org/opensearch/repositories/azure/AzureStorageService.java @@ -141,6 +141,9 @@ public Void run() { // - https://github.com/Azure/azure-sdk-for-java/pull/25004 // - https://github.com/Azure/azure-sdk-for-java/pull/24374 Configuration.getGlobalConfiguration().put("AZURE_JACKSON_ADAPTER_USE_ACCESS_HELPER", "true"); + // See please: + // - https://github.com/Azure/azure-sdk-for-java/issues/37464 + Configuration.getGlobalConfiguration().put("AZURE_ENABLE_SHUTDOWN_HOOK_WITH_PRIVILEGE", "true"); } public AzureStorageService(Settings settings) { diff --git a/plugins/repository-azure/src/main/plugin-metadata/plugin-security.policy b/plugins/repository-azure/src/main/plugin-metadata/plugin-security.policy index e8fbe35ebab1d..eedcfd98da150 100644 --- a/plugins/repository-azure/src/main/plugin-metadata/plugin-security.policy +++ b/plugins/repository-azure/src/main/plugin-metadata/plugin-security.policy @@ -38,6 +38,7 @@ grant { permission java.lang.RuntimePermission "accessDeclaredMembers"; permission java.lang.reflect.ReflectPermission "suppressAccessChecks"; permission java.lang.RuntimePermission "setContextClassLoader"; + permission java.lang.RuntimePermission "shutdownHooks"; // azure client set Authenticator for proxy username/password permission java.net.NetPermission "setDefaultAuthenticator";
From d9e9944670ffc02dfef6c6a5e50dabd14779b67a Mon Sep 17 00:00:00 2001 From: Andrew Ross Date: Thu, 27 Jun 2024 12:14:38 -0500 Subject: [PATCH 018/167] Add allowlist setting for search-pipeline-common processors (#14562) Add a new static setting that lets an operator choose specific search pipeline processors to enable by name. The behavior is as follows: - If the allowlist setting is not defined, all installed processors are enabled. This is the status quo. - If the allowlist setting is defined as the empty set, then all processors are disabled. - If the allowlist setting contains the names of valid processors, only those processors are enabled. - If the allowlist setting contains a name of a processor that does not exist, then the server will fail to start with an IllegalArgumentException listing which processors were defined in the allowlist but are not installed. - If the allowlist setting is changed between server restarts then any search pipeline using a now-disabled processor will fail. This is the same experience if a pipeline used a processor defined by a plugin but then that plugin were to be uninstalled across restarts. A distinct setting exists for each of request, response, and search phase results processors.
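A minimal sketch of supplying the allowlists, mirroring the unit tests added below (the three setting keys and the "filter_query", "script", and "rename_field" processor names all appear in this patch):

    Settings settings = Settings.builder()
        .putList("search.pipeline.common.request.processors.allowed", List.of("filter_query", "script"))
        .putList("search.pipeline.common.response.processors.allowed", List.of("rename_field"))
        .build();
    // Only the listed request/response processors are returned by getRequestProcessors and
    // getResponseProcessors; leaving a key undefined leaves that processor category unrestricted.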
Related to #14439 Signed-off-by: Andrew Ross --- CHANGELOG.md | 2 +- .../common/IngestCommonModulePlugin.java | 2 +- .../SearchPipelineCommonModulePlugin.java | 102 ++++++++++++++--- ...SearchPipelineCommonModulePluginTests.java | 106 ++++++++++++++++++ 4 files changed, 196 insertions(+), 16 deletions(-) create mode 100644 modules/search-pipeline-common/src/test/java/org/opensearch/search/pipeline/common/SearchPipelineCommonModulePluginTests.java diff --git a/CHANGELOG.md b/CHANGELOG.md index 5a52250906ff6..c6b2d815750f9 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -10,7 +10,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), - Apply the date histogram rewrite optimization to range aggregation ([#13865](https://github.com/opensearch-project/OpenSearch/pull/13865)) - [Writable Warm] Add composite directory implementation and integrate it with FileCache ([12782](https://github.com/opensearch-project/OpenSearch/pull/12782)) - Fix race condition while parsing derived fields from search definition ([14445](https://github.com/opensearch-project/OpenSearch/pull/14445)) -- Add allowlist setting for ingest-common processors ([#14439](https://github.com/opensearch-project/OpenSearch/issues/14439)) +- Add allowlist setting for ingest-common and search-pipeline-common processors ([#14439](https://github.com/opensearch-project/OpenSearch/issues/14439)) ### Dependencies - Bump `org.gradle.test-retry` from 1.5.8 to 1.5.9 ([#13442](https://github.com/opensearch-project/OpenSearch/pull/13442)) diff --git a/modules/ingest-common/src/main/java/org/opensearch/ingest/common/IngestCommonModulePlugin.java b/modules/ingest-common/src/main/java/org/opensearch/ingest/common/IngestCommonModulePlugin.java index bf9e9b71b8491..5b2db9ff940e7 100644 --- a/modules/ingest-common/src/main/java/org/opensearch/ingest/common/IngestCommonModulePlugin.java +++ b/modules/ingest-common/src/main/java/org/opensearch/ingest/common/IngestCommonModulePlugin.java @@ -165,7 +165,7 @@ private Map filterForAllowlistSetting(Settings settin // Assert that no unknown processors are defined in the allowlist final Set unknownAllowlistProcessors = allowlist.stream() .filter(p -> map.containsKey(p) == false) - .collect(Collectors.toSet()); + .collect(Collectors.toUnmodifiableSet()); if (unknownAllowlistProcessors.isEmpty() == false) { throw new IllegalArgumentException( "Processor(s) " diff --git a/modules/search-pipeline-common/src/main/java/org/opensearch/search/pipeline/common/SearchPipelineCommonModulePlugin.java b/modules/search-pipeline-common/src/main/java/org/opensearch/search/pipeline/common/SearchPipelineCommonModulePlugin.java index 5378a6721efb2..1574621a8200e 100644 --- a/modules/search-pipeline-common/src/main/java/org/opensearch/search/pipeline/common/SearchPipelineCommonModulePlugin.java +++ b/modules/search-pipeline-common/src/main/java/org/opensearch/search/pipeline/common/SearchPipelineCommonModulePlugin.java @@ -8,24 +8,61 @@ package org.opensearch.search.pipeline.common; +import org.opensearch.common.settings.Setting; +import org.opensearch.common.settings.Settings; import org.opensearch.plugins.Plugin; import org.opensearch.plugins.SearchPipelinePlugin; import org.opensearch.search.pipeline.Processor; +import org.opensearch.search.pipeline.SearchPhaseResultsProcessor; import org.opensearch.search.pipeline.SearchRequestProcessor; import org.opensearch.search.pipeline.SearchResponseProcessor; +import java.util.List; import java.util.Map; +import 
java.util.Set; +import java.util.function.Function; +import java.util.stream.Collectors; /** * Plugin providing common search request/response processors for use in search pipelines. */ public class SearchPipelineCommonModulePlugin extends Plugin implements SearchPipelinePlugin { + static final Setting> REQUEST_PROCESSORS_ALLOWLIST_SETTING = Setting.listSetting( + "search.pipeline.common.request.processors.allowed", + List.of(), + Function.identity(), + Setting.Property.NodeScope + ); + + static final Setting> RESPONSE_PROCESSORS_ALLOWLIST_SETTING = Setting.listSetting( + "search.pipeline.common.response.processors.allowed", + List.of(), + Function.identity(), + Setting.Property.NodeScope + ); + + static final Setting> SEARCH_PHASE_RESULTS_PROCESSORS_ALLOWLIST_SETTING = Setting.listSetting( + "search.pipeline.common.search.phase.results.processors.allowed", + List.of(), + Function.identity(), + Setting.Property.NodeScope + ); + /** * No constructor needed, but build complains if we don't have a constructor with JavaDoc. */ public SearchPipelineCommonModulePlugin() {} + @Override + public List> getSettings() { + return List.of( + REQUEST_PROCESSORS_ALLOWLIST_SETTING, + RESPONSE_PROCESSORS_ALLOWLIST_SETTING, + SEARCH_PHASE_RESULTS_PROCESSORS_ALLOWLIST_SETTING + ); + } + /** * Returns a map of processor factories. * @@ -34,25 +71,62 @@ public SearchPipelineCommonModulePlugin() {} */ @Override public Map> getRequestProcessors(Parameters parameters) { - return Map.of( - FilterQueryRequestProcessor.TYPE, - new FilterQueryRequestProcessor.Factory(parameters.namedXContentRegistry), - ScriptRequestProcessor.TYPE, - new ScriptRequestProcessor.Factory(parameters.scriptService), - OversampleRequestProcessor.TYPE, - new OversampleRequestProcessor.Factory() + return filterForAllowlistSetting( + REQUEST_PROCESSORS_ALLOWLIST_SETTING, + parameters.env.settings(), + Map.of( + FilterQueryRequestProcessor.TYPE, + new FilterQueryRequestProcessor.Factory(parameters.namedXContentRegistry), + ScriptRequestProcessor.TYPE, + new ScriptRequestProcessor.Factory(parameters.scriptService), + OversampleRequestProcessor.TYPE, + new OversampleRequestProcessor.Factory() + ) ); } @Override public Map> getResponseProcessors(Parameters parameters) { - return Map.of( - RenameFieldResponseProcessor.TYPE, - new RenameFieldResponseProcessor.Factory(), - TruncateHitsResponseProcessor.TYPE, - new TruncateHitsResponseProcessor.Factory(), - CollapseResponseProcessor.TYPE, - new CollapseResponseProcessor.Factory() + return filterForAllowlistSetting( + RESPONSE_PROCESSORS_ALLOWLIST_SETTING, + parameters.env.settings(), + Map.of( + RenameFieldResponseProcessor.TYPE, + new RenameFieldResponseProcessor.Factory(), + TruncateHitsResponseProcessor.TYPE, + new TruncateHitsResponseProcessor.Factory(), + CollapseResponseProcessor.TYPE, + new CollapseResponseProcessor.Factory() + ) ); } + + @Override + public Map> getSearchPhaseResultsProcessors(Parameters parameters) { + return filterForAllowlistSetting(SEARCH_PHASE_RESULTS_PROCESSORS_ALLOWLIST_SETTING, parameters.env.settings(), Map.of()); + } + + private Map> filterForAllowlistSetting( + Setting> allowlistSetting, + Settings settings, + Map> map + ) { + if (allowlistSetting.exists(settings) == false) { + return Map.copyOf(map); + } + final Set allowlist = Set.copyOf(allowlistSetting.get(settings)); + // Assert that no unknown processors are defined in the allowlist + final Set unknownAllowlistProcessors = allowlist.stream() + .filter(p -> map.containsKey(p) == false) + 
.collect(Collectors.toUnmodifiableSet()); + if (unknownAllowlistProcessors.isEmpty() == false) { + throw new IllegalArgumentException( + "Processor(s) " + unknownAllowlistProcessors + " were defined in [" + allowlistSetting.getKey() + "] but do not exist" + ); + } + return map.entrySet() + .stream() + .filter(e -> allowlist.contains(e.getKey())) + .collect(Collectors.toUnmodifiableMap(Map.Entry::getKey, Map.Entry::getValue)); + } } diff --git a/modules/search-pipeline-common/src/test/java/org/opensearch/search/pipeline/common/SearchPipelineCommonModulePluginTests.java b/modules/search-pipeline-common/src/test/java/org/opensearch/search/pipeline/common/SearchPipelineCommonModulePluginTests.java new file mode 100644 index 0000000000000..519468ebe17ff --- /dev/null +++ b/modules/search-pipeline-common/src/test/java/org/opensearch/search/pipeline/common/SearchPipelineCommonModulePluginTests.java @@ -0,0 +1,106 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +package org.opensearch.search.pipeline.common; + +import org.opensearch.common.settings.Settings; +import org.opensearch.env.TestEnvironment; +import org.opensearch.plugins.SearchPipelinePlugin; +import org.opensearch.test.OpenSearchTestCase; + +import java.io.IOException; +import java.util.List; +import java.util.Map; +import java.util.Set; +import java.util.function.BiFunction; + +public class SearchPipelineCommonModulePluginTests extends OpenSearchTestCase { + + public void testRequestProcessorAllowlist() throws IOException { + final String key = SearchPipelineCommonModulePlugin.REQUEST_PROCESSORS_ALLOWLIST_SETTING.getKey(); + runAllowlistTest(key, List.of(), SearchPipelineCommonModulePlugin::getRequestProcessors); + runAllowlistTest(key, List.of("filter_query"), SearchPipelineCommonModulePlugin::getRequestProcessors); + runAllowlistTest(key, List.of("script"), SearchPipelineCommonModulePlugin::getRequestProcessors); + runAllowlistTest(key, List.of("oversample", "script"), SearchPipelineCommonModulePlugin::getRequestProcessors); + runAllowlistTest(key, List.of("filter_query", "script", "oversample"), SearchPipelineCommonModulePlugin::getRequestProcessors); + + final IllegalArgumentException e = expectThrows( + IllegalArgumentException.class, + () -> runAllowlistTest(key, List.of("foo"), SearchPipelineCommonModulePlugin::getRequestProcessors) + ); + assertTrue(e.getMessage(), e.getMessage().contains("foo")); + } + + public void testResponseProcessorAllowlist() throws IOException { + final String key = SearchPipelineCommonModulePlugin.RESPONSE_PROCESSORS_ALLOWLIST_SETTING.getKey(); + runAllowlistTest(key, List.of(), SearchPipelineCommonModulePlugin::getResponseProcessors); + runAllowlistTest(key, List.of("rename_field"), SearchPipelineCommonModulePlugin::getResponseProcessors); + runAllowlistTest(key, List.of("truncate_hits"), SearchPipelineCommonModulePlugin::getResponseProcessors); + runAllowlistTest(key, List.of("collapse", "truncate_hits"), SearchPipelineCommonModulePlugin::getResponseProcessors); + runAllowlistTest( + key, + List.of("rename_field", "truncate_hits", "collapse"), + SearchPipelineCommonModulePlugin::getResponseProcessors + ); + + final IllegalArgumentException e = expectThrows( + IllegalArgumentException.class, + () -> runAllowlistTest(key, List.of("foo"), SearchPipelineCommonModulePlugin::getResponseProcessors) + ); + assertTrue(e.getMessage(), 
e.getMessage().contains("foo")); + } + + public void testSearchPhaseResultsProcessorAllowlist() throws IOException { + final String key = SearchPipelineCommonModulePlugin.SEARCH_PHASE_RESULTS_PROCESSORS_ALLOWLIST_SETTING.getKey(); + runAllowlistTest(key, List.of(), SearchPipelineCommonModulePlugin::getSearchPhaseResultsProcessors); + + final IllegalArgumentException e = expectThrows( + IllegalArgumentException.class, + () -> runAllowlistTest(key, List.of("foo"), SearchPipelineCommonModulePlugin::getSearchPhaseResultsProcessors) + ); + assertTrue(e.getMessage(), e.getMessage().contains("foo")); + } + + private void runAllowlistTest( + String settingKey, + List<String> allowlist, + BiFunction<SearchPipelineCommonModulePlugin, SearchPipelinePlugin.Parameters, Map<String, ?>> function + ) throws IOException { + final Settings settings = Settings.builder().putList(settingKey, allowlist).build(); + try (SearchPipelineCommonModulePlugin plugin = new SearchPipelineCommonModulePlugin()) { + assertEquals(Set.copyOf(allowlist), function.apply(plugin, createParameters(settings)).keySet()); + } + } + + public void testAllowlistNotSpecified() throws IOException { + final Settings settings = Settings.EMPTY; + try (SearchPipelineCommonModulePlugin plugin = new SearchPipelineCommonModulePlugin()) { + assertEquals(Set.of("oversample", "filter_query", "script"), plugin.getRequestProcessors(createParameters(settings)).keySet()); + assertEquals( + Set.of("rename_field", "truncate_hits", "collapse"), + plugin.getResponseProcessors(createParameters(settings)).keySet() + ); + assertEquals(Set.of(), plugin.getSearchPhaseResultsProcessors(createParameters(settings)).keySet()); + } + } + + private static SearchPipelinePlugin.Parameters createParameters(Settings settings) { + return new SearchPipelinePlugin.Parameters( + TestEnvironment.newEnvironment(Settings.builder().put(settings).put("path.home", "").build()), + null, + null, + null, + () -> 0L, + (a, b) -> null, + null, + null, + $ -> {}, + null + ); + } +} From 243e8db04edb1339dc308b877c3fd26bdc92acc9 Mon Sep 17 00:00:00 2001 From: Andriy Redko Date: Thu, 27 Jun 2024 15:03:02 -0400 Subject: [PATCH 019/167] Bump Apache Lucene to 9.11.1 (#14576) (#14581) (cherry picked from commit 0095fd1a44b583a7457b4d4578cdf6a0ed1fd2f3) Signed-off-by: Andriy Redko --- buildSrc/version.properties | 2 +- libs/core/licenses/lucene-core-9.12.0-snapshot-847316d.jar.sha1 | 1 + libs/core/licenses/lucene-core-9.12.0-snapshot-c896995.jar.sha1 | 1 - libs/core/src/main/java/org/opensearch/Version.java | 2 +- .../lucene-expressions-9.12.0-snapshot-847316d.jar.sha1 | 1 + .../lucene-expressions-9.12.0-snapshot-c896995.jar.sha1 | 1 - .../lucene-analysis-icu-9.12.0-snapshot-847316d.jar.sha1 | 1 + .../lucene-analysis-icu-9.12.0-snapshot-c896995.jar.sha1 | 1 - .../lucene-analysis-kuromoji-9.12.0-snapshot-847316d.jar.sha1 | 1 + .../lucene-analysis-kuromoji-9.12.0-snapshot-c896995.jar.sha1 | 1 - .../lucene-analysis-nori-9.12.0-snapshot-847316d.jar.sha1 | 1 + .../lucene-analysis-nori-9.12.0-snapshot-c896995.jar.sha1 | 1 - .../lucene-analysis-phonetic-9.12.0-snapshot-847316d.jar.sha1 | 1 + .../lucene-analysis-phonetic-9.12.0-snapshot-c896995.jar.sha1 | 1 - .../lucene-analysis-smartcn-9.12.0-snapshot-847316d.jar.sha1 | 1 + .../lucene-analysis-smartcn-9.12.0-snapshot-c896995.jar.sha1 | 1 - .../lucene-analysis-stempel-9.12.0-snapshot-847316d.jar.sha1 | 1 + .../lucene-analysis-stempel-9.12.0-snapshot-c896995.jar.sha1 | 1 - .../lucene-analysis-morfologik-9.12.0-snapshot-847316d.jar.sha1 | 1 + .../lucene-analysis-morfologik-9.12.0-snapshot-c896995.jar.sha1 | 1 -
.../lucene-analysis-common-9.12.0-snapshot-847316d.jar.sha1 | 1 + .../lucene-analysis-common-9.12.0-snapshot-c896995.jar.sha1 | 1 - .../lucene-backward-codecs-9.12.0-snapshot-847316d.jar.sha1 | 1 + .../lucene-backward-codecs-9.12.0-snapshot-c896995.jar.sha1 | 1 - server/licenses/lucene-core-9.12.0-snapshot-847316d.jar.sha1 | 1 + server/licenses/lucene-core-9.12.0-snapshot-c896995.jar.sha1 | 1 - .../licenses/lucene-grouping-9.12.0-snapshot-847316d.jar.sha1 | 1 + .../licenses/lucene-grouping-9.12.0-snapshot-c896995.jar.sha1 | 1 - .../lucene-highlighter-9.12.0-snapshot-847316d.jar.sha1 | 1 + .../lucene-highlighter-9.12.0-snapshot-c896995.jar.sha1 | 1 - server/licenses/lucene-join-9.12.0-snapshot-847316d.jar.sha1 | 1 + server/licenses/lucene-join-9.12.0-snapshot-c896995.jar.sha1 | 1 - server/licenses/lucene-memory-9.12.0-snapshot-847316d.jar.sha1 | 1 + server/licenses/lucene-memory-9.12.0-snapshot-c896995.jar.sha1 | 1 - server/licenses/lucene-misc-9.12.0-snapshot-847316d.jar.sha1 | 1 + server/licenses/lucene-misc-9.12.0-snapshot-c896995.jar.sha1 | 1 - server/licenses/lucene-queries-9.12.0-snapshot-847316d.jar.sha1 | 1 + server/licenses/lucene-queries-9.12.0-snapshot-c896995.jar.sha1 | 1 - .../lucene-queryparser-9.12.0-snapshot-847316d.jar.sha1 | 1 + .../lucene-queryparser-9.12.0-snapshot-c896995.jar.sha1 | 1 - server/licenses/lucene-sandbox-9.12.0-snapshot-847316d.jar.sha1 | 1 + server/licenses/lucene-sandbox-9.12.0-snapshot-c896995.jar.sha1 | 1 - .../lucene-spatial-extras-9.12.0-snapshot-847316d.jar.sha1 | 1 + .../lucene-spatial-extras-9.12.0-snapshot-c896995.jar.sha1 | 1 - .../licenses/lucene-spatial3d-9.12.0-snapshot-847316d.jar.sha1 | 1 + .../licenses/lucene-spatial3d-9.12.0-snapshot-c896995.jar.sha1 | 1 - server/licenses/lucene-suggest-9.12.0-snapshot-847316d.jar.sha1 | 1 + server/licenses/lucene-suggest-9.12.0-snapshot-c896995.jar.sha1 | 1 - 48 files changed, 25 insertions(+), 25 deletions(-) create mode 100644 libs/core/licenses/lucene-core-9.12.0-snapshot-847316d.jar.sha1 delete mode 100644 libs/core/licenses/lucene-core-9.12.0-snapshot-c896995.jar.sha1 create mode 100644 modules/lang-expression/licenses/lucene-expressions-9.12.0-snapshot-847316d.jar.sha1 delete mode 100644 modules/lang-expression/licenses/lucene-expressions-9.12.0-snapshot-c896995.jar.sha1 create mode 100644 plugins/analysis-icu/licenses/lucene-analysis-icu-9.12.0-snapshot-847316d.jar.sha1 delete mode 100644 plugins/analysis-icu/licenses/lucene-analysis-icu-9.12.0-snapshot-c896995.jar.sha1 create mode 100644 plugins/analysis-kuromoji/licenses/lucene-analysis-kuromoji-9.12.0-snapshot-847316d.jar.sha1 delete mode 100644 plugins/analysis-kuromoji/licenses/lucene-analysis-kuromoji-9.12.0-snapshot-c896995.jar.sha1 create mode 100644 plugins/analysis-nori/licenses/lucene-analysis-nori-9.12.0-snapshot-847316d.jar.sha1 delete mode 100644 plugins/analysis-nori/licenses/lucene-analysis-nori-9.12.0-snapshot-c896995.jar.sha1 create mode 100644 plugins/analysis-phonetic/licenses/lucene-analysis-phonetic-9.12.0-snapshot-847316d.jar.sha1 delete mode 100644 plugins/analysis-phonetic/licenses/lucene-analysis-phonetic-9.12.0-snapshot-c896995.jar.sha1 create mode 100644 plugins/analysis-smartcn/licenses/lucene-analysis-smartcn-9.12.0-snapshot-847316d.jar.sha1 delete mode 100644 plugins/analysis-smartcn/licenses/lucene-analysis-smartcn-9.12.0-snapshot-c896995.jar.sha1 create mode 100644 plugins/analysis-stempel/licenses/lucene-analysis-stempel-9.12.0-snapshot-847316d.jar.sha1 delete mode 100644 
plugins/analysis-stempel/licenses/lucene-analysis-stempel-9.12.0-snapshot-c896995.jar.sha1 create mode 100644 plugins/analysis-ukrainian/licenses/lucene-analysis-morfologik-9.12.0-snapshot-847316d.jar.sha1 delete mode 100644 plugins/analysis-ukrainian/licenses/lucene-analysis-morfologik-9.12.0-snapshot-c896995.jar.sha1 create mode 100644 server/licenses/lucene-analysis-common-9.12.0-snapshot-847316d.jar.sha1 delete mode 100644 server/licenses/lucene-analysis-common-9.12.0-snapshot-c896995.jar.sha1 create mode 100644 server/licenses/lucene-backward-codecs-9.12.0-snapshot-847316d.jar.sha1 delete mode 100644 server/licenses/lucene-backward-codecs-9.12.0-snapshot-c896995.jar.sha1 create mode 100644 server/licenses/lucene-core-9.12.0-snapshot-847316d.jar.sha1 delete mode 100644 server/licenses/lucene-core-9.12.0-snapshot-c896995.jar.sha1 create mode 100644 server/licenses/lucene-grouping-9.12.0-snapshot-847316d.jar.sha1 delete mode 100644 server/licenses/lucene-grouping-9.12.0-snapshot-c896995.jar.sha1 create mode 100644 server/licenses/lucene-highlighter-9.12.0-snapshot-847316d.jar.sha1 delete mode 100644 server/licenses/lucene-highlighter-9.12.0-snapshot-c896995.jar.sha1 create mode 100644 server/licenses/lucene-join-9.12.0-snapshot-847316d.jar.sha1 delete mode 100644 server/licenses/lucene-join-9.12.0-snapshot-c896995.jar.sha1 create mode 100644 server/licenses/lucene-memory-9.12.0-snapshot-847316d.jar.sha1 delete mode 100644 server/licenses/lucene-memory-9.12.0-snapshot-c896995.jar.sha1 create mode 100644 server/licenses/lucene-misc-9.12.0-snapshot-847316d.jar.sha1 delete mode 100644 server/licenses/lucene-misc-9.12.0-snapshot-c896995.jar.sha1 create mode 100644 server/licenses/lucene-queries-9.12.0-snapshot-847316d.jar.sha1 delete mode 100644 server/licenses/lucene-queries-9.12.0-snapshot-c896995.jar.sha1 create mode 100644 server/licenses/lucene-queryparser-9.12.0-snapshot-847316d.jar.sha1 delete mode 100644 server/licenses/lucene-queryparser-9.12.0-snapshot-c896995.jar.sha1 create mode 100644 server/licenses/lucene-sandbox-9.12.0-snapshot-847316d.jar.sha1 delete mode 100644 server/licenses/lucene-sandbox-9.12.0-snapshot-c896995.jar.sha1 create mode 100644 server/licenses/lucene-spatial-extras-9.12.0-snapshot-847316d.jar.sha1 delete mode 100644 server/licenses/lucene-spatial-extras-9.12.0-snapshot-c896995.jar.sha1 create mode 100644 server/licenses/lucene-spatial3d-9.12.0-snapshot-847316d.jar.sha1 delete mode 100644 server/licenses/lucene-spatial3d-9.12.0-snapshot-c896995.jar.sha1 create mode 100644 server/licenses/lucene-suggest-9.12.0-snapshot-847316d.jar.sha1 delete mode 100644 server/licenses/lucene-suggest-9.12.0-snapshot-c896995.jar.sha1 diff --git a/buildSrc/version.properties b/buildSrc/version.properties index e9aa32ea9a4f5..a99bd4801b7f3 100644 --- a/buildSrc/version.properties +++ b/buildSrc/version.properties @@ -1,5 +1,5 @@ opensearch = 3.0.0 -lucene = 9.12.0-snapshot-c896995 +lucene = 9.12.0-snapshot-847316d bundled_jdk_vendor = adoptium bundled_jdk = 21.0.3+9 diff --git a/libs/core/licenses/lucene-core-9.12.0-snapshot-847316d.jar.sha1 b/libs/core/licenses/lucene-core-9.12.0-snapshot-847316d.jar.sha1 new file mode 100644 index 0000000000000..e3fd1708ea428 --- /dev/null +++ b/libs/core/licenses/lucene-core-9.12.0-snapshot-847316d.jar.sha1 @@ -0,0 +1 @@ +51ff4940eb1024184bbaa5dae39695d2392c5bab \ No newline at end of file diff --git a/libs/core/licenses/lucene-core-9.12.0-snapshot-c896995.jar.sha1 b/libs/core/licenses/lucene-core-9.12.0-snapshot-c896995.jar.sha1 deleted file 
mode 100644 index 299283562fddc..0000000000000 --- a/libs/core/licenses/lucene-core-9.12.0-snapshot-c896995.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -826b328c37ea7f27c05d685db03bf8d2b00457ff \ No newline at end of file diff --git a/libs/core/src/main/java/org/opensearch/Version.java b/libs/core/src/main/java/org/opensearch/Version.java index da43894863432..b647a92d6708a 100644 --- a/libs/core/src/main/java/org/opensearch/Version.java +++ b/libs/core/src/main/java/org/opensearch/Version.java @@ -106,7 +106,7 @@ public class Version implements Comparable, ToXContentFragment { public static final Version V_2_14_1 = new Version(2140199, org.apache.lucene.util.Version.LUCENE_9_10_0); public static final Version V_2_15_0 = new Version(2150099, org.apache.lucene.util.Version.LUCENE_9_10_0); public static final Version V_2_15_1 = new Version(2150199, org.apache.lucene.util.Version.LUCENE_9_10_0); - public static final Version V_2_16_0 = new Version(2160099, org.apache.lucene.util.Version.LUCENE_9_11_0); + public static final Version V_2_16_0 = new Version(2160099, org.apache.lucene.util.Version.LUCENE_9_11_1); public static final Version V_3_0_0 = new Version(3000099, org.apache.lucene.util.Version.LUCENE_9_12_0); public static final Version CURRENT = V_3_0_0; diff --git a/modules/lang-expression/licenses/lucene-expressions-9.12.0-snapshot-847316d.jar.sha1 b/modules/lang-expression/licenses/lucene-expressions-9.12.0-snapshot-847316d.jar.sha1 new file mode 100644 index 0000000000000..83dd8e657bdd5 --- /dev/null +++ b/modules/lang-expression/licenses/lucene-expressions-9.12.0-snapshot-847316d.jar.sha1 @@ -0,0 +1 @@ +b866103bbaca4141c152deca9252bd137026dafc \ No newline at end of file diff --git a/modules/lang-expression/licenses/lucene-expressions-9.12.0-snapshot-c896995.jar.sha1 b/modules/lang-expression/licenses/lucene-expressions-9.12.0-snapshot-c896995.jar.sha1 deleted file mode 100644 index 6d8d3be59f945..0000000000000 --- a/modules/lang-expression/licenses/lucene-expressions-9.12.0-snapshot-c896995.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -9f0321cf2d34fca3f1f9334fdfee2b79d9d27444 \ No newline at end of file diff --git a/plugins/analysis-icu/licenses/lucene-analysis-icu-9.12.0-snapshot-847316d.jar.sha1 b/plugins/analysis-icu/licenses/lucene-analysis-icu-9.12.0-snapshot-847316d.jar.sha1 new file mode 100644 index 0000000000000..80e254ed3d098 --- /dev/null +++ b/plugins/analysis-icu/licenses/lucene-analysis-icu-9.12.0-snapshot-847316d.jar.sha1 @@ -0,0 +1 @@ +04436942995a4952ce5654126dfb767d6335674e \ No newline at end of file diff --git a/plugins/analysis-icu/licenses/lucene-analysis-icu-9.12.0-snapshot-c896995.jar.sha1 b/plugins/analysis-icu/licenses/lucene-analysis-icu-9.12.0-snapshot-c896995.jar.sha1 deleted file mode 100644 index 696803bf63b46..0000000000000 --- a/plugins/analysis-icu/licenses/lucene-analysis-icu-9.12.0-snapshot-c896995.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -e6314f36fb29e208d58c0470f14269c9c36996ba \ No newline at end of file diff --git a/plugins/analysis-kuromoji/licenses/lucene-analysis-kuromoji-9.12.0-snapshot-847316d.jar.sha1 b/plugins/analysis-kuromoji/licenses/lucene-analysis-kuromoji-9.12.0-snapshot-847316d.jar.sha1 new file mode 100644 index 0000000000000..3baed2a6e660b --- /dev/null +++ b/plugins/analysis-kuromoji/licenses/lucene-analysis-kuromoji-9.12.0-snapshot-847316d.jar.sha1 @@ -0,0 +1 @@ +85918e24fc3bf63fcd953807ab2eb3fa55c987c2 \ No newline at end of file diff --git a/plugins/analysis-kuromoji/licenses/lucene-analysis-kuromoji-9.12.0-snapshot-c896995.jar.sha1 
b/plugins/analysis-kuromoji/licenses/lucene-analysis-kuromoji-9.12.0-snapshot-c896995.jar.sha1 deleted file mode 100644 index 7a12077d7fc62..0000000000000 --- a/plugins/analysis-kuromoji/licenses/lucene-analysis-kuromoji-9.12.0-snapshot-c896995.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -77fbf1e37af79715f28f66d8cc5b50af2982fc54 \ No newline at end of file diff --git a/plugins/analysis-nori/licenses/lucene-analysis-nori-9.12.0-snapshot-847316d.jar.sha1 b/plugins/analysis-nori/licenses/lucene-analysis-nori-9.12.0-snapshot-847316d.jar.sha1 new file mode 100644 index 0000000000000..4e9327112d412 --- /dev/null +++ b/plugins/analysis-nori/licenses/lucene-analysis-nori-9.12.0-snapshot-847316d.jar.sha1 @@ -0,0 +1 @@ +15e425e9cc0ab9d65fac3c919199a24dfa3631eb \ No newline at end of file diff --git a/plugins/analysis-nori/licenses/lucene-analysis-nori-9.12.0-snapshot-c896995.jar.sha1 b/plugins/analysis-nori/licenses/lucene-analysis-nori-9.12.0-snapshot-c896995.jar.sha1 deleted file mode 100644 index efed62c7e5e5b..0000000000000 --- a/plugins/analysis-nori/licenses/lucene-analysis-nori-9.12.0-snapshot-c896995.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -a7a4e9c6004c72782e1002e1dcfaf4fbab7887d8 \ No newline at end of file diff --git a/plugins/analysis-phonetic/licenses/lucene-analysis-phonetic-9.12.0-snapshot-847316d.jar.sha1 b/plugins/analysis-phonetic/licenses/lucene-analysis-phonetic-9.12.0-snapshot-847316d.jar.sha1 new file mode 100644 index 0000000000000..7e7e9fe5b22b4 --- /dev/null +++ b/plugins/analysis-phonetic/licenses/lucene-analysis-phonetic-9.12.0-snapshot-847316d.jar.sha1 @@ -0,0 +1 @@ +3d16c18348e7d4a00cb83100c43f3e21239d224e \ No newline at end of file diff --git a/plugins/analysis-phonetic/licenses/lucene-analysis-phonetic-9.12.0-snapshot-c896995.jar.sha1 b/plugins/analysis-phonetic/licenses/lucene-analysis-phonetic-9.12.0-snapshot-c896995.jar.sha1 deleted file mode 100644 index f2020abcb8ef7..0000000000000 --- a/plugins/analysis-phonetic/licenses/lucene-analysis-phonetic-9.12.0-snapshot-c896995.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -42ac148a3769d6eb880d7f184d1917bad48ca303 \ No newline at end of file diff --git a/plugins/analysis-smartcn/licenses/lucene-analysis-smartcn-9.12.0-snapshot-847316d.jar.sha1 b/plugins/analysis-smartcn/licenses/lucene-analysis-smartcn-9.12.0-snapshot-847316d.jar.sha1 new file mode 100644 index 0000000000000..98e0ecc9cbb89 --- /dev/null +++ b/plugins/analysis-smartcn/licenses/lucene-analysis-smartcn-9.12.0-snapshot-847316d.jar.sha1 @@ -0,0 +1 @@ +2ef6d9dffc6816d3cd04a54fe1ee43e13f850a37 \ No newline at end of file diff --git a/plugins/analysis-smartcn/licenses/lucene-analysis-smartcn-9.12.0-snapshot-c896995.jar.sha1 b/plugins/analysis-smartcn/licenses/lucene-analysis-smartcn-9.12.0-snapshot-c896995.jar.sha1 deleted file mode 100644 index b64e4061311e5..0000000000000 --- a/plugins/analysis-smartcn/licenses/lucene-analysis-smartcn-9.12.0-snapshot-c896995.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -adf2a25339ac8722647f8196288c1f5056bbf0de \ No newline at end of file diff --git a/plugins/analysis-stempel/licenses/lucene-analysis-stempel-9.12.0-snapshot-847316d.jar.sha1 b/plugins/analysis-stempel/licenses/lucene-analysis-stempel-9.12.0-snapshot-847316d.jar.sha1 new file mode 100644 index 0000000000000..ef675f2b9702e --- /dev/null +++ b/plugins/analysis-stempel/licenses/lucene-analysis-stempel-9.12.0-snapshot-847316d.jar.sha1 @@ -0,0 +1 @@ +e72b2262f5393d9ff255fb901297d4e6790e9102 \ No newline at end of file diff --git 
a/plugins/analysis-stempel/licenses/lucene-analysis-stempel-9.12.0-snapshot-c896995.jar.sha1 b/plugins/analysis-stempel/licenses/lucene-analysis-stempel-9.12.0-snapshot-c896995.jar.sha1 deleted file mode 100644 index f56e7fc5df766..0000000000000 --- a/plugins/analysis-stempel/licenses/lucene-analysis-stempel-9.12.0-snapshot-c896995.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -a689e3af2015b21b7b4f41a1206b50c44519b6f7 \ No newline at end of file diff --git a/plugins/analysis-ukrainian/licenses/lucene-analysis-morfologik-9.12.0-snapshot-847316d.jar.sha1 b/plugins/analysis-ukrainian/licenses/lucene-analysis-morfologik-9.12.0-snapshot-847316d.jar.sha1 new file mode 100644 index 0000000000000..d8bbac27fd360 --- /dev/null +++ b/plugins/analysis-ukrainian/licenses/lucene-analysis-morfologik-9.12.0-snapshot-847316d.jar.sha1 @@ -0,0 +1 @@ +416ac44b2e76592c9e85338798cae93c3cf5475e \ No newline at end of file diff --git a/plugins/analysis-ukrainian/licenses/lucene-analysis-morfologik-9.12.0-snapshot-c896995.jar.sha1 b/plugins/analysis-ukrainian/licenses/lucene-analysis-morfologik-9.12.0-snapshot-c896995.jar.sha1 deleted file mode 100644 index 30732e3c4a688..0000000000000 --- a/plugins/analysis-ukrainian/licenses/lucene-analysis-morfologik-9.12.0-snapshot-c896995.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -c875f7706ee81b1fb0b3443767a8c9c52f30abc5 \ No newline at end of file diff --git a/server/licenses/lucene-analysis-common-9.12.0-snapshot-847316d.jar.sha1 b/server/licenses/lucene-analysis-common-9.12.0-snapshot-847316d.jar.sha1 new file mode 100644 index 0000000000000..f1249066d10f2 --- /dev/null +++ b/server/licenses/lucene-analysis-common-9.12.0-snapshot-847316d.jar.sha1 @@ -0,0 +1 @@ +7e282aab7388efc911348f1eacd90e661580dda7 \ No newline at end of file diff --git a/server/licenses/lucene-analysis-common-9.12.0-snapshot-c896995.jar.sha1 b/server/licenses/lucene-analysis-common-9.12.0-snapshot-c896995.jar.sha1 deleted file mode 100644 index 4b545e061c52f..0000000000000 --- a/server/licenses/lucene-analysis-common-9.12.0-snapshot-c896995.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -73696492c6e59972974cd91e03ad9464e6b5bfcd \ No newline at end of file diff --git a/server/licenses/lucene-backward-codecs-9.12.0-snapshot-847316d.jar.sha1 b/server/licenses/lucene-backward-codecs-9.12.0-snapshot-847316d.jar.sha1 new file mode 100644 index 0000000000000..ac50c5e110a72 --- /dev/null +++ b/server/licenses/lucene-backward-codecs-9.12.0-snapshot-847316d.jar.sha1 @@ -0,0 +1 @@ +69e59ba4bed4c58836d2727d72b7f0095d2dcb92 \ No newline at end of file diff --git a/server/licenses/lucene-backward-codecs-9.12.0-snapshot-c896995.jar.sha1 b/server/licenses/lucene-backward-codecs-9.12.0-snapshot-c896995.jar.sha1 deleted file mode 100644 index ae4ffb2b1800b..0000000000000 --- a/server/licenses/lucene-backward-codecs-9.12.0-snapshot-c896995.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -3cbb29ecc873e8c880a6f32e739655551708dbcf \ No newline at end of file diff --git a/server/licenses/lucene-core-9.12.0-snapshot-847316d.jar.sha1 b/server/licenses/lucene-core-9.12.0-snapshot-847316d.jar.sha1 new file mode 100644 index 0000000000000..e3fd1708ea428 --- /dev/null +++ b/server/licenses/lucene-core-9.12.0-snapshot-847316d.jar.sha1 @@ -0,0 +1 @@ +51ff4940eb1024184bbaa5dae39695d2392c5bab \ No newline at end of file diff --git a/server/licenses/lucene-core-9.12.0-snapshot-c896995.jar.sha1 b/server/licenses/lucene-core-9.12.0-snapshot-c896995.jar.sha1 deleted file mode 100644 index 299283562fddc..0000000000000 --- 
a/server/licenses/lucene-core-9.12.0-snapshot-c896995.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -826b328c37ea7f27c05d685db03bf8d2b00457ff \ No newline at end of file diff --git a/server/licenses/lucene-grouping-9.12.0-snapshot-847316d.jar.sha1 b/server/licenses/lucene-grouping-9.12.0-snapshot-847316d.jar.sha1 new file mode 100644 index 0000000000000..cc5bf5bfd8ec0 --- /dev/null +++ b/server/licenses/lucene-grouping-9.12.0-snapshot-847316d.jar.sha1 @@ -0,0 +1 @@ +5847a7d47f13ecb7f039fb9adf6f3b8e4bddde77 \ No newline at end of file diff --git a/server/licenses/lucene-grouping-9.12.0-snapshot-c896995.jar.sha1 b/server/licenses/lucene-grouping-9.12.0-snapshot-c896995.jar.sha1 deleted file mode 100644 index b0268c98167d3..0000000000000 --- a/server/licenses/lucene-grouping-9.12.0-snapshot-c896995.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -a3a7003dc83197523e830f058a3748dbea96cab7 \ No newline at end of file diff --git a/server/licenses/lucene-highlighter-9.12.0-snapshot-847316d.jar.sha1 b/server/licenses/lucene-highlighter-9.12.0-snapshot-847316d.jar.sha1 new file mode 100644 index 0000000000000..eb14059d2cd8c --- /dev/null +++ b/server/licenses/lucene-highlighter-9.12.0-snapshot-847316d.jar.sha1 @@ -0,0 +1 @@ +7cc0a26777a479f06fbcfae7abc23e784e1a00dc \ No newline at end of file diff --git a/server/licenses/lucene-highlighter-9.12.0-snapshot-c896995.jar.sha1 b/server/licenses/lucene-highlighter-9.12.0-snapshot-c896995.jar.sha1 deleted file mode 100644 index d87927364b5a8..0000000000000 --- a/server/licenses/lucene-highlighter-9.12.0-snapshot-c896995.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -00eb386915c3cffa9efcef2dc4c406f8a6776afe \ No newline at end of file diff --git a/server/licenses/lucene-join-9.12.0-snapshot-847316d.jar.sha1 b/server/licenses/lucene-join-9.12.0-snapshot-847316d.jar.sha1 new file mode 100644 index 0000000000000..b87170c39c78c --- /dev/null +++ b/server/licenses/lucene-join-9.12.0-snapshot-847316d.jar.sha1 @@ -0,0 +1 @@ +9cd99401c826d910da3c2beab8e42f1af8be6ea4 \ No newline at end of file diff --git a/server/licenses/lucene-join-9.12.0-snapshot-c896995.jar.sha1 b/server/licenses/lucene-join-9.12.0-snapshot-c896995.jar.sha1 deleted file mode 100644 index 25a95546ab544..0000000000000 --- a/server/licenses/lucene-join-9.12.0-snapshot-c896995.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -bb1fc572da7d473bf39672fd8ac323b15a1ffff0 \ No newline at end of file diff --git a/server/licenses/lucene-memory-9.12.0-snapshot-847316d.jar.sha1 b/server/licenses/lucene-memory-9.12.0-snapshot-847316d.jar.sha1 new file mode 100644 index 0000000000000..de591dd659cb5 --- /dev/null +++ b/server/licenses/lucene-memory-9.12.0-snapshot-847316d.jar.sha1 @@ -0,0 +1 @@ +cfee136ecbc3df7adc38b38e020dca5e61c22773 \ No newline at end of file diff --git a/server/licenses/lucene-memory-9.12.0-snapshot-c896995.jar.sha1 b/server/licenses/lucene-memory-9.12.0-snapshot-c896995.jar.sha1 deleted file mode 100644 index a0b3fd812561c..0000000000000 --- a/server/licenses/lucene-memory-9.12.0-snapshot-c896995.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -05ebfcef0435f4870859a19c93020e24398bb939 \ No newline at end of file diff --git a/server/licenses/lucene-misc-9.12.0-snapshot-847316d.jar.sha1 b/server/licenses/lucene-misc-9.12.0-snapshot-847316d.jar.sha1 new file mode 100644 index 0000000000000..1a999bb9c6686 --- /dev/null +++ b/server/licenses/lucene-misc-9.12.0-snapshot-847316d.jar.sha1 @@ -0,0 +1 @@ +afbc5adf93d4eb1a1b109ad828d1968bf16ef292 \ No newline at end of file diff --git a/server/licenses/lucene-misc-9.12.0-snapshot-c896995.jar.sha1 
b/server/licenses/lucene-misc-9.12.0-snapshot-c896995.jar.sha1 deleted file mode 100644 index 1e2cc97c37257..0000000000000 --- a/server/licenses/lucene-misc-9.12.0-snapshot-c896995.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -d5747ed1be242b59aa36b0c32b0d3bd26b1d8fb8 \ No newline at end of file diff --git a/server/licenses/lucene-queries-9.12.0-snapshot-847316d.jar.sha1 b/server/licenses/lucene-queries-9.12.0-snapshot-847316d.jar.sha1 new file mode 100644 index 0000000000000..783a26551ae8c --- /dev/null +++ b/server/licenses/lucene-queries-9.12.0-snapshot-847316d.jar.sha1 @@ -0,0 +1 @@ +16907c36f6adb8ba8f260e05738c66afb37c72d3 \ No newline at end of file diff --git a/server/licenses/lucene-queries-9.12.0-snapshot-c896995.jar.sha1 b/server/licenses/lucene-queries-9.12.0-snapshot-c896995.jar.sha1 deleted file mode 100644 index 31d4fe2886fc1..0000000000000 --- a/server/licenses/lucene-queries-9.12.0-snapshot-c896995.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -fb6678d7fe035e55c545450682b67be49457ef1b \ No newline at end of file diff --git a/server/licenses/lucene-queryparser-9.12.0-snapshot-847316d.jar.sha1 b/server/licenses/lucene-queryparser-9.12.0-snapshot-847316d.jar.sha1 new file mode 100644 index 0000000000000..b3e9e4de96174 --- /dev/null +++ b/server/licenses/lucene-queryparser-9.12.0-snapshot-847316d.jar.sha1 @@ -0,0 +1 @@ +72baa9bddcf2efb71ffb695f1e9f548699ec13a0 \ No newline at end of file diff --git a/server/licenses/lucene-queryparser-9.12.0-snapshot-c896995.jar.sha1 b/server/licenses/lucene-queryparser-9.12.0-snapshot-c896995.jar.sha1 deleted file mode 100644 index 754e4ea20765f..0000000000000 --- a/server/licenses/lucene-queryparser-9.12.0-snapshot-c896995.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -a11d7f56a9e78dc8e61f85b9b54ad94d73583bb3 \ No newline at end of file diff --git a/server/licenses/lucene-sandbox-9.12.0-snapshot-847316d.jar.sha1 b/server/licenses/lucene-sandbox-9.12.0-snapshot-847316d.jar.sha1 new file mode 100644 index 0000000000000..2aefa435b1e9a --- /dev/null +++ b/server/licenses/lucene-sandbox-9.12.0-snapshot-847316d.jar.sha1 @@ -0,0 +1 @@ +dd3c63066f583d90b563ebaa6fbe61c603403acb \ No newline at end of file diff --git a/server/licenses/lucene-sandbox-9.12.0-snapshot-c896995.jar.sha1 b/server/licenses/lucene-sandbox-9.12.0-snapshot-c896995.jar.sha1 deleted file mode 100644 index 08c2bc48ae85b..0000000000000 --- a/server/licenses/lucene-sandbox-9.12.0-snapshot-c896995.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -75352855bcc052abfba821f878a27fd2b328fb1c \ No newline at end of file diff --git a/server/licenses/lucene-spatial-extras-9.12.0-snapshot-847316d.jar.sha1 b/server/licenses/lucene-spatial-extras-9.12.0-snapshot-847316d.jar.sha1 new file mode 100644 index 0000000000000..d27112c6db6ab --- /dev/null +++ b/server/licenses/lucene-spatial-extras-9.12.0-snapshot-847316d.jar.sha1 @@ -0,0 +1 @@ +69b99530e0b05251c12863bee6a9325cafd5fdaa \ No newline at end of file diff --git a/server/licenses/lucene-spatial-extras-9.12.0-snapshot-c896995.jar.sha1 b/server/licenses/lucene-spatial-extras-9.12.0-snapshot-c896995.jar.sha1 deleted file mode 100644 index 5e0b7196f48c2..0000000000000 --- a/server/licenses/lucene-spatial-extras-9.12.0-snapshot-c896995.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -299be103216d67ca092bef177642b275224e77a6 \ No newline at end of file diff --git a/server/licenses/lucene-spatial3d-9.12.0-snapshot-847316d.jar.sha1 b/server/licenses/lucene-spatial3d-9.12.0-snapshot-847316d.jar.sha1 new file mode 100644 index 0000000000000..29423ac0ababd --- /dev/null +++ 
b/server/licenses/lucene-spatial3d-9.12.0-snapshot-847316d.jar.sha1 @@ -0,0 +1 @@ +a67d193b4b08790169db7cf005a2429991260287 \ No newline at end of file diff --git a/server/licenses/lucene-spatial3d-9.12.0-snapshot-c896995.jar.sha1 b/server/licenses/lucene-spatial3d-9.12.0-snapshot-c896995.jar.sha1 deleted file mode 100644 index c79b34adea5e2..0000000000000 --- a/server/licenses/lucene-spatial3d-9.12.0-snapshot-c896995.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -29b4a76cd0bdabe0e067063831e661dedac6e503 \ No newline at end of file diff --git a/server/licenses/lucene-suggest-9.12.0-snapshot-847316d.jar.sha1 b/server/licenses/lucene-suggest-9.12.0-snapshot-847316d.jar.sha1 new file mode 100644 index 0000000000000..6ce1f639ccbb7 --- /dev/null +++ b/server/licenses/lucene-suggest-9.12.0-snapshot-847316d.jar.sha1 @@ -0,0 +1 @@ +7a1625ae39071ccbfb3af11df5a74291758f4b47 \ No newline at end of file diff --git a/server/licenses/lucene-suggest-9.12.0-snapshot-c896995.jar.sha1 b/server/licenses/lucene-suggest-9.12.0-snapshot-c896995.jar.sha1 deleted file mode 100644 index 8d5334f0c4619..0000000000000 --- a/server/licenses/lucene-suggest-9.12.0-snapshot-c896995.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -597edb659e9ea93398a816e6837da7d47ef53873 \ No newline at end of file From a34270d31df6e6ebd85f7e3b52513b80494f6bcd Mon Sep 17 00:00:00 2001 From: Shivansh Arora Date: Fri, 28 Jun 2024 14:46:34 +0530 Subject: [PATCH 020/167] Add unittests for RemoteClusterStateAttributesManager (#14427) * Add unittests for RemoteClusterStateAttributesManager Signed-off-by: Shivansh Arora --- .../model/RemoteClusterStateCustoms.java | 2 +- ...oteClusterStateAttributesManagerTests.java | 276 +++++++++++++++--- 2 files changed, 231 insertions(+), 47 deletions(-) diff --git a/server/src/main/java/org/opensearch/gateway/remote/model/RemoteClusterStateCustoms.java b/server/src/main/java/org/opensearch/gateway/remote/model/RemoteClusterStateCustoms.java index f384908bc6b65..affbc7ba66cb8 100644 --- a/server/src/main/java/org/opensearch/gateway/remote/model/RemoteClusterStateCustoms.java +++ b/server/src/main/java/org/opensearch/gateway/remote/model/RemoteClusterStateCustoms.java @@ -34,12 +34,12 @@ */ public class RemoteClusterStateCustoms extends AbstractRemoteWritableBlobEntity { public static final String CLUSTER_STATE_CUSTOM = "cluster-state-custom"; + public final ChecksumWritableBlobStoreFormat clusterStateCustomsFormat; private long stateVersion; private final String customType; private ClusterState.Custom custom; private final NamedWriteableRegistry namedWriteableRegistry; - private final ChecksumWritableBlobStoreFormat clusterStateCustomsFormat; public RemoteClusterStateCustoms( final ClusterState.Custom custom, diff --git a/server/src/test/java/org/opensearch/gateway/remote/RemoteClusterStateAttributesManagerTests.java b/server/src/test/java/org/opensearch/gateway/remote/RemoteClusterStateAttributesManagerTests.java index 41e1546ead164..fe9ed57fa77b8 100644 --- a/server/src/test/java/org/opensearch/gateway/remote/RemoteClusterStateAttributesManagerTests.java +++ b/server/src/test/java/org/opensearch/gateway/remote/RemoteClusterStateAttributesManagerTests.java @@ -17,9 +17,10 @@ import org.opensearch.cluster.DiffableUtils; import org.opensearch.cluster.block.ClusterBlocks; import org.opensearch.cluster.node.DiscoveryNodes; -import org.opensearch.common.CheckedRunnable; +import org.opensearch.common.blobstore.BlobPath; import org.opensearch.common.settings.ClusterSettings; import org.opensearch.common.settings.Settings; +import 
org.opensearch.common.util.TestCapturingListener; import org.opensearch.core.action.ActionListener; import org.opensearch.core.common.io.stream.NamedWriteableRegistry; import org.opensearch.core.common.io.stream.StreamInput; @@ -28,6 +29,7 @@ import org.opensearch.core.compress.NoneCompressor; import org.opensearch.core.xcontent.XContentBuilder; import org.opensearch.gateway.remote.model.RemoteClusterBlocks; +import org.opensearch.gateway.remote.model.RemoteClusterStateCustoms; import org.opensearch.gateway.remote.model.RemoteDiscoveryNodes; import org.opensearch.gateway.remote.model.RemoteReadResult; import org.opensearch.index.translog.transfer.BlobStoreTransferService; @@ -36,46 +38,63 @@ import org.opensearch.threadpool.TestThreadPool; import org.opensearch.threadpool.ThreadPool; import org.junit.After; -import org.junit.Assert; import org.junit.Before; import java.io.IOException; +import java.io.InputStream; import java.util.Collections; import java.util.List; import java.util.Map; import java.util.concurrent.CountDownLatch; -import java.util.concurrent.atomic.AtomicReference; -import static java.util.Collections.emptyList; +import static org.opensearch.common.blobstore.stream.write.WritePriority.URGENT; +import static org.opensearch.gateway.remote.RemoteClusterStateAttributesManager.CLUSTER_STATE_ATTRIBUTE; +import static org.opensearch.gateway.remote.RemoteClusterStateAttributesManager.CLUSTER_STATE_ATTRIBUTES_CURRENT_CODEC_VERSION; import static org.opensearch.gateway.remote.RemoteClusterStateAttributesManager.DISCOVERY_NODES; +import static org.opensearch.gateway.remote.RemoteClusterStateUtils.CLUSTER_STATE_EPHEMERAL_PATH_TOKEN; +import static org.opensearch.gateway.remote.RemoteClusterStateUtils.CLUSTER_STATE_PATH_TOKEN; +import static org.opensearch.gateway.remote.RemoteClusterStateUtils.CUSTOM_DELIMITER; +import static org.opensearch.gateway.remote.RemoteClusterStateUtils.DELIMITER; +import static org.opensearch.gateway.remote.RemoteClusterStateUtils.PATH_DELIMITER; +import static org.opensearch.gateway.remote.RemoteClusterStateUtils.encodeString; import static org.opensearch.gateway.remote.model.RemoteClusterBlocks.CLUSTER_BLOCKS; import static org.opensearch.gateway.remote.model.RemoteClusterBlocks.CLUSTER_BLOCKS_FORMAT; import static org.opensearch.gateway.remote.model.RemoteClusterBlocksTests.randomClusterBlocks; +import static org.opensearch.gateway.remote.model.RemoteClusterStateCustoms.CLUSTER_STATE_CUSTOM; +import static org.opensearch.gateway.remote.model.RemoteClusterStateCustomsTests.getClusterStateCustom; import static org.opensearch.gateway.remote.model.RemoteDiscoveryNodes.DISCOVERY_NODES_FORMAT; import static org.opensearch.gateway.remote.model.RemoteDiscoveryNodesTests.getDiscoveryNodes; +import static org.opensearch.index.remote.RemoteStoreUtils.invertLong; import static org.hamcrest.Matchers.is; +import static org.mockito.ArgumentMatchers.any; import static org.mockito.ArgumentMatchers.anyIterable; import static org.mockito.ArgumentMatchers.anyString; +import static org.mockito.ArgumentMatchers.eq; +import static org.mockito.Mockito.doAnswer; import static org.mockito.Mockito.mock; import static org.mockito.Mockito.when; public class RemoteClusterStateAttributesManagerTests extends OpenSearchTestCase { private RemoteClusterStateAttributesManager remoteClusterStateAttributesManager; private BlobStoreTransferService blobStoreTransferService; - private BlobStoreRepository blobStoreRepository; private Compressor compressor; - private ThreadPool threadPool = new 
TestThreadPool(RemoteClusterStateAttributesManagerTests.class.getName()); + private final ThreadPool threadPool = new TestThreadPool(RemoteClusterStateAttributesManagerTests.class.getName()); + private final long VERSION = 7331L; + private NamedWriteableRegistry namedWriteableRegistry; + private final String CLUSTER_NAME = "test-cluster"; + private final String CLUSTER_UUID = "test-cluster-uuid"; @Before public void setup() throws Exception { ClusterSettings clusterSettings = new ClusterSettings(Settings.EMPTY, ClusterSettings.BUILT_IN_CLUSTER_SETTINGS); - NamedWriteableRegistry namedWriteableRegistry = new NamedWriteableRegistry(emptyList()); - blobStoreRepository = mock(BlobStoreRepository.class); + namedWriteableRegistry = writableRegistry(); + BlobStoreRepository blobStoreRepository = mock(BlobStoreRepository.class); + when(blobStoreRepository.basePath()).thenReturn(new BlobPath()); blobStoreTransferService = mock(BlobStoreTransferService.class); compressor = new NoneCompressor(); remoteClusterStateAttributesManager = new RemoteClusterStateAttributesManager( - "test-cluster", + CLUSTER_NAME, blobStoreRepository, blobStoreTransferService, writableRegistry(), @@ -89,7 +108,40 @@ public void tearDown() throws Exception { threadPool.shutdown(); } - public void testGetAsyncMetadataReadAction_DiscoveryNodes() throws IOException { + public void testGetAsyncMetadataWriteAction_DiscoveryNodes() throws IOException, InterruptedException { + DiscoveryNodes discoveryNodes = getDiscoveryNodes(); + RemoteDiscoveryNodes remoteDiscoveryNodes = new RemoteDiscoveryNodes(discoveryNodes, VERSION, CLUSTER_UUID, compressor); + doAnswer(invocationOnMock -> { + invocationOnMock.getArgument(4, ActionListener.class).onResponse(null); + return null; + }).when(blobStoreTransferService) + .uploadBlob(any(InputStream.class), anyIterable(), anyString(), eq(URGENT), any(ActionListener.class)); + final CountDownLatch latch = new CountDownLatch(1); + final TestCapturingListener listener = new TestCapturingListener<>(); + remoteClusterStateAttributesManager.getAsyncMetadataWriteAction( + DISCOVERY_NODES, + remoteDiscoveryNodes, + new LatchedActionListener<>(listener, latch) + ).run(); + latch.await(); + assertNull(listener.getFailure()); + assertNotNull(listener.getResult()); + assertEquals(DISCOVERY_NODES, listener.getResult().getComponent()); + String uploadedFileName = listener.getResult().getUploadedFilename(); + String[] pathTokens = uploadedFileName.split(PATH_DELIMITER); + assertEquals(5, pathTokens.length); + assertEquals(RemoteClusterStateUtils.encodeString(CLUSTER_NAME), pathTokens[0]); + assertEquals(CLUSTER_STATE_PATH_TOKEN, pathTokens[1]); + assertEquals(CLUSTER_UUID, pathTokens[2]); + assertEquals(CLUSTER_STATE_EPHEMERAL_PATH_TOKEN, pathTokens[3]); + String[] splitFileName = pathTokens[4].split(DELIMITER); + assertEquals(4, splitFileName.length); + assertEquals(DISCOVERY_NODES, splitFileName[0]); + assertEquals(invertLong(VERSION), splitFileName[1]); + assertEquals(CLUSTER_STATE_ATTRIBUTES_CURRENT_CODEC_VERSION, Integer.parseInt(splitFileName[3])); + } + + public void testGetAsyncMetadataReadAction_DiscoveryNodes() throws IOException, InterruptedException { DiscoveryNodes discoveryNodes = getDiscoveryNodes(); String fileName = randomAlphaOfLength(10); when(blobStoreTransferService.downloadBlob(anyIterable(), anyString())).thenReturn( @@ -97,29 +149,57 @@ public void testGetAsyncMetadataReadAction_DiscoveryNodes() throws IOException { ); RemoteDiscoveryNodes remoteObjForDownload = new 
RemoteDiscoveryNodes(fileName, "cluster-uuid", compressor); CountDownLatch latch = new CountDownLatch(1); - AtomicReference readDiscoveryNodes = new AtomicReference<>(); - LatchedActionListener assertingListener = new LatchedActionListener<>( - ActionListener.wrap(response -> readDiscoveryNodes.set((DiscoveryNodes) response.getObj()), Assert::assertNull), - latch - ); - CheckedRunnable runnable = remoteClusterStateAttributesManager.getAsyncMetadataReadAction( + TestCapturingListener listener = new TestCapturingListener<>(); + remoteClusterStateAttributesManager.getAsyncMetadataReadAction( DISCOVERY_NODES, remoteObjForDownload, - assertingListener - ); + new LatchedActionListener<>(listener, latch) + ).run(); + latch.await(); + assertNull(listener.getFailure()); + assertNotNull(listener.getResult()); + assertEquals(CLUSTER_STATE_ATTRIBUTE, listener.getResult().getComponent()); + assertEquals(DISCOVERY_NODES, listener.getResult().getComponentName()); + DiscoveryNodes readDiscoveryNodes = (DiscoveryNodes) listener.getResult().getObj(); + assertEquals(discoveryNodes.getSize(), readDiscoveryNodes.getSize()); + discoveryNodes.getNodes().forEach((nodeId, node) -> assertEquals(readDiscoveryNodes.get(nodeId), node)); + assertEquals(discoveryNodes.getClusterManagerNodeId(), readDiscoveryNodes.getClusterManagerNodeId()); + } - try { - runnable.run(); - latch.await(); - assertEquals(discoveryNodes.getSize(), readDiscoveryNodes.get().getSize()); - discoveryNodes.getNodes().forEach((nodeId, node) -> assertEquals(readDiscoveryNodes.get().get(nodeId), node)); - assertEquals(discoveryNodes.getClusterManagerNodeId(), readDiscoveryNodes.get().getClusterManagerNodeId()); - } catch (Exception e) { - throw new RuntimeException(e); - } + public void testGetAsyncMetadataWriteAction_ClusterBlocks() throws IOException, InterruptedException { + ClusterBlocks clusterBlocks = randomClusterBlocks(); + RemoteClusterBlocks remoteClusterBlocks = new RemoteClusterBlocks(clusterBlocks, VERSION, CLUSTER_UUID, compressor); + doAnswer(invocationOnMock -> { + invocationOnMock.getArgument(4, ActionListener.class).onResponse(null); + return null; + }).when(blobStoreTransferService) + .uploadBlob(any(InputStream.class), anyIterable(), anyString(), eq(URGENT), any(ActionListener.class)); + final CountDownLatch latch = new CountDownLatch(1); + final TestCapturingListener listener = new TestCapturingListener<>(); + remoteClusterStateAttributesManager.getAsyncMetadataWriteAction( + CLUSTER_BLOCKS, + remoteClusterBlocks, + new LatchedActionListener<>(listener, latch) + ).run(); + latch.await(); + assertNull(listener.getFailure()); + assertNotNull(listener.getResult()); + assertEquals(CLUSTER_BLOCKS, listener.getResult().getComponent()); + String uploadedFileName = listener.getResult().getUploadedFilename(); + String[] pathTokens = uploadedFileName.split(PATH_DELIMITER); + assertEquals(5, pathTokens.length); + assertEquals(encodeString(CLUSTER_NAME), pathTokens[0]); + assertEquals(CLUSTER_STATE_PATH_TOKEN, pathTokens[1]); + assertEquals(CLUSTER_UUID, pathTokens[2]); + assertEquals(CLUSTER_STATE_EPHEMERAL_PATH_TOKEN, pathTokens[3]); + String[] splitFileName = pathTokens[4].split(DELIMITER); + assertEquals(4, splitFileName.length); + assertEquals(CLUSTER_BLOCKS, splitFileName[0]); + assertEquals(invertLong(VERSION), splitFileName[1]); + assertEquals(CLUSTER_STATE_ATTRIBUTES_CURRENT_CODEC_VERSION, Integer.parseInt(splitFileName[3])); } - public void testGetAsyncMetadataReadAction_ClusterBlocks() throws IOException { + public void 
testGetAsyncMetadataReadAction_ClusterBlocks() throws IOException, InterruptedException { ClusterBlocks clusterBlocks = randomClusterBlocks(); String fileName = randomAlphaOfLength(10); when(blobStoreTransferService.downloadBlob(anyIterable(), anyString())).thenReturn( @@ -127,29 +207,133 @@ public void testGetAsyncMetadataReadAction_ClusterBlocks() throws IOException { ); RemoteClusterBlocks remoteClusterBlocks = new RemoteClusterBlocks(fileName, "cluster-uuid", compressor); CountDownLatch latch = new CountDownLatch(1); - AtomicReference readClusterBlocks = new AtomicReference<>(); - LatchedActionListener assertingListener = new LatchedActionListener<>( - ActionListener.wrap(response -> readClusterBlocks.set((ClusterBlocks) response.getObj()), Assert::assertNull), - latch - ); + TestCapturingListener listener = new TestCapturingListener<>(); - CheckedRunnable runnable = remoteClusterStateAttributesManager.getAsyncMetadataReadAction( + remoteClusterStateAttributesManager.getAsyncMetadataReadAction( CLUSTER_BLOCKS, remoteClusterBlocks, - assertingListener + new LatchedActionListener<>(listener, latch) + ).run(); + latch.await(); + assertNull(listener.getFailure()); + assertNotNull(listener.getResult()); + assertEquals(CLUSTER_STATE_ATTRIBUTE, listener.getResult().getComponent()); + assertEquals(CLUSTER_BLOCKS, listener.getResult().getComponentName()); + ClusterBlocks readClusterBlocks = (ClusterBlocks) listener.getResult().getObj(); + assertEquals(clusterBlocks.global(), readClusterBlocks.global()); + assertEquals(clusterBlocks.indices().keySet(), readClusterBlocks.indices().keySet()); + for (String index : clusterBlocks.indices().keySet()) { + assertEquals(clusterBlocks.indices().get(index), readClusterBlocks.indices().get(index)); + } + } + + public void testGetAsyncMetadataWriteAction_Custom() throws IOException, InterruptedException { + Custom custom = getClusterStateCustom(); + RemoteClusterStateCustoms remoteClusterStateCustoms = new RemoteClusterStateCustoms( + custom, + custom.getWriteableName(), + VERSION, + CLUSTER_UUID, + compressor, + namedWriteableRegistry ); + doAnswer(invocationOnMock -> { + invocationOnMock.getArgument(4, ActionListener.class).onResponse(null); + return null; + }).when(blobStoreTransferService) + .uploadBlob(any(InputStream.class), anyIterable(), anyString(), eq(URGENT), any(ActionListener.class)); + final TestCapturingListener listener = new TestCapturingListener<>(); + final CountDownLatch latch = new CountDownLatch(1); + remoteClusterStateAttributesManager.getAsyncMetadataWriteAction( + CLUSTER_STATE_CUSTOM, + remoteClusterStateCustoms, + new LatchedActionListener<>(listener, latch) + ).run(); + latch.await(); + assertNull(listener.getFailure()); + assertNotNull(listener.getResult()); + assertEquals(String.join(CUSTOM_DELIMITER, CLUSTER_STATE_CUSTOM, custom.getWriteableName()), listener.getResult().getComponent()); + String uploadedFileName = listener.getResult().getUploadedFilename(); + String[] pathTokens = uploadedFileName.split(PATH_DELIMITER); + assertEquals(5, pathTokens.length); + assertEquals(encodeString(CLUSTER_NAME), pathTokens[0]); + assertEquals(CLUSTER_STATE_PATH_TOKEN, pathTokens[1]); + assertEquals(CLUSTER_UUID, pathTokens[2]); + assertEquals(CLUSTER_STATE_EPHEMERAL_PATH_TOKEN, pathTokens[3]); + String[] splitFileName = pathTokens[4].split(DELIMITER); + assertEquals(4, splitFileName.length); + assertEquals(String.join(CUSTOM_DELIMITER, CLUSTER_STATE_CUSTOM, custom.getWriteableName()), splitFileName[0]); + assertEquals(invertLong(VERSION), 
splitFileName[1]); + assertEquals(CLUSTER_STATE_ATTRIBUTES_CURRENT_CODEC_VERSION, Integer.parseInt(splitFileName[3])); + } - try { - runnable.run(); - latch.await(); - assertEquals(clusterBlocks.global(), readClusterBlocks.get().global()); - assertEquals(clusterBlocks.indices().keySet(), readClusterBlocks.get().indices().keySet()); - for (String index : clusterBlocks.indices().keySet()) { - assertEquals(clusterBlocks.indices().get(index), readClusterBlocks.get().indices().get(index)); - } - } catch (Exception e) { - throw new RuntimeException(e); - } + public void testGetAsyncMetadataReadAction_Custom() throws IOException, InterruptedException { + Custom custom = getClusterStateCustom(); + String fileName = randomAlphaOfLength(10); + RemoteClusterStateCustoms remoteClusterStateCustoms = new RemoteClusterStateCustoms( + fileName, + custom.getWriteableName(), + CLUSTER_UUID, + compressor, + namedWriteableRegistry + ); + when(blobStoreTransferService.downloadBlob(anyIterable(), anyString())).thenReturn( + remoteClusterStateCustoms.clusterStateCustomsFormat.serialize(custom, fileName, compressor).streamInput() + ); + TestCapturingListener capturingListener = new TestCapturingListener<>(); + final CountDownLatch latch = new CountDownLatch(1); + remoteClusterStateAttributesManager.getAsyncMetadataReadAction( + CLUSTER_STATE_CUSTOM, + remoteClusterStateCustoms, + new LatchedActionListener<>(capturingListener, latch) + ).run(); + latch.await(); + assertNull(capturingListener.getFailure()); + assertNotNull(capturingListener.getResult()); + assertEquals(custom, capturingListener.getResult().getObj()); + assertEquals(CLUSTER_STATE_ATTRIBUTE, capturingListener.getResult().getComponent()); + assertEquals(CLUSTER_STATE_CUSTOM, capturingListener.getResult().getComponentName()); + } + + public void testGetAsyncMetadataWriteAction_Exception() throws IOException, InterruptedException { + DiscoveryNodes discoveryNodes = getDiscoveryNodes(); + RemoteDiscoveryNodes remoteDiscoveryNodes = new RemoteDiscoveryNodes(discoveryNodes, VERSION, CLUSTER_UUID, compressor); + + IOException ioException = new IOException("mock test exception"); + doAnswer(invocationOnMock -> { + invocationOnMock.getArgument(4, ActionListener.class).onFailure(ioException); + return null; + }).when(blobStoreTransferService) + .uploadBlob(any(InputStream.class), anyIterable(), anyString(), eq(URGENT), any(ActionListener.class)); + + TestCapturingListener capturingListener = new TestCapturingListener<>(); + final CountDownLatch latch = new CountDownLatch(1); + remoteClusterStateAttributesManager.getAsyncMetadataWriteAction( + DISCOVERY_NODES, + remoteDiscoveryNodes, + new LatchedActionListener<>(capturingListener, latch) + ).run(); + latch.await(); + assertNull(capturingListener.getResult()); + assertTrue(capturingListener.getFailure() instanceof RemoteStateTransferException); + assertEquals(ioException, capturingListener.getFailure().getCause()); + } + + public void testGetAsyncMetadataReadAction_Exception() throws IOException, InterruptedException { + String fileName = randomAlphaOfLength(10); + RemoteDiscoveryNodes remoteDiscoveryNodes = new RemoteDiscoveryNodes(fileName, CLUSTER_UUID, compressor); + Exception ioException = new IOException("mock test exception"); + when(blobStoreTransferService.downloadBlob(anyIterable(), anyString())).thenThrow(ioException); + CountDownLatch latch = new CountDownLatch(1); + TestCapturingListener capturingListener = new TestCapturingListener<>(); + 
remoteClusterStateAttributesManager.getAsyncMetadataReadAction( + DISCOVERY_NODES, + remoteDiscoveryNodes, + new LatchedActionListener<>(capturingListener, latch) + ).run(); + latch.await(); + assertNull(capturingListener.getResult()); + assertEquals(ioException, capturingListener.getFailure()); } public void testGetUpdatedCustoms() { From 8ad199dac020c91c80180b4f8042f65c9994a81a Mon Sep 17 00:00:00 2001 From: Ashish Singh Date: Fri, 28 Jun 2024 15:30:28 +0530 Subject: [PATCH 021/167] Add Ashish Singh to codeowners (#14592) Signed-off-by: Ashish Singh --- .github/CODEOWNERS | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.github/CODEOWNERS b/.github/CODEOWNERS index 8d69e98220b69..8ceecb3abb4a2 100644 --- a/.github/CODEOWNERS +++ b/.github/CODEOWNERS @@ -11,7 +11,7 @@ # 3. Use the command palette to run the CODEOWNERS: Show owners of current file command, which will display all code owners for the current file. # Default ownership for all repo files -* @anasalkouz @andrross @Bukhtawar @CEHENKLE @dblock @dbwiddis @gbbafna @kotwanikunal @mch2 @msfroh @nknize @owaiskazi19 @reta @Rishikesh1159 @sachinpkale @saratvemulapalli @shwetathareja @sohami @VachaShah +* @anasalkouz @andrross @ashking94 @Bukhtawar @CEHENKLE @dblock @dbwiddis @gbbafna @kotwanikunal @mch2 @msfroh @nknize @owaiskazi19 @reta @Rishikesh1159 @sachinpkale @saratvemulapalli @shwetathareja @sohami @VachaShah /modules/transport-netty4/ @peternied From 8e493f313eba21ce9946f2604c214bb66840d7f3 Mon Sep 17 00:00:00 2001 From: Liyun Xiu Date: Fri, 28 Jun 2024 12:02:49 -0700 Subject: [PATCH 022/167] Add batching processor base type AbstractBatchingProcessor (#14554) Signed-off-by: Liyun Xiu --- CHANGELOG.md | 1 + .../ingest/AbstractBatchingProcessor.java | 136 +++++++++++++++ .../AbstractBatchingProcessorTests.java | 160 ++++++++++++++++++ 3 files changed, 297 insertions(+) create mode 100644 server/src/main/java/org/opensearch/ingest/AbstractBatchingProcessor.java create mode 100644 server/src/test/java/org/opensearch/ingest/AbstractBatchingProcessorTests.java diff --git a/CHANGELOG.md b/CHANGELOG.md index c6b2d815750f9..8835032785430 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -9,6 +9,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), - [Remote Store] Rate limiter for remote store low priority uploads ([#14374](https://github.com/opensearch-project/OpenSearch/pull/14374/)) - Apply the date histogram rewrite optimization to range aggregation ([#13865](https://github.com/opensearch-project/OpenSearch/pull/13865)) - [Writable Warm] Add composite directory implementation and integrate it with FileCache ([12782](https://github.com/opensearch-project/OpenSearch/pull/12782)) +- Add batching supported processor base type AbstractBatchingProcessor ([#14554](https://github.com/opensearch-project/OpenSearch/pull/14554)) - Fix race condition while parsing derived fields from search definition ([14445](https://github.com/opensearch-project/OpenSearch/pull/14445)) - Add allowlist setting for ingest-common and search-pipeline-common processors ([#14439](https://github.com/opensearch-project/OpenSearch/issues/14439)) diff --git a/server/src/main/java/org/opensearch/ingest/AbstractBatchingProcessor.java b/server/src/main/java/org/opensearch/ingest/AbstractBatchingProcessor.java new file mode 100644 index 0000000000000..55413b9bbdad1 --- /dev/null +++ b/server/src/main/java/org/opensearch/ingest/AbstractBatchingProcessor.java @@ -0,0 +1,136 @@ +/* + * 
SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +package org.opensearch.ingest; + +import java.util.ArrayList; +import java.util.Collections; +import java.util.List; +import java.util.Map; +import java.util.concurrent.atomic.AtomicInteger; +import java.util.function.Consumer; + +import static org.opensearch.ingest.ConfigurationUtils.newConfigurationException; + +/** + * Abstract base class for batch processors. + * + * @opensearch.internal + */ +public abstract class AbstractBatchingProcessor extends AbstractProcessor { + + public static final String BATCH_SIZE_FIELD = "batch_size"; + private static final int DEFAULT_BATCH_SIZE = 1; + protected final int batchSize; + + protected AbstractBatchingProcessor(String tag, String description, int batchSize) { + super(tag, description); + this.batchSize = batchSize; + } + + /** + * Internal logic to process batched documents, must be implemented by concrete batch processors. + * + * @param ingestDocumentWrappers {@link List} of {@link IngestDocumentWrapper} to be processed. + * @param handler {@link Consumer} to be called with the results of the processing. + */ + protected abstract void subBatchExecute( + List<IngestDocumentWrapper> ingestDocumentWrappers, + Consumer<List<IngestDocumentWrapper>> handler + ); + + @Override + public void batchExecute(List<IngestDocumentWrapper> ingestDocumentWrappers, Consumer<List<IngestDocumentWrapper>> handler) { + if (ingestDocumentWrappers.isEmpty()) { + handler.accept(Collections.emptyList()); + return; + } + + // if the batch size is at least the number of documents, send everything as one batch + if (this.batchSize >= ingestDocumentWrappers.size()) { + subBatchExecute(ingestDocumentWrappers, handler); + return; + } + + // split documents into multiple batches and send each batch to batch processors + List<List<IngestDocumentWrapper>> batches = cutBatches(ingestDocumentWrappers); + int size = ingestDocumentWrappers.size(); + AtomicInteger counter = new AtomicInteger(size); + List<IngestDocumentWrapper> allResults = Collections.synchronizedList(new ArrayList<>()); + for (List<IngestDocumentWrapper> batch : batches) { + this.subBatchExecute(batch, batchResults -> { + allResults.addAll(batchResults); + if (counter.addAndGet(-batchResults.size()) == 0) { + handler.accept(allResults); + } + assert counter.get() >= 0 : "counter is negative"; + }); + } + } + + private List<List<IngestDocumentWrapper>> cutBatches(List<IngestDocumentWrapper> ingestDocumentWrappers) { + List<List<IngestDocumentWrapper>> batches = new ArrayList<>(); + for (int i = 0; i < ingestDocumentWrappers.size(); i += this.batchSize) { + batches.add(ingestDocumentWrappers.subList(i, Math.min(i + this.batchSize, ingestDocumentWrappers.size()))); + } + return batches; + } + + /** + * Factory class for creating {@link AbstractBatchingProcessor} instances. + * + * @opensearch.internal + */ + public abstract static class Factory implements Processor.Factory { + final String processorType; + + protected Factory(String processorType) { + this.processorType = processorType; + } + + /** + * Creates a new processor instance. + * + * @param processorFactories The processor factories. + * @param tag The processor tag. + * @param description The processor description. + * @param config The processor configuration. + * @return The new AbstractBatchingProcessor instance. + * @throws Exception If the processor could not be created.
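+ * @throws OpenSearchParseException if {@code batch_size} is configured but is not a positive integer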
+ */ + @Override + public AbstractBatchingProcessor create( + Map<String, Processor.Factory> processorFactories, + String tag, + String description, + Map<String, Object> config + ) throws Exception { + int batchSize = ConfigurationUtils.readIntProperty(this.processorType, tag, config, BATCH_SIZE_FIELD, DEFAULT_BATCH_SIZE); + if (batchSize < 1) { + throw newConfigurationException(this.processorType, tag, BATCH_SIZE_FIELD, "batch size must be a positive integer"); + } + return newProcessor(tag, description, batchSize, config); + } + + /** + * Returns a new processor instance. + * + * @param tag tag of the processor + * @param description description of the processor + * @param batchSize batch size of the processor + * @param config configuration of the processor + * @return a new batch processor instance + */ + protected abstract AbstractBatchingProcessor newProcessor( + String tag, + String description, + int batchSize, + Map<String, Object> config + ); + } +} diff --git a/server/src/test/java/org/opensearch/ingest/AbstractBatchingProcessorTests.java b/server/src/test/java/org/opensearch/ingest/AbstractBatchingProcessorTests.java new file mode 100644 index 0000000000000..54fc30cb5befa --- /dev/null +++ b/server/src/test/java/org/opensearch/ingest/AbstractBatchingProcessorTests.java @@ -0,0 +1,106 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +package org.opensearch.ingest; + +import org.opensearch.OpenSearchParseException; +import org.opensearch.test.OpenSearchTestCase; + +import java.util.ArrayList; +import java.util.Arrays; +import java.util.Collections; +import java.util.HashMap; +import java.util.List; +import java.util.Map; +import java.util.function.Consumer; + +public class AbstractBatchingProcessorTests extends OpenSearchTestCase { + + public void testBatchExecute_emptyInput() { + DummyProcessor processor = new DummyProcessor(3); + Consumer<List<IngestDocumentWrapper>> handler = (results) -> assertTrue(results.isEmpty()); + processor.batchExecute(Collections.emptyList(), handler); + assertTrue(processor.getSubBatches().isEmpty()); + } + + public void testBatchExecute_singleBatchSize() { + DummyProcessor processor = new DummyProcessor(3); + List<IngestDocumentWrapper> wrapperList = Arrays.asList( + IngestDocumentPreparer.createIngestDocumentWrapper(1), + IngestDocumentPreparer.createIngestDocumentWrapper(2), + IngestDocumentPreparer.createIngestDocumentWrapper(3) + ); + List<IngestDocumentWrapper> resultList = new ArrayList<>(); + processor.batchExecute(wrapperList, resultList::addAll); + assertEquals(wrapperList, resultList); + assertEquals(1, processor.getSubBatches().size()); + assertEquals(wrapperList, processor.getSubBatches().get(0)); + } + + public void testBatchExecute_multipleBatches() { + DummyProcessor processor = new DummyProcessor(2); + List<IngestDocumentWrapper> wrapperList = Arrays.asList( + IngestDocumentPreparer.createIngestDocumentWrapper(1), + IngestDocumentPreparer.createIngestDocumentWrapper(2), + IngestDocumentPreparer.createIngestDocumentWrapper(3), + IngestDocumentPreparer.createIngestDocumentWrapper(4), + IngestDocumentPreparer.createIngestDocumentWrapper(5) + ); + List<IngestDocumentWrapper> resultList = new ArrayList<>(); + processor.batchExecute(wrapperList, resultList::addAll); + assertEquals(wrapperList, resultList); + assertEquals(3, processor.getSubBatches().size()); + assertEquals(wrapperList.subList(0, 2), processor.getSubBatches().get(0)); + assertEquals(wrapperList.subList(2, 4), processor.getSubBatches().get(1)); + assertEquals(wrapperList.subList(4, 5),
processor.getSubBatches().get(2)); + } + + public void testBatchExecute_randomBatches() { + int batchSize = randomIntBetween(2, 32); + int docCount = randomIntBetween(2, 32); + DummyProcessor processor = new DummyProcessor(batchSize); + List wrapperList = new ArrayList<>(); + for (int i = 0; i < docCount; ++i) { + wrapperList.add(IngestDocumentPreparer.createIngestDocumentWrapper(i)); + } + List resultList = new ArrayList<>(); + processor.batchExecute(wrapperList, resultList::addAll); + assertEquals(wrapperList, resultList); + assertEquals(docCount / batchSize + (docCount % batchSize == 0 ? 0 : 1), processor.getSubBatches().size()); + } + + public void testBatchExecute_defaultBatchSize() { + DummyProcessor processor = new DummyProcessor(1); + List wrapperList = Arrays.asList( + IngestDocumentPreparer.createIngestDocumentWrapper(1), + IngestDocumentPreparer.createIngestDocumentWrapper(2), + IngestDocumentPreparer.createIngestDocumentWrapper(3) + ); + List resultList = new ArrayList<>(); + processor.batchExecute(wrapperList, resultList::addAll); + assertEquals(wrapperList, resultList); + assertEquals(3, processor.getSubBatches().size()); + assertEquals(wrapperList.subList(0, 1), processor.getSubBatches().get(0)); + assertEquals(wrapperList.subList(1, 2), processor.getSubBatches().get(1)); + assertEquals(wrapperList.subList(2, 3), processor.getSubBatches().get(2)); + } + + public void testFactory_invalidBatchSize() { + Map config = new HashMap<>(); + config.put("batch_size", 0); + DummyProcessor.DummyProcessorFactory factory = new DummyProcessor.DummyProcessorFactory("DummyProcessor"); + OpenSearchParseException exception = assertThrows(OpenSearchParseException.class, () -> factory.create(config)); + assertEquals("[batch_size] batch size must be a positive integer", exception.getMessage()); + } + + public void testFactory_defaultBatchSize() throws Exception { + Map config = new HashMap<>(); + DummyProcessor.DummyProcessorFactory factory = new DummyProcessor.DummyProcessorFactory("DummyProcessor"); + DummyProcessor processor = (DummyProcessor) factory.create(config); + assertEquals(1, processor.batchSize); + } + + public void testFactory_callNewProcessor() throws Exception { + Map config = new HashMap<>(); + config.put("batch_size", 3); + DummyProcessor.DummyProcessorFactory factory = new DummyProcessor.DummyProcessorFactory("DummyProcessor"); + DummyProcessor processor = (DummyProcessor) factory.create(config); + assertEquals(3, processor.batchSize); + } + + static class DummyProcessor extends AbstractBatchingProcessor { + private List> subBatches = new ArrayList<>(); + + public List> getSubBatches() { + return subBatches; + } + + protected DummyProcessor(int batchSize) { + super("tag", "description", batchSize); + } + + @Override + public void subBatchExecute(List ingestDocumentWrappers, Consumer> handler) { + subBatches.add(ingestDocumentWrappers); + handler.accept(ingestDocumentWrappers); + } + + @Override + public IngestDocument execute(IngestDocument ingestDocument) throws Exception { + return ingestDocument; + } + + @Override + public String getType() { + return null; + } + + public static class DummyProcessorFactory extends Factory { + + protected DummyProcessorFactory(String processorType) { + super(processorType); + } + + public AbstractBatchingProcessor create(Map config) throws Exception { + final Map processorFactories = new HashMap<>(); + return super.create(processorFactories, "tag", "description", config); + } + + @Override + protected AbstractBatchingProcessor 
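+    /**
+     * Minimal concrete subclass used by the tests above: it records every
+     * sub-batch passed to subBatchExecute so the batch-splitting behaviour
+     * can be asserted directly.
+     */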
+    static class DummyProcessor extends AbstractBatchingProcessor {
+        private List<List<IngestDocumentWrapper>> subBatches = new ArrayList<>();
+
+        public List<List<IngestDocumentWrapper>> getSubBatches() {
+            return subBatches;
+        }
+
+        protected DummyProcessor(int batchSize) {
+            super("tag", "description", batchSize);
+        }
+
+        @Override
+        public void subBatchExecute(List<IngestDocumentWrapper> ingestDocumentWrappers, Consumer<List<IngestDocumentWrapper>> handler) {
+            subBatches.add(ingestDocumentWrappers);
+            handler.accept(ingestDocumentWrappers);
+        }
+
+        @Override
+        public IngestDocument execute(IngestDocument ingestDocument) throws Exception {
+            return ingestDocument;
+        }
+
+        @Override
+        public String getType() {
+            return null;
+        }
+
+        public static class DummyProcessorFactory extends Factory {
+
+            protected DummyProcessorFactory(String processorType) {
+                super(processorType);
+            }
+
+            public AbstractBatchingProcessor create(Map<String, Object> config) throws Exception {
+                final Map<String, Processor.Factory> processorFactories = new HashMap<>();
+                return super.create(processorFactories, "tag", "description", config);
+            }
+
+            @Override
+            protected AbstractBatchingProcessor newProcessor(String tag, String description, int batchSize, Map<String, Object> config) {
+                return new DummyProcessor(batchSize);
+            }
+        }
+    }
+}

From 5c8623f15f1fbec40328f05f53814404e3438ff7 Mon Sep 17 00:00:00 2001
From: Andriy Redko
Date: Fri, 28 Jun 2024 17:01:43 -0400
Subject: [PATCH 023/167] Add @InternalApi annotation to japicmp exclusions
 (#14597)

Signed-off-by: Andriy Redko
---
 CHANGELOG.md        | 1 +
 server/build.gradle | 1 +
 2 files changed, 2 insertions(+)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index 8835032785430..c7cfbba928da9 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -34,6 +34,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
 - Updated the `indices.query.bool.max_clause_count` setting from being static to dynamically updateable ([#13568](https://github.com/opensearch-project/OpenSearch/pull/13568))
 - Make the class CommunityIdProcessor final ([#14448](https://github.com/opensearch-project/OpenSearch/pull/14448))
 - Allow @InternalApi annotation on classes not meant to be constructed outside of the OpenSearch core ([#14575](https://github.com/opensearch-project/OpenSearch/pull/14575))
+- Add @InternalApi annotation to japicmp exclusions ([#14597](https://github.com/opensearch-project/OpenSearch/pull/14597))
 
 ### Deprecated
 
diff --git a/server/build.gradle b/server/build.gradle
index b8a99facbf964..429af5d0ac258 100644
--- a/server/build.gradle
+++ b/server/build.gradle
@@ -409,6 +409,7 @@ tasks.register("japicmp", me.champeau.gradle.japicmp.JapicmpTask) {
   failOnModification = true
   ignoreMissingClasses = true
   annotationIncludes = ['@org.opensearch.common.annotation.PublicApi', '@org.opensearch.common.annotation.DeprecatedApi']
+  annotationExcludes = ['@org.opensearch.common.annotation.InternalApi']
   txtOutputFile = layout.buildDirectory.file("reports/java-compatibility/report.txt")
   htmlOutputFile = layout.buildDirectory.file("reports/java-compatibility/report.html")
   dependsOn downloadJapicmpCompareTarget

From c71fd4a2f2e6d5d7d9f2f304c573180027af8f44 Mon Sep 17 00:00:00 2001
From: Vatsal <36672090+imvtsl@users.noreply.github.com>
Date: Fri, 28 Jun 2024 22:16:47 -0700
Subject: [PATCH 024/167] =?UTF-8?q?Fix=20issue=2014519:Parsing=20a=20GetRe?=
 =?UTF-8?q?sult=20returns=20NPE=20if=20found=20field=20is=20mis=E2=80=A6?=
 =?UTF-8?q?=20(#14552)?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

* Fix issue 14519: Parsing a GetResult returns NPE if found field is missing.

Signed-off-by: Vatsal
Signed-off-by: vatsal

* Fix issue 14519: Parsing a GetResult returns NPE if found field is missing.

Signed-off-by: Vatsal
Signed-off-by: vatsal

* Fix issue 14519: Fix wildcard import.

Signed-off-by: Vatsal
Signed-off-by: vatsal

* Fix issue 14519: Fix wildcard import.

Signed-off-by: Vatsal
Signed-off-by: vatsal

* Fix issue 14519: Fix spotless issues.
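To illustrate the failure mode this change removes — a minimal sketch that mirrors
the unit test added below (the JSON body, parser setup, and expected message are
all taken from the diff; nothing else is assumed):

    // A get response body with no "found" field used to surface as a
    // NullPointerException deep inside GetResult; it now fails fast while parsing.
    String json = "{\"_index\":\"foo\",\"_id\":\"bar\"}";
    try (XContentParser parser = JsonXContent.jsonXContent.createParser(
            NamedXContentRegistry.EMPTY, LoggingDeprecationHandler.INSTANCE, json)) {
        parser.nextToken(); // position the parser on START_OBJECT
        GetResult.fromXContentEmbedded(parser); // throws ParsingException: "Missing required field [found]"
    }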
Signed-off-by: Vatsal Signed-off-by: vatsal * Fix issue 14519:update changelog Signed-off-by: vatsal --------- Signed-off-by: vatsal Signed-off-by: Daniel Widdis Co-authored-by: Daniel Widdis --- CHANGELOG.md | 1 + .../org/opensearch/index/get/GetResult.java | 10 ++++++++++ .../opensearch/index/get/GetResultTests.java | 20 +++++++++++++++++++ 3 files changed, 31 insertions(+) diff --git a/CHANGELOG.md b/CHANGELOG.md index c7cfbba928da9..e9470d9bb4727 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -52,6 +52,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), - Add ListPitInfo::getKeepAlive() getter ([#14495](https://github.com/opensearch-project/OpenSearch/pull/14495)) - Fix FuzzyQuery in keyword field will use IndexOrDocValuesQuery when both of index and doc_value are true ([#14378](https://github.com/opensearch-project/OpenSearch/pull/14378)) - Fix file cache initialization ([#14004](https://github.com/opensearch-project/OpenSearch/pull/14004)) +- Handle NPE in GetResult if "found" field is missing ([#14552](https://github.com/opensearch-project/OpenSearch/pull/14552)) ### Security diff --git a/server/src/main/java/org/opensearch/index/get/GetResult.java b/server/src/main/java/org/opensearch/index/get/GetResult.java index c0dd1cd2ecb30..27a2826f71e19 100644 --- a/server/src/main/java/org/opensearch/index/get/GetResult.java +++ b/server/src/main/java/org/opensearch/index/get/GetResult.java @@ -37,6 +37,7 @@ import org.opensearch.common.annotation.PublicApi; import org.opensearch.common.document.DocumentField; import org.opensearch.common.xcontent.XContentHelper; +import org.opensearch.core.common.ParsingException; import org.opensearch.core.common.Strings; import org.opensearch.core.common.bytes.BytesReference; import org.opensearch.core.common.io.stream.StreamInput; @@ -56,6 +57,7 @@ import java.util.Collections; import java.util.HashMap; import java.util.Iterator; +import java.util.Locale; import java.util.Map; import java.util.Objects; @@ -398,6 +400,14 @@ public static GetResult fromXContentEmbedded(XContentParser parser, String index } } } + + if (found == null) { + throw new ParsingException( + parser.getTokenLocation(), + String.format(Locale.ROOT, "Missing required field [%s]", GetResult.FOUND) + ); + } + return new GetResult(index, id, seqNo, primaryTerm, version, found, source, documentFields, metaFields); } diff --git a/server/src/test/java/org/opensearch/index/get/GetResultTests.java b/server/src/test/java/org/opensearch/index/get/GetResultTests.java index 64b14744a40d2..2001bb84454cd 100644 --- a/server/src/test/java/org/opensearch/index/get/GetResultTests.java +++ b/server/src/test/java/org/opensearch/index/get/GetResultTests.java @@ -35,12 +35,16 @@ import org.opensearch.common.collect.Tuple; import org.opensearch.common.document.DocumentField; import org.opensearch.common.io.stream.BytesStreamOutput; +import org.opensearch.common.xcontent.LoggingDeprecationHandler; import org.opensearch.common.xcontent.XContentType; +import org.opensearch.common.xcontent.json.JsonXContent; +import org.opensearch.core.common.ParsingException; import org.opensearch.core.common.Strings; import org.opensearch.core.common.bytes.BytesArray; import org.opensearch.core.common.bytes.BytesReference; import org.opensearch.core.xcontent.MediaType; import org.opensearch.core.xcontent.MediaTypeRegistry; +import org.opensearch.core.xcontent.NamedXContentRegistry; import org.opensearch.core.xcontent.ToXContent; import 
org.opensearch.core.xcontent.XContentParser; import org.opensearch.index.mapper.IdFieldMapper; @@ -220,6 +224,22 @@ public void testEqualsAndHashcode() { ); } + public void testFomXContentEmbeddedFoundParsingException() throws IOException { + String json = "{\"_index\":\"foo\",\"_id\":\"bar\"}"; + try ( + XContentParser parser = JsonXContent.jsonXContent.createParser( + NamedXContentRegistry.EMPTY, + LoggingDeprecationHandler.INSTANCE, + json + ) + ) { + ensureExpectedToken(XContentParser.Token.START_OBJECT, parser.nextToken(), parser); + ParsingException parsingException = assertThrows(ParsingException.class, () -> GetResult.fromXContentEmbedded(parser)); + assertEquals("Missing required field [found]", parsingException.getMessage()); + } + + } + public static GetResult copyGetResult(GetResult getResult) { return new GetResult( getResult.getIndex(), From 6267e94b00d62c3fba80abf02ea584c85ea3aad0 Mon Sep 17 00:00:00 2001 From: Bharathwaj G Date: Mon, 1 Jul 2024 14:38:03 +0530 Subject: [PATCH 025/167] Star tree mapping changes (#14605) * Star tree mapping changes with feature flag --------- Signed-off-by: Bharathwaj G --- distribution/src/config/opensearch.yml | 4 + .../index/mapper/StarTreeMapperIT.java | 440 ++++++++++ .../metadata/MetadataCreateIndexService.java | 5 + .../metadata/MetadataMappingService.java | 12 +- .../common/settings/ClusterSettings.java | 6 +- .../common/settings/FeatureFlagSettings.java | 3 +- .../common/settings/IndexScopedSettings.java | 10 + .../opensearch/common/util/FeatureFlags.java | 10 +- .../org/opensearch/index/IndexModule.java | 10 +- .../org/opensearch/index/IndexService.java | 11 +- .../CompositeIndexSettings.java | 55 ++ .../CompositeIndexValidator.java | 46 ++ .../datacube/DateDimension.java | 72 ++ .../compositeindex/datacube/Dimension.java | 22 + .../datacube/DimensionFactory.java | 99 +++ .../index/compositeindex/datacube/Metric.java | 65 ++ .../compositeindex/datacube/MetricStat.java | 44 + .../datacube/NumericDimension.java | 57 ++ .../compositeindex/datacube/package-info.java | 11 + .../datacube/startree/StarTreeField.java | 94 +++ .../startree/StarTreeFieldConfiguration.java | 108 +++ .../startree/StarTreeIndexSettings.java | 116 +++ .../datacube/startree/StarTreeValidator.java | 94 +++ .../datacube/startree/package-info.java | 11 + .../index/compositeindex/package-info.java | 13 + .../mapper/CompositeDataCubeFieldType.java | 56 ++ .../mapper/CompositeMappedFieldType.java | 75 ++ .../org/opensearch/index/mapper/Mapper.java | 5 + .../index/mapper/MapperService.java | 17 + .../opensearch/index/mapper/ObjectMapper.java | 106 ++- .../index/mapper/RootObjectMapper.java | 15 +- .../index/mapper/StarTreeMapper.java | 406 +++++++++ .../org/opensearch/indices/IndicesModule.java | 2 + .../opensearch/indices/IndicesService.java | 17 +- .../main/java/org/opensearch/node/Node.java | 5 +- .../index/mapper/ObjectMapperTests.java | 73 ++ .../index/mapper/StarTreeMapperTests.java | 767 ++++++++++++++++++ .../index/mapper/MapperTestCase.java | 2 +- .../aggregations/AggregatorTestCase.java | 2 + 39 files changed, 2951 insertions(+), 15 deletions(-) create mode 100644 server/src/internalClusterTest/java/org/opensearch/index/mapper/StarTreeMapperIT.java create mode 100644 server/src/main/java/org/opensearch/index/compositeindex/CompositeIndexSettings.java create mode 100644 server/src/main/java/org/opensearch/index/compositeindex/CompositeIndexValidator.java create mode 100644 server/src/main/java/org/opensearch/index/compositeindex/datacube/DateDimension.java create 
mode 100644 server/src/main/java/org/opensearch/index/compositeindex/datacube/Dimension.java create mode 100644 server/src/main/java/org/opensearch/index/compositeindex/datacube/DimensionFactory.java create mode 100644 server/src/main/java/org/opensearch/index/compositeindex/datacube/Metric.java create mode 100644 server/src/main/java/org/opensearch/index/compositeindex/datacube/MetricStat.java create mode 100644 server/src/main/java/org/opensearch/index/compositeindex/datacube/NumericDimension.java create mode 100644 server/src/main/java/org/opensearch/index/compositeindex/datacube/package-info.java create mode 100644 server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/StarTreeField.java create mode 100644 server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/StarTreeFieldConfiguration.java create mode 100644 server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/StarTreeIndexSettings.java create mode 100644 server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/StarTreeValidator.java create mode 100644 server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/package-info.java create mode 100644 server/src/main/java/org/opensearch/index/compositeindex/package-info.java create mode 100644 server/src/main/java/org/opensearch/index/mapper/CompositeDataCubeFieldType.java create mode 100644 server/src/main/java/org/opensearch/index/mapper/CompositeMappedFieldType.java create mode 100644 server/src/main/java/org/opensearch/index/mapper/StarTreeMapper.java create mode 100644 server/src/test/java/org/opensearch/index/mapper/StarTreeMapperTests.java diff --git a/distribution/src/config/opensearch.yml b/distribution/src/config/opensearch.yml index 10bab9b3fce92..4115601f62ada 100644 --- a/distribution/src/config/opensearch.yml +++ b/distribution/src/config/opensearch.yml @@ -125,3 +125,7 @@ ${path.logs} # Gates the functionality of enabling Opensearch to use pluggable caches with respective store names via setting. # #opensearch.experimental.feature.pluggable.caching.enabled: false +# +# Gates the functionality of star tree index, which improves the performance of search aggregations. +# +#opensearch.experimental.feature.composite_index.star_tree.enabled: true diff --git a/server/src/internalClusterTest/java/org/opensearch/index/mapper/StarTreeMapperIT.java b/server/src/internalClusterTest/java/org/opensearch/index/mapper/StarTreeMapperIT.java new file mode 100644 index 0000000000000..8e5193b650868 --- /dev/null +++ b/server/src/internalClusterTest/java/org/opensearch/index/mapper/StarTreeMapperIT.java @@ -0,0 +1,440 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. 
+ */ + +package org.opensearch.index.mapper; + +import org.opensearch.action.support.master.AcknowledgedResponse; +import org.opensearch.common.Rounding; +import org.opensearch.common.settings.Settings; +import org.opensearch.common.util.FeatureFlags; +import org.opensearch.core.index.Index; +import org.opensearch.core.xcontent.XContentBuilder; +import org.opensearch.index.IndexService; +import org.opensearch.index.compositeindex.CompositeIndexSettings; +import org.opensearch.index.compositeindex.datacube.DateDimension; +import org.opensearch.index.compositeindex.datacube.MetricStat; +import org.opensearch.index.compositeindex.datacube.startree.StarTreeFieldConfiguration; +import org.opensearch.index.compositeindex.datacube.startree.StarTreeIndexSettings; +import org.opensearch.indices.IndicesService; +import org.opensearch.test.OpenSearchIntegTestCase; +import org.junit.After; +import org.junit.Before; + +import java.io.IOException; +import java.util.Arrays; +import java.util.Collections; +import java.util.List; +import java.util.Set; + +import static org.opensearch.common.xcontent.XContentFactory.jsonBuilder; +import static org.opensearch.test.hamcrest.OpenSearchAssertions.assertAcked; + +/** + * Integration tests for star tree mapper + */ +public class StarTreeMapperIT extends OpenSearchIntegTestCase { + private static final String TEST_INDEX = "test"; + + private static XContentBuilder createMinimalTestMapping(boolean invalidDim, boolean invalidMetric, boolean keywordDim) { + try { + return jsonBuilder().startObject() + .startObject("composite") + .startObject("startree-1") + .field("type", "star_tree") + .startObject("config") + .startArray("ordered_dimensions") + .startObject() + .field("name", "timestamp") + .endObject() + .startObject() + .field("name", getDim(invalidDim, keywordDim)) + .endObject() + .endArray() + .startArray("metrics") + .startObject() + .field("name", getDim(invalidMetric, false)) + .endObject() + .endArray() + .endObject() + .endObject() + .endObject() + .startObject("properties") + .startObject("timestamp") + .field("type", "date") + .endObject() + .startObject("numeric_dv") + .field("type", "integer") + .field("doc_values", true) + .endObject() + .startObject("numeric") + .field("type", "integer") + .field("doc_values", false) + .endObject() + .startObject("keyword_dv") + .field("type", "keyword") + .field("doc_values", true) + .endObject() + .startObject("keyword") + .field("type", "keyword") + .field("doc_values", false) + .endObject() + .endObject() + .endObject(); + } catch (IOException e) { + throw new IllegalStateException(e); + } + } + + private static XContentBuilder createMaxDimTestMapping() { + try { + return jsonBuilder().startObject() + .startObject("composite") + .startObject("startree-1") + .field("type", "star_tree") + .startObject("config") + .startArray("ordered_dimensions") + .startObject() + .field("name", "timestamp") + .startArray("calendar_intervals") + .value("day") + .value("month") + .endArray() + .endObject() + .startObject() + .field("name", "dim2") + .endObject() + .startObject() + .field("name", "dim3") + .endObject() + .endArray() + .startArray("metrics") + .startObject() + .field("name", "dim2") + .endObject() + .endArray() + .endObject() + .endObject() + .endObject() + .startObject("properties") + .startObject("timestamp") + .field("type", "date") + .endObject() + .startObject("dim2") + .field("type", "integer") + .field("doc_values", true) + .endObject() + .startObject("dim3") + .field("type", "integer") + .field("doc_values", 
true) + .endObject() + .endObject() + .endObject(); + } catch (IOException e) { + throw new IllegalStateException(e); + } + } + + private static XContentBuilder createTestMappingWithoutStarTree(boolean invalidDim, boolean invalidMetric, boolean keywordDim) { + try { + return jsonBuilder().startObject() + .startObject("properties") + .startObject("timestamp") + .field("type", "date") + .endObject() + .startObject("numeric_dv") + .field("type", "integer") + .field("doc_values", true) + .endObject() + .startObject("numeric") + .field("type", "integer") + .field("doc_values", false) + .endObject() + .startObject("keyword_dv") + .field("type", "keyword") + .field("doc_values", true) + .endObject() + .startObject("keyword") + .field("type", "keyword") + .field("doc_values", false) + .endObject() + .endObject() + .endObject(); + } catch (IOException e) { + throw new IllegalStateException(e); + } + } + + private static XContentBuilder createUpdateTestMapping(boolean changeDim, boolean sameStarTree) { + try { + return jsonBuilder().startObject() + .startObject("composite") + .startObject(sameStarTree ? "startree-1" : "startree-2") + .field("type", "star_tree") + .startObject("config") + .startArray("ordered_dimensions") + .startObject() + .field("name", "timestamp") + .endObject() + .startObject() + .field("name", changeDim ? "numeric_new" : getDim(false, false)) + .endObject() + .endArray() + .startArray("metrics") + .startObject() + .field("name", getDim(false, false)) + .endObject() + .endArray() + .endObject() + .endObject() + .endObject() + .startObject("properties") + .startObject("timestamp") + .field("type", "date") + .endObject() + .startObject("numeric_dv") + .field("type", "integer") + .field("doc_values", true) + .endObject() + .startObject("numeric") + .field("type", "integer") + .field("doc_values", false) + .endObject() + .startObject("numeric_new") + .field("type", "integer") + .field("doc_values", true) + .endObject() + .startObject("keyword_dv") + .field("type", "keyword") + .field("doc_values", true) + .endObject() + .startObject("keyword") + .field("type", "keyword") + .field("doc_values", false) + .endObject() + .endObject() + .endObject(); + } catch (IOException e) { + throw new IllegalStateException(e); + } + } + + private static String getDim(boolean hasDocValues, boolean isKeyword) { + if (hasDocValues) { + return "numeric"; + } else if (isKeyword) { + return "keyword"; + } + return "numeric_dv"; + } + + @Override + protected Settings featureFlagSettings() { + return Settings.builder().put(super.featureFlagSettings()).put(FeatureFlags.STAR_TREE_INDEX, "true").build(); + } + + @Before + public final void setupNodeSettings() { + Settings request = Settings.builder().put(CompositeIndexSettings.STAR_TREE_INDEX_ENABLED_SETTING.getKey(), true).build(); + assertAcked(client().admin().cluster().prepareUpdateSettings().setPersistentSettings(request).get()); + } + + public void testValidCompositeIndex() { + prepareCreate(TEST_INDEX).setMapping(createMinimalTestMapping(false, false, false)).get(); + Iterable dataNodeInstances = internalCluster().getDataNodeInstances(IndicesService.class); + for (IndicesService service : dataNodeInstances) { + final Index index = resolveIndex("test"); + if (service.hasIndex(index)) { + IndexService indexService = service.indexService(index); + Set fts = indexService.mapperService().getCompositeFieldTypes(); + + for (CompositeMappedFieldType ft : fts) { + assertTrue(ft instanceof StarTreeMapper.StarTreeFieldType); + StarTreeMapper.StarTreeFieldType 
starTreeFieldType = (StarTreeMapper.StarTreeFieldType) ft; + assertEquals("timestamp", starTreeFieldType.getDimensions().get(0).getField()); + assertTrue(starTreeFieldType.getDimensions().get(0) instanceof DateDimension); + DateDimension dateDim = (DateDimension) starTreeFieldType.getDimensions().get(0); + List expectedTimeUnits = Arrays.asList( + Rounding.DateTimeUnit.MINUTES_OF_HOUR, + Rounding.DateTimeUnit.HOUR_OF_DAY + ); + assertEquals(expectedTimeUnits, dateDim.getIntervals()); + assertEquals("numeric_dv", starTreeFieldType.getDimensions().get(1).getField()); + assertEquals("numeric_dv", starTreeFieldType.getMetrics().get(0).getField()); + List expectedMetrics = Arrays.asList( + MetricStat.AVG, + MetricStat.COUNT, + MetricStat.SUM, + MetricStat.MAX, + MetricStat.MIN + ); + assertEquals(expectedMetrics, starTreeFieldType.getMetrics().get(0).getMetrics()); + assertEquals(10000, starTreeFieldType.getStarTreeConfig().maxLeafDocs()); + assertEquals( + StarTreeFieldConfiguration.StarTreeBuildMode.ON_HEAP, + starTreeFieldType.getStarTreeConfig().getBuildMode() + ); + assertEquals(Collections.emptySet(), starTreeFieldType.getStarTreeConfig().getSkipStarNodeCreationInDims()); + } + } + } + } + + public void testUpdateIndexWithAdditionOfStarTree() { + prepareCreate(TEST_INDEX).setMapping(createMinimalTestMapping(false, false, false)).get(); + + IllegalArgumentException ex = expectThrows( + IllegalArgumentException.class, + () -> client().admin().indices().preparePutMapping(TEST_INDEX).setSource(createUpdateTestMapping(false, false)).get() + ); + assertEquals("Index cannot have more than [1] star tree fields", ex.getMessage()); + } + + public void testUpdateIndexWithNewerStarTree() { + prepareCreate(TEST_INDEX).setMapping(createTestMappingWithoutStarTree(false, false, false)).get(); + + IllegalArgumentException ex = expectThrows( + IllegalArgumentException.class, + () -> client().admin().indices().preparePutMapping(TEST_INDEX).setSource(createUpdateTestMapping(false, false)).get() + ); + assertEquals( + "Composite fields must be specified during index creation, addition of new composite fields during update is not supported", + ex.getMessage() + ); + } + + public void testUpdateIndexWhenMappingIsDifferent() { + prepareCreate(TEST_INDEX).setMapping(createMinimalTestMapping(false, false, false)).get(); + + // update some field in the mapping + IllegalArgumentException ex = expectThrows( + IllegalArgumentException.class, + () -> client().admin().indices().preparePutMapping(TEST_INDEX).setSource(createUpdateTestMapping(true, true)).get() + ); + assertTrue(ex.getMessage().contains("Cannot update parameter [config] from")); + } + + public void testUpdateIndexWhenMappingIsSame() { + prepareCreate(TEST_INDEX).setMapping(createMinimalTestMapping(false, false, false)).get(); + + // update some field in the mapping + AcknowledgedResponse putMappingResponse = client().admin() + .indices() + .preparePutMapping(TEST_INDEX) + .setSource(createMinimalTestMapping(false, false, false)) + .get(); + assertAcked(putMappingResponse); + + Iterable dataNodeInstances = internalCluster().getDataNodeInstances(IndicesService.class); + for (IndicesService service : dataNodeInstances) { + final Index index = resolveIndex("test"); + if (service.hasIndex(index)) { + IndexService indexService = service.indexService(index); + Set fts = indexService.mapperService().getCompositeFieldTypes(); + + for (CompositeMappedFieldType ft : fts) { + assertTrue(ft instanceof StarTreeMapper.StarTreeFieldType); + 
StarTreeMapper.StarTreeFieldType starTreeFieldType = (StarTreeMapper.StarTreeFieldType) ft; + assertEquals("timestamp", starTreeFieldType.getDimensions().get(0).getField()); + assertTrue(starTreeFieldType.getDimensions().get(0) instanceof DateDimension); + DateDimension dateDim = (DateDimension) starTreeFieldType.getDimensions().get(0); + List expectedTimeUnits = Arrays.asList( + Rounding.DateTimeUnit.MINUTES_OF_HOUR, + Rounding.DateTimeUnit.HOUR_OF_DAY + ); + assertEquals(expectedTimeUnits, dateDim.getIntervals()); + assertEquals("numeric_dv", starTreeFieldType.getDimensions().get(1).getField()); + assertEquals("numeric_dv", starTreeFieldType.getMetrics().get(0).getField()); + List expectedMetrics = Arrays.asList( + MetricStat.AVG, + MetricStat.COUNT, + MetricStat.SUM, + MetricStat.MAX, + MetricStat.MIN + ); + assertEquals(expectedMetrics, starTreeFieldType.getMetrics().get(0).getMetrics()); + assertEquals(10000, starTreeFieldType.getStarTreeConfig().maxLeafDocs()); + assertEquals( + StarTreeFieldConfiguration.StarTreeBuildMode.ON_HEAP, + starTreeFieldType.getStarTreeConfig().getBuildMode() + ); + assertEquals(Collections.emptySet(), starTreeFieldType.getStarTreeConfig().getSkipStarNodeCreationInDims()); + } + } + } + } + + public void testInvalidDimCompositeIndex() { + IllegalArgumentException ex = expectThrows( + IllegalArgumentException.class, + () -> prepareCreate(TEST_INDEX).setMapping(createMinimalTestMapping(true, false, false)).get() + ); + assertEquals( + "Aggregations not supported for the dimension field [numeric] with field type [integer] as part of star tree field", + ex.getMessage() + ); + } + + public void testMaxDimsCompositeIndex() { + MapperParsingException ex = expectThrows( + MapperParsingException.class, + () -> prepareCreate(TEST_INDEX).setMapping(createMaxDimTestMapping()) + .setSettings(Settings.builder().put(StarTreeIndexSettings.STAR_TREE_MAX_DIMENSIONS_SETTING.getKey(), 2)) + .get() + ); + assertEquals( + "Failed to parse mapping [_doc]: ordered_dimensions cannot have more than 2 dimensions for star tree field [startree-1]", + ex.getMessage() + ); + } + + public void testMaxCalendarIntervalsCompositeIndex() { + MapperParsingException ex = expectThrows( + MapperParsingException.class, + () -> prepareCreate(TEST_INDEX).setMapping(createMaxDimTestMapping()) + .setSettings(Settings.builder().put(StarTreeIndexSettings.STAR_TREE_MAX_DATE_INTERVALS_SETTING.getKey(), 1)) + .get() + ); + assertEquals( + "Failed to parse mapping [_doc]: At most [1] calendar intervals are allowed in dimension [timestamp]", + ex.getMessage() + ); + } + + public void testUnsupportedDim() { + MapperParsingException ex = expectThrows( + MapperParsingException.class, + () -> prepareCreate(TEST_INDEX).setMapping(createMinimalTestMapping(false, false, true)).get() + ); + assertEquals( + "Failed to parse mapping [_doc]: unsupported field type associated with dimension [keyword] as part of star tree field [startree-1]", + ex.getMessage() + ); + } + + public void testInvalidMetric() { + IllegalArgumentException ex = expectThrows( + IllegalArgumentException.class, + () -> prepareCreate(TEST_INDEX).setMapping(createMinimalTestMapping(false, true, false)).get() + ); + assertEquals( + "Aggregations not supported for the metrics field [numeric] with field type [integer] as part of star tree field", + ex.getMessage() + ); + } + + @After + public final void cleanupNodeSettings() { + assertAcked( + client().admin() + .cluster() + .prepareUpdateSettings() + 
.setPersistentSettings(Settings.builder().putNull("*")) + .setTransientSettings(Settings.builder().putNull("*")) + ); + } +} diff --git a/server/src/main/java/org/opensearch/cluster/metadata/MetadataCreateIndexService.java b/server/src/main/java/org/opensearch/cluster/metadata/MetadataCreateIndexService.java index 16edec112f123..7973745ce84b3 100644 --- a/server/src/main/java/org/opensearch/cluster/metadata/MetadataCreateIndexService.java +++ b/server/src/main/java/org/opensearch/cluster/metadata/MetadataCreateIndexService.java @@ -85,6 +85,7 @@ import org.opensearch.index.IndexNotFoundException; import org.opensearch.index.IndexService; import org.opensearch.index.IndexSettings; +import org.opensearch.index.compositeindex.CompositeIndexValidator; import org.opensearch.index.mapper.DocumentMapper; import org.opensearch.index.mapper.MapperService; import org.opensearch.index.mapper.MapperService.MergeReason; @@ -1318,6 +1319,10 @@ private static void updateIndexMappingsAndBuildSortOrder( } } + if (mapperService.isCompositeIndexPresent()) { + CompositeIndexValidator.validate(mapperService, indexService.getCompositeIndexSettings(), indexService.getIndexSettings()); + } + if (sourceMetadata == null) { // now that the mapping is merged we can validate the index sort. // we cannot validate for index shrinking since the mapping is empty diff --git a/server/src/main/java/org/opensearch/cluster/metadata/MetadataMappingService.java b/server/src/main/java/org/opensearch/cluster/metadata/MetadataMappingService.java index 1406287149e8d..43894db86c512 100644 --- a/server/src/main/java/org/opensearch/cluster/metadata/MetadataMappingService.java +++ b/server/src/main/java/org/opensearch/cluster/metadata/MetadataMappingService.java @@ -55,6 +55,7 @@ import org.opensearch.core.common.Strings; import org.opensearch.core.index.Index; import org.opensearch.index.IndexService; +import org.opensearch.index.compositeindex.CompositeIndexValidator; import org.opensearch.index.mapper.DocumentMapper; import org.opensearch.index.mapper.MapperService; import org.opensearch.index.mapper.MapperService.MergeReason; @@ -282,6 +283,7 @@ private ClusterState applyRequest( // first, simulate: just call merge and ignore the result existingMapper.merge(newMapper.mapping(), MergeReason.MAPPING_UPDATE); } + } Metadata.Builder builder = Metadata.builder(metadata); boolean updated = false; @@ -291,7 +293,7 @@ private ClusterState applyRequest( // we use the exact same indexService and metadata we used to validate above here to actually apply the update final Index index = indexMetadata.getIndex(); final MapperService mapperService = indexMapperServices.get(index); - + boolean isCompositeFieldPresent = !mapperService.getCompositeFieldTypes().isEmpty(); CompressedXContent existingSource = null; DocumentMapper existingMapper = mapperService.documentMapper(); if (existingMapper != null) { @@ -302,6 +304,14 @@ private ClusterState applyRequest( mappingUpdateSource, MergeReason.MAPPING_UPDATE ); + + CompositeIndexValidator.validate( + mapperService, + indicesService.getCompositeIndexSettings(), + mapperService.getIndexSettings(), + isCompositeFieldPresent + ); + CompressedXContent updatedSource = mergedMapper.mappingSource(); if (existingSource != null) { diff --git a/server/src/main/java/org/opensearch/common/settings/ClusterSettings.java b/server/src/main/java/org/opensearch/common/settings/ClusterSettings.java index 233a8d732d178..5dcf23ae52294 100644 --- a/server/src/main/java/org/opensearch/common/settings/ClusterSettings.java +++ 
b/server/src/main/java/org/opensearch/common/settings/ClusterSettings.java @@ -115,6 +115,7 @@ import org.opensearch.index.ShardIndexingPressureMemoryManager; import org.opensearch.index.ShardIndexingPressureSettings; import org.opensearch.index.ShardIndexingPressureStore; +import org.opensearch.index.compositeindex.CompositeIndexSettings; import org.opensearch.index.remote.RemoteStorePressureSettings; import org.opensearch.index.remote.RemoteStoreStatsTrackerFactory; import org.opensearch.index.store.remote.filecache.FileCacheSettings; @@ -754,7 +755,10 @@ public void apply(Settings value, Settings current, Settings previous) { RemoteStoreSettings.CLUSTER_REMOTE_STORE_PATH_HASH_ALGORITHM_SETTING, RemoteStoreSettings.CLUSTER_REMOTE_MAX_TRANSLOG_READERS, RemoteStoreSettings.CLUSTER_REMOTE_STORE_TRANSLOG_METADATA, - SearchService.CLUSTER_ALLOW_DERIVED_FIELD_SETTING + SearchService.CLUSTER_ALLOW_DERIVED_FIELD_SETTING, + + // Composite index settings + CompositeIndexSettings.STAR_TREE_INDEX_ENABLED_SETTING ) ) ); diff --git a/server/src/main/java/org/opensearch/common/settings/FeatureFlagSettings.java b/server/src/main/java/org/opensearch/common/settings/FeatureFlagSettings.java index 238df1bd90113..b6166f5d3cce1 100644 --- a/server/src/main/java/org/opensearch/common/settings/FeatureFlagSettings.java +++ b/server/src/main/java/org/opensearch/common/settings/FeatureFlagSettings.java @@ -37,6 +37,7 @@ protected FeatureFlagSettings( FeatureFlags.TIERED_REMOTE_INDEX_SETTING, FeatureFlags.REMOTE_STORE_MIGRATION_EXPERIMENTAL_SETTING, FeatureFlags.PLUGGABLE_CACHE_SETTING, - FeatureFlags.REMOTE_PUBLICATION_EXPERIMENTAL_SETTING + FeatureFlags.REMOTE_PUBLICATION_EXPERIMENTAL_SETTING, + FeatureFlags.STAR_TREE_INDEX_SETTING ); } diff --git a/server/src/main/java/org/opensearch/common/settings/IndexScopedSettings.java b/server/src/main/java/org/opensearch/common/settings/IndexScopedSettings.java index 1488f5d30b4ba..ca2c4dab6102b 100644 --- a/server/src/main/java/org/opensearch/common/settings/IndexScopedSettings.java +++ b/server/src/main/java/org/opensearch/common/settings/IndexScopedSettings.java @@ -52,6 +52,7 @@ import org.opensearch.index.SearchSlowLog; import org.opensearch.index.TieredMergePolicyProvider; import org.opensearch.index.cache.bitset.BitsetFilterCache; +import org.opensearch.index.compositeindex.datacube.startree.StarTreeIndexSettings; import org.opensearch.index.engine.EngineConfig; import org.opensearch.index.fielddata.IndexFieldDataService; import org.opensearch.index.mapper.FieldMapper; @@ -239,6 +240,15 @@ public final class IndexScopedSettings extends AbstractScopedSettings { // Settings for concurrent segment search IndexSettings.INDEX_CONCURRENT_SEGMENT_SEARCH_SETTING, IndexSettings.ALLOW_DERIVED_FIELDS, + + // Settings for star tree index + StarTreeIndexSettings.STAR_TREE_DEFAULT_MAX_LEAF_DOCS, + StarTreeIndexSettings.STAR_TREE_MAX_DIMENSIONS_SETTING, + StarTreeIndexSettings.STAR_TREE_MAX_FIELDS_SETTING, + StarTreeIndexSettings.DEFAULT_METRICS_LIST, + StarTreeIndexSettings.DEFAULT_DATE_INTERVALS, + StarTreeIndexSettings.STAR_TREE_MAX_DATE_INTERVALS_SETTING, + // validate that built-in similarities don't get redefined Setting.groupSetting("index.similarity.", (s) -> { Map groups = s.getAsGroups(); diff --git a/server/src/main/java/org/opensearch/common/util/FeatureFlags.java b/server/src/main/java/org/opensearch/common/util/FeatureFlags.java index 6c6e2f2d600f0..ceb2559a0e16c 100644 --- a/server/src/main/java/org/opensearch/common/util/FeatureFlags.java +++ 
b/server/src/main/java/org/opensearch/common/util/FeatureFlags.java @@ -100,6 +100,13 @@ public class FeatureFlags { Property.NodeScope ); + /** + * Gates the functionality of star tree index, which improves the performance of search + * aggregations. + */ + public static final String STAR_TREE_INDEX = "opensearch.experimental.feature.composite_index.star_tree.enabled"; + public static final Setting STAR_TREE_INDEX_SETTING = Setting.boolSetting(STAR_TREE_INDEX, false, Property.NodeScope); + private static final List> ALL_FEATURE_FLAG_SETTINGS = List.of( REMOTE_STORE_MIGRATION_EXPERIMENTAL_SETTING, EXTENSIONS_SETTING, @@ -108,7 +115,8 @@ public class FeatureFlags { DATETIME_FORMATTER_CACHING_SETTING, TIERED_REMOTE_INDEX_SETTING, PLUGGABLE_CACHE_SETTING, - REMOTE_PUBLICATION_EXPERIMENTAL_SETTING + REMOTE_PUBLICATION_EXPERIMENTAL_SETTING, + STAR_TREE_INDEX_SETTING ); /** * Should store the settings from opensearch.yml. diff --git a/server/src/main/java/org/opensearch/index/IndexModule.java b/server/src/main/java/org/opensearch/index/IndexModule.java index 4c494a6b35153..09b904394ee09 100644 --- a/server/src/main/java/org/opensearch/index/IndexModule.java +++ b/server/src/main/java/org/opensearch/index/IndexModule.java @@ -66,6 +66,7 @@ import org.opensearch.index.cache.query.DisabledQueryCache; import org.opensearch.index.cache.query.IndexQueryCache; import org.opensearch.index.cache.query.QueryCache; +import org.opensearch.index.compositeindex.CompositeIndexSettings; import org.opensearch.index.engine.Engine; import org.opensearch.index.engine.EngineConfigFactory; import org.opensearch.index.engine.EngineFactory; @@ -311,6 +312,7 @@ public Iterator> settings() { private final BooleanSupplier allowExpensiveQueries; private final Map recoveryStateFactories; private final FileCache fileCache; + private final CompositeIndexSettings compositeIndexSettings; /** * Construct the index module for the index with the specified index settings. 
The index module contains extension points for plugins @@ -330,7 +332,8 @@ public IndexModule( final BooleanSupplier allowExpensiveQueries, final IndexNameExpressionResolver expressionResolver, final Map recoveryStateFactories, - final FileCache fileCache + final FileCache fileCache, + final CompositeIndexSettings compositeIndexSettings ) { this.indexSettings = indexSettings; this.analysisRegistry = analysisRegistry; @@ -343,6 +346,7 @@ public IndexModule( this.expressionResolver = expressionResolver; this.recoveryStateFactories = recoveryStateFactories; this.fileCache = fileCache; + this.compositeIndexSettings = compositeIndexSettings; } public IndexModule( @@ -364,6 +368,7 @@ public IndexModule( allowExpensiveQueries, expressionResolver, recoveryStateFactories, + null, null ); } @@ -739,7 +744,8 @@ public IndexService newIndexService( clusterDefaultRefreshIntervalSupplier, recoverySettings, remoteStoreSettings, - fileCache + fileCache, + compositeIndexSettings ); success = true; return indexService; diff --git a/server/src/main/java/org/opensearch/index/IndexService.java b/server/src/main/java/org/opensearch/index/IndexService.java index a7849bcf80474..1c0db0095bb98 100644 --- a/server/src/main/java/org/opensearch/index/IndexService.java +++ b/server/src/main/java/org/opensearch/index/IndexService.java @@ -73,6 +73,7 @@ import org.opensearch.index.cache.IndexCache; import org.opensearch.index.cache.bitset.BitsetFilterCache; import org.opensearch.index.cache.query.QueryCache; +import org.opensearch.index.compositeindex.CompositeIndexSettings; import org.opensearch.index.engine.Engine; import org.opensearch.index.engine.EngineConfigFactory; import org.opensearch.index.engine.EngineFactory; @@ -192,6 +193,7 @@ public class IndexService extends AbstractIndexComponent implements IndicesClust private final RecoverySettings recoverySettings; private final RemoteStoreSettings remoteStoreSettings; private final FileCache fileCache; + private final CompositeIndexSettings compositeIndexSettings; public IndexService( IndexSettings indexSettings, @@ -228,7 +230,8 @@ public IndexService( Supplier clusterDefaultRefreshIntervalSupplier, RecoverySettings recoverySettings, RemoteStoreSettings remoteStoreSettings, - FileCache fileCache + FileCache fileCache, + CompositeIndexSettings compositeIndexSettings ) { super(indexSettings); this.allowExpensiveQueries = allowExpensiveQueries; @@ -306,6 +309,7 @@ public IndexService( this.translogFactorySupplier = translogFactorySupplier; this.recoverySettings = recoverySettings; this.remoteStoreSettings = remoteStoreSettings; + this.compositeIndexSettings = compositeIndexSettings; this.fileCache = fileCache; updateFsyncTaskIfNecessary(); } @@ -381,6 +385,7 @@ public IndexService( clusterDefaultRefreshIntervalSupplier, recoverySettings, remoteStoreSettings, + null, null ); } @@ -1110,6 +1115,10 @@ private void rescheduleRefreshTasks() { } } + public CompositeIndexSettings getCompositeIndexSettings() { + return compositeIndexSettings; + } + /** * Shard Store Deleter Interface * diff --git a/server/src/main/java/org/opensearch/index/compositeindex/CompositeIndexSettings.java b/server/src/main/java/org/opensearch/index/compositeindex/CompositeIndexSettings.java new file mode 100644 index 0000000000000..014dd22426a10 --- /dev/null +++ b/server/src/main/java/org/opensearch/index/compositeindex/CompositeIndexSettings.java @@ -0,0 +1,55 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed 
under the Apache-2.0 license or a
+ * compatible open source license.
+ */
+
+package org.opensearch.index.compositeindex;
+
+import org.opensearch.common.annotation.ExperimentalApi;
+import org.opensearch.common.settings.ClusterSettings;
+import org.opensearch.common.settings.Setting;
+import org.opensearch.common.settings.Settings;
+import org.opensearch.common.util.FeatureFlags;
+
+/**
+ * Cluster level settings for composite indices
+ *
+ * @opensearch.experimental
+ */
+@ExperimentalApi
+public class CompositeIndexSettings {
+    public static final Setting<Boolean> STAR_TREE_INDEX_ENABLED_SETTING = Setting.boolSetting(
+        "indices.composite_index.star_tree.enabled",
+        false,
+        value -> {
+            if (FeatureFlags.isEnabled(FeatureFlags.STAR_TREE_INDEX_SETTING) == false && value == true) {
+                throw new IllegalArgumentException(
+                    "star tree index is under an experimental feature and can be activated only by enabling "
+                        + FeatureFlags.STAR_TREE_INDEX_SETTING.getKey()
+                        + " feature flag in the JVM options"
+                );
+            }
+        },
+        Setting.Property.NodeScope,
+        Setting.Property.Dynamic
+    );
+
+    private volatile boolean starTreeIndexCreationEnabled;
+
+    public CompositeIndexSettings(Settings settings, ClusterSettings clusterSettings) {
+        this.starTreeIndexCreationEnabled = STAR_TREE_INDEX_ENABLED_SETTING.get(settings);
+        clusterSettings.addSettingsUpdateConsumer(STAR_TREE_INDEX_ENABLED_SETTING, this::starTreeIndexCreationEnabled);
+    }
+
+    private void starTreeIndexCreationEnabled(boolean value) {
+        this.starTreeIndexCreationEnabled = value;
+    }
+
+    public boolean isStarTreeIndexCreationEnabled() {
+        return starTreeIndexCreationEnabled;
+    }
+}
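The gating above is deliberately two-layered: the node-level feature flag must be
set before the dynamic cluster setting can be flipped on, otherwise the setting's
validator rejects the update. A minimal sketch of enabling star tree creation
(setting names come from this patch; the admin-client call mirrors
StarTreeMapperIT.setupNodeSettings, so treat it as a test-style sketch rather
than production code):

    // Prerequisite in opensearch.yml / JVM options, per the validator above:
    //   opensearch.experimental.feature.composite_index.star_tree.enabled: true
    Settings request = Settings.builder()
        .put(CompositeIndexSettings.STAR_TREE_INDEX_ENABLED_SETTING.getKey(), true) // "indices.composite_index.star_tree.enabled"
        .build();
    client().admin().cluster().prepareUpdateSettings().setPersistentSettings(request).get();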
diff --git a/server/src/main/java/org/opensearch/index/compositeindex/CompositeIndexValidator.java b/server/src/main/java/org/opensearch/index/compositeindex/CompositeIndexValidator.java
new file mode 100644
index 0000000000000..995352e3ce6a5
--- /dev/null
+++ b/server/src/main/java/org/opensearch/index/compositeindex/CompositeIndexValidator.java
@@ -0,0 +1,46 @@
+/*
+ * SPDX-License-Identifier: Apache-2.0
+ *
+ * The OpenSearch Contributors require contributions made to
+ * this file be licensed under the Apache-2.0 license or a
+ * compatible open source license.
+ */
+
+package org.opensearch.index.compositeindex;
+
+import org.opensearch.common.annotation.ExperimentalApi;
+import org.opensearch.index.IndexSettings;
+import org.opensearch.index.compositeindex.datacube.startree.StarTreeValidator;
+import org.opensearch.index.mapper.MapperService;
+
+import java.util.Locale;
+
+/**
+ * Validation for composite indices as part of mappings
+ *
+ * @opensearch.experimental
+ */
+@ExperimentalApi
+public class CompositeIndexValidator {
+
+    public static void validate(MapperService mapperService, CompositeIndexSettings compositeIndexSettings, IndexSettings indexSettings) {
+        StarTreeValidator.validate(mapperService, compositeIndexSettings, indexSettings);
+    }
+
+    public static void validate(
+        MapperService mapperService,
+        CompositeIndexSettings compositeIndexSettings,
+        IndexSettings indexSettings,
+        boolean isCompositeFieldPresent
+    ) {
+        if (!isCompositeFieldPresent && mapperService.isCompositeIndexPresent()) {
+            throw new IllegalArgumentException(
+                String.format(
+                    Locale.ROOT,
+                    "Composite fields must be specified during index creation, addition of new composite fields during update is not supported"
+                )
+            );
+        }
+        StarTreeValidator.validate(mapperService, compositeIndexSettings, indexSettings);
+    }
+}
diff --git a/server/src/main/java/org/opensearch/index/compositeindex/datacube/DateDimension.java b/server/src/main/java/org/opensearch/index/compositeindex/datacube/DateDimension.java
new file mode 100644
index 0000000000000..074016db2aed7
--- /dev/null
+++ b/server/src/main/java/org/opensearch/index/compositeindex/datacube/DateDimension.java
@@ -0,0 +1,72 @@
+/*
+ * SPDX-License-Identifier: Apache-2.0
+ *
+ * The OpenSearch Contributors require contributions made to
+ * this file be licensed under the Apache-2.0 license or a
+ * compatible open source license.
+ */ + +package org.opensearch.index.compositeindex.datacube; + +import org.opensearch.common.Rounding; +import org.opensearch.common.annotation.ExperimentalApi; +import org.opensearch.core.xcontent.XContentBuilder; +import org.opensearch.index.mapper.CompositeDataCubeFieldType; + +import java.io.IOException; +import java.util.List; +import java.util.Objects; + +/** + * Date dimension class + * + * @opensearch.experimental + */ +@ExperimentalApi +public class DateDimension implements Dimension { + private final List calendarIntervals; + public static final String CALENDAR_INTERVALS = "calendar_intervals"; + public static final String DATE = "date"; + private final String field; + + public DateDimension(String field, List calendarIntervals) { + this.field = field; + this.calendarIntervals = calendarIntervals; + } + + public List getIntervals() { + return calendarIntervals; + } + + @Override + public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + builder.startObject(); + builder.field(CompositeDataCubeFieldType.NAME, this.getField()); + builder.field(CompositeDataCubeFieldType.TYPE, DATE); + builder.startArray(CALENDAR_INTERVALS); + for (Rounding.DateTimeUnit interval : calendarIntervals) { + builder.value(interval.shortName()); + } + builder.endArray(); + builder.endObject(); + return builder; + } + + @Override + public boolean equals(Object o) { + if (this == o) return true; + if (o == null || getClass() != o.getClass()) return false; + DateDimension that = (DateDimension) o; + return Objects.equals(field, that.getField()) && Objects.equals(calendarIntervals, that.calendarIntervals); + } + + @Override + public int hashCode() { + return Objects.hash(field, calendarIntervals); + } + + @Override + public String getField() { + return field; + } +} diff --git a/server/src/main/java/org/opensearch/index/compositeindex/datacube/Dimension.java b/server/src/main/java/org/opensearch/index/compositeindex/datacube/Dimension.java new file mode 100644 index 0000000000000..0151a474579be --- /dev/null +++ b/server/src/main/java/org/opensearch/index/compositeindex/datacube/Dimension.java @@ -0,0 +1,22 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +package org.opensearch.index.compositeindex.datacube; + +import org.opensearch.common.annotation.ExperimentalApi; +import org.opensearch.core.xcontent.ToXContent; + +/** + * Base interface for data-cube dimensions + * + * @opensearch.experimental + */ +@ExperimentalApi +public interface Dimension extends ToXContent { + String getField(); +} diff --git a/server/src/main/java/org/opensearch/index/compositeindex/datacube/DimensionFactory.java b/server/src/main/java/org/opensearch/index/compositeindex/datacube/DimensionFactory.java new file mode 100644 index 0000000000000..6a09e947217f5 --- /dev/null +++ b/server/src/main/java/org/opensearch/index/compositeindex/datacube/DimensionFactory.java @@ -0,0 +1,99 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. 
+ */ + +package org.opensearch.index.compositeindex.datacube; + +import org.opensearch.common.Rounding; +import org.opensearch.common.annotation.ExperimentalApi; +import org.opensearch.common.xcontent.support.XContentMapValues; +import org.opensearch.index.compositeindex.datacube.startree.StarTreeIndexSettings; +import org.opensearch.index.mapper.DateFieldMapper; +import org.opensearch.index.mapper.Mapper; +import org.opensearch.index.mapper.NumberFieldMapper; + +import java.util.ArrayList; +import java.util.List; +import java.util.Locale; +import java.util.Map; +import java.util.stream.Collectors; + +import static org.opensearch.index.compositeindex.datacube.DateDimension.CALENDAR_INTERVALS; + +/** + * Dimension factory class mainly used to parse and create dimension from the mappings + * + * @opensearch.experimental + */ +@ExperimentalApi +public class DimensionFactory { + public static Dimension parseAndCreateDimension( + String name, + String type, + Map dimensionMap, + Mapper.TypeParser.ParserContext c + ) { + switch (type) { + case DateDimension.DATE: + return parseAndCreateDateDimension(name, dimensionMap, c); + case NumericDimension.NUMERIC: + return new NumericDimension(name); + default: + throw new IllegalArgumentException( + String.format(Locale.ROOT, "unsupported field type associated with dimension [%s] as part of star tree field", name) + ); + } + } + + public static Dimension parseAndCreateDimension( + String name, + Mapper.Builder builder, + Map dimensionMap, + Mapper.TypeParser.ParserContext c + ) { + if (builder instanceof DateFieldMapper.Builder) { + return parseAndCreateDateDimension(name, dimensionMap, c); + } else if (builder instanceof NumberFieldMapper.Builder) { + return new NumericDimension(name); + } + throw new IllegalArgumentException( + String.format(Locale.ROOT, "unsupported field type associated with star tree dimension [%s]", name) + ); + } + + private static DateDimension parseAndCreateDateDimension( + String name, + Map dimensionMap, + Mapper.TypeParser.ParserContext c + ) { + List calendarIntervals = new ArrayList<>(); + List intervalStrings = XContentMapValues.extractRawValues(CALENDAR_INTERVALS, dimensionMap) + .stream() + .map(Object::toString) + .collect(Collectors.toList()); + if (intervalStrings == null || intervalStrings.isEmpty()) { + calendarIntervals = StarTreeIndexSettings.DEFAULT_DATE_INTERVALS.get(c.getSettings()); + } else { + if (intervalStrings.size() > StarTreeIndexSettings.STAR_TREE_MAX_DATE_INTERVALS_SETTING.get(c.getSettings())) { + throw new IllegalArgumentException( + String.format( + Locale.ROOT, + "At most [%s] calendar intervals are allowed in dimension [%s]", + StarTreeIndexSettings.STAR_TREE_MAX_DATE_INTERVALS_SETTING.get(c.getSettings()), + name + ) + ); + } + for (String interval : intervalStrings) { + calendarIntervals.add(StarTreeIndexSettings.getTimeUnit(interval)); + } + calendarIntervals = new ArrayList<>(calendarIntervals); + } + dimensionMap.remove(CALENDAR_INTERVALS); + return new DateDimension(name, calendarIntervals); + } +} diff --git a/server/src/main/java/org/opensearch/index/compositeindex/datacube/Metric.java b/server/src/main/java/org/opensearch/index/compositeindex/datacube/Metric.java new file mode 100644 index 0000000000000..9accb0201170a --- /dev/null +++ b/server/src/main/java/org/opensearch/index/compositeindex/datacube/Metric.java @@ -0,0 +1,65 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 
license or a + * compatible open source license. + */ + +package org.opensearch.index.compositeindex.datacube; + +import org.opensearch.common.annotation.ExperimentalApi; +import org.opensearch.core.xcontent.ToXContent; +import org.opensearch.core.xcontent.XContentBuilder; + +import java.io.IOException; +import java.util.List; +import java.util.Objects; + +/** + * Holds details of metrics field as part of composite field + */ +@ExperimentalApi +public class Metric implements ToXContent { + private final String field; + private final List metrics; + + public Metric(String field, List metrics) { + this.field = field; + this.metrics = metrics; + } + + public String getField() { + return field; + } + + public List getMetrics() { + return metrics; + } + + @Override + public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + builder.startObject(); + builder.field("name", field); + builder.startArray("stats"); + for (MetricStat metricType : metrics) { + builder.value(metricType.getTypeName()); + } + builder.endArray(); + builder.endObject(); + return builder; + } + + @Override + public boolean equals(Object o) { + if (this == o) return true; + if (o == null || getClass() != o.getClass()) return false; + Metric metric = (Metric) o; + return Objects.equals(field, metric.field) && Objects.equals(metrics, metric.metrics); + } + + @Override + public int hashCode() { + return Objects.hash(field, metrics); + } +} diff --git a/server/src/main/java/org/opensearch/index/compositeindex/datacube/MetricStat.java b/server/src/main/java/org/opensearch/index/compositeindex/datacube/MetricStat.java new file mode 100644 index 0000000000000..fbde296b15f7e --- /dev/null +++ b/server/src/main/java/org/opensearch/index/compositeindex/datacube/MetricStat.java @@ -0,0 +1,44 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +package org.opensearch.index.compositeindex.datacube; + +import org.opensearch.common.annotation.ExperimentalApi; + +/** + * Supported metric types for composite index + * + * @opensearch.experimental + */ +@ExperimentalApi +public enum MetricStat { + COUNT("count"), + AVG("avg"), + SUM("sum"), + MIN("min"), + MAX("max"); + + private final String typeName; + + MetricStat(String typeName) { + this.typeName = typeName; + } + + public String getTypeName() { + return typeName; + } + + public static MetricStat fromTypeName(String typeName) { + for (MetricStat metric : MetricStat.values()) { + if (metric.getTypeName().equalsIgnoreCase(typeName)) { + return metric; + } + } + throw new IllegalArgumentException("Invalid metric stat: " + typeName); + } +} diff --git a/server/src/main/java/org/opensearch/index/compositeindex/datacube/NumericDimension.java b/server/src/main/java/org/opensearch/index/compositeindex/datacube/NumericDimension.java new file mode 100644 index 0000000000000..9c25ef5b25503 --- /dev/null +++ b/server/src/main/java/org/opensearch/index/compositeindex/datacube/NumericDimension.java @@ -0,0 +1,57 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. 
+ */ + +package org.opensearch.index.compositeindex.datacube; + +import org.opensearch.common.annotation.ExperimentalApi; +import org.opensearch.core.xcontent.XContentBuilder; +import org.opensearch.index.mapper.CompositeDataCubeFieldType; + +import java.io.IOException; +import java.util.Objects; + +/** + * Composite index numeric dimension class + * + * @opensearch.experimental + */ +@ExperimentalApi +public class NumericDimension implements Dimension { + public static final String NUMERIC = "numeric"; + private final String field; + + public NumericDimension(String field) { + this.field = field; + } + + public String getField() { + return field; + } + + @Override + public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + builder.startObject(); + builder.field(CompositeDataCubeFieldType.NAME, field); + builder.field(CompositeDataCubeFieldType.TYPE, NUMERIC); + builder.endObject(); + return builder; + } + + @Override + public boolean equals(Object o) { + if (this == o) return true; + if (o == null || getClass() != o.getClass()) return false; + NumericDimension dimension = (NumericDimension) o; + return Objects.equals(field, dimension.getField()); + } + + @Override + public int hashCode() { + return Objects.hash(field); + } +} diff --git a/server/src/main/java/org/opensearch/index/compositeindex/datacube/package-info.java b/server/src/main/java/org/opensearch/index/compositeindex/datacube/package-info.java new file mode 100644 index 0000000000000..320876ea937bf --- /dev/null +++ b/server/src/main/java/org/opensearch/index/compositeindex/datacube/package-info.java @@ -0,0 +1,11 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ +/** + * Core classes for handling data cube indices such as star tree index. + */ +package org.opensearch.index.compositeindex.datacube; diff --git a/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/StarTreeField.java b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/StarTreeField.java new file mode 100644 index 0000000000000..922ddcbea4fe2 --- /dev/null +++ b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/StarTreeField.java @@ -0,0 +1,94 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. 
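As a quick illustration of the serialized form, the toXContent implementation above emits a name/type pair per dimension. A minimal sketch, using the standard XContentFactory JSON builder already imported elsewhere in this patch:

```java
import java.io.IOException;

import org.opensearch.common.xcontent.XContentFactory;
import org.opensearch.core.xcontent.ToXContent;
import org.opensearch.core.xcontent.XContentBuilder;
import org.opensearch.index.compositeindex.datacube.NumericDimension;

public class NumericDimensionExample {
    public static void main(String[] args) throws IOException {
        XContentBuilder builder = XContentFactory.jsonBuilder();
        new NumericDimension("status").toXContent(builder, ToXContent.EMPTY_PARAMS);
        // builder now holds: {"name":"status","type":"numeric"}
    }
}
```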
+ */ + +package org.opensearch.index.compositeindex.datacube.startree; + +import org.opensearch.common.annotation.ExperimentalApi; +import org.opensearch.core.xcontent.ToXContent; +import org.opensearch.core.xcontent.XContentBuilder; +import org.opensearch.index.compositeindex.datacube.Dimension; +import org.opensearch.index.compositeindex.datacube.Metric; + +import java.io.IOException; +import java.util.List; +import java.util.Objects; + +/** + * Star tree field which contains dimensions, metrics and specs + * + * @opensearch.experimental + */ +@ExperimentalApi +public class StarTreeField implements ToXContent { + private final String name; + private final List dimensionsOrder; + private final List metrics; + private final StarTreeFieldConfiguration starTreeConfig; + + public StarTreeField(String name, List dimensions, List metrics, StarTreeFieldConfiguration starTreeConfig) { + this.name = name; + this.dimensionsOrder = dimensions; + this.metrics = metrics; + this.starTreeConfig = starTreeConfig; + } + + public String getName() { + return name; + } + + public List getDimensionsOrder() { + return dimensionsOrder; + } + + public List getMetrics() { + return metrics; + } + + public StarTreeFieldConfiguration getStarTreeConfig() { + return starTreeConfig; + } + + @Override + public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + builder.startObject(); + builder.field("name", name); + if (dimensionsOrder != null && !dimensionsOrder.isEmpty()) { + builder.startArray("ordered_dimensions"); + for (Dimension dimension : dimensionsOrder) { + dimension.toXContent(builder, params); + } + builder.endArray(); + } + if (metrics != null && !metrics.isEmpty()) { + builder.startArray("metrics"); + for (Metric metric : metrics) { + metric.toXContent(builder, params); + } + builder.endArray(); + } + starTreeConfig.toXContent(builder, params); + builder.endObject(); + return builder; + } + + @Override + public boolean equals(Object o) { + if (this == o) return true; + if (o == null || getClass() != o.getClass()) return false; + StarTreeField that = (StarTreeField) o; + return Objects.equals(name, that.name) + && Objects.equals(dimensionsOrder, that.dimensionsOrder) + && Objects.equals(metrics, that.metrics) + && Objects.equals(starTreeConfig, that.starTreeConfig); + } + + @Override + public int hashCode() { + return Objects.hash(name, dimensionsOrder, metrics, starTreeConfig); + } +} diff --git a/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/StarTreeFieldConfiguration.java b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/StarTreeFieldConfiguration.java new file mode 100644 index 0000000000000..755c064c2c60a --- /dev/null +++ b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/StarTreeFieldConfiguration.java @@ -0,0 +1,108 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. 
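Putting the pieces together, a StarTreeField is just an ordered list of dimensions, a list of metrics, and the configuration type defined in the next file of this patch. A minimal sketch with illustrative field names:

```java
import java.util.List;
import java.util.Set;

import org.opensearch.common.Rounding;
import org.opensearch.index.compositeindex.datacube.DateDimension;
import org.opensearch.index.compositeindex.datacube.Dimension;
import org.opensearch.index.compositeindex.datacube.Metric;
import org.opensearch.index.compositeindex.datacube.MetricStat;
import org.opensearch.index.compositeindex.datacube.NumericDimension;
import org.opensearch.index.compositeindex.datacube.startree.StarTreeField;
import org.opensearch.index.compositeindex.datacube.startree.StarTreeFieldConfiguration;

public class StarTreeFieldExample {
    public static void main(String[] args) {
        List<Dimension> dimensions = List.of(
            new DateDimension("@timestamp", List.of(Rounding.DateTimeUnit.HOUR_OF_DAY)),
            new NumericDimension("status")
        );
        List<Metric> metrics = List.of(new Metric("size", List.of(MetricStat.SUM)));
        // StarTreeFieldConfiguration is defined in the following file of this patch.
        StarTreeFieldConfiguration config = new StarTreeFieldConfiguration(
            10000,   // max_leaf_docs
            Set.of(), // no dimensions skip star-node creation
            StarTreeFieldConfiguration.StarTreeBuildMode.ON_HEAP
        );
        StarTreeField field = new StarTreeField("startree", dimensions, metrics, config);
        System.out.println(field.getName()); // startree
    }
}
```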
+ */ + +package org.opensearch.index.compositeindex.datacube.startree; + +import org.opensearch.common.annotation.ExperimentalApi; +import org.opensearch.core.xcontent.ToXContent; +import org.opensearch.core.xcontent.XContentBuilder; + +import java.io.IOException; +import java.util.Locale; +import java.util.Objects; +import java.util.Set; +import java.util.concurrent.atomic.AtomicInteger; + +/** + * Star tree index specific configuration + * + * @opensearch.experimental + */ +@ExperimentalApi +public class StarTreeFieldConfiguration implements ToXContent { + + private final AtomicInteger maxLeafDocs = new AtomicInteger(); + private final Set skipStarNodeCreationInDims; + private final StarTreeBuildMode buildMode; + + public StarTreeFieldConfiguration(int maxLeafDocs, Set skipStarNodeCreationInDims, StarTreeBuildMode buildMode) { + this.maxLeafDocs.set(maxLeafDocs); + this.skipStarNodeCreationInDims = skipStarNodeCreationInDims; + this.buildMode = buildMode; + } + + @Override + public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + // build mode is internal and not part of user mappings config, hence not added as part of toXContent + builder.field("max_leaf_docs", maxLeafDocs.get()); + builder.startArray("skip_star_node_creation_for_dimensions"); + for (String dim : skipStarNodeCreationInDims) { + builder.value(dim); + } + builder.endArray(); + return builder; + } + + /** + * Star tree build mode using which sorting and aggregations are performed during index creation. + * + * @opensearch.experimental + */ + @ExperimentalApi + public enum StarTreeBuildMode { + // TODO : remove onheap support unless this proves useful + ON_HEAP("onheap"), + OFF_HEAP("offheap"); + + private final String typeName; + + StarTreeBuildMode(String typeName) { + this.typeName = typeName; + } + + public String getTypeName() { + return typeName; + } + + public static StarTreeBuildMode fromTypeName(String typeName) { + for (StarTreeBuildMode starTreeBuildMode : StarTreeBuildMode.values()) { + if (starTreeBuildMode.getTypeName().equalsIgnoreCase(typeName)) { + return starTreeBuildMode; + } + } + throw new IllegalArgumentException(String.format(Locale.ROOT, "Invalid star tree build mode: [%s] ", typeName)); + } + } + + public int maxLeafDocs() { + return maxLeafDocs.get(); + } + + public StarTreeBuildMode getBuildMode() { + return buildMode; + } + + public Set getSkipStarNodeCreationInDims() { + return skipStarNodeCreationInDims; + } + + @Override + public boolean equals(Object o) { + if (this == o) return true; + if (o == null || getClass() != o.getClass()) return false; + StarTreeFieldConfiguration that = (StarTreeFieldConfiguration) o; + return Objects.equals(maxLeafDocs.get(), that.maxLeafDocs.get()) + && Objects.equals(skipStarNodeCreationInDims, that.skipStarNodeCreationInDims) + && buildMode == that.buildMode; + } + + @Override + public int hashCode() { + return Objects.hash(maxLeafDocs.get(), skipStarNodeCreationInDims, buildMode); + } +} diff --git a/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/StarTreeIndexSettings.java b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/StarTreeIndexSettings.java new file mode 100644 index 0000000000000..a2ac545be3cc9 --- /dev/null +++ b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/StarTreeIndexSettings.java @@ -0,0 +1,116 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be 
licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +package org.opensearch.index.compositeindex.datacube.startree; + +import org.opensearch.common.Rounding; +import org.opensearch.common.settings.Setting; +import org.opensearch.index.compositeindex.datacube.MetricStat; +import org.opensearch.search.aggregations.bucket.histogram.DateHistogramAggregationBuilder; + +import java.util.Arrays; +import java.util.List; + +/** + * Index settings for star tree fields. The settings are final as right now + * there is no support for update of star tree mapping. + * + * @opensearch.experimental + */ +public class StarTreeIndexSettings { + + public static int STAR_TREE_MAX_DIMENSIONS_DEFAULT = 10; + /** + * This setting determines the max number of star tree fields that can be part of composite index mapping. For each + * star tree field, we will generate associated star tree index. + */ + public static final Setting STAR_TREE_MAX_FIELDS_SETTING = Setting.intSetting( + "index.composite_index.star_tree.max_fields", + 1, + 1, + 1, + Setting.Property.IndexScope, + Setting.Property.Final + ); + + /** + * This setting determines the max number of dimensions that can be part of star tree index field. Number of + * dimensions and associated cardinality has direct effect of star tree index size and query performance. + */ + public static final Setting STAR_TREE_MAX_DIMENSIONS_SETTING = Setting.intSetting( + "index.composite_index.star_tree.field.max_dimensions", + STAR_TREE_MAX_DIMENSIONS_DEFAULT, + 2, + 10, + Setting.Property.IndexScope, + Setting.Property.Final + ); + + /** + * This setting determines the max number of date intervals that can be part of star tree date field. + */ + public static final Setting STAR_TREE_MAX_DATE_INTERVALS_SETTING = Setting.intSetting( + "index.composite_index.star_tree.field.max_date_intervals", + 3, + 1, + 3, + Setting.Property.IndexScope, + Setting.Property.Final + ); + + /** + * This setting configures the default "maxLeafDocs" setting of star tree. This affects both query performance and + * star tree index size. Lesser the leaves, better the query latency but higher storage size and vice versa + *
+ * <p>
+ * We can remove this later or change it to an enum based constant setting. + * + * @opensearch.experimental + */ + public static final Setting STAR_TREE_DEFAULT_MAX_LEAF_DOCS = Setting.intSetting( + "index.composite_index.star_tree.default.max_leaf_docs", + 10000, + 1, + Setting.Property.IndexScope, + Setting.Property.Final + ); + + /** + * Default intervals for date dimension as part of star tree fields + */ + public static final Setting> DEFAULT_DATE_INTERVALS = Setting.listSetting( + "index.composite_index.star_tree.field.default.date_intervals", + Arrays.asList(Rounding.DateTimeUnit.MINUTES_OF_HOUR.shortName(), Rounding.DateTimeUnit.HOUR_OF_DAY.shortName()), + StarTreeIndexSettings::getTimeUnit, + Setting.Property.IndexScope, + Setting.Property.Final + ); + + /** + * Default metrics for metrics as part of star tree fields + */ + public static final Setting> DEFAULT_METRICS_LIST = Setting.listSetting( + "index.composite_index.star_tree.field.default.metrics", + Arrays.asList( + MetricStat.AVG.toString(), + MetricStat.COUNT.toString(), + MetricStat.SUM.toString(), + MetricStat.MAX.toString(), + MetricStat.MIN.toString() + ), + MetricStat::fromTypeName, + Setting.Property.IndexScope, + Setting.Property.Final + ); + + public static Rounding.DateTimeUnit getTimeUnit(String expression) { + if (!DateHistogramAggregationBuilder.DATE_FIELD_UNITS.containsKey(expression)) { + throw new IllegalArgumentException("unknown calendar intervals specified in star tree index mapping"); + } + return DateHistogramAggregationBuilder.DATE_FIELD_UNITS.get(expression); + } +} diff --git a/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/StarTreeValidator.java b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/StarTreeValidator.java new file mode 100644 index 0000000000000..cbed46604681d --- /dev/null +++ b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/StarTreeValidator.java @@ -0,0 +1,94 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. 
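These settings are all index-scoped and Final, so they can only be supplied at index creation and cannot be updated afterwards. A minimal sketch of overriding the defaults; interval names are assumed to be valid DATE_FIELD_UNITS keys such as "hour" or "day", per the getTimeUnit check above:

```java
import org.opensearch.common.settings.Settings;

public class StarTreeSettingsExample {
    public static void main(String[] args) {
        // Supplied with the create-index request; later updates are rejected.
        Settings indexSettings = Settings.builder()
            .put("index.composite_index.star_tree.default.max_leaf_docs", 5000)
            .putList("index.composite_index.star_tree.field.default.date_intervals", "hour", "day")
            .build();
        System.out.println(indexSettings.get("index.composite_index.star_tree.default.max_leaf_docs"));
    }
}
```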
+ */ + +package org.opensearch.index.compositeindex.datacube.startree; + +import org.opensearch.common.annotation.ExperimentalApi; +import org.opensearch.index.IndexSettings; +import org.opensearch.index.compositeindex.CompositeIndexSettings; +import org.opensearch.index.compositeindex.datacube.Dimension; +import org.opensearch.index.compositeindex.datacube.Metric; +import org.opensearch.index.mapper.CompositeMappedFieldType; +import org.opensearch.index.mapper.MappedFieldType; +import org.opensearch.index.mapper.MapperService; +import org.opensearch.index.mapper.StarTreeMapper; + +import java.util.Locale; +import java.util.Set; + +/** + * Validations for star tree fields as part of mappings + * + * @opensearch.experimental + */ +@ExperimentalApi +public class StarTreeValidator { + public static void validate(MapperService mapperService, CompositeIndexSettings compositeIndexSettings, IndexSettings indexSettings) { + Set compositeFieldTypes = mapperService.getCompositeFieldTypes(); + if (compositeFieldTypes.size() > StarTreeIndexSettings.STAR_TREE_MAX_FIELDS_SETTING.get(indexSettings.getSettings())) { + throw new IllegalArgumentException( + String.format( + Locale.ROOT, + "Index cannot have more than [%s] star tree fields", + StarTreeIndexSettings.STAR_TREE_MAX_FIELDS_SETTING.get(indexSettings.getSettings()) + ) + ); + } + for (CompositeMappedFieldType compositeFieldType : compositeFieldTypes) { + if (!(compositeFieldType instanceof StarTreeMapper.StarTreeFieldType)) { + continue; + } + if (!compositeIndexSettings.isStarTreeIndexCreationEnabled()) { + throw new IllegalArgumentException( + String.format( + Locale.ROOT, + "star tree index cannot be created, enable it using [%s] setting", + CompositeIndexSettings.STAR_TREE_INDEX_ENABLED_SETTING.getKey() + ) + ); + } + StarTreeMapper.StarTreeFieldType dataCubeFieldType = (StarTreeMapper.StarTreeFieldType) compositeFieldType; + for (Dimension dim : dataCubeFieldType.getDimensions()) { + MappedFieldType ft = mapperService.fieldType(dim.getField()); + if (ft == null) { + throw new IllegalArgumentException( + String.format(Locale.ROOT, "unknown dimension field [%s] as part of star tree field", dim.getField()) + ); + } + if (ft.isAggregatable() == false) { + throw new IllegalArgumentException( + String.format( + Locale.ROOT, + "Aggregations not supported for the dimension field [%s] with field type [%s] as part of star tree field", + dim.getField(), + ft.typeName() + ) + ); + } + } + for (Metric metric : dataCubeFieldType.getMetrics()) { + MappedFieldType ft = mapperService.fieldType(metric.getField()); + if (ft == null) { + throw new IllegalArgumentException( + String.format(Locale.ROOT, "unknown metric field [%s] as part of star tree field", metric.getField()) + ); + } + if (ft.isAggregatable() == false) { + throw new IllegalArgumentException( + String.format( + Locale.ROOT, + "Aggregations not supported for the metrics field [%s] with field type [%s] as part of star tree field", + metric.getField(), + ft.typeName() + ) + ); + } + } + } + } +} diff --git a/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/package-info.java b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/package-info.java new file mode 100644 index 0000000000000..4f4e670478e2f --- /dev/null +++ b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/package-info.java @@ -0,0 +1,11 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be 
licensed under the Apache-2.0 license or a + * compatible open source license. + */ +/** + * Core classes for handling star tree index. + */ +package org.opensearch.index.compositeindex.datacube.startree; diff --git a/server/src/main/java/org/opensearch/index/compositeindex/package-info.java b/server/src/main/java/org/opensearch/index/compositeindex/package-info.java new file mode 100644 index 0000000000000..59f18efec26b1 --- /dev/null +++ b/server/src/main/java/org/opensearch/index/compositeindex/package-info.java @@ -0,0 +1,13 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +/** + * Core classes for handling composite indices. + * @opensearch.experimental + */ +package org.opensearch.index.compositeindex; diff --git a/server/src/main/java/org/opensearch/index/mapper/CompositeDataCubeFieldType.java b/server/src/main/java/org/opensearch/index/mapper/CompositeDataCubeFieldType.java new file mode 100644 index 0000000000000..baf6442f0c08c --- /dev/null +++ b/server/src/main/java/org/opensearch/index/mapper/CompositeDataCubeFieldType.java @@ -0,0 +1,56 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +package org.opensearch.index.mapper; + +import org.opensearch.common.annotation.ExperimentalApi; +import org.opensearch.index.compositeindex.datacube.Dimension; +import org.opensearch.index.compositeindex.datacube.Metric; + +import java.util.ArrayList; +import java.util.HashSet; +import java.util.List; +import java.util.Set; + +/** + * Base class for multi field data cube fields + * + * @opensearch.experimental + */ +@ExperimentalApi +public abstract class CompositeDataCubeFieldType extends CompositeMappedFieldType { + public static final String NAME = "name"; + public static final String TYPE = "type"; + private final List dimensions; + private final List metrics; + + public CompositeDataCubeFieldType(String name, List dims, List metrics, CompositeFieldType type) { + super(name, getFields(dims, metrics), type); + this.dimensions = dims; + this.metrics = metrics; + } + + private static List getFields(List dims, List metrics) { + Set fields = new HashSet<>(); + for (Dimension dim : dims) { + fields.add(dim.getField()); + } + for (Metric metric : metrics) { + fields.add(metric.getField()); + } + return new ArrayList<>(fields); + } + + public List getDimensions() { + return dimensions; + } + + public List getMetrics() { + return metrics; + } +} diff --git a/server/src/main/java/org/opensearch/index/mapper/CompositeMappedFieldType.java b/server/src/main/java/org/opensearch/index/mapper/CompositeMappedFieldType.java new file mode 100644 index 0000000000000..f52ce29a86dd2 --- /dev/null +++ b/server/src/main/java/org/opensearch/index/mapper/CompositeMappedFieldType.java @@ -0,0 +1,75 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. 
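The validator above gates star tree creation on a composite-index setting. A minimal sketch of enabling it, assuming STAR_TREE_INDEX_ENABLED_SETTING is a boolean setting as the isStarTreeIndexCreationEnabled check implies:

```java
import org.opensearch.common.settings.Settings;
import org.opensearch.index.compositeindex.CompositeIndexSettings;

public class CompositeIndexSettingsExample {
    public static void main(String[] args) {
        // Without this, StarTreeValidator.validate(...) rejects star tree mappings.
        Settings settings = Settings.builder()
            .put(CompositeIndexSettings.STAR_TREE_INDEX_ENABLED_SETTING.getKey(), true)
            .build();
        System.out.println(settings.get(CompositeIndexSettings.STAR_TREE_INDEX_ENABLED_SETTING.getKey()));
    }
}
```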
+ */ + +package org.opensearch.index.mapper; + +import org.opensearch.common.annotation.ExperimentalApi; + +import java.util.Collections; +import java.util.List; +import java.util.Map; + +/** + * Base class for composite field types + * + * @opensearch.experimental + */ +@ExperimentalApi +public abstract class CompositeMappedFieldType extends MappedFieldType { + private final List fields; + private final CompositeFieldType type; + + public CompositeMappedFieldType( + String name, + boolean isIndexed, + boolean isStored, + boolean hasDocValues, + TextSearchInfo textSearchInfo, + Map meta, + List fields, + CompositeFieldType type + ) { + super(name, isIndexed, isStored, hasDocValues, textSearchInfo, meta); + this.fields = fields; + this.type = type; + } + + public CompositeMappedFieldType(String name, List fields, CompositeFieldType type) { + this(name, false, false, false, TextSearchInfo.NONE, Collections.emptyMap(), fields, type); + } + + /** + * Supported composite field types + */ + public enum CompositeFieldType { + STAR_TREE("star_tree"); + + private final String name; + + CompositeFieldType(String name) { + this.name = name; + } + + public String getName() { + return name; + } + + public static CompositeFieldType fromName(String name) { + for (CompositeFieldType metric : CompositeFieldType.values()) { + if (metric.getName().equalsIgnoreCase(name)) { + return metric; + } + } + throw new IllegalArgumentException("Invalid metric stat: " + name); + } + } + + public List fields() { + return fields; + } +} diff --git a/server/src/main/java/org/opensearch/index/mapper/Mapper.java b/server/src/main/java/org/opensearch/index/mapper/Mapper.java index bd5d3f15c0706..46a5050d4fc18 100644 --- a/server/src/main/java/org/opensearch/index/mapper/Mapper.java +++ b/server/src/main/java/org/opensearch/index/mapper/Mapper.java @@ -253,6 +253,11 @@ public boolean isWithinMultiField() { } Mapper.Builder parse(String name, Map node, ParserContext parserContext) throws MapperParsingException; + + default Mapper.Builder parse(String name, Map node, ParserContext parserContext, ObjectMapper.Builder objBuilder) + throws MapperParsingException { + throw new UnsupportedOperationException("should not be invoked"); + } } private final String simpleName; diff --git a/server/src/main/java/org/opensearch/index/mapper/MapperService.java b/server/src/main/java/org/opensearch/index/mapper/MapperService.java index a1f3894c9f14c..c2e7411a3b47a 100644 --- a/server/src/main/java/org/opensearch/index/mapper/MapperService.java +++ b/server/src/main/java/org/opensearch/index/mapper/MapperService.java @@ -650,6 +650,23 @@ public Iterable fieldTypes() { return this.mapper == null ? Collections.emptySet() : this.mapper.fieldTypes(); } + public boolean isCompositeIndexPresent() { + return this.mapper != null && !getCompositeFieldTypes().isEmpty(); + } + + public Set getCompositeFieldTypes() { + Set compositeMappedFieldTypes = new HashSet<>(); + if (this.mapper == null) { + return Collections.emptySet(); + } + for (MappedFieldType type : this.mapper.fieldTypes()) { + if (type instanceof CompositeMappedFieldType) { + compositeMappedFieldTypes.add((CompositeMappedFieldType) type); + } + } + return compositeMappedFieldTypes; + } + public ObjectMapper getObjectMapper(String name) { return this.mapper == null ? 
null : this.mapper.objectMappers().get(name); } diff --git a/server/src/main/java/org/opensearch/index/mapper/ObjectMapper.java b/server/src/main/java/org/opensearch/index/mapper/ObjectMapper.java index 92ffdb60e6cde..be3adfe8b2c4e 100644 --- a/server/src/main/java/org/opensearch/index/mapper/ObjectMapper.java +++ b/server/src/main/java/org/opensearch/index/mapper/ObjectMapper.java @@ -42,9 +42,11 @@ import org.opensearch.common.collect.CopyOnWriteHashMap; import org.opensearch.common.logging.DeprecationLogger; import org.opensearch.common.settings.Settings; +import org.opensearch.common.util.FeatureFlags; import org.opensearch.common.xcontent.support.XContentMapValues; import org.opensearch.core.xcontent.ToXContent; import org.opensearch.core.xcontent.XContentBuilder; +import org.opensearch.index.compositeindex.datacube.startree.StarTreeIndexSettings; import org.opensearch.index.mapper.MapperService.MergeReason; import java.io.IOException; @@ -176,6 +178,7 @@ public void setIncludeInRoot(boolean value) { * @opensearch.internal */ @SuppressWarnings("rawtypes") + @PublicApi(since = "1.0.0") public static class Builder extends Mapper.Builder { protected Explicit enabled = new Explicit<>(true, false); @@ -262,14 +265,25 @@ public static class TypeParser implements Mapper.TypeParser { public Mapper.Builder parse(String name, Map node, ParserContext parserContext) throws MapperParsingException { ObjectMapper.Builder builder = new Builder(name); parseNested(name, node, builder, parserContext); + Object compositeField = null; for (Iterator> iterator = node.entrySet().iterator(); iterator.hasNext();) { Map.Entry entry = iterator.next(); String fieldName = entry.getKey(); Object fieldNode = entry.getValue(); - if (parseObjectOrDocumentTypeProperties(fieldName, fieldNode, parserContext, builder)) { + if (fieldName.equals("composite")) { + compositeField = fieldNode; iterator.remove(); + } else { + if (parseObjectOrDocumentTypeProperties(fieldName, fieldNode, parserContext, builder)) { + iterator.remove(); + } } } + // Important : Composite field is made up of 2 or more source fields of the index, so this must be called + // after parsing all other properties + if (compositeField != null) { + parseCompositeField(builder, (Map) compositeField, parserContext); + } return builder; } @@ -407,6 +421,96 @@ protected static void parseDerived(ObjectMapper.Builder objBuilder, Map compositeNode, + ParserContext parserContext + ) { + if (!FeatureFlags.isEnabled(FeatureFlags.STAR_TREE_INDEX_SETTING)) { + throw new IllegalArgumentException( + "star tree index is under an experimental feature and can be activated only by enabling " + + FeatureFlags.STAR_TREE_INDEX_SETTING.getKey() + + " feature flag in the JVM options" + ); + } + Iterator> iterator = compositeNode.entrySet().iterator(); + if (compositeNode.size() > StarTreeIndexSettings.STAR_TREE_MAX_FIELDS_SETTING.get(parserContext.getSettings())) { + throw new IllegalArgumentException( + String.format( + Locale.ROOT, + "Composite fields cannot have more than [%s] fields", + StarTreeIndexSettings.STAR_TREE_MAX_FIELDS_SETTING.get(parserContext.getSettings()) + ) + ); + } + while (iterator.hasNext()) { + Map.Entry entry = iterator.next(); + String fieldName = entry.getKey(); + // Should accept empty arrays, as a work around for when the + // user can't provide an empty Map. 
(PHP for example) + boolean isEmptyList = entry.getValue() instanceof List && ((List) entry.getValue()).isEmpty(); + if (entry.getValue() instanceof Map) { + @SuppressWarnings("unchecked") + Map propNode = (Map) entry.getValue(); + String type; + Object typeNode = propNode.get("type"); + if (typeNode != null) { + type = typeNode.toString(); + } else { + // lets see if we can derive this... + throw new MapperParsingException("No type specified for field [" + fieldName + "]"); + } + Mapper.TypeParser typeParser = getSupportedCompositeTypeParser(type, parserContext); + if (typeParser == null) { + throw new MapperParsingException("No handler for type [" + type + "] declared on field [" + fieldName + "]"); + } + String[] fieldNameParts = fieldName.split("\\."); + // field name is just ".", which is invalid + if (fieldNameParts.length < 1) { + throw new MapperParsingException("Invalid field name " + fieldName); + } + String realFieldName = fieldNameParts[fieldNameParts.length - 1]; + Mapper.Builder fieldBuilder = typeParser.parse(realFieldName, propNode, parserContext, objBuilder); + for (int i = fieldNameParts.length - 2; i >= 0; --i) { + ObjectMapper.Builder intermediate = new ObjectMapper.Builder<>(fieldNameParts[i]); + intermediate.add(fieldBuilder); + fieldBuilder = intermediate; + } + objBuilder.add(fieldBuilder); + propNode.remove("type"); + DocumentMapperParser.checkNoRemainingFields(fieldName, propNode, parserContext.indexVersionCreated()); + iterator.remove(); + } else if (isEmptyList) { + iterator.remove(); + } else { + throw new MapperParsingException( + "Expected map for property [fields] on field [" + fieldName + "] but got a " + fieldName.getClass() + ); + } + } + + DocumentMapperParser.checkNoRemainingFields( + compositeNode, + parserContext.indexVersionCreated(), + "DocType mapping definition has unsupported parameters: " + ); + } + + private static Mapper.TypeParser getSupportedCompositeTypeParser(String type, ParserContext parserContext) { + switch (type) { + case StarTreeMapper.CONTENT_TYPE: + return parserContext.typeParser(type); + default: + throw new IllegalArgumentException( + String.format(Locale.ROOT, "Type [%s] isn't supported in composite field context.", type) + ); + } + } + protected static void parseProperties(ObjectMapper.Builder objBuilder, Map propsNode, ParserContext parserContext) { Iterator> iterator = propsNode.entrySet().iterator(); while (iterator.hasNext()) { diff --git a/server/src/main/java/org/opensearch/index/mapper/RootObjectMapper.java b/server/src/main/java/org/opensearch/index/mapper/RootObjectMapper.java index 9504e6eafc046..e06e5be4633f9 100644 --- a/server/src/main/java/org/opensearch/index/mapper/RootObjectMapper.java +++ b/server/src/main/java/org/opensearch/index/mapper/RootObjectMapper.java @@ -177,15 +177,26 @@ public Mapper.Builder parse(String name, Map node, ParserContext RootObjectMapper.Builder builder = new Builder(name); Iterator> iterator = node.entrySet().iterator(); + Object compositeField = null; while (iterator.hasNext()) { Map.Entry entry = iterator.next(); String fieldName = entry.getKey(); Object fieldNode = entry.getValue(); - if (parseObjectOrDocumentTypeProperties(fieldName, fieldNode, parserContext, builder) - || processField(builder, fieldName, fieldNode, parserContext)) { + if (fieldName.equals("composite")) { + compositeField = fieldNode; iterator.remove(); + } else { + if (parseObjectOrDocumentTypeProperties(fieldName, fieldNode, parserContext, builder) + || processField(builder, fieldName, fieldNode, 
parserContext)) { + iterator.remove(); + } } } + // Important : Composite field is made up of 2 or more source properties of the index, so this must be called + // after parsing all other properties + if (compositeField != null) { + parseCompositeField(builder, (Map) compositeField, parserContext); + } return builder; } diff --git a/server/src/main/java/org/opensearch/index/mapper/StarTreeMapper.java b/server/src/main/java/org/opensearch/index/mapper/StarTreeMapper.java new file mode 100644 index 0000000000000..d2debe762e9be --- /dev/null +++ b/server/src/main/java/org/opensearch/index/mapper/StarTreeMapper.java @@ -0,0 +1,406 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +package org.opensearch.index.mapper; + +import org.apache.lucene.search.Query; +import org.opensearch.common.annotation.ExperimentalApi; +import org.opensearch.common.xcontent.support.XContentMapValues; +import org.opensearch.index.compositeindex.datacube.Dimension; +import org.opensearch.index.compositeindex.datacube.DimensionFactory; +import org.opensearch.index.compositeindex.datacube.Metric; +import org.opensearch.index.compositeindex.datacube.MetricStat; +import org.opensearch.index.compositeindex.datacube.startree.StarTreeField; +import org.opensearch.index.compositeindex.datacube.startree.StarTreeFieldConfiguration; +import org.opensearch.index.compositeindex.datacube.startree.StarTreeIndexSettings; +import org.opensearch.index.query.QueryShardContext; +import org.opensearch.search.lookup.SearchLookup; + +import java.util.ArrayList; +import java.util.LinkedHashSet; +import java.util.LinkedList; +import java.util.List; +import java.util.Locale; +import java.util.Map; +import java.util.Optional; +import java.util.Set; +import java.util.stream.Collectors; + +/** + * A field mapper for star tree fields + * + * @opensearch.experimental + */ +@ExperimentalApi +public class StarTreeMapper extends ParametrizedFieldMapper { + public static final String CONTENT_TYPE = "star_tree"; + public static final String CONFIG = "config"; + public static final String MAX_LEAF_DOCS = "max_leaf_docs"; + public static final String SKIP_STAR_NODE_IN_DIMS = "skip_star_node_creation_for_dimensions"; + public static final String BUILD_MODE = "build_mode"; + public static final String ORDERED_DIMENSIONS = "ordered_dimensions"; + public static final String METRICS = "metrics"; + public static final String STATS = "stats"; + + @Override + public ParametrizedFieldMapper.Builder getMergeBuilder() { + return new Builder(simpleName(), objBuilder).init(this); + + } + + /** + * Builder for the star tree field mapper + * + * @opensearch.internal + */ + public static class Builder extends ParametrizedFieldMapper.Builder { + private ObjectMapper.Builder objbuilder; + private static final Set> ALLOWED_DIMENSION_MAPPER_BUILDERS = Set.of( + NumberFieldMapper.Builder.class, + DateFieldMapper.Builder.class + ); + private static final Set> ALLOWED_METRIC_MAPPER_BUILDERS = Set.of(NumberFieldMapper.Builder.class); + + @SuppressWarnings("unchecked") + private final Parameter config = new Parameter<>(CONFIG, false, () -> null, (name, context, nodeObj) -> { + if (nodeObj instanceof Map) { + Map paramMap = (Map) nodeObj; + int maxLeafDocs = XContentMapValues.nodeIntegerValue( + paramMap.get(MAX_LEAF_DOCS), + StarTreeIndexSettings.STAR_TREE_DEFAULT_MAX_LEAF_DOCS.get(context.getSettings()) + ); + if 
(maxLeafDocs < 1) { + throw new IllegalArgumentException( + String.format(Locale.ROOT, "%s [%s] must be greater than 0", MAX_LEAF_DOCS, maxLeafDocs) + ); + } + paramMap.remove(MAX_LEAF_DOCS); + Set skipStarInDims = new LinkedHashSet<>( + List.of(XContentMapValues.nodeStringArrayValue(paramMap.getOrDefault(SKIP_STAR_NODE_IN_DIMS, new ArrayList()))) + ); + paramMap.remove(SKIP_STAR_NODE_IN_DIMS); + // TODO : change this to off heap once off heap gets implemented + StarTreeFieldConfiguration.StarTreeBuildMode buildMode = StarTreeFieldConfiguration.StarTreeBuildMode.ON_HEAP; + + List dimensions = buildDimensions(name, paramMap, context); + paramMap.remove(ORDERED_DIMENSIONS); + List metrics = buildMetrics(name, paramMap, context); + paramMap.remove(METRICS); + paramMap.remove(CompositeDataCubeFieldType.NAME); + for (String dim : skipStarInDims) { + if (dimensions.stream().filter(d -> d.getField().equals(dim)).findAny().isEmpty()) { + throw new IllegalArgumentException( + String.format( + Locale.ROOT, + "[%s] in skip_star_node_creation_for_dimensions should be part of ordered_dimensions", + dim + ) + ); + } + } + StarTreeFieldConfiguration spec = new StarTreeFieldConfiguration(maxLeafDocs, skipStarInDims, buildMode); + DocumentMapperParser.checkNoRemainingFields( + paramMap, + context.indexVersionCreated(), + "Star tree mapping definition has unsupported parameters: " + ); + return new StarTreeField(this.name, dimensions, metrics, spec); + + } else { + throw new IllegalArgumentException( + String.format(Locale.ROOT, "unable to parse config for star tree field [%s]", this.name) + ); + } + }, m -> toType(m).starTreeField); + + /** + * Build dimensions from mapping + */ + @SuppressWarnings("unchecked") + private List buildDimensions(String fieldName, Map map, Mapper.TypeParser.ParserContext context) { + Object dims = XContentMapValues.extractValue("ordered_dimensions", map); + if (dims == null) { + throw new IllegalArgumentException( + String.format(Locale.ROOT, "ordered_dimensions is required for star tree field [%s]", fieldName) + ); + } + List dimensions = new LinkedList<>(); + if (dims instanceof List) { + List dimList = (List) dims; + if (dimList.size() > context.getSettings() + .getAsInt( + StarTreeIndexSettings.STAR_TREE_MAX_DIMENSIONS_SETTING.getKey(), + StarTreeIndexSettings.STAR_TREE_MAX_DIMENSIONS_DEFAULT + )) { + throw new IllegalArgumentException( + String.format( + Locale.ROOT, + "ordered_dimensions cannot have more than %s dimensions for star tree field [%s]", + context.getSettings() + .getAsInt( + StarTreeIndexSettings.STAR_TREE_MAX_DIMENSIONS_SETTING.getKey(), + StarTreeIndexSettings.STAR_TREE_MAX_DIMENSIONS_DEFAULT + ), + fieldName + ) + ); + } + if (dimList.size() < 2) { + throw new IllegalArgumentException( + String.format(Locale.ROOT, "Atleast two dimensions are required to build star tree index field [%s]", fieldName) + ); + } + for (Object dim : dimList) { + dimensions.add(getDimension(fieldName, dim, context)); + } + } else { + throw new MapperParsingException( + String.format(Locale.ROOT, "unable to parse ordered_dimensions for star tree field [%s]", fieldName) + ); + } + return dimensions; + } + + /** + * Get dimension based on mapping + */ + @SuppressWarnings("unchecked") + private Dimension getDimension(String fieldName, Object dimensionMapping, Mapper.TypeParser.ParserContext context) { + Dimension dimension; + Map dimensionMap = (Map) dimensionMapping; + String name = (String) XContentMapValues.extractValue(CompositeDataCubeFieldType.NAME, dimensionMap); + 
dimensionMap.remove(CompositeDataCubeFieldType.NAME); + if (this.objbuilder == null || this.objbuilder.mappersBuilders == null) { + String type = (String) XContentMapValues.extractValue(CompositeDataCubeFieldType.TYPE, dimensionMap); + dimensionMap.remove(CompositeDataCubeFieldType.TYPE); + if (type == null) { + throw new MapperParsingException( + String.format(Locale.ROOT, "unable to parse ordered_dimensions for star tree field [%s]", fieldName) + ); + } + return DimensionFactory.parseAndCreateDimension(name, type, dimensionMap, context); + } else { + Optional dimBuilder = findMapperBuilderByName(name, this.objbuilder.mappersBuilders); + if (dimBuilder.isEmpty()) { + throw new IllegalArgumentException(String.format(Locale.ROOT, "unknown dimension field [%s]", name)); + } + if (!isBuilderAllowedForDimension(dimBuilder.get())) { + throw new IllegalArgumentException( + String.format( + Locale.ROOT, + "unsupported field type associated with dimension [%s] as part of star tree field [%s]", + name, + fieldName + ) + ); + } + dimension = DimensionFactory.parseAndCreateDimension(name, dimBuilder.get(), dimensionMap, context); + } + DocumentMapperParser.checkNoRemainingFields( + dimensionMap, + context.indexVersionCreated(), + "Star tree mapping definition has unsupported parameters: " + ); + return dimension; + } + + /** + * Build metrics from mapping + */ + @SuppressWarnings("unchecked") + private List buildMetrics(String fieldName, Map map, Mapper.TypeParser.ParserContext context) { + List metrics = new LinkedList<>(); + Object metricsFromInput = XContentMapValues.extractValue(METRICS, map); + if (metricsFromInput == null) { + throw new IllegalArgumentException( + String.format(Locale.ROOT, "metrics section is required for star tree field [%s]", fieldName) + ); + } + if (metricsFromInput instanceof List) { + List metricsList = (List) metricsFromInput; + for (Object metric : metricsList) { + Map metricMap = (Map) metric; + String name = (String) XContentMapValues.extractValue(CompositeDataCubeFieldType.NAME, metricMap); + metricMap.remove(CompositeDataCubeFieldType.NAME); + if (objbuilder == null || objbuilder.mappersBuilders == null) { + metrics.add(getMetric(name, metricMap, context)); + } else { + Optional meticBuilder = findMapperBuilderByName(name, this.objbuilder.mappersBuilders); + if (meticBuilder.isEmpty()) { + throw new IllegalArgumentException(String.format(Locale.ROOT, "unknown metric field [%s]", name)); + } + if (!isBuilderAllowedForMetric(meticBuilder.get())) { + throw new IllegalArgumentException( + String.format(Locale.ROOT, "non-numeric field type is associated with star tree metric [%s]", this.name) + ); + } + metrics.add(getMetric(name, metricMap, context)); + DocumentMapperParser.checkNoRemainingFields( + metricMap, + context.indexVersionCreated(), + "Star tree mapping definition has unsupported parameters: " + ); + } + } + } else { + throw new MapperParsingException(String.format(Locale.ROOT, "unable to parse metrics for star tree field [%s]", this.name)); + } + + return metrics; + } + + @SuppressWarnings("unchecked") + private Metric getMetric(String name, Map metric, Mapper.TypeParser.ParserContext context) { + List metricTypes; + List metricStrings = XContentMapValues.extractRawValues(STATS, metric) + .stream() + .map(Object::toString) + .collect(Collectors.toList()); + metric.remove(STATS); + if (metricStrings.isEmpty()) { + metricTypes = new ArrayList<>(StarTreeIndexSettings.DEFAULT_METRICS_LIST.get(context.getSettings())); + } else { + Set metricSet = new 
LinkedHashSet<>(); + for (String metricString : metricStrings) { + metricSet.add(MetricStat.fromTypeName(metricString)); + } + metricTypes = new ArrayList<>(metricSet); + } + return new Metric(name, metricTypes); + } + + @Override + protected List> getParameters() { + return List.of(config); + } + + private static boolean isBuilderAllowedForDimension(Mapper.Builder builder) { + return ALLOWED_DIMENSION_MAPPER_BUILDERS.stream().anyMatch(allowedType -> allowedType.isInstance(builder)); + } + + private static boolean isBuilderAllowedForMetric(Mapper.Builder builder) { + return ALLOWED_METRIC_MAPPER_BUILDERS.stream().anyMatch(allowedType -> allowedType.isInstance(builder)); + } + + private Optional findMapperBuilderByName(String field, List mappersBuilders) { + return mappersBuilders.stream().filter(builder -> builder.name().equals(field)).findFirst(); + } + + public Builder(String name, ObjectMapper.Builder objBuilder) { + super(name); + this.objbuilder = objBuilder; + } + + @Override + public ParametrizedFieldMapper build(BuilderContext context) { + StarTreeFieldType type = new StarTreeFieldType(name, this.config.get()); + return new StarTreeMapper(name, type, this, objbuilder); + } + } + + private static StarTreeMapper toType(FieldMapper in) { + return (StarTreeMapper) in; + } + + /** + * Concrete parse for star tree type + * + * @opensearch.internal + */ + public static class TypeParser implements Mapper.TypeParser { + + /** + * default constructor of VectorFieldMapper.TypeParser + */ + public TypeParser() {} + + @Override + public Mapper.Builder parse(String name, Map node, ParserContext context) throws MapperParsingException { + Builder builder = new StarTreeMapper.Builder(name, null); + builder.parse(name, context, node); + return builder; + } + + @Override + public Mapper.Builder parse(String name, Map node, ParserContext context, ObjectMapper.Builder objBuilder) + throws MapperParsingException { + Builder builder = new StarTreeMapper.Builder(name, objBuilder); + builder.parse(name, context, node); + return builder; + } + } + + private final StarTreeField starTreeField; + + private final ObjectMapper.Builder objBuilder; + + protected StarTreeMapper(String simpleName, StarTreeFieldType type, Builder builder, ObjectMapper.Builder objbuilder) { + super(simpleName, type, MultiFields.empty(), CopyTo.empty()); + this.starTreeField = builder.config.get(); + this.objBuilder = objbuilder; + } + + @Override + public StarTreeFieldType fieldType() { + return (StarTreeFieldType) super.fieldType(); + } + + @Override + protected String contentType() { + return CONTENT_TYPE; + } + + @Override + protected void parseCreateField(ParseContext context) { + throw new MapperParsingException( + String.format( + Locale.ROOT, + "Field [%s] is a star tree field and cannot be added inside a document. 
Use the index API request parameters.", + name() + ) + ); + } + + /** + * Star tree mapped field type containing dimensions, metrics, star tree specs + * + * @opensearch.experimental + */ + @ExperimentalApi + public static final class StarTreeFieldType extends CompositeDataCubeFieldType { + + private final StarTreeFieldConfiguration starTreeConfig; + + public StarTreeFieldType(String name, StarTreeField starTreeField) { + super(name, starTreeField.getDimensionsOrder(), starTreeField.getMetrics(), CompositeFieldType.STAR_TREE); + this.starTreeConfig = starTreeField.getStarTreeConfig(); + } + + public StarTreeFieldConfiguration getStarTreeConfig() { + return starTreeConfig; + } + + @Override + public ValueFetcher valueFetcher(QueryShardContext context, SearchLookup searchLookup, String format) { + // TODO : evaluate later + throw new UnsupportedOperationException("Cannot fetch values for star tree field [" + name() + "]."); + } + + @Override + public String typeName() { + return CONTENT_TYPE; + } + + @Override + public Query termQuery(Object value, QueryShardContext context) { + // TODO : evaluate later + throw new UnsupportedOperationException("Cannot perform terms query on star tree field [" + name() + "]."); + } + } + +} diff --git a/server/src/main/java/org/opensearch/indices/IndicesModule.java b/server/src/main/java/org/opensearch/indices/IndicesModule.java index 033b163bb0d67..f7e52ce9fc1ae 100644 --- a/server/src/main/java/org/opensearch/indices/IndicesModule.java +++ b/server/src/main/java/org/opensearch/indices/IndicesModule.java @@ -70,6 +70,7 @@ import org.opensearch.index.mapper.RoutingFieldMapper; import org.opensearch.index.mapper.SeqNoFieldMapper; import org.opensearch.index.mapper.SourceFieldMapper; +import org.opensearch.index.mapper.StarTreeMapper; import org.opensearch.index.mapper.TextFieldMapper; import org.opensearch.index.mapper.VersionFieldMapper; import org.opensearch.index.mapper.WildcardFieldMapper; @@ -174,6 +175,7 @@ public static Map getMappers(List mappe mappers.put(ConstantKeywordFieldMapper.CONTENT_TYPE, new ConstantKeywordFieldMapper.TypeParser()); mappers.put(DerivedFieldMapper.CONTENT_TYPE, DerivedFieldMapper.PARSER); mappers.put(WildcardFieldMapper.CONTENT_TYPE, WildcardFieldMapper.PARSER); + mappers.put(StarTreeMapper.CONTENT_TYPE, new StarTreeMapper.TypeParser()); for (MapperPlugin mapperPlugin : mapperPlugins) { for (Map.Entry entry : mapperPlugin.getMappers().entrySet()) { diff --git a/server/src/main/java/org/opensearch/indices/IndicesService.java b/server/src/main/java/org/opensearch/indices/IndicesService.java index a7d879fc06981..902ca95643625 100644 --- a/server/src/main/java/org/opensearch/indices/IndicesService.java +++ b/server/src/main/java/org/opensearch/indices/IndicesService.java @@ -106,6 +106,7 @@ import org.opensearch.index.IndexSettings; import org.opensearch.index.analysis.AnalysisRegistry; import org.opensearch.index.cache.request.ShardRequestCache; +import org.opensearch.index.compositeindex.CompositeIndexSettings; import org.opensearch.index.engine.CommitStats; import org.opensearch.index.engine.EngineConfig; import org.opensearch.index.engine.EngineConfigFactory; @@ -356,6 +357,7 @@ public class IndicesService extends AbstractLifecycleComponent private volatile TimeValue clusterDefaultRefreshInterval; private final SearchRequestStats searchRequestStats; private final FileCache fileCache; + private final CompositeIndexSettings compositeIndexSettings; @Override protected void doStart() { @@ -391,7 +393,8 @@ public IndicesService( 
RecoverySettings recoverySettings, CacheService cacheService, RemoteStoreSettings remoteStoreSettings, - FileCache fileCache + FileCache fileCache, + CompositeIndexSettings compositeIndexSettings ) { this.settings = settings; this.threadPool = threadPool; @@ -498,6 +501,7 @@ protected void closeInternal() { .addSettingsUpdateConsumer(CLUSTER_DEFAULT_INDEX_REFRESH_INTERVAL_SETTING, this::onRefreshIntervalUpdate); this.recoverySettings = recoverySettings; this.remoteStoreSettings = remoteStoreSettings; + this.compositeIndexSettings = compositeIndexSettings; this.fileCache = fileCache; } @@ -558,6 +562,7 @@ public IndicesService( recoverySettings, cacheService, remoteStoreSettings, + null, null ); } @@ -939,7 +944,8 @@ private synchronized IndexService createIndexService( () -> allowExpensiveQueries, indexNameExpressionResolver, recoveryStateFactories, - fileCache + fileCache, + compositeIndexSettings ); for (IndexingOperationListener operationListener : indexingOperationListeners) { indexModule.addIndexOperationListener(operationListener); @@ -1030,7 +1036,8 @@ public synchronized MapperService createIndexMapperService(IndexMetadata indexMe () -> allowExpensiveQueries, indexNameExpressionResolver, recoveryStateFactories, - fileCache + fileCache, + compositeIndexSettings ); pluginsService.onIndexModule(indexModule); return indexModule.newIndexMapperService(xContentRegistry, mapperRegistry, scriptService); @@ -2098,4 +2105,8 @@ private TimeValue getClusterDefaultRefreshInterval() { public RemoteStoreSettings getRemoteStoreSettings() { return this.remoteStoreSettings; } + + public CompositeIndexSettings getCompositeIndexSettings() { + return this.compositeIndexSettings; + } } diff --git a/server/src/main/java/org/opensearch/node/Node.java b/server/src/main/java/org/opensearch/node/Node.java index 505c9264d62bb..85ef547e27787 100644 --- a/server/src/main/java/org/opensearch/node/Node.java +++ b/server/src/main/java/org/opensearch/node/Node.java @@ -149,6 +149,7 @@ import org.opensearch.index.IndexingPressureService; import org.opensearch.index.SegmentReplicationStatsTracker; import org.opensearch.index.analysis.AnalysisRegistry; +import org.opensearch.index.compositeindex.CompositeIndexSettings; import org.opensearch.index.engine.EngineFactory; import org.opensearch.index.recovery.RemoteStoreRestoreService; import org.opensearch.index.remote.RemoteIndexPathUploader; @@ -834,6 +835,7 @@ protected Node( final RecoverySettings recoverySettings = new RecoverySettings(settings, settingsModule.getClusterSettings()); final RemoteStoreSettings remoteStoreSettings = new RemoteStoreSettings(settings, settingsModule.getClusterSettings()); + final CompositeIndexSettings compositeIndexSettings = new CompositeIndexSettings(settings, settingsModule.getClusterSettings()); final IndexStorePlugin.DirectoryFactory remoteDirectoryFactory = new RemoteSegmentStoreDirectoryFactory( repositoriesServiceReference::get, @@ -874,7 +876,8 @@ protected Node( recoverySettings, cacheService, remoteStoreSettings, - fileCache + fileCache, + compositeIndexSettings ); final IngestService ingestService = new IngestService( diff --git a/server/src/test/java/org/opensearch/index/mapper/ObjectMapperTests.java b/server/src/test/java/org/opensearch/index/mapper/ObjectMapperTests.java index b10a7d8155056..504bc622ec12e 100644 --- a/server/src/test/java/org/opensearch/index/mapper/ObjectMapperTests.java +++ b/server/src/test/java/org/opensearch/index/mapper/ObjectMapperTests.java @@ -33,6 +33,8 @@ package org.opensearch.index.mapper; 
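For reference, a sketch of the mapping shape that the StarTreeMapper.Builder above parses. Field names (startree, @timestamp, status, size) are illustrative, and the calendar_intervals key is assumed to be the one backing DateDimension.CALENDAR_INTERVALS:

```java
public class StarTreeMappingExample {
    // Illustrative mapping accepted by the star_tree type parser above.
    static final String MAPPING = "{"
        + "\"composite\": {"
        + "  \"startree\": {"
        + "    \"type\": \"star_tree\","
        + "    \"config\": {"
        + "      \"max_leaf_docs\": 100,"
        + "      \"skip_star_node_creation_for_dimensions\": [\"status\"],"
        + "      \"ordered_dimensions\": ["
        + "        {\"name\": \"@timestamp\", \"calendar_intervals\": [\"hour\", \"day\"]},"
        + "        {\"name\": \"status\"}"
        + "      ],"
        + "      \"metrics\": [{\"name\": \"size\", \"stats\": [\"sum\", \"avg\"]}]"
        + "    }"
        + "  }"
        + "},"
        + "\"properties\": {"
        + "  \"@timestamp\": {\"type\": \"date\"},"
        + "  \"status\": {\"type\": \"integer\"},"
        + "  \"size\": {\"type\": \"integer\"}"
        + "}"
        + "}";
}
```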
import org.opensearch.common.compress.CompressedXContent; +import org.opensearch.common.settings.Settings; +import org.opensearch.common.util.FeatureFlags; import org.opensearch.common.xcontent.XContentFactory; import org.opensearch.core.common.bytes.BytesArray; import org.opensearch.core.xcontent.MediaTypeRegistry; @@ -46,6 +48,7 @@ import java.io.IOException; import java.util.Collection; +import static org.opensearch.common.util.FeatureFlags.STAR_TREE_INDEX; import static org.hamcrest.Matchers.containsString; public class ObjectMapperTests extends OpenSearchSingleNodeTestCase { @@ -487,6 +490,76 @@ public void testDerivedFields() throws Exception { assertEquals("date", mapper.typeName()); } + public void testCompositeFields() throws Exception { + String mapping = XContentFactory.jsonBuilder() + .startObject() + .startObject("tweet") + .startObject("composite") + .startObject("startree") + .field("type", "star_tree") + .startObject("config") + .startArray("ordered_dimensions") + .startObject() + .field("name", "@timestamp") + .endObject() + .startObject() + .field("name", "status") + .endObject() + .endArray() + .startArray("metrics") + .startObject() + .field("name", "status") + .endObject() + .startObject() + .field("name", "metric_field") + .endObject() + .endArray() + .endObject() + .endObject() + .endObject() + .startObject("properties") + .startObject("@timestamp") + .field("type", "date") + .endObject() + .startObject("status") + .field("type", "integer") + .endObject() + .startObject("metric_field") + .field("type", "integer") + .endObject() + .endObject() + .endObject() + .endObject() + .toString(); + + IllegalArgumentException ex = expectThrows( + IllegalArgumentException.class, + () -> createIndex("invalid").mapperService().documentMapperParser().parse("tweet", new CompressedXContent(mapping)) + ); + assertEquals( + "star tree index is under an experimental feature and can be activated only by enabling opensearch.experimental.feature.composite_index.star_tree.enabled feature flag in the JVM options", + ex.getMessage() + ); + + final Settings starTreeEnabledSettings = Settings.builder().put(STAR_TREE_INDEX, "true").build(); + FeatureFlags.initializeFeatureFlags(starTreeEnabledSettings); + + DocumentMapper documentMapper = createIndex("test").mapperService() + .documentMapperParser() + .parse("tweet", new CompressedXContent(mapping)); + + Mapper mapper = documentMapper.root().getMapper("startree"); + assertTrue(mapper instanceof StarTreeMapper); + StarTreeMapper starTreeMapper = (StarTreeMapper) mapper; + assertEquals("star_tree", starTreeMapper.fieldType().typeName()); + // Check that field in properties was parsed correctly as well + mapper = documentMapper.root().getMapper("@timestamp"); + assertNotNull(mapper); + assertEquals("date", mapper.typeName()); + + FeatureFlags.initializeFeatureFlags(Settings.EMPTY); + } + @Override protected Collection> getPlugins() { return pluginList(InternalSettingsPlugin.class); diff --git a/server/src/test/java/org/opensearch/index/mapper/StarTreeMapperTests.java b/server/src/test/java/org/opensearch/index/mapper/StarTreeMapperTests.java new file mode 100644 index 0000000000000..3144b1b007924 --- /dev/null +++ b/server/src/test/java/org/opensearch/index/mapper/StarTreeMapperTests.java @@ -0,0 +1,767 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. 
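The test above works by toggling the experimental feature flag around the mapping parse. A minimal sketch of that pattern, using the same FeatureFlags calls as the tests in this patch:

```java
import org.opensearch.common.settings.Settings;
import org.opensearch.common.util.FeatureFlags;

public class StarTreeFeatureFlagExample {
    public static void main(String[] args) {
        // Enable the experimental star tree flag for the current JVM...
        FeatureFlags.initializeFeatureFlags(
            Settings.builder().put(FeatureFlags.STAR_TREE_INDEX, true).build()
        );
        // ...exercise star-tree mappings here...
        // ...then reset so other code sees the default flags.
        FeatureFlags.initializeFeatureFlags(Settings.EMPTY);
    }
}
```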
+ */ + +package org.opensearch.index.mapper; + +import org.opensearch.common.CheckedConsumer; +import org.opensearch.common.Rounding; +import org.opensearch.common.settings.ClusterSettings; +import org.opensearch.common.settings.Settings; +import org.opensearch.common.util.FeatureFlags; +import org.opensearch.core.xcontent.XContentBuilder; +import org.opensearch.index.compositeindex.CompositeIndexSettings; +import org.opensearch.index.compositeindex.CompositeIndexValidator; +import org.opensearch.index.compositeindex.datacube.DateDimension; +import org.opensearch.index.compositeindex.datacube.Dimension; +import org.opensearch.index.compositeindex.datacube.Metric; +import org.opensearch.index.compositeindex.datacube.MetricStat; +import org.opensearch.index.compositeindex.datacube.NumericDimension; +import org.opensearch.index.compositeindex.datacube.startree.StarTreeField; +import org.opensearch.index.compositeindex.datacube.startree.StarTreeFieldConfiguration; +import org.junit.After; +import org.junit.Before; + +import java.io.IOException; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.Collections; +import java.util.HashSet; +import java.util.List; +import java.util.Set; + +import static org.hamcrest.Matchers.containsString; + +/** + * Tests for {@link StarTreeMapper}. + */ +public class StarTreeMapperTests extends MapperTestCase { + + @Before + public void setup() { + FeatureFlags.initializeFeatureFlags(Settings.builder().put(FeatureFlags.STAR_TREE_INDEX, true).build()); + } + + @After + public void teardown() { + FeatureFlags.initializeFeatureFlags(Settings.EMPTY); + } + + public void testValidStarTree() throws IOException { + MapperService mapperService = createMapperService(getExpandedMapping("status", "size")); + Set compositeFieldTypes = mapperService.getCompositeFieldTypes(); + for (CompositeMappedFieldType type : compositeFieldTypes) { + StarTreeMapper.StarTreeFieldType starTreeFieldType = (StarTreeMapper.StarTreeFieldType) type; + assertEquals("@timestamp", starTreeFieldType.getDimensions().get(0).getField()); + assertTrue(starTreeFieldType.getDimensions().get(0) instanceof DateDimension); + DateDimension dateDim = (DateDimension) starTreeFieldType.getDimensions().get(0); + List expectedTimeUnits = Arrays.asList( + Rounding.DateTimeUnit.DAY_OF_MONTH, + Rounding.DateTimeUnit.MONTH_OF_YEAR + ); + assertEquals(expectedTimeUnits, dateDim.getIntervals()); + assertEquals("status", starTreeFieldType.getDimensions().get(1).getField()); + assertEquals("size", starTreeFieldType.getMetrics().get(0).getField()); + List expectedMetrics = Arrays.asList(MetricStat.SUM, MetricStat.AVG); + assertEquals(expectedMetrics, starTreeFieldType.getMetrics().get(0).getMetrics()); + assertEquals(100, starTreeFieldType.getStarTreeConfig().maxLeafDocs()); + assertEquals(StarTreeFieldConfiguration.StarTreeBuildMode.ON_HEAP, starTreeFieldType.getStarTreeConfig().getBuildMode()); + assertEquals( + new HashSet<>(Arrays.asList("@timestamp", "status")), + starTreeFieldType.getStarTreeConfig().getSkipStarNodeCreationInDims() + ); + } + } + + public void testValidStarTreeDefaults() throws IOException { + MapperService mapperService = createMapperService(getMinMapping()); + Set compositeFieldTypes = mapperService.getCompositeFieldTypes(); + for (CompositeMappedFieldType type : compositeFieldTypes) { + StarTreeMapper.StarTreeFieldType starTreeFieldType = (StarTreeMapper.StarTreeFieldType) type; + assertEquals("@timestamp", starTreeFieldType.getDimensions().get(0).getField()); + 
assertTrue(starTreeFieldType.getDimensions().get(0) instanceof DateDimension); + DateDimension dateDim = (DateDimension) starTreeFieldType.getDimensions().get(0); + List expectedTimeUnits = Arrays.asList( + Rounding.DateTimeUnit.MINUTES_OF_HOUR, + Rounding.DateTimeUnit.HOUR_OF_DAY + ); + assertEquals(expectedTimeUnits, dateDim.getIntervals()); + assertEquals("status", starTreeFieldType.getDimensions().get(1).getField()); + assertEquals("status", starTreeFieldType.getMetrics().get(0).getField()); + List expectedMetrics = Arrays.asList( + MetricStat.AVG, + MetricStat.COUNT, + MetricStat.SUM, + MetricStat.MAX, + MetricStat.MIN + ); + assertEquals(expectedMetrics, starTreeFieldType.getMetrics().get(0).getMetrics()); + assertEquals(10000, starTreeFieldType.getStarTreeConfig().maxLeafDocs()); + assertEquals(StarTreeFieldConfiguration.StarTreeBuildMode.ON_HEAP, starTreeFieldType.getStarTreeConfig().getBuildMode()); + assertEquals(Collections.emptySet(), starTreeFieldType.getStarTreeConfig().getSkipStarNodeCreationInDims()); + } + } + + public void testInvalidDim() { + MapperParsingException ex = expectThrows( + MapperParsingException.class, + () -> createMapperService(getExpandedMapping("invalid", "size")) + ); + assertEquals("Failed to parse mapping [_doc]: unknown dimension field [invalid]", ex.getMessage()); + } + + public void testInvalidMetric() { + MapperParsingException ex = expectThrows( + MapperParsingException.class, + () -> createMapperService(getExpandedMapping("status", "invalid")) + ); + assertEquals("Failed to parse mapping [_doc]: unknown metric field [invalid]", ex.getMessage()); + } + + public void testNoMetrics() { + MapperParsingException ex = expectThrows( + MapperParsingException.class, + () -> createMapperService(getMinMapping(false, true, false, false)) + ); + assertThat( + ex.getMessage(), + containsString("Failed to parse mapping [_doc]: metrics section is required for star tree field [startree]") + ); + } + + public void testInvalidParam() { + MapperParsingException ex = expectThrows( + MapperParsingException.class, + () -> createMapperService(getInvalidMapping(false, false, false, false, true)) + ); + assertEquals( + "Failed to parse mapping [_doc]: Star tree mapping definition has unsupported parameters: [invalid : {invalid=invalid}]", + ex.getMessage() + ); + } + + public void testNoDims() { + MapperParsingException ex = expectThrows( + MapperParsingException.class, + () -> createMapperService(getMinMapping(true, false, false, false)) + ); + assertThat( + ex.getMessage(), + containsString("Failed to parse mapping [_doc]: ordered_dimensions is required for star tree field [startree]") + ); + } + + public void testMissingDims() { + MapperParsingException ex = expectThrows( + MapperParsingException.class, + () -> createMapperService(getMinMapping(false, false, true, false)) + ); + assertThat(ex.getMessage(), containsString("Failed to parse mapping [_doc]: unknown dimension field [@timestamp]")); + } + + public void testMissingMetrics() { + MapperParsingException ex = expectThrows( + MapperParsingException.class, + () -> createMapperService(getMinMapping(false, false, false, true)) + ); + assertThat(ex.getMessage(), containsString("Failed to parse mapping [_doc]: unknown metric field [metric_field]")); + } + + public void testInvalidMetricType() { + MapperParsingException ex = expectThrows( + MapperParsingException.class, + () -> createMapperService(getInvalidMapping(false, false, false, true)) + ); + assertEquals( + "Failed to parse mapping [_doc]: non-numeric field 
type is associated with star tree metric [startree]",
+            ex.getMessage()
+        );
+    }
+
+    public void testInvalidDimType() {
+        MapperParsingException ex = expectThrows(
+            MapperParsingException.class,
+            () -> createMapperService(getInvalidMapping(false, false, true, false))
+        );
+        assertEquals(
+            "Failed to parse mapping [_doc]: unsupported field type associated with dimension [@timestamp] as part of star tree field [startree]",
+            ex.getMessage()
+        );
+    }
+
+    public void testInvalidSkipDim() {
+        MapperParsingException ex = expectThrows(
+            MapperParsingException.class,
+            () -> createMapperService(getInvalidMapping(false, true, false, false))
+        );
+        assertEquals(
+            "Failed to parse mapping [_doc]: [invalid] in skip_star_node_creation_for_dimensions should be part of ordered_dimensions",
+            ex.getMessage()
+        );
+    }
+
+    public void testInvalidSingleDim() {
+        MapperParsingException ex = expectThrows(
+            MapperParsingException.class,
+            () -> createMapperService(getInvalidMapping(true, false, false, false))
+        );
+        assertEquals(
+            "Failed to parse mapping [_doc]: Atleast two dimensions are required to build star tree index field [startree]",
+            ex.getMessage()
+        );
+    }
+
+    public void testMetric() {
+        List<MetricStat> m1 = new ArrayList<>();
+        m1.add(MetricStat.MAX);
+        Metric metric1 = new Metric("name", m1);
+        Metric metric2 = new Metric("name", m1);
+        assertEquals(metric1, metric2);
+        List<MetricStat> m2 = new ArrayList<>();
+        m2.add(MetricStat.MAX);
+        m2.add(MetricStat.COUNT);
+        metric2 = new Metric("name", m2);
+        assertNotEquals(metric1, metric2);
+
+        assertEquals(MetricStat.COUNT, MetricStat.fromTypeName("count"));
+        assertEquals(MetricStat.MAX, MetricStat.fromTypeName("max"));
+        assertEquals(MetricStat.MIN, MetricStat.fromTypeName("min"));
+        assertEquals(MetricStat.SUM, MetricStat.fromTypeName("sum"));
+        assertEquals(MetricStat.AVG, MetricStat.fromTypeName("avg"));
+        IllegalArgumentException ex = expectThrows(IllegalArgumentException.class, () -> MetricStat.fromTypeName("invalid"));
+        assertEquals("Invalid metric stat: invalid", ex.getMessage());
+    }
+
+    public void testDimensions() {
+        List<Rounding.DateTimeUnit> d1CalendarIntervals = new ArrayList<>();
+        d1CalendarIntervals.add(Rounding.DateTimeUnit.HOUR_OF_DAY);
+        DateDimension d1 = new DateDimension("name", d1CalendarIntervals);
+        DateDimension d2 = new DateDimension("name", d1CalendarIntervals);
+        assertEquals(d1, d2);
+        d2 = new DateDimension("name1", d1CalendarIntervals);
+        assertNotEquals(d1, d2);
+        List<Rounding.DateTimeUnit> d2CalendarIntervals = new ArrayList<>();
+        d2CalendarIntervals.add(Rounding.DateTimeUnit.HOUR_OF_DAY);
+        d2CalendarIntervals.add(Rounding.DateTimeUnit.HOUR_OF_DAY);
+        d2 = new DateDimension("name", d2CalendarIntervals);
+        assertNotEquals(d1, d2);
+        NumericDimension n1 = new NumericDimension("name");
+        NumericDimension n2 = new NumericDimension("name");
+        assertEquals(n1, n2);
+        n2 = new NumericDimension("name1");
+        assertNotEquals(n1, n2);
+    }
+
+    public void testStarTreeField() {
+        List<MetricStat> m1 = new ArrayList<>();
+        m1.add(MetricStat.MAX);
+        Metric metric1 = new Metric("name", m1);
+        List<Rounding.DateTimeUnit> d1CalendarIntervals = new ArrayList<>();
+        d1CalendarIntervals.add(Rounding.DateTimeUnit.HOUR_OF_DAY);
+        DateDimension d1 = new DateDimension("name", d1CalendarIntervals);
+        NumericDimension n1 = new NumericDimension("numeric");
+        NumericDimension n2 = new NumericDimension("name1");
+
+        List<Metric> metrics = List.of(metric1);
+        List<Dimension> dims = List.of(d1, n2);
+        StarTreeFieldConfiguration config = new StarTreeFieldConfiguration(
+            100,
+            Set.of("name"),
+            StarTreeFieldConfiguration.StarTreeBuildMode.OFF_HEAP
+        );
+
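+        // Equality is structural: only fields agreeing on name, dimensions, metrics, and config compare equal.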
StarTreeField field1 = new StarTreeField("starTree", dims, metrics, config); + StarTreeField field2 = new StarTreeField("starTree", dims, metrics, config); + assertEquals(field1, field2); + + dims = List.of(d1, n2, n1); + field2 = new StarTreeField("starTree", dims, metrics, config); + assertNotEquals(field1, field2); + + dims = List.of(d1, n2); + metrics = List.of(metric1, metric1); + field2 = new StarTreeField("starTree", dims, metrics, config); + assertNotEquals(field1, field2); + + dims = List.of(d1, n2); + metrics = List.of(metric1); + StarTreeFieldConfiguration config1 = new StarTreeFieldConfiguration( + 1000, + Set.of("name"), + StarTreeFieldConfiguration.StarTreeBuildMode.OFF_HEAP + ); + field2 = new StarTreeField("starTree", dims, metrics, config1); + assertNotEquals(field1, field2); + + config1 = new StarTreeFieldConfiguration(100, Set.of("name", "field2"), StarTreeFieldConfiguration.StarTreeBuildMode.OFF_HEAP); + field2 = new StarTreeField("starTree", dims, metrics, config1); + assertNotEquals(field1, field2); + + config1 = new StarTreeFieldConfiguration(100, Set.of("name"), StarTreeFieldConfiguration.StarTreeBuildMode.ON_HEAP); + field2 = new StarTreeField("starTree", dims, metrics, config1); + assertNotEquals(field1, field2); + + field2 = new StarTreeField("starTree", dims, metrics, config); + assertEquals(field1, field2); + } + + public void testValidations() throws IOException { + MapperService mapperService = createMapperService(getExpandedMapping("status", "size")); + Settings settings = Settings.builder().put(CompositeIndexSettings.STAR_TREE_INDEX_ENABLED_SETTING.getKey(), true).build(); + CompositeIndexSettings enabledCompositeIndexSettings = new CompositeIndexSettings( + settings, + new ClusterSettings(Settings.EMPTY, ClusterSettings.BUILT_IN_CLUSTER_SETTINGS) + ); + CompositeIndexValidator.validate(mapperService, enabledCompositeIndexSettings, mapperService.getIndexSettings()); + settings = Settings.builder().put(CompositeIndexSettings.STAR_TREE_INDEX_ENABLED_SETTING.getKey(), false).build(); + CompositeIndexSettings compositeIndexSettings = new CompositeIndexSettings( + settings, + new ClusterSettings(Settings.EMPTY, ClusterSettings.BUILT_IN_CLUSTER_SETTINGS) + ); + MapperService finalMapperService = mapperService; + IllegalArgumentException ex = expectThrows( + IllegalArgumentException.class, + () -> CompositeIndexValidator.validate(finalMapperService, compositeIndexSettings, finalMapperService.getIndexSettings()) + ); + assertEquals( + "star tree index cannot be created, enable it using [indices.composite_index.star_tree.enabled] setting", + ex.getMessage() + ); + + MapperService mapperServiceInvalid = createMapperService(getInvalidMappingWithDv(false, false, false, true)); + ex = expectThrows( + IllegalArgumentException.class, + () -> CompositeIndexValidator.validate( + mapperServiceInvalid, + enabledCompositeIndexSettings, + mapperServiceInvalid.getIndexSettings() + ) + ); + assertEquals( + "Aggregations not supported for the metrics field [metric_field] with field type [integer] as part of star tree field", + ex.getMessage() + ); + + MapperService mapperServiceInvalidDim = createMapperService(getInvalidMappingWithDv(false, false, true, false)); + ex = expectThrows( + IllegalArgumentException.class, + () -> CompositeIndexValidator.validate( + mapperServiceInvalidDim, + enabledCompositeIndexSettings, + mapperServiceInvalidDim.getIndexSettings() + ) + ); + assertEquals( + "Aggregations not supported for the dimension field [@timestamp] with field type [date] as part 
of star tree field", + ex.getMessage() + ); + + MapperParsingException mapperParsingExceptionex = expectThrows( + MapperParsingException.class, + () -> createMapperService(getMinMappingWith2StarTrees()) + ); + assertEquals( + "Failed to parse mapping [_doc]: Composite fields cannot have more than [1] fields", + mapperParsingExceptionex.getMessage() + ); + } + + private XContentBuilder getExpandedMapping(String dim, String metric) throws IOException { + return topMapping(b -> { + b.startObject("composite"); + b.startObject("startree"); + b.field("type", "star_tree"); + b.startObject("config"); + b.field("max_leaf_docs", 100); + b.startArray("skip_star_node_creation_for_dimensions"); + { + b.value("@timestamp"); + b.value("status"); + } + b.endArray(); + b.startArray("ordered_dimensions"); + b.startObject(); + b.field("name", "@timestamp"); + b.startArray("calendar_intervals"); + b.value("day"); + b.value("month"); + b.endArray(); + b.endObject(); + b.startObject(); + b.field("name", dim); + b.endObject(); + b.endArray(); + b.startArray("metrics"); + b.startObject(); + b.field("name", metric); + b.startArray("stats"); + b.value("sum"); + b.value("avg"); + b.endArray(); + b.endObject(); + b.endArray(); + b.endObject(); + b.endObject(); + b.endObject(); + b.startObject("properties"); + b.startObject("@timestamp"); + b.field("type", "date"); + b.endObject(); + b.startObject("status"); + b.field("type", "integer"); + b.endObject(); + b.startObject("size"); + b.field("type", "integer"); + b.endObject(); + b.endObject(); + }); + } + + private XContentBuilder getMinMapping() throws IOException { + return getMinMapping(false, false, false, false); + } + + private XContentBuilder getMinMapping(boolean isEmptyDims, boolean isEmptyMetrics, boolean missingDim, boolean missingMetric) + throws IOException { + return topMapping(b -> { + b.startObject("composite"); + b.startObject("startree"); + b.field("type", "star_tree"); + b.startObject("config"); + if (!isEmptyDims) { + b.startArray("ordered_dimensions"); + b.startObject(); + b.field("name", "@timestamp"); + b.endObject(); + b.startObject(); + b.field("name", "status"); + b.endObject(); + b.endArray(); + } + if (!isEmptyMetrics) { + b.startArray("metrics"); + b.startObject(); + b.field("name", "status"); + b.endObject(); + b.startObject(); + b.field("name", "metric_field"); + b.endObject(); + b.endArray(); + } + b.endObject(); + b.endObject(); + b.endObject(); + b.startObject("properties"); + if (!missingDim) { + b.startObject("@timestamp"); + b.field("type", "date"); + b.endObject(); + } + b.startObject("status"); + b.field("type", "integer"); + b.endObject(); + if (!missingMetric) { + b.startObject("metric_field"); + b.field("type", "integer"); + b.endObject(); + } + b.endObject(); + }); + } + + private XContentBuilder getMinMappingWith2StarTrees() throws IOException { + return topMapping(b -> { + b.startObject("composite"); + b.startObject("startree"); + b.field("type", "star_tree"); + b.startObject("config"); + + b.startArray("ordered_dimensions"); + b.startObject(); + b.field("name", "@timestamp"); + b.endObject(); + b.startObject(); + b.field("name", "status"); + b.endObject(); + b.endArray(); + + b.startArray("metrics"); + b.startObject(); + b.field("name", "status"); + b.endObject(); + b.startObject(); + b.field("name", "metric_field"); + b.endObject(); + b.endArray(); + + b.endObject(); + b.endObject(); + + b.startObject("startree1"); + b.field("type", "star_tree"); + b.startObject("config"); + + b.startArray("ordered_dimensions"); + 
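+            // startree1 repeats the same dimensions and metrics; the parser should still reject a second composite field.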
b.startObject(); + b.field("name", "@timestamp"); + b.endObject(); + b.startObject(); + b.field("name", "status"); + b.endObject(); + b.endArray(); + + b.startArray("metrics"); + b.startObject(); + b.field("name", "status"); + b.endObject(); + b.startObject(); + b.field("name", "metric_field"); + b.endObject(); + b.endArray(); + + b.endObject(); + b.endObject(); + b.endObject(); + b.startObject("properties"); + b.startObject("@timestamp"); + b.field("type", "date"); + b.endObject(); + b.startObject("status"); + b.field("type", "integer"); + b.endObject(); + b.startObject("metric_field"); + b.field("type", "integer"); + b.endObject(); + + b.endObject(); + }); + } + + private XContentBuilder getInvalidMapping( + boolean singleDim, + boolean invalidSkipDims, + boolean invalidDimType, + boolean invalidMetricType, + boolean invalidParam + ) throws IOException { + return topMapping(b -> { + b.startObject("composite"); + b.startObject("startree"); + b.field("type", "star_tree"); + b.startObject("config"); + + b.startArray("skip_star_node_creation_for_dimensions"); + { + if (invalidSkipDims) { + b.value("invalid"); + } + b.value("status"); + } + b.endArray(); + if (invalidParam) { + b.startObject("invalid"); + b.field("invalid", "invalid"); + b.endObject(); + } + b.startArray("ordered_dimensions"); + if (!singleDim) { + b.startObject(); + b.field("name", "@timestamp"); + b.endObject(); + } + b.startObject(); + b.field("name", "status"); + b.endObject(); + b.endArray(); + b.startArray("metrics"); + b.startObject(); + b.field("name", "status"); + b.endObject(); + b.startObject(); + b.field("name", "metric_field"); + b.endObject(); + b.endArray(); + b.endObject(); + b.endObject(); + b.endObject(); + b.startObject("properties"); + b.startObject("@timestamp"); + if (!invalidDimType) { + b.field("type", "date"); + } else { + b.field("type", "keyword"); + } + b.endObject(); + + b.startObject("status"); + b.field("type", "integer"); + b.endObject(); + b.startObject("metric_field"); + if (invalidMetricType) { + b.field("type", "date"); + } else { + b.field("type", "integer"); + } + b.endObject(); + b.endObject(); + }); + } + + private XContentBuilder getInvalidMappingWithDv( + boolean singleDim, + boolean invalidSkipDims, + boolean invalidDimType, + boolean invalidMetricType + ) throws IOException { + return topMapping(b -> { + b.startObject("composite"); + b.startObject("startree"); + b.field("type", "star_tree"); + b.startObject("config"); + + b.startArray("skip_star_node_creation_for_dimensions"); + { + if (invalidSkipDims) { + b.value("invalid"); + } + b.value("status"); + } + b.endArray(); + b.startArray("ordered_dimensions"); + if (!singleDim) { + b.startObject(); + b.field("name", "@timestamp"); + b.endObject(); + } + b.startObject(); + b.field("name", "status"); + b.endObject(); + b.endArray(); + b.startArray("metrics"); + b.startObject(); + b.field("name", "status"); + b.endObject(); + b.startObject(); + b.field("name", "metric_field"); + b.endObject(); + b.endArray(); + b.endObject(); + b.endObject(); + b.endObject(); + b.startObject("properties"); + b.startObject("@timestamp"); + if (!invalidDimType) { + b.field("type", "date"); + b.field("doc_values", "true"); + } else { + b.field("type", "date"); + b.field("doc_values", "false"); + } + b.endObject(); + + b.startObject("status"); + b.field("type", "integer"); + b.endObject(); + b.startObject("metric_field"); + if (invalidMetricType) { + b.field("type", "integer"); + b.field("doc_values", "false"); + } else { + b.field("type", "integer"); + 
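+                // keeping doc_values enabled is what makes this metric field usable for a star tree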
b.field("doc_values", "true"); + } + b.endObject(); + b.endObject(); + }); + } + + private XContentBuilder getInvalidMapping(boolean singleDim, boolean invalidSkipDims, boolean invalidDimType, boolean invalidMetricType) + throws IOException { + return getInvalidMapping(singleDim, invalidSkipDims, invalidDimType, invalidMetricType, false); + } + + protected boolean supportsOrIgnoresBoost() { + return false; + } + + protected boolean supportsMeta() { + return false; + } + + @Override + protected void assertExistsQuery(MapperService mapperService) {} + + // Overriding fieldMapping to make it create composite mappings by default. + // This way, the parent tests are checking the right behavior for this Mapper. + @Override + protected final XContentBuilder fieldMapping(CheckedConsumer buildField) throws IOException { + return topMapping(b -> { + b.startObject("composite"); + b.startObject("startree"); + buildField.accept(b); + b.endObject(); + b.endObject(); + b.startObject("properties"); + b.startObject("size"); + b.field("type", "integer"); + b.endObject(); + b.startObject("status"); + b.field("type", "integer"); + b.endObject(); + b.endObject(); + }); + } + + @Override + public void testEmptyName() { + MapperParsingException e = expectThrows(MapperParsingException.class, () -> createMapperService(topMapping(b -> { + b.startObject("composite"); + b.startObject(""); + minimalMapping(b); + b.endObject(); + b.endObject(); + b.startObject("properties"); + b.startObject("size"); + b.field("type", "integer"); + b.endObject(); + b.startObject("status"); + b.field("type", "integer"); + b.endObject(); + b.endObject(); + }))); + assertThat(e.getMessage(), containsString("name cannot be empty string")); + assertParseMinimalWarnings(); + } + + @Override + protected void minimalMapping(XContentBuilder b) throws IOException { + b.field("type", "star_tree"); + b.startObject("config"); + b.startArray("ordered_dimensions"); + b.startObject(); + b.field("name", "size"); + b.endObject(); + b.startObject(); + b.field("name", "status"); + b.endObject(); + b.endArray(); + b.startArray("metrics"); + b.startObject(); + b.field("name", "status"); + b.endObject(); + b.endArray(); + b.endObject(); + } + + @Override + protected void writeFieldValue(XContentBuilder builder) throws IOException {} + + @Override + protected void registerParameters(ParameterChecker checker) throws IOException { + + } +} diff --git a/test/framework/src/main/java/org/opensearch/index/mapper/MapperTestCase.java b/test/framework/src/main/java/org/opensearch/index/mapper/MapperTestCase.java index dc5954907a4fa..01a4005255f29 100644 --- a/test/framework/src/main/java/org/opensearch/index/mapper/MapperTestCase.java +++ b/test/framework/src/main/java/org/opensearch/index/mapper/MapperTestCase.java @@ -174,7 +174,7 @@ protected static void assertNoDocValuesField(ParseContext.Document doc, String f } } - public final void testEmptyName() { + public void testEmptyName() { MapperParsingException e = expectThrows(MapperParsingException.class, () -> createMapperService(mapping(b -> { b.startObject(""); minimalMapping(b); diff --git a/test/framework/src/main/java/org/opensearch/search/aggregations/AggregatorTestCase.java b/test/framework/src/main/java/org/opensearch/search/aggregations/AggregatorTestCase.java index 28323a94af721..544fb100a17bf 100644 --- a/test/framework/src/main/java/org/opensearch/search/aggregations/AggregatorTestCase.java +++ b/test/framework/src/main/java/org/opensearch/search/aggregations/AggregatorTestCase.java @@ -115,6 +115,7 @@ 
import org.opensearch.index.mapper.ObjectMapper.Nested; import org.opensearch.index.mapper.RangeFieldMapper; import org.opensearch.index.mapper.RangeType; +import org.opensearch.index.mapper.StarTreeMapper; import org.opensearch.index.mapper.TextFieldMapper; import org.opensearch.index.query.QueryShardContext; import org.opensearch.index.shard.IndexShard; @@ -201,6 +202,7 @@ public abstract class AggregatorTestCase extends OpenSearchTestCase { denylist.add(CompletionFieldMapper.CONTENT_TYPE); // TODO support completion denylist.add(FieldAliasMapper.CONTENT_TYPE); // TODO support alias denylist.add(DerivedFieldMapper.CONTENT_TYPE); // TODO support derived fields + denylist.add(StarTreeMapper.CONTENT_TYPE); // TODO evaluate support for star tree fields TYPE_TEST_DENYLIST = denylist; } From 8904557e835082b85267eccbfd78daf27ba06256 Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Mon, 1 Jul 2024 18:51:29 -0400 Subject: [PATCH 026/167] Bump com.microsoft.azure:msal4j from 1.15.1 to 1.16.0 in /plugins/repository-azure (#14610) * Bump com.microsoft.azure:msal4j in /plugins/repository-azure Bumps [com.microsoft.azure:msal4j](https://github.com/AzureAD/microsoft-authentication-library-for-java) from 1.15.1 to 1.16.0. - [Release notes](https://github.com/AzureAD/microsoft-authentication-library-for-java/releases) - [Changelog](https://github.com/AzureAD/microsoft-authentication-library-for-java/blob/dev/changelog.txt) - [Commits](https://github.com/AzureAD/microsoft-authentication-library-for-java/compare/v1.15.1...v1.16.0) --- updated-dependencies: - dependency-name: com.microsoft.azure:msal4j dependency-type: direct:production update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] * Updating SHAs Signed-off-by: dependabot[bot] * Update changelog Signed-off-by: dependabot[bot] --------- Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: dependabot[bot] --- CHANGELOG.md | 1 + plugins/repository-azure/build.gradle | 2 +- plugins/repository-azure/licenses/msal4j-1.15.1.jar.sha1 | 1 - plugins/repository-azure/licenses/msal4j-1.16.0.jar.sha1 | 1 + 4 files changed, 3 insertions(+), 2 deletions(-) delete mode 100644 plugins/repository-azure/licenses/msal4j-1.15.1.jar.sha1 create mode 100644 plugins/repository-azure/licenses/msal4j-1.16.0.jar.sha1 diff --git a/CHANGELOG.md b/CHANGELOG.md index e9470d9bb4727..a8e8f120ddd2f 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -27,6 +27,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), - Bump `opentelemetry` from 1.36.0 to 1.39.0 ([#14457](https://github.com/opensearch-project/OpenSearch/pull/14457)) - Bump `azure-identity` from 1.11.4 to 1.13.0, Bump `msal4j` from 1.14.3 to 1.15.1, Bump `msal4j-persistence-extension` from 1.2.0 to 1.3.0 ([#14506](https://github.com/opensearch-project/OpenSearch/pull/14506)) - Bump `com.azure:azure-storage-common` from 12.21.2 to 12.25.1 ([#14517](https://github.com/opensearch-project/OpenSearch/pull/14517)) +- Bump `com.microsoft.azure:msal4j` from 1.15.1 to 1.16.0 ([#14610](https://github.com/opensearch-project/OpenSearch/pull/14610)) ### Changed - [Tiered Caching] Move query recomputation logic outside write lock ([#14187](https://github.com/opensearch-project/OpenSearch/pull/14187)) diff --git a/plugins/repository-azure/build.gradle b/plugins/repository-azure/build.gradle 
index f88d291a8eb4a..13b711019ff2a 100644 --- a/plugins/repository-azure/build.gradle +++ b/plugins/repository-azure/build.gradle @@ -61,7 +61,7 @@ dependencies { // Start of transitive dependencies for azure-identity api 'com.microsoft.azure:msal4j-persistence-extension:1.3.0' api "net.java.dev.jna:jna-platform:${versions.jna}" - api 'com.microsoft.azure:msal4j:1.15.1' + api 'com.microsoft.azure:msal4j:1.16.0' api 'com.nimbusds:oauth2-oidc-sdk:11.9.1' api 'com.nimbusds:nimbus-jose-jwt:9.40' api 'com.nimbusds:content-type:2.3' diff --git a/plugins/repository-azure/licenses/msal4j-1.15.1.jar.sha1 b/plugins/repository-azure/licenses/msal4j-1.15.1.jar.sha1 deleted file mode 100644 index 797f21d0d4995..0000000000000 --- a/plugins/repository-azure/licenses/msal4j-1.15.1.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -cd1daa94b81bd97153536b661c31295f99cbb8e7 \ No newline at end of file diff --git a/plugins/repository-azure/licenses/msal4j-1.16.0.jar.sha1 b/plugins/repository-azure/licenses/msal4j-1.16.0.jar.sha1 new file mode 100644 index 0000000000000..29fe5022a1570 --- /dev/null +++ b/plugins/repository-azure/licenses/msal4j-1.16.0.jar.sha1 @@ -0,0 +1 @@ +708a0a986ed091054f1c08866712e5b41aec6700 \ No newline at end of file From f9512db4e4f773b16384649bfd4006b6865006b1 Mon Sep 17 00:00:00 2001 From: Peter Alfonsi Date: Mon, 1 Jul 2024 15:54:39 -0700 Subject: [PATCH 027/167] [Bugfix] Fix ICacheKeySerializerTests flakiness (#14564) * Fix testInvalidInput flakiness Signed-off-by: Peter Alfonsi * Addressed andrross's comment Signed-off-by: Peter Alfonsi * rerun security check Signed-off-by: Peter Alfonsi --------- Signed-off-by: Peter Alfonsi Co-authored-by: Peter Alfonsi --- .../common/cache/serializer/ICacheKeySerializerTests.java | 7 +++---- 1 file changed, 3 insertions(+), 4 deletions(-) diff --git a/server/src/test/java/org/opensearch/common/cache/serializer/ICacheKeySerializerTests.java b/server/src/test/java/org/opensearch/common/cache/serializer/ICacheKeySerializerTests.java index 7713fdf1d0adc..4b0fc3d2a7366 100644 --- a/server/src/test/java/org/opensearch/common/cache/serializer/ICacheKeySerializerTests.java +++ b/server/src/test/java/org/opensearch/common/cache/serializer/ICacheKeySerializerTests.java @@ -43,10 +43,9 @@ public void testInvalidInput() throws Exception { ICacheKeySerializer serializer = new ICacheKeySerializer<>(keySer); Random rand = Randomness.get(); - byte[] randomInput = new byte[1000]; - rand.nextBytes(randomInput); - - assertThrows(OpenSearchException.class, () -> serializer.deserialize(randomInput)); + // The first thing the serializer reads is a VInt for the number of dimensions. 
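+        // (readVInt consumes at most five bytes, treating the high bit of each as a continuation flag;
+        // five 0xFF bytes leave that flag set on the fifth byte, which is always rejected.)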
+ // This is an invalid input for StreamInput.readVInt(), so we are guaranteed to have an exception + assertThrows(OpenSearchException.class, () -> serializer.deserialize(new byte[] { -1, -1, -1, -1, -1 })); } public void testDimNumbers() throws Exception { From f1f4f89e4ccc9bc3c30717a0c4645b673c6f88ca Mon Sep 17 00:00:00 2001 From: Vatsal <36672090+imvtsl@users.noreply.github.com> Date: Tue, 2 Jul 2024 09:40:14 -0700 Subject: [PATCH 028/167] Correct typo in method name (#14621) Signed-off-by: vatsal --- .../src/test/java/org/opensearch/index/get/GetResultTests.java | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/server/src/test/java/org/opensearch/index/get/GetResultTests.java b/server/src/test/java/org/opensearch/index/get/GetResultTests.java index 2001bb84454cd..ef8c48f2753c7 100644 --- a/server/src/test/java/org/opensearch/index/get/GetResultTests.java +++ b/server/src/test/java/org/opensearch/index/get/GetResultTests.java @@ -224,7 +224,7 @@ public void testEqualsAndHashcode() { ); } - public void testFomXContentEmbeddedFoundParsingException() throws IOException { + public void testFromXContentEmbeddedFoundParsingException() throws IOException { String json = "{\"_index\":\"foo\",\"_id\":\"bar\"}"; try ( XContentParser parser = JsonXContent.jsonXContent.createParser( From 0742453ecc9b4f36ce72218d18d844baa9defe4e Mon Sep 17 00:00:00 2001 From: Siddhant Deshmukh Date: Tue, 2 Jul 2024 13:18:39 -0700 Subject: [PATCH 029/167] Refactoring FilterPath.parse by using an iterative approach instead of recursion. (#14200) * Refactor FilterPath parse function (#12067) Signed-off-by: Robin Friedmann * Implement unit tests for FilterPathTests (#12067) Signed-off-by: Robin Friedmann * Write warn log if Filter is empty; Add comments (#12067) Signed-off-by: Robin Friedmann * Add changelog Signed-off-by: Siddhant Deshmukh * Remove unnecessary log statement Signed-off-by: Siddhant Deshmukh * Remove unused logger Signed-off-by: Siddhant Deshmukh * Spotless apply Signed-off-by: Siddhant Deshmukh * Remove incorrect changelog Signed-off-by: Siddhant Deshmukh --------- Signed-off-by: Siddhant Deshmukh Co-authored-by: Robin Friedmann --- CHANGELOG.md | 1 + .../core/xcontent/filtering/FilterPath.java | 30 ++++++++----------- .../xcontent/filtering/FilterPathTests.java | 17 +++++++++++ 3 files changed, 31 insertions(+), 17 deletions(-) diff --git a/CHANGELOG.md b/CHANGELOG.md index a8e8f120ddd2f..14804bbd5974a 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -54,6 +54,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), - Fix FuzzyQuery in keyword field will use IndexOrDocValuesQuery when both of index and doc_value are true ([#14378](https://github.com/opensearch-project/OpenSearch/pull/14378)) - Fix file cache initialization ([#14004](https://github.com/opensearch-project/OpenSearch/pull/14004)) - Handle NPE in GetResult if "found" field is missing ([#14552](https://github.com/opensearch-project/OpenSearch/pull/14552)) +- Refactoring FilterPath.parse by using an iterative approach ([#14200](https://github.com/opensearch-project/OpenSearch/pull/14200)) ### Security diff --git a/libs/core/src/main/java/org/opensearch/core/xcontent/filtering/FilterPath.java b/libs/core/src/main/java/org/opensearch/core/xcontent/filtering/FilterPath.java index 5389538a8c7dd..b8da9787165f8 100644 --- a/libs/core/src/main/java/org/opensearch/core/xcontent/filtering/FilterPath.java +++ 
b/libs/core/src/main/java/org/opensearch/core/xcontent/filtering/FilterPath.java
@@ -46,7 +46,6 @@ public class FilterPath {
 
     static final FilterPath EMPTY = new FilterPath();
 
-    private final String filter;
     private final String segment;
     private final FilterPath next;
@@ -99,32 +98,29 @@ public static FilterPath[] compile(Set<String> filters) {
 
         List<FilterPath> paths = new ArrayList<>();
         for (String filter : filters) {
-            if (filter != null) {
+            if (filter != null && !filter.isEmpty()) {
                 filter = filter.trim();
                 if (filter.length() > 0) {
-                    paths.add(parse(filter, filter));
+                    paths.add(parse(filter));
                 }
             }
         }
         return paths.toArray(new FilterPath[0]);
     }
 
-    private static FilterPath parse(final String filter, final String segment) {
-        int end = segment.length();
-
-        for (int i = 0; i < end;) {
-            char c = segment.charAt(i);
+    private static FilterPath parse(final String filter) {
+        // Split the filter into segments using a regex
+        // that avoids splitting escaped dots.
+        String[] segments = filter.split("(?<!\\\\)\\.");
+        FilterPath next = EMPTY;
+
+        for (int i = segments.length - 1; i >= 0; i--) {
+            // Replace escaped dots with actual dots in the current segment.
+            String segment = segments[i].replaceAll("\\\\.", ".");
+            next = new FilterPath(filter, segment, next);
         }
-        return new FilterPath(filter, segment.replaceAll("\\\\.", "."), EMPTY);
+
+        return next;
     }
 
     @Override
diff --git a/libs/core/src/test/java/org/opensearch/core/xcontent/filtering/FilterPathTests.java b/libs/core/src/test/java/org/opensearch/core/xcontent/filtering/FilterPathTests.java
index 0c5a17b70a956..d3191609f6119 100644
--- a/libs/core/src/test/java/org/opensearch/core/xcontent/filtering/FilterPathTests.java
+++ b/libs/core/src/test/java/org/opensearch/core/xcontent/filtering/FilterPathTests.java
@@ -35,6 +35,7 @@
 import org.opensearch.common.util.set.Sets;
 import org.opensearch.test.OpenSearchTestCase;
 
+import java.util.HashSet;
 import java.util.Set;
 
 import static java.util.Collections.singleton;
@@ -369,4 +370,20 @@ public void testMultipleFilterPaths() {
         assertThat(filterPath.getSegment(), is(emptyString()));
         assertSame(filterPath, FilterPath.EMPTY);
     }
+
+    public void testCompileWithEmptyString() {
+        Set<String> filters = new HashSet<>();
+        filters.add("");
+        FilterPath[] filterPaths = FilterPath.compile(filters);
+        assertNotNull(filterPaths);
+        assertEquals(0, filterPaths.length);
+    }
+
+    public void testCompileWithNull() {
+        Set<String> filters = new HashSet<>();
+        filters.add(null);
+        FilterPath[] filterPaths = FilterPath.compile(filters);
+        assertNotNull(filterPaths);
+        assertEquals(0, filterPaths.length);
+    }
 }
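The behavior of the rewritten parser is easiest to see in a short usage sketch. This is
illustrative only, not part of the patch; it assumes a demo class placed in the
org.opensearch.core.xcontent.filtering package (so the package-visible EMPTY sentinel and
the getSegment()/getNext() accessors resolve) and toy filter strings:

package org.opensearch.core.xcontent.filtering;

import java.util.Set;

public class FilterPathDemo {
    public static void main(String[] args) {
        // "a.b" selects field "b" nested under "a"; the escaped form "a\\.b"
        // selects a single top-level field literally named "a.b".
        FilterPath[] paths = FilterPath.compile(Set.of("a.b", "a\\.b"));
        for (FilterPath path : paths) {
            StringBuilder chain = new StringBuilder(path.getSegment());
            for (FilterPath next = path.getNext(); next != null && next != FilterPath.EMPTY; next = next.getNext()) {
                chain.append(" -> ").append(next.getSegment());
            }
            System.out.println(chain); // one path prints "a -> b", the other "a.b"
        }
    }
}

From e82b432940be34941c999dd1fe5cdc9fed06f02b Mon Sep 17 00:00:00 2001
From: rishavz_sagar
Date: Wed, 3 Jul 2024 15:25:51 +0530
Subject: [PATCH 030/167] Removing String format in
 RemoteStoreMigrationAllocationDecider to optimise performance(#14612)

Signed-off-by: RS146BIJAY
---
 ...RemoteStoreMigrationAllocationDecider.java | 34 +++++++++----------
 1 file changed, 16 insertions(+), 18 deletions(-)

diff --git a/server/src/main/java/org/opensearch/cluster/routing/allocation/decider/RemoteStoreMigrationAllocationDecider.java b/server/src/main/java/org/opensearch/cluster/routing/allocation/decider/RemoteStoreMigrationAllocationDecider.java
index 4fc5fff805663..67fe4ea1dcb1b 100644
--- a/server/src/main/java/org/opensearch/cluster/routing/allocation/decider/RemoteStoreMigrationAllocationDecider.java
+++ b/server/src/main/java/org/opensearch/cluster/routing/allocation/decider/RemoteStoreMigrationAllocationDecider.java
@@ -44,8 +44,6 @@ import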
org.opensearch.node.remotestore.RemoteStoreNodeService.Direction; -import java.util.Locale; - /** * A new allocation decider for migration of document replication clusters to remote store backed clusters: * - For STRICT compatibility mode, the decision is always YES @@ -101,7 +99,7 @@ public Decision canAllocate(ShardRouting shardRouting, RoutingNode node, Routing if (migrationDirection.equals(Direction.NONE)) { // remote backed indices on docrep nodes and non remote backed indices on remote nodes are not allowed boolean isNoDecision = remoteSettingsBackedIndex ^ targetNode.isRemoteStoreNode(); - String reason = String.format(Locale.ROOT, " for %sremote store backed index", remoteSettingsBackedIndex ? "" : "non "); + String reason = " for " + (remoteSettingsBackedIndex ? "" : "non ") + "remote store backed index"; return allocation.decision( isNoDecision ? Decision.NO : Decision.YES, NAME, @@ -114,11 +112,9 @@ public Decision canAllocate(ShardRouting shardRouting, RoutingNode node, Routing // check for remote store backed indices if (remoteSettingsBackedIndex && targetNode.isRemoteStoreNode() == false) { // allocations and relocations must be to a remote node - String reason = String.format( - Locale.ROOT, - " because a remote store backed index's shard copy can only be %s to a remote node", - ((shardRouting.assignedToNode() == false) ? "allocated" : "relocated") - ); + String reason = new StringBuilder(" because a remote store backed index's shard copy can only be ").append( + (shardRouting.assignedToNode() == false) ? "allocated" : "relocated" + ).append(" to a remote node").toString(); return allocation.decision(Decision.NO, NAME, getDecisionDetails(false, shardRouting, targetNode, reason)); } @@ -168,16 +164,18 @@ private Decision replicaShardDecision(ShardRouting replicaShardRouting, Discover // get detailed reason for the decision private String getDecisionDetails(boolean isYes, ShardRouting shardRouting, DiscoveryNode targetNode, String reason) { - return String.format( - Locale.ROOT, - "[%s migration_direction]: %s shard copy %s be %s to a %s node%s", - migrationDirection.direction, - (shardRouting.primary() ? "primary" : "replica"), - (isYes ? "can" : "can not"), - ((shardRouting.assignedToNode() == false) ? "allocated" : "relocated"), - (targetNode.isRemoteStoreNode() ? "remote" : "non-remote"), - reason - ); + return new StringBuilder("[").append(migrationDirection.direction) + .append(" migration_direction]: ") + .append(shardRouting.primary() ? "primary" : "replica") + .append(" shard copy ") + .append(isYes ? "can" : "can not") + .append(" be ") + .append((shardRouting.assignedToNode() == false) ? "allocated" : "relocated") + .append(" to a ") + .append(targetNode.isRemoteStoreNode() ? 
"remote" : "non-remote") + .append(" node") + .append(reason) + .toString(); } } From 501a7024d18f7c7bc135b5e686835f0bcdf00d30 Mon Sep 17 00:00:00 2001 From: Sooraj Sinha <81695996+soosinha@users.noreply.github.com> Date: Wed, 3 Jul 2024 22:33:54 +0530 Subject: [PATCH 031/167] Clear templates before Adding; Use NamedWriteableAwareStreamInput for RemoteCustomMetadata; Correct the check for deciding upload of HashesOfConsistentSettings (#14513) * Clear templates before Adding; Use NamedWriteableAwareStreamInput for RemoteCustomMetadata * Correct the check for deciding upload of hashes of consistent settings Signed-off-by: Sooraj Sinha --- .../opensearch/cluster/metadata/Metadata.java | 1 + .../remote/RemoteClusterStateService.java | 5 +- .../remote/model/RemoteCustomMetadata.java | 5 +- .../cluster/metadata/MetadataTests.java | 27 ++++ .../RemoteClusterStateServiceTests.java | 125 +++++++++++++++++- .../model/RemoteCustomMetadataTests.java | 87 +++++++++++- 6 files changed, 239 insertions(+), 11 deletions(-) diff --git a/server/src/main/java/org/opensearch/cluster/metadata/Metadata.java b/server/src/main/java/org/opensearch/cluster/metadata/Metadata.java index a0ef8de07fbf2..e3f63b1c27b83 100644 --- a/server/src/main/java/org/opensearch/cluster/metadata/Metadata.java +++ b/server/src/main/java/org/opensearch/cluster/metadata/Metadata.java @@ -1287,6 +1287,7 @@ public Builder templates(Map templates) { } public Builder templates(TemplatesMetadata templatesMetadata) { + this.templates.clear(); this.templates.putAll(templatesMetadata.getTemplates()); return this; } diff --git a/server/src/main/java/org/opensearch/gateway/remote/RemoteClusterStateService.java b/server/src/main/java/org/opensearch/gateway/remote/RemoteClusterStateService.java index ef14adeb42b68..74abe9cd257b4 100644 --- a/server/src/main/java/org/opensearch/gateway/remote/RemoteClusterStateService.java +++ b/server/src/main/java/org/opensearch/gateway/remote/RemoteClusterStateService.java @@ -356,7 +356,7 @@ public RemoteClusterStateManifestInfo writeIncrementalMetadata( && clusterState.getNodes().delta(previousClusterState.getNodes()).hasChanges(); final boolean updateClusterBlocks = isPublicationEnabled && !clusterState.blocks().equals(previousClusterState.blocks()); final boolean updateHashesOfConsistentSettings = isPublicationEnabled - || Metadata.isHashesOfConsistentSettingsEqual(previousClusterState.metadata(), clusterState.metadata()) == false; + && Metadata.isHashesOfConsistentSettingsEqual(previousClusterState.metadata(), clusterState.metadata()) == false; uploadedMetadataResults = writeMetadataInParallel( clusterState, @@ -476,7 +476,8 @@ public RemoteClusterStateManifestInfo writeIncrementalMetadata( return manifestDetails; } - private UploadedMetadataResults writeMetadataInParallel( + // package private for testing + UploadedMetadataResults writeMetadataInParallel( ClusterState clusterState, List indexToUpload, Map prevIndexMetadataByName, diff --git a/server/src/main/java/org/opensearch/gateway/remote/model/RemoteCustomMetadata.java b/server/src/main/java/org/opensearch/gateway/remote/model/RemoteCustomMetadata.java index 4c7069ee8be9e..ec5dfbec820d4 100644 --- a/server/src/main/java/org/opensearch/gateway/remote/model/RemoteCustomMetadata.java +++ b/server/src/main/java/org/opensearch/gateway/remote/model/RemoteCustomMetadata.java @@ -12,6 +12,7 @@ import org.opensearch.common.io.Streams; import org.opensearch.common.remote.AbstractRemoteWritableBlobEntity; import org.opensearch.common.remote.BlobPathParameters; 
+import org.opensearch.core.common.io.stream.NamedWriteableAwareStreamInput; import org.opensearch.core.common.io.stream.NamedWriteableRegistry; import org.opensearch.core.common.io.stream.StreamInput; import org.opensearch.core.compress.Compressor; @@ -122,6 +123,8 @@ public UploadedMetadata getUploadedMetadata() { public static Custom readFrom(StreamInput streamInput, NamedWriteableRegistry namedWriteableRegistry, String customType) throws IOException { - return namedWriteableRegistry.getReader(Custom.class, customType).read(streamInput); + try (StreamInput in = new NamedWriteableAwareStreamInput(streamInput, namedWriteableRegistry)) { + return namedWriteableRegistry.getReader(Custom.class, customType).read(in); + } } } diff --git a/server/src/test/java/org/opensearch/cluster/metadata/MetadataTests.java b/server/src/test/java/org/opensearch/cluster/metadata/MetadataTests.java index 618fcb923bc60..a434a713f330b 100644 --- a/server/src/test/java/org/opensearch/cluster/metadata/MetadataTests.java +++ b/server/src/test/java/org/opensearch/cluster/metadata/MetadataTests.java @@ -1482,6 +1482,33 @@ public void testIsSegmentReplicationDisabled() { assertFalse(metadata.isSegmentReplicationEnabled(indexName)); } + public void testTemplatesMetadata() { + TemplatesMetadata templatesMetadata1 = TemplatesMetadata.builder() + .put( + IndexTemplateMetadata.builder("template_1") + .patterns(Arrays.asList("bar-*", "foo-*")) + .settings(Settings.builder().put("random_index_setting_" + randomAlphaOfLength(3), randomAlphaOfLength(5)).build()) + .build() + ) + .build(); + Metadata metadata1 = Metadata.builder().templates(templatesMetadata1).build(); + assertThat(metadata1.templates(), is(templatesMetadata1.getTemplates())); + + TemplatesMetadata templatesMetadata2 = TemplatesMetadata.builder() + .put( + IndexTemplateMetadata.builder("template_2") + .patterns(Arrays.asList("bar-*", "foo-*")) + .settings(Settings.builder().put("random_index_setting_" + randomAlphaOfLength(3), randomAlphaOfLength(5)).build()) + .build() + ) + .build(); + + Metadata metadata2 = Metadata.builder(metadata1).templates(templatesMetadata2).build(); + + assertThat(metadata2.templates(), is(templatesMetadata2.getTemplates())); + + } + public static Metadata randomMetadata() { Metadata.Builder md = Metadata.builder() .put(buildIndexMetadata("index", "alias", randomBoolean() ? 
null : randomBoolean()).build(), randomBoolean()) diff --git a/server/src/test/java/org/opensearch/gateway/remote/RemoteClusterStateServiceTests.java b/server/src/test/java/org/opensearch/gateway/remote/RemoteClusterStateServiceTests.java index c8fd982fec1e1..d983a4d8c4027 100644 --- a/server/src/test/java/org/opensearch/gateway/remote/RemoteClusterStateServiceTests.java +++ b/server/src/test/java/org/opensearch/gateway/remote/RemoteClusterStateServiceTests.java @@ -20,6 +20,7 @@ import org.opensearch.cluster.metadata.IndexTemplateMetadata; import org.opensearch.cluster.metadata.Metadata; import org.opensearch.cluster.metadata.TemplatesMetadata; +import org.opensearch.cluster.node.DiscoveryNode; import org.opensearch.cluster.node.DiscoveryNodes; import org.opensearch.cluster.routing.RoutingTable; import org.opensearch.cluster.routing.remote.InternalRemoteRoutingTableService; @@ -92,6 +93,7 @@ import org.mockito.ArgumentCaptor; import org.mockito.ArgumentMatchers; +import org.mockito.Mockito; import static java.util.stream.Collectors.toList; import static org.opensearch.common.util.FeatureFlags.REMOTE_PUBLICATION_EXPERIMENTAL; @@ -111,6 +113,7 @@ import static org.opensearch.node.remotestore.RemoteStoreNodeAttribute.REMOTE_STORE_REPOSITORY_SETTINGS_ATTRIBUTE_KEY_PREFIX; import static org.opensearch.node.remotestore.RemoteStoreNodeAttribute.REMOTE_STORE_REPOSITORY_TYPE_ATTRIBUTE_KEY_FORMAT; import static org.opensearch.node.remotestore.RemoteStoreNodeAttribute.REMOTE_STORE_ROUTING_TABLE_REPOSITORY_NAME_ATTRIBUTE_KEY; +import static org.hamcrest.Matchers.anEmptyMap; import static org.hamcrest.Matchers.equalTo; import static org.hamcrest.Matchers.is; import static org.hamcrest.Matchers.not; @@ -118,6 +121,7 @@ import static org.hamcrest.Matchers.nullValue; import static org.mockito.ArgumentMatchers.any; import static org.mockito.ArgumentMatchers.anyString; +import static org.mockito.ArgumentMatchers.eq; import static org.mockito.Mockito.doAnswer; import static org.mockito.Mockito.doThrow; import static org.mockito.Mockito.mock; @@ -518,11 +522,13 @@ public void testWriteIncrementalMetadataSuccess() throws IOException { final ClusterMetadataManifest previousManifest = ClusterMetadataManifest.builder().indices(Collections.emptyList()).build(); remoteClusterStateService.start(); - final ClusterMetadataManifest manifest = remoteClusterStateService.writeIncrementalMetadata( + final RemoteClusterStateService rcssSpy = Mockito.spy(remoteClusterStateService); + final RemoteClusterStateManifestInfo manifestInfo = rcssSpy.writeIncrementalMetadata( previousClusterState, clusterState, previousManifest - ).getClusterMetadataManifest(); + ); + final ClusterMetadataManifest manifest = manifestInfo.getClusterMetadataManifest(); final UploadedIndexMetadata uploadedIndexMetadata = new UploadedIndexMetadata("test-index", "index-uuid", "metadata-filename__2"); final List indices = List.of(uploadedIndexMetadata); @@ -535,6 +541,24 @@ public void testWriteIncrementalMetadataSuccess() throws IOException { .previousClusterUUID("prev-cluster-uuid") .build(); + Mockito.verify(rcssSpy) + .writeMetadataInParallel( + eq(clusterState), + eq(new ArrayList(clusterState.metadata().indices().values())), + eq(Collections.singletonMap(indices.get(0).getIndexName(), null)), + eq(clusterState.metadata().customs()), + eq(true), + eq(true), + eq(true), + eq(false), + eq(false), + eq(false), + eq(Collections.emptyMap()), + eq(false), + eq(Collections.emptyList()) + ); + + assertThat(manifestInfo.getManifestFileName(), 
notNullValue()); assertThat(manifest.getIndices().size(), is(1)); assertThat(manifest.getIndices().get(0).getIndexName(), is(uploadedIndexMetadata.getIndexName())); assertThat(manifest.getIndices().get(0).getIndexUUID(), is(uploadedIndexMetadata.getIndexUUID())); @@ -543,6 +567,95 @@ public void testWriteIncrementalMetadataSuccess() throws IOException { assertThat(manifest.getStateVersion(), is(expectedManifest.getStateVersion())); assertThat(manifest.getClusterUUID(), is(expectedManifest.getClusterUUID())); assertThat(manifest.getStateUUID(), is(expectedManifest.getStateUUID())); + assertThat(manifest.getHashesOfConsistentSettings(), nullValue()); + assertThat(manifest.getDiscoveryNodesMetadata(), nullValue()); + assertThat(manifest.getClusterBlocksMetadata(), nullValue()); + assertThat(manifest.getClusterStateCustomMap(), anEmptyMap()); + assertThat(manifest.getTransientSettingsMetadata(), nullValue()); + assertThat(manifest.getTemplatesMetadata(), notNullValue()); + assertThat(manifest.getCoordinationMetadata(), notNullValue()); + assertThat(manifest.getCustomMetadataMap().size(), is(2)); + assertThat(manifest.getIndicesRouting().size(), is(0)); + } + + public void testWriteIncrementalMetadataSuccessWhenPublicationEnabled() throws IOException { + publicationEnabled = true; + Settings nodeSettings = Settings.builder().put(REMOTE_PUBLICATION_EXPERIMENTAL, publicationEnabled).build(); + FeatureFlags.initializeFeatureFlags(nodeSettings); + remoteClusterStateService = new RemoteClusterStateService( + "test-node-id", + repositoriesServiceSupplier, + settings, + clusterService, + () -> 0L, + threadPool, + List.of(new RemoteIndexPathUploader(threadPool, settings, repositoriesServiceSupplier, clusterSettings)), + writableRegistry() + ); + final ClusterState clusterState = generateClusterStateWithOneIndex().nodes(nodesWithLocalNodeClusterManager()).build(); + mockBlobStoreObjects(); + final CoordinationMetadata coordinationMetadata = CoordinationMetadata.builder().term(1L).build(); + final ClusterState previousClusterState = ClusterState.builder(ClusterName.DEFAULT) + .metadata(Metadata.builder().coordinationMetadata(coordinationMetadata)) + .build(); + + final ClusterMetadataManifest previousManifest = ClusterMetadataManifest.builder().indices(Collections.emptyList()).build(); + + remoteClusterStateService.start(); + final RemoteClusterStateService rcssSpy = Mockito.spy(remoteClusterStateService); + final RemoteClusterStateManifestInfo manifestInfo = rcssSpy.writeIncrementalMetadata( + previousClusterState, + clusterState, + previousManifest + ); + final ClusterMetadataManifest manifest = manifestInfo.getClusterMetadataManifest(); + final UploadedIndexMetadata uploadedIndexMetadata = new UploadedIndexMetadata("test-index", "index-uuid", "metadata-filename__2"); + final List indices = List.of(uploadedIndexMetadata); + + final ClusterMetadataManifest expectedManifest = ClusterMetadataManifest.builder() + .indices(indices) + .clusterTerm(1L) + .stateVersion(1L) + .stateUUID("state-uuid") + .clusterUUID("cluster-uuid") + .previousClusterUUID("prev-cluster-uuid") + .build(); + + Mockito.verify(rcssSpy) + .writeMetadataInParallel( + eq(clusterState), + eq(new ArrayList(clusterState.metadata().indices().values())), + eq(Collections.singletonMap(indices.get(0).getIndexName(), null)), + eq(clusterState.metadata().customs()), + eq(true), + eq(true), + eq(true), + eq(true), + eq(false), + eq(false), + eq(Collections.emptyMap()), + eq(true), + Mockito.anyList() + ); + + 
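+        // With publication enabled, the manifest should additionally reference discovery nodes,
+        // hashes of consistent settings, and per-index routing (asserted below).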
assertThat(manifestInfo.getManifestFileName(), notNullValue()); + assertThat(manifest.getIndices().size(), is(1)); + assertThat(manifest.getIndices().get(0).getIndexName(), is(uploadedIndexMetadata.getIndexName())); + assertThat(manifest.getIndices().get(0).getIndexUUID(), is(uploadedIndexMetadata.getIndexUUID())); + assertThat(manifest.getIndices().get(0).getUploadedFilename(), notNullValue()); + assertThat(manifest.getClusterTerm(), is(expectedManifest.getClusterTerm())); + assertThat(manifest.getStateVersion(), is(expectedManifest.getStateVersion())); + assertThat(manifest.getClusterUUID(), is(expectedManifest.getClusterUUID())); + assertThat(manifest.getStateUUID(), is(expectedManifest.getStateUUID())); + assertThat(manifest.getHashesOfConsistentSettings(), notNullValue()); + assertThat(manifest.getDiscoveryNodesMetadata(), notNullValue()); + assertThat(manifest.getClusterBlocksMetadata(), nullValue()); + assertThat(manifest.getClusterStateCustomMap(), anEmptyMap()); + assertThat(manifest.getTransientSettingsMetadata(), nullValue()); + assertThat(manifest.getTemplatesMetadata(), notNullValue()); + assertThat(manifest.getCoordinationMetadata(), notNullValue()); + assertThat(manifest.getCustomMetadataMap().size(), is(2)); + assertThat(manifest.getIndicesRouting().size(), is(1)); } /* @@ -2012,7 +2125,9 @@ static ClusterState.Builder generateClusterStateWithOneIndex() { .build(); final CoordinationMetadata coordinationMetadata = CoordinationMetadata.builder().term(1L).build(); final Settings settings = Settings.builder().put("mock-settings", true).build(); - final TemplatesMetadata templatesMetadata = TemplatesMetadata.EMPTY_METADATA; + final TemplatesMetadata templatesMetadata = TemplatesMetadata.builder() + .put(IndexTemplateMetadata.builder("template1").settings(idxSettings).patterns(List.of("test*")).build()) + .build(); final CustomMetadata1 customMetadata1 = new CustomMetadata1("custom-metadata-1"); return ClusterState.builder(ClusterName.DEFAULT) .version(1L) @@ -2025,6 +2140,7 @@ static ClusterState.Builder generateClusterStateWithOneIndex() { .coordinationMetadata(coordinationMetadata) .persistentSettings(settings) .templates(templatesMetadata) + .hashesOfConsistentSettings(Map.of("key1", "value1", "key2", "value2")) .putCustom(customMetadata1.getWriteableName(), customMetadata1) .build() ) @@ -2032,7 +2148,8 @@ static ClusterState.Builder generateClusterStateWithOneIndex() { } static DiscoveryNodes nodesWithLocalNodeClusterManager() { - return DiscoveryNodes.builder().clusterManagerNodeId("cluster-manager-id").localNodeId("cluster-manager-id").build(); + final DiscoveryNode localNode = new DiscoveryNode("cluster-manager-id", buildNewFakeTransportAddress(), Version.CURRENT); + return DiscoveryNodes.builder().clusterManagerNodeId("cluster-manager-id").localNodeId("cluster-manager-id").add(localNode).build(); } private static class CustomMetadata1 extends TestCustomMetadata { diff --git a/server/src/test/java/org/opensearch/gateway/remote/model/RemoteCustomMetadataTests.java b/server/src/test/java/org/opensearch/gateway/remote/model/RemoteCustomMetadataTests.java index 1e28817be79f2..60cceb205f43d 100644 --- a/server/src/test/java/org/opensearch/gateway/remote/model/RemoteCustomMetadataTests.java +++ b/server/src/test/java/org/opensearch/gateway/remote/model/RemoteCustomMetadataTests.java @@ -8,6 +8,8 @@ package org.opensearch.gateway.remote.model; +import org.opensearch.Version; +import org.opensearch.cluster.ClusterModule; import org.opensearch.cluster.metadata.IndexGraveyard; 
import org.opensearch.cluster.metadata.Metadata.Custom; import org.opensearch.common.blobstore.BlobPath; @@ -16,13 +18,20 @@ import org.opensearch.common.settings.ClusterSettings; import org.opensearch.common.settings.Settings; import org.opensearch.core.common.io.stream.NamedWriteableRegistry; +import org.opensearch.core.common.io.stream.NamedWriteableRegistry.Entry; +import org.opensearch.core.common.io.stream.StreamInput; +import org.opensearch.core.common.io.stream.StreamOutput; import org.opensearch.core.compress.Compressor; import org.opensearch.core.compress.NoneCompressor; import org.opensearch.core.index.Index; +import org.opensearch.core.xcontent.XContentBuilder; import org.opensearch.gateway.remote.ClusterMetadataManifest.UploadedMetadata; import org.opensearch.gateway.remote.RemoteClusterStateUtils; import org.opensearch.index.remote.RemoteStoreUtils; import org.opensearch.index.translog.transfer.BlobStoreTransferService; +import org.opensearch.persistent.PersistentTaskParams; +import org.opensearch.persistent.PersistentTasksCustomMetadata; +import org.opensearch.persistent.PersistentTasksCustomMetadata.Assignment; import org.opensearch.repositories.blobstore.BlobStoreRepository; import org.opensearch.test.OpenSearchTestCase; import org.opensearch.threadpool.TestThreadPool; @@ -33,6 +42,7 @@ import java.io.IOException; import java.io.InputStream; import java.util.List; +import java.util.Objects; import static org.opensearch.gateway.remote.RemoteClusterStateUtils.GLOBAL_METADATA_CURRENT_CODEC_VERSION; import static org.opensearch.gateway.remote.model.RemoteCustomMetadata.CUSTOM_DELIMITER; @@ -216,24 +226,93 @@ public void testGetUploadedMetadata() throws IOException { public void testSerDe() throws IOException { Custom customMetadata = getCustomMetadata(); + verifySerDe(customMetadata, IndexGraveyard.TYPE); + } + + public void testSerDeForPersistentTasks() throws IOException { + Custom customMetadata = getPersistentTasksMetadata(); + verifySerDe(customMetadata, PersistentTasksCustomMetadata.TYPE); + } + + private void verifySerDe(Custom objectToUpload, String objectType) throws IOException { RemoteCustomMetadata remoteObjectForUpload = new RemoteCustomMetadata( - customMetadata, - IndexGraveyard.TYPE, + objectToUpload, + objectType, METADATA_VERSION, clusterUUID, compressor, - namedWriteableRegistry + customWritableRegistry() ); try (InputStream inputStream = remoteObjectForUpload.serialize()) { remoteObjectForUpload.setFullBlobName(BlobPath.cleanPath()); assertThat(inputStream.available(), greaterThan(0)); Custom readCustomMetadata = remoteObjectForUpload.deserialize(inputStream); - assertThat(readCustomMetadata, is(customMetadata)); + assertThat(readCustomMetadata, is(objectToUpload)); } } + private NamedWriteableRegistry customWritableRegistry() { + List entries = ClusterModule.getNamedWriteables(); + entries.add(new Entry(PersistentTaskParams.class, TestPersistentTaskParams.PARAM_NAME, TestPersistentTaskParams::new)); + return new NamedWriteableRegistry(entries); + } + public static Custom getCustomMetadata() { return IndexGraveyard.builder().addTombstone(new Index("test-index", "3q2423")).build(); } + private static Custom getPersistentTasksMetadata() { + return PersistentTasksCustomMetadata.builder() + .addTask("_task_1", "testTaskName", new TestPersistentTaskParams("task param data"), new Assignment(null, "_reason")) + .build(); + } + + public static class TestPersistentTaskParams implements PersistentTaskParams { + + private static final String PARAM_NAME = 
"testTaskName"; + + private final String data; + + public TestPersistentTaskParams(String data) { + this.data = data; + } + + public TestPersistentTaskParams(StreamInput in) throws IOException { + this(in.readString()); + } + + @Override + public String getWriteableName() { + return PARAM_NAME; + } + + @Override + public Version getMinimalSupportedVersion() { + return Version.V_2_13_0; + } + + @Override + public void writeTo(StreamOutput out) throws IOException { + out.writeString(data); + } + + @Override + public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + return builder.startObject().field("data_field", data); + } + + @Override + public boolean equals(Object o) { + if (this == o) return true; + if (o == null || getClass() != o.getClass()) return false; + TestPersistentTaskParams that = (TestPersistentTaskParams) o; + return Objects.equals(data, that.data); + } + + @Override + public int hashCode() { + return Objects.hash(data); + } + } + } From 58d1164f74e921a26b7a73b6185b38cc87bbc7a9 Mon Sep 17 00:00:00 2001 From: rishavz_sagar Date: Thu, 4 Jul 2024 11:50:47 +0530 Subject: [PATCH 032/167] Improve reroute performance by optimising List.removeAll in LocalShardsBalancer to filter remote search shard from relocation decision (#14613) Signed-off-by: RS146BIJAY --- .../allocator/LocalShardsBalancer.java | 21 +++++++++---------- 1 file changed, 10 insertions(+), 11 deletions(-) diff --git a/server/src/main/java/org/opensearch/cluster/routing/allocation/allocator/LocalShardsBalancer.java b/server/src/main/java/org/opensearch/cluster/routing/allocation/allocator/LocalShardsBalancer.java index ec25d041bda43..6978c988fd648 100644 --- a/server/src/main/java/org/opensearch/cluster/routing/allocation/allocator/LocalShardsBalancer.java +++ b/server/src/main/java/org/opensearch/cluster/routing/allocation/allocator/LocalShardsBalancer.java @@ -32,7 +32,6 @@ import org.opensearch.gateway.PriorityComparator; import java.util.ArrayList; -import java.util.Arrays; import java.util.Collections; import java.util.Comparator; import java.util.HashMap; @@ -41,7 +40,6 @@ import java.util.List; import java.util.Map; import java.util.Set; -import java.util.stream.Collectors; import java.util.stream.Stream; import java.util.stream.StreamSupport; @@ -779,15 +777,16 @@ void allocateUnassigned() { * if we allocate for instance (0, R, IDX1) we move the second replica to the secondary array and proceed with * the next replica. If we could not find a node to allocate (0,R,IDX1) we move all it's replicas to ignoreUnassigned. 
*/ - ShardRouting[] unassignedShards = unassigned.drain(); - List<ShardRouting> allUnassignedShards = Arrays.stream(unassignedShards).collect(Collectors.toList()); - List<ShardRouting> localUnassignedShards = allUnassignedShards.stream() - .filter(shard -> RoutingPool.LOCAL_ONLY.equals(RoutingPool.getShardPool(shard, allocation))) - .collect(Collectors.toList()); - allUnassignedShards.removeAll(localUnassignedShards); - allUnassignedShards.forEach(shard -> routingNodes.unassigned().add(shard)); - unassignedShards = localUnassignedShards.toArray(new ShardRouting[0]); - ShardRouting[] primary = unassignedShards; + List<ShardRouting> primaryList = new ArrayList<>(); + for (ShardRouting shard : unassigned.drain()) { + if (RoutingPool.LOCAL_ONLY.equals(RoutingPool.getShardPool(shard, allocation))) { + primaryList.add(shard); + } else { + routingNodes.unassigned().add(shard); + } + } + + ShardRouting[] primary = primaryList.toArray(new ShardRouting[0]); + ShardRouting[] secondary = new ShardRouting[primary.length]; + int secondaryLength = 0; + int primaryLength = primary.length; From 74230b76a255d52b4aca711734864f7b6e4b314c Mon Sep 17 00:00:00 2001 From: Sachin Kale Date: Fri, 5 Jul 2024 09:16:37 +0530 Subject: [PATCH 033/167] Fix assertion failure while deleting remote backed index (#14601) Signed-off-by: Sachin Kale --- .../RemoteMigrationIndexMetadataUpdateIT.java | 2 -- .../opensearch/remotestore/RemoteStoreIT.java | 16 +++++++++++++++- .../java/org/opensearch/index/IndexService.java | 17 +++++++++++++++-- 3 files changed, 30 insertions(+), 5 deletions(-) diff --git a/server/src/internalClusterTest/java/org/opensearch/remotemigration/RemoteMigrationIndexMetadataUpdateIT.java b/server/src/internalClusterTest/java/org/opensearch/remotemigration/RemoteMigrationIndexMetadataUpdateIT.java index 6885d37c4aab0..216c104dfecc1 100644 --- a/server/src/internalClusterTest/java/org/opensearch/remotemigration/RemoteMigrationIndexMetadataUpdateIT.java +++ b/server/src/internalClusterTest/java/org/opensearch/remotemigration/RemoteMigrationIndexMetadataUpdateIT.java @@ -275,7 +275,6 @@ initalMetadataVersion < internalCluster().client() * After shard relocation completes, shuts down the docrep nodes and asserts remote * index settings are applied even when the index is in YELLOW state */ - @AwaitsFix(bugUrl = "https://github.com/opensearch-project/OpenSearch/issues/13737") public void testIndexSettingsUpdatedEvenForMisconfiguredReplicas() throws Exception { internalCluster().startClusterManagerOnlyNode(); @@ -332,7 +331,6 @@ public void testIndexSettingsUpdatedEvenForMisconfiguredReplicas() throws Except * After shard relocation completes, restarts the docrep node holding extra replica shard copy * and asserts remote index settings are applied as soon as the docrep replica copy is unassigned */ - @AwaitsFix(bugUrl = "https://github.com/opensearch-project/OpenSearch/issues/13871") public void testIndexSettingsUpdatedWhenDocrepNodeIsRestarted() throws Exception { internalCluster().startClusterManagerOnlyNode(); diff --git a/server/src/internalClusterTest/java/org/opensearch/remotestore/RemoteStoreIT.java b/server/src/internalClusterTest/java/org/opensearch/remotestore/RemoteStoreIT.java index 96d6338e5913b..194dce5f4a57a 100644 --- a/server/src/internalClusterTest/java/org/opensearch/remotestore/RemoteStoreIT.java +++ b/server/src/internalClusterTest/java/org/opensearch/remotestore/RemoteStoreIT.java @@ -65,7 +65,6 @@ import static org.opensearch.index.remote.RemoteStoreEnums.DataType.METADATA; import static
org.opensearch.index.shard.IndexShardTestCase.getTranslog; import static org.opensearch.indices.RemoteStoreSettings.CLUSTER_REMOTE_TRANSLOG_BUFFER_INTERVAL_SETTING; -import static org.opensearch.test.OpenSearchTestCase.getShardLevelBlobPath; import static org.opensearch.test.hamcrest.OpenSearchAssertions.assertAcked; import static org.opensearch.test.hamcrest.OpenSearchAssertions.assertHitCount; import static org.hamcrest.Matchers.comparesEqualTo; @@ -133,6 +132,21 @@ private void testPeerRecovery(int numberOfIterations, boolean invokeFlush) throw ); } + public void testRemoteStoreIndexCreationAndDeletionWithReferencedStore() throws InterruptedException, ExecutionException { + String dataNode = internalCluster().startNodes(1).get(0); + createIndex(INDEX_NAME, remoteStoreIndexSettings(0)); + ensureYellowAndNoInitializingShards(INDEX_NAME); + ensureGreen(INDEX_NAME); + + IndexShard indexShard = getIndexShard(dataNode, INDEX_NAME); + + // Simulating a condition where store is already in use by increasing ref count, this helps in testing index + // deletion when refresh is in-progress. + indexShard.store().incRef(); + assertAcked(client().admin().indices().prepareDelete(INDEX_NAME)); + indexShard.store().decRef(); + } + public void testPeerRecoveryWithRemoteStoreAndRemoteTranslogNoDataFlush() throws Exception { testPeerRecovery(1, true); } diff --git a/server/src/main/java/org/opensearch/index/IndexService.java b/server/src/main/java/org/opensearch/index/IndexService.java index 1c0db0095bb98..12b02d3dbd6fa 100644 --- a/server/src/main/java/org/opensearch/index/IndexService.java +++ b/server/src/main/java/org/opensearch/index/IndexService.java @@ -602,7 +602,21 @@ public synchronized IndexShard createShard( this.indexSettings.getRemoteStorePathStrategy() ); } - remoteStore = new Store(shardId, this.indexSettings, remoteDirectory, lock, Store.OnClose.EMPTY, path); + // When an instance of Store is created, a shardlock is created which is released on closing the instance of store. + // Currently, we create 2 instances of store for remote store backed indices: store and remoteStore. + // As there can be only one shardlock acquired for a given shard, the lock is shared between store and remoteStore. + // This creates an issue when we are deleting the index as it results in closing both store and remoteStore. + // Sample test failure: https://github.com/opensearch-project/OpenSearch/issues/13871 + // The following method provides ShardLock that is not maintained by NodeEnvironment. + // As part of https://github.com/opensearch-project/OpenSearch/issues/13075, we want to move away from keeping 2 + // store instances. 
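+ // The anonymous ShardLock below deliberately makes closeInternal a no-op, so closing the remoteStore cannot release the node-level shard lock that the primary store still holds.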
+ ShardLock remoteStoreLock = new ShardLock(shardId) { + @Override + protected void closeInternal() { + // Do nothing for shard lock on remote store + } + }; + remoteStore = new Store(shardId, this.indexSettings, remoteDirectory, remoteStoreLock, Store.OnClose.EMPTY, path); } else { // Disallow shards with remote store based settings to be created on non-remote store enabled nodes // Even though we have `RemoteStoreMigrationAllocationDecider` in place to prevent something like this from happening at the @@ -625,7 +639,6 @@ public synchronized IndexShard createShard( } else { directory = directoryFactory.newDirectory(this.indexSettings, path); } - store = new Store( shardId, this.indexSettings, From f14b5c8c1f52acb4cfe0c088fed09ea075743d69 Mon Sep 17 00:00:00 2001 From: Craig Perkins Date: Fri, 5 Jul 2024 11:59:40 -0400 Subject: [PATCH 034/167] Allow system index warning in OpenSearchRestTestCase.refreshAllIndices (#14635) * Allow system index warning Signed-off-by: Craig Perkins * Add to CHANGELOG Signed-off-by: Craig Perkins * Address code review comments Signed-off-by: Craig Perkins --------- Signed-off-by: Craig Perkins --- CHANGELOG.md | 1 + .../opensearch/test/rest/OpenSearchRestTestCase.java | 12 ++++++++---- 2 files changed, 9 insertions(+), 4 deletions(-) diff --git a/CHANGELOG.md b/CHANGELOG.md index 14804bbd5974a..1f7d1ea5b3d19 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -36,6 +36,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), - Make the class CommunityIdProcessor final ([#14448](https://github.com/opensearch-project/OpenSearch/pull/14448)) - Allow @InternalApi annotation on classes not meant to be constructed outside of the OpenSearch core ([#14575](https://github.com/opensearch-project/OpenSearch/pull/14575)) - Add @InternalApi annotation to japicmp exclusions ([#14597](https://github.com/opensearch-project/OpenSearch/pull/14597)) +- Allow system index warning in OpenSearchRestTestCase.refreshAllIndices ([#14635](https://github.com/opensearch-project/OpenSearch/pull/14635)) ### Deprecated diff --git a/test/framework/src/main/java/org/opensearch/test/rest/OpenSearchRestTestCase.java b/test/framework/src/main/java/org/opensearch/test/rest/OpenSearchRestTestCase.java index b7c31685bafa6..8c612d258f183 100644 --- a/test/framework/src/main/java/org/opensearch/test/rest/OpenSearchRestTestCase.java +++ b/test/framework/src/main/java/org/opensearch/test/rest/OpenSearchRestTestCase.java @@ -708,11 +708,15 @@ protected void refreshAllIndices() throws IOException { requestOptions.setWarningsHandler(warnings -> { if (warnings.isEmpty()) { return false; - } else if (warnings.size() > 1) { - return true; - } else { - return warnings.get(0).startsWith("this request accesses system indices:") == false; } + boolean allSystemIndexWarnings = true; + for (String warning : warnings) { + if (!warning.startsWith("this request accesses system indices:")) { + allSystemIndexWarnings = false; + break; + } + } + return !allSystemIndexWarnings; }); refreshRequest.setOptions(requestOptions); client().performRequest(refreshRequest); From f0c2fa6138f2b6c7a5eb5d9501fb8782271a3cd3 Mon Sep 17 00:00:00 2001 From: Bharathwaj G Date: Mon, 8 Jul 2024 19:31:58 +0530 Subject: [PATCH 035/167] Star tree codec changes (#14514) --------- Signed-off-by: Bharathwaj G --- .../opensearch/index/codec/CodecService.java | 16 ++- .../codec/composite/Composite99Codec.java | 57 +++++++++ .../composite/Composite99DocValuesFormat.java | 64 ++++++++++ 
.../composite/Composite99DocValuesReader.java | 89 ++++++++++++++ .../composite/Composite99DocValuesWriter.java | 114 ++++++++++++++++++ .../composite/CompositeCodecFactory.java | 42 +++++++ .../codec/composite/CompositeIndexReader.java | 34 ++++++ .../codec/composite/CompositeIndexValues.java | 21 ++++ .../datacube/startree/StarTreeValues.java | 35 ++++++ .../datacube/startree/package-info.java | 12 ++ .../index/codec/composite/package-info.java | 12 ++ .../mapper/CompositeMappedFieldType.java | 3 + .../index/mapper/MapperService.java | 9 ++ .../services/org.apache.lucene.codecs.Codec | 1 + .../opensearch/index/codec/CodecTests.java | 47 ++++++++ .../StarTreeDocValuesFormatTests.java | 110 +++++++++++++++++ 16 files changed, 662 insertions(+), 4 deletions(-) create mode 100644 server/src/main/java/org/opensearch/index/codec/composite/Composite99Codec.java create mode 100644 server/src/main/java/org/opensearch/index/codec/composite/Composite99DocValuesFormat.java create mode 100644 server/src/main/java/org/opensearch/index/codec/composite/Composite99DocValuesReader.java create mode 100644 server/src/main/java/org/opensearch/index/codec/composite/Composite99DocValuesWriter.java create mode 100644 server/src/main/java/org/opensearch/index/codec/composite/CompositeCodecFactory.java create mode 100644 server/src/main/java/org/opensearch/index/codec/composite/CompositeIndexReader.java create mode 100644 server/src/main/java/org/opensearch/index/codec/composite/CompositeIndexValues.java create mode 100644 server/src/main/java/org/opensearch/index/codec/composite/datacube/startree/StarTreeValues.java create mode 100644 server/src/main/java/org/opensearch/index/codec/composite/datacube/startree/package-info.java create mode 100644 server/src/main/java/org/opensearch/index/codec/composite/package-info.java create mode 100644 server/src/main/resources/META-INF/services/org.apache.lucene.codecs.Codec create mode 100644 server/src/test/java/org/opensearch/index/codec/composite/datacube/startree/StarTreeDocValuesFormatTests.java diff --git a/server/src/main/java/org/opensearch/index/codec/CodecService.java b/server/src/main/java/org/opensearch/index/codec/CodecService.java index 67f38536a0d11..59fafdf1ba74e 100644 --- a/server/src/main/java/org/opensearch/index/codec/CodecService.java +++ b/server/src/main/java/org/opensearch/index/codec/CodecService.java @@ -39,6 +39,7 @@ import org.opensearch.common.Nullable; import org.opensearch.common.collect.MapBuilder; import org.opensearch.index.IndexSettings; +import org.opensearch.index.codec.composite.CompositeCodecFactory; import org.opensearch.index.mapper.MapperService; import java.util.Map; @@ -63,6 +64,7 @@ public class CodecService { * the raw unfiltered lucene default. 
useful for testing */ public static final String LUCENE_DEFAULT_CODEC = "lucene_default"; + private final CompositeCodecFactory compositeCodecFactory = new CompositeCodecFactory(); public CodecService(@Nullable MapperService mapperService, IndexSettings indexSettings, Logger logger) { final MapBuilder codecs = MapBuilder.newMapBuilder(); @@ -73,10 +75,16 @@ public CodecService(@Nullable MapperService mapperService, IndexSettings indexSe codecs.put(BEST_COMPRESSION_CODEC, new Lucene99Codec(Mode.BEST_COMPRESSION)); codecs.put(ZLIB, new Lucene99Codec(Mode.BEST_COMPRESSION)); } else { - codecs.put(DEFAULT_CODEC, new PerFieldMappingPostingFormatCodec(Mode.BEST_SPEED, mapperService, logger)); - codecs.put(LZ4, new PerFieldMappingPostingFormatCodec(Mode.BEST_SPEED, mapperService, logger)); - codecs.put(BEST_COMPRESSION_CODEC, new PerFieldMappingPostingFormatCodec(Mode.BEST_COMPRESSION, mapperService, logger)); - codecs.put(ZLIB, new PerFieldMappingPostingFormatCodec(Mode.BEST_COMPRESSION, mapperService, logger)); + // CompositeCodec still delegates to PerFieldMappingPostingFormatCodec + // We can still support all the compression codecs when composite index is present + if (mapperService.isCompositeIndexPresent()) { + codecs.putAll(compositeCodecFactory.getCompositeIndexCodecs(mapperService, logger)); + } else { + codecs.put(DEFAULT_CODEC, new PerFieldMappingPostingFormatCodec(Mode.BEST_SPEED, mapperService, logger)); + codecs.put(LZ4, new PerFieldMappingPostingFormatCodec(Mode.BEST_SPEED, mapperService, logger)); + codecs.put(BEST_COMPRESSION_CODEC, new PerFieldMappingPostingFormatCodec(Mode.BEST_COMPRESSION, mapperService, logger)); + codecs.put(ZLIB, new PerFieldMappingPostingFormatCodec(Mode.BEST_COMPRESSION, mapperService, logger)); + } } codecs.put(LUCENE_DEFAULT_CODEC, Codec.getDefault()); for (String codec : Codec.availableCodecs()) { diff --git a/server/src/main/java/org/opensearch/index/codec/composite/Composite99Codec.java b/server/src/main/java/org/opensearch/index/codec/composite/Composite99Codec.java new file mode 100644 index 0000000000000..de04944e67cd2 --- /dev/null +++ b/server/src/main/java/org/opensearch/index/codec/composite/Composite99Codec.java @@ -0,0 +1,57 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +package org.opensearch.index.codec.composite; + +import org.apache.logging.log4j.Logger; +import org.apache.lucene.codecs.Codec; +import org.apache.lucene.codecs.DocValuesFormat; +import org.apache.lucene.codecs.FilterCodec; +import org.apache.lucene.codecs.lucene99.Lucene99Codec; +import org.opensearch.common.annotation.ExperimentalApi; +import org.opensearch.index.codec.PerFieldMappingPostingFormatCodec; +import org.opensearch.index.mapper.MapperService; + +/** + * Extends the Codec to support new file formats for composite indices eg: star tree index + * based on the mappings. 
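+ * For example, a mapping that defines a star_tree field routes doc values for its dimension and metric fields through Composite99DocValuesFormat, where the star tree structures can be built alongside the regular doc values.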
+ * + * @opensearch.experimental + */ +@ExperimentalApi +public class Composite99Codec extends FilterCodec { + public static final String COMPOSITE_INDEX_CODEC_NAME = "Composite99Codec"; + private final MapperService mapperService; + + // needed for SPI - this is used in reader path + public Composite99Codec() { + this(COMPOSITE_INDEX_CODEC_NAME, new Lucene99Codec(), null); + } + + public Composite99Codec(Lucene99Codec.Mode compressionMode, MapperService mapperService, Logger logger) { + this(COMPOSITE_INDEX_CODEC_NAME, new PerFieldMappingPostingFormatCodec(compressionMode, mapperService, logger), mapperService); + } + + /** + * Sole constructor. When subclassing this codec, create a no-arg ctor and pass the delegate codec and a unique name to + * this ctor. + * + * @param name name of the codec + * @param delegate codec delegate + * @param mapperService mapper service instance + */ + protected Composite99Codec(String name, Codec delegate, MapperService mapperService) { + super(name, delegate); + this.mapperService = mapperService; + } + + @Override + public DocValuesFormat docValuesFormat() { + return new Composite99DocValuesFormat(mapperService); + } +} diff --git a/server/src/main/java/org/opensearch/index/codec/composite/Composite99DocValuesFormat.java b/server/src/main/java/org/opensearch/index/codec/composite/Composite99DocValuesFormat.java new file mode 100644 index 0000000000000..216ed4f68f333 --- /dev/null +++ b/server/src/main/java/org/opensearch/index/codec/composite/Composite99DocValuesFormat.java @@ -0,0 +1,64 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +package org.opensearch.index.codec.composite; + +import org.apache.lucene.codecs.DocValuesConsumer; +import org.apache.lucene.codecs.DocValuesFormat; +import org.apache.lucene.codecs.DocValuesProducer; +import org.apache.lucene.codecs.lucene90.Lucene90DocValuesFormat; +import org.apache.lucene.index.SegmentReadState; +import org.apache.lucene.index.SegmentWriteState; +import org.opensearch.common.annotation.ExperimentalApi; +import org.opensearch.index.mapper.MapperService; + +import java.io.IOException; + +/** + * DocValues format to handle composite indices + * + * @opensearch.experimental + */ +@ExperimentalApi +public class Composite99DocValuesFormat extends DocValuesFormat { + /** + * Creates a new docvalues format. + * + *
<p>
The provided name will be written into the index segment in some configurations (such as + * when using {@code PerFieldDocValuesFormat}): in such configurations, for the segment to be read + * this class should be registered with Java's SPI mechanism (registered in META-INF/ of your jar + * file, etc). + */ + private final DocValuesFormat delegate; + private final MapperService mapperService; + + // needed for SPI + public Composite99DocValuesFormat() { + this(new Lucene90DocValuesFormat(), null); + } + + public Composite99DocValuesFormat(MapperService mapperService) { + this(new Lucene90DocValuesFormat(), mapperService); + } + + public Composite99DocValuesFormat(DocValuesFormat delegate, MapperService mapperService) { + super(delegate.getName()); + this.delegate = delegate; + this.mapperService = mapperService; + } + + @Override + public DocValuesConsumer fieldsConsumer(SegmentWriteState state) throws IOException { + return new Composite99DocValuesWriter(delegate.fieldsConsumer(state), state, mapperService); + } + + @Override + public DocValuesProducer fieldsProducer(SegmentReadState state) throws IOException { + return new Composite99DocValuesReader(delegate.fieldsProducer(state), state); + } +} diff --git a/server/src/main/java/org/opensearch/index/codec/composite/Composite99DocValuesReader.java b/server/src/main/java/org/opensearch/index/codec/composite/Composite99DocValuesReader.java new file mode 100644 index 0000000000000..82c844088cfd4 --- /dev/null +++ b/server/src/main/java/org/opensearch/index/codec/composite/Composite99DocValuesReader.java @@ -0,0 +1,89 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. 
+ */ + +package org.opensearch.index.codec.composite; + +import org.apache.lucene.codecs.DocValuesProducer; +import org.apache.lucene.index.BinaryDocValues; +import org.apache.lucene.index.FieldInfo; +import org.apache.lucene.index.NumericDocValues; +import org.apache.lucene.index.SegmentReadState; +import org.apache.lucene.index.SortedDocValues; +import org.apache.lucene.index.SortedNumericDocValues; +import org.apache.lucene.index.SortedSetDocValues; +import org.opensearch.common.annotation.ExperimentalApi; +import org.opensearch.index.mapper.CompositeMappedFieldType; + +import java.io.IOException; +import java.util.List; + +/** + * Reader for star tree index and star tree doc values from the segments + * + * @opensearch.experimental + */ +@ExperimentalApi +public class Composite99DocValuesReader extends DocValuesProducer implements CompositeIndexReader { + private DocValuesProducer delegate; + + public Composite99DocValuesReader(DocValuesProducer producer, SegmentReadState state) throws IOException { + this.delegate = producer; + // TODO : read star tree files + } + + @Override + public NumericDocValues getNumeric(FieldInfo field) throws IOException { + return delegate.getNumeric(field); + } + + @Override + public BinaryDocValues getBinary(FieldInfo field) throws IOException { + return delegate.getBinary(field); + } + + @Override + public SortedDocValues getSorted(FieldInfo field) throws IOException { + return delegate.getSorted(field); + } + + @Override + public SortedNumericDocValues getSortedNumeric(FieldInfo field) throws IOException { + return delegate.getSortedNumeric(field); + } + + @Override + public SortedSetDocValues getSortedSet(FieldInfo field) throws IOException { + return delegate.getSortedSet(field); + } + + @Override + public void checkIntegrity() throws IOException { + delegate.checkIntegrity(); + // Todo : check integrity of composite index related [star tree] files + } + + @Override + public void close() throws IOException { + delegate.close(); + // Todo: close composite index related files [star tree] files + } + + @Override + public List getCompositeIndexFields() { + // todo : read from file formats and get the field names. + throw new UnsupportedOperationException(); + + } + + @Override + public CompositeIndexValues getCompositeIndexValues(String field, CompositeMappedFieldType.CompositeFieldType fieldType) + throws IOException { + // TODO : read compositeIndexValues [starTreeValues] from star tree files + throw new UnsupportedOperationException(); + } +} diff --git a/server/src/main/java/org/opensearch/index/codec/composite/Composite99DocValuesWriter.java b/server/src/main/java/org/opensearch/index/codec/composite/Composite99DocValuesWriter.java new file mode 100644 index 0000000000000..75bbf78dbdad2 --- /dev/null +++ b/server/src/main/java/org/opensearch/index/codec/composite/Composite99DocValuesWriter.java @@ -0,0 +1,114 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. 
+ */ + +package org.opensearch.index.codec.composite; + +import org.apache.lucene.codecs.DocValuesConsumer; +import org.apache.lucene.codecs.DocValuesProducer; +import org.apache.lucene.index.FieldInfo; +import org.apache.lucene.index.MergeState; +import org.apache.lucene.index.SegmentWriteState; +import org.opensearch.common.annotation.ExperimentalApi; +import org.opensearch.index.mapper.CompositeMappedFieldType; +import org.opensearch.index.mapper.MapperService; +import org.opensearch.index.mapper.StarTreeMapper; + +import java.io.IOException; +import java.util.HashMap; +import java.util.HashSet; +import java.util.Map; +import java.util.Set; +import java.util.concurrent.atomic.AtomicReference; + +/** + * This class writes the star tree index and star tree doc values + * based on the doc values structures of the original index + * + * @opensearch.experimental + */ +@ExperimentalApi +public class Composite99DocValuesWriter extends DocValuesConsumer { + private final DocValuesConsumer delegate; + private final SegmentWriteState state; + private final MapperService mapperService; + AtomicReference<MergeState> mergeState = new AtomicReference<>(); + private final Set<CompositeMappedFieldType> compositeMappedFieldTypes; + private final Set<String> compositeFieldSet; + + private final Map<String, DocValuesProducer> fieldProducerMap = new HashMap<>(); + + public Composite99DocValuesWriter(DocValuesConsumer delegate, SegmentWriteState segmentWriteState, MapperService mapperService) { + + this.delegate = delegate; + this.state = segmentWriteState; + this.mapperService = mapperService; + this.compositeMappedFieldTypes = mapperService.getCompositeFieldTypes(); + compositeFieldSet = new HashSet<>(); + for (CompositeMappedFieldType type : compositeMappedFieldTypes) { + compositeFieldSet.addAll(type.fields()); + } + } + + @Override + public void addNumericField(FieldInfo field, DocValuesProducer valuesProducer) throws IOException { + delegate.addNumericField(field, valuesProducer); + } + + @Override + public void addBinaryField(FieldInfo field, DocValuesProducer valuesProducer) throws IOException { + delegate.addBinaryField(field, valuesProducer); + } + + @Override + public void addSortedField(FieldInfo field, DocValuesProducer valuesProducer) throws IOException { + delegate.addSortedField(field, valuesProducer); + } + + @Override + public void addSortedNumericField(FieldInfo field, DocValuesProducer valuesProducer) throws IOException { + delegate.addSortedNumericField(field, valuesProducer); + // Perform this only during flush flow + if (mergeState.get() == null) { + createCompositeIndicesIfPossible(valuesProducer, field); + } + } + + @Override + public void addSortedSetField(FieldInfo field, DocValuesProducer valuesProducer) throws IOException { + delegate.addSortedSetField(field, valuesProducer); + } + + @Override + public void close() throws IOException { + delegate.close(); + } + + private void createCompositeIndicesIfPossible(DocValuesProducer valuesProducer, FieldInfo field) throws IOException { + if (compositeFieldSet.isEmpty()) return; + if (compositeFieldSet.contains(field.name)) { + fieldProducerMap.put(field.name, valuesProducer); + compositeFieldSet.remove(field.name); + } + // we have all the required fields to build composite fields + if (compositeFieldSet.isEmpty()) { + for (CompositeMappedFieldType mappedType : compositeMappedFieldTypes) { + if (mappedType instanceof StarTreeMapper.StarTreeFieldType) { + // TODO : Call StarTree builder + } + } + } + } + + @Override + public void merge(MergeState mergeState) throws IOException { +
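// Capture the merge state before delegating; addSortedNumericField consults it and skips star tree creation while merging, so composite structures are built only on flush. +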
this.mergeState.compareAndSet(null, mergeState); + super.merge(mergeState); + // TODO : handle merge star tree + // mergeStarTreeFields(mergeState); + } +} diff --git a/server/src/main/java/org/opensearch/index/codec/composite/CompositeCodecFactory.java b/server/src/main/java/org/opensearch/index/codec/composite/CompositeCodecFactory.java new file mode 100644 index 0000000000000..3acedc6a27d7f --- /dev/null +++ b/server/src/main/java/org/opensearch/index/codec/composite/CompositeCodecFactory.java @@ -0,0 +1,42 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +package org.opensearch.index.codec.composite; + +import org.apache.logging.log4j.Logger; +import org.apache.lucene.codecs.Codec; +import org.apache.lucene.codecs.lucene99.Lucene99Codec; +import org.opensearch.common.annotation.ExperimentalApi; +import org.opensearch.index.mapper.MapperService; + +import java.util.HashMap; +import java.util.Map; + +import static org.opensearch.index.codec.CodecService.BEST_COMPRESSION_CODEC; +import static org.opensearch.index.codec.CodecService.DEFAULT_CODEC; +import static org.opensearch.index.codec.CodecService.LZ4; +import static org.opensearch.index.codec.CodecService.ZLIB; + +/** + * Factory class to return the latest composite codec for all the modes + * + * @opensearch.experimental + */ +@ExperimentalApi +public class CompositeCodecFactory { + public CompositeCodecFactory() {} + + public Map getCompositeIndexCodecs(MapperService mapperService, Logger logger) { + Map codecs = new HashMap<>(); + codecs.put(DEFAULT_CODEC, new Composite99Codec(Lucene99Codec.Mode.BEST_SPEED, mapperService, logger)); + codecs.put(LZ4, new Composite99Codec(Lucene99Codec.Mode.BEST_SPEED, mapperService, logger)); + codecs.put(BEST_COMPRESSION_CODEC, new Composite99Codec(Lucene99Codec.Mode.BEST_COMPRESSION, mapperService, logger)); + codecs.put(ZLIB, new Composite99Codec(Lucene99Codec.Mode.BEST_COMPRESSION, mapperService, logger)); + return codecs; + } +} diff --git a/server/src/main/java/org/opensearch/index/codec/composite/CompositeIndexReader.java b/server/src/main/java/org/opensearch/index/codec/composite/CompositeIndexReader.java new file mode 100644 index 0000000000000..d02438b75377d --- /dev/null +++ b/server/src/main/java/org/opensearch/index/codec/composite/CompositeIndexReader.java @@ -0,0 +1,34 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. 
+ */ + +package org.opensearch.index.codec.composite; + +import org.opensearch.common.annotation.ExperimentalApi; +import org.opensearch.index.mapper.CompositeMappedFieldType; + +import java.io.IOException; +import java.util.List; + +/** + * Interface that abstracts the functionality to read composite index structures from the segment + * + * @opensearch.experimental + */ +@ExperimentalApi +public interface CompositeIndexReader { + /** + * Get list of composite index fields from the segment + * + */ + List getCompositeIndexFields(); + + /** + * Get composite index values based on the field name and the field type + */ + CompositeIndexValues getCompositeIndexValues(String field, CompositeMappedFieldType.CompositeFieldType fieldType) throws IOException; +} diff --git a/server/src/main/java/org/opensearch/index/codec/composite/CompositeIndexValues.java b/server/src/main/java/org/opensearch/index/codec/composite/CompositeIndexValues.java new file mode 100644 index 0000000000000..f8848aceab343 --- /dev/null +++ b/server/src/main/java/org/opensearch/index/codec/composite/CompositeIndexValues.java @@ -0,0 +1,21 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +package org.opensearch.index.codec.composite; + +import org.opensearch.common.annotation.ExperimentalApi; + +/** + * Interface for composite index values + * + * @opensearch.experimental + */ +@ExperimentalApi +public interface CompositeIndexValues { + CompositeIndexValues getValues(); +} diff --git a/server/src/main/java/org/opensearch/index/codec/composite/datacube/startree/StarTreeValues.java b/server/src/main/java/org/opensearch/index/codec/composite/datacube/startree/StarTreeValues.java new file mode 100644 index 0000000000000..2a5b96ce2620a --- /dev/null +++ b/server/src/main/java/org/opensearch/index/codec/composite/datacube/startree/StarTreeValues.java @@ -0,0 +1,35 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. 
+ */ + +package org.opensearch.index.codec.composite.datacube.startree; + +import org.opensearch.common.annotation.ExperimentalApi; +import org.opensearch.index.codec.composite.CompositeIndexValues; + +import java.util.List; + +/** + * Concrete class that holds the star tree associated values from the segment + * + * @opensearch.experimental + */ +@ExperimentalApi +public class StarTreeValues implements CompositeIndexValues { + private final List<String> dimensionsOrder; + + // TODO : come up with full set of values such as dimensions and metrics doc values + star tree + public StarTreeValues(List<String> dimensionsOrder) { + super(); + this.dimensionsOrder = List.copyOf(dimensionsOrder); + } + + @Override + public CompositeIndexValues getValues() { + return this; + } +} diff --git a/server/src/main/java/org/opensearch/index/codec/composite/datacube/startree/package-info.java b/server/src/main/java/org/opensearch/index/codec/composite/datacube/startree/package-info.java new file mode 100644 index 0000000000000..67808ad51289a --- /dev/null +++ b/server/src/main/java/org/opensearch/index/codec/composite/datacube/startree/package-info.java @@ -0,0 +1,12 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +/** + * classes responsible for handling all star tree structures and operations as part of codec + */ +package org.opensearch.index.codec.composite.datacube.startree; diff --git a/server/src/main/java/org/opensearch/index/codec/composite/package-info.java b/server/src/main/java/org/opensearch/index/codec/composite/package-info.java new file mode 100644 index 0000000000000..5d15e99c00975 --- /dev/null +++ b/server/src/main/java/org/opensearch/index/codec/composite/package-info.java @@ -0,0 +1,12 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license.
+ */ + +/** + * classes responsible for handling all composite index codecs and operations + */ +package org.opensearch.index.codec.composite; diff --git a/server/src/main/java/org/opensearch/index/mapper/CompositeMappedFieldType.java b/server/src/main/java/org/opensearch/index/mapper/CompositeMappedFieldType.java index f52ce29a86dd2..e067e70621304 100644 --- a/server/src/main/java/org/opensearch/index/mapper/CompositeMappedFieldType.java +++ b/server/src/main/java/org/opensearch/index/mapper/CompositeMappedFieldType.java @@ -45,7 +45,10 @@ public CompositeMappedFieldType(String name, List fields, CompositeField /** * Supported composite field types + * + * @opensearch.experimental */ + @ExperimentalApi public enum CompositeFieldType { STAR_TREE("star_tree"); diff --git a/server/src/main/java/org/opensearch/index/mapper/MapperService.java b/server/src/main/java/org/opensearch/index/mapper/MapperService.java index c2e7411a3b47a..530a3092a5aa7 100644 --- a/server/src/main/java/org/opensearch/index/mapper/MapperService.java +++ b/server/src/main/java/org/opensearch/index/mapper/MapperService.java @@ -226,6 +226,8 @@ public enum MergeReason { private final BooleanSupplier idFieldDataEnabled; + private volatile Set compositeMappedFieldTypes; + public MapperService( IndexSettings indexSettings, IndexAnalyzers indexAnalyzers, @@ -542,6 +544,9 @@ private synchronized Map internalMerge(DocumentMapper ma } assert results.values().stream().allMatch(this::assertSerialization); + + // initialize composite fields post merge + this.compositeMappedFieldTypes = getCompositeFieldTypesFromMapper(); return results; } @@ -655,6 +660,10 @@ public boolean isCompositeIndexPresent() { } public Set getCompositeFieldTypes() { + return compositeMappedFieldTypes; + } + + private Set getCompositeFieldTypesFromMapper() { Set compositeMappedFieldTypes = new HashSet<>(); if (this.mapper == null) { return Collections.emptySet(); diff --git a/server/src/main/resources/META-INF/services/org.apache.lucene.codecs.Codec b/server/src/main/resources/META-INF/services/org.apache.lucene.codecs.Codec new file mode 100644 index 0000000000000..e030a813373c1 --- /dev/null +++ b/server/src/main/resources/META-INF/services/org.apache.lucene.codecs.Codec @@ -0,0 +1 @@ +org.opensearch.index.codec.composite.Composite99Codec diff --git a/server/src/test/java/org/opensearch/index/codec/CodecTests.java b/server/src/test/java/org/opensearch/index/codec/CodecTests.java index b31edd79411d0..7146b7dc51753 100644 --- a/server/src/test/java/org/opensearch/index/codec/CodecTests.java +++ b/server/src/test/java/org/opensearch/index/codec/CodecTests.java @@ -48,6 +48,7 @@ import org.opensearch.env.Environment; import org.opensearch.index.IndexSettings; import org.opensearch.index.analysis.IndexAnalyzers; +import org.opensearch.index.codec.composite.Composite99Codec; import org.opensearch.index.engine.EngineConfig; import org.opensearch.index.mapper.MapperService; import org.opensearch.index.similarity.SimilarityService; @@ -59,6 +60,8 @@ import java.io.IOException; import java.util.Collections; +import org.mockito.Mockito; + import static org.opensearch.index.engine.EngineConfig.INDEX_CODEC_COMPRESSION_LEVEL_SETTING; import static org.hamcrest.Matchers.instanceOf; @@ -76,23 +79,52 @@ public void testDefault() throws Exception { assertStoredFieldsCompressionEquals(Lucene99Codec.Mode.BEST_SPEED, codec); } + public void testDefaultWithCompositeIndex() throws Exception { + Codec codec = createCodecService(false, true).codec("default"); + 
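// With a composite index mapped, the default codec should still use BEST_SPEED stored fields while being wrapped as Composite99Codec. +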
assertStoredFieldsCompressionEquals(Lucene99Codec.Mode.BEST_SPEED, codec); + assert codec instanceof Composite99Codec; + } + public void testBestCompression() throws Exception { Codec codec = createCodecService(false).codec("best_compression"); assertStoredFieldsCompressionEquals(Lucene99Codec.Mode.BEST_COMPRESSION, codec); } + public void testBestCompressionWithCompositeIndex() throws Exception { + Codec codec = createCodecService(false, true).codec("best_compression"); + assertStoredFieldsCompressionEquals(Lucene99Codec.Mode.BEST_COMPRESSION, codec); + assert codec instanceof Composite99Codec; + } + public void testLZ4() throws Exception { Codec codec = createCodecService(false).codec("lz4"); assertStoredFieldsCompressionEquals(Lucene99Codec.Mode.BEST_SPEED, codec); assert codec instanceof PerFieldMappingPostingFormatCodec; } + public void testLZ4WithCompositeIndex() throws Exception { + Codec codec = createCodecService(false, true).codec("lz4"); + assertStoredFieldsCompressionEquals(Lucene99Codec.Mode.BEST_SPEED, codec); + assert codec instanceof Composite99Codec; + } + public void testZlib() throws Exception { Codec codec = createCodecService(false).codec("zlib"); assertStoredFieldsCompressionEquals(Lucene99Codec.Mode.BEST_COMPRESSION, codec); assert codec instanceof PerFieldMappingPostingFormatCodec; } + public void testZlibWithCompositeIndex() throws Exception { + Codec codec = createCodecService(false, true).codec("zlib"); + assertStoredFieldsCompressionEquals(Lucene99Codec.Mode.BEST_COMPRESSION, codec); + assert codec instanceof Composite99Codec; + } + + public void testResolveDefaultCodecsWithCompositeIndex() throws Exception { + CodecService codecService = createCodecService(false, true); + assertThat(codecService.codec("default"), instanceOf(Composite99Codec.class)); + } + public void testBestCompressionWithCompressionLevel() { final Settings settings = Settings.builder() .put(INDEX_CODEC_COMPRESSION_LEVEL_SETTING.getKey(), randomIntBetween(1, 6)) @@ -150,10 +182,17 @@ private void assertStoredFieldsCompressionEquals(Lucene99Codec.Mode expected, Co } private CodecService createCodecService(boolean isMapperServiceNull) throws IOException { + return createCodecService(isMapperServiceNull, false); + } + + private CodecService createCodecService(boolean isMapperServiceNull, boolean isCompositeIndexPresent) throws IOException { Settings nodeSettings = Settings.builder().put(Environment.PATH_HOME_SETTING.getKey(), createTempDir()).build(); if (isMapperServiceNull) { return new CodecService(null, IndexSettingsModule.newIndexSettings("_na", nodeSettings), LogManager.getLogger("test")); } + if (isCompositeIndexPresent) { + return buildCodecServiceWithCompositeIndex(nodeSettings); + } return buildCodecService(nodeSettings); } @@ -176,6 +215,14 @@ private CodecService buildCodecService(Settings nodeSettings) throws IOException return new CodecService(service, indexSettings, LogManager.getLogger("test")); } + private CodecService buildCodecServiceWithCompositeIndex(Settings nodeSettings) throws IOException { + + IndexSettings indexSettings = IndexSettingsModule.newIndexSettings("_na", nodeSettings); + MapperService service = Mockito.mock(MapperService.class); + Mockito.when(service.isCompositeIndexPresent()).thenReturn(true); + return new CodecService(service, indexSettings, LogManager.getLogger("test")); + } + private SegmentReader getSegmentReader(Codec codec) throws IOException { Directory dir = newDirectory(); IndexWriterConfig iwc = newIndexWriterConfig(null); diff --git 
a/server/src/test/java/org/opensearch/index/codec/composite/datacube/startree/StarTreeDocValuesFormatTests.java b/server/src/test/java/org/opensearch/index/codec/composite/datacube/startree/StarTreeDocValuesFormatTests.java new file mode 100644 index 0000000000000..6c6d26656e4de --- /dev/null +++ b/server/src/test/java/org/opensearch/index/codec/composite/datacube/startree/StarTreeDocValuesFormatTests.java @@ -0,0 +1,110 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +package org.opensearch.index.codec.composite.datacube.startree; + +import org.apache.logging.log4j.LogManager; +import org.apache.logging.log4j.Logger; +import org.apache.lucene.codecs.Codec; +import org.apache.lucene.codecs.lucene99.Lucene99Codec; +import org.apache.lucene.document.Document; +import org.apache.lucene.document.SortedNumericDocValuesField; +import org.apache.lucene.index.IndexWriterConfig; +import org.apache.lucene.store.Directory; +import org.apache.lucene.tests.index.BaseDocValuesFormatTestCase; +import org.apache.lucene.tests.index.RandomIndexWriter; +import org.apache.lucene.tests.util.LuceneTestCase; +import org.opensearch.common.Rounding; +import org.opensearch.index.codec.composite.Composite99Codec; +import org.opensearch.index.compositeindex.datacube.DateDimension; +import org.opensearch.index.compositeindex.datacube.Dimension; +import org.opensearch.index.compositeindex.datacube.Metric; +import org.opensearch.index.compositeindex.datacube.MetricStat; +import org.opensearch.index.compositeindex.datacube.NumericDimension; +import org.opensearch.index.compositeindex.datacube.startree.StarTreeField; +import org.opensearch.index.compositeindex.datacube.startree.StarTreeFieldConfiguration; +import org.opensearch.index.mapper.MapperService; +import org.opensearch.index.mapper.StarTreeMapper; + +import java.io.IOException; +import java.util.ArrayList; +import java.util.Collections; +import java.util.List; +import java.util.Set; + +import org.mockito.Mockito; + +/** + * Star tree doc values Lucene tests + */ +@LuceneTestCase.SuppressSysoutChecks(bugUrl = "we log a lot on purpose") +public class StarTreeDocValuesFormatTests extends BaseDocValuesFormatTestCase { + @Override + protected Codec getCodec() { + MapperService service = Mockito.mock(MapperService.class); + Mockito.when(service.getCompositeFieldTypes()).thenReturn(Set.of(getStarTreeFieldType())); + final Logger testLogger = LogManager.getLogger(StarTreeDocValuesFormatTests.class); + return new Composite99Codec(Lucene99Codec.Mode.BEST_SPEED, service, testLogger); + } + + private StarTreeMapper.StarTreeFieldType getStarTreeFieldType() { + List m1 = new ArrayList<>(); + m1.add(MetricStat.MAX); + Metric metric = new Metric("sndv", m1); + List d1CalendarIntervals = new ArrayList<>(); + d1CalendarIntervals.add(Rounding.DateTimeUnit.HOUR_OF_DAY); + StarTreeField starTreeField = getStarTreeField(d1CalendarIntervals, metric); + + return new StarTreeMapper.StarTreeFieldType("star_tree", starTreeField); + } + + private static StarTreeField getStarTreeField(List d1CalendarIntervals, Metric metric1) { + DateDimension d1 = new DateDimension("field", d1CalendarIntervals); + NumericDimension d2 = new NumericDimension("dv"); + + List metrics = List.of(metric1); + List dims = List.of(d1, d2); + StarTreeFieldConfiguration config = new StarTreeFieldConfiguration( + 100, + Collections.emptySet(), + 
StarTreeFieldConfiguration.StarTreeBuildMode.OFF_HEAP + ); + + return new StarTreeField("starTree", dims, metrics, config); + } + + public void testStarTreeDocValues() throws IOException { + Directory directory = newDirectory(); + IndexWriterConfig conf = newIndexWriterConfig(null); + conf.setMergePolicy(newLogMergePolicy()); + RandomIndexWriter iw = new RandomIndexWriter(random(), directory, conf); + Document doc = new Document(); + doc.add(new SortedNumericDocValuesField("sndv", 1)); + doc.add(new SortedNumericDocValuesField("dv", 1)); + doc.add(new SortedNumericDocValuesField("field", 1)); + iw.addDocument(doc); + doc.add(new SortedNumericDocValuesField("sndv", 1)); + doc.add(new SortedNumericDocValuesField("dv", 1)); + doc.add(new SortedNumericDocValuesField("field", 1)); + iw.addDocument(doc); + iw.forceMerge(1); + doc.add(new SortedNumericDocValuesField("sndv", 2)); + doc.add(new SortedNumericDocValuesField("dv", 2)); + doc.add(new SortedNumericDocValuesField("field", 2)); + iw.addDocument(doc); + doc.add(new SortedNumericDocValuesField("sndv", 2)); + doc.add(new SortedNumericDocValuesField("dv", 2)); + doc.add(new SortedNumericDocValuesField("field", 2)); + iw.addDocument(doc); + iw.forceMerge(1); + iw.close(); + + // TODO : validate star tree structures that got created + directory.close(); + } +} From ed1d85216014aeb861a18645557f52a4b4ca7694 Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Mon, 8 Jul 2024 10:19:29 -0400 Subject: [PATCH 036/167] Bump com.github.spullara.mustache.java:compiler from 0.9.13 to 0.9.14 in /modules/lang-mustache (#14672) * Bump com.github.spullara.mustache.java:compiler Bumps [com.github.spullara.mustache.java:compiler](https://github.com/spullara/mustache.java) from 0.9.13 to 0.9.14. - [Commits](https://github.com/spullara/mustache.java/compare/mustache.java-0.9.13...mustache.java-0.9.14) --- updated-dependencies: - dependency-name: com.github.spullara.mustache.java:compiler dependency-type: direct:production update-type: version-update:semver-patch ... 
Signed-off-by: dependabot[bot] * Updating SHAs Signed-off-by: dependabot[bot] * Update changelog Signed-off-by: dependabot[bot] --------- Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: dependabot[bot] --- CHANGELOG.md | 1 + modules/lang-mustache/build.gradle | 2 +- modules/lang-mustache/licenses/compiler-0.9.13.jar.sha1 | 1 - modules/lang-mustache/licenses/compiler-0.9.14.jar.sha1 | 1 + 4 files changed, 3 insertions(+), 2 deletions(-) delete mode 100644 modules/lang-mustache/licenses/compiler-0.9.13.jar.sha1 create mode 100644 modules/lang-mustache/licenses/compiler-0.9.14.jar.sha1 diff --git a/CHANGELOG.md b/CHANGELOG.md index 1f7d1ea5b3d19..96b0d49c7e6ad 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -28,6 +28,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), - Bump `azure-identity` from 1.11.4 to 1.13.0, Bump `msal4j` from 1.14.3 to 1.15.1, Bump `msal4j-persistence-extension` from 1.2.0 to 1.3.0 ([#14506](https://github.com/opensearch-project/OpenSearch/pull/14506)) - Bump `com.azure:azure-storage-common` from 12.21.2 to 12.25.1 ([#14517](https://github.com/opensearch-project/OpenSearch/pull/14517)) - Bump `com.microsoft.azure:msal4j` from 1.15.1 to 1.16.0 ([#14610](https://github.com/opensearch-project/OpenSearch/pull/14610)) +- Bump `com.github.spullara.mustache.java:compiler` from 0.9.13 to 0.9.14 ([#14672](https://github.com/opensearch-project/OpenSearch/pull/14672)) ### Changed - [Tiered Caching] Move query recomputation logic outside write lock ([#14187](https://github.com/opensearch-project/OpenSearch/pull/14187)) diff --git a/modules/lang-mustache/build.gradle b/modules/lang-mustache/build.gradle index bcf5c07ea8c64..a836124f94b41 100644 --- a/modules/lang-mustache/build.gradle +++ b/modules/lang-mustache/build.gradle @@ -38,7 +38,7 @@ opensearchplugin { } dependencies { - api "com.github.spullara.mustache.java:compiler:0.9.13" + api "com.github.spullara.mustache.java:compiler:0.9.14" } restResources { diff --git a/modules/lang-mustache/licenses/compiler-0.9.13.jar.sha1 b/modules/lang-mustache/licenses/compiler-0.9.13.jar.sha1 deleted file mode 100644 index 70d53aac260eb..0000000000000 --- a/modules/lang-mustache/licenses/compiler-0.9.13.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -60666500a7dce7a5d3e17c09b46ea6f037192bd5 \ No newline at end of file diff --git a/modules/lang-mustache/licenses/compiler-0.9.14.jar.sha1 b/modules/lang-mustache/licenses/compiler-0.9.14.jar.sha1 new file mode 100644 index 0000000000000..29069ac90817a --- /dev/null +++ b/modules/lang-mustache/licenses/compiler-0.9.14.jar.sha1 @@ -0,0 +1 @@ +e6df8b5aabb80d6eb6d8fef312a56d66b7659ba6 \ No newline at end of file From 41fa0855faf1913b0f5334398578cd6e2307d853 Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Mon, 8 Jul 2024 11:29:48 -0400 Subject: [PATCH 037/167] Bump net.minidev:accessors-smart from 2.5.0 to 2.5.1 in /plugins/repository-azure (#14673) * Bump net.minidev:accessors-smart in /plugins/repository-azure Bumps [net.minidev:accessors-smart](https://github.com/netplex/json-smart-v2) from 2.5.0 to 2.5.1. 
- [Release notes](https://github.com/netplex/json-smart-v2/releases) - [Commits](https://github.com/netplex/json-smart-v2/compare/2.5.0...2.5.1) --- updated-dependencies: - dependency-name: net.minidev:accessors-smart dependency-type: direct:production update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] * Updating SHAs Signed-off-by: dependabot[bot] * Update changelog Signed-off-by: dependabot[bot] --------- Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: dependabot[bot] --- CHANGELOG.md | 1 + plugins/repository-azure/build.gradle | 2 +- .../repository-azure/licenses/accessors-smart-2.5.0.jar.sha1 | 1 - .../repository-azure/licenses/accessors-smart-2.5.1.jar.sha1 | 1 + 4 files changed, 3 insertions(+), 2 deletions(-) delete mode 100644 plugins/repository-azure/licenses/accessors-smart-2.5.0.jar.sha1 create mode 100644 plugins/repository-azure/licenses/accessors-smart-2.5.1.jar.sha1 diff --git a/CHANGELOG.md b/CHANGELOG.md index 96b0d49c7e6ad..cfb12ee853a8f 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -29,6 +29,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), - Bump `com.azure:azure-storage-common` from 12.21.2 to 12.25.1 ([#14517](https://github.com/opensearch-project/OpenSearch/pull/14517)) - Bump `com.microsoft.azure:msal4j` from 1.15.1 to 1.16.0 ([#14610](https://github.com/opensearch-project/OpenSearch/pull/14610)) - Bump `com.github.spullara.mustache.java:compiler` from 0.9.13 to 0.9.14 ([#14672](https://github.com/opensearch-project/OpenSearch/pull/14672)) +- Bump `net.minidev:accessors-smart` from 2.5.0 to 2.5.1 ([#14673](https://github.com/opensearch-project/OpenSearch/pull/14673)) ### Changed - [Tiered Caching] Move query recomputation logic outside write lock ([#14187](https://github.com/opensearch-project/OpenSearch/pull/14187)) diff --git a/plugins/repository-azure/build.gradle b/plugins/repository-azure/build.gradle index 13b711019ff2a..0f822a02e05d8 100644 --- a/plugins/repository-azure/build.gradle +++ b/plugins/repository-azure/build.gradle @@ -69,7 +69,7 @@ dependencies { // Both msal4j:1.14.3 and oauth2-oidc-sdk:11.9.1 has compile dependency on different versions of json-smart, // selected the higher version which is 2.5.0 api 'net.minidev:json-smart:2.5.0' - api 'net.minidev:accessors-smart:2.5.0' + api 'net.minidev:accessors-smart:2.5.1' api "org.ow2.asm:asm:${versions.asm}" // End of transitive dependencies for azure-identity api "io.projectreactor.netty:reactor-netty-core:${versions.reactor_netty}" diff --git a/plugins/repository-azure/licenses/accessors-smart-2.5.0.jar.sha1 b/plugins/repository-azure/licenses/accessors-smart-2.5.0.jar.sha1 deleted file mode 100644 index 1578c94fcdc7b..0000000000000 --- a/plugins/repository-azure/licenses/accessors-smart-2.5.0.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -aca011492dfe9c26f4e0659028a4fe0970829dd8 \ No newline at end of file diff --git a/plugins/repository-azure/licenses/accessors-smart-2.5.1.jar.sha1 b/plugins/repository-azure/licenses/accessors-smart-2.5.1.jar.sha1 new file mode 100644 index 0000000000000..8f7452437323d --- /dev/null +++ b/plugins/repository-azure/licenses/accessors-smart-2.5.1.jar.sha1 @@ -0,0 +1 @@ +19b820261eb2e7de7d5bde11d1c06e4501dd7e5f \ No newline at end of file From 0684342b9e3711118614f38fb0648d8e6428477e Mon Sep 17 00:00:00 2001 From: Kaushal Kumar Date: Mon, 8 Jul 2024 14:39:39 -0700 Subject: [PATCH 
038/167] Adding QueryGroup schema (#13669) * rebase with opensearch/main Signed-off-by: Kaushal Kumar * add resourceLimitGroupId propagation logic from coordinator to data nodes Signed-off-by: Kaushal Kumar * add sandbox schema Signed-off-by: Kaushal Kumar * add resourceLimitGroupTests Signed-off-by: Kaushal Kumar * add resourceLimitGroupMetadata tests Signed-off-by: Kaushal Kumar * run spotlessApply Signed-off-by: Kaushal Kumar * add mode field in ResourceLimitGroup schema Signed-off-by: Kaushal Kumar * fix breaking testcases Signed-off-by: Kaushal Kumar * add task cancellation skeleton Signed-off-by: Kaushal Kumar * add multitenant labels in searchSource builder Signed-off-by: Kaushal Kumar * write custom xcontent parser for ResourceLimitGroup Signed-off-by: Kaushal Kumar * remove unrelated changes Signed-off-by: Kaushal Kumar * remove non-existing import fro cluster settings Signed-off-by: Kaushal Kumar * remove non releated changes Signed-off-by: Kaushal Kumar * add _id as the resourceLimitGroup key Signed-off-by: Kaushal Kumar * add change to register resource limit group metadata Signed-off-by: Kaushal Kumar * add updatedAt in resource limit group Signed-off-by: Kaushal Kumar * rename resourceLimitGroup to queryGroup Signed-off-by: Kaushal Kumar * address the comments on PR Signed-off-by: Kaushal Kumar * rename the mode member var to resiliency mode Signed-off-by: Kaushal Kumar * address comments Signed-off-by: Kaushal Kumar * add change in CHANGELOG Signed-off-by: Kaushal Kumar * add tests for custom namedWritable QueryGroupMetadata Signed-off-by: Kaushal Kumar * structure resourceLimits into an object Signed-off-by: Kaushal Kumar * add QueryGroup.toXContent test case Signed-off-by: Kaushal Kumar * fix precommit errors Signed-off-by: Kaushal Kumar * fix precommit errors Signed-off-by: Kaushal Kumar * fix assemble errors Signed-off-by: Kaushal Kumar * fix checkstyle errors Signed-off-by: Kaushal Kumar * address comments Signed-off-by: Kaushal Kumar --------- Signed-off-by: Kaushal Kumar --- CHANGELOG.md | 1 + .../org/opensearch/cluster/ClusterModule.java | 3 + .../opensearch/cluster/metadata/Metadata.java | 19 ++ .../cluster/metadata/QueryGroup.java | 317 ++++++++++++++++++ .../cluster/metadata/QueryGroupMetadata.java | 185 ++++++++++ .../org/opensearch/search/ResourceType.java | 14 +- .../SearchBackpressureService.java | 4 +- .../cluster/ClusterModuleTests.java | 10 + .../metadata/QueryGroupMetadataTests.java | 93 +++++ .../cluster/metadata/QueryGroupTests.java | 158 +++++++++ .../SearchBackpressureServiceTests.java | 12 +- .../trackers/NodeDuressTrackersTests.java | 8 +- 12 files changed, 810 insertions(+), 14 deletions(-) create mode 100644 server/src/main/java/org/opensearch/cluster/metadata/QueryGroup.java create mode 100644 server/src/main/java/org/opensearch/cluster/metadata/QueryGroupMetadata.java create mode 100644 server/src/test/java/org/opensearch/cluster/metadata/QueryGroupMetadataTests.java create mode 100644 server/src/test/java/org/opensearch/cluster/metadata/QueryGroupTests.java diff --git a/CHANGELOG.md b/CHANGELOG.md index cfb12ee853a8f..4d0990db31d20 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -9,6 +9,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), - [Remote Store] Rate limiter for remote store low priority uploads ([#14374](https://github.com/opensearch-project/OpenSearch/pull/14374/)) - Apply the date histogram rewrite optimization to range aggregation 
([#13865](https://github.com/opensearch-project/OpenSearch/pull/13865)) - [Writable Warm] Add composite directory implementation and integrate it with FileCache ([12782](https://github.com/opensearch-project/OpenSearch/pull/12782)) +- [Workload Management] Add QueryGroup schema ([13669](https://github.com/opensearch-project/OpenSearch/pull/13669)) - Add batching supported processor base type AbstractBatchingProcessor ([#14554](https://github.com/opensearch-project/OpenSearch/pull/14554)) - Fix race condition while parsing derived fields from search definition ([14445](https://github.com/opensearch-project/OpenSearch/pull/14445)) - Add allowlist setting for ingest-common and search-pipeline-common processors ([#14439](https://github.com/opensearch-project/OpenSearch/issues/14439)) diff --git a/server/src/main/java/org/opensearch/cluster/ClusterModule.java b/server/src/main/java/org/opensearch/cluster/ClusterModule.java index c7fd263bda56a..bb51c42252448 100644 --- a/server/src/main/java/org/opensearch/cluster/ClusterModule.java +++ b/server/src/main/java/org/opensearch/cluster/ClusterModule.java @@ -48,6 +48,7 @@ import org.opensearch.cluster.metadata.MetadataIndexTemplateService; import org.opensearch.cluster.metadata.MetadataMappingService; import org.opensearch.cluster.metadata.MetadataUpdateSettingsService; +import org.opensearch.cluster.metadata.QueryGroupMetadata; import org.opensearch.cluster.metadata.RepositoriesMetadata; import org.opensearch.cluster.metadata.ViewMetadata; import org.opensearch.cluster.metadata.WeightedRoutingMetadata; @@ -214,6 +215,8 @@ public static List getNamedWriteables() { DecommissionAttributeMetadata::new, DecommissionAttributeMetadata::readDiffFrom ); + + registerMetadataCustom(entries, QueryGroupMetadata.TYPE, QueryGroupMetadata::new, QueryGroupMetadata::readDiffFrom); // Task Status (not Diffable) entries.add(new Entry(Task.Status.class, PersistentTasksNodeService.Status.NAME, PersistentTasksNodeService.Status::new)); return entries; diff --git a/server/src/main/java/org/opensearch/cluster/metadata/Metadata.java b/server/src/main/java/org/opensearch/cluster/metadata/Metadata.java index e3f63b1c27b83..2a54f6444ffda 100644 --- a/server/src/main/java/org/opensearch/cluster/metadata/Metadata.java +++ b/server/src/main/java/org/opensearch/cluster/metadata/Metadata.java @@ -1368,6 +1368,25 @@ public Builder removeDataStream(String name) { return this; } + public Builder queryGroups(final Map queryGroups) { + this.customs.put(QueryGroupMetadata.TYPE, new QueryGroupMetadata(queryGroups)); + return this; + } + + public Builder put(final QueryGroup queryGroup) { + Objects.requireNonNull(queryGroup, "queryGroup should not be null"); + Map existing = new HashMap<>(getQueryGroups()); + existing.put(queryGroup.get_id(), queryGroup); + return queryGroups(existing); + } + + private Map getQueryGroups() { + return Optional.ofNullable(this.customs.get(QueryGroupMetadata.TYPE)) + .map(o -> (QueryGroupMetadata) o) + .map(QueryGroupMetadata::queryGroups) + .orElse(Collections.emptyMap()); + } + private Map getViews() { return Optional.ofNullable(customs.get(ViewMetadata.TYPE)) .map(o -> (ViewMetadata) o) diff --git a/server/src/main/java/org/opensearch/cluster/metadata/QueryGroup.java b/server/src/main/java/org/opensearch/cluster/metadata/QueryGroup.java new file mode 100644 index 0000000000000..beaab198073df --- /dev/null +++ b/server/src/main/java/org/opensearch/cluster/metadata/QueryGroup.java @@ -0,0 +1,317 @@ +/* + * 
SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +package org.opensearch.cluster.metadata; + +import org.opensearch.cluster.AbstractDiffable; +import org.opensearch.cluster.Diff; +import org.opensearch.common.UUIDs; +import org.opensearch.common.annotation.ExperimentalApi; +import org.opensearch.core.common.io.stream.StreamInput; +import org.opensearch.core.common.io.stream.StreamOutput; +import org.opensearch.core.xcontent.ToXContentObject; +import org.opensearch.core.xcontent.XContentBuilder; +import org.opensearch.core.xcontent.XContentParser; +import org.opensearch.search.ResourceType; +import org.joda.time.Instant; + +import java.io.IOException; +import java.util.HashMap; +import java.util.Map; +import java.util.Objects; + +/** + * Class to define the QueryGroup schema + * { + * "_id": "fafjafjkaf9ag8a9ga9g7ag0aagaga", + * "resourceLimits": { + * "memory": 0.4 + * }, + * "resiliency_mode": "enforced", + * "name": "analytics", + * "updatedAt": 4513232415 + * } + */ +@ExperimentalApi +public class QueryGroup extends AbstractDiffable implements ToXContentObject { + + private static final int MAX_CHARS_ALLOWED_IN_NAME = 50; + private final String name; + private final String _id; + private final ResiliencyMode resiliencyMode; + // It is an epoch in millis + private final long updatedAtInMillis; + private final Map resourceLimits; + + public QueryGroup(String name, ResiliencyMode resiliencyMode, Map resourceLimits) { + this(name, UUIDs.randomBase64UUID(), resiliencyMode, resourceLimits, Instant.now().getMillis()); + } + + public QueryGroup(String name, String _id, ResiliencyMode resiliencyMode, Map resourceLimits, long updatedAt) { + Objects.requireNonNull(name, "QueryGroup.name can't be null"); + Objects.requireNonNull(resourceLimits, "QueryGroup.resourceLimits can't be null"); + Objects.requireNonNull(resiliencyMode, "QueryGroup.resiliencyMode can't be null"); + Objects.requireNonNull(_id, "QueryGroup._id can't be null"); + + if (name.length() > MAX_CHARS_ALLOWED_IN_NAME) { + throw new IllegalArgumentException("QueryGroup.name shouldn't be more than " + MAX_CHARS_ALLOWED_IN_NAME + " chars long"); + } + + if (resourceLimits.isEmpty()) { + throw new IllegalArgumentException("QueryGroup.resourceLimits should at least have 1 resource limit"); + } + validateResourceLimits(resourceLimits); + if (!isValid(updatedAt)) { + throw new IllegalArgumentException("QueryGroup.updatedAtInMillis is not a valid epoch"); + } + + this.name = name; + this._id = _id; + this.resiliencyMode = resiliencyMode; + this.resourceLimits = resourceLimits; + this.updatedAtInMillis = updatedAt; + } + + private static boolean isValid(long updatedAt) { + long minValidTimestamp = Instant.ofEpochMilli(0L).getMillis(); + + // Use Instant.now() to get the current time in milliseconds since the epoch + long currentMillis = Instant.now().getMillis(); + + // Check if the timestamp is within a reasonable range + return minValidTimestamp <= updatedAt && updatedAt <= currentMillis; + } + + public QueryGroup(StreamInput in) throws IOException { + this( + in.readString(), + in.readString(), + ResiliencyMode.fromName(in.readString()), + in.readMap((i) -> ResourceType.fromName(i.readString()), StreamInput::readGenericValue), + in.readLong() + ); + } + + @Override + public void writeTo(StreamOutput out) throws IOException { + out.writeString(name); + out.writeString(_id); + out.writeString(resiliencyMode.getName()); + 
out.writeMap(resourceLimits, ResourceType::writeTo, StreamOutput::writeGenericValue); + out.writeLong(updatedAtInMillis); + } + + private void validateResourceLimits(Map resourceLimits) { + for (Map.Entry resource : resourceLimits.entrySet()) { + Double threshold = (Double) resource.getValue(); + Objects.requireNonNull(resource.getKey(), "resourceName can't be null"); + Objects.requireNonNull(threshold, "resource limit threshold for " + resource.getKey().getName() + " can't be null"); + + if (Double.compare(threshold, 1.0) > 0) { + throw new IllegalArgumentException("resource value should be less than or equal to 1.0"); + } + } + } + + @Override + public XContentBuilder toXContent(final XContentBuilder builder, final Params params) throws IOException { + builder.startObject(); + builder.field("_id", _id); + builder.field("name", name); + builder.field("resiliency_mode", resiliencyMode.getName()); + builder.field("updatedAt", updatedAtInMillis); + // write resource limits + builder.startObject("resourceLimits"); + for (ResourceType resourceType : ResourceType.values()) { + if (resourceLimits.containsKey(resourceType)) { + builder.field(resourceType.getName(), resourceLimits.get(resourceType)); + } + } + builder.endObject(); + + builder.endObject(); + return builder; + } + + public static QueryGroup fromXContent(final XContentParser parser) throws IOException { + if (parser.currentToken() == null) { // fresh parser? move to the first token + parser.nextToken(); + } + + Builder builder = builder(); + + XContentParser.Token token = parser.currentToken(); + + if (token != XContentParser.Token.START_OBJECT) { + throw new IllegalArgumentException("Expected START_OBJECT token but found [" + parser.currentName() + "]"); + } + + String fieldName = ""; + // Map to hold resources + final Map resourceLimits = new HashMap<>(); + while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { + if (token == XContentParser.Token.FIELD_NAME) { + fieldName = parser.currentName(); + } else if (token.isValue()) { + if (fieldName.equals("_id")) { + builder._id(parser.text()); + } else if (fieldName.equals("name")) { + builder.name(parser.text()); + } else if (fieldName.equals("resiliency_mode")) { + builder.mode(parser.text()); + } else if (fieldName.equals("updatedAt")) { + builder.updatedAt(parser.longValue()); + } else { + throw new IllegalArgumentException(fieldName + " is not a valid field in QueryGroup"); + } + } else if (token == XContentParser.Token.START_OBJECT) { + + if (!fieldName.equals("resourceLimits")) { + throw new IllegalArgumentException( + "QueryGroup.resourceLimits is an object and expected token was { but found " + token + ); + } + + while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { + if (token == XContentParser.Token.FIELD_NAME) { + fieldName = parser.currentName(); + } else { + resourceLimits.put(ResourceType.fromName(fieldName), parser.doubleValue()); + } + } + + } + } + builder.resourceLimits(resourceLimits); + return builder.build(); + } + + public static Diff readDiff(final StreamInput in) throws IOException { + return readDiffFrom(QueryGroup::new, in); + } + + @Override + public boolean equals(Object o) { + if (this == o) return true; + if (o == null || getClass() != o.getClass()) return false; + QueryGroup that = (QueryGroup) o; + return Objects.equals(name, that.name) + && Objects.equals(resourceLimits, that.resourceLimits) + && Objects.equals(_id, that._id) + && updatedAtInMillis == that.updatedAtInMillis; + } + + @Override + public int hashCode() { + 
return Objects.hash(name, resourceLimits, updatedAtInMillis, _id); + } + + public String getName() { + return name; + } + + public ResiliencyMode getResiliencyMode() { + return resiliencyMode; + } + + public Map getResourceLimits() { + return resourceLimits; + } + + public String get_id() { + return _id; + } + + public long getUpdatedAtInMillis() { + return updatedAtInMillis; + } + + /** + * builder method for the {@link QueryGroup} + * @return Builder object + */ + public static Builder builder() { + return new Builder(); + } + + /** + * This enum models the different QueryGroup resiliency modes + * SOFT - means that this query group can consume more than query group resource limits if node is not in duress + * ENFORCED - means that it will never breach the assigned limits and will cancel as soon as the limits are breached + * MONITOR - it will not cause any cancellation but just log the eligible task cancellations + */ + @ExperimentalApi + public enum ResiliencyMode { + SOFT("soft"), + ENFORCED("enforced"), + MONITOR("monitor"); + + private final String name; + + ResiliencyMode(String mode) { + this.name = mode; + } + + public String getName() { + return name; + } + + public static ResiliencyMode fromName(String s) { + for (ResiliencyMode mode : values()) { + if (mode.getName().equalsIgnoreCase(s)) return mode; + + } + throw new IllegalArgumentException("Invalid value for QueryGroupMode: " + s); + } + + } + + /** + * Builder class for {@link QueryGroup} + */ + @ExperimentalApi + public static class Builder { + private String name; + private String _id; + private ResiliencyMode resiliencyMode; + private long updatedAt; + private Map resourceLimits; + + private Builder() {} + + public Builder name(String name) { + this.name = name; + return this; + } + + public Builder _id(String _id) { + this._id = _id; + return this; + } + + public Builder mode(String mode) { + this.resiliencyMode = ResiliencyMode.fromName(mode); + return this; + } + + public Builder updatedAt(long updatedAt) { + this.updatedAt = updatedAt; + return this; + } + + public Builder resourceLimits(Map resourceLimits) { + this.resourceLimits = resourceLimits; + return this; + } + + public QueryGroup build() { + return new QueryGroup(name, _id, resiliencyMode, resourceLimits, updatedAt); + } + + } +} diff --git a/server/src/main/java/org/opensearch/cluster/metadata/QueryGroupMetadata.java b/server/src/main/java/org/opensearch/cluster/metadata/QueryGroupMetadata.java new file mode 100644 index 0000000000000..79732bc505ee2 --- /dev/null +++ b/server/src/main/java/org/opensearch/cluster/metadata/QueryGroupMetadata.java @@ -0,0 +1,185 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. 
+ */ + +package org.opensearch.cluster.metadata; + +import org.opensearch.Version; +import org.opensearch.cluster.Diff; +import org.opensearch.cluster.DiffableUtils; +import org.opensearch.cluster.NamedDiff; +import org.opensearch.core.ParseField; +import org.opensearch.core.common.Strings; +import org.opensearch.core.common.io.stream.StreamInput; +import org.opensearch.core.common.io.stream.StreamOutput; +import org.opensearch.core.xcontent.MediaTypeRegistry; +import org.opensearch.core.xcontent.XContentBuilder; +import org.opensearch.core.xcontent.XContentParser; + +import java.io.IOException; +import java.util.EnumSet; +import java.util.HashMap; +import java.util.Map; +import java.util.Objects; + +import static org.opensearch.cluster.metadata.Metadata.ALL_CONTEXTS; + +/** + * This class holds the QueryGroupMetadata + * sample schema + * { + * "queryGroups": { + * "_id": { + * {@link QueryGroup} + * }, + * ... + * } + * } + */ +public class QueryGroupMetadata implements Metadata.Custom { + public static final String TYPE = "queryGroups"; + private static final ParseField QUERY_GROUP_FIELD = new ParseField("queryGroups"); + + private final Map queryGroups; + + public QueryGroupMetadata(Map queryGroups) { + this.queryGroups = queryGroups; + } + + public QueryGroupMetadata(StreamInput in) throws IOException { + this.queryGroups = in.readMap(StreamInput::readString, QueryGroup::new); + } + + public Map queryGroups() { + return this.queryGroups; + } + + /** + * Returns the name of the writeable object + */ + @Override + public String getWriteableName() { + return TYPE; + } + + /** + * The minimal version of the recipient this object can be sent to + */ + @Override + public Version getMinimalSupportedVersion() { + return Version.V_3_0_0; + } + + @Override + public void writeTo(StreamOutput out) throws IOException { + out.writeMap(queryGroups, StreamOutput::writeString, (stream, val) -> val.writeTo(stream)); + } + + @Override + public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + for (Map.Entry entry : queryGroups.entrySet()) { + builder.field(entry.getKey(), entry.getValue()); + } + return builder; + } + + public static QueryGroupMetadata fromXContent(XContentParser parser) throws IOException { + Map queryGroupMap = new HashMap<>(); + + if (parser.currentToken() == null) { + parser.nextToken(); + } + + if (parser.currentToken() != XContentParser.Token.START_OBJECT) { + throw new IllegalArgumentException( + "QueryGroupMetadata.fromXContent was expecting a { token but found : " + parser.currentToken() + ); + } + XContentParser.Token token = parser.currentToken(); + String fieldName = parser.currentName(); + while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { + if (token == XContentParser.Token.FIELD_NAME) { + fieldName = parser.currentName(); + } else { + QueryGroup queryGroup = QueryGroup.fromXContent(parser); + queryGroupMap.put(fieldName, queryGroup); + } + } + + return new QueryGroupMetadata(queryGroupMap); + } + + @Override + public Diff diff(final Metadata.Custom previousState) { + return new QueryGroupMetadataDiff((QueryGroupMetadata) previousState, this); + } + + public static NamedDiff readDiffFrom(StreamInput in) throws IOException { + return new QueryGroupMetadataDiff(in); + } + + @Override + public EnumSet context() { + return ALL_CONTEXTS; + } + + @Override + public boolean equals(Object o) { + if (this == o) return true; + if (o == null || getClass() != o.getClass()) return false; + QueryGroupMetadata that = 
(QueryGroupMetadata) o; + return Objects.equals(queryGroups, that.queryGroups); + } + + @Override + public int hashCode() { + return Objects.hash(queryGroups); + } + + @Override + public String toString() { + return Strings.toString(MediaTypeRegistry.JSON, this); + } + + /** + * QueryGroupMetadataDiff + */ + static class QueryGroupMetadataDiff implements NamedDiff { + final Diff> dataStreamDiff; + + QueryGroupMetadataDiff(final QueryGroupMetadata before, final QueryGroupMetadata after) { + dataStreamDiff = DiffableUtils.diff(before.queryGroups, after.queryGroups, DiffableUtils.getStringKeySerializer()); + } + + QueryGroupMetadataDiff(final StreamInput in) throws IOException { + this.dataStreamDiff = DiffableUtils.readJdkMapDiff( + in, + DiffableUtils.getStringKeySerializer(), + QueryGroup::new, + QueryGroup::readDiff + ); + } + + /** + * Returns the name of the writeable object + */ + @Override + public String getWriteableName() { + return TYPE; + } + + @Override + public void writeTo(StreamOutput out) throws IOException { + dataStreamDiff.writeTo(out); + } + + @Override + public Metadata.Custom apply(Metadata.Custom part) { + return new QueryGroupMetadata(new HashMap<>(dataStreamDiff.apply(((QueryGroupMetadata) part).queryGroups))); + } + } +} diff --git a/server/src/main/java/org/opensearch/search/ResourceType.java b/server/src/main/java/org/opensearch/search/ResourceType.java index 5bbcd7de1c2ce..fe5ce4dd2bb50 100644 --- a/server/src/main/java/org/opensearch/search/ResourceType.java +++ b/server/src/main/java/org/opensearch/search/ResourceType.java @@ -8,12 +8,18 @@ package org.opensearch.search; +import org.opensearch.common.annotation.PublicApi; +import org.opensearch.core.common.io.stream.StreamOutput; + +import java.io.IOException; + /** * Enum to hold the resource type */ +@PublicApi(since = "2.x") public enum ResourceType { CPU("cpu"), - JVM("jvm"); + MEMORY("memory"); private final String name; @@ -35,7 +41,11 @@ public static ResourceType fromName(String s) { throw new IllegalArgumentException("Unknown resource type: [" + s + "]"); } - private String getName() { + public static void writeTo(StreamOutput out, ResourceType resourceType) throws IOException { + out.writeString(resourceType.getName()); + } + + public String getName() { return name; } } diff --git a/server/src/main/java/org/opensearch/search/backpressure/SearchBackpressureService.java b/server/src/main/java/org/opensearch/search/backpressure/SearchBackpressureService.java index 3e8ed3070e4ef..c26c5d63a3573 100644 --- a/server/src/main/java/org/opensearch/search/backpressure/SearchBackpressureService.java +++ b/server/src/main/java/org/opensearch/search/backpressure/SearchBackpressureService.java @@ -71,7 +71,7 @@ public class SearchBackpressureService extends AbstractLifecycleComponent implem TaskResourceUsageTrackerType.CPU_USAGE_TRACKER, (nodeDuressTrackers) -> nodeDuressTrackers.isResourceInDuress(ResourceType.CPU), TaskResourceUsageTrackerType.HEAP_USAGE_TRACKER, - (nodeDuressTrackers) -> isHeapTrackingSupported() && nodeDuressTrackers.isResourceInDuress(ResourceType.JVM), + (nodeDuressTrackers) -> isHeapTrackingSupported() && nodeDuressTrackers.isResourceInDuress(ResourceType.MEMORY), TaskResourceUsageTrackerType.ELAPSED_TIME_TRACKER, (nodeDuressTrackers) -> true ); @@ -105,7 +105,7 @@ public SearchBackpressureService( ) ); put( - ResourceType.JVM, + ResourceType.MEMORY, new NodeDuressTracker( () -> JvmStats.jvmStats().getMem().getHeapUsedPercent() / 100.0 >= settings.getNodeDuressSettings() .getHeapThreshold(), 
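The following is an illustrative sketch, not part of the patch itself: it shows how the QueryGroup schema introduced above could be constructed and attached to cluster metadata, using only APIs added in this commit (QueryGroup, QueryGroup.ResiliencyMode, ResourceType, QueryGroupMetadata.TYPE, and Metadata.Builder.put). It assumes the OpenSearch server classpath is available; the class name QueryGroupSketch is hypothetical.

    import org.opensearch.cluster.metadata.Metadata;
    import org.opensearch.cluster.metadata.QueryGroup;
    import org.opensearch.cluster.metadata.QueryGroupMetadata;
    import org.opensearch.search.ResourceType;

    import java.util.Map;

    public class QueryGroupSketch {
        public static void main(String[] args) {
            // The three-argument constructor generates a random _id and stamps updatedAt with the current time.
            QueryGroup group = new QueryGroup(
                "analytics",
                QueryGroup.ResiliencyMode.ENFORCED,
                Map.of(ResourceType.MEMORY, 0.4) // fraction of the resource; validated to be <= 1.0
            );

            // Metadata.Builder.put(QueryGroup) stores the group in the custom keyed by QueryGroupMetadata.TYPE.
            Metadata metadata = Metadata.builder().put(group).build();
            System.out.println(group.get_id() + " -> " + metadata.custom(QueryGroupMetadata.TYPE));
        }
    }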
diff --git a/server/src/test/java/org/opensearch/cluster/ClusterModuleTests.java b/server/src/test/java/org/opensearch/cluster/ClusterModuleTests.java index f2d99a51f1c9a..97706927ba857 100644 --- a/server/src/test/java/org/opensearch/cluster/ClusterModuleTests.java +++ b/server/src/test/java/org/opensearch/cluster/ClusterModuleTests.java @@ -33,6 +33,7 @@ package org.opensearch.cluster; import org.opensearch.cluster.metadata.Metadata; +import org.opensearch.cluster.metadata.QueryGroupMetadata; import org.opensearch.cluster.metadata.RepositoriesMetadata; import org.opensearch.cluster.routing.ShardRouting; import org.opensearch.cluster.routing.allocation.ExistingShardsAllocator; @@ -69,6 +70,7 @@ import org.opensearch.common.settings.Settings; import org.opensearch.common.settings.SettingsModule; import org.opensearch.common.util.concurrent.ThreadContext; +import org.opensearch.core.common.io.stream.NamedWriteableRegistry; import org.opensearch.gateway.GatewayAllocator; import org.opensearch.plugins.ClusterPlugin; import org.opensearch.telemetry.metrics.noop.NoopMetricsRegistry; @@ -327,6 +329,14 @@ public void testRejectsDuplicateExistingShardsAllocatorName() { ); } + public void testQueryGroupMetadataRegister() { + List customEntries = ClusterModule.getNamedWriteables(); + assertTrue( + customEntries.stream() + .anyMatch(entry -> entry.categoryClass == Metadata.Custom.class && entry.name.equals(QueryGroupMetadata.TYPE)) + ); + } + private static ClusterPlugin existingShardsAllocatorPlugin(final String allocatorName) { return new ClusterPlugin() { @Override diff --git a/server/src/test/java/org/opensearch/cluster/metadata/QueryGroupMetadataTests.java b/server/src/test/java/org/opensearch/cluster/metadata/QueryGroupMetadataTests.java new file mode 100644 index 0000000000000..d70a9ce5e10cd --- /dev/null +++ b/server/src/test/java/org/opensearch/cluster/metadata/QueryGroupMetadataTests.java @@ -0,0 +1,93 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. 
+ */ + +package org.opensearch.cluster.metadata; + +import org.opensearch.cluster.Diff; +import org.opensearch.common.xcontent.json.JsonXContent; +import org.opensearch.core.common.io.stream.NamedWriteableRegistry; +import org.opensearch.core.common.io.stream.Writeable; +import org.opensearch.core.xcontent.XContentBuilder; +import org.opensearch.core.xcontent.XContentParser; +import org.opensearch.search.ResourceType; +import org.opensearch.test.AbstractDiffableSerializationTestCase; + +import java.io.IOException; +import java.util.Collections; +import java.util.Map; + +import static org.opensearch.cluster.metadata.QueryGroupTests.createRandomQueryGroup; + +public class QueryGroupMetadataTests extends AbstractDiffableSerializationTestCase { + + public void testToXContent() throws IOException { + long updatedAt = 1720047207; + QueryGroupMetadata queryGroupMetadata = new QueryGroupMetadata( + Map.of( + "ajakgakg983r92_4242", + new QueryGroup( + "test", + "ajakgakg983r92_4242", + QueryGroup.ResiliencyMode.ENFORCED, + Map.of(ResourceType.MEMORY, 0.5), + updatedAt + ) + ) + ); + XContentBuilder builder = JsonXContent.contentBuilder(); + builder.startObject(); + queryGroupMetadata.toXContent(builder, null); + builder.endObject(); + assertEquals( + "{\"ajakgakg983r92_4242\":{\"_id\":\"ajakgakg983r92_4242\",\"name\":\"test\",\"resiliency_mode\":\"enforced\",\"updatedAt\":1720047207,\"resourceLimits\":{\"memory\":0.5}}}", + builder.toString() + ); + } + + @Override + protected NamedWriteableRegistry getNamedWriteableRegistry() { + return new NamedWriteableRegistry( + Collections.singletonList( + new NamedWriteableRegistry.Entry(QueryGroupMetadata.class, QueryGroupMetadata.TYPE, QueryGroupMetadata::new) + ) + ); + } + + @Override + protected Metadata.Custom makeTestChanges(Metadata.Custom testInstance) { + final QueryGroup queryGroup = createRandomQueryGroup("asdfakgjwrir23r25"); + final QueryGroupMetadata queryGroupMetadata = new QueryGroupMetadata(Map.of(queryGroup.get_id(), queryGroup)); + return queryGroupMetadata; + } + + @Override + protected Writeable.Reader> diffReader() { + return QueryGroupMetadata::readDiffFrom; + } + + @Override + protected Metadata.Custom doParseInstance(XContentParser parser) throws IOException { + return QueryGroupMetadata.fromXContent(parser); + } + + @Override + protected Writeable.Reader instanceReader() { + return QueryGroupMetadata::new; + } + + @Override + protected QueryGroupMetadata createTestInstance() { + return new QueryGroupMetadata(getRandomQueryGroups()); + } + + private Map getRandomQueryGroups() { + QueryGroup qg1 = createRandomQueryGroup("1243gsgsdgs"); + QueryGroup qg2 = createRandomQueryGroup("lkajga8080"); + return Map.of(qg1.get_id(), qg1, qg2.get_id(), qg2); + } +} diff --git a/server/src/test/java/org/opensearch/cluster/metadata/QueryGroupTests.java b/server/src/test/java/org/opensearch/cluster/metadata/QueryGroupTests.java new file mode 100644 index 0000000000000..c564f0778e6f0 --- /dev/null +++ b/server/src/test/java/org/opensearch/cluster/metadata/QueryGroupTests.java @@ -0,0 +1,158 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. 
+ */ + +package org.opensearch.cluster.metadata; + +import org.opensearch.common.UUIDs; +import org.opensearch.common.xcontent.json.JsonXContent; +import org.opensearch.core.common.io.stream.Writeable; +import org.opensearch.core.xcontent.ToXContent; +import org.opensearch.core.xcontent.XContentBuilder; +import org.opensearch.core.xcontent.XContentParser; +import org.opensearch.search.ResourceType; +import org.opensearch.test.AbstractSerializingTestCase; +import org.joda.time.Instant; + +import java.io.IOException; +import java.util.Collections; +import java.util.HashMap; +import java.util.List; +import java.util.Map; + +public class QueryGroupTests extends AbstractSerializingTestCase { + + private static final List allowedModes = List.of( + QueryGroup.ResiliencyMode.SOFT, + QueryGroup.ResiliencyMode.ENFORCED, + QueryGroup.ResiliencyMode.MONITOR + ); + + static QueryGroup createRandomQueryGroup(String _id) { + String name = randomAlphaOfLength(10); + Map resourceLimit = new HashMap<>(); + resourceLimit.put(ResourceType.MEMORY, randomDoubleBetween(0.0, 0.80, false)); + return new QueryGroup(name, _id, randomMode(), resourceLimit, Instant.now().getMillis()); + } + + private static QueryGroup.ResiliencyMode randomMode() { + return allowedModes.get(randomIntBetween(0, allowedModes.size() - 1)); + } + + /** + * Parses to a new instance using the provided {@link XContentParser} + * + * @param parser + */ + @Override + protected QueryGroup doParseInstance(XContentParser parser) throws IOException { + return QueryGroup.fromXContent(parser); + } + + /** + * Returns a {@link Writeable.Reader} that can be used to de-serialize the instance + */ + @Override + protected Writeable.Reader instanceReader() { + return QueryGroup::new; + } + + /** + * Creates a random test instance to use in the tests. This method will be + * called multiple times during test execution and should return a different + * random instance each time it is called. 
+ */ + @Override + protected QueryGroup createTestInstance() { + return createRandomQueryGroup("1232sfraeradf_"); + } + + public void testNullName() { + assertThrows( + NullPointerException.class, + () -> new QueryGroup(null, "_id", randomMode(), Collections.emptyMap(), Instant.now().getMillis()) + ); + } + + public void testNullId() { + assertThrows( + NullPointerException.class, + () -> new QueryGroup("Dummy", null, randomMode(), Collections.emptyMap(), Instant.now().getMillis()) + ); + } + + public void testNullResourceLimits() { + assertThrows(NullPointerException.class, () -> new QueryGroup("analytics", "_id", randomMode(), null, Instant.now().getMillis())); + } + + public void testEmptyResourceLimits() { + assertThrows( + IllegalArgumentException.class, + () -> new QueryGroup("analytics", "_id", randomMode(), Collections.emptyMap(), Instant.now().getMillis()) + ); + } + + public void testIllegalQueryGroupMode() { + assertThrows( + NullPointerException.class, + () -> new QueryGroup("analytics", "_id", null, Map.of(ResourceType.MEMORY, (Object) 0.4), Instant.now().getMillis()) + ); + } + + public void testInvalidResourceLimitWhenInvalidSystemResourceValueIsGiven() { + assertThrows( + IllegalArgumentException.class, + () -> new QueryGroup( + "analytics", + "_id", + randomMode(), + Map.of(ResourceType.MEMORY, (Object) randomDoubleBetween(1.1, 1.8, false)), + Instant.now().getMillis() + ) + ); + } + + public void testValidQueryGroup() { + QueryGroup queryGroup = new QueryGroup( + "analytics", + "_id", + randomMode(), + Map.of(ResourceType.MEMORY, randomDoubleBetween(0.01, 0.8, false)), + Instant.ofEpochMilli(1717187289).getMillis() + ); + + assertNotNull(queryGroup.getName()); + assertEquals("analytics", queryGroup.getName()); + assertNotNull(queryGroup.getResourceLimits()); + assertFalse(queryGroup.getResourceLimits().isEmpty()); + assertEquals(1, queryGroup.getResourceLimits().size()); + assertTrue(allowedModes.contains(queryGroup.getResiliencyMode())); + assertEquals(1717187289, queryGroup.getUpdatedAtInMillis()); + } + + public void testToXContent() throws IOException { + long currentTimeInMillis = Instant.now().getMillis(); + String queryGroupId = UUIDs.randomBase64UUID(); + QueryGroup queryGroup = new QueryGroup( + "TestQueryGroup", + queryGroupId, + QueryGroup.ResiliencyMode.ENFORCED, + Map.of(ResourceType.CPU, 0.30, ResourceType.MEMORY, 0.40), + currentTimeInMillis + ); + XContentBuilder builder = JsonXContent.contentBuilder(); + queryGroup.toXContent(builder, ToXContent.EMPTY_PARAMS); + assertEquals( + "{\"_id\":\"" + + queryGroupId + + "\",\"name\":\"TestQueryGroup\",\"resiliency_mode\":\"enforced\",\"updatedAt\":" + + currentTimeInMillis + + ",\"resourceLimits\":{\"cpu\":0.3,\"memory\":0.4}}", + builder.toString() + ); + } +} diff --git a/server/src/test/java/org/opensearch/search/backpressure/SearchBackpressureServiceTests.java b/server/src/test/java/org/opensearch/search/backpressure/SearchBackpressureServiceTests.java index 43df482fcc2ae..15d0fcd10d701 100644 --- a/server/src/test/java/org/opensearch/search/backpressure/SearchBackpressureServiceTests.java +++ b/server/src/test/java/org/opensearch/search/backpressure/SearchBackpressureServiceTests.java @@ -57,7 +57,7 @@ import java.util.function.LongSupplier; import static org.opensearch.search.ResourceType.CPU; -import static org.opensearch.search.ResourceType.JVM; +import static org.opensearch.search.ResourceType.MEMORY; import static 
org.opensearch.search.backpressure.SearchBackpressureTestHelpers.createMockTaskWithResourceStats; import static org.mockito.ArgumentMatchers.anyBoolean; import static org.mockito.ArgumentMatchers.anyString; @@ -102,7 +102,7 @@ public void testIsNodeInDuress() { EnumMap duressTrackers = new EnumMap<>(ResourceType.class) { { - put(ResourceType.JVM, heapUsageTracker); + put(ResourceType.MEMORY, heapUsageTracker); put(ResourceType.CPU, cpuUsageTracker); } }; @@ -233,7 +233,7 @@ public void testSearchTaskInFlightCancellation() { EnumMap duressTrackers = new EnumMap<>(ResourceType.class) { { - put(JVM, heapUsageTracker); + put(MEMORY, heapUsageTracker); put(CPU, mockNodeDuressTracker); } }; @@ -308,7 +308,7 @@ public void testSearchShardTaskInFlightCancellation() { EnumMap duressTrackers = new EnumMap<>(ResourceType.class) { { - put(JVM, new NodeDuressTracker(() -> false, () -> 3)); + put(MEMORY, new NodeDuressTracker(() -> false, () -> 3)); put(CPU, mockNodeDuressTracker); } }; @@ -401,7 +401,7 @@ public void testNonCancellationOfHeapBasedTasksWhenHeapNotInDuress() { EnumMap duressTrackers = new EnumMap<>(ResourceType.class) { { - put(JVM, new NodeDuressTracker(() -> false, () -> 3)); + put(MEMORY, new NodeDuressTracker(() -> false, () -> 3)); put(CPU, new NodeDuressTracker(() -> true, () -> 3)); } }; @@ -495,7 +495,7 @@ public void testNonCancellationWhenSearchTrafficIsNotQualifyingForCancellation() EnumMap duressTrackers = new EnumMap<>(ResourceType.class) { { - put(JVM, new NodeDuressTracker(() -> false, () -> 3)); + put(MEMORY, new NodeDuressTracker(() -> false, () -> 3)); put(CPU, new NodeDuressTracker(() -> true, () -> 3)); } }; diff --git a/server/src/test/java/org/opensearch/search/backpressure/trackers/NodeDuressTrackersTests.java b/server/src/test/java/org/opensearch/search/backpressure/trackers/NodeDuressTrackersTests.java index 2db251ee461db..801576bdf89d4 100644 --- a/server/src/test/java/org/opensearch/search/backpressure/trackers/NodeDuressTrackersTests.java +++ b/server/src/test/java/org/opensearch/search/backpressure/trackers/NodeDuressTrackersTests.java @@ -19,7 +19,7 @@ public class NodeDuressTrackersTests extends OpenSearchTestCase { public void testNodeNotInDuress() { EnumMap map = new EnumMap<>(ResourceType.class) { { - put(ResourceType.JVM, new NodeDuressTracker(() -> false, () -> 2)); + put(ResourceType.MEMORY, new NodeDuressTracker(() -> false, () -> 2)); put(ResourceType.CPU, new NodeDuressTracker(() -> false, () -> 2)); } }; @@ -34,7 +34,7 @@ public void testNodeNotInDuress() { public void testNodeInDuressWhenHeapInDuress() { EnumMap map = new EnumMap<>(ResourceType.class) { { - put(ResourceType.JVM, new NodeDuressTracker(() -> true, () -> 3)); + put(ResourceType.MEMORY, new NodeDuressTracker(() -> true, () -> 3)); put(ResourceType.CPU, new NodeDuressTracker(() -> false, () -> 1)); } }; @@ -51,7 +51,7 @@ public void testNodeInDuressWhenHeapInDuress() { public void testNodeInDuressWhenCPUInDuress() { EnumMap map = new EnumMap<>(ResourceType.class) { { - put(ResourceType.JVM, new NodeDuressTracker(() -> false, () -> 1)); + put(ResourceType.MEMORY, new NodeDuressTracker(() -> false, () -> 1)); put(ResourceType.CPU, new NodeDuressTracker(() -> true, () -> 3)); } }; @@ -68,7 +68,7 @@ public void testNodeInDuressWhenCPUInDuress() { public void testNodeInDuressWhenCPUAndHeapInDuress() { EnumMap map = new EnumMap<>(ResourceType.class) { { - put(ResourceType.JVM, new NodeDuressTracker(() -> true, () -> 3)); + put(ResourceType.MEMORY, new NodeDuressTracker(() -> true, () -> 
3)); put(ResourceType.CPU, new NodeDuressTracker(() -> false, () -> 3)); } }; From 51af2e2203abf41f42f41decc05e6a49e6b6d40d Mon Sep 17 00:00:00 2001 From: Shivansh Arora Date: Tue, 9 Jul 2024 13:18:25 +0530 Subject: [PATCH 039/167] Add UTs for RemoteIndexMetadataManager (#14660) Signed-off-by: Shivansh Arora Co-authored-by: Arpit-Bandejiya --- .../remote/RemoteIndexMetadataManager.java | 21 -- .../RemoteIndexMetadataManagerTests.java | 190 ++++++++++++++++++ 2 files changed, 190 insertions(+), 21 deletions(-) create mode 100644 server/src/test/java/org/opensearch/gateway/remote/RemoteIndexMetadataManagerTests.java diff --git a/server/src/main/java/org/opensearch/gateway/remote/RemoteIndexMetadataManager.java b/server/src/main/java/org/opensearch/gateway/remote/RemoteIndexMetadataManager.java index a84161b202a22..c595f19279354 100644 --- a/server/src/main/java/org/opensearch/gateway/remote/RemoteIndexMetadataManager.java +++ b/server/src/main/java/org/opensearch/gateway/remote/RemoteIndexMetadataManager.java @@ -26,10 +26,7 @@ import org.opensearch.threadpool.ThreadPool; import java.io.IOException; -import java.util.HashMap; import java.util.Locale; -import java.util.Map; -import java.util.Objects; /** * A Manager which provides APIs to write and read Index Metadata to remote store @@ -136,24 +133,6 @@ IndexMetadata getIndexMetadata(ClusterMetadataManifest.UploadedIndexMetadata upl } } - /** - * Fetch latest index metadata from remote cluster state - * - * @param clusterMetadataManifest manifest file of cluster - * @param clusterUUID uuid of cluster state to refer to in remote - * @return {@code Map} latest IndexUUID to IndexMetadata map - */ - Map getIndexMetadataMap(String clusterUUID, ClusterMetadataManifest clusterMetadataManifest) { - assert Objects.equals(clusterUUID, clusterMetadataManifest.getClusterUUID()) - : "Corrupt ClusterMetadataManifest found. Cluster UUID mismatch."; - Map remoteIndexMetadata = new HashMap<>(); - for (ClusterMetadataManifest.UploadedIndexMetadata uploadedIndexMetadata : clusterMetadataManifest.getIndices()) { - IndexMetadata indexMetadata = getIndexMetadata(uploadedIndexMetadata, clusterUUID); - remoteIndexMetadata.put(uploadedIndexMetadata.getIndexUUID(), indexMetadata); - } - return remoteIndexMetadata; - } - public TimeValue getIndexMetadataUploadTimeout() { return this.indexMetadataUploadTimeout; } diff --git a/server/src/test/java/org/opensearch/gateway/remote/RemoteIndexMetadataManagerTests.java b/server/src/test/java/org/opensearch/gateway/remote/RemoteIndexMetadataManagerTests.java new file mode 100644 index 0000000000000..817fc7b55d09a --- /dev/null +++ b/server/src/test/java/org/opensearch/gateway/remote/RemoteIndexMetadataManagerTests.java @@ -0,0 +1,190 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. 
+ */ + +package org.opensearch.gateway.remote; + +import org.opensearch.Version; +import org.opensearch.action.LatchedActionListener; +import org.opensearch.cluster.metadata.AliasMetadata; +import org.opensearch.cluster.metadata.IndexMetadata; +import org.opensearch.common.Nullable; +import org.opensearch.common.blobstore.AsyncMultiStreamBlobContainer; +import org.opensearch.common.blobstore.BlobContainer; +import org.opensearch.common.blobstore.BlobPath; +import org.opensearch.common.blobstore.BlobStore; +import org.opensearch.common.blobstore.stream.write.WritePriority; +import org.opensearch.common.settings.ClusterSettings; +import org.opensearch.common.settings.Settings; +import org.opensearch.common.util.TestCapturingListener; +import org.opensearch.core.action.ActionListener; +import org.opensearch.core.compress.Compressor; +import org.opensearch.core.compress.NoneCompressor; +import org.opensearch.gateway.remote.model.RemoteReadResult; +import org.opensearch.index.remote.RemoteStoreUtils; +import org.opensearch.index.translog.transfer.BlobStoreTransferService; +import org.opensearch.repositories.blobstore.BlobStoreRepository; +import org.opensearch.test.OpenSearchTestCase; +import org.opensearch.threadpool.TestThreadPool; +import org.opensearch.threadpool.ThreadPool; +import org.junit.After; +import org.junit.Before; + +import java.io.IOException; +import java.util.concurrent.CountDownLatch; + +import static org.opensearch.gateway.remote.RemoteClusterStateService.FORMAT_PARAMS; +import static org.opensearch.gateway.remote.RemoteClusterStateUtils.DELIMITER; +import static org.opensearch.gateway.remote.RemoteClusterStateUtils.PATH_DELIMITER; +import static org.opensearch.gateway.remote.model.RemoteIndexMetadata.INDEX; +import static org.opensearch.gateway.remote.model.RemoteIndexMetadata.INDEX_METADATA_FORMAT; +import static org.mockito.ArgumentMatchers.any; +import static org.mockito.ArgumentMatchers.anyIterable; +import static org.mockito.ArgumentMatchers.anyString; +import static org.mockito.ArgumentMatchers.eq; +import static org.mockito.Mockito.doAnswer; +import static org.mockito.Mockito.doThrow; +import static org.mockito.Mockito.mock; +import static org.mockito.Mockito.when; + +public class RemoteIndexMetadataManagerTests extends OpenSearchTestCase { + + private RemoteIndexMetadataManager remoteIndexMetadataManager; + private BlobStoreRepository blobStoreRepository; + private BlobStoreTransferService blobStoreTransferService; + private Compressor compressor; + private final ThreadPool threadPool = new TestThreadPool(getClass().getName()); + + @Before + public void setup() { + ClusterSettings clusterSettings = new ClusterSettings(Settings.EMPTY, ClusterSettings.BUILT_IN_CLUSTER_SETTINGS); + blobStoreRepository = mock(BlobStoreRepository.class); + BlobPath blobPath = new BlobPath().add("random-path"); + when((blobStoreRepository.basePath())).thenReturn(blobPath); + blobStoreTransferService = mock(BlobStoreTransferService.class); + compressor = new NoneCompressor(); + when(blobStoreRepository.getCompressor()).thenReturn(compressor); + remoteIndexMetadataManager = new RemoteIndexMetadataManager( + clusterSettings, + "test-cluster", + blobStoreRepository, + blobStoreTransferService, + threadPool + ); + } + + @After + public void tearDown() throws Exception { + super.tearDown(); + threadPool.shutdown(); + } + + public void testGetAsyncIndexMetadataWriteAction_Success() throws Exception { + IndexMetadata indexMetadata = getIndexMetadata(randomAlphaOfLength(10), randomBoolean(), 
randomAlphaOfLength(10)); + BlobContainer blobContainer = mock(AsyncMultiStreamBlobContainer.class); + BlobStore blobStore = mock(BlobStore.class); + when(blobStore.blobContainer(any())).thenReturn(blobContainer); + TestCapturingListener listener = new TestCapturingListener<>(); + CountDownLatch latch = new CountDownLatch(1); + String expectedFilePrefix = String.join(DELIMITER, "metadata", RemoteStoreUtils.invertLong(indexMetadata.getVersion())); + + doAnswer((invocationOnMock -> { + invocationOnMock.getArgument(4, ActionListener.class).onResponse(null); + return null; + })).when(blobStoreTransferService).uploadBlob(any(), any(), any(), eq(WritePriority.URGENT), any(ActionListener.class)); + + remoteIndexMetadataManager.getAsyncIndexMetadataWriteAction( + indexMetadata, + "cluster-uuid", + new LatchedActionListener<>(listener, latch) + ).run(); + latch.await(); + + assertNull(listener.getFailure()); + assertNotNull(listener.getResult()); + ClusterMetadataManifest.UploadedMetadata uploadedMetadata = listener.getResult(); + assertEquals(INDEX + "--" + indexMetadata.getIndex().getName(), uploadedMetadata.getComponent()); + String uploadedFileName = uploadedMetadata.getUploadedFilename(); + String[] pathTokens = uploadedFileName.split(PATH_DELIMITER); + assertEquals(7, pathTokens.length); + assertEquals(INDEX, pathTokens[4]); + assertEquals(indexMetadata.getIndex().getUUID(), pathTokens[5]); + assertTrue(pathTokens[6].startsWith(expectedFilePrefix)); + } + + public void testGetAsyncIndexMetadataWriteAction_IOFailure() throws Exception { + IndexMetadata indexMetadata = getIndexMetadata(randomAlphaOfLength(10), randomBoolean(), randomAlphaOfLength(10)); + BlobContainer blobContainer = mock(AsyncMultiStreamBlobContainer.class); + BlobStore blobStore = mock(BlobStore.class); + when(blobStore.blobContainer(any())).thenReturn(blobContainer); + TestCapturingListener listener = new TestCapturingListener<>(); + CountDownLatch latch = new CountDownLatch(1); + + doAnswer((invocationOnMock -> { + invocationOnMock.getArgument(4, ActionListener.class).onFailure(new IOException("failure")); + return null; + })).when(blobStoreTransferService).uploadBlob(any(), any(), any(), eq(WritePriority.URGENT), any(ActionListener.class)); + + remoteIndexMetadataManager.getAsyncIndexMetadataWriteAction( + indexMetadata, + "cluster-uuid", + new LatchedActionListener<>(listener, latch) + ).run(); + latch.await(); + assertNull(listener.getResult()); + assertNotNull(listener.getFailure()); + assertTrue(listener.getFailure() instanceof RemoteStateTransferException); + } + + public void testGetAsyncIndexMetadataReadAction_Success() throws Exception { + IndexMetadata indexMetadata = getIndexMetadata(randomAlphaOfLength(10), randomBoolean(), randomAlphaOfLength(10)); + String fileName = randomAlphaOfLength(10); + fileName = fileName + DELIMITER + '2'; + when(blobStoreTransferService.downloadBlob(anyIterable(), anyString())).thenReturn( + INDEX_METADATA_FORMAT.serialize(indexMetadata, fileName, compressor, FORMAT_PARAMS).streamInput() + ); + TestCapturingListener listener = new TestCapturingListener<>(); + CountDownLatch latch = new CountDownLatch(1); + + remoteIndexMetadataManager.getAsyncIndexMetadataReadAction("cluster-uuid", fileName, new LatchedActionListener<>(listener, latch)) + .run(); + latch.await(); + assertNull(listener.getFailure()); + assertNotNull(listener.getResult()); + assertEquals(indexMetadata, listener.getResult().getObj()); + } + + public void testGetAsyncIndexMetadataReadAction_IOFailure() throws Exception { 
+ String fileName = randomAlphaOfLength(10); + fileName = fileName + DELIMITER + '2'; + Exception exception = new IOException("testing failure"); + doThrow(exception).when(blobStoreTransferService).downloadBlob(anyIterable(), anyString()); + TestCapturingListener listener = new TestCapturingListener<>(); + CountDownLatch latch = new CountDownLatch(1); + + remoteIndexMetadataManager.getAsyncIndexMetadataReadAction("cluster-uuid", fileName, new LatchedActionListener<>(listener, latch)) + .run(); + latch.await(); + assertNull(listener.getResult()); + assertNotNull(listener.getFailure()); + assertEquals(exception, listener.getFailure()); + } + + private IndexMetadata getIndexMetadata(String name, @Nullable Boolean writeIndex, String... aliases) { + IndexMetadata.Builder builder = IndexMetadata.builder(name) + .settings( + Settings.builder() + .put("index.version.created", Version.CURRENT.id) + .put("index.number_of_shards", 1) + .put("index.number_of_replicas", 1) + ); + for (String alias : aliases) { + builder.putAlias(AliasMetadata.builder(alias).writeIndex(writeIndex).build()); + } + return builder.build(); + } +} From 2e639131b3bb88316f53bfeb9262cbcd81606d50 Mon Sep 17 00:00:00 2001 From: gaobinlong Date: Tue, 9 Jul 2024 19:29:44 +0800 Subject: [PATCH 040/167] Fix match_phrase_prefix_query not working on text field with multiple values and index_prefixes (#10959) * Fix match_phrase_prefix_query not working on text field with multiple values and index_prefixes Signed-off-by: Gao Binlong * Add more test Signed-off-by: Gao Binlong * modify change log Signed-off-by: Gao Binlong * Fix test failure Signed-off-by: Gao Binlong * Change the indexAnalyzer used by prefix field Signed-off-by: Gao Binlong * Skip old version for yaml test Signed-off-by: Gao Binlong * Optimize some code Signed-off-by: Gao Binlong * Fix test failure Signed-off-by: Gao Binlong * Modify yaml test description Signed-off-by: Gao Binlong * Remove the name parameter for setAnalyzer() Signed-off-by: Gao Binlong --------- Signed-off-by: Gao Binlong --- CHANGELOG.md | 1 + .../test/search/190_index_prefix_search.yml | 52 ++++++++++++++++++- .../index/mapper/TextFieldMapper.java | 28 +++++++--- .../index/mapper/TextFieldMapperTests.java | 51 ++++++++++++++++++ .../index/mapper/TextFieldTypeTests.java | 1 + 5 files changed, 124 insertions(+), 9 deletions(-) diff --git a/CHANGELOG.md b/CHANGELOG.md index 4d0990db31d20..2cffa99fb66a5 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -49,6 +49,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), - Fix bug in SBP cancellation logic ([#13259](https://github.com/opensearch-project/OpenSearch/pull/13474)) - Fix handling of Short and Byte data types in ScriptProcessor ingest pipeline ([#14379](https://github.com/opensearch-project/OpenSearch/issues/14379)) - Switch to iterative version of WKT format parser ([#14086](https://github.com/opensearch-project/OpenSearch/pull/14086)) +- Fix match_phrase_prefix_query not working on text field with multiple values and index_prefixes ([#10959](https://github.com/opensearch-project/OpenSearch/pull/10959)) - Fix the computed max shards of cluster to avoid int overflow ([#14155](https://github.com/opensearch-project/OpenSearch/pull/14155)) - Fixed rest-high-level client searchTemplate & mtermVectors endpoints to have a leading slash ([#14465](https://github.com/opensearch-project/OpenSearch/pull/14465)) - Write shard level metadata blob when snapshotting searchable snapshot indexes 
([#13190](https://github.com/opensearch-project/OpenSearch/pull/13190)) diff --git a/rest-api-spec/src/main/resources/rest-api-spec/test/search/190_index_prefix_search.yml b/rest-api-spec/src/main/resources/rest-api-spec/test/search/190_index_prefix_search.yml index 25d3dd160e031..8b031c132f979 100644 --- a/rest-api-spec/src/main/resources/rest-api-spec/test/search/190_index_prefix_search.yml +++ b/rest-api-spec/src/main/resources/rest-api-spec/test/search/190_index_prefix_search.yml @@ -10,7 +10,12 @@ setup: index_prefixes: min_chars: 2 max_chars: 5 - + text_with_pos_inc_gap: + type: text + position_increment_gap: 201 + index_prefixes: + min_chars: 2 + max_chars: 5 - do: index: index: test @@ -23,6 +28,18 @@ setup: id: 2 body: { text: sentence with UPPERCASE WORDS } + - do: + index: + index: test + id: 3 + body: { text: ["foo", "b-12"] } + + - do: + index: + index: test + id: 4 + body: { text_with_pos_inc_gap: ["foo", "b-12"] } + - do: indices.refresh: index: [test] @@ -116,3 +133,36 @@ setup: ] - match: {hits.total: 1} + +# related issue: https://github.com/opensearch-project/OpenSearch/issues/9203 +--- +"search index prefixes with multiple values": + - skip: + version: " - 2.99.99" + reason: "the bug was fixed in 3.0.0" + - do: + search: + rest_total_hits_as_int: true + index: test + body: + query: + match_phrase_prefix: + text: "b-12" + + - match: {hits.total: 1} + +--- +"search index prefixes with multiple values and custom position_increment_gap": + - skip: + version: " - 2.99.99" + reason: "the bug was fixed in 3.0.0" + - do: + search: + rest_total_hits_as_int: true + index: test + body: + query: + match_phrase_prefix: + text_with_pos_inc_gap: "b-12" + + - match: {hits.total: 1} diff --git a/server/src/main/java/org/opensearch/index/mapper/TextFieldMapper.java b/server/src/main/java/org/opensearch/index/mapper/TextFieldMapper.java index d0e041e68a81d..ba053a3aeee1d 100644 --- a/server/src/main/java/org/opensearch/index/mapper/TextFieldMapper.java +++ b/server/src/main/java/org/opensearch/index/mapper/TextFieldMapper.java @@ -448,7 +448,6 @@ protected PrefixFieldMapper buildPrefixMapper(BuilderContext context, FieldType pft.setStoreTermVectorOffsets(true); } PrefixFieldType prefixFieldType = new PrefixFieldType(tft, fullName + "._index_prefix", indexPrefixes.get()); - prefixFieldType.setAnalyzer(analyzers.getIndexAnalyzer()); tft.setPrefixFieldType(prefixFieldType); return new PrefixFieldMapper(pft, prefixFieldType); } @@ -522,12 +521,14 @@ private static class PrefixWrappedAnalyzer extends AnalyzerWrapper { private final int minChars; private final int maxChars; private final Analyzer delegate; + private final int positionIncrementGap; - PrefixWrappedAnalyzer(Analyzer delegate, int minChars, int maxChars) { + PrefixWrappedAnalyzer(Analyzer delegate, int minChars, int maxChars, int positionIncrementGap) { super(delegate.getReuseStrategy()); this.delegate = delegate; this.minChars = minChars; this.maxChars = maxChars; + this.positionIncrementGap = positionIncrementGap; } @Override @@ -535,6 +536,11 @@ protected Analyzer getWrappedAnalyzer(String fieldName) { return delegate; } + @Override + public int getPositionIncrementGap(String fieldName) { + return positionIncrementGap; + } + @Override protected TokenStreamComponents wrapComponents(String fieldName, TokenStreamComponents components) { TokenFilter filter = new EdgeNGramTokenFilter(components.getTokenStream(), minChars, maxChars, false); @@ -588,17 +594,18 @@ static final class PrefixFieldType extends StringFieldType 
{ final int minChars; final int maxChars; - final TextFieldType parentField; + final TextFieldType parent; PrefixFieldType(TextFieldType parentField, String name, PrefixConfig config) { this(parentField, name, config.minChars, config.maxChars); } - PrefixFieldType(TextFieldType parentField, String name, int minChars, int maxChars) { - super(name, true, false, false, parentField.getTextSearchInfo(), Collections.emptyMap()); + PrefixFieldType(TextFieldType parent, String name, int minChars, int maxChars) { + super(name, true, false, false, parent.getTextSearchInfo(), Collections.emptyMap()); this.minChars = minChars; this.maxChars = maxChars; - this.parentField = parentField; + this.parent = parent; + setAnalyzer(parent.indexAnalyzer()); } @Override @@ -609,8 +616,13 @@ public ValueFetcher valueFetcher(QueryShardContext context, SearchLookup searchL } void setAnalyzer(NamedAnalyzer delegate) { + String analyzerName = delegate.name(); setIndexAnalyzer( - new NamedAnalyzer(delegate.name(), AnalyzerScope.INDEX, new PrefixWrappedAnalyzer(delegate.analyzer(), minChars, maxChars)) + new NamedAnalyzer( + analyzerName, + AnalyzerScope.INDEX, + new PrefixWrappedAnalyzer(delegate.analyzer(), minChars, maxChars, delegate.getPositionIncrementGap(analyzerName)) + ) ); } @@ -639,7 +651,7 @@ public Query prefixQuery(String value, MultiTermQuery.RewriteMethod method, bool Automaton automaton = Operations.concatenate(automata); AutomatonQuery query = AutomatonQueries.createAutomatonQuery(new Term(name(), value + "*"), automaton, method); return new BooleanQuery.Builder().add(query, BooleanClause.Occur.SHOULD) - .add(new TermQuery(new Term(parentField.name(), value)), BooleanClause.Occur.SHOULD) + .add(new TermQuery(new Term(parent.name(), value)), BooleanClause.Occur.SHOULD) .build(); } diff --git a/server/src/test/java/org/opensearch/index/mapper/TextFieldMapperTests.java b/server/src/test/java/org/opensearch/index/mapper/TextFieldMapperTests.java index a22bfa5e845b1..0253caea9759d 100644 --- a/server/src/test/java/org/opensearch/index/mapper/TextFieldMapperTests.java +++ b/server/src/test/java/org/opensearch/index/mapper/TextFieldMapperTests.java @@ -380,6 +380,57 @@ public void testIndexOptions() throws IOException { } } + public void testPositionIncrementGapOnIndexPrefixField() throws IOException { + // test default position_increment_gap + MapperService mapperService = createMapperService( + fieldMapping(b -> b.field("type", "text").field("analyzer", "default").startObject("index_prefixes").endObject()) + ); + ParsedDocument doc = mapperService.documentMapper().parse(source(b -> b.array("field", new String[] { "a", "b 12" }))); + + withLuceneIndex(mapperService, iw -> iw.addDocument(doc.rootDoc()), reader -> { + TermsEnum terms = getOnlyLeafReader(reader).terms("field").iterator(); + assertTrue(terms.seekExact(new BytesRef("12"))); + PostingsEnum postings = terms.postings(null, PostingsEnum.POSITIONS); + assertEquals(0, postings.nextDoc()); + assertEquals(TextFieldMapper.Defaults.POSITION_INCREMENT_GAP + 2, postings.nextPosition()); + }); + + withLuceneIndex(mapperService, iw -> iw.addDocument(doc.rootDoc()), reader -> { + TermsEnum terms = getOnlyLeafReader(reader).terms("field._index_prefix").iterator(); + assertTrue(terms.seekExact(new BytesRef("12"))); + PostingsEnum postings = terms.postings(null, PostingsEnum.POSITIONS); + assertEquals(0, postings.nextDoc()); + assertEquals(TextFieldMapper.Defaults.POSITION_INCREMENT_GAP + 2, postings.nextPosition()); + }); + + // test custom 
position_increment_gap + final int positionIncrementGap = randomIntBetween(1, 1000); + MapperService mapperService2 = createMapperService( + fieldMapping( + b -> b.field("type", "text") + .field("position_increment_gap", positionIncrementGap) + .field("analyzer", "default") + .startObject("index_prefixes") + .endObject() + ) + ); + ParsedDocument doc2 = mapperService2.documentMapper().parse(source(b -> b.array("field", new String[] { "a", "b 12" }))); + withLuceneIndex(mapperService2, iw -> iw.addDocument(doc2.rootDoc()), reader -> { + TermsEnum terms = getOnlyLeafReader(reader).terms("field").iterator(); + assertTrue(terms.seekExact(new BytesRef("12"))); + PostingsEnum postings = terms.postings(null, PostingsEnum.POSITIONS); + assertEquals(0, postings.nextDoc()); + assertEquals(positionIncrementGap + 2, postings.nextPosition()); + }); + withLuceneIndex(mapperService2, iw -> iw.addDocument(doc2.rootDoc()), reader -> { + TermsEnum terms = getOnlyLeafReader(reader).terms("field._index_prefix").iterator(); + assertTrue(terms.seekExact(new BytesRef("12"))); + PostingsEnum postings = terms.postings(null, PostingsEnum.POSITIONS); + assertEquals(0, postings.nextDoc()); + assertEquals(positionIncrementGap + 2, postings.nextPosition()); + }); + } + public void testDefaultPositionIncrementGap() throws IOException { MapperService mapperService = createMapperService(fieldMapping(this::minimalMapping)); ParsedDocument doc = mapperService.documentMapper().parse(source(b -> b.array("field", new String[] { "a", "b" }))); diff --git a/server/src/test/java/org/opensearch/index/mapper/TextFieldTypeTests.java b/server/src/test/java/org/opensearch/index/mapper/TextFieldTypeTests.java index 9c177bbec61fd..e672f94819541 100644 --- a/server/src/test/java/org/opensearch/index/mapper/TextFieldTypeTests.java +++ b/server/src/test/java/org/opensearch/index/mapper/TextFieldTypeTests.java @@ -167,6 +167,7 @@ public void testFuzzyQuery() { public void testIndexPrefixes() { TextFieldType ft = createFieldType(true); + ft.setIndexAnalyzer(Lucene.STANDARD_ANALYZER); ft.setPrefixFieldType(new TextFieldMapper.PrefixFieldType(ft, "field._index_prefix", 2, 10)); Query q = ft.prefixQuery("goin", CONSTANT_SCORE_REWRITE, false, randomMockShardContext());
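An illustrative sketch, not part of the patch: the client-side query shape that exercises this fix. QueryBuilders.matchPhrasePrefixQuery is the standard OpenSearch query DSL helper; the field name and sample value mirror the YAML test above, and the class name is hypothetical.

    import org.opensearch.index.query.MatchPhrasePrefixQueryBuilder;
    import org.opensearch.index.query.QueryBuilders;

    public class MatchPhrasePrefixSketch {
        public static void main(String[] args) {
            // "text" is assumed to be mapped as a text field with index_prefixes and to hold
            // multiple values such as ["foo", "b-12"], as in the YAML test above. Before this
            // fix, the prefix sub-field did not apply the parent field's position_increment_gap,
            // so phrase positions were misaligned and matches in later values could be missed.
            MatchPhrasePrefixQueryBuilder query = QueryBuilders.matchPhrasePrefixQuery("text", "b-12");
            System.out.println(query); // the JSON body that would be sent under "query" in _search
        }
    }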
= nodes.values().stream().map(BalancedShardsAllocator.ModelNode::numShards).reduce(0, Integer::sum); - return totalShards / nodes.size(); + return totalShardCount / nodes.size(); } /** @@ -598,6 +598,7 @@ void moveShards() { final BalancedShardsAllocator.ModelNode sourceNode = nodes.get(shardRouting.currentNodeId()); final BalancedShardsAllocator.ModelNode targetNode = nodes.get(moveDecision.getTargetNode().getId()); sourceNode.removeShard(shardRouting); + --totalShardCount; Tuple relocatingShards = routingNodes.relocateShard( shardRouting, targetNode.getNodeId(), @@ -605,6 +606,7 @@ void moveShards() { allocation.changes() ); targetNode.addShard(relocatingShards.v2()); + ++totalShardCount; if (logger.isTraceEnabled()) { logger.trace("Moved shard [{}] to node [{}]", shardRouting, targetNode.getRoutingNode()); } @@ -724,6 +726,7 @@ private Map buildModelFromAssigned() /* we skip relocating shards here since we expect an initializing shard with the same id coming in */ if (RoutingPool.LOCAL_ONLY.equals(RoutingPool.getShardPool(shard, allocation)) && shard.state() != RELOCATING) { node.addShard(shard); + ++totalShardCount; if (logger.isTraceEnabled()) { logger.trace("Assigned shard [{}] to node [{}]", shard, node.getNodeId()); } @@ -815,6 +818,7 @@ void allocateUnassigned() { ); shard = routingNodes.initializeShard(shard, minNode.getNodeId(), null, shardSize, allocation.changes()); minNode.addShard(shard); + ++totalShardCount; if (!shard.primary()) { // copy over the same replica shards to the secondary array so they will get allocated // in a subsequent iteration, allowing replicas of other shards to be allocated first @@ -844,6 +848,7 @@ void allocateUnassigned() { allocation.routingTable() ); minNode.addShard(shard.initialize(minNode.getNodeId(), null, shardSize)); + ++totalShardCount; } else { if (logger.isTraceEnabled()) { logger.trace("No Node found to assign shard [{}]", shard); @@ -1011,18 +1016,21 @@ private boolean tryRelocateShard(BalancedShardsAllocator.ModelNode minNode, Bala } final Decision decision = new Decision.Multi().add(allocationDecision).add(rebalanceDecision); maxNode.removeShard(shard); + --totalShardCount; long shardSize = allocation.clusterInfo().getShardSize(shard, ShardRouting.UNAVAILABLE_EXPECTED_SHARD_SIZE); if (decision.type() == Decision.Type.YES) { /* only allocate on the cluster if we are not throttled */ logger.debug("Relocate [{}] from [{}] to [{}]", shard, maxNode.getNodeId(), minNode.getNodeId()); minNode.addShard(routingNodes.relocateShard(shard, minNode.getNodeId(), shardSize, allocation.changes()).v1()); + ++totalShardCount; return true; } else { /* allocate on the model even if throttled */ logger.debug("Simulate relocation of [{}] from [{}] to [{}]", shard, maxNode.getNodeId(), minNode.getNodeId()); assert decision.type() == Decision.Type.THROTTLE; minNode.addShard(shard.relocate(minNode.getNodeId(), shardSize)); + ++totalShardCount; return false; } } From acc46316550ee203851d5c622d3b4724646d3f3e Mon Sep 17 00:00:00 2001 From: Chenyang Ji Date: Tue, 9 Jul 2024 11:14:55 -0700 Subject: [PATCH 042/167] [bug fix] validate lower bound for top n size (#14587) Signed-off-by: Chenyang Ji --- .../plugin/insights/core/service/TopQueriesService.java | 8 +++----- .../insights/core/service/TopQueriesServiceTests.java | 4 ++++ 2 files changed, 7 insertions(+), 5 deletions(-) diff --git a/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/core/service/TopQueriesService.java 
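The balancer patch above replaces an O(nodes x shards) stream reduction on every avgShardsPerNode() call with a counter that is incremented and decremented alongside every shard add/remove. A minimal sketch of the invariant, using hypothetical CachedAverageBalancer/addShard/removeShard names rather than the OpenSearch classes:

    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical, simplified model: maintain a running total so the average is O(1).
    final class CachedAverageBalancer {
        private final Map<String, Integer> shardsPerNode = new HashMap<>();
        private int totalShardCount = 0; // must be touched by every mutation path

        void addShard(String nodeId) {
            shardsPerNode.merge(nodeId, 1, Integer::sum);
            ++totalShardCount;
        }

        void removeShard(String nodeId) {
            shardsPerNode.merge(nodeId, -1, Integer::sum);
            --totalShardCount;
        }

        float avgShardsPerNode() {
            // Previously recomputed by streaming over all nodes; now a constant-time read.
            return shardsPerNode.isEmpty() ? 0f : (float) totalShardCount / shardsPerNode.size();
        }
    }

The cost of the optimization is the bookkeeping visible in the diff: every code path that mutates a node's shard list must also update the counter, which is why the patch threads ++/-- through moveShards(), buildModelFromAssigned(), allocateUnassigned(), and tryRelocateShard().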
From acc46316550ee203851d5c622d3b4724646d3f3e Mon Sep 17 00:00:00 2001
From: Chenyang Ji
Date: Tue, 9 Jul 2024 11:14:55 -0700
Subject: [PATCH 042/167] [bug fix] validate lower bound for top n size (#14587)

Signed-off-by: Chenyang Ji
---
 .../plugin/insights/core/service/TopQueriesService.java | 8 +++-----
 .../insights/core/service/TopQueriesServiceTests.java   | 4 ++++
 2 files changed, 7 insertions(+), 5 deletions(-)

diff --git a/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/core/service/TopQueriesService.java b/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/core/service/TopQueriesService.java
index c21b89be4dcca..bbe8b8fc40dac 100644
--- a/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/core/service/TopQueriesService.java
+++ b/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/core/service/TopQueriesService.java
@@ -138,17 +138,15 @@ public int getTopNSize() {
      * @param size the wanted top N size
      */
     public void validateTopNSize(final int size) {
-        if (size > QueryInsightsSettings.MAX_N_SIZE) {
+        if (size < 1 || size > QueryInsightsSettings.MAX_N_SIZE) {
             throw new IllegalArgumentException(
                 "Top N size setting for ["
                     + metricType
                     + "]"
-                    + " should be smaller than max top N size ["
+                    + " should be between 1 and "
                     + QueryInsightsSettings.MAX_N_SIZE
-                    + "was ("
+                    + ", was ("
                     + size
-                    + " > "
-                    + QueryInsightsSettings.MAX_N_SIZE
                     + ")"
             );
         }
diff --git a/plugins/query-insights/src/test/java/org/opensearch/plugin/insights/core/service/TopQueriesServiceTests.java b/plugins/query-insights/src/test/java/org/opensearch/plugin/insights/core/service/TopQueriesServiceTests.java
index 3efd4c86833cc..8478fe1621698 100644
--- a/plugins/query-insights/src/test/java/org/opensearch/plugin/insights/core/service/TopQueriesServiceTests.java
+++ b/plugins/query-insights/src/test/java/org/opensearch/plugin/insights/core/service/TopQueriesServiceTests.java
@@ -78,6 +78,10 @@ public void testValidateTopNSize() {
         assertThrows(IllegalArgumentException.class, () -> { topQueriesService.validateTopNSize(QueryInsightsSettings.MAX_N_SIZE + 1); });
     }
 
+    public void testValidateNegativeTopNSize() {
+        assertThrows(IllegalArgumentException.class, () -> { topQueriesService.validateTopNSize(-1); });
+    }
+
     public void testGetTopQueriesWhenNotEnabled() {
         topQueriesService.setEnabled(false);
         assertThrows(IllegalArgumentException.class, () -> { topQueriesService.getTopQueriesRecords(false); });
From bf56227ad1d679573beffd94f419f7b73cd07188 Mon Sep 17 00:00:00 2001
From: Craig Perkins
Date: Tue, 9 Jul 2024 14:39:24 -0400
Subject: [PATCH 043/167] Create SystemIndexRegistry with helper method
 matchesSystemIndex (#14415)

* Create new extension point in SystemIndexPlugin for a single plugin to get registered system indices

Signed-off-by: Craig Perkins

* Add to CHANGELOG

Signed-off-by: Craig Perkins

* WIP on system indices from IndexNameExpressionResolver

Signed-off-by: Craig Perkins

* Add test in IndexNameExpressionResolverTests

Signed-off-by: Craig Perkins

* Remove changes in SystemIndexPlugin

Signed-off-by: Craig Perkins

* Add method in IndexNameExpressionResolver to get matching system indices

Signed-off-by: Craig Perkins

* Show how resolver can be chained to get system indices

Signed-off-by: Craig Perkins

* Fix forbiddenApis check

Signed-off-by: Craig Perkins

* Update CHANGELOG

Signed-off-by: Craig Perkins

* Make SystemIndices internal

Signed-off-by: Craig Perkins

* Remove unneeded changes

Signed-off-by: Craig Perkins

* Fix CI failures

Signed-off-by: Craig Perkins

* Fix precommit errors

Signed-off-by: Craig Perkins

* Use Regex instead of WildcardMatcher

Signed-off-by: Craig Perkins

* Address code review feedback

Signed-off-by: Craig Perkins

* Allow caller to pass index expressions

Signed-off-by: Craig Perkins

* Create SystemIndexRegistry

Signed-off-by: Craig Perkins

* Update CHANGELOG

Signed-off-by: Craig Perkins

* Remove singleton limitation

Signed-off-by: Craig Perkins

* Add javadoc

Signed-off-by: Craig Perkins

* Add @ExperimentalApi annotation

Signed-off-by: Craig Perkins

---------

Signed-off-by: Craig Perkins
---
 CHANGELOG.md                                  |   1 +
 .../indices/SystemIndexDescriptor.java        |   4 +-
 .../indices/SystemIndexRegistry.java          | 130 ++++++++++++++++++
 .../org/opensearch/indices/SystemIndices.java |  88 +-----------
 .../main/java/org/opensearch/node/Node.java   |  17 ++-
 .../indices/SystemIndicesTests.java           |  99 ++++++++++++-
 6 files changed, 242 insertions(+), 97 deletions(-)
 create mode 100644 server/src/main/java/org/opensearch/indices/SystemIndexRegistry.java

diff --git a/CHANGELOG.md b/CHANGELOG.md
index 2cffa99fb66a5..fe8d5d524097e 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -13,6 +13,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
 - Add batching supported processor base type AbstractBatchingProcessor ([#14554](https://github.com/opensearch-project/OpenSearch/pull/14554))
 - Fix race condition while parsing derived fields from search definition ([14445](https://github.com/opensearch-project/OpenSearch/pull/14445))
 - Add allowlist setting for ingest-common and search-pipeline-common processors ([#14439](https://github.com/opensearch-project/OpenSearch/issues/14439))
+- Create SystemIndexRegistry with helper method matchesSystemIndex ([#14415](https://github.com/opensearch-project/OpenSearch/pull/14415))
 
 ### Dependencies
 - Bump `org.gradle.test-retry` from 1.5.8 to 1.5.9 ([#13442](https://github.com/opensearch-project/OpenSearch/pull/13442))
diff --git a/server/src/main/java/org/opensearch/indices/SystemIndexDescriptor.java b/server/src/main/java/org/opensearch/indices/SystemIndexDescriptor.java
index f3592a3561d3a..f3212b1e2fae1 100644
--- a/server/src/main/java/org/opensearch/indices/SystemIndexDescriptor.java
+++ b/server/src/main/java/org/opensearch/indices/SystemIndexDescriptor.java
@@ -33,6 +33,7 @@
 package org.opensearch.indices;
 
 import org.apache.lucene.util.automaton.CharacterRunAutomaton;
+import org.opensearch.common.annotation.PublicApi;
 import org.opensearch.common.regex.Regex;
 
 import java.util.Objects;
@@ -40,8 +41,9 @@
 /**
  * Describes a system index. Provides the information required to create and maintain the system index.
  *
- * @opensearch.internal
+ * @opensearch.api
  */
+@PublicApi(since = "2.16.0")
 public class SystemIndexDescriptor {
     private final String indexPattern;
     private final String description;
diff --git a/server/src/main/java/org/opensearch/indices/SystemIndexRegistry.java b/server/src/main/java/org/opensearch/indices/SystemIndexRegistry.java
new file mode 100644
index 0000000000000..d9608e220d924
--- /dev/null
+++ b/server/src/main/java/org/opensearch/indices/SystemIndexRegistry.java
@@ -0,0 +1,130 @@
+/*
+ * SPDX-License-Identifier: Apache-2.0
+ *
+ * The OpenSearch Contributors require contributions made to
+ * this file be licensed under the Apache-2.0 license or a
+ * compatible open source license.
+ */
+
+package org.opensearch.indices;
+
+import org.apache.lucene.util.automaton.Automaton;
+import org.apache.lucene.util.automaton.Operations;
+import org.opensearch.common.annotation.ExperimentalApi;
+import org.opensearch.common.collect.Tuple;
+import org.opensearch.common.regex.Regex;
+import org.opensearch.tasks.TaskResultsService;
+
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.Comparator;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.stream.Collectors;
+
+import static java.util.Collections.singletonList;
+import static java.util.Collections.singletonMap;
+import static java.util.Collections.unmodifiableMap;
+import static org.opensearch.tasks.TaskResultsService.TASK_INDEX;
+
+/**
+ * This class holds the {@link SystemIndexDescriptor} objects that represent system indices the
+ * node knows about. This class also contains static methods that identify if index expressions match
+ * registered system index patterns
+ *
+ * @opensearch.api
+ */
+@ExperimentalApi
+public class SystemIndexRegistry {
+    private static final SystemIndexDescriptor TASK_INDEX_DESCRIPTOR = new SystemIndexDescriptor(TASK_INDEX + "*", "Task Result Index");
+    private static final Map<String, Collection<SystemIndexDescriptor>> SERVER_SYSTEM_INDEX_DESCRIPTORS = singletonMap(
+        TaskResultsService.class.getName(),
+        singletonList(TASK_INDEX_DESCRIPTOR)
+    );
+
+    private volatile static String[] SYSTEM_INDEX_PATTERNS = new String[0];
+    volatile static Collection<SystemIndexDescriptor> SYSTEM_INDEX_DESCRIPTORS = Collections.emptyList();
+
+    static void register(Map<String, Collection<SystemIndexDescriptor>> pluginAndModulesDescriptors) {
+        final Map<String, Collection<SystemIndexDescriptor>> descriptorsMap = buildSystemIndexDescriptorMap(pluginAndModulesDescriptors);
+        checkForOverlappingPatterns(descriptorsMap);
+        List<SystemIndexDescriptor> descriptors = pluginAndModulesDescriptors.values()
+            .stream()
+            .flatMap(Collection::stream)
+            .collect(Collectors.toList());
+        descriptors.add(TASK_INDEX_DESCRIPTOR);
+
+        SYSTEM_INDEX_DESCRIPTORS = descriptors.stream().collect(Collectors.toUnmodifiableList());
+        SYSTEM_INDEX_PATTERNS = descriptors.stream().map(SystemIndexDescriptor::getIndexPattern).toArray(String[]::new);
+    }
+
+    public static List<String> matchesSystemIndexPattern(String... indexExpressions) {
+        return Arrays.stream(indexExpressions)
+            .filter(pattern -> Regex.simpleMatch(SYSTEM_INDEX_PATTERNS, pattern))
+            .collect(Collectors.toList());
+    }
+
+    /**
+     * Given a collection of {@link SystemIndexDescriptor}s and their sources, checks to see if the index patterns of the listed
+     * descriptors overlap with any of the other patterns. If any do, throws an exception.
+     *
+     * @param sourceToDescriptors A map of source (plugin) names to the SystemIndexDescriptors they provide.
+     * @throws IllegalStateException Thrown if any of the index patterns overlaps with another.
+     */
+    static void checkForOverlappingPatterns(Map<String, Collection<SystemIndexDescriptor>> sourceToDescriptors) {
+        List<Tuple<String, SystemIndexDescriptor>> sourceDescriptorPair = sourceToDescriptors.entrySet()
+            .stream()
+            .flatMap(entry -> entry.getValue().stream().map(descriptor -> new Tuple<>(entry.getKey(), descriptor)))
+            .sorted(Comparator.comparing(d -> d.v1() + ":" + d.v2().getIndexPattern())) // Consistent ordering -> consistent error message
+            .collect(Collectors.toList());
+
+        // This is O(n^2) with the number of system index descriptors, and each check is quadratic with the number of states in the
+        // automaton, but the absolute number of system index descriptors should be quite small (~10s at most), and the number of states
+        // per pattern should be low as well. If these assumptions change, this might need to be reworked.
+        sourceDescriptorPair.forEach(descriptorToCheck -> {
+            List<Tuple<String, SystemIndexDescriptor>> descriptorsMatchingThisPattern = sourceDescriptorPair.stream()
+
+                .filter(d -> descriptorToCheck.v2() != d.v2()) // Exclude the pattern currently being checked
+                .filter(d -> overlaps(descriptorToCheck.v2(), d.v2()))
+                .collect(Collectors.toList());
+            if (descriptorsMatchingThisPattern.isEmpty() == false) {
+                throw new IllegalStateException(
+                    "a system index descriptor ["
+                        + descriptorToCheck.v2()
+                        + "] from ["
+                        + descriptorToCheck.v1()
+                        + "] overlaps with other system index descriptors: ["
+                        + descriptorsMatchingThisPattern.stream()
+                            .map(descriptor -> descriptor.v2() + " from [" + descriptor.v1() + "]")
+                            .collect(Collectors.joining(", "))
+                );
+            }
+        });
+    }
+
+    private static boolean overlaps(SystemIndexDescriptor a1, SystemIndexDescriptor a2) {
+        Automaton a1Automaton = Regex.simpleMatchToAutomaton(a1.getIndexPattern());
+        Automaton a2Automaton = Regex.simpleMatchToAutomaton(a2.getIndexPattern());
+        return Operations.isEmpty(Operations.intersection(a1Automaton, a2Automaton)) == false;
+    }
+
+    private static Map<String, Collection<SystemIndexDescriptor>> buildSystemIndexDescriptorMap(
+        Map<String, Collection<SystemIndexDescriptor>> pluginAndModulesMap
+    ) {
+        final Map<String, Collection<SystemIndexDescriptor>> map = new HashMap<>(
+            pluginAndModulesMap.size() + SERVER_SYSTEM_INDEX_DESCRIPTORS.size()
+        );
+        map.putAll(pluginAndModulesMap);
+        // put the server items last since we expect less of them
+        SERVER_SYSTEM_INDEX_DESCRIPTORS.forEach((source, descriptors) -> {
+            if (map.putIfAbsent(source, descriptors) != null) {
+                throw new IllegalArgumentException(
+                    "plugin or module attempted to define the same source [" + source + "] as a built-in system index"
+                );
+            }
+        });
+        return unmodifiableMap(map);
+    }
+}
diff --git a/server/src/main/java/org/opensearch/indices/SystemIndices.java b/server/src/main/java/org/opensearch/indices/SystemIndices.java
index a85e938c61b7a..bbf58fe91512f 100644
--- a/server/src/main/java/org/opensearch/indices/SystemIndices.java
+++ b/server/src/main/java/org/opensearch/indices/SystemIndices.java
@@ -40,25 +40,15 @@
 import org.apache.lucene.util.automaton.MinimizationOperations;
 import org.apache.lucene.util.automaton.Operations;
 import org.opensearch.common.Nullable;
-import org.opensearch.common.collect.Tuple;
 import org.opensearch.common.regex.Regex;
 import org.opensearch.core.index.Index;
-import org.opensearch.tasks.TaskResultsService;
 
 import java.util.Collection;
-import java.util.Comparator;
-import java.util.HashMap;
 import java.util.List;
 import java.util.Map;
 import java.util.Optional;
 import java.util.stream.Collectors;
 
-import static java.util.Collections.singletonList;
-import static java.util.Collections.singletonMap;
-import static java.util.Collections.unmodifiableList;
-import static java.util.Collections.unmodifiableMap;
-import static org.opensearch.tasks.TaskResultsService.TASK_INDEX;
-
 /**
  * This class holds the {@link SystemIndexDescriptor} objects that represent system indices the
  * node knows about. Methods for determining if an index should be a system index are also provided
@@ -69,21 +59,11 @@ public class SystemIndices {
 
     private static final Logger logger = LogManager.getLogger(SystemIndices.class);
 
-    private static final Map<String, Collection<SystemIndexDescriptor>> SERVER_SYSTEM_INDEX_DESCRIPTORS = singletonMap(
-        TaskResultsService.class.getName(),
-        singletonList(new SystemIndexDescriptor(TASK_INDEX + "*", "Task Result Index"))
-    );
-
     private final CharacterRunAutomaton runAutomaton;
-    private final Collection<SystemIndexDescriptor> systemIndexDescriptors;
 
     public SystemIndices(Map<String, Collection<SystemIndexDescriptor>> pluginAndModulesDescriptors) {
-        final Map<String, Collection<SystemIndexDescriptor>> descriptorsMap = buildSystemIndexDescriptorMap(pluginAndModulesDescriptors);
-        checkForOverlappingPatterns(descriptorsMap);
-        this.systemIndexDescriptors = unmodifiableList(
-            descriptorsMap.values().stream().flatMap(Collection::stream).collect(Collectors.toList())
-        );
-        this.runAutomaton = buildCharacterRunAutomaton(systemIndexDescriptors);
+        SystemIndexRegistry.register(pluginAndModulesDescriptors);
+        this.runAutomaton = buildCharacterRunAutomaton(SystemIndexRegistry.SYSTEM_INDEX_DESCRIPTORS);
     }
 
     /**
@@ -111,7 +91,7 @@ public boolean isSystemIndex(String indexName) {
      * @throws IllegalStateException if multiple descriptors match the name
     */
     public @Nullable SystemIndexDescriptor findMatchingDescriptor(String name) {
-        final List<SystemIndexDescriptor> matchingDescriptors = systemIndexDescriptors.stream()
+        final List<SystemIndexDescriptor> matchingDescriptors = SystemIndexRegistry.SYSTEM_INDEX_DESCRIPTORS.stream()
             .filter(descriptor -> descriptor.matchesIndexPattern(name))
             .collect(Collectors.toList());
 
@@ -168,66 +148,4 @@ private static CharacterRunAutomaton buildCharacterRunAutomaton(Collection<Syst
-
-    /**
-     * Given a collection of {@link SystemIndexDescriptor}s and their sources, checks to see if the index patterns of the listed
-     * descriptors overlap with any of the other patterns. If any do, throws an exception.
-     *
-     * @param sourceToDescriptors A map of source (plugin) names to the SystemIndexDescriptors they provide.
-     * @throws IllegalStateException Thrown if any of the index patterns overlaps with another.
-     */
-    static void checkForOverlappingPatterns(Map<String, Collection<SystemIndexDescriptor>> sourceToDescriptors) {
-        List<Tuple<String, SystemIndexDescriptor>> sourceDescriptorPair = sourceToDescriptors.entrySet()
-            .stream()
-            .flatMap(entry -> entry.getValue().stream().map(descriptor -> new Tuple<>(entry.getKey(), descriptor)))
-            .sorted(Comparator.comparing(d -> d.v1() + ":" + d.v2().getIndexPattern())) // Consistent ordering -> consistent error message
-            .collect(Collectors.toList());
-
-        // This is O(n^2) with the number of system index descriptors, and each check is quadratic with the number of states in the
-        // automaton, but the absolute number of system index descriptors should be quite small (~10s at most), and the number of states
-        // per pattern should be low as well. If these assumptions change, this might need to be reworked.
-        sourceDescriptorPair.forEach(descriptorToCheck -> {
-            List<Tuple<String, SystemIndexDescriptor>> descriptorsMatchingThisPattern = sourceDescriptorPair.stream()
-
-                .filter(d -> descriptorToCheck.v2() != d.v2()) // Exclude the pattern currently being checked
-                .filter(d -> overlaps(descriptorToCheck.v2(), d.v2()))
-                .collect(Collectors.toList());
-            if (descriptorsMatchingThisPattern.isEmpty() == false) {
-                throw new IllegalStateException(
-                    "a system index descriptor ["
-                        + descriptorToCheck.v2()
-                        + "] from ["
-                        + descriptorToCheck.v1()
-                        + "] overlaps with other system index descriptors: ["
-                        + descriptorsMatchingThisPattern.stream()
-                            .map(descriptor -> descriptor.v2() + " from [" + descriptor.v1() + "]")
-                            .collect(Collectors.joining(", "))
-                );
-            }
-        });
-    }
-
-    private static boolean overlaps(SystemIndexDescriptor a1, SystemIndexDescriptor a2) {
-        Automaton a1Automaton = Regex.simpleMatchToAutomaton(a1.getIndexPattern());
-        Automaton a2Automaton = Regex.simpleMatchToAutomaton(a2.getIndexPattern());
-        return Operations.isEmpty(Operations.intersection(a1Automaton, a2Automaton)) == false;
-    }
-
-    private static Map<String, Collection<SystemIndexDescriptor>> buildSystemIndexDescriptorMap(
-        Map<String, Collection<SystemIndexDescriptor>> pluginAndModulesMap
-    ) {
-        final Map<String, Collection<SystemIndexDescriptor>> map = new HashMap<>(
-            pluginAndModulesMap.size() + SERVER_SYSTEM_INDEX_DESCRIPTORS.size()
-        );
-        map.putAll(pluginAndModulesMap);
-        // put the server items last since we expect less of them
-        SERVER_SYSTEM_INDEX_DESCRIPTORS.forEach((source, descriptors) -> {
-            if (map.putIfAbsent(source, descriptors) != null) {
-                throw new IllegalArgumentException(
-                    "plugin or module attempted to define the same source [" + source + "] as a built-in system index"
-                );
-            }
-        });
-        return unmodifiableMap(map);
-    }
 }
diff --git a/server/src/main/java/org/opensearch/node/Node.java b/server/src/main/java/org/opensearch/node/Node.java
index 85ef547e27787..96a716af7f1a1 100644
--- a/server/src/main/java/org/opensearch/node/Node.java
+++ b/server/src/main/java/org/opensearch/node/Node.java
@@ -695,6 +695,14 @@ protected Node(
             repositoriesServiceReference::get,
             rerouteServiceReference::get
         );
+        final Map<String, Collection<SystemIndexDescriptor>> systemIndexDescriptorMap = Collections.unmodifiableMap(
+            pluginsService.filterPlugins(SystemIndexPlugin.class)
+                .stream()
+                .collect(
+                    Collectors.toMap(plugin -> plugin.getClass().getSimpleName(), plugin -> plugin.getSystemIndexDescriptors(settings))
+                )
+        );
+        final SystemIndices systemIndices = new SystemIndices(systemIndexDescriptorMap);
         final ClusterModule clusterModule = new ClusterModule(
             settings,
             clusterService,
@@ -819,15 +827,6 @@ protected Node(
             .flatMap(m -> m.entrySet().stream())
             .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
 
-        final Map<String, Collection<SystemIndexDescriptor>> systemIndexDescriptorMap = Collections.unmodifiableMap(
-            pluginsService.filterPlugins(SystemIndexPlugin.class)
-                .stream()
-                .collect(
-                    Collectors.toMap(plugin -> plugin.getClass().getSimpleName(), plugin -> plugin.getSystemIndexDescriptors(settings))
-                )
-        );
-        final SystemIndices systemIndices = new SystemIndices(systemIndexDescriptorMap);
-
         final RerouteService rerouteService = new BatchedRerouteService(clusterService, clusterModule.getAllocationService()::reroute);
         rerouteServiceReference.set(rerouteService);
         clusterService.setRerouteService(rerouteService);
diff --git a/server/src/test/java/org/opensearch/indices/SystemIndicesTests.java b/server/src/test/java/org/opensearch/indices/SystemIndicesTests.java
index 2b40c01c61242..8ac457c32d53a 100644
--- a/server/src/test/java/org/opensearch/indices/SystemIndicesTests.java
+++ b/server/src/test/java/org/opensearch/indices/SystemIndicesTests.java
@@ -32,12 +32,17 @@
 
 package org.opensearch.indices;
 
+import org.opensearch.common.settings.Settings;
+import org.opensearch.plugins.Plugin;
+import org.opensearch.plugins.SystemIndexPlugin;
 import org.opensearch.tasks.TaskResultsService;
 import org.opensearch.test.OpenSearchTestCase;
 
 import java.util.Arrays;
 import java.util.Collection;
+import java.util.Collections;
 import java.util.HashMap;
+import java.util.List;
 import java.util.Map;
 
 import static java.util.Collections.emptyMap;
@@ -67,7 +72,7 @@ public void testBasicOverlappingPatterns() {
 
         IllegalStateException exception = expectThrows(
             IllegalStateException.class,
-            () -> SystemIndices.checkForOverlappingPatterns(descriptors)
+            () -> SystemIndexRegistry.checkForOverlappingPatterns(descriptors)
         );
         assertThat(
             exception.getMessage(),
@@ -104,7 +109,7 @@ public void testComplexOverlappingPatterns() {
 
         IllegalStateException exception = expectThrows(
             IllegalStateException.class,
-            () -> SystemIndices.checkForOverlappingPatterns(descriptors)
+            () -> SystemIndexRegistry.checkForOverlappingPatterns(descriptors)
         );
         assertThat(
             exception.getMessage(),
@@ -133,4 +138,94 @@ public void testPluginCannotOverrideBuiltInSystemIndex() {
         IllegalArgumentException e = expectThrows(IllegalArgumentException.class, () -> new SystemIndices(pluginMap));
         assertThat(e.getMessage(), containsString("plugin or module attempted to define the same source"));
     }
+
+    public void testSystemIndexMatching() {
+        SystemIndexPlugin plugin1 = new SystemIndexPlugin1();
+        SystemIndexPlugin plugin2 = new SystemIndexPlugin2();
+        SystemIndexPlugin plugin3 = new SystemIndexPatternPlugin();
+        SystemIndices pluginSystemIndices = new SystemIndices(
+            Map.of(
+                SystemIndexPlugin1.class.getCanonicalName(),
+                plugin1.getSystemIndexDescriptors(Settings.EMPTY),
+                SystemIndexPlugin2.class.getCanonicalName(),
+                plugin2.getSystemIndexDescriptors(Settings.EMPTY),
+                SystemIndexPatternPlugin.class.getCanonicalName(),
+                plugin3.getSystemIndexDescriptors(Settings.EMPTY)
+            )
+        );
+
+        assertThat(
+            SystemIndexRegistry.matchesSystemIndexPattern(".system-index1", ".system-index2"),
+            equalTo(List.of(SystemIndexPlugin1.SYSTEM_INDEX_1, SystemIndexPlugin2.SYSTEM_INDEX_2))
+        );
+        assertThat(SystemIndexRegistry.matchesSystemIndexPattern(".system-index1"), equalTo(List.of(SystemIndexPlugin1.SYSTEM_INDEX_1)));
+        assertThat(SystemIndexRegistry.matchesSystemIndexPattern(".system-index2"), equalTo(List.of(SystemIndexPlugin2.SYSTEM_INDEX_2)));
+        assertThat(SystemIndexRegistry.matchesSystemIndexPattern(".system-index-pattern1"), equalTo(List.of(".system-index-pattern1")));
+        assertThat(
+            SystemIndexRegistry.matchesSystemIndexPattern(".system-index-pattern-sub*"),
+            equalTo(List.of(".system-index-pattern-sub*"))
+        );
+        assertThat(
+            SystemIndexRegistry.matchesSystemIndexPattern(".system-index-pattern1", ".system-index-pattern2"),
+            equalTo(List.of(".system-index-pattern1", ".system-index-pattern2"))
+        );
+        assertThat(
+            SystemIndexRegistry.matchesSystemIndexPattern(".system-index1", ".system-index-pattern1"),
+            equalTo(List.of(".system-index1", ".system-index-pattern1"))
+        );
+        assertThat(
+            SystemIndexRegistry.matchesSystemIndexPattern(".system-index1", ".system-index-pattern1", ".not-system"),
+            equalTo(List.of(".system-index1", ".system-index-pattern1"))
+        );
+        assertThat(SystemIndexRegistry.matchesSystemIndexPattern(".not-system"), equalTo(Collections.emptyList()));
+    }
+
+    public void testRegisteredSystemIndexExpansion() {
+        SystemIndexPlugin plugin1 = new SystemIndexPlugin1();
+        SystemIndexPlugin plugin2 = new SystemIndexPlugin2();
+        SystemIndices pluginSystemIndices = new SystemIndices(
+            Map.of(
+                SystemIndexPlugin1.class.getCanonicalName(),
+                plugin1.getSystemIndexDescriptors(Settings.EMPTY),
+                SystemIndexPlugin2.class.getCanonicalName(),
+                plugin2.getSystemIndexDescriptors(Settings.EMPTY)
+            )
+        );
+        List<String> systemIndices = SystemIndexRegistry.matchesSystemIndexPattern(
+            SystemIndexPlugin1.SYSTEM_INDEX_1,
+            SystemIndexPlugin2.SYSTEM_INDEX_2
+        );
+        assertEquals(2, systemIndices.size());
+        assertTrue(systemIndices.containsAll(List.of(SystemIndexPlugin1.SYSTEM_INDEX_1, SystemIndexPlugin2.SYSTEM_INDEX_2)));
+    }
+
+    static final class SystemIndexPlugin1 extends Plugin implements SystemIndexPlugin {
+        public static final String SYSTEM_INDEX_1 = ".system-index1";
+
+        @Override
+        public Collection<SystemIndexDescriptor> getSystemIndexDescriptors(Settings settings) {
+            final SystemIndexDescriptor systemIndexDescriptor = new SystemIndexDescriptor(SYSTEM_INDEX_1, "System index 1");
+            return Collections.singletonList(systemIndexDescriptor);
+        }
+    }
+
+    static final class SystemIndexPlugin2 extends Plugin implements SystemIndexPlugin {
+        public static final String SYSTEM_INDEX_2 = ".system-index2";
+
+        @Override
+        public Collection<SystemIndexDescriptor> getSystemIndexDescriptors(Settings settings) {
+            final SystemIndexDescriptor systemIndexDescriptor = new SystemIndexDescriptor(SYSTEM_INDEX_2, "System index 2");
+            return Collections.singletonList(systemIndexDescriptor);
+        }
+    }
+
+    static final class SystemIndexPatternPlugin extends Plugin implements SystemIndexPlugin {
+        public static final String SYSTEM_INDEX_PATTERN = ".system-index-pattern*";
+
+        @Override
+        public Collection<SystemIndexDescriptor> getSystemIndexDescriptors(Settings settings) {
+            final SystemIndexDescriptor systemIndexDescriptor = new SystemIndexDescriptor(SYSTEM_INDEX_PATTERN, "System index pattern");
+            return Collections.singletonList(systemIndexDescriptor);
+        }
+    }
 }
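The tests above show the intended call pattern for the new static helper: hand matchesSystemIndexPattern a set of index expressions and get back the subset that hits a registered system index pattern. A hedged usage sketch mirroring the test setup (the SystemIndexMatchDemo/findSystemIndexMatches names are illustrative, not part of the patch):

    import java.util.List;
    import java.util.Map;

    import org.opensearch.common.settings.Settings;
    import org.opensearch.indices.SystemIndexRegistry;
    import org.opensearch.indices.SystemIndices;
    import org.opensearch.plugins.SystemIndexPlugin;

    final class SystemIndexMatchDemo {
        static List<String> findSystemIndexMatches(SystemIndexPlugin plugin, String... expressions) {
            // Constructing SystemIndices registers the plugin's descriptors as a side
            // effect; SystemIndexRegistry.register itself is package-private.
            new SystemIndices(Map.of(plugin.getClass().getCanonicalName(), plugin.getSystemIndexDescriptors(Settings.EMPTY)));
            // Only expressions matching a registered pattern are returned; plain
            // names such as ".not-system" drop out, as the assertions above show.
            return SystemIndexRegistry.matchesSystemIndexPattern(expressions);
        }
    }

Note the API is marked @ExperimentalApi, so this call shape may change in later releases.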
From 13f04d97e3438b0389d1e851dfd08678ab644361 Mon Sep 17 00:00:00 2001
From: Sandesh Kumar
Date: Tue, 9 Jul 2024 11:49:03 -0700
Subject: [PATCH 044/167] Refactor Grok validate pattern to iterative approach
 (#14206)

* grok validate patterns recursion to iterative

Signed-off-by: Sandesh Kumar

* Add max depth in resolving a pattern to avoid OOM

Signed-off-by: Sandesh Kumar

* change path from deque to arraylist

Signed-off-by: Sandesh Kumar

* rename queue to stack

Signed-off-by: Sandesh Kumar

* Change max depth to 500

Signed-off-by: Sandesh Kumar

* typo originPatternName fix

Signed-off-by: Sandesh Kumar

* spotless

Signed-off-by: Sandesh Kumar

---------

Signed-off-by: Sandesh Kumar
---
 CHANGELOG.md                                  |   1 +
 .../main/java/org/opensearch/grok/Grok.java   | 122 +++++++++++++-----
 .../java/org/opensearch/grok/GrokTests.java   |  10 ++
 3 files changed, 99 insertions(+), 34 deletions(-)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index fe8d5d524097e..0f683154c52f9 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -61,6 +61,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
 - Fix file cache initialization ([#14004](https://github.com/opensearch-project/OpenSearch/pull/14004))
 - Handle NPE in GetResult if "found" field is missing ([#14552](https://github.com/opensearch-project/OpenSearch/pull/14552))
 - Refactoring FilterPath.parse by using an iterative approach ([#14200](https://github.com/opensearch-project/OpenSearch/pull/14200))
+- Refactoring Grok.validatePatternBank by using an iterative approach ([#14206](https://github.com/opensearch-project/OpenSearch/pull/14206))
 
 ### Security

diff --git a/libs/grok/src/main/java/org/opensearch/grok/Grok.java b/libs/grok/src/main/java/org/opensearch/grok/Grok.java
index 7aa3347ba4f4b..aa5b1a936b99d 100644
--- a/libs/grok/src/main/java/org/opensearch/grok/Grok.java
+++ b/libs/grok/src/main/java/org/opensearch/grok/Grok.java
@@ -37,14 +37,18 @@
 import java.io.InputStream;
 import java.io.InputStreamReader;
 import java.nio.charset.StandardCharsets;
+import java.util.ArrayDeque;
 import java.util.ArrayList;
 import java.util.Collections;
+import java.util.Deque;
+import java.util.HashMap;
+import java.util.HashSet;
 import java.util.Iterator;
 import java.util.LinkedHashMap;
 import java.util.List;
 import java.util.Locale;
 import java.util.Map;
-import java.util.Stack;
+import java.util.Set;
 import java.util.function.Consumer;
 
 import org.jcodings.specific.UTF8Encoding;
@@ -86,6 +90,7 @@ public final class Grok {
         UTF8Encoding.INSTANCE,
         Syntax.DEFAULT
     );
+    private static final int MAX_PATTERN_DEPTH_SIZE = 500;
 
     private static final int MAX_TO_REGEX_ITERATIONS = 100_000; // sanity limit
 
@@ -128,7 +133,7 @@ private Grok(
             expressionBytes.length,
             Option.DEFAULT,
             UTF8Encoding.INSTANCE,
-            message -> logCallBack.accept(message)
+            logCallBack::accept
         );
 
         List<GrokCaptureConfig> captureConfig = new ArrayList<>();
@@ -144,7 +149,7 @@ private Grok(
      */
     private void validatePatternBank() {
         for (String patternName : patternBank.keySet()) {
-            validatePatternBank(patternName, new Stack<>());
+            validatePatternBank(patternName);
         }
     }
 
@@ -156,33 +161,84 @@ private void validatePatternBank() {
      * a reference to another named pattern. This method will navigate to all these named patterns and
     * check for a circular reference.
      */
-    private void validatePatternBank(String patternName, Stack<String> path) {
-        String pattern = patternBank.get(patternName);
-        boolean isSelfReference = pattern.contains("%{" + patternName + "}") || pattern.contains("%{" + patternName + ":");
-        if (isSelfReference) {
-            throwExceptionForCircularReference(patternName, pattern);
-        } else if (path.contains(patternName)) {
-            // current pattern name is already in the path, fetch its predecessor
-            String prevPatternName = path.pop();
-            String prevPattern = patternBank.get(prevPatternName);
-            throwExceptionForCircularReference(prevPatternName, prevPattern, patternName, path);
-        }
-        path.push(patternName);
-        for (int i = pattern.indexOf("%{"); i != -1; i = pattern.indexOf("%{", i + 1)) {
-            int begin = i + 2;
-            int syntaxEndIndex = pattern.indexOf('}', begin);
-            if (syntaxEndIndex == -1) {
-                throw new IllegalArgumentException("Malformed pattern [" + patternName + "][" + pattern + "]");
+    private void validatePatternBank(String initialPatternName) {
+        Deque<Frame> stack = new ArrayDeque<>();
+        Set<String> visitedPatterns = new HashSet<>();
+        Map<String, List<String>> pathMap = new HashMap<>();
+
+        List<String> initialPath = new ArrayList<>();
+        initialPath.add(initialPatternName);
+        pathMap.put(initialPatternName, initialPath);
+        stack.push(new Frame(initialPatternName, initialPath, 0));
+
+        while (!stack.isEmpty()) {
+            Frame frame = stack.peek();
+            String patternName = frame.patternName;
+            List<String> path = frame.path;
+            int startIndex = frame.startIndex;
+            String pattern = patternBank.get(patternName);
+
+            if (visitedPatterns.contains(patternName)) {
+                stack.pop();
+                continue;
+            }
+
+            visitedPatterns.add(patternName);
+            boolean foundDependency = false;
+
+            for (int i = startIndex; i < pattern.length(); i++) {
+                if (pattern.startsWith("%{", i)) {
+                    int begin = i + 2;
+                    int syntaxEndIndex = pattern.indexOf('}', begin);
+                    if (syntaxEndIndex == -1) {
+                        throw new IllegalArgumentException("Malformed pattern [" + patternName + "][" + pattern + "]");
+                    }
+
+                    int semanticNameIndex = pattern.indexOf(':', begin);
+                    int end = semanticNameIndex == -1 ? syntaxEndIndex : Math.min(syntaxEndIndex, semanticNameIndex);
+
+                    String dependsOnPattern = pattern.substring(begin, end);
+
+                    if (dependsOnPattern.equals(patternName)) {
+                        throwExceptionForCircularReference(patternName, pattern);
+                    }
+
+                    if (pathMap.containsKey(dependsOnPattern)) {
+                        throwExceptionForCircularReference(patternName, pattern, dependsOnPattern, path.subList(0, path.size() - 1));
+                    }
+
+                    List<String> newPath = new ArrayList<>(path);
+                    newPath.add(dependsOnPattern);
+                    pathMap.put(dependsOnPattern, newPath);
+
+                    stack.push(new Frame(dependsOnPattern, newPath, 0));
+                    frame.startIndex = i + 1;
+                    foundDependency = true;
+                    break;
+                }
             }
-            int semanticNameIndex = pattern.indexOf(':', begin);
-            int end = syntaxEndIndex;
-            if (semanticNameIndex != -1) {
-                end = Math.min(syntaxEndIndex, semanticNameIndex);
+
+            if (!foundDependency) {
+                pathMap.remove(patternName);
+                stack.pop();
+            }
+
+            if (stack.size() > MAX_PATTERN_DEPTH_SIZE) {
+                throw new IllegalArgumentException("Pattern references exceeded maximum depth of " + MAX_PATTERN_DEPTH_SIZE);
             }
-            String dependsOnPattern = pattern.substring(begin, end);
-            validatePatternBank(dependsOnPattern, path);
         }
-        path.pop();
+    }
+
+    private static class Frame {
+        String patternName;
+        List<String> path;
+        int startIndex;
+
+        Frame(String patternName, List<String> path, int startIndex) {
+            this.patternName = patternName;
+            this.path = path;
+            this.startIndex = startIndex;
+        }
     }
 
     private static void throwExceptionForCircularReference(String patternName, String pattern) {
@@ -192,13 +248,13 @@ private static void throwExceptionForCircularReference(String patternName, Strin
     private static void throwExceptionForCircularReference(
         String patternName,
         String pattern,
-        String originPatterName,
-        Stack<String> path
+        String originPatternName,
+        List<String> path
     ) {
         StringBuilder message = new StringBuilder("circular reference in pattern [");
         message.append(patternName).append("][").append(pattern).append("]");
-        if (originPatterName != null) {
-            message.append(" back to pattern [").append(originPatterName).append("]");
+        if (originPatternName != null) {
+            message.append(" back to pattern [").append(originPatternName).append("]");
         }
         if (path != null && path.size() > 1) {
             message.append(" via patterns [").append(String.join("=>", path)).append("]");
@@ -217,9 +273,7 @@ private String groupMatch(String name, Region region, String pattern) {
             int begin = region.getBeg(number);
             int end = region.getEnd(number);
             return new String(pattern.getBytes(StandardCharsets.UTF_8), begin, end - begin, StandardCharsets.UTF_8);
-        } catch (StringIndexOutOfBoundsException e) {
-            return null;
-        } catch (ValueException e) {
+        } catch (StringIndexOutOfBoundsException | ValueException e) {
             return null;
         }
     }
diff --git a/libs/grok/src/test/java/org/opensearch/grok/GrokTests.java b/libs/grok/src/test/java/org/opensearch/grok/GrokTests.java
index a37689e051c67..8476d541aa46e 100644
--- a/libs/grok/src/test/java/org/opensearch/grok/GrokTests.java
+++ b/libs/grok/src/test/java/org/opensearch/grok/GrokTests.java
@@ -377,6 +377,16 @@ public void testCircularReference() {
             "circular reference in pattern [NAME5][!!!%{NAME1}!!!] back to pattern [NAME1] " + "via patterns [NAME1=>NAME2=>NAME3=>NAME4]",
             e.getMessage()
         );
+
+        e = expectThrows(IllegalArgumentException.class, () -> {
+            Map<String, String> bank = new TreeMap<>();
+            for (int i = 1; i <= 501; i++) {
+                bank.put("NAME" + i, "!!!%{NAME" + (i + 1) + "}!!!");
+            }
+            String pattern = "%{NAME1}";
+            new Grok(bank, pattern, false, logger::warn);
+        });
+        assertEquals("Pattern references exceeded maximum depth of 500", e.getMessage());
     }
 
     public void testMalformedPattern() {
From b8dc46d93711882a8fae8b953f11934d940f12a5 Mon Sep 17 00:00:00 2001
From: Andriy Redko
Date: Tue, 9 Jul 2024 16:15:40 -0400
Subject: [PATCH 045/167] Bump opentelemetry from 1.39.0 to 1.40.0 (#14674)

Signed-off-by: Andriy Redko
---
 CHANGELOG.md                                                 | 5 +++--
 buildSrc/version.properties                                  | 4 ++--
 .../licenses/opentelemetry-api-1.39.0.jar.sha1               | 1 -
 .../licenses/opentelemetry-api-1.40.0.jar.sha1               | 1 +
 .../opentelemetry-api-incubator-1.39.0-alpha.jar.sha1        | 1 -
 .../opentelemetry-api-incubator-1.40.0-alpha.jar.sha1        | 1 +
 .../licenses/opentelemetry-context-1.39.0.jar.sha1           | 1 -
 .../licenses/opentelemetry-context-1.40.0.jar.sha1           | 1 +
 .../licenses/opentelemetry-exporter-common-1.39.0.jar.sha1   | 1 -
 .../licenses/opentelemetry-exporter-common-1.40.0.jar.sha1   | 1 +
 .../licenses/opentelemetry-exporter-logging-1.39.0.jar.sha1  | 1 -
 .../licenses/opentelemetry-exporter-logging-1.40.0.jar.sha1  | 1 +
 .../licenses/opentelemetry-exporter-otlp-1.39.0.jar.sha1     | 1 -
 .../licenses/opentelemetry-exporter-otlp-1.40.0.jar.sha1     | 1 +
 .../opentelemetry-exporter-otlp-common-1.39.0.jar.sha1       | 1 -
 .../opentelemetry-exporter-otlp-common-1.40.0.jar.sha1       | 1 +
 .../opentelemetry-exporter-sender-okhttp-1.39.0.jar.sha1     | 1 -
 .../opentelemetry-exporter-sender-okhttp-1.40.0.jar.sha1     | 1 +
 .../licenses/opentelemetry-sdk-1.39.0.jar.sha1               | 1 -
 .../licenses/opentelemetry-sdk-1.40.0.jar.sha1               | 1 +
 .../licenses/opentelemetry-sdk-common-1.39.0.jar.sha1        | 1 -
 .../licenses/opentelemetry-sdk-common-1.40.0.jar.sha1        | 1 +
 .../licenses/opentelemetry-sdk-logs-1.39.0.jar.sha1          | 1 -
 .../licenses/opentelemetry-sdk-logs-1.40.0.jar.sha1          | 1 +
 .../licenses/opentelemetry-sdk-metrics-1.39.0.jar.sha1       | 1 -
 .../licenses/opentelemetry-sdk-metrics-1.40.0.jar.sha1       | 1 +
 .../licenses/opentelemetry-sdk-trace-1.39.0.jar.sha1         | 1 -
 .../licenses/opentelemetry-sdk-trace-1.40.0.jar.sha1         | 1 +
 .../licenses/opentelemetry-semconv-1.25.0-alpha.jar.sha1     | 1 -
 .../licenses/opentelemetry-semconv-1.26.0-alpha.jar.sha1     | 1 +
 30 files changed, 19 insertions(+), 18 deletions(-)
 delete mode 100644 plugins/telemetry-otel/licenses/opentelemetry-api-1.39.0.jar.sha1
 create mode 100644 plugins/telemetry-otel/licenses/opentelemetry-api-1.40.0.jar.sha1
 delete mode 100644 plugins/telemetry-otel/licenses/opentelemetry-api-incubator-1.39.0-alpha.jar.sha1
 create mode 100644 plugins/telemetry-otel/licenses/opentelemetry-api-incubator-1.40.0-alpha.jar.sha1
 delete mode 100644 plugins/telemetry-otel/licenses/opentelemetry-context-1.39.0.jar.sha1
 create mode 100644 plugins/telemetry-otel/licenses/opentelemetry-context-1.40.0.jar.sha1
 delete mode 100644 plugins/telemetry-otel/licenses/opentelemetry-exporter-common-1.39.0.jar.sha1
 create mode 100644 plugins/telemetry-otel/licenses/opentelemetry-exporter-common-1.40.0.jar.sha1
 delete mode 100644 plugins/telemetry-otel/licenses/opentelemetry-exporter-logging-1.39.0.jar.sha1
 create mode 100644 plugins/telemetry-otel/licenses/opentelemetry-exporter-logging-1.40.0.jar.sha1
 delete mode 100644 plugins/telemetry-otel/licenses/opentelemetry-exporter-otlp-1.39.0.jar.sha1
 create mode 100644 plugins/telemetry-otel/licenses/opentelemetry-exporter-otlp-1.40.0.jar.sha1
 delete mode 100644 plugins/telemetry-otel/licenses/opentelemetry-exporter-otlp-common-1.39.0.jar.sha1
 create mode 100644 plugins/telemetry-otel/licenses/opentelemetry-exporter-otlp-common-1.40.0.jar.sha1
 delete mode 100644 plugins/telemetry-otel/licenses/opentelemetry-exporter-sender-okhttp-1.39.0.jar.sha1
 create mode 100644 plugins/telemetry-otel/licenses/opentelemetry-exporter-sender-okhttp-1.40.0.jar.sha1
 delete mode 100644 plugins/telemetry-otel/licenses/opentelemetry-sdk-1.39.0.jar.sha1
 create mode 100644 plugins/telemetry-otel/licenses/opentelemetry-sdk-1.40.0.jar.sha1
 delete mode 100644 plugins/telemetry-otel/licenses/opentelemetry-sdk-common-1.39.0.jar.sha1
 create mode 100644 plugins/telemetry-otel/licenses/opentelemetry-sdk-common-1.40.0.jar.sha1
 delete mode 100644 plugins/telemetry-otel/licenses/opentelemetry-sdk-logs-1.39.0.jar.sha1
 create mode 100644 plugins/telemetry-otel/licenses/opentelemetry-sdk-logs-1.40.0.jar.sha1
 delete mode 100644 plugins/telemetry-otel/licenses/opentelemetry-sdk-metrics-1.39.0.jar.sha1
 create mode 100644 plugins/telemetry-otel/licenses/opentelemetry-sdk-metrics-1.40.0.jar.sha1
 delete mode 100644 plugins/telemetry-otel/licenses/opentelemetry-sdk-trace-1.39.0.jar.sha1
 create mode 100644 plugins/telemetry-otel/licenses/opentelemetry-sdk-trace-1.40.0.jar.sha1
 delete mode 100644 plugins/telemetry-otel/licenses/opentelemetry-semconv-1.25.0-alpha.jar.sha1
 create mode 100644 plugins/telemetry-otel/licenses/opentelemetry-semconv-1.26.0-alpha.jar.sha1

diff --git a/CHANGELOG.md b/CHANGELOG.md
index 0f683154c52f9..1807fb6d5e00c 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -26,8 +26,9 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
 - Bump `com.nimbusds:nimbus-jose-jwt` from 9.37.3 to 9.40 ([#14398](https://github.com/opensearch-project/OpenSearch/pull/14398))
 - Bump `org.apache.commons:commons-configuration2` from 2.10.1 to 2.11.0 ([#14399](https://github.com/opensearch-project/OpenSearch/pull/14399))
 - Bump `com.gradle.develocity` from 3.17.4 to 3.17.5 ([#14397](https://github.com/opensearch-project/OpenSearch/pull/14397))
-- Bump `opentelemetry` from 1.36.0 to 1.39.0 ([#14457](https://github.com/opensearch-project/OpenSearch/pull/14457))
-- Bump `azure-identity` from 1.11.4 to 1.13.0, Bump `msal4j` from 1.14.3 to 1.15.1, Bump `msal4j-persistence-extension` from 1.2.0 to 1.3.0 ([#14506](https://github.com/opensearch-project/OpenSearch/pull/14506))
+- Bump `opentelemetry` from 1.36.0 to 1.40.0 ([#14457](https://github.com/opensearch-project/OpenSearch/pull/14457), [#14674](https://github.com/opensearch-project/OpenSearch/pull/14674))
+- Bump `opentelemetry-semconv` from 1.25.0-alpha to 1.26.0-alpha ([#14674](https://github.com/opensearch-project/OpenSearch/pull/14674))
+- Bump `azure-identity` from 1.11.4 to 1.13.0, Bump `msal4j` from 1.14.3 to 1.15.1, Bump `msal4j-persistence-extension` from 1.2.0 to 1.3.0 ([#14506](https://github.com/opensearch-project/OpenSearch/pull/14673))
 - Bump `com.azure:azure-storage-common` from 12.21.2 to 12.25.1 ([#14517](https://github.com/opensearch-project/OpenSearch/pull/14517))
 - Bump `com.microsoft.azure:msal4j` from 1.15.1 to 1.16.0 ([#14610](https://github.com/opensearch-project/OpenSearch/pull/14610))
 - Bump `com.github.spullara.mustache.java:compiler` from 0.9.13 to 0.9.14 ([#14672](https://github.com/opensearch-project/OpenSearch/pull/14672))
diff --git a/buildSrc/version.properties b/buildSrc/version.properties
index a99bd4801b7f3..a04fb68f47f55 100644
--- a/buildSrc/version.properties
+++ b/buildSrc/version.properties
@@ -74,5 +74,5 @@ jzlib = 1.1.3
 resteasy = 6.2.4.Final
 
 # opentelemetry dependencies
-opentelemetry = 1.39.0
-opentelemetrysemconv = 1.25.0-alpha
+opentelemetry = 1.40.0
+opentelemetrysemconv = 1.26.0-alpha
diff --git a/plugins/telemetry-otel/licenses/opentelemetry-api-1.39.0.jar.sha1 b/plugins/telemetry-otel/licenses/opentelemetry-api-1.39.0.jar.sha1
deleted file mode 100644
index 415fe8f3d8aaa..0000000000000
--- a/plugins/telemetry-otel/licenses/opentelemetry-api-1.39.0.jar.sha1
+++ /dev/null
@@ -1 +0,0 @@
-396b89a66526bd5694ad3bef4604b876177e0b44
\ No newline at end of file
diff --git a/plugins/telemetry-otel/licenses/opentelemetry-api-1.40.0.jar.sha1 b/plugins/telemetry-otel/licenses/opentelemetry-api-1.40.0.jar.sha1
new file mode 100644
index 0000000000000..04ec81edf969c
--- /dev/null
+++ b/plugins/telemetry-otel/licenses/opentelemetry-api-1.40.0.jar.sha1
@@ -0,0 +1 @@
+6db562f2b74ffaa7253d740e9aa7a3c4f2e392ec
\ No newline at end of file
diff --git a/plugins/telemetry-otel/licenses/opentelemetry-api-incubator-1.39.0-alpha.jar.sha1 b/plugins/telemetry-otel/licenses/opentelemetry-api-incubator-1.39.0-alpha.jar.sha1
deleted file mode 100644
index 9c3c9f43d153c..0000000000000
--- a/plugins/telemetry-otel/licenses/opentelemetry-api-incubator-1.39.0-alpha.jar.sha1
+++ /dev/null
@@ -1 +0,0 @@
-1a1fd96155e1b58726300bbf8457630713035e51
\ No newline at end of file
diff --git a/plugins/telemetry-otel/licenses/opentelemetry-api-incubator-1.40.0-alpha.jar.sha1 b/plugins/telemetry-otel/licenses/opentelemetry-api-incubator-1.40.0-alpha.jar.sha1
new file mode 100644
index 0000000000000..bcd7c886b5f6c
--- /dev/null
+++ b/plugins/telemetry-otel/licenses/opentelemetry-api-incubator-1.40.0-alpha.jar.sha1
@@ -0,0 +1 @@
+43115633361430a3c6aaa39fd78363014ac79270
\ No newline at end of file
diff --git a/plugins/telemetry-otel/licenses/opentelemetry-context-1.39.0.jar.sha1 b/plugins/telemetry-otel/licenses/opentelemetry-context-1.39.0.jar.sha1
deleted file mode 100644
index 115d4ccb1f34b..0000000000000
--- a/plugins/telemetry-otel/licenses/opentelemetry-context-1.39.0.jar.sha1
+++ /dev/null
@@ -1 +0,0 @@
-f0601fb1c06f661afeffbc73a1dbe29797b2f13b
\ No newline at end of file
diff --git a/plugins/telemetry-otel/licenses/opentelemetry-context-1.40.0.jar.sha1 b/plugins/telemetry-otel/licenses/opentelemetry-context-1.40.0.jar.sha1
new file mode 100644
index 0000000000000..9716ec518c886
--- /dev/null
+++ b/plugins/telemetry-otel/licenses/opentelemetry-context-1.40.0.jar.sha1
@@ -0,0 +1 @@
+bf1db0f288b9baaabdb439ab6179b673b751511e
\ No newline at end of file
diff --git a/plugins/telemetry-otel/licenses/opentelemetry-exporter-common-1.39.0.jar.sha1 b/plugins/telemetry-otel/licenses/opentelemetry-exporter-common-1.39.0.jar.sha1
deleted file mode 100644
index a10b92995becd..0000000000000
--- a/plugins/telemetry-otel/licenses/opentelemetry-exporter-common-1.39.0.jar.sha1
+++ /dev/null
@@ -1 +0,0 @@
-570d71e39e36fe2caad142557bde0c11fcdb3b92
\ No newline at end of file
diff --git a/plugins/telemetry-otel/licenses/opentelemetry-exporter-common-1.40.0.jar.sha1 b/plugins/telemetry-otel/licenses/opentelemetry-exporter-common-1.40.0.jar.sha1
new file mode 100644
index 0000000000000..c0e79b05aa675
--- /dev/null
+++ b/plugins/telemetry-otel/licenses/opentelemetry-exporter-common-1.40.0.jar.sha1
@@ -0,0 +1 @@
+b883b179c242a1761df2d408fe01ec41b17327a3
\ No newline at end of file
diff --git a/plugins/telemetry-otel/licenses/opentelemetry-exporter-logging-1.39.0.jar.sha1 b/plugins/telemetry-otel/licenses/opentelemetry-exporter-logging-1.39.0.jar.sha1
deleted file mode 100644
index f43393104296a..0000000000000
--- a/plugins/telemetry-otel/licenses/opentelemetry-exporter-logging-1.39.0.jar.sha1
+++ /dev/null
@@ -1 +0,0 @@
-f5b528f8d6f8531836eabba698979516964b24ed
\ No newline at end of file
diff --git a/plugins/telemetry-otel/licenses/opentelemetry-exporter-logging-1.40.0.jar.sha1 b/plugins/telemetry-otel/licenses/opentelemetry-exporter-logging-1.40.0.jar.sha1
new file mode 100644
index 0000000000000..1df0ad183c475
--- /dev/null
+++ b/plugins/telemetry-otel/licenses/opentelemetry-exporter-logging-1.40.0.jar.sha1
@@ -0,0 +1 @@
+a8c1f9b05ac9fb1259517cf53950ccecaf84ebe1
\ No newline at end of file
diff --git a/plugins/telemetry-otel/licenses/opentelemetry-exporter-otlp-1.39.0.jar.sha1 b/plugins/telemetry-otel/licenses/opentelemetry-exporter-otlp-1.39.0.jar.sha1
deleted file mode 100644
index 5adba2ba0f342..0000000000000
--- a/plugins/telemetry-otel/licenses/opentelemetry-exporter-otlp-1.39.0.jar.sha1
+++ /dev/null
@@ -1 +0,0 @@
-04fc0e4983253ea58430c3d24b6b3c5c95f84dc9
\ No newline at end of file
diff --git a/plugins/telemetry-otel/licenses/opentelemetry-exporter-otlp-1.40.0.jar.sha1 b/plugins/telemetry-otel/licenses/opentelemetry-exporter-otlp-1.40.0.jar.sha1
new file mode 100644
index 0000000000000..ebeb639a8459c
--- /dev/null
+++ b/plugins/telemetry-otel/licenses/opentelemetry-exporter-otlp-1.40.0.jar.sha1
@@ -0,0 +1 @@
+8d8b92bcdb0ace48fb5764cc1ad7a0de197d5b8c
\ No newline at end of file
diff --git a/plugins/telemetry-otel/licenses/opentelemetry-exporter-otlp-common-1.39.0.jar.sha1 b/plugins/telemetry-otel/licenses/opentelemetry-exporter-otlp-common-1.39.0.jar.sha1
deleted file mode 100644
index ea9c293f25025..0000000000000
--- a/plugins/telemetry-otel/licenses/opentelemetry-exporter-otlp-common-1.39.0.jar.sha1
+++ /dev/null
@@ -1 +0,0 @@
-a2b8571e36b11c3153d31ec87ec69cc168af8036
\ No newline at end of file
diff --git a/plugins/telemetry-otel/licenses/opentelemetry-exporter-otlp-common-1.40.0.jar.sha1 b/plugins/telemetry-otel/licenses/opentelemetry-exporter-otlp-common-1.40.0.jar.sha1
new file mode 100644
index 0000000000000..b630c808d4763
--- /dev/null
+++ b/plugins/telemetry-otel/licenses/opentelemetry-exporter-otlp-common-1.40.0.jar.sha1
@@ -0,0 +1 @@
+80fa10130cc7e7626e2581aa7c5871eab7381889
\ No newline at end of file
diff --git a/plugins/telemetry-otel/licenses/opentelemetry-exporter-sender-okhttp-1.39.0.jar.sha1 b/plugins/telemetry-otel/licenses/opentelemetry-exporter-sender-okhttp-1.39.0.jar.sha1
deleted file mode 100644
index dcf23f16ac89f..0000000000000
--- a/plugins/telemetry-otel/licenses/opentelemetry-exporter-sender-okhttp-1.39.0.jar.sha1
+++ /dev/null
@@ -1 +0,0 @@
-1a8947a2e28924ad9374e319150a23837926ca4b
\ No newline at end of file
diff --git a/plugins/telemetry-otel/licenses/opentelemetry-exporter-sender-okhttp-1.40.0.jar.sha1 b/plugins/telemetry-otel/licenses/opentelemetry-exporter-sender-okhttp-1.40.0.jar.sha1
new file mode 100644
index 0000000000000..eda90dc825e6f
--- /dev/null
+++ b/plugins/telemetry-otel/licenses/opentelemetry-exporter-sender-okhttp-1.40.0.jar.sha1
@@ -0,0 +1 @@
+006dcdbf8eb911ad4d11c54fa824f5a97f582850
\ No newline at end of file
diff --git a/plugins/telemetry-otel/licenses/opentelemetry-sdk-1.39.0.jar.sha1 b/plugins/telemetry-otel/licenses/opentelemetry-sdk-1.39.0.jar.sha1
deleted file mode 100644
index f603af04d8012..0000000000000
--- a/plugins/telemetry-otel/licenses/opentelemetry-sdk-1.39.0.jar.sha1
+++ /dev/null
@@ -1 +0,0 @@
-ba9afdf3ef1ea51e42999fd68c959e3ceb219399
\ No newline at end of file
diff --git a/plugins/telemetry-otel/licenses/opentelemetry-sdk-1.40.0.jar.sha1 b/plugins/telemetry-otel/licenses/opentelemetry-sdk-1.40.0.jar.sha1
new file mode 100644
index 0000000000000..cdd7dc6551b33
--- /dev/null
+++ b/plugins/telemetry-otel/licenses/opentelemetry-sdk-1.40.0.jar.sha1
@@ -0,0 +1 @@
+59f260c5412b79a5a40c7d433600248727cd195a
\ No newline at end of file
diff --git a/plugins/telemetry-otel/licenses/opentelemetry-sdk-common-1.39.0.jar.sha1 b/plugins/telemetry-otel/licenses/opentelemetry-sdk-common-1.39.0.jar.sha1
deleted file mode 100644
index f9419f6ccfbee..0000000000000
--- a/plugins/telemetry-otel/licenses/opentelemetry-sdk-common-1.39.0.jar.sha1
+++ /dev/null
@@ -1 +0,0 @@
-fb8168627bf0059445f61081eaa47c4ab787fc2e
\ No newline at end of file
diff --git a/plugins/telemetry-otel/licenses/opentelemetry-sdk-common-1.40.0.jar.sha1 b/plugins/telemetry-otel/licenses/opentelemetry-sdk-common-1.40.0.jar.sha1
new file mode 100644
index 0000000000000..668291498bbae
--- /dev/null
+++ b/plugins/telemetry-otel/licenses/opentelemetry-sdk-common-1.40.0.jar.sha1
@@ -0,0 +1 @@
+7042214012232a5d6a251aca4aa5932014a4946b
\ No newline at end of file
diff --git a/plugins/telemetry-otel/licenses/opentelemetry-sdk-logs-1.39.0.jar.sha1 b/plugins/telemetry-otel/licenses/opentelemetry-sdk-logs-1.39.0.jar.sha1
deleted file mode 100644
index 63269f239eacd..0000000000000
--- a/plugins/telemetry-otel/licenses/opentelemetry-sdk-logs-1.39.0.jar.sha1
+++ /dev/null
@@ -1 +0,0 @@
-b6b45155399bc9fa563945f3e3a77416d7165948
\ No newline at end of file
diff --git a/plugins/telemetry-otel/licenses/opentelemetry-sdk-logs-1.40.0.jar.sha1 b/plugins/telemetry-otel/licenses/opentelemetry-sdk-logs-1.40.0.jar.sha1
new file mode 100644
index 0000000000000..74f0786e21954
--- /dev/null
+++ b/plugins/telemetry-otel/licenses/opentelemetry-sdk-logs-1.40.0.jar.sha1
@@ -0,0 +1 @@
+1c6b884d65f79d40429263ac0ab7ed1422237837
\ No newline at end of file
diff --git a/plugins/telemetry-otel/licenses/opentelemetry-sdk-metrics-1.39.0.jar.sha1 b/plugins/telemetry-otel/licenses/opentelemetry-sdk-metrics-1.39.0.jar.sha1
deleted file mode 100644
index f18c8259c1adc..0000000000000
--- a/plugins/telemetry-otel/licenses/opentelemetry-sdk-metrics-1.39.0.jar.sha1
+++ /dev/null
@@ -1 +0,0 @@
-522d46926cc06a4c18829da7e4c4340bdf5673c3
\ No newline at end of file
diff --git a/plugins/telemetry-otel/licenses/opentelemetry-sdk-metrics-1.40.0.jar.sha1 b/plugins/telemetry-otel/licenses/opentelemetry-sdk-metrics-1.40.0.jar.sha1
new file mode 100644
index 0000000000000..23ef1bf6e6b2c
--- /dev/null
+++ b/plugins/telemetry-otel/licenses/opentelemetry-sdk-metrics-1.40.0.jar.sha1
@@ -0,0 +1 @@
+a1c9b33a8660ace82aecb7f1c7ea50093dc87f0a
\ No newline at end of file
diff --git a/plugins/telemetry-otel/licenses/opentelemetry-sdk-trace-1.39.0.jar.sha1 b/plugins/telemetry-otel/licenses/opentelemetry-sdk-trace-1.39.0.jar.sha1
deleted file mode 100644
index 03b81424f46d5..0000000000000
--- a/plugins/telemetry-otel/licenses/opentelemetry-sdk-trace-1.39.0.jar.sha1
+++ /dev/null
@@ -1 +0,0 @@
-0b72722a5bbea5f46319bf08b2caed5b8f987a92
\ No newline at end of file
diff --git a/plugins/telemetry-otel/licenses/opentelemetry-sdk-trace-1.40.0.jar.sha1 b/plugins/telemetry-otel/licenses/opentelemetry-sdk-trace-1.40.0.jar.sha1
new file mode 100644
index 0000000000000..aea753f0df18b
--- /dev/null
+++ b/plugins/telemetry-otel/licenses/opentelemetry-sdk-trace-1.40.0.jar.sha1
@@ -0,0 +1 @@
+5145f077bf2821ad243617baf8c1810d29af8566
\ No newline at end of file
diff --git a/plugins/telemetry-otel/licenses/opentelemetry-semconv-1.25.0-alpha.jar.sha1 b/plugins/telemetry-otel/licenses/opentelemetry-semconv-1.25.0-alpha.jar.sha1
deleted file mode 100644
index 7cf8e7e8ede28..0000000000000
--- a/plugins/telemetry-otel/licenses/opentelemetry-semconv-1.25.0-alpha.jar.sha1
+++ /dev/null
@@ -1 +0,0 @@
-76b3d4ca0a8f20b27c1590ceece54f0c7fb5857e
\ No newline at end of file
diff --git a/plugins/telemetry-otel/licenses/opentelemetry-semconv-1.26.0-alpha.jar.sha1 b/plugins/telemetry-otel/licenses/opentelemetry-semconv-1.26.0-alpha.jar.sha1
new file mode 100644
index 0000000000000..7124dcb31da3f
--- /dev/null
+++ b/plugins/telemetry-otel/licenses/opentelemetry-semconv-1.26.0-alpha.jar.sha1
@@ -0,0 +1 @@
+955de1d2de4d3d2bb6ba2498f19c9a06da2f3956
\ No newline at end of file

From a04cf24923e7379d94314ae4f03e2bddfddf1765 Mon Sep 17 00:00:00 2001
From: Andriy Redko
Date: Tue, 9 Jul 2024 17:07:54 -0400
Subject: [PATCH 046/167] Bump jackson from 2.17.1 to 2.17.2 (#14687)

Signed-off-by: Andriy Redko
---
 CHANGELOG.md                                                  | 1 +
 buildSrc/version.properties                                   | 4 ++--
 client/sniffer/licenses/jackson-core-2.17.1.jar.sha1          | 1 -
 client/sniffer/licenses/jackson-core-2.17.2.jar.sha1          | 1 +
 .../upgrade-cli/licenses/jackson-annotations-2.17.1.jar.sha1  | 1 -
 .../upgrade-cli/licenses/jackson-annotations-2.17.2.jar.sha1  | 1 +
 .../upgrade-cli/licenses/jackson-databind-2.17.1.jar.sha1     | 1 -
 .../upgrade-cli/licenses/jackson-databind-2.17.2.jar.sha1     | 1 +
 libs/core/licenses/jackson-core-2.17.1.jar.sha1               | 1 -
 libs/core/licenses/jackson-core-2.17.2.jar.sha1               | 1 +
 libs/x-content/licenses/jackson-core-2.17.1.jar.sha1          | 1 -
 libs/x-content/licenses/jackson-core-2.17.2.jar.sha1          | 1 +
 .../licenses/jackson-dataformat-cbor-2.17.1.jar.sha1          | 1 -
 .../licenses/jackson-dataformat-cbor-2.17.2.jar.sha1          | 1 +
 .../licenses/jackson-dataformat-smile-2.17.1.jar.sha1         | 1 -
 .../licenses/jackson-dataformat-smile-2.17.2.jar.sha1         | 1 +
 .../licenses/jackson-dataformat-yaml-2.17.1.jar.sha1          | 1 -
 .../licenses/jackson-dataformat-yaml-2.17.2.jar.sha1          | 1 +
 .../ingest-geoip/licenses/jackson-annotations-2.17.1.jar.sha1 | 1 -
 .../ingest-geoip/licenses/jackson-annotations-2.17.2.jar.sha1 | 1 +
 .../ingest-geoip/licenses/jackson-databind-2.17.1.jar.sha1    | 1 -
 .../ingest-geoip/licenses/jackson-databind-2.17.2.jar.sha1    | 1 +
 .../crypto-kms/licenses/jackson-annotations-2.17.1.jar.sha1   | 1 -
 .../crypto-kms/licenses/jackson-annotations-2.17.2.jar.sha1   | 1 +
 plugins/crypto-kms/licenses/jackson-databind-2.17.1.jar.sha1  | 1 -
 plugins/crypto-kms/licenses/jackson-databind-2.17.2.jar.sha1  | 1 +
 .../licenses/jackson-annotations-2.17.1.jar.sha1              | 1 -
 .../licenses/jackson-annotations-2.17.2.jar.sha1              | 1 +
 .../discovery-ec2/licenses/jackson-databind-2.17.1.jar.sha1   | 1 -
 .../discovery-ec2/licenses/jackson-databind-2.17.2.jar.sha1   | 1 +
 .../licenses/jackson-annotations-2.17.1.jar.sha1              | 1 -
 .../licenses/jackson-annotations-2.17.2.jar.sha1              | 1 +
 .../licenses/jackson-databind-2.17.1.jar.sha1                 | 1 -
 .../licenses/jackson-databind-2.17.2.jar.sha1                 | 1 +
 .../licenses/jackson-dataformat-xml-2.17.1.jar.sha1           | 1 -
 .../licenses/jackson-dataformat-xml-2.17.2.jar.sha1           | 1 +
 .../licenses/jackson-datatype-jsr310-2.17.1.jar.sha1          | 1 -
 .../licenses/jackson-datatype-jsr310-2.17.2.jar.sha1          | 1 +
 .../licenses/jackson-module-jaxb-annotations-2.17.1.jar.sha1  | 1 -
 .../licenses/jackson-module-jaxb-annotations-2.17.2.jar.sha1  | 1 +
 .../licenses/jackson-annotations-2.17.1.jar.sha1              | 1 -
 .../licenses/jackson-annotations-2.17.2.jar.sha1              | 1 +
 .../repository-s3/licenses/jackson-databind-2.17.1.jar.sha1   | 1 -
 .../repository-s3/licenses/jackson-databind-2.17.2.jar.sha1   | 1 +
 server/licenses/jackson-core-2.17.1.jar.sha1                  | 1 -
 server/licenses/jackson-core-2.17.2.jar.sha1                  | 1 +
 server/licenses/jackson-dataformat-cbor-2.17.1.jar.sha1       | 1 -
 server/licenses/jackson-dataformat-cbor-2.17.2.jar.sha1       | 1 +
 server/licenses/jackson-dataformat-smile-2.17.1.jar.sha1      | 1 -
 server/licenses/jackson-dataformat-smile-2.17.2.jar.sha1      | 1 +
 server/licenses/jackson-dataformat-yaml-2.17.1.jar.sha1       | 1 -
 server/licenses/jackson-dataformat-yaml-2.17.2.jar.sha1       | 1 +
 52 files changed, 28 insertions(+), 27 deletions(-)
 delete mode 100644 client/sniffer/licenses/jackson-core-2.17.1.jar.sha1
 create mode 100644 client/sniffer/licenses/jackson-core-2.17.2.jar.sha1
 delete mode 100644 distribution/tools/upgrade-cli/licenses/jackson-annotations-2.17.1.jar.sha1
 create mode 100644 distribution/tools/upgrade-cli/licenses/jackson-annotations-2.17.2.jar.sha1
 delete mode 100644 distribution/tools/upgrade-cli/licenses/jackson-databind-2.17.1.jar.sha1
 create mode 100644 distribution/tools/upgrade-cli/licenses/jackson-databind-2.17.2.jar.sha1
 delete mode 100644 libs/core/licenses/jackson-core-2.17.1.jar.sha1
 create mode 100644 libs/core/licenses/jackson-core-2.17.2.jar.sha1
 delete mode 100644 libs/x-content/licenses/jackson-core-2.17.1.jar.sha1
 create mode 100644 libs/x-content/licenses/jackson-core-2.17.2.jar.sha1
 delete mode 100644 libs/x-content/licenses/jackson-dataformat-cbor-2.17.1.jar.sha1
 create mode 100644 libs/x-content/licenses/jackson-dataformat-cbor-2.17.2.jar.sha1
 delete mode 100644 libs/x-content/licenses/jackson-dataformat-smile-2.17.1.jar.sha1
 create mode 100644 libs/x-content/licenses/jackson-dataformat-smile-2.17.2.jar.sha1
 delete mode 100644 libs/x-content/licenses/jackson-dataformat-yaml-2.17.1.jar.sha1
 create mode 100644 libs/x-content/licenses/jackson-dataformat-yaml-2.17.2.jar.sha1
 delete mode 100644 modules/ingest-geoip/licenses/jackson-annotations-2.17.1.jar.sha1
 create mode 100644 modules/ingest-geoip/licenses/jackson-annotations-2.17.2.jar.sha1
 delete mode 100644 modules/ingest-geoip/licenses/jackson-databind-2.17.1.jar.sha1
 create mode 100644 modules/ingest-geoip/licenses/jackson-databind-2.17.2.jar.sha1
 delete mode 100644 plugins/crypto-kms/licenses/jackson-annotations-2.17.1.jar.sha1
 create mode 100644 plugins/crypto-kms/licenses/jackson-annotations-2.17.2.jar.sha1
 delete mode 100644 plugins/crypto-kms/licenses/jackson-databind-2.17.1.jar.sha1
 create mode 100644 plugins/crypto-kms/licenses/jackson-databind-2.17.2.jar.sha1
 delete mode 100644 plugins/discovery-ec2/licenses/jackson-annotations-2.17.1.jar.sha1
 create mode 100644 plugins/discovery-ec2/licenses/jackson-annotations-2.17.2.jar.sha1
 delete mode 100644 plugins/discovery-ec2/licenses/jackson-databind-2.17.1.jar.sha1
 create mode 100644 plugins/discovery-ec2/licenses/jackson-databind-2.17.2.jar.sha1
 delete mode 100644 plugins/repository-azure/licenses/jackson-annotations-2.17.1.jar.sha1
 create mode 100644 plugins/repository-azure/licenses/jackson-annotations-2.17.2.jar.sha1
 delete mode 100644 plugins/repository-azure/licenses/jackson-databind-2.17.1.jar.sha1
 create mode 100644 plugins/repository-azure/licenses/jackson-databind-2.17.2.jar.sha1
 delete mode 100644 plugins/repository-azure/licenses/jackson-dataformat-xml-2.17.1.jar.sha1
 create mode 100644 plugins/repository-azure/licenses/jackson-dataformat-xml-2.17.2.jar.sha1
 delete mode 100644 plugins/repository-azure/licenses/jackson-datatype-jsr310-2.17.1.jar.sha1
 create mode 100644 plugins/repository-azure/licenses/jackson-datatype-jsr310-2.17.2.jar.sha1
 delete mode 100644 plugins/repository-azure/licenses/jackson-module-jaxb-annotations-2.17.1.jar.sha1
 create mode 100644 plugins/repository-azure/licenses/jackson-module-jaxb-annotations-2.17.2.jar.sha1
 delete mode 100644 plugins/repository-s3/licenses/jackson-annotations-2.17.1.jar.sha1
 create mode 100644 plugins/repository-s3/licenses/jackson-annotations-2.17.2.jar.sha1
 delete mode 100644 plugins/repository-s3/licenses/jackson-databind-2.17.1.jar.sha1
 create mode 100644 plugins/repository-s3/licenses/jackson-databind-2.17.2.jar.sha1
 delete mode 100644 server/licenses/jackson-core-2.17.1.jar.sha1
 create mode 100644 server/licenses/jackson-core-2.17.2.jar.sha1
 delete mode 100644 server/licenses/jackson-dataformat-cbor-2.17.1.jar.sha1
 create mode 100644 server/licenses/jackson-dataformat-cbor-2.17.2.jar.sha1
 delete mode 100644 server/licenses/jackson-dataformat-smile-2.17.1.jar.sha1
 create mode 100644 server/licenses/jackson-dataformat-smile-2.17.2.jar.sha1
 delete mode 100644 server/licenses/jackson-dataformat-yaml-2.17.1.jar.sha1
 create mode 100644 server/licenses/jackson-dataformat-yaml-2.17.2.jar.sha1

diff --git a/CHANGELOG.md b/CHANGELOG.md
index 1807fb6d5e00c..fe26423cde573 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -33,6 +33,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
 - Bump `com.microsoft.azure:msal4j` from 1.15.1 to 1.16.0 ([#14610](https://github.com/opensearch-project/OpenSearch/pull/14610))
 - Bump `com.github.spullara.mustache.java:compiler` from 0.9.13 to 0.9.14 ([#14672](https://github.com/opensearch-project/OpenSearch/pull/14672))
 - Bump `net.minidev:accessors-smart` from 2.5.0 to 2.5.1 ([#14673](https://github.com/opensearch-project/OpenSearch/pull/14673))
+- Bump `jackson` from 2.17.1 to 2.17.2 ([#14687](https://github.com/opensearch-project/OpenSearch/pull/14687))
 
 ### Changed
 - [Tiered Caching] Move query recomputation logic outside write lock ([#14187](https://github.com/opensearch-project/OpenSearch/pull/14187))
diff --git a/buildSrc/version.properties b/buildSrc/version.properties
index a04fb68f47f55..d62f8c51e616b 100644
--- a/buildSrc/version.properties
+++ b/buildSrc/version.properties
@@ -7,8 +7,8 @@ bundled_jdk = 21.0.3+9
 # optional dependencies
 spatial4j = 0.7
 jts = 1.15.0
-jackson = 2.17.1
-jackson_databind = 2.17.1
+jackson = 2.17.2
+jackson_databind = 2.17.2
 snakeyaml = 2.1
 icu4j = 70.1
 supercsv = 2.4.0
diff --git a/client/sniffer/licenses/jackson-core-2.17.1.jar.sha1 b/client/sniffer/licenses/jackson-core-2.17.1.jar.sha1
deleted file mode 100644
index 82dab5981e652..0000000000000
--- a/client/sniffer/licenses/jackson-core-2.17.1.jar.sha1
+++ /dev/null
@@ -1 +0,0 @@
-5e52a11644cd59a28ef79f02bddc2cc3bab45edb
\ No newline at end of file
diff --git a/client/sniffer/licenses/jackson-core-2.17.2.jar.sha1 b/client/sniffer/licenses/jackson-core-2.17.2.jar.sha1
new file mode 100644
index 0000000000000..e15f2340980bc
--- /dev/null
+++
+++ b/client/sniffer/licenses/jackson-core-2.17.2.jar.sha1
@@ -0,0 +1 @@
+969a35cb35c86512acbadcdbbbfb044c877db814
\ No newline at end of file
diff --git a/distribution/tools/upgrade-cli/licenses/jackson-annotations-2.17.1.jar.sha1 b/distribution/tools/upgrade-cli/licenses/jackson-annotations-2.17.1.jar.sha1
deleted file mode 100644
index 4ceead1b7ae4f..0000000000000
--- a/distribution/tools/upgrade-cli/licenses/jackson-annotations-2.17.1.jar.sha1
+++ /dev/null
@@ -1 +0,0 @@
-fca7ef6192c9ad05d07bc50da991bf937a84af3a
\ No newline at end of file
diff --git a/distribution/tools/upgrade-cli/licenses/jackson-annotations-2.17.2.jar.sha1 b/distribution/tools/upgrade-cli/licenses/jackson-annotations-2.17.2.jar.sha1
new file mode 100644
index 0000000000000..411e1d62459fd
--- /dev/null
+++ b/distribution/tools/upgrade-cli/licenses/jackson-annotations-2.17.2.jar.sha1
@@ -0,0 +1 @@
+147b7b9412ffff24339f8aba080b292448e08698
\ No newline at end of file
diff --git a/distribution/tools/upgrade-cli/licenses/jackson-databind-2.17.1.jar.sha1 b/distribution/tools/upgrade-cli/licenses/jackson-databind-2.17.1.jar.sha1
deleted file mode 100644
index 7cf1ac1b60301..0000000000000
--- a/distribution/tools/upgrade-cli/licenses/jackson-databind-2.17.1.jar.sha1
+++ /dev/null
@@ -1 +0,0 @@
-0524dcbcccdde7d45a679dfc333e4763feb09079
\ No newline at end of file
diff --git a/distribution/tools/upgrade-cli/licenses/jackson-databind-2.17.2.jar.sha1 b/distribution/tools/upgrade-cli/licenses/jackson-databind-2.17.2.jar.sha1
new file mode 100644
index 0000000000000..f2b4dbdc5decb
--- /dev/null
+++ b/distribution/tools/upgrade-cli/licenses/jackson-databind-2.17.2.jar.sha1
@@ -0,0 +1 @@
+e6deb029e5901e027c129341fac39e515066b68c
\ No newline at end of file
diff --git a/libs/core/licenses/jackson-core-2.17.1.jar.sha1 b/libs/core/licenses/jackson-core-2.17.1.jar.sha1
deleted file mode 100644
index 82dab5981e652..0000000000000
--- a/libs/core/licenses/jackson-core-2.17.1.jar.sha1
+++ /dev/null
@@ -1 +0,0 @@
-5e52a11644cd59a28ef79f02bddc2cc3bab45edb
\ No newline at end of file
diff --git a/libs/core/licenses/jackson-core-2.17.2.jar.sha1 b/libs/core/licenses/jackson-core-2.17.2.jar.sha1
new file mode 100644
index 0000000000000..e15f2340980bc
--- /dev/null
+++ b/libs/core/licenses/jackson-core-2.17.2.jar.sha1
@@ -0,0 +1 @@
+969a35cb35c86512acbadcdbbbfb044c877db814
\ No newline at end of file
diff --git a/libs/x-content/licenses/jackson-core-2.17.1.jar.sha1 b/libs/x-content/licenses/jackson-core-2.17.1.jar.sha1
deleted file mode 100644
index 82dab5981e652..0000000000000
--- a/libs/x-content/licenses/jackson-core-2.17.1.jar.sha1
+++ /dev/null
@@ -1 +0,0 @@
-5e52a11644cd59a28ef79f02bddc2cc3bab45edb
\ No newline at end of file
diff --git a/libs/x-content/licenses/jackson-core-2.17.2.jar.sha1 b/libs/x-content/licenses/jackson-core-2.17.2.jar.sha1
new file mode 100644
index 0000000000000..e15f2340980bc
--- /dev/null
+++ b/libs/x-content/licenses/jackson-core-2.17.2.jar.sha1
@@ -0,0 +1 @@
+969a35cb35c86512acbadcdbbbfb044c877db814
\ No newline at end of file
diff --git a/libs/x-content/licenses/jackson-dataformat-cbor-2.17.1.jar.sha1 b/libs/x-content/licenses/jackson-dataformat-cbor-2.17.1.jar.sha1
deleted file mode 100644
index ff42ed1f92cfe..0000000000000
--- a/libs/x-content/licenses/jackson-dataformat-cbor-2.17.1.jar.sha1
+++ /dev/null
@@ -1 +0,0 @@
-ba5d8e6ecc62aa0e49c0ce935b8696352dbebc71
\ No newline at end of file
diff --git a/libs/x-content/licenses/jackson-dataformat-cbor-2.17.2.jar.sha1 b/libs/x-content/licenses/jackson-dataformat-cbor-2.17.2.jar.sha1
new file mode 100644
index 0000000000000..069e088413ef1
--- /dev/null
+++ b/libs/x-content/licenses/jackson-dataformat-cbor-2.17.2.jar.sha1
@@ -0,0 +1 @@
+57fa7c1b5104bbc4599278d13933a937ee058e68
\ No newline at end of file
diff --git a/libs/x-content/licenses/jackson-dataformat-smile-2.17.1.jar.sha1 b/libs/x-content/licenses/jackson-dataformat-smile-2.17.1.jar.sha1
deleted file mode 100644
index 47d19067cf2a6..0000000000000
--- a/libs/x-content/licenses/jackson-dataformat-smile-2.17.1.jar.sha1
+++ /dev/null
@@ -1 +0,0 @@
-89683ac4f0a0c2c4f69ea56b90480ed40266dac8
\ No newline at end of file
diff --git a/libs/x-content/licenses/jackson-dataformat-smile-2.17.2.jar.sha1 b/libs/x-content/licenses/jackson-dataformat-smile-2.17.2.jar.sha1
new file mode 100644
index 0000000000000..28d8c8382aed3
--- /dev/null
+++ b/libs/x-content/licenses/jackson-dataformat-smile-2.17.2.jar.sha1
@@ -0,0 +1 @@
+20e956b9b6f67138edd39fab7a506ded19638bcb
\ No newline at end of file
diff --git a/libs/x-content/licenses/jackson-dataformat-yaml-2.17.1.jar.sha1 b/libs/x-content/licenses/jackson-dataformat-yaml-2.17.1.jar.sha1
deleted file mode 100644
index 7946e994c7104..0000000000000
--- a/libs/x-content/licenses/jackson-dataformat-yaml-2.17.1.jar.sha1
+++ /dev/null
@@ -1 +0,0 @@
-b4c7b8a9ea3f398116a75c146b982b22afebc4ee
\ No newline at end of file
diff --git a/libs/x-content/licenses/jackson-dataformat-yaml-2.17.2.jar.sha1 b/libs/x-content/licenses/jackson-dataformat-yaml-2.17.2.jar.sha1
new file mode 100644
index 0000000000000..f3e25b7eb253c
--- /dev/null
+++ b/libs/x-content/licenses/jackson-dataformat-yaml-2.17.2.jar.sha1
@@ -0,0 +1 @@
+78d2c73dbec62044d7cf3b544b2e0d24a1a093b0
\ No newline at end of file
diff --git a/modules/ingest-geoip/licenses/jackson-annotations-2.17.1.jar.sha1 b/modules/ingest-geoip/licenses/jackson-annotations-2.17.1.jar.sha1
deleted file mode 100644
index 4ceead1b7ae4f..0000000000000
--- a/modules/ingest-geoip/licenses/jackson-annotations-2.17.1.jar.sha1
+++ /dev/null
@@ -1 +0,0 @@
-fca7ef6192c9ad05d07bc50da991bf937a84af3a
\ No newline at end of file
diff --git a/modules/ingest-geoip/licenses/jackson-annotations-2.17.2.jar.sha1 b/modules/ingest-geoip/licenses/jackson-annotations-2.17.2.jar.sha1
new file mode 100644
index 0000000000000..411e1d62459fd
--- /dev/null
+++ b/modules/ingest-geoip/licenses/jackson-annotations-2.17.2.jar.sha1
@@ -0,0 +1 @@
+147b7b9412ffff24339f8aba080b292448e08698
\ No newline at end of file
diff --git a/modules/ingest-geoip/licenses/jackson-databind-2.17.1.jar.sha1 b/modules/ingest-geoip/licenses/jackson-databind-2.17.1.jar.sha1
deleted file mode 100644
index 7cf1ac1b60301..0000000000000
--- a/modules/ingest-geoip/licenses/jackson-databind-2.17.1.jar.sha1
+++ /dev/null
@@ -1 +0,0 @@
-0524dcbcccdde7d45a679dfc333e4763feb09079
\ No newline at end of file
diff --git a/modules/ingest-geoip/licenses/jackson-databind-2.17.2.jar.sha1 b/modules/ingest-geoip/licenses/jackson-databind-2.17.2.jar.sha1
new file mode 100644
index 0000000000000..f2b4dbdc5decb
--- /dev/null
+++ b/modules/ingest-geoip/licenses/jackson-databind-2.17.2.jar.sha1
@@ -0,0 +1 @@
+e6deb029e5901e027c129341fac39e515066b68c
\ No newline at end of file
diff --git a/plugins/crypto-kms/licenses/jackson-annotations-2.17.1.jar.sha1 b/plugins/crypto-kms/licenses/jackson-annotations-2.17.1.jar.sha1
deleted file mode 100644
index 4ceead1b7ae4f..0000000000000
--- a/plugins/crypto-kms/licenses/jackson-annotations-2.17.1.jar.sha1
+++ /dev/null
@@ -1 +0,0 @@
-fca7ef6192c9ad05d07bc50da991bf937a84af3a
\ No newline at end of file
diff --git a/plugins/crypto-kms/licenses/jackson-annotations-2.17.2.jar.sha1 b/plugins/crypto-kms/licenses/jackson-annotations-2.17.2.jar.sha1
new file mode 100644
index 0000000000000..411e1d62459fd
--- /dev/null
+++ b/plugins/crypto-kms/licenses/jackson-annotations-2.17.2.jar.sha1
@@ -0,0 +1 @@
+147b7b9412ffff24339f8aba080b292448e08698
\ No newline at end of file
diff --git a/plugins/crypto-kms/licenses/jackson-databind-2.17.1.jar.sha1 b/plugins/crypto-kms/licenses/jackson-databind-2.17.1.jar.sha1
deleted file mode 100644
index 7cf1ac1b60301..0000000000000
--- a/plugins/crypto-kms/licenses/jackson-databind-2.17.1.jar.sha1
+++ /dev/null
@@ -1 +0,0 @@
-0524dcbcccdde7d45a679dfc333e4763feb09079
\ No newline at end of file
diff --git a/plugins/crypto-kms/licenses/jackson-databind-2.17.2.jar.sha1 b/plugins/crypto-kms/licenses/jackson-databind-2.17.2.jar.sha1
new file mode 100644
index 0000000000000..f2b4dbdc5decb
--- /dev/null
+++ b/plugins/crypto-kms/licenses/jackson-databind-2.17.2.jar.sha1
@@ -0,0 +1 @@
+e6deb029e5901e027c129341fac39e515066b68c
\ No newline at end of file
diff --git a/plugins/discovery-ec2/licenses/jackson-annotations-2.17.1.jar.sha1 b/plugins/discovery-ec2/licenses/jackson-annotations-2.17.1.jar.sha1
deleted file mode 100644
index 4ceead1b7ae4f..0000000000000
--- a/plugins/discovery-ec2/licenses/jackson-annotations-2.17.1.jar.sha1
+++ /dev/null
@@ -1 +0,0 @@
-fca7ef6192c9ad05d07bc50da991bf937a84af3a
\ No newline at end of file
diff --git a/plugins/discovery-ec2/licenses/jackson-annotations-2.17.2.jar.sha1 b/plugins/discovery-ec2/licenses/jackson-annotations-2.17.2.jar.sha1
new file mode 100644
index 0000000000000..411e1d62459fd
--- /dev/null
+++ b/plugins/discovery-ec2/licenses/jackson-annotations-2.17.2.jar.sha1
@@ -0,0 +1 @@
+147b7b9412ffff24339f8aba080b292448e08698
\ No newline at end of file
diff --git a/plugins/discovery-ec2/licenses/jackson-databind-2.17.1.jar.sha1 b/plugins/discovery-ec2/licenses/jackson-databind-2.17.1.jar.sha1
deleted file mode 100644
index 7cf1ac1b60301..0000000000000
--- a/plugins/discovery-ec2/licenses/jackson-databind-2.17.1.jar.sha1
+++ /dev/null
@@ -1 +0,0 @@
-0524dcbcccdde7d45a679dfc333e4763feb09079
\ No newline at end of file
diff --git a/plugins/discovery-ec2/licenses/jackson-databind-2.17.2.jar.sha1 b/plugins/discovery-ec2/licenses/jackson-databind-2.17.2.jar.sha1
new file mode 100644
index 0000000000000..f2b4dbdc5decb
--- /dev/null
+++ b/plugins/discovery-ec2/licenses/jackson-databind-2.17.2.jar.sha1
@@ -0,0 +1 @@
+e6deb029e5901e027c129341fac39e515066b68c
\ No newline at end of file
diff --git a/plugins/repository-azure/licenses/jackson-annotations-2.17.1.jar.sha1 b/plugins/repository-azure/licenses/jackson-annotations-2.17.1.jar.sha1
deleted file mode 100644
index 4ceead1b7ae4f..0000000000000
--- a/plugins/repository-azure/licenses/jackson-annotations-2.17.1.jar.sha1
+++ /dev/null
@@ -1 +0,0 @@
-fca7ef6192c9ad05d07bc50da991bf937a84af3a
\ No newline at end of file
diff --git a/plugins/repository-azure/licenses/jackson-annotations-2.17.2.jar.sha1 b/plugins/repository-azure/licenses/jackson-annotations-2.17.2.jar.sha1
new file mode 100644
index 0000000000000..411e1d62459fd
--- /dev/null
+++ b/plugins/repository-azure/licenses/jackson-annotations-2.17.2.jar.sha1
@@ -0,0 +1 @@
+147b7b9412ffff24339f8aba080b292448e08698
\ No newline at end of file
diff --git a/plugins/repository-azure/licenses/jackson-databind-2.17.1.jar.sha1 b/plugins/repository-azure/licenses/jackson-databind-2.17.1.jar.sha1
deleted file mode 100644
index 7cf1ac1b60301..0000000000000
--- a/plugins/repository-azure/licenses/jackson-databind-2.17.1.jar.sha1
+++ /dev/null
@@ -1 +0,0 @@
-0524dcbcccdde7d45a679dfc333e4763feb09079
\ No newline at end of file
diff --git a/plugins/repository-azure/licenses/jackson-databind-2.17.2.jar.sha1 b/plugins/repository-azure/licenses/jackson-databind-2.17.2.jar.sha1
new file mode 100644
index 0000000000000..f2b4dbdc5decb
--- /dev/null
+++ b/plugins/repository-azure/licenses/jackson-databind-2.17.2.jar.sha1
@@ -0,0 +1 @@
+e6deb029e5901e027c129341fac39e515066b68c
\ No newline at end of file
diff --git a/plugins/repository-azure/licenses/jackson-dataformat-xml-2.17.1.jar.sha1 b/plugins/repository-azure/licenses/jackson-dataformat-xml-2.17.1.jar.sha1
deleted file mode 100644
index 3915ab2616beb..0000000000000
--- a/plugins/repository-azure/licenses/jackson-dataformat-xml-2.17.1.jar.sha1
+++ /dev/null
@@ -1 +0,0 @@
-e6a168dba62aa63743b9e2b83f4e0f0dfdc143d3
\ No newline at end of file
diff --git a/plugins/repository-azure/licenses/jackson-dataformat-xml-2.17.2.jar.sha1 b/plugins/repository-azure/licenses/jackson-dataformat-xml-2.17.2.jar.sha1
new file mode 100644
index 0000000000000..f9c31c168926d
--- /dev/null
+++ b/plugins/repository-azure/licenses/jackson-dataformat-xml-2.17.2.jar.sha1
@@ -0,0 +1 @@
+ad58f5bd089e743ac6e5999b2d1e3cf8515cea9a
\ No newline at end of file
diff --git a/plugins/repository-azure/licenses/jackson-datatype-jsr310-2.17.1.jar.sha1 b/plugins/repository-azure/licenses/jackson-datatype-jsr310-2.17.1.jar.sha1
deleted file mode 100644
index db26ebbf738f7..0000000000000
--- a/plugins/repository-azure/licenses/jackson-datatype-jsr310-2.17.1.jar.sha1
+++ /dev/null
@@ -1 +0,0 @@
-0969b0c3cb8c75d759e9a6c585c44c9b9f3a4f75
\ No newline at end of file
diff --git a/plugins/repository-azure/licenses/jackson-datatype-jsr310-2.17.2.jar.sha1 b/plugins/repository-azure/licenses/jackson-datatype-jsr310-2.17.2.jar.sha1
new file mode 100644
index 0000000000000..a61bf643d69e6
--- /dev/null
+++ b/plugins/repository-azure/licenses/jackson-datatype-jsr310-2.17.2.jar.sha1
@@ -0,0 +1 @@
+267b85e9ba2892a37be6d80aa9ca1438a0d8c210
\ No newline at end of file
diff --git a/plugins/repository-azure/licenses/jackson-module-jaxb-annotations-2.17.1.jar.sha1 b/plugins/repository-azure/licenses/jackson-module-jaxb-annotations-2.17.1.jar.sha1
deleted file mode 100644
index bb8ecfe34d295..0000000000000
--- a/plugins/repository-azure/licenses/jackson-module-jaxb-annotations-2.17.1.jar.sha1
+++ /dev/null
@@ -1 +0,0 @@
-f77e7bf0e64dfcf53bfdcf2764ad7ab92b78a4da
\ No newline at end of file
diff --git a/plugins/repository-azure/licenses/jackson-module-jaxb-annotations-2.17.2.jar.sha1 b/plugins/repository-azure/licenses/jackson-module-jaxb-annotations-2.17.2.jar.sha1
new file mode 100644
index 0000000000000..d9d7975146c22
--- /dev/null
+++ b/plugins/repository-azure/licenses/jackson-module-jaxb-annotations-2.17.2.jar.sha1
@@ -0,0 +1 @@
+c2978b818ef2f2b2738b387c143624eab611d917
\ No newline at end of file
diff --git a/plugins/repository-s3/licenses/jackson-annotations-2.17.1.jar.sha1 b/plugins/repository-s3/licenses/jackson-annotations-2.17.1.jar.sha1
deleted file mode 100644
index 4ceead1b7ae4f..0000000000000
--- a/plugins/repository-s3/licenses/jackson-annotations-2.17.1.jar.sha1
+++ /dev/null
@@ -1 +0,0 @@
-fca7ef6192c9ad05d07bc50da991bf937a84af3a
\ No newline at end of file
diff --git a/plugins/repository-s3/licenses/jackson-annotations-2.17.2.jar.sha1 b/plugins/repository-s3/licenses/jackson-annotations-2.17.2.jar.sha1
new file mode 100644
index 0000000000000..411e1d62459fd
--- /dev/null
+++ b/plugins/repository-s3/licenses/jackson-annotations-2.17.2.jar.sha1
@@ -0,0 +1 @@
+147b7b9412ffff24339f8aba080b292448e08698
\ No newline at end of file
diff --git a/plugins/repository-s3/licenses/jackson-databind-2.17.1.jar.sha1 b/plugins/repository-s3/licenses/jackson-databind-2.17.1.jar.sha1
deleted file mode 100644
index 7cf1ac1b60301..0000000000000
--- a/plugins/repository-s3/licenses/jackson-databind-2.17.1.jar.sha1
+++ /dev/null
@@ -1 +0,0 @@
-0524dcbcccdde7d45a679dfc333e4763feb09079
\ No newline at end of file
diff --git a/plugins/repository-s3/licenses/jackson-databind-2.17.2.jar.sha1 b/plugins/repository-s3/licenses/jackson-databind-2.17.2.jar.sha1
new file mode 100644
index 0000000000000..f2b4dbdc5decb
--- /dev/null
+++ b/plugins/repository-s3/licenses/jackson-databind-2.17.2.jar.sha1
@@ -0,0 +1 @@
+e6deb029e5901e027c129341fac39e515066b68c
\ No newline at end of file
diff --git a/server/licenses/jackson-core-2.17.1.jar.sha1 b/server/licenses/jackson-core-2.17.1.jar.sha1
deleted file mode 100644
index 82dab5981e652..0000000000000
--- a/server/licenses/jackson-core-2.17.1.jar.sha1
+++ /dev/null
@@ -1 +0,0 @@
-5e52a11644cd59a28ef79f02bddc2cc3bab45edb
\ No newline at end of file
diff --git a/server/licenses/jackson-core-2.17.2.jar.sha1 b/server/licenses/jackson-core-2.17.2.jar.sha1
new file mode 100644
index 0000000000000..e15f2340980bc
--- /dev/null
+++ b/server/licenses/jackson-core-2.17.2.jar.sha1
@@ -0,0 +1 @@
+969a35cb35c86512acbadcdbbbfb044c877db814
\ No newline at end of file
diff --git a/server/licenses/jackson-dataformat-cbor-2.17.1.jar.sha1 b/server/licenses/jackson-dataformat-cbor-2.17.1.jar.sha1
deleted file mode 100644
index ff42ed1f92cfe..0000000000000
--- a/server/licenses/jackson-dataformat-cbor-2.17.1.jar.sha1
+++ /dev/null
@@ -1 +0,0 @@
-ba5d8e6ecc62aa0e49c0ce935b8696352dbebc71
\ No newline at end of file
diff --git a/server/licenses/jackson-dataformat-cbor-2.17.2.jar.sha1 b/server/licenses/jackson-dataformat-cbor-2.17.2.jar.sha1
new file mode 100644
index 0000000000000..069e088413ef1
--- /dev/null
+++ b/server/licenses/jackson-dataformat-cbor-2.17.2.jar.sha1
@@ -0,0 +1 @@
+57fa7c1b5104bbc4599278d13933a937ee058e68
\ No newline at end of file
diff --git a/server/licenses/jackson-dataformat-smile-2.17.1.jar.sha1 b/server/licenses/jackson-dataformat-smile-2.17.1.jar.sha1
deleted file mode 100644
index 47d19067cf2a6..0000000000000
--- a/server/licenses/jackson-dataformat-smile-2.17.1.jar.sha1
+++ /dev/null
@@ -1 +0,0 @@
-89683ac4f0a0c2c4f69ea56b90480ed40266dac8
\ No newline at end of file
diff --git a/server/licenses/jackson-dataformat-smile-2.17.2.jar.sha1 b/server/licenses/jackson-dataformat-smile-2.17.2.jar.sha1
new file mode 100644
index 0000000000000..28d8c8382aed3
--- /dev/null
+++ b/server/licenses/jackson-dataformat-smile-2.17.2.jar.sha1
@@ -0,0 +1 @@
+20e956b9b6f67138edd39fab7a506ded19638bcb
\ No newline at end of file
diff --git a/server/licenses/jackson-dataformat-yaml-2.17.1.jar.sha1 b/server/licenses/jackson-dataformat-yaml-2.17.1.jar.sha1
deleted file mode 100644
index 7946e994c7104..0000000000000
--- a/server/licenses/jackson-dataformat-yaml-2.17.1.jar.sha1
+++ /dev/null
@@ -1 +0,0 @@
-b4c7b8a9ea3f398116a75c146b982b22afebc4ee
\ No newline at end of file
diff --git a/server/licenses/jackson-dataformat-yaml-2.17.2.jar.sha1 b/server/licenses/jackson-dataformat-yaml-2.17.2.jar.sha1
new file mode 100644
index 0000000000000..f3e25b7eb253c
--- /dev/null
+++ b/server/licenses/jackson-dataformat-yaml-2.17.2.jar.sha1
@@ -0,0 +1 @@
+78d2c73dbec62044d7cf3b544b2e0d24a1a093b0
\ No newline at end of file

From 5ef41cf483581b86abc145091fea9dd6f636f503 Mon Sep 17 00:00:00 2001
From: Zelin Hao
Date: Tue, 9 Jul 2024 14:57:05 -0700
Subject: [PATCH 047/167] Add release notes for release 1.3.18 (#14699)

Signed-off-by: Zelin Hao
---
 release-notes/opensearch.release-notes-1.3.18.md | 4 ++++
 1 file changed, 4 insertions(+)
 create mode 100644 release-notes/opensearch.release-notes-1.3.18.md

diff --git a/release-notes/opensearch.release-notes-1.3.18.md b/release-notes/opensearch.release-notes-1.3.18.md
new file mode 100644
index 0000000000000..75c38dd285a63
--- /dev/null
+++ b/release-notes/opensearch.release-notes-1.3.18.md
@@ -0,0 +1,4 @@
+## 2024-07-09 Version 1.3.18 Release Notes
+
+### Upgrades
+- Bump `netty` from 4.1.110.Final to 4.1.111.Final ([#14356](https://github.com/opensearch-project/OpenSearch/pull/14356))

From c4d960fcca1ae6c3191a5c474cbc867b4431d621 Mon Sep 17 00:00:00 2001
From: Andriy Redko
Date: Tue, 9 Jul 2024 21:20:17 -0400
Subject: [PATCH 048/167] Bump reactor from 3.5.18 to 3.5.19 (#14697)

Signed-off-by: Andriy Redko
---
 CHANGELOG.md | 4 ++--
 buildSrc/version.properties | 4 ++--
 .../licenses/reactor-netty-core-1.1.20.jar.sha1 | 1 -
 .../licenses/reactor-netty-core-1.1.21.jar.sha1 | 1 +
 .../licenses/reactor-netty-http-1.1.20.jar.sha1 | 1 -
 .../licenses/reactor-netty-http-1.1.21.jar.sha1 | 1 +
 .../licenses/reactor-netty-core-1.1.20.jar.sha1 | 1 -
 .../licenses/reactor-netty-core-1.1.21.jar.sha1 | 1 +
 .../licenses/reactor-netty-http-1.1.20.jar.sha1 | 1 -
 .../licenses/reactor-netty-http-1.1.21.jar.sha1 | 1 +
 server/licenses/reactor-core-3.5.18.jar.sha1 | 1 -
 server/licenses/reactor-core-3.5.19.jar.sha1 | 1 +
 12 files changed, 9 insertions(+), 9 deletions(-)
 delete mode 100644 plugins/repository-azure/licenses/reactor-netty-core-1.1.20.jar.sha1
 create mode 100644 plugins/repository-azure/licenses/reactor-netty-core-1.1.21.jar.sha1
 delete mode 100644 plugins/repository-azure/licenses/reactor-netty-http-1.1.20.jar.sha1
 create mode 100644 plugins/repository-azure/licenses/reactor-netty-http-1.1.21.jar.sha1
 delete mode 100644 plugins/transport-reactor-netty4/licenses/reactor-netty-core-1.1.20.jar.sha1
 create mode 100644 plugins/transport-reactor-netty4/licenses/reactor-netty-core-1.1.21.jar.sha1
 delete mode 100644 plugins/transport-reactor-netty4/licenses/reactor-netty-http-1.1.20.jar.sha1
 create mode 100644 plugins/transport-reactor-netty4/licenses/reactor-netty-http-1.1.21.jar.sha1
 delete mode 100644 server/licenses/reactor-core-3.5.18.jar.sha1
 create mode 100644 server/licenses/reactor-core-3.5.19.jar.sha1

diff --git a/CHANGELOG.md b/CHANGELOG.md
index fe26423cde573..62bb73d80f2c1 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -20,8 +20,8 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
 - Update to Apache Lucene 9.11.0 ([#14042](https://github.com/opensearch-project/OpenSearch/pull/14042))
 - Bump `netty` from 4.1.110.Final to 4.1.111.Final ([#14356](https://github.com/opensearch-project/OpenSearch/pull/14356))
 - Bump `org.wiremock:wiremock-standalone` from 3.3.1 to 3.6.0 ([#14361](https://github.com/opensearch-project/OpenSearch/pull/14361))
-- Bump `reactor` from 3.5.17 to 3.5.18 ([#14395](https://github.com/opensearch-project/OpenSearch/pull/14395))
-- Bump `reactor-netty` from 1.1.19 to 1.1.20 ([#14395](https://github.com/opensearch-project/OpenSearch/pull/14395))
+- Bump `reactor` from 3.5.17 to 3.5.19 ([#14395](https://github.com/opensearch-project/OpenSearch/pull/14395), [#14697](https://github.com/opensearch-project/OpenSearch/pull/14697))
+- Bump `reactor-netty` from 1.1.19 to 1.1.21 ([#14395](https://github.com/opensearch-project/OpenSearch/pull/14395), [#14697](https://github.com/opensearch-project/OpenSearch/pull/14697))
 - Bump `commons-net:commons-net` from 3.10.0 to 3.11.1 ([#14396](https://github.com/opensearch-project/OpenSearch/pull/14396))
 - Bump `com.nimbusds:nimbus-jose-jwt` from 9.37.3 to 9.40 ([#14398](https://github.com/opensearch-project/OpenSearch/pull/14398))
 - Bump `org.apache.commons:commons-configuration2` from 2.10.1 to 2.11.0 ([#14399](https://github.com/opensearch-project/OpenSearch/pull/14399))
diff --git a/buildSrc/version.properties b/buildSrc/version.properties
index d62f8c51e616b..855ccc1f87413 100644
--- a/buildSrc/version.properties
+++ b/buildSrc/version.properties
@@ -33,8 +33,8 @@ netty = 4.1.111.Final
 joda = 2.12.7

 # project reactor
-reactor_netty = 1.1.20
-reactor = 3.5.18
+reactor_netty = 1.1.21
+reactor = 3.5.19

 # client dependencies
 httpclient5 = 5.2.1
diff --git a/plugins/repository-azure/licenses/reactor-netty-core-1.1.20.jar.sha1 b/plugins/repository-azure/licenses/reactor-netty-core-1.1.20.jar.sha1
deleted file mode 100644
index 2f4d023c88c80..0000000000000
--- a/plugins/repository-azure/licenses/reactor-netty-core-1.1.20.jar.sha1
+++ /dev/null
@@ -1 +0,0 @@
-1a5ef52a470a82d9313e2e1ad8ba064bdbd38948
\ No newline at end of file
diff --git a/plugins/repository-azure/licenses/reactor-netty-core-1.1.21.jar.sha1 b/plugins/repository-azure/licenses/reactor-netty-core-1.1.21.jar.sha1
new file mode 100644
index 0000000000000..21c16c7430016
--- /dev/null
+++ b/plugins/repository-azure/licenses/reactor-netty-core-1.1.21.jar.sha1
@@ -0,0 +1 @@
+acb98bd08107287c454ce74e7b1ed8e7a018a662
\ No newline at end of file
diff --git a/plugins/repository-azure/licenses/reactor-netty-http-1.1.20.jar.sha1 b/plugins/repository-azure/licenses/reactor-netty-http-1.1.20.jar.sha1
deleted file mode 100644
index 6c031e00e39c1..0000000000000
--- a/plugins/repository-azure/licenses/reactor-netty-http-1.1.20.jar.sha1
+++ /dev/null
@@ -1 +0,0 @@
-8d4ee98405a5856cf0c9d7c1a70f3f14631e3c46
\ No newline at end of file
diff --git a/plugins/repository-azure/licenses/reactor-netty-http-1.1.21.jar.sha1 b/plugins/repository-azure/licenses/reactor-netty-http-1.1.21.jar.sha1
new file mode 100644
index 0000000000000..648df22873d56
--- /dev/null
+++ b/plugins/repository-azure/licenses/reactor-netty-http-1.1.21.jar.sha1
@@ -0,0 +1 @@
+b83542bb35630ef815b4177e3c670f62e952e695
\ No newline at end of file
diff --git a/plugins/transport-reactor-netty4/licenses/reactor-netty-core-1.1.20.jar.sha1 b/plugins/transport-reactor-netty4/licenses/reactor-netty-core-1.1.20.jar.sha1
deleted file mode 100644
index 2f4d023c88c80..0000000000000
--- a/plugins/transport-reactor-netty4/licenses/reactor-netty-core-1.1.20.jar.sha1
+++ /dev/null
@@ -1 +0,0 @@
-1a5ef52a470a82d9313e2e1ad8ba064bdbd38948
\ No newline at end of file
diff --git a/plugins/transport-reactor-netty4/licenses/reactor-netty-core-1.1.21.jar.sha1 b/plugins/transport-reactor-netty4/licenses/reactor-netty-core-1.1.21.jar.sha1
new file mode 100644
index 0000000000000..21c16c7430016
--- /dev/null
+++ b/plugins/transport-reactor-netty4/licenses/reactor-netty-core-1.1.21.jar.sha1
@@ -0,0 +1 @@
+acb98bd08107287c454ce74e7b1ed8e7a018a662
\ No newline at end of file
diff --git a/plugins/transport-reactor-netty4/licenses/reactor-netty-http-1.1.20.jar.sha1 b/plugins/transport-reactor-netty4/licenses/reactor-netty-http-1.1.20.jar.sha1
deleted file mode 100644
index 6c031e00e39c1..0000000000000
--- a/plugins/transport-reactor-netty4/licenses/reactor-netty-http-1.1.20.jar.sha1
+++ /dev/null
@@ -1 +0,0 @@
-8d4ee98405a5856cf0c9d7c1a70f3f14631e3c46
\ No newline at end of file
diff --git a/plugins/transport-reactor-netty4/licenses/reactor-netty-http-1.1.21.jar.sha1 b/plugins/transport-reactor-netty4/licenses/reactor-netty-http-1.1.21.jar.sha1
new file mode 100644
index 0000000000000..648df22873d56
--- /dev/null
+++ b/plugins/transport-reactor-netty4/licenses/reactor-netty-http-1.1.21.jar.sha1
@@ -0,0 +1 @@
+b83542bb35630ef815b4177e3c670f62e952e695
\ No newline at end of file
diff --git a/server/licenses/reactor-core-3.5.18.jar.sha1 b/server/licenses/reactor-core-3.5.18.jar.sha1
deleted file mode 100644
index c503f768beafa..0000000000000
--- a/server/licenses/reactor-core-3.5.18.jar.sha1
+++ /dev/null
@@ -1 +0,0 @@
-3a8157f7d66d71a407eb77ba12bce72a38c5b4da
\ No newline at end of file
diff --git a/server/licenses/reactor-core-3.5.19.jar.sha1 b/server/licenses/reactor-core-3.5.19.jar.sha1
new file mode 100644
index 0000000000000..04b59d2faae04
--- /dev/null
+++ b/server/licenses/reactor-core-3.5.19.jar.sha1
@@ -0,0 +1 @@
+1d49ce1d0df79f28d3927da5f4c46a895b94335f
\ No newline at end of file

From b068355d0d81e031a4022efcb713c93fa4ff0b31 Mon Sep 17 00:00:00 2001
From: Shivansh Arora
Date: Wed, 10 Jul 2024 12:18:49 +0530
Subject: [PATCH 049/167] Add unit tests for read flow of RemoteClusterStateService and bug fix for transient settings (#14476)

Signed-off-by: Shivansh Arora
---
 .../PublicationTransportHandler.java | 1 -
 .../remote/ClusterStateDiffManifest.java | 28 +-
 .../remote/RemoteClusterStateService.java | 28 +-
 ...oteClusterStateAttributesManagerTests.java | 165 +--
 .../RemoteClusterStateServiceTests.java | 1214 ++++++++++++++++-
 .../remote/RemoteClusterStateTestUtils.java | 227 +++
 .../RemoteGlobalMetadataManagerTests.java | 125 +-
 .../test/TestClusterStateCustom.java | 68 +
 8 files changed, 1531 insertions(+), 325 deletions(-)
 create mode 100644 server/src/test/java/org/opensearch/gateway/remote/RemoteClusterStateTestUtils.java
 create mode 100644 test/framework/src/main/java/org/opensearch/test/TestClusterStateCustom.java

diff --git a/server/src/main/java/org/opensearch/cluster/coordination/PublicationTransportHandler.java b/server/src/main/java/org/opensearch/cluster/coordination/PublicationTransportHandler.java
index 36eabd51ffda1..62885a12222be 100644
--- a/server/src/main/java/org/opensearch/cluster/coordination/PublicationTransportHandler.java
+++ b/server/src/main/java/org/opensearch/cluster/coordination/PublicationTransportHandler.java
@@ -284,7 +284,6 @@ PublishWithJoinResponse handleIncomingRemotePublishRequest(RemotePublishRequest
                 )
             );
             ClusterState clusterState = remoteClusterStateService.getClusterStateUsingDiff(
-                request.getClusterName(),
                 manifest,
                 lastSeen,
                 transportService.getLocalNode().getId()
             )
         );
diff --git a/server/src/main/java/org/opensearch/gateway/remote/ClusterStateDiffManifest.java b/server/src/main/java/org/opensearch/gateway/remote/ClusterStateDiffManifest.java
index 65ae2675a95da..aca53c92781e4 100644
a/server/src/main/java/org/opensearch/gateway/remote/ClusterStateDiffManifest.java +++ b/server/src/main/java/org/opensearch/gateway/remote/ClusterStateDiffManifest.java @@ -25,6 +25,7 @@ import java.io.IOException; import java.util.ArrayList; +import java.util.Collections; import java.util.List; import java.util.Map; import java.util.Objects; @@ -152,17 +153,17 @@ public ClusterStateDiffManifest( this.settingsMetadataUpdated = settingsMetadataUpdated; this.transientSettingsMetadataUpdated = transientSettingsMetadataUpdate; this.templatesMetadataUpdated = templatesMetadataUpdated; - this.customMetadataUpdated = customMetadataUpdated; - this.customMetadataDeleted = customMetadataDeleted; - this.indicesUpdated = indicesUpdated; - this.indicesDeleted = indicesDeleted; + this.customMetadataUpdated = Collections.unmodifiableList(customMetadataUpdated); + this.customMetadataDeleted = Collections.unmodifiableList(customMetadataDeleted); + this.indicesUpdated = Collections.unmodifiableList(indicesUpdated); + this.indicesDeleted = Collections.unmodifiableList(indicesDeleted); this.clusterBlocksUpdated = clusterBlocksUpdated; this.discoveryNodesUpdated = discoveryNodesUpdated; - this.indicesRoutingUpdated = indicesRoutingUpdated; - this.indicesRoutingDeleted = indicesRoutingDeleted; + this.indicesRoutingUpdated = Collections.unmodifiableList(indicesRoutingUpdated); + this.indicesRoutingDeleted = Collections.unmodifiableList(indicesRoutingDeleted); this.hashesOfConsistentSettingsUpdated = hashesOfConsistentSettingsUpdated; - this.clusterStateCustomUpdated = clusterStateCustomUpdated; - this.clusterStateCustomDeleted = clusterStateCustomDeleted; + this.clusterStateCustomUpdated = Collections.unmodifiableList(clusterStateCustomUpdated); + this.clusterStateCustomDeleted = Collections.unmodifiableList(clusterStateCustomDeleted); } public ClusterStateDiffManifest(StreamInput in) throws IOException { @@ -563,7 +564,16 @@ public static class Builder { private List clusterStateCustomUpdated; private List clusterStateCustomDeleted; - public Builder() {} + public Builder() { + customMetadataUpdated = Collections.emptyList(); + customMetadataDeleted = Collections.emptyList(); + indicesUpdated = Collections.emptyList(); + indicesDeleted = Collections.emptyList(); + indicesRoutingUpdated = Collections.emptyList(); + indicesRoutingDeleted = Collections.emptyList(); + clusterStateCustomUpdated = Collections.emptyList(); + clusterStateCustomDeleted = Collections.emptyList(); + } public Builder fromStateUUID(String fromStateUUID) { this.fromStateUUID = fromStateUUID; diff --git a/server/src/main/java/org/opensearch/gateway/remote/RemoteClusterStateService.java b/server/src/main/java/org/opensearch/gateway/remote/RemoteClusterStateService.java index 74abe9cd257b4..3e63f9114ea16 100644 --- a/server/src/main/java/org/opensearch/gateway/remote/RemoteClusterStateService.java +++ b/server/src/main/java/org/opensearch/gateway/remote/RemoteClusterStateService.java @@ -976,7 +976,8 @@ public ClusterState getLatestClusterState(String clusterName, String clusterUUID return getClusterStateForManifest(clusterName, clusterMetadataManifest.get(), nodeId, includeEphemeral); } - private ClusterState readClusterStateInParallel( + // package private for testing + ClusterState readClusterStateInParallel( ClusterState previousState, ClusterMetadataManifest manifest, String clusterUUID, @@ -1285,7 +1286,7 @@ public ClusterState getClusterStateForManifest( manifest.getCustomMetadataMap(), manifest.getCoordinationMetadata() != null, 
manifest.getSettingsMetadata() != null, - manifest.getTransientSettingsMetadata() != null, + includeEphemeral && manifest.getTransientSettingsMetadata() != null, manifest.getTemplatesMetadata() != null, includeEphemeral && manifest.getDiscoveryNodesMetadata() != null, includeEphemeral && manifest.getClusterBlocksMetadata() != null, @@ -1321,13 +1322,9 @@ public ClusterState getClusterStateForManifest( } - public ClusterState getClusterStateUsingDiff( - String clusterName, - ClusterMetadataManifest manifest, - ClusterState previousState, - String localNodeId - ) throws IOException { - assert manifest.getDiffManifest() != null; + public ClusterState getClusterStateUsingDiff(ClusterMetadataManifest manifest, ClusterState previousState, String localNodeId) + throws IOException { + assert manifest.getDiffManifest() != null : "Diff manifest null which is required for downloading cluster state"; ClusterStateDiffManifest diff = manifest.getDiffManifest(); List updatedIndices = diff.getIndicesUpdated().stream().map(idx -> { Optional uploadedIndexMetadataOptional = manifest.getIndices() @@ -1586,6 +1583,19 @@ private boolean isValidClusterUUID(ClusterMetadataManifest manifest) { return manifest.isClusterUUIDCommitted(); } + // package private setter which are required for injecting mock managers, these setters are not supposed to be used elsewhere + void setRemoteIndexMetadataManager(RemoteIndexMetadataManager remoteIndexMetadataManager) { + this.remoteIndexMetadataManager = remoteIndexMetadataManager; + } + + void setRemoteGlobalMetadataManager(RemoteGlobalMetadataManager remoteGlobalMetadataManager) { + this.remoteGlobalMetadataManager = remoteGlobalMetadataManager; + } + + void setRemoteClusterStateAttributesManager(RemoteClusterStateAttributesManager remoteClusterStateAttributeManager) { + this.remoteClusterStateAttributesManager = remoteClusterStateAttributeManager; + } + public void writeMetadataFailed() { getStats().stateFailed(); } diff --git a/server/src/test/java/org/opensearch/gateway/remote/RemoteClusterStateAttributesManagerTests.java b/server/src/test/java/org/opensearch/gateway/remote/RemoteClusterStateAttributesManagerTests.java index fe9ed57fa77b8..3f2edd1a6c5a5 100644 --- a/server/src/test/java/org/opensearch/gateway/remote/RemoteClusterStateAttributesManagerTests.java +++ b/server/src/test/java/org/opensearch/gateway/remote/RemoteClusterStateAttributesManagerTests.java @@ -8,9 +8,7 @@ package org.opensearch.gateway.remote; -import org.opensearch.Version; import org.opensearch.action.LatchedActionListener; -import org.opensearch.cluster.AbstractNamedDiffable; import org.opensearch.cluster.ClusterName; import org.opensearch.cluster.ClusterState; import org.opensearch.cluster.ClusterState.Custom; @@ -23,11 +21,8 @@ import org.opensearch.common.util.TestCapturingListener; import org.opensearch.core.action.ActionListener; import org.opensearch.core.common.io.stream.NamedWriteableRegistry; -import org.opensearch.core.common.io.stream.StreamInput; -import org.opensearch.core.common.io.stream.StreamOutput; import org.opensearch.core.compress.Compressor; import org.opensearch.core.compress.NoneCompressor; -import org.opensearch.core.xcontent.XContentBuilder; import org.opensearch.gateway.remote.model.RemoteClusterBlocks; import org.opensearch.gateway.remote.model.RemoteClusterStateCustoms; import org.opensearch.gateway.remote.model.RemoteDiscoveryNodes; @@ -51,6 +46,10 @@ import static org.opensearch.gateway.remote.RemoteClusterStateAttributesManager.CLUSTER_STATE_ATTRIBUTE; import static 
org.opensearch.gateway.remote.RemoteClusterStateAttributesManager.CLUSTER_STATE_ATTRIBUTES_CURRENT_CODEC_VERSION; import static org.opensearch.gateway.remote.RemoteClusterStateAttributesManager.DISCOVERY_NODES; +import static org.opensearch.gateway.remote.RemoteClusterStateTestUtils.TestClusterStateCustom1; +import static org.opensearch.gateway.remote.RemoteClusterStateTestUtils.TestClusterStateCustom2; +import static org.opensearch.gateway.remote.RemoteClusterStateTestUtils.TestClusterStateCustom3; +import static org.opensearch.gateway.remote.RemoteClusterStateTestUtils.TestClusterStateCustom4; import static org.opensearch.gateway.remote.RemoteClusterStateUtils.CLUSTER_STATE_EPHEMERAL_PATH_TOKEN; import static org.opensearch.gateway.remote.RemoteClusterStateUtils.CLUSTER_STATE_PATH_TOKEN; import static org.opensearch.gateway.remote.RemoteClusterStateUtils.CUSTOM_DELIMITER; @@ -338,22 +337,22 @@ public void testGetAsyncMetadataReadAction_Exception() throws IOException, Inter public void testGetUpdatedCustoms() { Map previousCustoms = Map.of( - TestCustom1.TYPE, - new TestCustom1("data1"), - TestCustom2.TYPE, - new TestCustom2("data2"), - TestCustom3.TYPE, - new TestCustom3("data3") + TestClusterStateCustom1.TYPE, + new TestClusterStateCustom1("data1"), + TestClusterStateCustom2.TYPE, + new TestClusterStateCustom2("data2"), + TestClusterStateCustom3.TYPE, + new TestClusterStateCustom3("data3") ); ClusterState previousState = ClusterState.builder(new ClusterName("test-cluster")).customs(previousCustoms).build(); Map currentCustoms = Map.of( - TestCustom2.TYPE, - new TestCustom2("data2"), - TestCustom3.TYPE, - new TestCustom3("data3-changed"), - TestCustom4.TYPE, - new TestCustom4("data4") + TestClusterStateCustom2.TYPE, + new TestClusterStateCustom2("data2"), + TestClusterStateCustom3.TYPE, + new TestClusterStateCustom3("data3-changed"), + TestClusterStateCustom4.TYPE, + new TestClusterStateCustom4("data4") ); ClusterState currentState = ClusterState.builder(new ClusterName("test-cluster")).customs(currentCustoms).build(); @@ -368,136 +367,14 @@ public void testGetUpdatedCustoms() { assertThat(customsDiff.getDeletes(), is(Collections.emptyList())); Map expectedCustoms = Map.of( - TestCustom3.TYPE, - new TestCustom3("data3-changed"), - TestCustom4.TYPE, - new TestCustom4("data4") + TestClusterStateCustom3.TYPE, + new TestClusterStateCustom3("data3-changed"), + TestClusterStateCustom4.TYPE, + new TestClusterStateCustom4("data4") ); customsDiff = remoteClusterStateAttributesManager.getUpdatedCustoms(currentState, previousState, true, false); assertThat(customsDiff.getUpserts(), is(expectedCustoms)); - assertThat(customsDiff.getDeletes(), is(List.of(TestCustom1.TYPE))); - } - - private static abstract class AbstractTestCustom extends AbstractNamedDiffable implements ClusterState.Custom { - - private final String value; - - AbstractTestCustom(String value) { - this.value = value; - } - - AbstractTestCustom(StreamInput in) throws IOException { - this.value = in.readString(); - } - - @Override - public Version getMinimalSupportedVersion() { - return Version.CURRENT; - } - - @Override - public void writeTo(StreamOutput out) throws IOException { - out.writeString(value); - } - - @Override - public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { - return builder; - } - - @Override - public boolean isPrivate() { - return true; - } - - @Override - public boolean equals(Object o) { - if (this == o) return true; - if (o == null || getClass() != o.getClass()) return 
false; - - AbstractTestCustom that = (AbstractTestCustom) o; - - if (!value.equals(that.value)) return false; - - return true; - } - - @Override - public int hashCode() { - return value.hashCode(); - } - } - - private static class TestCustom1 extends AbstractTestCustom { - - private static final String TYPE = "custom_1"; - - TestCustom1(String value) { - super(value); - } - - TestCustom1(StreamInput in) throws IOException { - super(in); - } - - @Override - public String getWriteableName() { - return TYPE; - } - } - - private static class TestCustom2 extends AbstractTestCustom { - - private static final String TYPE = "custom_2"; - - TestCustom2(String value) { - super(value); - } - - TestCustom2(StreamInput in) throws IOException { - super(in); - } - - @Override - public String getWriteableName() { - return TYPE; - } - } - - private static class TestCustom3 extends AbstractTestCustom { - - private static final String TYPE = "custom_3"; - - TestCustom3(String value) { - super(value); - } - - TestCustom3(StreamInput in) throws IOException { - super(in); - } - - @Override - public String getWriteableName() { - return TYPE; - } - } - - private static class TestCustom4 extends AbstractTestCustom { - - private static final String TYPE = "custom_4"; - - TestCustom4(String value) { - super(value); - } - - TestCustom4(StreamInput in) throws IOException { - super(in); - } - - @Override - public String getWriteableName() { - return TYPE; - } + assertThat(customsDiff.getDeletes(), is(List.of(TestClusterStateCustom1.TYPE))); } } diff --git a/server/src/test/java/org/opensearch/gateway/remote/RemoteClusterStateServiceTests.java b/server/src/test/java/org/opensearch/gateway/remote/RemoteClusterStateServiceTests.java index d983a4d8c4027..91ddd64cc2ccc 100644 --- a/server/src/test/java/org/opensearch/gateway/remote/RemoteClusterStateServiceTests.java +++ b/server/src/test/java/org/opensearch/gateway/remote/RemoteClusterStateServiceTests.java @@ -9,12 +9,14 @@ package org.opensearch.gateway.remote; import org.opensearch.Version; +import org.opensearch.action.LatchedActionListener; import org.opensearch.cluster.ClusterModule; import org.opensearch.cluster.ClusterName; import org.opensearch.cluster.ClusterState; import org.opensearch.cluster.RepositoryCleanupInProgress; -import org.opensearch.cluster.RepositoryCleanupInProgress.Entry; +import org.opensearch.cluster.block.ClusterBlocks; import org.opensearch.cluster.coordination.CoordinationMetadata; +import org.opensearch.cluster.metadata.DiffableStringMap; import org.opensearch.cluster.metadata.IndexGraveyard; import org.opensearch.cluster.metadata.IndexMetadata; import org.opensearch.cluster.metadata.IndexTemplateMetadata; @@ -22,10 +24,12 @@ import org.opensearch.cluster.metadata.TemplatesMetadata; import org.opensearch.cluster.node.DiscoveryNode; import org.opensearch.cluster.node.DiscoveryNodes; +import org.opensearch.cluster.routing.IndexRoutingTable; import org.opensearch.cluster.routing.RoutingTable; import org.opensearch.cluster.routing.remote.InternalRemoteRoutingTableService; import org.opensearch.cluster.routing.remote.NoopRemoteRoutingTableService; import org.opensearch.cluster.service.ClusterService; +import org.opensearch.common.CheckedRunnable; import org.opensearch.common.blobstore.AsyncMultiStreamBlobContainer; import org.opensearch.common.blobstore.BlobContainer; import org.opensearch.common.blobstore.BlobMetadata; @@ -38,6 +42,7 @@ import org.opensearch.common.compress.DeflateCompressor; import 
org.opensearch.common.lucene.store.ByteArrayIndexInput; import org.opensearch.common.network.NetworkModule; +import org.opensearch.common.remote.AbstractRemoteWritableBlobEntity; import org.opensearch.common.settings.ClusterSettings; import org.opensearch.common.settings.Settings; import org.opensearch.common.util.FeatureFlags; @@ -45,13 +50,17 @@ import org.opensearch.core.action.ActionListener; import org.opensearch.core.common.bytes.BytesArray; import org.opensearch.core.common.bytes.BytesReference; +import org.opensearch.core.common.io.stream.NamedWriteableRegistry; +import org.opensearch.core.compress.Compressor; import org.opensearch.core.index.Index; import org.opensearch.core.xcontent.NamedXContentRegistry; import org.opensearch.gateway.remote.ClusterMetadataManifest.UploadedIndexMetadata; import org.opensearch.gateway.remote.ClusterMetadataManifest.UploadedMetadataAttribute; import org.opensearch.gateway.remote.model.RemoteClusterMetadataManifest; import org.opensearch.gateway.remote.model.RemoteClusterStateManifestInfo; -import org.opensearch.gateway.remote.model.RemoteIndexMetadata; +import org.opensearch.gateway.remote.model.RemotePersistentSettingsMetadata; +import org.opensearch.gateway.remote.model.RemoteReadResult; +import org.opensearch.gateway.remote.model.RemoteTransientSettingsMetadata; import org.opensearch.index.remote.RemoteIndexPathUploader; import org.opensearch.indices.IndicesModule; import org.opensearch.repositories.FilterRepository; @@ -61,7 +70,6 @@ import org.opensearch.repositories.blobstore.ChecksumWritableBlobStoreFormat; import org.opensearch.repositories.fs.FsRepository; import org.opensearch.test.OpenSearchTestCase; -import org.opensearch.test.TestCustomMetadata; import org.opensearch.test.VersionUtils; import org.opensearch.threadpool.TestThreadPool; import org.opensearch.threadpool.ThreadPool; @@ -75,7 +83,6 @@ import java.util.ArrayList; import java.util.Arrays; import java.util.Collections; -import java.util.EnumSet; import java.util.HashMap; import java.util.Iterator; import java.util.List; @@ -92,23 +99,51 @@ import java.util.stream.Stream; import org.mockito.ArgumentCaptor; +import org.mockito.ArgumentMatcher; import org.mockito.ArgumentMatchers; import org.mockito.Mockito; +import static java.util.Collections.emptyList; +import static java.util.Collections.emptyMap; import static java.util.stream.Collectors.toList; import static org.opensearch.common.util.FeatureFlags.REMOTE_PUBLICATION_EXPERIMENTAL; import static org.opensearch.gateway.remote.ClusterMetadataManifest.CODEC_V1; +import static org.opensearch.gateway.remote.ClusterMetadataManifest.CODEC_V2; +import static org.opensearch.gateway.remote.RemoteClusterStateAttributesManager.CLUSTER_BLOCKS; +import static org.opensearch.gateway.remote.RemoteClusterStateAttributesManager.CLUSTER_STATE_ATTRIBUTE; +import static org.opensearch.gateway.remote.RemoteClusterStateTestUtils.CustomMetadata1; +import static org.opensearch.gateway.remote.RemoteClusterStateTestUtils.CustomMetadata2; +import static org.opensearch.gateway.remote.RemoteClusterStateTestUtils.CustomMetadata3; +import static org.opensearch.gateway.remote.RemoteClusterStateTestUtils.TestClusterStateCustom1; +import static org.opensearch.gateway.remote.RemoteClusterStateTestUtils.TestClusterStateCustom2; +import static org.opensearch.gateway.remote.RemoteClusterStateTestUtils.TestClusterStateCustom3; import static org.opensearch.gateway.remote.RemoteClusterStateUtils.DELIMITER; import static 
org.opensearch.gateway.remote.RemoteClusterStateUtils.FORMAT_PARAMS; import static org.opensearch.gateway.remote.RemoteClusterStateUtils.getFormattedIndexFileName; +import static org.opensearch.gateway.remote.model.RemoteClusterBlocks.CLUSTER_BLOCKS_FORMAT; +import static org.opensearch.gateway.remote.model.RemoteClusterBlocksTests.randomClusterBlocks; import static org.opensearch.gateway.remote.model.RemoteClusterMetadataManifest.MANIFEST_CURRENT_CODEC_VERSION; +import static org.opensearch.gateway.remote.model.RemoteClusterStateCustoms.CLUSTER_STATE_CUSTOM; import static org.opensearch.gateway.remote.model.RemoteCoordinationMetadata.COORDINATION_METADATA; import static org.opensearch.gateway.remote.model.RemoteCoordinationMetadata.COORDINATION_METADATA_FORMAT; +import static org.opensearch.gateway.remote.model.RemoteCustomMetadata.CUSTOM_DELIMITER; +import static org.opensearch.gateway.remote.model.RemoteCustomMetadata.CUSTOM_METADATA; +import static org.opensearch.gateway.remote.model.RemoteCustomMetadata.readFrom; +import static org.opensearch.gateway.remote.model.RemoteDiscoveryNodes.DISCOVERY_NODES; +import static org.opensearch.gateway.remote.model.RemoteDiscoveryNodes.DISCOVERY_NODES_FORMAT; +import static org.opensearch.gateway.remote.model.RemoteDiscoveryNodesTests.getDiscoveryNodes; import static org.opensearch.gateway.remote.model.RemoteGlobalMetadata.GLOBAL_METADATA_FORMAT; +import static org.opensearch.gateway.remote.model.RemoteHashesOfConsistentSettings.HASHES_OF_CONSISTENT_SETTINGS; +import static org.opensearch.gateway.remote.model.RemoteHashesOfConsistentSettings.HASHES_OF_CONSISTENT_SETTINGS_FORMAT; +import static org.opensearch.gateway.remote.model.RemoteHashesOfConsistentSettingsTests.getHashesOfConsistentSettings; +import static org.opensearch.gateway.remote.model.RemoteIndexMetadata.INDEX; +import static org.opensearch.gateway.remote.model.RemoteIndexMetadata.INDEX_METADATA_FORMAT; import static org.opensearch.gateway.remote.model.RemotePersistentSettingsMetadata.SETTINGS_METADATA_FORMAT; import static org.opensearch.gateway.remote.model.RemotePersistentSettingsMetadata.SETTING_METADATA; import static org.opensearch.gateway.remote.model.RemoteTemplatesMetadata.TEMPLATES_METADATA; import static org.opensearch.gateway.remote.model.RemoteTemplatesMetadata.TEMPLATES_METADATA_FORMAT; +import static org.opensearch.gateway.remote.model.RemoteTemplatesMetadataTests.getTemplatesMetadata; +import static org.opensearch.gateway.remote.model.RemoteTransientSettingsMetadata.TRANSIENT_SETTING_METADATA; import static org.opensearch.node.remotestore.RemoteStoreNodeAttribute.REMOTE_STORE_CLUSTER_STATE_REPOSITORY_NAME_ATTRIBUTE_KEY; import static org.opensearch.node.remotestore.RemoteStoreNodeAttribute.REMOTE_STORE_REPOSITORY_SETTINGS_ATTRIBUTE_KEY_PREFIX; import static org.opensearch.node.remotestore.RemoteStoreNodeAttribute.REMOTE_STORE_REPOSITORY_TYPE_ATTRIBUTE_KEY_FORMAT; @@ -120,11 +155,19 @@ import static org.hamcrest.Matchers.notNullValue; import static org.hamcrest.Matchers.nullValue; import static org.mockito.ArgumentMatchers.any; +import static org.mockito.ArgumentMatchers.anyBoolean; +import static org.mockito.ArgumentMatchers.anyList; +import static org.mockito.ArgumentMatchers.anyMap; import static org.mockito.ArgumentMatchers.anyString; +import static org.mockito.ArgumentMatchers.argThat; import static org.mockito.ArgumentMatchers.eq; import static org.mockito.Mockito.doAnswer; +import static org.mockito.Mockito.doNothing; import static org.mockito.Mockito.doThrow; 
 import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.spy;
+import static org.mockito.Mockito.times;
+import static org.mockito.Mockito.verify;
 import static org.mockito.Mockito.when;
 
 public class RemoteClusterStateServiceTests extends OpenSearchTestCase {
@@ -135,11 +178,22 @@ public class RemoteClusterStateServiceTests extends OpenSearchTestCase {
     private Supplier<RepositoriesService> repositoriesServiceSupplier;
     private RepositoriesService repositoriesService;
     private BlobStoreRepository blobStoreRepository;
+    private Compressor compressor;
     private BlobStore blobStore;
     private Settings settings;
     private boolean publicationEnabled;
+    private NamedWriteableRegistry namedWriteableRegistry;
     private final ThreadPool threadPool = new TestThreadPool(getClass().getName());
+    private static final String NODE_ID = "test-node";
+    private static final String COORDINATION_METADATA_FILENAME = "coordination-metadata-file__1";
+    private static final String PERSISTENT_SETTINGS_FILENAME = "persistent-settings-file__1";
+    private static final String TRANSIENT_SETTINGS_FILENAME = "transient-settings-file__1";
+    private static final String TEMPLATES_METADATA_FILENAME = "templates-metadata-file__1";
+    private static final String DISCOVERY_NODES_FILENAME = "discovery-nodes-file__1";
+    private static final String CLUSTER_BLOCKS_FILENAME = "cluster-blocks-file__1";
+    private static final String HASHES_OF_CONSISTENT_SETTINGS_FILENAME = "consistent-settings-hashes-file__1";
+
     @Before
     public void setup() {
         repositoriesServiceSupplier = mock(Supplier.class);
@@ -164,6 +218,11 @@ public void setup() {
             .put(RemoteClusterStateService.REMOTE_CLUSTER_STATE_ENABLED_SETTING.getKey(), true)
             .put("node.attr." + REMOTE_STORE_ROUTING_TABLE_REPOSITORY_NAME_ATTRIBUTE_KEY, "routing_repository")
             .build();
+        List<NamedWriteableRegistry.Entry> writeableEntries = ClusterModule.getNamedWriteables();
+        writeableEntries.add(new NamedWriteableRegistry.Entry(Metadata.Custom.class, CustomMetadata1.TYPE, CustomMetadata1::new));
+        writeableEntries.add(new NamedWriteableRegistry.Entry(Metadata.Custom.class, CustomMetadata2.TYPE, CustomMetadata2::new));
+        writeableEntries.add(new NamedWriteableRegistry.Entry(Metadata.Custom.class, CustomMetadata3.TYPE, CustomMetadata3::new));
+        namedWriteableRegistry = new NamedWriteableRegistry(writeableEntries);
 
         clusterSettings = new ClusterSettings(settings, ClusterSettings.BUILT_IN_CLUSTER_SETTINGS);
         clusterService = mock(ClusterService.class);
@@ -176,6 +235,7 @@ public void setup() {
             ).flatMap(Function.identity()).collect(toList())
         );
 
+        compressor = new DeflateCompressor();
         blobStoreRepository = mock(BlobStoreRepository.class);
         blobStore = mock(BlobStore.class);
         when(blobStoreRepository.blobStore()).thenReturn(blobStore);
@@ -191,7 +251,7 @@ public void setup() {
             () -> 0L,
             threadPool,
             List.of(new RemoteIndexPathUploader(threadPool, settings, repositoriesServiceSupplier, clusterSettings)),
-            writableRegistry()
+            namedWriteableRegistry
         );
     }
 
@@ -275,6 +335,7 @@ public void testWriteFullMetadataSuccess() throws IOException {
         assertThat(manifest.getSettingsMetadata(), notNullValue());
         assertThat(manifest.getTemplatesMetadata(), notNullValue());
         assertFalse(manifest.getCustomMetadataMap().isEmpty());
+        assertThat(manifest.getCustomMetadataMap().containsKey(CustomMetadata1.TYPE), is(true));
         assertThat(manifest.getClusterBlocksMetadata(), nullValue());
         assertThat(manifest.getDiscoveryNodesMetadata(), nullValue());
         assertThat(manifest.getTransientSettingsMetadata(), nullValue());
@@ -298,7 +359,12 @@ public void testWriteFullMetadataSuccessPublicationEnabled() throws IOException
             writableRegistry()
         );
         final ClusterState clusterState = generateClusterStateWithOneIndex().nodes(nodesWithLocalNodeClusterManager())
-            .customs(Map.of(RepositoryCleanupInProgress.TYPE, new RepositoryCleanupInProgress(List.of(new Entry("test-repo", 10L)))))
+            .customs(
+                Map.of(
+                    RepositoryCleanupInProgress.TYPE,
+                    new RepositoryCleanupInProgress(List.of(new RepositoryCleanupInProgress.Entry("test-repo", 10L)))
+                )
+            )
             .build();
         mockBlobStoreObjects();
         remoteClusterStateService.start();
@@ -330,6 +396,7 @@ public void testWriteFullMetadataSuccessPublicationEnabled() throws IOException
         assertThat(manifest.getSettingsMetadata(), notNullValue());
         assertThat(manifest.getTemplatesMetadata(), notNullValue());
         assertFalse(manifest.getCustomMetadataMap().isEmpty());
+        assertThat(manifest.getCustomMetadataMap().containsKey(CustomMetadata1.TYPE), is(true));
         assertThat(manifest.getClusterStateCustomMap().size(), is(1));
         assertThat(manifest.getClusterStateCustomMap().containsKey(RepositoryCleanupInProgress.TYPE), is(true));
     }
@@ -388,7 +455,7 @@ public void testWriteFullMetadataInParallelSuccess() throws IOException {
             .provideStream(0)
             .getInputStream()
             .readAllBytes();
-        IndexMetadata writtenIndexMetadata = RemoteIndexMetadata.INDEX_METADATA_FORMAT.deserialize(
+        IndexMetadata writtenIndexMetadata = INDEX_METADATA_FORMAT.deserialize(
             capturedWriteContext.get("metadata").getFileName(),
             blobStoreRepository.getNamedXContentRegistry(),
             new BytesArray(writtenBytes)
@@ -445,24 +512,35 @@ public void testTimeoutWhileWritingManifestFile() throws IOException {
         ArgumentCaptor<ActionListener<Void>> actionListenerArgumentCaptor = ArgumentCaptor.forClass(ActionListener.class);
-        doAnswer((i) -> { // For Global Metadata
-            actionListenerArgumentCaptor.getValue().onResponse(null);
-            return null;
-        }).doAnswer((i) -> { // For Index Metadata
-            actionListenerArgumentCaptor.getValue().onResponse(null);
-            return null;
-        }).doAnswer((i) -> {
+        doAnswer((i) -> {
             // For Manifest file perform No Op, so latch in code will timeout
             return null;
         }).when(container).asyncBlobUpload(any(WriteContext.class), actionListenerArgumentCaptor.capture());
 
         remoteClusterStateService.start();
-        try {
-            remoteClusterStateService.writeFullMetadata(clusterState, randomAlphaOfLength(10));
-        } catch (Exception e) {
-            assertTrue(e instanceof RemoteStateTransferException);
-            assertTrue(e.getMessage().contains("Timed out waiting for transfer of following metadata to complete"));
-        }
+        RemoteClusterStateService spiedService = spy(remoteClusterStateService);
+        when(
+            spiedService.writeMetadataInParallel(
+                any(),
+                anyList(),
+                anyMap(),
+                anyMap(),
+                anyBoolean(),
+                anyBoolean(),
+                anyBoolean(),
+                anyBoolean(),
+                anyBoolean(),
+                anyBoolean(),
+                anyMap(),
+                anyBoolean(),
+                anyList()
+            )
+        ).thenReturn(new RemoteClusterStateUtils.UploadedMetadataResults());
+        RemoteStateTransferException ex = expectThrows(
+            RemoteStateTransferException.class,
+            () -> spiedService.writeFullMetadata(clusterState, randomAlphaOfLength(10))
+        );
+        assertTrue(ex.getMessage().contains("Timed out waiting for transfer of manifest file to complete"));
     }
 
     public void testWriteFullMetadataInParallelFailureForIndexMetadata() throws IOException {
@@ -658,6 +736,991 @@ public void testWriteIncrementalMetadataSuccessWhenPublicationEnabled() throws I
         assertThat(manifest.getIndicesRouting().size(), is(1));
     }
 
+    public void testTimeoutWhileWritingMetadata() throws IOException {
+        AsyncMultiStreamBlobContainer container =
+            (AsyncMultiStreamBlobContainer) mockBlobStoreObjects(AsyncMultiStreamBlobContainer.class);
+        doNothing().when(container).asyncBlobUpload(any(), any());
+        int writeTimeout = 2;
+        Settings newSettings = Settings.builder()
+            .put("cluster.remote_store.state.global_metadata.upload_timeout", writeTimeout + "s")
+            .build();
+        clusterSettings.applySettings(newSettings);
+        remoteClusterStateService.start();
+        RemoteStateTransferException exception = assertThrows(
+            RemoteStateTransferException.class,
+            () -> remoteClusterStateService.writeMetadataInParallel(
+                ClusterState.EMPTY_STATE,
+                emptyList(),
+                emptyMap(),
+                emptyMap(),
+                true,
+                true,
+                true,
+                true,
+                true,
+                true,
+                emptyMap(),
+                true,
+                emptyList()
+            )
+        );
+        assertTrue(exception.getMessage().startsWith("Timed out waiting for transfer of following metadata to complete"));
+    }
+
+    public void testGetClusterStateForManifest_IncludeEphemeral() throws IOException {
+        ClusterMetadataManifest manifest = generateClusterMetadataManifestWithAllAttributes().build();
+        mockBlobStoreObjects();
+        remoteClusterStateService.start();
+        RemoteReadResult mockedResult = mock(RemoteReadResult.class);
+        RemoteIndexMetadataManager mockedIndexManager = mock(RemoteIndexMetadataManager.class);
+        RemoteGlobalMetadataManager mockedGlobalMetadataManager = mock(RemoteGlobalMetadataManager.class);
+        RemoteClusterStateAttributesManager mockedClusterStateAttributeManager = mock(RemoteClusterStateAttributesManager.class);
+        remoteClusterStateService.setRemoteIndexMetadataManager(mockedIndexManager);
+        remoteClusterStateService.setRemoteGlobalMetadataManager(mockedGlobalMetadataManager);
+        remoteClusterStateService.setRemoteClusterStateAttributesManager(mockedClusterStateAttributeManager);
+        ArgumentCaptor<LatchedActionListener<RemoteReadResult>> listenerArgumentCaptor = ArgumentCaptor.forClass(
+            LatchedActionListener.class
+        );
+        when(mockedIndexManager.getAsyncIndexMetadataReadAction(any(), anyString(), listenerArgumentCaptor.capture())).thenReturn(
+            () -> listenerArgumentCaptor.getValue().onResponse(mockedResult)
+        );
+        when(mockedGlobalMetadataManager.getAsyncMetadataReadAction(any(), anyString(), listenerArgumentCaptor.capture())).thenReturn(
+            () -> listenerArgumentCaptor.getValue().onResponse(mockedResult)
+        );
+        when(mockedClusterStateAttributeManager.getAsyncMetadataReadAction(anyString(), any(), listenerArgumentCaptor.capture()))
+            .thenReturn(() -> listenerArgumentCaptor.getValue().onResponse(mockedResult));
+        when(mockedResult.getComponent()).thenReturn(COORDINATION_METADATA);
+        RemoteClusterStateService mockService = spy(remoteClusterStateService);
+        mockService.getClusterStateForManifest(ClusterName.DEFAULT.value(), manifest, NODE_ID, true);
+        verify(mockService, times(1)).readClusterStateInParallel(
+            any(),
+            eq(manifest),
+            eq(manifest.getClusterUUID()),
+            eq(NODE_ID),
+            eq(manifest.getIndices()),
+            eq(manifest.getCustomMetadataMap()),
+            eq(true),
+            eq(true),
+            eq(true),
+            eq(true),
+            eq(true),
+            eq(true),
+            eq(manifest.getIndicesRouting()),
+            eq(true),
+            eq(manifest.getClusterStateCustomMap()),
+            eq(true)
+        );
+    }
+
+    public void testGetClusterStateForManifest_ExcludeEphemeral() throws IOException {
+        ClusterMetadataManifest manifest = generateClusterMetadataManifestWithAllAttributes().build();
+        mockBlobStoreObjects();
+        remoteClusterStateService.start();
+        RemoteReadResult mockedResult = mock(RemoteReadResult.class);
+        RemoteIndexMetadataManager mockedIndexManager = mock(RemoteIndexMetadataManager.class);
+        RemoteGlobalMetadataManager mockedGlobalMetadataManager =
mock(RemoteGlobalMetadataManager.class); + RemoteClusterStateAttributesManager mockedClusterStateAttributeManager = mock(RemoteClusterStateAttributesManager.class); + ArgumentCaptor> listenerArgumentCaptor = ArgumentCaptor.forClass( + LatchedActionListener.class + ); + when(mockedIndexManager.getAsyncIndexMetadataReadAction(any(), anyString(), listenerArgumentCaptor.capture())).thenReturn( + () -> listenerArgumentCaptor.getValue().onResponse(mockedResult) + ); + when(mockedGlobalMetadataManager.getAsyncMetadataReadAction(any(), anyString(), listenerArgumentCaptor.capture())).thenReturn( + () -> listenerArgumentCaptor.getValue().onResponse(mockedResult) + ); + when(mockedClusterStateAttributeManager.getAsyncMetadataReadAction(anyString(), any(), listenerArgumentCaptor.capture())) + .thenReturn(() -> listenerArgumentCaptor.getValue().onResponse(mockedResult)); + when(mockedResult.getComponent()).thenReturn(COORDINATION_METADATA); + remoteClusterStateService.setRemoteIndexMetadataManager(mockedIndexManager); + remoteClusterStateService.setRemoteGlobalMetadataManager(mockedGlobalMetadataManager); + remoteClusterStateService.setRemoteClusterStateAttributesManager(mockedClusterStateAttributeManager); + RemoteClusterStateService spiedService = spy(remoteClusterStateService); + spiedService.getClusterStateForManifest(ClusterName.DEFAULT.value(), manifest, NODE_ID, false); + verify(spiedService, times(1)).readClusterStateInParallel( + any(), + eq(manifest), + eq(manifest.getClusterUUID()), + eq(NODE_ID), + eq(manifest.getIndices()), + eq(manifest.getCustomMetadataMap()), + eq(true), + eq(true), + eq(false), + eq(true), + eq(false), + eq(false), + eq(emptyList()), + eq(false), + eq(emptyMap()), + eq(false) + ); + } + + public void testGetClusterStateFromManifest_CodecV1() throws IOException { + ClusterMetadataManifest manifest = generateClusterMetadataManifestWithAllAttributes().codecVersion(CODEC_V1).build(); + mockBlobStoreObjects(); + remoteClusterStateService.start(); + final Index index = new Index("test-index", "index-uuid"); + final Settings idxSettings = Settings.builder() + .put(IndexMetadata.SETTING_VERSION_CREATED, Version.CURRENT) + .put(IndexMetadata.SETTING_INDEX_UUID, index.getUUID()) + .build(); + final IndexMetadata indexMetadata = new IndexMetadata.Builder(index.getName()).settings(idxSettings) + .numberOfShards(1) + .numberOfReplicas(0) + .build(); + RemoteIndexMetadataManager mockedIndexManager = mock(RemoteIndexMetadataManager.class); + RemoteGlobalMetadataManager mockedGlobalMetadataManager = mock(RemoteGlobalMetadataManager.class); + remoteClusterStateService.setRemoteIndexMetadataManager(mockedIndexManager); + remoteClusterStateService.setRemoteGlobalMetadataManager(mockedGlobalMetadataManager); + ArgumentCaptor> listenerArgumentCaptor = ArgumentCaptor.forClass( + LatchedActionListener.class + ); + when(mockedIndexManager.getAsyncIndexMetadataReadAction(any(), anyString(), listenerArgumentCaptor.capture())).thenReturn( + () -> listenerArgumentCaptor.getValue().onResponse(new RemoteReadResult(indexMetadata, INDEX, INDEX)) + ); + when(mockedGlobalMetadataManager.getGlobalMetadata(anyString(), eq(manifest))).thenReturn(Metadata.EMPTY_METADATA); + RemoteClusterStateService spiedService = spy(remoteClusterStateService); + spiedService.getClusterStateForManifest(ClusterName.DEFAULT.value(), manifest, NODE_ID, true); + verify(spiedService, times(1)).readClusterStateInParallel( + any(), + eq(manifest), + eq(manifest.getClusterUUID()), + eq(NODE_ID), + eq(manifest.getIndices()), + 
eq(emptyMap()), + eq(false), + eq(false), + eq(false), + eq(false), + eq(false), + eq(false), + eq(emptyList()), + eq(false), + eq(emptyMap()), + eq(false) + ); + verify(mockedGlobalMetadataManager, times(1)).getGlobalMetadata(eq(manifest.getClusterUUID()), eq(manifest)); + } + + public void testGetClusterStateUsingDiffFailWhenDiffManifestAbsent() { + ClusterMetadataManifest manifest = ClusterMetadataManifest.builder().build(); + ClusterState previousState = ClusterState.EMPTY_STATE; + AssertionError error = assertThrows( + AssertionError.class, + () -> remoteClusterStateService.getClusterStateUsingDiff(manifest, previousState, "test-node") + ); + assertEquals("Diff manifest null which is required for downloading cluster state", error.getMessage()); + } + + public void testGetClusterStateUsingDiff_NoDiff() throws IOException { + ClusterStateDiffManifest diffManifest = ClusterStateDiffManifest.builder().build(); + ClusterState clusterState = generateClusterStateWithAllAttributes().build(); + ClusterMetadataManifest manifest = ClusterMetadataManifest.builder() + .diffManifest(diffManifest) + .stateUUID(clusterState.stateUUID()) + .stateVersion(clusterState.version()) + .metadataVersion(clusterState.metadata().version()) + .clusterUUID(clusterState.getMetadata().clusterUUID()) + .routingTableVersion(clusterState.routingTable().version()) + .build(); + ClusterState updatedClusterState = remoteClusterStateService.getClusterStateUsingDiff(manifest, clusterState, "test-node"); + assertEquals(clusterState.getClusterName(), updatedClusterState.getClusterName()); + assertEquals(clusterState.metadata().clusterUUID(), updatedClusterState.metadata().clusterUUID()); + assertEquals(clusterState.metadata().version(), updatedClusterState.metadata().version()); + assertEquals(clusterState.metadata().coordinationMetadata(), updatedClusterState.metadata().coordinationMetadata()); + assertEquals(clusterState.metadata().getIndices(), updatedClusterState.metadata().getIndices()); + assertEquals(clusterState.metadata().templates(), updatedClusterState.metadata().templates()); + assertEquals(clusterState.metadata().persistentSettings(), updatedClusterState.metadata().persistentSettings()); + assertEquals(clusterState.metadata().transientSettings(), updatedClusterState.metadata().transientSettings()); + assertEquals(clusterState.metadata().getCustoms(), updatedClusterState.metadata().getCustoms()); + assertEquals(clusterState.metadata().hashesOfConsistentSettings(), updatedClusterState.metadata().hashesOfConsistentSettings()); + assertEquals(clusterState.getCustoms(), updatedClusterState.getCustoms()); + assertEquals(clusterState.stateUUID(), updatedClusterState.stateUUID()); + assertEquals(clusterState.version(), updatedClusterState.version()); + assertEquals(clusterState.getRoutingTable().version(), updatedClusterState.getRoutingTable().version()); + assertEquals(clusterState.getRoutingTable().getIndicesRouting(), updatedClusterState.getRoutingTable().getIndicesRouting()); + assertEquals(clusterState.getNodes(), updatedClusterState.getNodes()); + assertEquals(clusterState.getBlocks(), updatedClusterState.getBlocks()); + } + + public void testGetClusterStateUsingDiff() throws IOException { + ClusterState clusterState = generateClusterStateWithAllAttributes().build(); + ClusterState.Builder expectedClusterStateBuilder = ClusterState.builder(clusterState); + Metadata.Builder mb = Metadata.builder(clusterState.metadata()); + ClusterStateDiffManifest.Builder diffManifestBuilder = ClusterStateDiffManifest.builder(); + 
ClusterMetadataManifest.Builder manifestBuilder = ClusterMetadataManifest.builder(); + BlobContainer blobContainer = mockBlobStoreObjects(); + if (randomBoolean()) { + // updated coordination metadata + CoordinationMetadata coordinationMetadata = CoordinationMetadata.builder() + .term(clusterState.metadata().coordinationMetadata().term() + 1) + .build(); + mb.coordinationMetadata(coordinationMetadata); + diffManifestBuilder.coordinationMetadataUpdated(true); + manifestBuilder.coordinationMetadata(new UploadedMetadataAttribute(COORDINATION_METADATA, COORDINATION_METADATA_FILENAME)); + when(blobContainer.readBlob(COORDINATION_METADATA_FILENAME)).thenAnswer(i -> { + BytesReference bytes = COORDINATION_METADATA_FORMAT.serialize( + coordinationMetadata, + COORDINATION_METADATA_FILENAME, + compressor, + FORMAT_PARAMS + ); + return new ByteArrayInputStream(bytes.streamInput().readAllBytes()); + }); + } + if (randomBoolean()) { + // updated templates + TemplatesMetadata templatesMetadata = TemplatesMetadata.builder() + .put( + IndexTemplateMetadata.builder("template" + randomAlphaOfLength(3)) + .patterns(Arrays.asList("bar-*", "foo-*")) + .settings(Settings.builder().put("random_index_setting_" + randomAlphaOfLength(3), randomAlphaOfLength(5)).build()) + .build() + ) + .build(); + mb.templates(templatesMetadata); + diffManifestBuilder.templatesMetadataUpdated(true); + manifestBuilder.templatesMetadata(new UploadedMetadataAttribute(TEMPLATES_METADATA, TEMPLATES_METADATA_FILENAME)); + when(blobContainer.readBlob(TEMPLATES_METADATA_FILENAME)).thenAnswer(i -> { + BytesReference bytes = TEMPLATES_METADATA_FORMAT.serialize( + templatesMetadata, + TEMPLATES_METADATA_FILENAME, + compressor, + FORMAT_PARAMS + ); + return new ByteArrayInputStream(bytes.streamInput().readAllBytes()); + }); + } + if (randomBoolean()) { + // updated persistent settings + Settings persistentSettings = Settings.builder() + .put("random_persistent_setting_" + randomAlphaOfLength(3), randomAlphaOfLength(5)) + .build(); + mb.persistentSettings(persistentSettings); + diffManifestBuilder.settingsMetadataUpdated(true); + manifestBuilder.settingMetadata(new UploadedMetadataAttribute(SETTING_METADATA, PERSISTENT_SETTINGS_FILENAME)); + when(blobContainer.readBlob(PERSISTENT_SETTINGS_FILENAME)).thenAnswer(i -> { + BytesReference bytes = RemotePersistentSettingsMetadata.SETTINGS_METADATA_FORMAT.serialize( + persistentSettings, + PERSISTENT_SETTINGS_FILENAME, + compressor, + FORMAT_PARAMS + ); + return new ByteArrayInputStream(bytes.streamInput().readAllBytes()); + }); + } + if (randomBoolean()) { + // updated transient settings + Settings transientSettings = Settings.builder() + .put("random_transient_setting_" + randomAlphaOfLength(3), randomAlphaOfLength(5)) + .build(); + mb.transientSettings(transientSettings); + diffManifestBuilder.transientSettingsMetadataUpdate(true); + manifestBuilder.transientSettingsMetadata( + new UploadedMetadataAttribute(TRANSIENT_SETTING_METADATA, TRANSIENT_SETTINGS_FILENAME) + ); + when(blobContainer.readBlob(TRANSIENT_SETTINGS_FILENAME)).thenAnswer(i -> { + BytesReference bytes = RemoteTransientSettingsMetadata.SETTINGS_METADATA_FORMAT.serialize( + transientSettings, + TRANSIENT_SETTINGS_FILENAME, + compressor, + FORMAT_PARAMS + ); + return new ByteArrayInputStream(bytes.streamInput().readAllBytes()); + }); + } + if (randomBoolean()) { + // updated customs + CustomMetadata2 addedCustom = new CustomMetadata2(randomAlphaOfLength(10)); + mb.putCustom(addedCustom.getWriteableName(), addedCustom); + 
diffManifestBuilder.customMetadataUpdated(Collections.singletonList(addedCustom.getWriteableName())); + manifestBuilder.customMetadataMap( + Map.of(addedCustom.getWriteableName(), new UploadedMetadataAttribute(addedCustom.getWriteableName(), "custom-md2-file__1")) + ); + when(blobContainer.readBlob("custom-md2-file__1")).thenAnswer(i -> { + ChecksumWritableBlobStoreFormat customMetadataFormat = new ChecksumWritableBlobStoreFormat<>( + "custom", + is -> readFrom(is, namedWriteableRegistry, addedCustom.getWriteableName()) + ); + BytesReference bytes = customMetadataFormat.serialize(addedCustom, "custom-md2-file__1", compressor); + return new ByteArrayInputStream(bytes.streamInput().readAllBytes()); + }); + } + if (randomBoolean()) { + Set customsToRemove = clusterState.metadata().customs().keySet(); + customsToRemove.forEach(mb::removeCustom); + diffManifestBuilder.customMetadataDeleted(new ArrayList<>(customsToRemove)); + } + if (randomBoolean()) { + // updated hashes of consistent settings + DiffableStringMap hashesOfConsistentSettings = new DiffableStringMap(Map.of("secure_setting_key", "secure_setting_value")); + mb.hashesOfConsistentSettings(hashesOfConsistentSettings); + diffManifestBuilder.hashesOfConsistentSettingsUpdated(true); + manifestBuilder.hashesOfConsistentSettings( + new UploadedMetadataAttribute(HASHES_OF_CONSISTENT_SETTINGS, HASHES_OF_CONSISTENT_SETTINGS_FILENAME) + ); + when(blobContainer.readBlob(HASHES_OF_CONSISTENT_SETTINGS_FILENAME)).thenAnswer(i -> { + BytesReference bytes = HASHES_OF_CONSISTENT_SETTINGS_FORMAT.serialize( + hashesOfConsistentSettings, + HASHES_OF_CONSISTENT_SETTINGS_FILENAME, + compressor + ); + return new ByteArrayInputStream(bytes.streamInput().readAllBytes()); + }); + } + if (randomBoolean()) { + // updated index metadata + IndexMetadata indexMetadata = new IndexMetadata.Builder("add-test-index").settings( + Settings.builder() + .put(IndexMetadata.SETTING_VERSION_CREATED, Version.CURRENT) + .put(IndexMetadata.SETTING_INDEX_UUID, "add-test-index-uuid") + .build() + ).numberOfShards(1).numberOfReplicas(0).build(); + mb.put(indexMetadata, true); + diffManifestBuilder.indicesUpdated(Collections.singletonList(indexMetadata.getIndex().getName())); + manifestBuilder.indices( + List.of( + new UploadedIndexMetadata(indexMetadata.getIndex().getName(), indexMetadata.getIndexUUID(), "add-test-index-file__2") + ) + ); + when(blobContainer.readBlob("add-test-index-file__2")).thenAnswer(i -> { + BytesReference bytes = INDEX_METADATA_FORMAT.serialize(indexMetadata, "add-test-index-file__2", compressor, FORMAT_PARAMS); + return new ByteArrayInputStream(bytes.streamInput().readAllBytes()); + }); + } + if (randomBoolean()) { + // remove index metadata + Set indicesToDelete = clusterState.metadata().getIndices().keySet(); + indicesToDelete.forEach(mb::remove); + diffManifestBuilder.indicesDeleted(new ArrayList<>(indicesToDelete)); + } + if (randomBoolean()) { + // update nodes + DiscoveryNode node = new DiscoveryNode("node_id", buildNewFakeTransportAddress(), Version.CURRENT); + DiscoveryNodes.Builder nodesBuilder = DiscoveryNodes.builder(clusterState.nodes()).add(node); + expectedClusterStateBuilder.nodes(nodesBuilder.build()); + diffManifestBuilder.discoveryNodesUpdated(true); + manifestBuilder.discoveryNodesMetadata(new UploadedMetadataAttribute(DISCOVERY_NODES, DISCOVERY_NODES_FILENAME)); + when(blobContainer.readBlob(DISCOVERY_NODES_FILENAME)).thenAnswer(invocationOnMock -> { + BytesReference bytes = DISCOVERY_NODES_FORMAT.serialize(nodesBuilder.build(), 
DISCOVERY_NODES_FILENAME, compressor); + return new ByteArrayInputStream(bytes.streamInput().readAllBytes()); + }); + } + if (randomBoolean()) { + // update blocks + ClusterBlocks newClusterBlock = randomClusterBlocks(); + expectedClusterStateBuilder.blocks(newClusterBlock); + diffManifestBuilder.clusterBlocksUpdated(true); + manifestBuilder.clusterBlocksMetadata(new UploadedMetadataAttribute(CLUSTER_BLOCKS, CLUSTER_BLOCKS_FILENAME)); + when(blobContainer.readBlob(CLUSTER_BLOCKS_FILENAME)).thenAnswer(invocationOnMock -> { + BytesReference bytes = CLUSTER_BLOCKS_FORMAT.serialize(newClusterBlock, CLUSTER_BLOCKS_FILENAME, compressor); + return new ByteArrayInputStream(bytes.streamInput().readAllBytes()); + }); + + } + ClusterState expectedClusterState = expectedClusterStateBuilder.metadata(mb).build(); + ClusterStateDiffManifest diffManifest = diffManifestBuilder.build(); + manifestBuilder.diffManifest(diffManifest) + .stateUUID(clusterState.stateUUID()) + .stateVersion(clusterState.version()) + .metadataVersion(clusterState.metadata().version()) + .clusterUUID(clusterState.getMetadata().clusterUUID()) + .routingTableVersion(clusterState.getRoutingTable().version()); + + remoteClusterStateService.start(); + ClusterState updatedClusterState = remoteClusterStateService.getClusterStateUsingDiff( + manifestBuilder.build(), + clusterState, + NODE_ID + ); + + assertEquals(expectedClusterState.getClusterName(), updatedClusterState.getClusterName()); + assertEquals(expectedClusterState.stateUUID(), updatedClusterState.stateUUID()); + assertEquals(expectedClusterState.version(), updatedClusterState.version()); + assertEquals(expectedClusterState.metadata().clusterUUID(), updatedClusterState.metadata().clusterUUID()); + assertEquals(expectedClusterState.getRoutingTable().version(), updatedClusterState.getRoutingTable().version()); + assertNotEquals(diffManifest.isClusterBlocksUpdated(), updatedClusterState.getBlocks().equals(clusterState.getBlocks())); + assertNotEquals(diffManifest.isDiscoveryNodesUpdated(), updatedClusterState.getNodes().equals(clusterState.getNodes())); + assertNotEquals( + diffManifest.isCoordinationMetadataUpdated(), + updatedClusterState.getMetadata().coordinationMetadata().equals(clusterState.getMetadata().coordinationMetadata()) + ); + assertNotEquals( + diffManifest.isTemplatesMetadataUpdated(), + updatedClusterState.getMetadata().templates().equals(clusterState.getMetadata().getTemplates()) + ); + assertNotEquals( + diffManifest.isSettingsMetadataUpdated(), + updatedClusterState.getMetadata().persistentSettings().equals(clusterState.getMetadata().persistentSettings()) + ); + assertNotEquals( + diffManifest.isTransientSettingsMetadataUpdated(), + updatedClusterState.getMetadata().transientSettings().equals(clusterState.getMetadata().transientSettings()) + ); + diffManifest.getIndicesUpdated().forEach(indexName -> { + IndexMetadata updatedIndexMetadata = updatedClusterState.metadata().index(indexName); + IndexMetadata originalIndexMetadata = clusterState.metadata().index(indexName); + assertNotEquals(originalIndexMetadata, updatedIndexMetadata); + }); + diffManifest.getCustomMetadataUpdated().forEach(customMetadataName -> { + Metadata.Custom updatedCustomMetadata = updatedClusterState.metadata().custom(customMetadataName); + Metadata.Custom originalCustomMetadata = clusterState.metadata().custom(customMetadataName); + assertNotEquals(originalCustomMetadata, updatedCustomMetadata); + }); + diffManifest.getClusterStateCustomUpdated().forEach(clusterStateCustomName -> { + 
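+            // Each cluster-state custom flagged as updated in the diff manifest must differ from its original.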
ClusterState.Custom updateClusterStateCustom = updatedClusterState.customs().get(clusterStateCustomName); + ClusterState.Custom originalClusterStateCustom = clusterState.customs().get(clusterStateCustomName); + assertNotEquals(originalClusterStateCustom, updateClusterStateCustom); + }); + diffManifest.getIndicesRoutingUpdated().forEach(indexName -> { + IndexRoutingTable updatedIndexRoutingTable = updatedClusterState.getRoutingTable().getIndicesRouting().get(indexName); + IndexRoutingTable originalIndexingRoutingTable = clusterState.getRoutingTable().getIndicesRouting().get(indexName); + assertNotEquals(originalIndexingRoutingTable, updatedIndexRoutingTable); + }); + diffManifest.getIndicesDeleted() + .forEach(indexName -> { assertFalse(updatedClusterState.metadata().getIndices().containsKey(indexName)); }); + diffManifest.getCustomMetadataDeleted().forEach(customMetadataName -> { + assertFalse(updatedClusterState.metadata().customs().containsKey(customMetadataName)); + }); + diffManifest.getClusterStateCustomDeleted().forEach(clusterStateCustomName -> { + assertFalse(updatedClusterState.customs().containsKey(clusterStateCustomName)); + }); + diffManifest.getIndicesRoutingDeleted().forEach(indexName -> { + assertFalse(updatedClusterState.getRoutingTable().getIndicesRouting().containsKey(indexName)); + }); + } + + public void testReadClusterStateInParallel_TimedOut() throws IOException { + ClusterState previousClusterState = generateClusterStateWithAllAttributes().build(); + ClusterMetadataManifest manifest = generateClusterMetadataManifestWithAllAttributes().build(); + BlobContainer container = mockBlobStoreObjects(); + int readTimeOut = 2; + Settings newSettings = Settings.builder().put("cluster.remote_store.state.read_timeout", readTimeOut + "s").build(); + clusterSettings.applySettings(newSettings); + when(container.readBlob(anyString())).thenAnswer(invocationOnMock -> { + Thread.sleep(readTimeOut * 1000 + 100); + return null; + }); + remoteClusterStateService.start(); + RemoteStateTransferException exception = expectThrows( + RemoteStateTransferException.class, + () -> remoteClusterStateService.readClusterStateInParallel( + previousClusterState, + manifest, + manifest.getClusterUUID(), + NODE_ID, + emptyList(), + emptyMap(), + true, + true, + true, + true, + true, + true, + emptyList(), + true, + emptyMap(), + true + ) + ); + assertEquals("Timed out waiting to read cluster state from remote within timeout " + readTimeOut + "s", exception.getMessage()); + } + + public void testReadClusterStateInParallel_ExceptionDuringRead() throws IOException { + ClusterState previousClusterState = generateClusterStateWithAllAttributes().build(); + ClusterMetadataManifest manifest = generateClusterMetadataManifestWithAllAttributes().build(); + BlobContainer container = mockBlobStoreObjects(); + Exception mockException = new IOException("mock exception"); + when(container.readBlob(anyString())).thenThrow(mockException); + remoteClusterStateService.start(); + RemoteStateTransferException exception = expectThrows( + RemoteStateTransferException.class, + () -> remoteClusterStateService.readClusterStateInParallel( + previousClusterState, + manifest, + manifest.getClusterUUID(), + NODE_ID, + emptyList(), + emptyMap(), + true, + true, + true, + true, + true, + true, + emptyList(), + true, + emptyMap(), + true + ) + ); + assertEquals("Exception during reading cluster state from remote", exception.getMessage()); + assertTrue(exception.getSuppressed().length > 0); + assertEquals(mockException, 
+            exception.getSuppressed()[0]);
+    }
+
+    public void testReadClusterStateInParallel_UnexpectedResult() throws IOException {
+        ClusterState previousClusterState = generateClusterStateWithAllAttributes().build();
+        // index already present in previous state
+        List<UploadedIndexMetadata> uploadedIndexMetadataList = new ArrayList<>(
+            List.of(new UploadedIndexMetadata("test-index", "test-index-uuid", "test-index-file__2"))
+        );
+        // new index to be added
+        List<UploadedIndexMetadata> newIndicesToRead = List.of(
+            new UploadedIndexMetadata("test-index-1", "test-index-1-uuid", "test-index-1-file__2")
+        );
+        uploadedIndexMetadataList.addAll(newIndicesToRead);
+        // existing custom metadata
+        Map<String, UploadedMetadataAttribute> uploadedCustomMetadataMap = new HashMap<>(
+            Map.of(
+                "custom_md_1",
+                new UploadedMetadataAttribute("custom_md_1", "test-custom1-file__1"),
+                "custom_md_2",
+                new UploadedMetadataAttribute("custom_md_2", "test-custom2-file__1")
+            )
+        );
+        // new custom metadata to be added
+        Map<String, UploadedMetadataAttribute> newCustomMetadataMap = Map.of(
+            "custom_md_3",
+            new UploadedMetadataAttribute("custom_md_3", "test-custom3-file__1")
+        );
+        uploadedCustomMetadataMap.putAll(newCustomMetadataMap);
+        // already existing cluster state customs
+        Map<String, UploadedMetadataAttribute> uploadedClusterStateCustomMap = new HashMap<>(
+            Map.of(
+                "custom_1",
+                new UploadedMetadataAttribute("custom_1", "test-cluster-state-custom1-file__1"),
+                "custom_2",
+                new UploadedMetadataAttribute("custom_2", "test-cluster-state-custom2-file__1")
+            )
+        );
+        // new customs uploaded
+        Map<String, UploadedMetadataAttribute> newClusterStateCustoms = Map.of(
+            "custom_3",
+            new UploadedMetadataAttribute("custom_3", "test-cluster-state-custom3-file__1")
+        );
+        uploadedClusterStateCustomMap.putAll(newClusterStateCustoms);
+        ClusterMetadataManifest manifest = ClusterMetadataManifest.builder()
+            .clusterUUID(previousClusterState.getMetadata().clusterUUID())
+            .indices(uploadedIndexMetadataList)
+            .coordinationMetadata(new UploadedMetadataAttribute(COORDINATION_METADATA, COORDINATION_METADATA_FILENAME))
+            .settingMetadata(new UploadedMetadataAttribute(SETTING_METADATA, PERSISTENT_SETTINGS_FILENAME))
+            .transientSettingsMetadata(new UploadedMetadataAttribute(TRANSIENT_SETTING_METADATA, TRANSIENT_SETTINGS_FILENAME))
+            .templatesMetadata(new UploadedMetadataAttribute(TEMPLATES_METADATA, TEMPLATES_METADATA_FILENAME))
+            .hashesOfConsistentSettings(
+                new UploadedMetadataAttribute(HASHES_OF_CONSISTENT_SETTINGS, HASHES_OF_CONSISTENT_SETTINGS_FILENAME)
+            )
+            .customMetadataMap(uploadedCustomMetadataMap)
+            .discoveryNodesMetadata(new UploadedMetadataAttribute(DISCOVERY_NODES, DISCOVERY_NODES_FILENAME))
+            .clusterBlocksMetadata(new UploadedMetadataAttribute(CLUSTER_BLOCKS, CLUSTER_BLOCKS_FILENAME))
+            .clusterStateCustomMetadataMap(uploadedClusterStateCustomMap)
+            .build();
+
+        RemoteReadResult mockResult = mock(RemoteReadResult.class);
+        RemoteIndexMetadataManager mockIndexMetadataManager = mock(RemoteIndexMetadataManager.class);
+        CheckedRunnable<IOException> mockRunnable = mock(CheckedRunnable.class);
+        ArgumentCaptor<LatchedActionListener<RemoteReadResult>> latchCapture = ArgumentCaptor.forClass(LatchedActionListener.class);
+        when(mockIndexMetadataManager.getAsyncIndexMetadataReadAction(anyString(), anyString(), latchCapture.capture())).thenReturn(
+            mockRunnable
+        );
+        RemoteGlobalMetadataManager mockGlobalMetadataManager = mock(RemoteGlobalMetadataManager.class);
+        when(mockGlobalMetadataManager.getAsyncMetadataReadAction(any(), anyString(), latchCapture.capture())).thenReturn(mockRunnable);
+        RemoteClusterStateAttributesManager mockClusterStateAttributeManager = mock(RemoteClusterStateAttributesManager.class);
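+        // All three managers return the same stubbed runnable; running it reports the unrecognized component "mock-result" back through the captured listener.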
when(mockClusterStateAttributeManager.getAsyncMetadataReadAction(anyString(), any(), latchCapture.capture())).thenReturn( + mockRunnable + ); + doAnswer(invocationOnMock -> { + latchCapture.getValue().onResponse(mockResult); + return null; + }).when(mockRunnable).run(); + when(mockResult.getComponent()).thenReturn("mock-result"); + remoteClusterStateService.start(); + remoteClusterStateService.setRemoteIndexMetadataManager(mockIndexMetadataManager); + remoteClusterStateService.setRemoteGlobalMetadataManager(mockGlobalMetadataManager); + remoteClusterStateService.setRemoteClusterStateAttributesManager(mockClusterStateAttributeManager); + IllegalStateException exception = expectThrows( + IllegalStateException.class, + () -> remoteClusterStateService.readClusterStateInParallel( + previousClusterState, + manifest, + manifest.getClusterUUID(), + NODE_ID, + newIndicesToRead, + newCustomMetadataMap, + true, + true, + true, + true, + true, + true, + emptyList(), + true, + newClusterStateCustoms, + true + ) + ); + assertEquals("Unknown component: mock-result", exception.getMessage()); + newIndicesToRead.forEach( + uploadedIndexMetadata -> verify(mockIndexMetadataManager, times(1)).getAsyncIndexMetadataReadAction( + eq(previousClusterState.getMetadata().clusterUUID()), + eq(uploadedIndexMetadata.getUploadedFilename()), + any() + ) + ); + verify(mockGlobalMetadataManager, times(1)).getAsyncMetadataReadAction( + argThat(new BlobNameMatcher(COORDINATION_METADATA_FILENAME)), + eq(COORDINATION_METADATA), + any() + ); + verify(mockGlobalMetadataManager, times(1)).getAsyncMetadataReadAction( + argThat(new BlobNameMatcher(PERSISTENT_SETTINGS_FILENAME)), + eq(SETTING_METADATA), + any() + ); + verify(mockGlobalMetadataManager, times(1)).getAsyncMetadataReadAction( + argThat(new BlobNameMatcher(TRANSIENT_SETTINGS_FILENAME)), + eq(TRANSIENT_SETTING_METADATA), + any() + ); + verify(mockGlobalMetadataManager, times(1)).getAsyncMetadataReadAction( + argThat(new BlobNameMatcher(TEMPLATES_METADATA_FILENAME)), + eq(TEMPLATES_METADATA), + any() + ); + verify(mockGlobalMetadataManager, times(1)).getAsyncMetadataReadAction( + argThat(new BlobNameMatcher(HASHES_OF_CONSISTENT_SETTINGS_FILENAME)), + eq(HASHES_OF_CONSISTENT_SETTINGS), + any() + ); + newCustomMetadataMap.keySet().forEach(uploadedCustomMetadataKey -> { + verify(mockGlobalMetadataManager, times(1)).getAsyncMetadataReadAction( + argThat(new BlobNameMatcher(newCustomMetadataMap.get(uploadedCustomMetadataKey).getUploadedFilename())), + eq(uploadedCustomMetadataKey), + any() + ); + }); + verify(mockClusterStateAttributeManager, times(1)).getAsyncMetadataReadAction( + eq(DISCOVERY_NODES), + argThat(new BlobNameMatcher(DISCOVERY_NODES_FILENAME)), + any() + ); + verify(mockClusterStateAttributeManager, times(1)).getAsyncMetadataReadAction( + eq(CLUSTER_BLOCKS), + argThat(new BlobNameMatcher(CLUSTER_BLOCKS_FILENAME)), + any() + ); + newClusterStateCustoms.keySet().forEach(uploadedClusterStateCustomMetadataKey -> { + verify(mockClusterStateAttributeManager, times(1)).getAsyncMetadataReadAction( + eq(String.join(CUSTOM_DELIMITER, CLUSTER_STATE_CUSTOM, uploadedClusterStateCustomMetadataKey)), + argThat(new BlobNameMatcher(newClusterStateCustoms.get(uploadedClusterStateCustomMetadataKey).getUploadedFilename())), + any() + ); + }); + } + + public void testReadClusterStateInParallel_Success() throws IOException { + ClusterState previousClusterState = generateClusterStateWithAllAttributes().build(); + String indexFilename = "test-index-1-file__2"; + String 
+            customMetadataFilename = "test-custom3-file__1";
+        String clusterStateCustomFilename = "test-cluster-state-custom3-file__1";
+        // index already present in previous state
+        List<UploadedIndexMetadata> uploadedIndexMetadataList = new ArrayList<>(
+            List.of(new UploadedIndexMetadata("test-index", "test-index-uuid", "test-index-file__2"))
+        );
+        // new index to be added
+        List<UploadedIndexMetadata> newIndicesToRead = List.of(
+            new UploadedIndexMetadata("test-index-1", "test-index-1-uuid", indexFilename)
+        );
+        uploadedIndexMetadataList.addAll(newIndicesToRead);
+        // existing custom metadata
+        Map<String, UploadedMetadataAttribute> uploadedCustomMetadataMap = new HashMap<>(
+            Map.of(
+                "custom_md_1",
+                new UploadedMetadataAttribute("custom_md_1", "test-custom1-file__1"),
+                "custom_md_2",
+                new UploadedMetadataAttribute("custom_md_2", "test-custom2-file__1")
+            )
+        );
+        // new custom metadata to be added
+        Map<String, UploadedMetadataAttribute> newCustomMetadataMap = Map.of(
+            "custom_md_3",
+            new UploadedMetadataAttribute("custom_md_3", customMetadataFilename)
+        );
+        uploadedCustomMetadataMap.putAll(newCustomMetadataMap);
+        // already existing cluster state customs
+        Map<String, UploadedMetadataAttribute> uploadedClusterStateCustomMap = new HashMap<>(
+            Map.of(
+                "custom_1",
+                new UploadedMetadataAttribute("custom_1", "test-cluster-state-custom1-file__1"),
+                "custom_2",
+                new UploadedMetadataAttribute("custom_2", "test-cluster-state-custom2-file__1")
+            )
+        );
+        // new customs uploaded
+        Map<String, UploadedMetadataAttribute> newClusterStateCustoms = Map.of(
+            "custom_3",
+            new UploadedMetadataAttribute("custom_3", clusterStateCustomFilename)
+        );
+        uploadedClusterStateCustomMap.putAll(newClusterStateCustoms);
+
+        ClusterMetadataManifest manifest = generateClusterMetadataManifestWithAllAttributes().indices(uploadedIndexMetadataList)
+            .customMetadataMap(uploadedCustomMetadataMap)
+            .clusterStateCustomMetadataMap(uploadedClusterStateCustomMap)
+            .build();
+
+        IndexMetadata newIndexMetadata = new IndexMetadata.Builder("test-index-1").state(IndexMetadata.State.OPEN)
+            .settings(Settings.builder().put(IndexMetadata.SETTING_VERSION_CREATED, Version.CURRENT).build())
+            .numberOfShards(1)
+            .numberOfReplicas(1)
+            .build();
+        CustomMetadata3 customMetadata3 = new CustomMetadata3("custom_md_3");
+        CoordinationMetadata updatedCoordinationMetadata = CoordinationMetadata.builder()
+            .term(previousClusterState.metadata().coordinationMetadata().term() + 1)
+            .build();
+        Settings updatedPersistentSettings = Settings.builder()
+            .put("random_persistent_setting_" + randomAlphaOfLength(3), randomAlphaOfLength(5))
+            .build();
+        Settings updatedTransientSettings = Settings.builder()
+            .put("random_transient_setting_" + randomAlphaOfLength(3), randomAlphaOfLength(5))
+            .build();
+        TemplatesMetadata updatedTemplateMetadata = getTemplatesMetadata();
+        DiffableStringMap updatedHashesOfConsistentSettings = getHashesOfConsistentSettings();
+        DiscoveryNodes updatedDiscoveryNodes = getDiscoveryNodes();
+        ClusterBlocks updatedClusterBlocks = randomClusterBlocks();
+        TestClusterStateCustom3 updatedClusterStateCustom3 = new TestClusterStateCustom3("custom_3");
+
+        RemoteIndexMetadataManager mockedIndexManager = mock(RemoteIndexMetadataManager.class);
+        RemoteGlobalMetadataManager mockedGlobalMetadataManager = mock(RemoteGlobalMetadataManager.class);
+        RemoteClusterStateAttributesManager mockedClusterStateAttributeManager = mock(RemoteClusterStateAttributesManager.class);
+
+        when(
+            mockedIndexManager.getAsyncIndexMetadataReadAction(
+                eq(manifest.getClusterUUID()),
+                eq(indexFilename),
+                any(LatchedActionListener.class)
+            )
+        ).thenAnswer(invocationOnMock -> {
+            LatchedActionListener<RemoteReadResult> latchedActionListener
= invocationOnMock.getArgument(2, LatchedActionListener.class); + return (CheckedRunnable) () -> latchedActionListener.onResponse( + new RemoteReadResult(newIndexMetadata, INDEX, "test-index-1") + ); + }); + when( + mockedGlobalMetadataManager.getAsyncMetadataReadAction( + argThat(new BlobNameMatcher(customMetadataFilename)), + eq("custom_md_3"), + any() + ) + ).thenAnswer(invocationOnMock -> { + LatchedActionListener latchedActionListener = invocationOnMock.getArgument(2, LatchedActionListener.class); + return (CheckedRunnable) () -> latchedActionListener.onResponse( + new RemoteReadResult(customMetadata3, CUSTOM_METADATA, "custom_md_3") + ); + }); + when( + mockedGlobalMetadataManager.getAsyncMetadataReadAction( + argThat(new BlobNameMatcher(COORDINATION_METADATA_FILENAME)), + eq(COORDINATION_METADATA), + any() + ) + ).thenAnswer(invocationOnMock -> { + LatchedActionListener latchedActionListener = invocationOnMock.getArgument(2, LatchedActionListener.class); + return (CheckedRunnable) () -> latchedActionListener.onResponse( + new RemoteReadResult(updatedCoordinationMetadata, COORDINATION_METADATA, COORDINATION_METADATA) + ); + }); + when( + mockedGlobalMetadataManager.getAsyncMetadataReadAction( + argThat(new BlobNameMatcher(PERSISTENT_SETTINGS_FILENAME)), + eq(SETTING_METADATA), + any() + ) + ).thenAnswer(invocationOnMock -> { + LatchedActionListener latchedActionListener = invocationOnMock.getArgument(2, LatchedActionListener.class); + + return (CheckedRunnable) () -> latchedActionListener.onResponse( + new RemoteReadResult(updatedPersistentSettings, SETTING_METADATA, SETTING_METADATA) + ); + }); + when( + mockedGlobalMetadataManager.getAsyncMetadataReadAction( + argThat(new BlobNameMatcher(TRANSIENT_SETTINGS_FILENAME)), + eq(TRANSIENT_SETTING_METADATA), + any() + ) + ).thenAnswer(invocationOnMock -> { + LatchedActionListener latchedActionListener = invocationOnMock.getArgument(2, LatchedActionListener.class); + return (CheckedRunnable) () -> latchedActionListener.onResponse( + new RemoteReadResult(updatedTransientSettings, TRANSIENT_SETTING_METADATA, TRANSIENT_SETTING_METADATA) + ); + }); + when( + mockedGlobalMetadataManager.getAsyncMetadataReadAction( + argThat(new BlobNameMatcher(TEMPLATES_METADATA_FILENAME)), + eq(TEMPLATES_METADATA), + any() + ) + ).thenAnswer(invocationOnMock -> { + LatchedActionListener latchedActionListener = invocationOnMock.getArgument(2, LatchedActionListener.class); + return (CheckedRunnable) () -> latchedActionListener.onResponse( + new RemoteReadResult(updatedTemplateMetadata, TEMPLATES_METADATA, TEMPLATES_METADATA) + ); + }); + when( + mockedGlobalMetadataManager.getAsyncMetadataReadAction( + argThat(new BlobNameMatcher(HASHES_OF_CONSISTENT_SETTINGS_FILENAME)), + eq(HASHES_OF_CONSISTENT_SETTINGS), + any() + ) + ).thenAnswer(invocationOnMock -> { + LatchedActionListener latchedActionListener = invocationOnMock.getArgument(2, LatchedActionListener.class); + return (CheckedRunnable) () -> latchedActionListener.onResponse( + new RemoteReadResult(updatedHashesOfConsistentSettings, HASHES_OF_CONSISTENT_SETTINGS, HASHES_OF_CONSISTENT_SETTINGS) + ); + }); + when( + mockedClusterStateAttributeManager.getAsyncMetadataReadAction( + eq(DISCOVERY_NODES), + argThat(new BlobNameMatcher(DISCOVERY_NODES_FILENAME)), + any() + ) + ).thenAnswer(invocationOnMock -> { + LatchedActionListener latchedActionListener = invocationOnMock.getArgument(2, LatchedActionListener.class); + return (CheckedRunnable) () -> latchedActionListener.onResponse( + new 
RemoteReadResult(updatedDiscoveryNodes, CLUSTER_STATE_ATTRIBUTE, DISCOVERY_NODES) + ); + }); + when( + mockedClusterStateAttributeManager.getAsyncMetadataReadAction( + eq(CLUSTER_BLOCKS), + argThat(new BlobNameMatcher(CLUSTER_BLOCKS_FILENAME)), + any() + ) + ).thenAnswer(invocationOnMock -> { + LatchedActionListener latchedActionListener = invocationOnMock.getArgument(2, LatchedActionListener.class); + return (CheckedRunnable) () -> latchedActionListener.onResponse( + new RemoteReadResult(updatedClusterBlocks, CLUSTER_STATE_ATTRIBUTE, CLUSTER_BLOCKS) + ); + }); + when( + mockedClusterStateAttributeManager.getAsyncMetadataReadAction( + eq(String.join(CUSTOM_DELIMITER, CLUSTER_STATE_CUSTOM, updatedClusterStateCustom3.getWriteableName())), + argThat(new BlobNameMatcher(clusterStateCustomFilename)), + any() + ) + ).thenAnswer(invocationOnMock -> { + LatchedActionListener latchedActionListener = invocationOnMock.getArgument(2, LatchedActionListener.class); + return (CheckedRunnable) () -> latchedActionListener.onResponse( + new RemoteReadResult( + updatedClusterStateCustom3, + CLUSTER_STATE_ATTRIBUTE, + String.join(CUSTOM_DELIMITER, CLUSTER_STATE_CUSTOM, updatedClusterStateCustom3.getWriteableName()) + ) + ); + }); + + remoteClusterStateService.start(); + remoteClusterStateService.setRemoteIndexMetadataManager(mockedIndexManager); + remoteClusterStateService.setRemoteGlobalMetadataManager(mockedGlobalMetadataManager); + remoteClusterStateService.setRemoteClusterStateAttributesManager(mockedClusterStateAttributeManager); + + ClusterState updatedClusterState = remoteClusterStateService.readClusterStateInParallel( + previousClusterState, + manifest, + manifest.getClusterUUID(), + NODE_ID, + newIndicesToRead, + newCustomMetadataMap, + true, + true, + true, + true, + true, + true, + emptyList(), + true, + newClusterStateCustoms, + true + ); + + assertEquals(uploadedIndexMetadataList.size(), updatedClusterState.metadata().indices().size()); + assertTrue(updatedClusterState.metadata().indices().containsKey("test-index-1")); + assertEquals(newIndexMetadata, updatedClusterState.metadata().index(newIndexMetadata.getIndex())); + uploadedCustomMetadataMap.keySet().forEach(key -> assertTrue(updatedClusterState.metadata().customs().containsKey(key))); + assertEquals(customMetadata3, updatedClusterState.metadata().custom(customMetadata3.getWriteableName())); + assertEquals( + previousClusterState.metadata().coordinationMetadata().term() + 1, + updatedClusterState.metadata().coordinationMetadata().term() + ); + assertEquals(updatedPersistentSettings, updatedClusterState.metadata().persistentSettings()); + assertEquals(updatedTransientSettings, updatedClusterState.metadata().transientSettings()); + assertEquals(updatedTemplateMetadata.getTemplates(), updatedClusterState.metadata().templates()); + assertEquals(updatedHashesOfConsistentSettings, updatedClusterState.metadata().hashesOfConsistentSettings()); + assertEquals(updatedDiscoveryNodes.getSize(), updatedClusterState.getNodes().getSize()); + updatedDiscoveryNodes.getNodes().forEach((nodeId, node) -> assertEquals(updatedClusterState.getNodes().get(nodeId), node)); + assertEquals(updatedDiscoveryNodes.getClusterManagerNodeId(), updatedClusterState.getNodes().getClusterManagerNodeId()); + assertEquals(updatedClusterBlocks, updatedClusterState.blocks()); + uploadedClusterStateCustomMap.keySet().forEach(key -> assertTrue(updatedClusterState.customs().containsKey(key))); + assertEquals(updatedClusterStateCustom3, updatedClusterState.custom("custom_3")); + 
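+        // Verify each component read was dispatched exactly once to the manager responsible for it.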
newIndicesToRead.forEach( + uploadedIndexMetadata -> verify(mockedIndexManager, times(1)).getAsyncIndexMetadataReadAction( + eq(previousClusterState.getMetadata().clusterUUID()), + eq(uploadedIndexMetadata.getUploadedFilename()), + any() + ) + ); + verify(mockedGlobalMetadataManager, times(1)).getAsyncMetadataReadAction( + argThat(new BlobNameMatcher(COORDINATION_METADATA_FILENAME)), + eq(COORDINATION_METADATA), + any() + ); + verify(mockedGlobalMetadataManager, times(1)).getAsyncMetadataReadAction( + argThat(new BlobNameMatcher(PERSISTENT_SETTINGS_FILENAME)), + eq(SETTING_METADATA), + any() + ); + verify(mockedGlobalMetadataManager, times(1)).getAsyncMetadataReadAction( + argThat(new BlobNameMatcher(TRANSIENT_SETTINGS_FILENAME)), + eq(TRANSIENT_SETTING_METADATA), + any() + ); + verify(mockedGlobalMetadataManager, times(1)).getAsyncMetadataReadAction( + argThat(new BlobNameMatcher(TEMPLATES_METADATA_FILENAME)), + eq(TEMPLATES_METADATA), + any() + ); + verify(mockedGlobalMetadataManager, times(1)).getAsyncMetadataReadAction( + argThat(new BlobNameMatcher(HASHES_OF_CONSISTENT_SETTINGS_FILENAME)), + eq(HASHES_OF_CONSISTENT_SETTINGS), + any() + ); + newCustomMetadataMap.keySet().forEach(uploadedCustomMetadataKey -> { + verify(mockedGlobalMetadataManager, times(1)).getAsyncMetadataReadAction( + argThat(new BlobNameMatcher(newCustomMetadataMap.get(uploadedCustomMetadataKey).getUploadedFilename())), + eq(uploadedCustomMetadataKey), + any() + ); + }); + verify(mockedClusterStateAttributeManager, times(1)).getAsyncMetadataReadAction( + eq(DISCOVERY_NODES), + argThat(new BlobNameMatcher(DISCOVERY_NODES_FILENAME)), + any() + ); + verify(mockedClusterStateAttributeManager, times(1)).getAsyncMetadataReadAction( + eq(CLUSTER_BLOCKS), + argThat(new BlobNameMatcher(CLUSTER_BLOCKS_FILENAME)), + any() + ); + newClusterStateCustoms.keySet().forEach(uploadedClusterStateCustomMetadataKey -> { + verify(mockedClusterStateAttributeManager, times(1)).getAsyncMetadataReadAction( + eq(String.join(CUSTOM_DELIMITER, CLUSTER_STATE_CUSTOM, uploadedClusterStateCustomMetadataKey)), + argThat(new BlobNameMatcher(newClusterStateCustoms.get(uploadedClusterStateCustomMetadataKey).getUploadedFilename())), + any() + ); + }); + } + /* * Here we will verify the migration of manifest file from codec V0. 
  *
@@ -1857,7 +2920,7 @@ private void mockObjectsForGettingPreviousClusterUUID(
         BlobContainer[] mockBlobContainerOrderedArray = new BlobContainer[mockBlobContainerOrderedList.size()];
         mockBlobContainerOrderedList.toArray(mockBlobContainerOrderedArray);
         when(blobStore.blobContainer(ArgumentMatchers.any())).thenReturn(uuidBlobContainer, mockBlobContainerOrderedArray);
-        when(blobStoreRepository.getCompressor()).thenReturn(new DeflateCompressor());
+        when(blobStoreRepository.getCompressor()).thenReturn(compressor);
     }
 
     private ClusterMetadataManifest generateV1ClusterMetadataManifest(
@@ -1986,7 +3049,7 @@ private void mockBlobContainer(
             }
             String fileName = uploadedIndexMetadata.getUploadedFilename();
             when(blobContainer.readBlob(getFormattedIndexFileName(fileName))).thenAnswer((invocationOnMock) -> {
-                BytesReference bytesIndexMetadata = RemoteIndexMetadata.INDEX_METADATA_FORMAT.serialize(
+                BytesReference bytesIndexMetadata = INDEX_METADATA_FORMAT.serialize(
                     indexMetadata,
                     fileName,
                     blobStoreRepository.getCompressor(),
@@ -2057,12 +3120,6 @@ private void mockBlobContainerForGlobalMetadata(
             .stream()
             .collect(Collectors.toMap(Map.Entry::getKey, entry -> getFileNameFromPath(entry.getValue().getUploadedFilename())));
 
-        // ChecksumBlobStoreFormat<Metadata.Custom> customMetadataFormat = new ChecksumBlobStoreFormat<>(
-        //     "custom",
-        //     METADATA_NAME_PLAIN_FORMAT,
-        //     null
-        // );
-
         ChecksumWritableBlobStoreFormat<Metadata.Custom> customMetadataFormat = new ChecksumWritableBlobStoreFormat<>("custom", null);
         for (Map.Entry<String, String> entry : customFileMap.entrySet()) {
             String custom = entry.getKey();
@@ -2147,32 +3204,105 @@ static ClusterState.Builder generateClusterStateWithOneIndex() {
             .routingTable(RoutingTable.builder().addAsNew(indexMetadata).version(1L).build());
     }
 
+    static ClusterState.Builder generateClusterStateWithAllAttributes() {
+        final Index index = new Index("test-index", "index-uuid");
+        final Settings idxSettings = Settings.builder()
+            .put(IndexMetadata.SETTING_VERSION_CREATED, Version.CURRENT)
+            .put(IndexMetadata.SETTING_INDEX_UUID, index.getUUID())
+            .build();
+        final IndexMetadata indexMetadata = new IndexMetadata.Builder(index.getName()).settings(idxSettings)
+            .numberOfShards(1)
+            .numberOfReplicas(0)
+            .build();
+        final CoordinationMetadata coordinationMetadata = CoordinationMetadata.builder().term(1L).build();
+        final Settings settings = Settings.builder().put("mock-settings", true).build();
+        final Settings transientSettings = Settings.builder().put("mock-transient-settings", true).build();
+        final DiffableStringMap hashesOfConsistentSettings = new DiffableStringMap(emptyMap());
+        final TemplatesMetadata templatesMetadata = TemplatesMetadata.builder()
+            .put(IndexTemplateMetadata.builder("template-1").patterns(List.of("test-index*")).build())
+            .build();
+        final CustomMetadata1 customMetadata1 = new CustomMetadata1("custom-metadata-1");
+        final CustomMetadata2 customMetadata2 = new CustomMetadata2("custom-metadata-2");
+        final DiscoveryNodes nodes = nodesWithLocalNodeClusterManager();
+        final ClusterBlocks clusterBlocks = randomClusterBlocks();
+        final TestClusterStateCustom1 custom1 = new RemoteClusterStateTestUtils.TestClusterStateCustom1("custom-1");
+        final TestClusterStateCustom2 custom2 = new RemoteClusterStateTestUtils.TestClusterStateCustom2("custom-2");
+        return ClusterState.builder(ClusterName.DEFAULT)
+            .version(1L)
+            .stateUUID("state-uuid")
+            .metadata(
+                Metadata.builder()
+                    .version(randomNonNegativeLong())
+                    .put(indexMetadata, true)
+                    .clusterUUID("cluster-uuid")
+                    .coordinationMetadata(coordinationMetadata)
+                    .persistentSettings(settings)
+                    .transientSettings(transientSettings)
+                    .hashesOfConsistentSettings(hashesOfConsistentSettings)
+                    .templates(templatesMetadata)
+                    .putCustom(customMetadata1.getWriteableName(), customMetadata1)
+                    .putCustom(customMetadata2.getWriteableName(), customMetadata2)
+                    .build()
+            )
+            .routingTable(RoutingTable.builder().addAsNew(indexMetadata).version(1L).build())
+            .nodes(nodes)
+            .blocks(clusterBlocks)
+            .putCustom(custom1.getWriteableName(), custom1)
+            .putCustom(custom2.getWriteableName(), custom2);
+    }
+
+    static ClusterMetadataManifest.Builder generateClusterMetadataManifestWithAllAttributes() {
+        return ClusterMetadataManifest.builder()
+            .codecVersion(CODEC_V2)
+            .clusterUUID("cluster-uuid")
+            .indices(List.of(new UploadedIndexMetadata("test-index", "test-index-uuid", "test-index-file__2")))
+            .customMetadataMap(
+                Map.of(
+                    "custom_md_1",
+                    new UploadedMetadataAttribute("custom_md_1", "test-custom1-file__1"),
+                    "custom_md_2",
+                    new UploadedMetadataAttribute("custom_md_2", "test-custom2-file__1")
+                )
+            )
+            .coordinationMetadata(new UploadedMetadataAttribute(COORDINATION_METADATA, COORDINATION_METADATA_FILENAME))
+            .settingMetadata(new UploadedMetadataAttribute(SETTING_METADATA, PERSISTENT_SETTINGS_FILENAME))
+            .transientSettingsMetadata(new UploadedMetadataAttribute(TRANSIENT_SETTING_METADATA, TRANSIENT_SETTINGS_FILENAME))
+            .templatesMetadata(new UploadedMetadataAttribute(TEMPLATES_METADATA, TEMPLATES_METADATA_FILENAME))
+            .hashesOfConsistentSettings(
+                new UploadedMetadataAttribute(HASHES_OF_CONSISTENT_SETTINGS, HASHES_OF_CONSISTENT_SETTINGS_FILENAME)
+            )
+            .discoveryNodesMetadata(new UploadedMetadataAttribute(DISCOVERY_NODES, DISCOVERY_NODES_FILENAME))
+            .clusterBlocksMetadata(new UploadedMetadataAttribute(CLUSTER_BLOCKS, CLUSTER_BLOCKS_FILENAME))
+            .clusterStateCustomMetadataMap(
+                Map.of(
+                    "custom_1",
+                    new UploadedMetadataAttribute("custom_1", "test-cluster-state-custom1-file__1"),
+                    "custom_2",
+                    new UploadedMetadataAttribute("custom_2", "test-cluster-state-custom2-file__1")
+                )
+            );
+    }
+
     static DiscoveryNodes nodesWithLocalNodeClusterManager() {
         final DiscoveryNode localNode = new DiscoveryNode("cluster-manager-id", buildNewFakeTransportAddress(), Version.CURRENT);
         return DiscoveryNodes.builder().clusterManagerNodeId("cluster-manager-id").localNodeId("cluster-manager-id").add(localNode).build();
     }
 
-    private static class CustomMetadata1 extends TestCustomMetadata {
-        public static final String TYPE = "custom_md_1";
+    private class BlobNameMatcher implements ArgumentMatcher<AbstractRemoteWritableBlobEntity> {
+        private final String expectedBlobName;
 
-        CustomMetadata1(String data) {
-            super(data);
+        BlobNameMatcher(String expectedBlobName) {
+            this.expectedBlobName = expectedBlobName;
         }
 
         @Override
-        public String getWriteableName() {
-            return TYPE;
+        public boolean matches(AbstractRemoteWritableBlobEntity argument) {
+            return argument != null && expectedBlobName.equals(argument.getFullBlobName());
        }
 
         @Override
-        public Version getMinimalSupportedVersion() {
-            return Version.CURRENT;
-        }
-
-        @Override
-        public EnumSet<Metadata.XContentContext> context() {
-            return EnumSet.of(Metadata.XContentContext.GATEWAY);
+        public String toString() {
+            return "BlobNameMatcher[Expected blobName: " + expectedBlobName + "]";
         }
     }
-}
diff --git a/server/src/test/java/org/opensearch/gateway/remote/RemoteClusterStateTestUtils.java b/server/src/test/java/org/opensearch/gateway/remote/RemoteClusterStateTestUtils.java
new file mode 100644
index 0000000000000..b17ffcbaac344
--- /dev/null
+++ b/server/src/test/java/org/opensearch/gateway/remote/RemoteClusterStateTestUtils.java
@@ -0,0 +1,227 @@
+/*
+ * SPDX-License-Identifier: Apache-2.0
+ *
+ * The OpenSearch Contributors require contributions made to
+ * this file be licensed under the Apache-2.0 license or a
+ * compatible open source license.
+ */
+
+package org.opensearch.gateway.remote;
+
+import org.opensearch.Version;
+import org.opensearch.cluster.metadata.Metadata;
+import org.opensearch.core.common.io.stream.StreamInput;
+import org.opensearch.test.TestClusterStateCustom;
+import org.opensearch.test.TestCustomMetadata;
+
+import java.io.IOException;
+import java.util.EnumSet;
+
+public class RemoteClusterStateTestUtils {
+    public static class CustomMetadata1 extends TestCustomMetadata {
+        public static final String TYPE = "custom_md_1";
+
+        public CustomMetadata1(String data) {
+            super(data);
+        }
+
+        public CustomMetadata1(StreamInput in) throws IOException {
+            super(in.readString());
+        }
+
+        @Override
+        public String getWriteableName() {
+            return TYPE;
+        }
+
+        @Override
+        public Version getMinimalSupportedVersion() {
+            return Version.CURRENT;
+        }
+
+        @Override
+        public EnumSet<Metadata.XContentContext> context() {
+            return EnumSet.of(Metadata.XContentContext.GATEWAY);
+        }
+    }
+
+    public static class CustomMetadata2 extends TestCustomMetadata {
+        public static final String TYPE = "custom_md_2";
+
+        public CustomMetadata2(String data) {
+            super(data);
+        }
+
+        public CustomMetadata2(StreamInput in) throws IOException {
+            super(in.readString());
+        }
+
+        @Override
+        public String getWriteableName() {
+            return TYPE;
+        }
+
+        @Override
+        public Version getMinimalSupportedVersion() {
+            return Version.CURRENT;
+        }
+
+        @Override
+        public EnumSet<Metadata.XContentContext> context() {
+            return EnumSet.of(Metadata.XContentContext.GATEWAY);
+        }
+    }
+
+    public static class CustomMetadata3 extends TestCustomMetadata {
+        public static final String TYPE = "custom_md_3";
+
+        public CustomMetadata3(String data) {
+            super(data);
+        }
+
+        public CustomMetadata3(StreamInput in) throws IOException {
+            super(in.readString());
+        }
+
+        @Override
+        public String getWriteableName() {
+            return TYPE;
+        }
+
+        @Override
+        public Version getMinimalSupportedVersion() {
+            return Version.CURRENT;
+        }
+
+        @Override
+        public EnumSet<Metadata.XContentContext> context() {
+            return EnumSet.of(Metadata.XContentContext.GATEWAY);
+        }
+    }
+
+    public static class CustomMetadata4 extends TestCustomMetadata {
+        public static final String TYPE = "custom_md_4";
+
+        public CustomMetadata4(String data) {
+            super(data);
+        }
+
+        public CustomMetadata4(StreamInput in) throws IOException {
+            super(in.readString());
+        }
+
+        @Override
+        public String getWriteableName() {
+            return TYPE;
+        }
+
+        @Override
+        public Version getMinimalSupportedVersion() {
+            return Version.CURRENT;
+        }
+
+        @Override
+        public EnumSet<Metadata.XContentContext> context() {
+            return EnumSet.of(Metadata.XContentContext.GATEWAY);
+        }
+    }
+
+    public static class CustomMetadata5 extends TestCustomMetadata {
+        public static final String TYPE = "custom_md_5";
+
+        public CustomMetadata5(String data) {
+            super(data);
+        }
+
+        public CustomMetadata5(StreamInput in) throws IOException {
+            super(in.readString());
+        }
+
+        @Override
+        public String getWriteableName() {
+            return TYPE;
+        }
+
+        @Override
+        public Version getMinimalSupportedVersion() {
+            return Version.CURRENT;
+        }
+
+        @Override
+        public EnumSet<Metadata.XContentContext> context() {
+            return EnumSet.of(Metadata.XContentContext.API);
+        }
+    }
+
+    public static class TestClusterStateCustom1 extends TestClusterStateCustom {
+
+        public static final String TYPE = "custom_1";
+
public TestClusterStateCustom1(String value) { + super(value); + } + + public TestClusterStateCustom1(StreamInput in) throws IOException { + super(in); + } + + @Override + public String getWriteableName() { + return TYPE; + } + } + + public static class TestClusterStateCustom2 extends TestClusterStateCustom { + + public static final String TYPE = "custom_2"; + + public TestClusterStateCustom2(String value) { + super(value); + } + + public TestClusterStateCustom2(StreamInput in) throws IOException { + super(in); + } + + @Override + public String getWriteableName() { + return TYPE; + } + } + + public static class TestClusterStateCustom3 extends TestClusterStateCustom { + + public static final String TYPE = "custom_3"; + + public TestClusterStateCustom3(String value) { + super(value); + } + + public TestClusterStateCustom3(StreamInput in) throws IOException { + super(in); + } + + @Override + public String getWriteableName() { + return TYPE; + } + } + + public static class TestClusterStateCustom4 extends TestClusterStateCustom { + + public static final String TYPE = "custom_4"; + + public TestClusterStateCustom4(String value) { + super(value); + } + + public TestClusterStateCustom4(StreamInput in) throws IOException { + super(in); + } + + @Override + public String getWriteableName() { + return TYPE; + } + } +} diff --git a/server/src/test/java/org/opensearch/gateway/remote/RemoteGlobalMetadataManagerTests.java b/server/src/test/java/org/opensearch/gateway/remote/RemoteGlobalMetadataManagerTests.java index c543f986b3e86..917794ec03c3a 100644 --- a/server/src/test/java/org/opensearch/gateway/remote/RemoteGlobalMetadataManagerTests.java +++ b/server/src/test/java/org/opensearch/gateway/remote/RemoteGlobalMetadataManagerTests.java @@ -8,7 +8,6 @@ package org.opensearch.gateway.remote; -import org.opensearch.Version; import org.opensearch.action.LatchedActionListener; import org.opensearch.cluster.ClusterModule; import org.opensearch.cluster.ClusterName; @@ -18,7 +17,6 @@ import org.opensearch.cluster.metadata.DiffableStringMap; import org.opensearch.cluster.metadata.IndexGraveyard; import org.opensearch.cluster.metadata.Metadata; -import org.opensearch.cluster.metadata.Metadata.XContentContext; import org.opensearch.cluster.metadata.TemplatesMetadata; import org.opensearch.common.blobstore.BlobPath; import org.opensearch.common.network.NetworkModule; @@ -43,7 +41,6 @@ import org.opensearch.indices.IndicesModule; import org.opensearch.repositories.blobstore.BlobStoreRepository; import org.opensearch.test.OpenSearchTestCase; -import org.opensearch.test.TestCustomMetadata; import org.opensearch.threadpool.TestThreadPool; import org.opensearch.threadpool.ThreadPool; import org.junit.After; @@ -51,7 +48,6 @@ import java.io.IOException; import java.io.InputStream; -import java.util.EnumSet; import java.util.List; import java.util.Map; import java.util.concurrent.CountDownLatch; @@ -61,6 +57,11 @@ import static java.util.stream.Collectors.toList; import static org.opensearch.cluster.metadata.Metadata.isGlobalStateEquals; import static org.opensearch.common.blobstore.stream.write.WritePriority.URGENT; +import static org.opensearch.gateway.remote.RemoteClusterStateTestUtils.CustomMetadata1; +import static org.opensearch.gateway.remote.RemoteClusterStateTestUtils.CustomMetadata2; +import static org.opensearch.gateway.remote.RemoteClusterStateTestUtils.CustomMetadata3; +import static org.opensearch.gateway.remote.RemoteClusterStateTestUtils.CustomMetadata4; +import static 
org.opensearch.gateway.remote.RemoteClusterStateTestUtils.CustomMetadata5; import static org.opensearch.gateway.remote.RemoteClusterStateUtils.CLUSTER_STATE_PATH_TOKEN; import static org.opensearch.gateway.remote.RemoteClusterStateUtils.CUSTOM_DELIMITER; import static org.opensearch.gateway.remote.RemoteClusterStateUtils.DELIMITER; @@ -699,121 +700,5 @@ public void testGetUpdatedCustoms() { ); assertThat(customsDiff.getUpserts(), is(expectedUpserts)); assertThat(customsDiff.getDeletes(), is(List.of(CustomMetadata1.TYPE))); - - } - - private static class CustomMetadata1 extends TestCustomMetadata { - public static final String TYPE = "custom_md_1"; - - CustomMetadata1(String data) { - super(data); - } - - @Override - public String getWriteableName() { - return TYPE; - } - - @Override - public Version getMinimalSupportedVersion() { - return Version.CURRENT; - } - - @Override - public EnumSet context() { - return EnumSet.of(Metadata.XContentContext.GATEWAY); - } - } - - private static class CustomMetadata2 extends TestCustomMetadata { - public static final String TYPE = "custom_md_2"; - - CustomMetadata2(String data) { - super(data); - } - - @Override - public String getWriteableName() { - return TYPE; - } - - @Override - public Version getMinimalSupportedVersion() { - return Version.CURRENT; - } - - @Override - public EnumSet context() { - return EnumSet.of(Metadata.XContentContext.GATEWAY); - } - } - - private static class CustomMetadata3 extends TestCustomMetadata { - public static final String TYPE = "custom_md_3"; - - CustomMetadata3(String data) { - super(data); - } - - @Override - public String getWriteableName() { - return TYPE; - } - - @Override - public Version getMinimalSupportedVersion() { - return Version.CURRENT; - } - - @Override - public EnumSet context() { - return EnumSet.of(Metadata.XContentContext.GATEWAY); - } - } - - private static class CustomMetadata4 extends TestCustomMetadata { - public static final String TYPE = "custom_md_4"; - - CustomMetadata4(String data) { - super(data); - } - - @Override - public String getWriteableName() { - return TYPE; - } - - @Override - public Version getMinimalSupportedVersion() { - return Version.CURRENT; - } - - @Override - public EnumSet context() { - return EnumSet.of(Metadata.XContentContext.GATEWAY); - } - } - - private static class CustomMetadata5 extends TestCustomMetadata { - public static final String TYPE = "custom_md_5"; - - CustomMetadata5(String data) { - super(data); - } - - @Override - public String getWriteableName() { - return TYPE; - } - - @Override - public Version getMinimalSupportedVersion() { - return Version.CURRENT; - } - - @Override - public EnumSet context() { - return EnumSet.of(XContentContext.API); - } } } diff --git a/test/framework/src/main/java/org/opensearch/test/TestClusterStateCustom.java b/test/framework/src/main/java/org/opensearch/test/TestClusterStateCustom.java new file mode 100644 index 0000000000000..ac32b8d227eda --- /dev/null +++ b/test/framework/src/main/java/org/opensearch/test/TestClusterStateCustom.java @@ -0,0 +1,68 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. 
+ */ + +package org.opensearch.test; + +import org.opensearch.Version; +import org.opensearch.cluster.AbstractNamedDiffable; +import org.opensearch.cluster.ClusterState; +import org.opensearch.core.common.io.stream.StreamInput; +import org.opensearch.core.common.io.stream.StreamOutput; +import org.opensearch.core.xcontent.XContentBuilder; + +import java.io.IOException; + +public abstract class TestClusterStateCustom extends AbstractNamedDiffable implements ClusterState.Custom { + + private final String value; + + protected TestClusterStateCustom(String value) { + this.value = value; + } + + protected TestClusterStateCustom(StreamInput in) throws IOException { + this.value = in.readString(); + } + + @Override + public Version getMinimalSupportedVersion() { + return Version.CURRENT; + } + + @Override + public void writeTo(StreamOutput out) throws IOException { + out.writeString(value); + } + + @Override + public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + return builder; + } + + @Override + public boolean isPrivate() { + return true; + } + + @Override + public boolean equals(Object o) { + if (this == o) return true; + if (o == null || getClass() != o.getClass()) return false; + + TestClusterStateCustom that = (TestClusterStateCustom) o; + + if (!value.equals(that.value)) return false; + + return true; + } + + @Override + public int hashCode() { + return value.hashCode(); + } +} From bda839377195fe3f665cb08b2ebf5263690e757c Mon Sep 17 00:00:00 2001 From: gaobinlong Date: Wed, 10 Jul 2024 19:35:09 +0800 Subject: [PATCH 050/167] Update version check for the bug fix of match_phrase_prefix_query not working on text field with multiple values and index_prefixes (#14703) Signed-off-by: Gao Binlong --- .../rest-api-spec/test/search/190_index_prefix_search.yml | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/rest-api-spec/src/main/resources/rest-api-spec/test/search/190_index_prefix_search.yml b/rest-api-spec/src/main/resources/rest-api-spec/test/search/190_index_prefix_search.yml index 8b031c132f979..6a946fb264560 100644 --- a/rest-api-spec/src/main/resources/rest-api-spec/test/search/190_index_prefix_search.yml +++ b/rest-api-spec/src/main/resources/rest-api-spec/test/search/190_index_prefix_search.yml @@ -138,8 +138,8 @@ setup: --- "search index prefixes with multiple values": - skip: - version: " - 2.99.99" - reason: "the bug was fixed in 3.0.0" + version: " - 2.15.99" + reason: "the bug was fixed since 2.16.0" - do: search: rest_total_hits_as_int: true @@ -154,8 +154,8 @@ setup: --- "search index prefixes with multiple values and custom position_increment_gap": - skip: - version: " - 2.99.99" - reason: "the bug was fixed in 3.0.0" + version: " - 2.15.99" + reason: "the bug was fixed since 2.16.0" - do: search: rest_total_hits_as_int: true From 605543b0abce8dd84983571df5354490d2029113 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Luk=C3=A1=C5=A1=20Vl=C4=8Dek?= Date: Wed, 10 Jul 2024 18:22:26 +0200 Subject: [PATCH 051/167] Remove unnecessary cast to int from test (#14696) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Signed-off-by: Lukáš Vlček --- .../remote/RemoteSegmentTransferTrackerTests.java | 10 +++------- 1 file changed, 3 insertions(+), 7 deletions(-) diff --git a/server/src/test/java/org/opensearch/index/remote/RemoteSegmentTransferTrackerTests.java b/server/src/test/java/org/opensearch/index/remote/RemoteSegmentTransferTrackerTests.java index 280598c516c3c..1ec1e9977a9d5 100644 
--- a/server/src/test/java/org/opensearch/index/remote/RemoteSegmentTransferTrackerTests.java +++ b/server/src/test/java/org/opensearch/index/remote/RemoteSegmentTransferTrackerTests.java @@ -571,13 +571,9 @@ public void testStatsObjectCreationViaStream() throws IOException { assertEquals((int) deserializedStats.uploadBytesStarted, (int) transferTrackerStats.uploadBytesStarted); assertEquals((int) deserializedStats.uploadBytesSucceeded, (int) transferTrackerStats.uploadBytesSucceeded); assertEquals((int) deserializedStats.uploadBytesFailed, (int) transferTrackerStats.uploadBytesFailed); - assertEquals((int) deserializedStats.uploadBytesMovingAverage, transferTrackerStats.uploadBytesMovingAverage, 0); - assertEquals( - (int) deserializedStats.uploadBytesPerSecMovingAverage, - transferTrackerStats.uploadBytesPerSecMovingAverage, - 0 - ); - assertEquals((int) deserializedStats.uploadTimeMovingAverage, transferTrackerStats.uploadTimeMovingAverage, 0); + assertEquals(deserializedStats.uploadBytesMovingAverage, transferTrackerStats.uploadBytesMovingAverage, 0); + assertEquals(deserializedStats.uploadBytesPerSecMovingAverage, transferTrackerStats.uploadBytesPerSecMovingAverage, 0); + assertEquals(deserializedStats.uploadTimeMovingAverage, transferTrackerStats.uploadTimeMovingAverage, 0); assertEquals((int) deserializedStats.totalUploadsStarted, (int) transferTrackerStats.totalUploadsStarted); assertEquals((int) deserializedStats.totalUploadsSucceeded, (int) transferTrackerStats.totalUploadsSucceeded); assertEquals((int) deserializedStats.totalUploadsFailed, (int) transferTrackerStats.totalUploadsFailed); From dfb8449ed8f85e0c1e8601c86d17b833e1641e0f Mon Sep 17 00:00:00 2001 From: kkewwei Date: Thu, 11 Jul 2024 03:45:27 +0800 Subject: [PATCH 052/167] print reason why parent task was cancelled (#14604) Signed-off-by: kkewwei --- CHANGELOG.md | 1 + .../cluster/node/tasks/CancellableTasksIT.java | 4 ++-- .../tasks/TaskCancellationService.java | 2 +- .../java/org/opensearch/tasks/TaskManager.java | 16 ++++++++++++---- .../node/tasks/CancellableTasksTests.java | 2 +- 5 files changed, 17 insertions(+), 8 deletions(-) diff --git a/CHANGELOG.md b/CHANGELOG.md index 62bb73d80f2c1..813eecbaabfa3 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -14,6 +14,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), - Fix race condition while parsing derived fields from search definition ([14445](https://github.com/opensearch-project/OpenSearch/pull/14445)) - Add allowlist setting for ingest-common and search-pipeline-common processors ([#14439](https://github.com/opensearch-project/OpenSearch/issues/14439)) - Create SystemIndexRegistry with helper method matchesSystemIndex ([#14415](https://github.com/opensearch-project/OpenSearch/pull/14415)) +- Print reason why parent task was cancelled ([#14604](https://github.com/opensearch-project/OpenSearch/issues/14604)) ### Dependencies - Bump `org.gradle.test-retry` from 1.5.8 to 1.5.9 ([#13442](https://github.com/opensearch-project/OpenSearch/pull/13442)) diff --git a/server/src/internalClusterTest/java/org/opensearch/action/admin/cluster/node/tasks/CancellableTasksIT.java b/server/src/internalClusterTest/java/org/opensearch/action/admin/cluster/node/tasks/CancellableTasksIT.java index bdb36b62ada21..d8a4bed4740bf 100644 --- a/server/src/internalClusterTest/java/org/opensearch/action/admin/cluster/node/tasks/CancellableTasksIT.java +++ 
b/server/src/internalClusterTest/java/org/opensearch/action/admin/cluster/node/tasks/CancellableTasksIT.java @@ -327,7 +327,7 @@ public void testFailedToStartChildTaskAfterCancelled() throws Exception { mainAction.startSubTask(taskId, subRequest, future); TransportException te = expectThrows(TransportException.class, future::actionGet); assertThat(te.getCause(), instanceOf(TaskCancelledException.class)); - assertThat(te.getCause().getMessage(), equalTo("The parent task was cancelled, shouldn't start any child tasks")); + assertThat(te.getCause().getMessage(), equalTo("The parent task was cancelled, shouldn't start any child tasks, by user request")); allowEntireRequest(rootRequest); waitForRootTask(rootTaskFuture); ensureAllBansRemoved(); @@ -386,7 +386,7 @@ static void waitForRootTask(ActionFuture rootTask) { assertThat( cause.getMessage(), anyOf( - equalTo("The parent task was cancelled, shouldn't start any child tasks"), + equalTo("The parent task was cancelled, shouldn't start any child tasks, by user request"), containsString("Task cancelled before it started:"), equalTo("Task was cancelled while executing") ) diff --git a/server/src/main/java/org/opensearch/tasks/TaskCancellationService.java b/server/src/main/java/org/opensearch/tasks/TaskCancellationService.java index 6955a5927ca23..5a4a25ec832bd 100644 --- a/server/src/main/java/org/opensearch/tasks/TaskCancellationService.java +++ b/server/src/main/java/org/opensearch/tasks/TaskCancellationService.java @@ -92,7 +92,7 @@ void cancelTaskAndDescendants(CancellableTask task, String reason, boolean waitF Collection childrenNodes = taskManager.startBanOnChildrenNodes(task.getId(), () -> { logger.trace("child tasks of parent [{}] are completed", taskId); groupedListener.onResponse(null); - }); + }, reason); taskManager.cancel(task, reason, () -> { logger.trace("task [{}] is cancelled", taskId); groupedListener.onResponse(null); diff --git a/server/src/main/java/org/opensearch/tasks/TaskManager.java b/server/src/main/java/org/opensearch/tasks/TaskManager.java index a49968ab85e89..6ad06da9d2fa2 100644 --- a/server/src/main/java/org/opensearch/tasks/TaskManager.java +++ b/server/src/main/java/org/opensearch/tasks/TaskManager.java @@ -510,17 +510,22 @@ public Set getBannedTaskIds() { return Collections.unmodifiableSet(banedParents.keySet()); } + public Collection startBanOnChildrenNodes(long taskId, Runnable onChildTasksCompleted) { + return startBanOnChildrenNodes(taskId, onChildTasksCompleted, "unknown"); + } + /** * Start rejecting new child requests as the parent task was cancelled. 
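     * A sketch of the intended flow (call shapes illustrative): TaskCancellationService
     * forwards its cancellation reason via startBanOnChildrenNodes(taskId, onCompleted, reason),
     * so a child task registered after the ban fails with a TaskCancelledException whose
     * message ends with that reason, e.g. "..., by user request" in the test above.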
* * @param taskId the parent task id * @param onChildTasksCompleted called when all child tasks are completed or failed + * @param reason the ban reason * @return the set of current nodes that have outstanding child tasks */ - public Collection startBanOnChildrenNodes(long taskId, Runnable onChildTasksCompleted) { + public Collection startBanOnChildrenNodes(long taskId, Runnable onChildTasksCompleted, String reason) { final CancellableTaskHolder holder = cancellableTasks.get(taskId); if (holder != null) { - return holder.startBan(onChildTasksCompleted); + return holder.startBan(onChildTasksCompleted, reason); } else { onChildTasksCompleted.run(); return Collections.emptySet(); @@ -585,6 +590,7 @@ private static class CancellableTaskHolder { private List cancellationListeners = null; private Map childTasksPerNode = null; private boolean banChildren = false; + private String banReason; private List childTaskCompletedListeners = null; CancellableTaskHolder(CancellableTask task) { @@ -662,7 +668,7 @@ public CancellableTask getTask() { synchronized void registerChildNode(DiscoveryNode node) { if (banChildren) { - throw new TaskCancelledException("The parent task was cancelled, shouldn't start any child tasks"); + throw new TaskCancelledException("The parent task was cancelled, shouldn't start any child tasks, " + banReason); } if (childTasksPerNode == null) { childTasksPerNode = new HashMap<>(); @@ -686,11 +692,13 @@ void unregisterChildNode(DiscoveryNode node) { notifyListeners(listeners); } - Set startBan(Runnable onChildTasksCompleted) { + Set startBan(Runnable onChildTasksCompleted, String reason) { final Set pendingChildNodes; final Runnable toRun; synchronized (this) { banChildren = true; + assert reason != null; + banReason = reason; if (childTasksPerNode == null) { pendingChildNodes = Collections.emptySet(); } else { diff --git a/server/src/test/java/org/opensearch/action/admin/cluster/node/tasks/CancellableTasksTests.java b/server/src/test/java/org/opensearch/action/admin/cluster/node/tasks/CancellableTasksTests.java index 7d706411b6f0d..5b3b08377f19b 100644 --- a/server/src/test/java/org/opensearch/action/admin/cluster/node/tasks/CancellableTasksTests.java +++ b/server/src/test/java/org/opensearch/action/admin/cluster/node/tasks/CancellableTasksTests.java @@ -428,7 +428,7 @@ public void testRegisterAndExecuteChildTaskWhileParentTaskIsBeingCanceled() thro ); assertThat(cancelledException.getMessage(), startsWith("Task cancelled before it started:")); CountDownLatch latch = new CountDownLatch(1); - taskManager.startBanOnChildrenNodes(parentTaskId.getId(), latch::countDown); + taskManager.startBanOnChildrenNodes(parentTaskId.getId(), latch::countDown, cancelledException.getMessage()); assertTrue("onChildTasksCompleted() is not invoked", latch.await(1, TimeUnit.SECONDS)); } From 125b77300eb783f502a23ec0329c77e3d7170494 Mon Sep 17 00:00:00 2001 From: SwethaGuptha <156877431+SwethaGuptha@users.noreply.github.com> Date: Thu, 11 Jul 2024 10:33:23 +0530 Subject: [PATCH 053/167] Use set of shard routing for shard in unassigned shard batch check. 
(#14533) Signed-off-by: Swetha Guptha --- .../gateway/PrimaryShardBatchAllocator.java | 8 +++-- .../gateway/ReplicaShardBatchAllocator.java | 11 ++++-- .../PrimaryShardBatchAllocatorTests.java | 36 +++++++++++++++++-- 3 files changed, 49 insertions(+), 6 deletions(-) diff --git a/server/src/main/java/org/opensearch/gateway/PrimaryShardBatchAllocator.java b/server/src/main/java/org/opensearch/gateway/PrimaryShardBatchAllocator.java index 27f9bedc4e495..c493bf717c97f 100644 --- a/server/src/main/java/org/opensearch/gateway/PrimaryShardBatchAllocator.java +++ b/server/src/main/java/org/opensearch/gateway/PrimaryShardBatchAllocator.java @@ -23,8 +23,10 @@ import java.util.ArrayList; import java.util.Collections; import java.util.HashMap; +import java.util.HashSet; import java.util.List; import java.util.Map; +import java.util.Set; /** * PrimaryShardBatchAllocator is similar to {@link org.opensearch.gateway.PrimaryShardAllocator} only difference is @@ -82,6 +84,7 @@ public AllocateUnassignedDecision makeAllocationDecision(ShardRouting unassigned * @param allocation the allocation state container object */ public void allocateUnassignedBatch(List shardRoutings, RoutingAllocation allocation) { + logger.trace("Starting shard allocation execution for unassigned primary shards: {}", shardRoutings.size()); HashMap ineligibleShardAllocationDecisions = new HashMap<>(); List eligibleShards = new ArrayList<>(); List inEligibleShards = new ArrayList<>(); @@ -99,13 +102,13 @@ public void allocateUnassignedBatch(List shardRoutings, RoutingAll // only fetch data for eligible shards final FetchResult shardsState = fetchData(eligibleShards, inEligibleShards, allocation); + Set batchShardRoutingSet = new HashSet<>(shardRoutings); RoutingNodes.UnassignedShards.UnassignedIterator iterator = allocation.routingNodes().unassigned().iterator(); while (iterator.hasNext()) { ShardRouting unassignedShard = iterator.next(); AllocateUnassignedDecision allocationDecision; - if (shardRoutings.contains(unassignedShard)) { - assert unassignedShard.primary(); + if (unassignedShard.primary() && batchShardRoutingSet.contains(unassignedShard)) { if (ineligibleShardAllocationDecisions.containsKey(unassignedShard.shardId())) { allocationDecision = ineligibleShardAllocationDecisions.get(unassignedShard.shardId()); } else { @@ -115,6 +118,7 @@ public void allocateUnassignedBatch(List shardRoutings, RoutingAll executeDecision(unassignedShard, allocationDecision, allocation, iterator); } } + logger.trace("Finished shard allocation execution for unassigned primary shards: {}", shardRoutings.size()); } /** diff --git a/server/src/main/java/org/opensearch/gateway/ReplicaShardBatchAllocator.java b/server/src/main/java/org/opensearch/gateway/ReplicaShardBatchAllocator.java index f2cb3d053440d..7c75f2a5d1a8f 100644 --- a/server/src/main/java/org/opensearch/gateway/ReplicaShardBatchAllocator.java +++ b/server/src/main/java/org/opensearch/gateway/ReplicaShardBatchAllocator.java @@ -28,10 +28,11 @@ import java.util.ArrayList; import java.util.Collections; import java.util.HashMap; +import java.util.HashSet; import java.util.List; import java.util.Map; +import java.util.Set; import java.util.function.Supplier; -import java.util.stream.Collectors; /** * Allocates replica shards in a batch mode @@ -117,6 +118,7 @@ public AllocateUnassignedDecision makeAllocationDecision(ShardRouting unassigned * @param allocation the allocation state container object */ public void allocateUnassignedBatch(List shardRoutings, RoutingAllocation allocation) { + 
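+        // Why a Set: each unassigned shard returned by the routing-nodes iterator is
+        // checked for membership in this batch. List.contains() made that check O(n)
+        // per shard (quadratic over large batches); collecting the batch shard IDs
+        // into a HashSet below keeps every lookup O(1).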
logger.trace("Starting shard allocation execution for unassigned replica shards: {}", shardRoutings.size()); List eligibleShards = new ArrayList<>(); List ineligibleShards = new ArrayList<>(); Map ineligibleShardAllocationDecisions = new HashMap<>(); @@ -135,7 +137,11 @@ public void allocateUnassignedBatch(List shardRoutings, RoutingAll // only fetch data for eligible shards final FetchResult shardsState = fetchData(eligibleShards, ineligibleShards, allocation); - List shardIdsFromBatch = shardRoutings.stream().map(shardRouting -> shardRouting.shardId()).collect(Collectors.toList()); + Set shardIdsFromBatch = new HashSet<>(); + for (ShardRouting shardRouting : shardRoutings) { + ShardId shardId = shardRouting.shardId(); + shardIdsFromBatch.add(shardId); + } RoutingNodes.UnassignedShards.UnassignedIterator iterator = allocation.routingNodes().unassigned().iterator(); while (iterator.hasNext()) { ShardRouting unassignedShard = iterator.next(); @@ -159,6 +165,7 @@ public void allocateUnassignedBatch(List shardRoutings, RoutingAll executeDecision(unassignedShard, allocateUnassignedDecision, allocation, iterator); } } + logger.trace("Finished shard allocation execution for unassigned replica shards: {}", shardRoutings.size()); } private AllocateUnassignedDecision getUnassignedShardAllocationDecision( diff --git a/server/src/test/java/org/opensearch/gateway/PrimaryShardBatchAllocatorTests.java b/server/src/test/java/org/opensearch/gateway/PrimaryShardBatchAllocatorTests.java index e90850de3fe33..8ad8bcda95f40 100644 --- a/server/src/test/java/org/opensearch/gateway/PrimaryShardBatchAllocatorTests.java +++ b/server/src/test/java/org/opensearch/gateway/PrimaryShardBatchAllocatorTests.java @@ -85,7 +85,10 @@ private void allocateAllUnassignedBatch(final RoutingAllocation allocation) { final RoutingNodes.UnassignedShards.UnassignedIterator iterator = allocation.routingNodes().unassigned().iterator(); List shardsToBatch = new ArrayList<>(); while (iterator.hasNext()) { - shardsToBatch.add(iterator.next()); + ShardRouting unassignedShardRouting = iterator.next(); + if (unassignedShardRouting.primary()) { + shardsToBatch.add(unassignedShardRouting); + } } batchAllocator.allocateUnassignedBatch(shardsToBatch, allocation); } @@ -180,6 +183,35 @@ public void testInitializePrimaryShards() { assertEquals(2, routingAllocation.routingNodes().getInitialPrimariesIncomingRecoveries(node1.getId())); } + public void testInitializeOnlyPrimaryUnassignedShardsIgnoreReplicaShards() { + ClusterSettings clusterSettings = new ClusterSettings(Settings.EMPTY, ClusterSettings.BUILT_IN_CLUSTER_SETTINGS); + AllocationDeciders allocationDeciders = randomAllocationDeciders(Settings.builder().build(), clusterSettings, random()); + setUpShards(1); + final RoutingAllocation routingAllocation = routingAllocationWithOnePrimary(allocationDeciders, CLUSTER_RECOVERED, "allocId-0"); + + for (ShardId shardId : shardsInBatch) { + batchAllocator.addShardData( + node1, + "allocId-0", + shardId, + true, + new ReplicationCheckpoint(shardId, 20, 101, 1, Codec.getDefault().getName()), + null + ); + } + + allocateAllUnassignedBatch(routingAllocation); + + List initializingShards = routingAllocation.routingNodes().shardsWithState(ShardRoutingState.INITIALIZING); + assertEquals(1, initializingShards.size()); + assertTrue(shardsInBatch.contains(initializingShards.get(0).shardId())); + assertTrue(initializingShards.get(0).primary()); + assertEquals(1, routingAllocation.routingNodes().getInitialPrimariesIncomingRecoveries(node1.getId())); + List 
unassignedShards = routingAllocation.routingNodes().shardsWithState(ShardRoutingState.UNASSIGNED); + assertEquals(1, unassignedShards.size()); + assertTrue(!unassignedShards.get(0).primary()); + } + public void testAllocateUnassignedBatchThrottlingAllocationDeciderIsHonoured() { ClusterSettings clusterSettings = new ClusterSettings(Settings.EMPTY, ClusterSettings.BUILT_IN_CLUSTER_SETTINGS); AllocationDeciders allocationDeciders = randomAllocationDeciders( @@ -258,7 +290,7 @@ private RoutingAllocation routingAllocationWithOnePrimary( .routingTable(routingTableBuilder.build()) .nodes(DiscoveryNodes.builder().add(node1).add(node2).add(node3)) .build(); - return new RoutingAllocation(deciders, new RoutingNodes(state, false), state, null, null, System.nanoTime()); + return new RoutingAllocation(deciders, new RoutingNodes(state, false), state, ClusterInfo.EMPTY, null, System.nanoTime()); } private RoutingAllocation routingAllocationWithMultiplePrimaries( From 2d8c68cbb3eafb0811aafbf0436cd4041261caf9 Mon Sep 17 00:00:00 2001 From: Sooraj Sinha <81695996+soosinha@users.noreply.github.com> Date: Thu, 11 Jul 2024 10:42:29 +0530 Subject: [PATCH 054/167] Add versioning for UploadedIndexMetadata (#14677) * Add versioning for UploadedIndexMetadata * Handle componentPrefix for backward compatibility Signed-off-by: Sooraj Sinha --- .../remote/ClusterMetadataManifest.java | 65 +++++++++++++++---- .../remote/ClusterMetadataManifestTests.java | 23 ++++++- .../RemoteClusterStateServiceTests.java | 14 ++-- 3 files changed, 81 insertions(+), 21 deletions(-) diff --git a/server/src/main/java/org/opensearch/gateway/remote/ClusterMetadataManifest.java b/server/src/main/java/org/opensearch/gateway/remote/ClusterMetadataManifest.java index 2786cd432b002..3a66419b1dc20 100644 --- a/server/src/main/java/org/opensearch/gateway/remote/ClusterMetadataManifest.java +++ b/server/src/main/java/org/opensearch/gateway/remote/ClusterMetadataManifest.java @@ -20,6 +20,7 @@ import org.opensearch.core.xcontent.ToXContentFragment; import org.opensearch.core.xcontent.XContentBuilder; import org.opensearch.core.xcontent.XContentParser; +import org.opensearch.gateway.remote.ClusterMetadataManifest.Builder; import java.io.IOException; import java.util.ArrayList; @@ -243,7 +244,7 @@ private static void declareParser(ConstructingObjectParser UploadedIndexMetadata.fromXContent(p), + (p, c) -> UploadedIndexMetadata.fromXContent(p, codec_version), INDICES_FIELD ); parser.declareString(ConstructingObjectParser.constructorArg(), PREVIOUS_CLUSTER_UUID); @@ -277,7 +278,7 @@ private static void declareParser(ConstructingObjectParser UploadedIndexMetadata.fromXContent(p), + (p, c) -> UploadedIndexMetadata.fromXContent(p, codec_version), INDICES_ROUTING_FIELD ); parser.declareNamedObject( @@ -1112,16 +1113,30 @@ private static String componentPrefix(Object[] fields) { return (String) fields[3]; } - private static final ConstructingObjectParser PARSER = new ConstructingObjectParser<>( + private static final ConstructingObjectParser PARSER_V0 = new ConstructingObjectParser<>( + "uploaded_index_metadata", + fields -> new UploadedIndexMetadata(indexName(fields), indexUUID(fields), uploadedFilename(fields)) + ); + + private static final ConstructingObjectParser PARSER_V2 = new ConstructingObjectParser<>( "uploaded_index_metadata", fields -> new UploadedIndexMetadata(indexName(fields), indexUUID(fields), uploadedFilename(fields), componentPrefix(fields)) ); + private static final ConstructingObjectParser CURRENT_PARSER = PARSER_V2; + static { - 
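+        // One parser per manifest codec version: V0 manifests predate the component
+        // prefix, so PARSER_V0 reads only the index name, UUID, and uploaded filename,
+        // while PARSER_V2 additionally requires component_prefix. declareParser()
+        // below gates that extra field on codec_version >= CODEC_V2.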
PARSER.declareString(ConstructingObjectParser.constructorArg(), INDEX_NAME_FIELD); - PARSER.declareString(ConstructingObjectParser.constructorArg(), INDEX_UUID_FIELD); - PARSER.declareString(ConstructingObjectParser.constructorArg(), UPLOADED_FILENAME_FIELD); - PARSER.declareString(ConstructingObjectParser.constructorArg(), COMPONENT_PREFIX_FIELD); + declareParser(PARSER_V0, CODEC_V0); + declareParser(PARSER_V2, CODEC_V2); + } + + private static void declareParser(ConstructingObjectParser parser, long codec_version) { + parser.declareString(ConstructingObjectParser.constructorArg(), INDEX_NAME_FIELD); + parser.declareString(ConstructingObjectParser.constructorArg(), INDEX_UUID_FIELD); + parser.declareString(ConstructingObjectParser.constructorArg(), UPLOADED_FILENAME_FIELD); + if (codec_version >= CODEC_V2) { + parser.declareString(ConstructingObjectParser.constructorArg(), COMPONENT_PREFIX_FIELD); + } } static final String COMPONENT_PREFIX = "index--"; @@ -1130,15 +1145,32 @@ private static String componentPrefix(Object[] fields) { private final String indexUUID; private final String uploadedFilename; + private long codecVersion = CODEC_V2; + public UploadedIndexMetadata(String indexName, String indexUUID, String uploadedFileName) { - this(indexName, indexUUID, uploadedFileName, COMPONENT_PREFIX); + this(indexName, indexUUID, uploadedFileName, CODEC_V2); + } + + public UploadedIndexMetadata(String indexName, String indexUUID, String uploadedFileName, long codecVersion) { + this(indexName, indexUUID, uploadedFileName, COMPONENT_PREFIX, codecVersion); } public UploadedIndexMetadata(String indexName, String indexUUID, String uploadedFileName, String componentPrefix) { + this(indexName, indexUUID, uploadedFileName, componentPrefix, CODEC_V2); + } + + public UploadedIndexMetadata( + String indexName, + String indexUUID, + String uploadedFileName, + String componentPrefix, + long codecVersion + ) { this.componentPrefix = componentPrefix; this.indexName = indexName; this.indexUUID = indexUUID; this.uploadedFilename = uploadedFileName; + this.codecVersion = codecVersion; } public UploadedIndexMetadata(StreamInput in) throws IOException { @@ -1175,10 +1207,13 @@ public String getComponentPrefix() { @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { - return builder.field(INDEX_NAME_FIELD.getPreferredName(), getIndexName()) + builder.field(INDEX_NAME_FIELD.getPreferredName(), getIndexName()) .field(INDEX_UUID_FIELD.getPreferredName(), getIndexUUID()) - .field(UPLOADED_FILENAME_FIELD.getPreferredName(), getUploadedFilePath()) - .field(COMPONENT_PREFIX_FIELD.getPreferredName(), getComponentPrefix()); + .field(UPLOADED_FILENAME_FIELD.getPreferredName(), getUploadedFilePath()); + if (codecVersion >= CODEC_V2) { + builder.field(COMPONENT_PREFIX_FIELD.getPreferredName(), getComponentPrefix()); + } + return builder; } @Override @@ -1214,9 +1249,13 @@ public String toString() { return Strings.toString(MediaTypeRegistry.JSON, this); } - public static UploadedIndexMetadata fromXContent(XContentParser parser) throws IOException { - return PARSER.parse(parser, null); + public static UploadedIndexMetadata fromXContent(XContentParser parser, long codecVersion) throws IOException { + if (codecVersion >= CODEC_V2) { + return CURRENT_PARSER.parse(parser, null); + } + return PARSER_V0.parse(parser, null); } + } /** diff --git a/server/src/test/java/org/opensearch/gateway/remote/ClusterMetadataManifestTests.java 
b/server/src/test/java/org/opensearch/gateway/remote/ClusterMetadataManifestTests.java index 02471c9cdbbbe..152a6dba6c032 100644 --- a/server/src/test/java/org/opensearch/gateway/remote/ClusterMetadataManifestTests.java +++ b/server/src/test/java/org/opensearch/gateway/remote/ClusterMetadataManifestTests.java @@ -48,7 +48,7 @@ public class ClusterMetadataManifestTests extends OpenSearchTestCase { public void testClusterMetadataManifestXContentV0() throws IOException { - UploadedIndexMetadata uploadedIndexMetadata = new UploadedIndexMetadata("test-index", "test-uuid", "/test/upload/path"); + UploadedIndexMetadata uploadedIndexMetadata = new UploadedIndexMetadata("test-index", "test-uuid", "/test/upload/path", CODEC_V0); ClusterMetadataManifest originalManifest = ClusterMetadataManifest.builder() .clusterTerm(1L) .stateVersion(1L) @@ -74,7 +74,7 @@ public void testClusterMetadataManifestXContentV0() throws IOException { } public void testClusterMetadataManifestXContentV1() throws IOException { - UploadedIndexMetadata uploadedIndexMetadata = new UploadedIndexMetadata("test-index", "test-uuid", "/test/upload/path"); + UploadedIndexMetadata uploadedIndexMetadata = new UploadedIndexMetadata("test-index", "test-uuid", "/test/upload/path", CODEC_V1); ClusterMetadataManifest originalManifest = ClusterMetadataManifest.builder() .clusterTerm(1L) .stateVersion(1L) @@ -619,6 +619,24 @@ public void testUploadedIndexMetadataSerializationEqualsHashCode() { ); } + public void testUploadedIndexMetadataWithoutComponentPrefix() throws IOException { + final UploadedIndexMetadata originalUploadedIndexMetadata = new UploadedIndexMetadata( + "test-index", + "test-index-uuid", + "test_file_name", + CODEC_V1 + ); + final XContentBuilder builder = JsonXContent.contentBuilder(); + builder.startObject(); + originalUploadedIndexMetadata.toXContent(builder, ToXContent.EMPTY_PARAMS); + builder.endObject(); + + try (XContentParser parser = createParser(JsonXContent.jsonXContent, BytesReference.bytes(builder))) { + final UploadedIndexMetadata fromXContentUploadedIndexMetadata = UploadedIndexMetadata.fromXContent(parser, 1L); + assertEquals(originalUploadedIndexMetadata, fromXContentUploadedIndexMetadata); + } + } + private UploadedIndexMetadata randomlyChangingUploadedIndexMetadata(UploadedIndexMetadata uploadedIndexMetadata) { switch (randomInt(2)) { case 0: @@ -642,4 +660,5 @@ private UploadedIndexMetadata randomlyChangingUploadedIndexMetadata(UploadedInde } return uploadedIndexMetadata; } + } diff --git a/server/src/test/java/org/opensearch/gateway/remote/RemoteClusterStateServiceTests.java b/server/src/test/java/org/opensearch/gateway/remote/RemoteClusterStateServiceTests.java index 91ddd64cc2ccc..6cd9cbbf13848 100644 --- a/server/src/test/java/org/opensearch/gateway/remote/RemoteClusterStateServiceTests.java +++ b/server/src/test/java/org/opensearch/gateway/remote/RemoteClusterStateServiceTests.java @@ -2254,13 +2254,14 @@ public void testReadLatestMetadataManifestSuccessButIndexMetadataFetchIOExceptio .stateVersion(1L) .stateUUID("state-uuid") .clusterUUID("cluster-uuid") + .codecVersion(CODEC_V2) .nodeId("nodeA") .opensearchVersion(VersionUtils.randomOpenSearchVersion(random())) .previousClusterUUID("prev-cluster-uuid") .build(); BlobContainer blobContainer = mockBlobStoreObjects(); - mockBlobContainer(blobContainer, expectedManifest, Map.of()); + mockBlobContainer(blobContainer, expectedManifest, Map.of(), CODEC_V2); 
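        // The manifest above is built with codecVersion(CODEC_V2), so the mocked blob
        // container has to serve a V2 manifest as well; that is why mockBlobContainer()
        // now takes the codec version as an extra argument in this change.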
when(blobContainer.readBlob(uploadedIndexMetadata.getUploadedFilename())).thenThrow(FileNotFoundException.class); remoteClusterStateService.start(); @@ -2288,11 +2289,11 @@ public void testReadLatestMetadataManifestSuccess() throws IOException { .clusterUUID("cluster-uuid") .nodeId("nodeA") .opensearchVersion(VersionUtils.randomOpenSearchVersion(random())) - .codecVersion(ClusterMetadataManifest.CODEC_V0) + .codecVersion(CODEC_V2) .previousClusterUUID("prev-cluster-uuid") .build(); - mockBlobContainer(mockBlobStoreObjects(), expectedManifest, new HashMap<>()); + mockBlobContainer(mockBlobStoreObjects(), expectedManifest, new HashMap<>(), CODEC_V2); remoteClusterStateService.start(); final ClusterMetadataManifest manifest = remoteClusterStateService.getLatestClusterMetadataManifest( clusterState.getClusterName().value(), @@ -2416,10 +2417,10 @@ public void testReadLatestIndexMetadataSuccess() throws IOException { .nodeId("nodeA") .opensearchVersion(VersionUtils.randomOpenSearchVersion(random())) .previousClusterUUID("prev-cluster-uuid") - .codecVersion(ClusterMetadataManifest.CODEC_V0) + .codecVersion(CODEC_V2) .build(); - mockBlobContainer(mockBlobStoreObjects(), expectedManifest, Map.of(index.getUUID(), indexMetadata)); + mockBlobContainer(mockBlobStoreObjects(), expectedManifest, Map.of(index.getUUID(), indexMetadata), CODEC_V2); Map indexMetadataMap = remoteClusterStateService.getLatestClusterState( clusterState.getClusterName().value(), @@ -2664,6 +2665,7 @@ public void testWriteFullMetadataInParallelSuccessWithRoutingTable() throws IOEx .clusterUUID("cluster-uuid") .previousClusterUUID("prev-cluster-uuid") .routingTableVersion(1) + .codecVersion(CODEC_V2) .indicesRouting(List.of(uploadedIndiceRoutingMetadata)) .build(); @@ -3081,7 +3083,7 @@ private void mockBlobContainerForGlobalMetadata( FORMAT_PARAMS ); when(blobContainer.readBlob(mockManifestFileName)).thenReturn(new ByteArrayInputStream(bytes.streamInput().readAllBytes())); - if (codecVersion >= ClusterMetadataManifest.CODEC_V2) { + if (codecVersion >= CODEC_V2) { String coordinationFileName = getFileNameFromPath(clusterMetadataManifest.getCoordinationMetadata().getUploadedFilename()); when(blobContainer.readBlob(COORDINATION_METADATA_FORMAT.blobName(coordinationFileName))).thenAnswer((invocationOnMock) -> { BytesReference bytesReference = COORDINATION_METADATA_FORMAT.serialize( From 17b799697e1dbfe65671531f722a1433e646b1bc Mon Sep 17 00:00:00 2001 From: Ahmed Sobeh Date: Thu, 11 Jul 2024 17:00:13 +0200 Subject: [PATCH 055/167] Fix: update help output for _cat (#14722) * fixed help output for _cat Signed-off-by: ahmedsobeh * updated changelog Signed-off-by: ahmedsobeh * updated changelog Signed-off-by: ahmedsobeh --------- Signed-off-by: ahmedsobeh --- CHANGELOG.md | 1 + .../org/opensearch/rest/action/cat/RestNodesAction.java | 6 +++--- 2 files changed, 4 insertions(+), 3 deletions(-) diff --git a/CHANGELOG.md b/CHANGELOG.md index 813eecbaabfa3..cb8e6403aa47e 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -65,6 +65,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), - Handle NPE in GetResult if "found" field is missing ([#14552](https://github.com/opensearch-project/OpenSearch/pull/14552)) - Refactoring FilterPath.parse by using an iterative approach ([#14200](https://github.com/opensearch-project/OpenSearch/pull/14200)) - Refactoring Grok.validatePatternBank by using an iterative approach ([#14206](https://github.com/opensearch-project/OpenSearch/pull/14206)) +- Update help 
output for _cat ([#14722](https://github.com/opensearch-project/OpenSearch/pull/14722)) ### Security diff --git a/server/src/main/java/org/opensearch/rest/action/cat/RestNodesAction.java b/server/src/main/java/org/opensearch/rest/action/cat/RestNodesAction.java index e11012a23fce7..bffb50cc63401 100644 --- a/server/src/main/java/org/opensearch/rest/action/cat/RestNodesAction.java +++ b/server/src/main/java/org/opensearch/rest/action/cat/RestNodesAction.java @@ -171,9 +171,9 @@ protected Table getTableWithHeader(final RestRequest request) { table.addCell("port", "default:false;alias:po;desc:bound transport port"); table.addCell("http_address", "default:false;alias:http;desc:bound http address"); - table.addCell("version", "default:false;alias:v;desc:es version"); - table.addCell("type", "default:false;alias:t;desc:es distribution type"); - table.addCell("build", "default:false;alias:b;desc:es build hash"); + table.addCell("version", "default:false;alias:v;desc:os version"); + table.addCell("type", "default:false;alias:t;desc:os distribution type"); + table.addCell("build", "default:false;alias:b;desc:os build hash"); table.addCell("jdk", "default:false;alias:j;desc:jdk version"); table.addCell("disk.total", "default:false;alias:dt,diskTotal;text-align:right;desc:total disk space"); table.addCell("disk.used", "default:false;alias:du,diskUsed;text-align:right;desc:used disk space"); From 82bffb1d506366f952c813e349ce41dc037fde46 Mon Sep 17 00:00:00 2001 From: Andriy Redko Date: Thu, 11 Jul 2024 14:13:22 -0400 Subject: [PATCH 056/167] Fix hdfs-fixture kerb-admin & hadoop-minicluster dependencies are not being updated / false positive reports on CVEs (#14729) Signed-off-by: Andriy Redko --- test/fixtures/hdfs-fixture/build.gradle | 9 +++++++-- 1 file changed, 7 insertions(+), 2 deletions(-) diff --git a/test/fixtures/hdfs-fixture/build.gradle b/test/fixtures/hdfs-fixture/build.gradle index a532bf0c6287b..6ab6d5acb8880 100644 --- a/test/fixtures/hdfs-fixture/build.gradle +++ b/test/fixtures/hdfs-fixture/build.gradle @@ -33,7 +33,7 @@ apply plugin: 'opensearch.java' group = 'hdfs' versions << [ - 'jetty': '9.4.53.v20231009' + 'jetty': '9.4.55.v20240627' ] dependencies { @@ -73,7 +73,12 @@ dependencies { api "commons-net:commons-net:3.11.1" api "ch.qos.logback:logback-core:1.5.6" api "ch.qos.logback:logback-classic:1.2.13" - api 'org.apache.kerby:kerb-admin:2.0.3' + api "org.jboss.xnio:xnio-nio:3.8.16.Final" + api 'org.jline:jline:3.26.2' + api ('org.apache.kerby:kerb-admin:2.0.3') { + exclude group: "org.jboss.xnio" + exclude group: "org.jline" + } runtimeOnly "com.google.guava:guava:${versions.guava}" runtimeOnly("com.squareup.okhttp3:okhttp:4.12.0") { exclude group: "com.squareup.okio" From 9234a4226acf00416108f4f1484a4a49632ecba2 Mon Sep 17 00:00:00 2001 From: Andriy Redko Date: Thu, 11 Jul 2024 16:55:32 -0400 Subject: [PATCH 057/167] Update to Gradle 8.9 (#14574) Signed-off-by: Andriy Redko --- .../gradle/test/rest/RestResourcesPlugin.java | 56 +++++++++--------- gradle/wrapper/gradle-wrapper.jar | Bin 43462 -> 43504 bytes gradle/wrapper/gradle-wrapper.properties | 4 +- gradlew | 7 ++- gradlew.bat | 2 + 5 files changed, 38 insertions(+), 31 deletions(-) diff --git a/buildSrc/src/main/java/org/opensearch/gradle/test/rest/RestResourcesPlugin.java b/buildSrc/src/main/java/org/opensearch/gradle/test/rest/RestResourcesPlugin.java index fcadf35593ce6..9396797536052 100644 --- a/buildSrc/src/main/java/org/opensearch/gradle/test/rest/RestResourcesPlugin.java +++ 
b/buildSrc/src/main/java/org/opensearch/gradle/test/rest/RestResourcesPlugin.java @@ -81,50 +81,52 @@ public void apply(Project project) { // tests Configuration testConfig = project.getConfigurations().create("restTestConfig"); project.getConfigurations().create("restTests"); + + if (BuildParams.isInternal()) { + // core + Dependency restTestdependency = project.getDependencies().project(new HashMap<String, String>() { + { + put("path", ":rest-api-spec"); + put("configuration", "restTests"); + } + }); + testConfig.withDependencies(s -> s.add(restTestdependency)); + } else { + Dependency dependency = project.getDependencies().create("org.opensearch:rest-api-spec:" + VersionProperties.getOpenSearch()); + testConfig.withDependencies(s -> s.add(dependency)); + } + Provider<CopyRestTestsTask> copyRestYamlTestTask = project.getTasks() .register("copyYamlTestsTask", CopyRestTestsTask.class, task -> { task.includeCore.set(extension.restTests.getIncludeCore()); task.coreConfig = testConfig; task.sourceSetName = SourceSet.TEST_SOURCE_SET_NAME; - if (BuildParams.isInternal()) { - // core - Dependency restTestdependency = project.getDependencies().project(new HashMap<String, String>() { - { - put("path", ":rest-api-spec"); - put("configuration", "restTests"); - } - }); - project.getDependencies().add(task.coreConfig.getName(), restTestdependency); - } else { - Dependency dependency = project.getDependencies() - .create("org.opensearch:rest-api-spec:" + VersionProperties.getOpenSearch()); - project.getDependencies().add(task.coreConfig.getName(), dependency); - } task.dependsOn(task.coreConfig); }); // api Configuration specConfig = project.getConfigurations().create("restSpec"); // name chosen for passivity project.getConfigurations().create("restSpecs"); + + if (BuildParams.isInternal()) { + Dependency restSpecDependency = project.getDependencies().project(new HashMap<String, String>() { + { + put("path", ":rest-api-spec"); + put("configuration", "restSpecs"); + } + }); + specConfig.withDependencies(s -> s.add(restSpecDependency)); + } else { + Dependency dependency = project.getDependencies().create("org.opensearch:rest-api-spec:" + VersionProperties.getOpenSearch()); + specConfig.withDependencies(s -> s.add(dependency)); + } + Provider<CopyRestApiTask> copyRestYamlSpecTask = project.getTasks() .register("copyRestApiSpecsTask", CopyRestApiTask.class, task -> { task.includeCore.set(extension.restApi.getIncludeCore()); task.dependsOn(copyRestYamlTestTask); task.coreConfig = specConfig; task.sourceSetName = SourceSet.TEST_SOURCE_SET_NAME; - if (BuildParams.isInternal()) { - Dependency restSpecDependency = project.getDependencies().project(new HashMap<String, String>() { - { - put("path", ":rest-api-spec"); - put("configuration", "restSpecs"); - } - }); - project.getDependencies().add(task.coreConfig.getName(), restSpecDependency); - } else { - Dependency dependency = project.getDependencies() - .create("org.opensearch:rest-api-spec:" + VersionProperties.getOpenSearch()); - project.getDependencies().add(task.coreConfig.getName(), dependency); - } task.dependsOn(task.coreConfig); }); diff --git a/gradle/wrapper/gradle-wrapper.jar b/gradle/wrapper/gradle-wrapper.jar index d64cd4917707c1f8861d8cb53dd15194d4248596..2c3521197d7c4586c843d1d3e9090525f1898cde 100644 GIT binary patch [delta 34463: base85-encoded binary delta for the updated gradle-wrapper.jar (Bin 43462 -> 43504 bytes) omitted; the payload is not human-readable and was truncated in this copy]
zGDc%(b6Z=!QQ%w6YZS&HWovIaN8wMw1B-9N+Vyl=>(yIgy}BrAhpc2}8YL-i*_KY7 ztV+`WKcC?{RKA@t3pu*BtqZJFSd2d)+cc07-Z#4x&7Dnd{yg6)lz@`z%=Sl-`9Z~*io zck_Lshk9JRJs=t>1jmKB~>`6+(J z@(S}J2Q{Q{a-ASTnIViecW(FIagWQ%G41y?zS)gpooM z@c<2$7TykMs4LH*UUYfts(!Ncn`?eZl}f zg)wx@0N0J(X(OJ^=$2()HLn)=Cn~=zx(_9(B@L04%{F_Zn}5!~5Ec5D4ibN6G_AD} zzxY^T_JF##qM8~B%aZ1OC}X^kQu`JDwaRaZnt!YcRrP7fq>eIihJW1UY{Xhkn>NdX zKy|<6-wD*;GtE08sLYryW<-e)?7k;;B>e$u?v!QhU9jPK6*Y$o8{Tl`N`+QvG ze}71rVC)fis9TZ<>EJ2JR`80F^2rkB7dihm$1Ta2bR?&wz>e`)w<4)1{3SfS$uKfV z3R=JT!eY+i7+IIfl3SIgiR|KvBWH*s;OEuF5tq~wLOB^xP_Dc7-BbNjpC|dHYJrZCWj-ucmv4;YS~eN!LvwER`NCd`R4Xh5%zP$V^nU>j zdOkNvbyB_117;mhiTiL_TBcy&Grvl->zO_SlCCX5dFLd`q7x-lBj*&ykj^ zR3@z`y0<8XlBHEhlCk7IV=ofWsuF|d)ECS}qnWf?I#-o~5=JFQM8u+7I!^>dg|wEb zbu4wp#rHGayeYTT>MN+(x3O`nFMpOSERQdpzQv2ui|Z5#Qd zB(+GbXda|>CW55ky@mG13K0wfXAm8yoek3MJG!Hujn$5)Q(6wWb-l4ogu?jj2Q|srw?r z-TG0$OfmDx%(qcX`Fc`D!WS{3dN*V%SZas3$vFXQy98^y3oT~8Yv>$EX0!uiRae?m z_}pvK=rBy5Z_#_!8QEmix_@_*w8E8(2{R5kf^056;GzbLOPr2uqFYaG6Fkrv($n_51%7~QN<>9$WdjE=H}>(a41KM%d2x#e@K3{W|+=-h*mR&2C01e z2sMP;YjU)9h+1kxOKJ+g*W=&D@=$q4jF%@HyRtCwOmEmpS|Rr9V_2br*NOd^ z4LN#oxd5yL=#MPWN{9Vo^X-Wo{a7IF2hvYWB%eUCkAZq+=NQ=iLI9?~@ zr+|ky4Rgm7yEDuc2dIe941~qc8V_$7;?7|XLk6+nbrh}e&Tt20EWZ@dRFDoYbwhkn zjJ$th974Z0F${3wtVLk_Ty;*J-Pi zP0IwrAT!Lj34GcoSB8g?IKPt%!iLD-$s+f_eZg@9q!2Si?`F#fUqY`!{bM0O7V^G%VB|A zyMM>SKNg|KKP}+>>?n6|5MlPK3Vto&;nxppD;yk@z4DXPm0z9hxb+U&Fv4$y&G>q= z799L0$A2&#>CfSgCuu$+9W>s<-&yq3!C{F9N!{d?I|g|+Qd9@*d;GplgY5Fk$LOV+ zoMealKns!!80PWsJ%(}L61B!7l?j1_5P#LRrVv%NBhs{R`;aufHYb&b+mF%A+DGl5 zBemAHtbLFi++KT(wv9*?;awp>ROX~P?e<4#Uf5RKIV{c3NxmUz!LYO#Cxdz*CoRQp zSvX|#NN06=q_eTU5-T!RmUJ?Ht=XQF8t)f+GnY5nY5>-}WLR1+R5pou?l@Y|F@KEX zk=jh-yq=Rn9;riE*;Slo}PfNKhXO#;FrZCf%VZ9h7W z<63YWE^s_SlAVQh6B(En9i<9%4AT|2bTQ4Ph2)pI?f2S`$j?bp`>_3(`Fz&?ig-FJ zoO7KAh@4BDOU>sBXV84Eajr9;>wlbW&OSUt&dug?oAV;`+3oBzpI18%%1wA4blzmb z-{QPYJmn_2-F$A5JI!a8+-p8Bk*^U?^f5j7uZ}jEz0E3;XbahB2iZwS&l4jj4WRS6 z3O&!w=ymQSl~7LUE99noXd2y1)9E>yK`+ouR%sTOQ@Qjt@<;lErGLk1wrw7r zV)M})+amJXs_9hQa++&vrqgU&Xr8T)=G&5Vy6vOnvt37L*nU7&ws&ZO-9`)TGA**t zpby#0X|df;etRud+s~#Y_7zlPZ=_oLg%q&wraF6s>g@;VO#2sUseO=^+3%&Z?61(- z_IKzU`+Kw;Blil&LR#qv&{rzQnG|%i(Q3zLI@gh)2FE^H;~1dx9G|AOj(e%mSwT(C z71Zp!jar*i3S|_ik_3{n0L4KavYWWZ2x3MhyU!66E$h=L+A&-s$9X_w9Q_e;+`-{ZW# z^Zn2H_I~`}!vGeFRRY^DyKK#pORBr{&?X}ut`1a(x__(dt3y_-*Np0pX~q39D{Rns z!iXBWZO~+oZu>($Mrf0rjM>$JZar!n_0_!*e@yT7n=HfVT6#jbYZ0wYEXnTgPDZ0N zVE5?$1-v94G2@1jFyj##-E1Um(naG-8WuGy@rRAg)t9Oe0$RJ3OoWV8X4DXvW+ftx zk%S(O8h?#_3B9-1NHn&@ZAXtr=PXcAATV*GzFBXK>hVb9*`iMM-zvA6RwMH#2^901uxUFh&4fT% zmP?pjNsiRIMD)<6xZyOeThl_DN_ZJ*?KUIHgnx{vz`WKxj&!7HbM8{w?{Rued(M1v zKHsK{_q=YI88@Bf0*RW@cIV@=<{eGsG21xrTrWycT7*KBd!eD2zb1R(O@H~k7>Duv zHPwp=n8;t#1>7~fuM9IaD5w%BpwLtNCe_Sq9eal4oj2DB1#<+(MGR-P&Ig%3t%=!< zS$|KxI1a~an2Q>L$s;1$9nQJal4dk)Box$YsAKgCiEGni##jr|%So6Y4J@pYBF!;~ zhXwpKhc7&QZ$=e~Sb&ABZ4o)&U~N*dSU`2G^eQh-WCe9tA}~Ae369btLlB{GjOKB@yEDH!C7Q&df^#X zi~?{rCuAE|kAjKzt+r#t6s)1h840@A<%i5(O;$Q&tD(opg0)yzgm#=ucf4CSqkqYS zaTdivk5I~#=1Z9K5M*uV6H??6s9*ynT`vzr2@%Tkr4k+Tr_ib40$fPP7$yLA$cwJ@ zF@`94=op)$x^0t+QAsNY$pi!4e7hp~gO=|yD=^8JTvTiC(HAamYEQ}t z+hR~QoKTOz%)IHEg&6iC4vP=3mw&u4wvcSwi$vNBGQE5RoSUs^l+u{A+6s~aMMkXG z+1g4wD8^Y27Oe4f``K{+tm76n(*d6BUA4;pLa26`6RD6?Rq?2K1yMXVAk`&xbks*~{+``Mhg4cQEuw+aM zaI9{}9en8DCh*S9CojIk)qh|k?#iNiCQ}rAmr&iYRJiND ztt+j*c+}Fv&6x&7U~!(Sb1eAz1N@Nf`w?YxGJdhy+seiNNZEYIG1_<^?&pm^P8W?d ze(p@$nWC`Pxqpf8d&AIGNJn#Ty)j z1NbA^Y}pNQ>OfTdiAp+WR>C6390IrFj;YZglitGH8r7(GvVRpWjZd7|r24M{u66B) zs#VS$?R*!1FT&sO-ssvW8s5jh$-O=^9=7^y z75||~QA6zLW}Lu!YOZh1J$j46m 
zNH|;^a$U_RKgla5h>5(igl^ek(~2nL5a_0}ipvA_Xf0k*E-ExJNld0{LZ;F^DzqAL+IZGJ7<3i1szf zxMRkQ(|@;wj9%I7h{c*{;?g%giylU}Dz{iwb(1vGK<-vlnKs!|Mb9}iTt)Rl&NZka zkkugrMiY(ng3QseY!npaOf1jo3|r35nK+eTYh*`DHabuv@IFy zG7@V!LWE0&)bvqgQ8=-L-(vt#Z-&xaOj3G@Nqw1FfbNQ`!bFEl@z)0)+#Z5e#_hQ|Rd!KrEoRn^aFz zkzYzz%hher>ixcg6fW`=rr>Nx@enQ!sQqYR{<2^|eUfw?e8;B_`T)Kxkp8${U>g?k*VhCd zp^yYLvi}<#5TDjrx@{0U$jx*tQn+mhcXsq2e46a@44^-Sd;C6S2=}sK1LQ_OUhgO` z^4yN+e9Dv9TQ64y1Bw)0i4u)98(^+@R~eUUsG!Ye84 zFa7-?x3cqUXX)$G<2MgYiGWhjq?Q-CE(|sm-68_z>h_O2vME5nX;RodIf)=No(={I z_<&3QJcPg8kAI}_Vd+OH4z{NsFMmjv3;kunMSh94VNnqD?85uOps%nq=q?kU_JT5@ zwih;eQlhxr)7d^K#-~InWlc&<*#?{A(8f^+C_WmRR{B&Yh3pxhLU9-toLz%rCPi}} zE!cw^pQlXB3aACUpacU&ZlBUl(Jo4fxpbDVwDn^m{VG||ar9B)9}@K`(SJxmAWro& z_3yzfUqLoXg`H($!I;FTudPdo6FTJm2@^S|&42H(XbSRW7!)V&=I`{;mWicu@BT7z zQs!)F9t-K|aFaMsoJ_6z-ICrzjW5#yJRs>~)bugki)ST$8T%!D4F@EBliCNSA5!fl zN;OuKbR3m0rj=rrq}5`nq<<%iHIl|euXt6QA}$hFNqV)oR?_Rm4oPnoLy|ru_DQ-= zJTDFa;zjY2p{sg zWqz0I5y>-U{xR1Rl4r{NQ?6Ge&y@N7t~Vsll=-(^?@FF2^Y6JnkbgW==09{7N}eh4 z?h`%x-LM8D}+*41ZA#EG0D9KQjc2#z59Pq zO9u!y^MeiK3jhHB6_epc9Fs0q7m}w4lLmSnf6Gb(F%*XXShZTmYQ1gTje=G?4qg`Z zf*U~;6hT37na-R}qnQiIv@S#+#J6xEf(swOhZ4_JMMMtdob%^9e?s#9@%jc}19Jk8 z4-eKFdIEVQN4T|=j2t&EtMI{9_E$cx)DHN2-1mG28IEdMq557#dRO3U?22M($g zlriC81f!!ELd`)1V?{MBFnGYPgmrGp{4)cn6%<#sg5fMU9E|fi%iTOm9KgiN)zu3o zSD!J}c*e{V&__#si_#}hO9u$51d|3zY5@QM=aUgu9h0?tNPn1w)HWnB7LQ^GRUjeP z(zSg-y4St;3UIQ}ZX?^;ZtL2n4`>^*Y>Trk?aBtSQ(D-o$(D8Px^?ZI-PUB?*1fv! z{YdHme3Fc8%cR@*@zc5A_nq&2=R47Hp@$-JF4Fz*;SLw5}|ID{W__bHvfJIivHmqmPXlPJd^=<$8K97bHK^(i8eAy)&m< zBc1z)P8b<4NOeqgIeTQpaF|x5YV1#`#T`tctbN+b*?N{~O)bV<K z^y>s-s;V!}b2i=5=M-ComP? zju>8FPIq0VrdV5*EH$|!Ot;e=VudJExcb;2wST}N#u?M~TxGC_!?ccCHCjt|F*PgJ zf@kJB`|Ml}cmsyrAjO#Kjr^E5p29w+#>$C`Q|54BoDv$fQ9D?3n32P9LPMIzu?LjNqggOH=1@T{9bMn*u8(GI!;MGs%MKpd@c!?|2x+D-Rsw10~pU|Rn@A}C1xOlxCribxes0~+n26qDaI zA2$?e`opx3_KW!rAgbpzU)gFdjAKXh|5w``#F0R|c)Y)Du0_Ihhz^S?k^pk%P>9|p zIDx)xHH^_~+aA=^$M!<8K~Hy(71nJG(ov0$3Fg{n+QicHk{UcoFg0-esGM}1X@Ad~ zBS?mZCLw;l4W4a+D8qc)XJS`pUJ5X-f^1ytxwr`@si$lAE?{4G|o;O0l>` zrr?;~c;{ZEFJ!!3=7=FdGJ?Q^xfNQh4A?i;IJ4}B+A?4olTK(fN++3CRBP97jTJnI zF!X$o@{%29Dqq5zt&v4zmF$4E8GqYQko@>U1_;EC_6ig|Drn@=DMV9YEUSCaIf$kH zei3(u#zm9I!Jf(4t`Vm1lltJ&lVHy(eIXE8sy9sUpmz%I_gA#8x^Zv8%w?r2{GdkX z1SkzRIr>prRK@rqn9j2wG|rUv%t7pQ!2SrmOQRpAcS|Wp-{6gg=|^e5#DDOQVM?H4 z;eM-QeRFr06@ifV(ocvk?_)~N@1c2ien56UjWXid6W%6i zevIh)>dk|rIs##^kY67ib8Kw%#-oVFaXG7$ERyA9(NSJUvWiOA5H(!{uOpcWg&-?i zqPhds%3%tFspHDqqr;A!N0fU`!IdoMs=lv7E*9NYeVfBht~=W5wtrfcc#o#+l8s8! z(|NMeqjsy@0x{8^j0d00SqRZjp{Kj)&4UHYGxG+z9b-)72I*&J70?+8e?p_@=>-(> zl6z5vYlP~<2%DU02b!mA{7mS)NS_eLe=CB zc62^$j+OeC%Nkvg?0*n6EKlkPQ)EUvvfC=;4M&*|I!w}(@V_)eUKLA_t^%`o0PM9L zV|UKTLnk|?M3u!|f2S0?UqZsEIH9*NJS-8lzu;A6-rr-ot=dg9SASoluZUkFH$7X;P=?kY zX!K?JL-b~<#7wU;b;eS)O;@?h%sPPk{4xEBxb{!sm0AY|>CXVS(_RT9YPMpChUjl310o*$QocjGdf>jS%%kn_+Y;Ztbauie*k&Q@=9;erLneIoel2C zfCMiPTmYnjjxjV!Ar1h1yQ-31h=b@RZt-play?)#cs=ZxOt;5oX)|*e=7k*ASmQ;r zO4_`=Z&gX-C2$fitvq+iGK1U*^*#IW!Bo{nON%KSxQv@MZsO%Lx21x78z740FSW!f zJ%f-?XMgR#xdurqd6mWyUX2uh=Si>bnwg#gssR#jDVN{uEi3n(PZ%PFZ|6J25_rBf z0-u>e4sFe0*Km49ATi7>Kn0f9!uc|rRMR1Dtt6m1LW8^>qFlo}h$@br=Rmpi;mI&> zOF64Ba2v-pj&TB}f&A09bMg?1id{fne%>Q?9GLm{i~p^lAn!%ZtF$I~>39XVZxk0bROh^B zk9cE0AJBLozZIEmy7xG(yHWGztvfnr0(2ro1%>zsGMS^EMu+S$r=_;9WwZkg z)ww}6KOsH_)RkMh?x@N2R^3(SICQNAzP7(RdB{@@`v*GfeSYLv=cfmTC%s2_T@_Cso2168v@AU^NzL&qv?6hZBJEdb)g=X=dVg9? 
zYf78=0c@!QU6_a$>CPiXT7QAGDM}7Z(0z#_ZA=fmLUj{2z7@Ypo71UDy8GHr-&TLK zf6a5WCf@Adle3VglBt4>Z>;xF}}-S~B7<(%B;Y0QR55 z{z-buw>8ilNM3u6I+D$S%?)(p>=eBx-HpvZj{7c*_?K=d()*7r74N{MulF2dQ*rGJ8Al=QJ~zb`)MPYedy2kVl9jXxdnmn`&r8ut0w>q?93 zus}1dq%FAFYLsW8ZTQ_XZLh`P2*6(NgS}qGcfGXVWpwsp#Rs}IuKbk*`2}&)I^Vsk z6S&Q4@oYS?dJ`NwMVBs6!1v<013>Q(y%%a0i}Y#1 z-F3m;Ieh#Y12UgW?-R)|eX>ZuF-2cc!1>~NS|XSF-6In>zBoZg+ml!6%fk7Uw0LHc zz8VQk(jOJ+Yu)|^|15ufl$KQd_1eUZZzj`aC%umU6F1&D5XVWcUvDqcUtW@*>xfVd z@!G2_v`obR5 zU*UT{eMHfcUo`jw*u?4r2s_$`}U{?NjvEm(u&<>B|%mq$Q3weshzrh!=m4 zH~yPq{qO0O>o|+xpE_i3$yVP%gs2l20HBh&_;PzZtwMPqQDk4~L}0tfu;d4uxUM8h zx$5GP@d7%rg(9Y8!9@i+9&2l=3<|?le_)g9Z)PQ5ESCo?x4680QstTl-CH_ z5m)j*Epfqj7I|G0-*vpm?U#8&k?((2zg;QYNszIUs?zAIGUr9}em3I$Fhb*w9-ci~gV$1;8(U;p&SDZE^3_CNLX1zM3@E|W%A=rX4; zwOlLm!AP*(*Bl0rL_(L=6`Hv5>_8;g?VljGOuMhr8|fxKG|7jrCnCW}AbEe8A8O*a z;rbQWArFQUVyZaIdGyF7WbZ8lvQ6v;yEgG7uqYA&H#G5ad?wWuhnhHBvUGfsN3K^( zewji7_p=ede8DTP$FEa_M(6|&v8m{z@NJ&XsIgEPpP?ss9mYaeWBd+!UX6vy_yzie z8Vi;2C+U(J3ze}%uZ)Gt_+?D`yc!FY@z?1aYAjU7Z=eB`u~3ZJ#|<)8RL1SxrN%;K zoZ+XHo~5{G1p40!tUgK$I7L3rV9Y8@Eg;`_0Z>Z^2tPilXQ&PU0NNXq;YJ*jtBNjv zYflqF6o%gs=t3z%xd|2&*IQdyR=^LH8WYpRgrrep4Mx6Aw}fxhSE$jN z_`x6Gk20R2MM&C)-R$h{nfE#GnVgwFe}DZ3unAM(^yK7C>62cU)*<-~eOtHo^)=lJ zyq4q2*a>{Y3mU}nkX(`x@nlm*hSem0>o7{ZNZ;OQ5dw>RYT0 zOXvK4;<_A&n$p-%65n=wqR{bejviAOu@}cn>s#w3qd~{|=TQiObS+3ii(WV`2`mPo zZQ7x1xMY3^WvfM@Sq*HPLJh+LQwQ=`ny&P1^Hu$TtXM-zVD=*VoC&`n>n>@37!?>f zN*sy>#GXLvspC8GGlAj!USU^YC|}skAcN~^Xqe0(jqx#zAj>muU<=IUs~34|v06u2 zahGbSeT-uAG|Vv*Bw$#pf8#qXFtMfw|VuC{UeT)2WpJ6&O+E6jF; z;~n9>cf~Ip6j-_@&PGFD0%Vu*QJ@Ht`C7Og!xt#L>mqlJGEh<%*ATJUmZc(FfNSB## zfy_`Y-70r{Iv3jEfR|~Ii!xC44vZ(KNj#>kjsE86E3FB*OayD~$|}3Y&(h6^X|1(TcJ}8{Ua3yL1loSfg!2gTekn ztVO7WNyFQCfwF2ti$UvL8C6{{IPBg01XK~$ThIQx{)~aw>(9F2L#G36*kRDPqA$P* znq=!@bbQ#RzDpVIfYc*x9=}2N^*2z1E%3epP)i30>M4^xlbnuWe_MAGRTTb?O*?TC zw6v5$6bS)qZqo=w4J~*9i;eVx4NwO!crrOjhE8U(&P-ZZU9$We^ubqNd73QDTJqqV z55D;u{1?`JQre~$mu9WZ%=z|x?{A;q|NiAy0GH5U*nIM2xww(4aBEe#)zoy#s-^NN z%WJl5hX=Oj8cnY%e+ZYt5!@FfY;fPO8p2xj+f6?;UE_`~@~KwcX!4d}D<7hA<#M$$ zMY^)MV_$1K4gr3H8yA&|Ten>yr0v!TT@%u$ScDfRrzVR=Rjj3cjDj)fWv?wQanp7L zL)Me^LS6EzBMR%1w^~9L%8&g(G;d3f4uLKFIqs5JYKSlle?R1Fyx?%RURbI;6jq>N zh+(uYf`e8J=hO2&ZQCoTU^AKRV>_^&!W{P-3%oVMaQqOcL1!4cYP)vuF~dMQb1#lK zj_HWu4TgBXPYuJQYWv&8km~(7Mlh=5I8HE}*mJ#?mxhx%#+9e>eorO0)eg#m6uhb7 zG^KSg`Cbxlf9XizZH9>B@hZcqJ*7VTp6)w1tHLB11}(?)MI0$rLIUS0;Z^atECLmz zzb6FE#PKdBl;L{}$M%UdWEi4$AS4ew$#8O?ZRr(G4syuHkcGi8a#*gRz@QP|7R93= zj*A$L;eA}9id+JyWjkK`Mod00;{&DlA!QJFR3&ljf1vI*O1ec{(V=0QA?ELLVls-W z``ELsu7M`3`vI4MzhVcpJ!9#^KGjq|#b-J`!F7h${dUEFmBLuMbYu>nV^(S3q+UC; z7s@e_qZG#+N=oo0o$G1>6Y0a{9@&9;EU2+8k|7P6p?HMh|8#X5UnwpxGbHw;%WXHX zn_~8ne zdvw09V+G$(lhoq7L}=qb+OaPSD&;$TuUtG(4;py(h)8|Nord(*d1ZH-Dmw1MqU&RK ziI)26r-hE(pqnmo4uixe^`qea7(_HA_ zR2KjdJ4$g!)7ve&Q^b1Tf+{(Vd6vInCd>i725IomG^(Ez( zD8L!4qlUAX=)EV9!3JfWLB4n1z)!ums&0UuuVLUHP)i30*5f6tnvk?lbhL{|8I78X7|_c zA3p(L9<~X5y1L3{K8Sf*xL|5gToDT;aYig?m8z^zQ`XdEMJqC#*O|ho!7x~+MzT<5 zg$turF~pS;RSY&GR;6TxR)3Q+&%yG`3&ngIwR*qK&t{TERu@0|fDrKKw3=RE&t-)Xh-$i&l5|>BSn5)z)hg3d?<~8msU=ye z>CHWR!9yT;PU|$KP*qADf(V?zj^n^g~nykv^I)Uz3{78Ty81{n~ZsS&7WH)#Ach3%UyVD1s=Ahvw9*%Wt<42vTt%|niux3Zww13+oK)-d~ zG>VKHM0ov>KXKaUH(Cc)#9GFVSc4EoUbnRudxi}T8J!VNY=4g*Y7C*Ho7#^wUVt&< zKN3&ugs1Ur<767&ea4^1oBw%@h^+YZ+eK^VI5573*KZosq? 
zpMj(u5257?^lBu&LF9`ao`sYf9&zx;uK2iv&$;8{4nFUSFF5$3JHFuHORo5YgFkV{ zCmcNEicdQDvO7NM;484|f=_+6!)x%g1CL;L9DE%%T=1xaKZ8v-+-@x1OZ;|0_a9J8 z2MFd71j+6K002-1li@}jlN6Rde_awnSQ^R>8l%uQO&WF!6qOdxN;eu7Q-nHAUeckH znK(0P3kdECiu+2%6$MdLP?%OK@`LB_gMXCA`(~0RX;Tm9uJ&d7>n%9A~GP*{Zrpyh7B^|a-)|8b<&(!>OhWQ08 z$LV}WQ`RD4Od8d3O-;%vhK7#W<7u;XvbxQo0JX@fY(C0RS6^zcd>jo287k@<4tg;k z3q5e5hLHE@&4ooC)S|`w7N|jm>3tns$G}U4o!(2g=!}xLHp?+qFvj$ztd<%96=4tCKGG@ADSX{=m zNZ@ho6rr?EOQ1(G2i@2;GXb&S#U3YtCuVwc*4rJcPm$kZf2+|!X~X6%(QMj{4u)mZ zOi!(P(dF3hX4ra9l=RKQ$v(kJFS#;ib+z9K^#Gle6LKa>&4oMFJ4C&NBJ7hhPSIjc zOno$M6iq+l;ExpH9rF68@D3-EgCCf}JJSgVPbI1$?JjPPX!_88InA}KX&=#cFH#s3 zIx<6LeY==wf5DK*jP`hqF%u+|sI)3HfyywfAj=0OMNUX2pLR;T(8c+$g&}Z#q9L>( zD~t~l&X^VFXp@&w92f8tq+KXMZ&o!an%$#uo^hJh^9-RjEvqE_s%H8{qw(juo4?SC z{YhO*`|H*ibxm%ZF6r=2QC)bE`d3oZ(~?;a-(mX) zb!|i%p!VVP>DN6tg*Ry97gUPUJj<}OxaYL1nXE}hxs-O{twImUw43Eo6nJ4_RTDIQALB8H!3nq37 zcE6>oNG;jZZhXh!vORPsMKfzJ8_*?O7DfGmcrL8A(_NAhSH+JE?u?`xR1|ZThDb;2 zDt`9hC;UQ%94^20-MA*;<$KO0{3b&9y(ENIe@&xj6>X23)Ftc?ax=4pL5FZ06CPOj zgG%2*F$-x6 z&si`nj955%8LK)caVl1M8?IPaMPtM85o>MvPUn@(X=!wZq0)at}MK|kJ&KJggGx6y?Ey21qiw~76MoISk z+LyUR=2+oJK1IoYOX~R}S1x>iblZ|_oAmqhyU+NpxvjQb;Ht{pO_xn4T+UO<73|gD zaq0Wtdz^7GoZq-Fu+;61dX%|tud0myO`{vHTlP*oes5OaTBV$=y?3V{mRnFLdQ!Hj z)lErp+uBchtEPv?ao=?feR1oRVaUdpIVC}+xkgTxPYSGDyR2Zw++VdTe(-~Oh=P%c zFD5UUvx;?cLREy~~@9BnQ?{+kh7j7^BGZ3r}vC zuRPgbSbFk*%f8<`nm*%=sYP!wJk1uNV$&qN0K`bt|AMMaWeMf&qirQ!Dt0FDJ8`4KXRTiO^HPz`BO1{-ofSrz0YR`9K0lLHorGM!h0O0Z3yut19ieErkD1!7DO zG~nX@7pO{uE-YFOTtaXT=wTxi=Y>zUU+BjIx>jcL#D!u^>AGNjXBL{vAZ}$~KnuVC z1E3-$;H5MCAlFEP4~z$T=^-$HP(wOqa`hr78Te`EKnLicSpL~^a?K*8$-ft=N<+?q zW?-0u5gn^0TQByPK^#BKz~G2th_L-+o5j*dCr4Ycg3q*_+`m|qNyu^Xvc-|obKpm+ zGBD_)==PZ0utaRK!4gv$&;gX1%nS@qfG$9_!NzrRSv~>`eq9tbPbwj5K&x^fX&o_o$H1U~ zqIOd?L@oQ|Bg^Gwz#}riv?K=%D|r-k8@s@c6Ir1u0~(i50a^-LyMmf7oO;2EvR3Fw zgF8gPQ1=7g{c3<>(&5P)SNO;vnvv+PKQakyh~7$L8Bq2Q1{!dbhk-!@#SpP+P(|#M SXRcJ{65?fGI57uQ5&!`B?F@7P delta 34554 zcmX7vV`H6d(}mmEwr$(CZQE$vU^m*aZQE(=WXEZ2+l}qF_w)XN>&rEBu9;)4xt<3b zo(HR^Mh47P)@z^^pH!4#b(O8!;$>N+S+v5K5f8RrQ+Qv0_oH#e!pI2>yt4ij>fI9l zW&-hsVAQg%dpn3NRy$kb_vbM2sr`>bZ48b35m{D=OqX;p8A${^Dp|W&J5mXvUl#_I zN!~GCBUzj~C%K?<7+UZ_q|L)EGG#_*2Zzko-&Kck)Qd2%CpS3{P1co1?$|Sj1?E;PO z7alI9$X(MDly9AIEZ-vDLhpAKd1x4U#w$OvBtaA{fW9)iD#|AkMrsSaNz(69;h1iM1#_ z?u?O_aKa>vk=j;AR&*V-p3SY`CI}Uo%eRO(Dr-Te<99WQhi>y&l%UiS%W2m(d#woD zW?alFl75!1NiUzVqgqY98fSQNjhX3uZ&orB08Y*DFD;sjIddWoJF;S_@{Lx#SQk+9 zvSQ-620z0D7cy8-u_7u?PqYt?R0m2k%PWj%V(L|MCO(@3%l&pzEy7ijNv(VXU9byn z@6=4zL|qk*7!@QWd9imT9i%y}1#6+%w=s%WmsHbw@{UVc^?nL*GsnACaLnTbr9A>B zK)H-$tB`>jt9LSwaY+4!F1q(YO!E7@?SX3X-Ug4r($QrmJnM8m#;#LN`kE>?<{vbCZbhKOrMpux zTU=02hy${;n&ikcP8PqufhT9nJU>s;dyl;&~|Cs+o{9pCu{cRF+0{iyuH~6=tIZXVd zR~pJBC3Hf-g%Y|bhTuGyd~3-sm}kaX5=T?p$V?48h4{h2;_u{b}8s~Jar{39PnL7DsXpxcX#3zx@f9K zkkrw9s2*>)&=fLY{=xeIYVICff2Id5cc*~l7ztSsU@xuXYdV1(lLGZ5)?mXyIDf1- zA7j3P{C5s?$Y-kg60&XML*y93zrir8CNq*EMx)Kw)XA(N({9t-XAdX;rjxk`OF%4-0x?ne@LlBQMJe5+$Ir{Oj`@#qe+_-z!g5qQ2SxKQy1ex_x^Huj%u+S@EfEPP-70KeL@7@PBfadCUBt%`huTknOCj{ z;v?wZ2&wsL@-iBa(iFd)7duJTY8z-q5^HR-R9d*ex2m^A-~uCvz9B-1C$2xXL#>ow z!O<5&jhbM&@m=l_aW3F>vjJyy27gY}!9PSU3kITbrbs#Gm0gD?~Tub8ZFFK$X?pdv-%EeopaGB#$rDQHELW!8bVt`%?&>0 zrZUQ0!yP(uzVK?jWJ8^n915hO$v1SLV_&$-2y(iDIg}GDFRo!JzQF#gJoWu^UW0#? 
z*OC-SPMEY!LYcIZO95!sv{#-t!3Z!CfomqgzFJld>~CTFKGcr^sUai5s-y^vI5K={ z)cmQthQuKS07e8nLfaIYQ5f}PJQqcmokx?%yzFH*`%k}RyXCt1Chfv5KAeMWbq^2MNft;@`hMyhWg50(!jdAn;Jyx4Yt)^^DVCSu?xRu^$*&&=O6#JVShU_N3?D)|$5pyP8A!f)`| z>t0k&S66T*es5(_cs>0F=twYJUrQMqYa2HQvy)d+XW&rai?m;8nW9tL9Ivp9qi2-` zOQM<}D*g`28wJ54H~1U!+)vQh)(cpuf^&8uteU$G{9BUhOL| zBX{5E1**;hlc0ZAi(r@)IK{Y*ro_UL8Ztf8n{Xnwn=s=qH;fxkK+uL zY)0pvf6-iHfX+{F8&6LzG;&d%^5g`_&GEEx0GU=cJM*}RecV-AqHSK@{TMir1jaFf&R{@?|ieOUnmb?lQxCN!GnAqcii9$ z{a!Y{Vfz)xD!m2VfPH=`bk5m6dG{LfgtA4ITT?Sckn<92rt@pG+sk>3UhTQx9ywF3 z=$|U(bN<=6-B4+UbYWxfQUOe8cmEDY3QL$;mOw&X2;q9x9qNz3J97)3^jb zdlzkDYLKm^5?3IV>t3fdWwNpq3qY;hsj=pk9;P!wVmjP|6Dw^ez7_&DH9X33$T=Q{>Nl zv*a*QMM1-2XQ)O=3n@X+RO~S`N13QM81^ZzljPJIFBh%x<~No?@z_&LAl)ap!AflS zb{yFXU(Uw(dw%NR_l7%eN2VVX;^Ln{I1G+yPQr1AY+0MapBnJ3k1>Zdrw^3aUig*! z?xQe8C0LW;EDY(qe_P!Z#Q^jP3u$Z3hQpy^w7?jI;~XTz0ju$DQNc4LUyX}+S5zh> zGkB%~XU+L?3pw&j!i|x6C+RyP+_XYNm9`rtHpqxvoCdV_MXg847oHhYJqO+{t!xxdbsw4Ugn($Cwkm^+36&goy$vkaFs zrH6F29eMPXyoBha7X^b+N*a!>VZ<&Gf3eeE+Bgz7PB-6X7 z_%2M~{sTwC^iQVjH9#fVa3IO6E4b*S%M;#WhHa^L+=DP%arD_`eW5G0<9Tk=Ci?P@ z6tJXhej{ZWF=idj32x7dp{zmQY;;D2*11&-(~wifGXLmD6C-XR=K3c>S^_+x!3OuB z%D&!EOk;V4Sq6eQcE{UEDsPMtED*;qgcJU^UwLwjE-Ww54d73fQ`9Sv%^H>juEKmxN+*aD=0Q+ZFH1_J(*$~9&JyUJ6!>(Nj zi3Z6zWC%Yz0ZjX>thi~rH+lqv<9nkI3?Ghn7@!u3Ef){G(0Pvwnxc&(YeC=Kg2-7z zr>a^@b_QClXs?Obplq@Lq-l5>W);Y^JbCYk^n8G`8PzCH^rnY5Zk-AN6|7Pn=oF(H zxE#8LkI;;}K7I^UK55Z)c=zn7OX_XVgFlEGSO}~H^y|wd7piw*b1$kA!0*X*DQ~O` z*vFvc5Jy7(fFMRq>XA8Tq`E>EF35{?(_;yAdbO8rrmrlb&LceV%;U3haVV}Koh9C| zTZnR0a(*yN^Hp9u*h+eAdn)d}vPCo3k?GCz1w>OOeme(Mbo*A7)*nEmmUt?eN_vA; z=~2}K_}BtDXJM-y5fn^v>QQo+%*FdZQFNz^j&rYhmZHgDA-TH47#Wjn_@iH4?6R{J z%+C8LYIy>{3~A@|y4kN8YZZp72F8F@dOZWp>N0-DyVb4UQd_t^`P)zsCoygL_>>x| z2Hyu7;n(4G&?wCB4YVUIVg0K!CALjRsb}&4aLS|}0t`C}orYqhFe7N~h9XQ_bIW*f zGlDCIE`&wwyFX1U>}g#P0xRRn2q9%FPRfm{-M7;}6cS(V6;kn@6!$y06lO>8AE_!O z{|W{HEAbI0eD$z9tQvWth7y>qpTKQ0$EDsJkQxAaV2+gE28Al8W%t`Pbh zPl#%_S@a^6Y;lH6BfUfZNRKwS#x_keQ`;Rjg@qj zZRwQXZd-rWngbYC}r6X)VCJ-=D54A+81%(L*8?+&r7(wOxDSNn!t(U}!;5|sjq zc5yF5$V!;%C#T+T3*AD+A({T)#p$H_<$nDd#M)KOLbd*KoW~9E19BBd-UwBX1<0h9 z8lNI&7Z_r4bx;`%5&;ky+y7PD9F^;Qk{`J@z!jJKyJ|s@lY^y!r9p^75D)_TJ6S*T zLA7AA*m}Y|5~)-`cyB+lUE9CS_`iB;MM&0fX**f;$n($fQ1_Zo=u>|n~r$HvkOUK(gv_L&@DE0b4#ya{HN)8bNQMl9hCva zi~j0v&plRsp?_zR zA}uI4n;^_Ko5`N-HCw_1BMLd#OAmmIY#ol4M^UjLL-UAat+xA+zxrFqKc@V5Zqan_ z+LoVX-Ub2mT7Dk_ z<+_3?XWBEM84@J_F}FDe-hl@}x@v-s1AR{_YD!_fMgagH6s9uyi6pW3gdhauG>+H? 
zi<5^{dp*5-9v`|m*ceT&`Hqv77oBQ+Da!=?dDO&9jo;=JkzrQKx^o$RqAgzL{ zjK@n)JW~lzxB>(o(21ibI}i|r3e;17zTjdEl5c`Cn-KAlR7EPp84M@!8~CywES-`mxKJ@Dsf6B18_!XMIq$Q3rTDeIgJ3X zB1)voa#V{iY^ju>*Cdg&UCbx?d3UMArPRHZauE}c@Fdk;z85OcA&Th>ZN%}=VU%3b9={Q(@M4QaeuGE(BbZ{U z?WPDG+sjJSz1OYFpdImKYHUa@ELn%n&PR9&I7B$<-c3e|{tPH*u@hs)Ci>Z@5$M?lP(#d#QIz}~()P7mt`<2PT4oHH}R&#dIx4uq943D8gVbaa2&FygrSk3*whGr~Jn zR4QnS@83UZ_BUGw;?@T zo5jA#potERcBv+dd8V$xTh)COur`TQ^^Yb&cdBcesjHlA3O8SBeKrVj!-D3+_p6%P zP@e{|^-G-C(}g+=bAuAy8)wcS{$XB?I=|r=&=TvbqeyXiuG43RR>R72Ry7d6RS;n^ zO5J-QIc@)sz_l6%Lg5zA8cgNK^GK_b-Z+M{RLYk5=O|6c%!1u6YMm3jJg{TfS*L%2 zA<*7$@wgJ(M*gyTzz8+7{iRP_e~(CCbGB}FN-#`&1ntct@`5gB-u6oUp3#QDxyF8v zOjxr}pS{5RpK1l7+l(bC)0>M;%7L?@6t}S&a zx0gP8^sXi(g2_g8+8-1~hKO;9Nn%_S%9djd*;nCLadHpVx(S0tixw2{Q}vOPCWvZg zjYc6LQ~nIZ*b0m_uN~l{&2df2*ZmBU8dv`#o+^5p>D5l%9@(Y-g%`|$%nQ|SSRm0c zLZV)45DS8d#v(z6gj&6|ay@MP23leodS8-GWIMH8_YCScX#Xr)mbuvXqSHo*)cY9g z#Ea+NvHIA)@`L+)T|f$Etx;-vrE3;Gk^O@IN@1{lpg&XzU5Eh3!w;6l=Q$k|%7nj^ z|HGu}c59-Ilzu^w<93il$cRf@C(4Cr2S!!E&7#)GgUH@py?O;Vl&joXrep=2A|3Vn zH+e$Ctmdy3B^fh%12D$nQk^j|v=>_3JAdKPt2YVusbNW&CL?M*?`K1mK*!&-9Ecp~>V1w{EK(429OT>DJAV21fG z=XP=%m+0vV4LdIi#(~XpaUY$~fQ=xA#5?V%xGRr_|5WWV=uoG_Z&{fae)`2~u{6-p zG>E>8j({w7njU-5Lai|2HhDPntQ(X@yB z9l?NGoKB5N98fWrkdN3g8ox7Vic|gfTF~jIfXkm|9Yuu-p>v3d{5&hC+ZD%mh|_=* zD5v*u(SuLxzX~owH!mJQi%Z=ALvdjyt9U6baVY<88B>{HApAJ~>`buHVGQd%KUu(d z5#{NEKk6Vy08_8*E(?hqZe2L?P2$>!0~26N(rVzB9KbF&JQOIaU{SumX!TsYzR%wB z<5EgJXDJ=1L_SNCNZcBWBNeN+Y`)B%R(wEA?}Wi@mp(jcw9&^1EMSM58?68gwnXF` zzT0_7>)ep%6hid-*DZ42eU)tFcFz7@bo=<~CrLXpNDM}tv*-B(ZF`(9^RiM9W4xC%@ZHv=>w(&~$Wta%)Z;d!{J;e@z zX1Gkw^XrHOfYHR#hAU=G`v43E$Iq}*gwqm@-mPac0HOZ0 zVtfu7>CQYS_F@n6n#CGcC5R%4{+P4m7uVlg3axX}B(_kf((>W?EhIO&rQ{iUO$16X zv{Abj3ZApUrcar7Ck}B1%RvnR%uocMlKsRxV9Qqe^Y_5C$xQW@9QdCcF%W#!zj;!xWc+0#VQ*}u&rJ7)zc+{vpw+nV?{tdd&Xs`NV zKUp|dV98WbWl*_MoyzM0xv8tTNJChwifP!9WM^GD|Mkc75$F;j$K%Y8K@7?uJjq-w zz*|>EH5jH&oTKlIzueAN2926Uo1OryC|CmkyoQZABt#FtHz)QmQvSX35o`f z<^*5XXxexj+Q-a#2h4(?_*|!5Pjph@?Na8Z>K%AAjNr3T!7RN;7c)1SqAJfHY|xAV z1f;p%lSdE8I}E4~tRH(l*rK?OZ>mB4C{3e%E-bUng2ymerg8?M$rXC!D?3O}_mka? 
zm*Y~JMu+_F7O4T;#nFv)?Ru6 z92r|old*4ZB$*6M40B;V&2w->#>4DEu0;#vHSgXdEzm{+VS48 z7U1tVn#AnQ3z#gP26$!dmS5&JsXsrR>~rWA}%qd{92+j zu+wYAqrJYOA%WC9nZ>BKH&;9vMSW_59z5LtzS4Q@o5vcrWjg+28#&$*8SMYP z!l5=|p@x6YnmNq>23sQ(^du5K)TB&K8t{P`@T4J5cEFL@qwtsCmn~p>>*b=37y!kB zn6x{#KjM{S9O_otGQub*K)iIjtE2NfiV~zD2x{4r)IUD(Y8%r`n;#)ujIrl8Sa+L{ z>ixGoZJ1K@;wTUbRRFgnltN_U*^EOJS zRo4Y+S`cP}e-zNtdl^S5#%oN#HLjmq$W^(Y6=5tM#RBK-M14RO7X(8Gliy3+&9fO; zXn{60%0sWh1_g1Z2r0MuGwSGUE;l4TI*M!$5dm&v9pO7@KlW@j_QboeDd1k9!7S)jIwBza-V#1)(7ht|sjY}a19sO!T z2VEW7nB0!zP=Sx17-6S$r=A)MZikCjlQHE)%_Ka|OY4+jgGOw=I3CM`3ui^=o0p7u z?xujpg#dRVZCg|{%!^DvoR*~;QBH8ia6%4pOh<#t+e_u!8gjuk_Aic=|*H24Yq~Wup1dTRQs0nlZOy+30f16;f7EYh*^*i9hTZ`h`015%{i|4 z?$7qC3&kt#(jI#<76Biz=bl=k=&qyaH>foM#zA7}N`Ji~)-f-t&tR4^do)-5t?Hz_Q+X~S2bZx{t+MEjwy3kGfbv(ij^@;=?H_^FIIu*HP_7mpV)NS{MY-Rr7&rvWo@Wd~{Lt!8|66rq`GdGu% z@<(<7bYcZKCt%_RmTpAjx=TNvdh+ZiLkMN+hT;=tC?%vQQGc7WrCPIYZwYTW`;x|N zrlEz1yf95FiloUU^(onr3A3>+96;;6aL?($@!JwiQ2hO|^i)b4pCJ7-y&a~B#J`#FO!3uBp{5GLQfhOAOMUV7$0|d$=_y&jl>va$3u-H z_+H*|UXBPLe%N2Ukwu1*)kt!$Y>(IH3`YbEt; znb1uB*{UgwG{pQnh>h@vyCE!6B~!k}NxEai#iY{$!_w54s5!6jG9%pr=S~3Km^EEA z)sCnnau+ZY)(}IK#(3jGGADw8V7#v~<&y5cF=5_Ypkrs3&7{}%(4KM7) zuSHVqo~g#1kzNwXc39%hL8atpa1Wd#V^uL=W^&E)fvGivt)B!M)?)Y#Ze&zU6O_I?1wj)*M;b*dE zqlcwgX#eVuZj2GKgBu@QB(#LHMd`qk<08i$hG1@g1;zD*#(9PHjVWl*5!;ER{Q#A9 zyQ%fu<$U?dOW=&_#~{nrq{RRyD8upRi}c-m!n)DZw9P>WGs>o1vefI}ujt_`O@l#Z z%xnOt4&e}LlM1-0*dd?|EvrAO-$fX8i{aTP^2wsmSDd!Xc9DxJB=x1}6|yM~QQPbl z0xrJcQNtWHgt*MdGmtj%x6SWYd?uGnrx4{m{6A9bYx`m z$*UAs@9?3s;@Jl19%$!3TxPlCkawEk12FADYJClt0N@O@Pxxhj+Kk(1jK~laR0*KGAc7%C4nI^v2NShTc4#?!p{0@p0T#HSIRndH;#Ts0YECtlSR}~{Uck+keoJq6iH)(Zc~C!fBe2~4(Wd> zR<4I1zMeW$<0xww(@09!l?;oDiq zk8qjS9Lxv$<5m#j(?4VLDgLz;8b$B%XO|9i7^1M;V{aGC#JT)c+L=BgCfO5k>CTlI zOlf~DzcopV29Dajzt*OcYvaUH{UJPaD$;spv%>{y8goE+bDD$~HQbON>W*~JD`;`- zZEcCPSdlCvANe z=?|+e{6AW$f(H;BND>uy1MvQ`pri>SafK5bK!YAE>0URAW9RS8#LWUHBOc&BNQ9T+ zJpg~Eky!u!9WBk)!$Z?!^3M~o_VPERYnk1NmzVYaGH;1h+;st==-;jzF~2LTn+x*k zvywHZg7~=aiJe=OhS@U>1fYGvT1+jsAaiaM;) zay2xsMKhO+FIeK?|K{G4SJOEt*eX?!>K8jpsZWW8c!X|JR#v(1+Ey5NM^TB1n|_40 z@Db2gH}PNT+3YEyqXP8U@)`E|Xat<{K5K;eK7O0yV72m|b!o43!e-!P>iW>7-9HN7 zmmc7)JX0^lPzF#>$#D~nU^3f!~Q zQWly&oZEb1847&czU;dg?=dS>z3lJkADL1innNtE(f?~OxM`%A_PBp?Lj;zDDomdw zoC=eKBnzA5DamDVIk!-AoSMv~QchAOt&5fk#G=s!$FD}9rL0yDjwDkw<9>|UUuyVm z&o7y|6Ut5WI0!G$M?NiMUy%;s3ugPKJU_+B!Z$eMFm}A**6Z8jHg)_qVmzG-uG7bj zfb6twRQ2wVgd)WY00}ux=jqy@YH4ldI*;T^2iAk+@0u`r_Fu(hmc3}!u-Pb>BDIf{ zCNDDv_Ko`U@})TZvuE=#74~E4SUh)<>8kxZ=7`E?#|c zdDKEoHxbEq;VVpkk^b&~>-y`uO~mX=X0bmP!=F1G1YiluyeEg!D*8Fq-h=NyE-2S;^F6j=QMtUzN4oPedvc*q(BCpbg~*As!D@U z3(sz|;Pe1hn08P_cDQ(klZ6 z;P`q(5_V?*kJYBBrA1^yDgJD|)X1FV_*~sO>?8Sy~I9WdK5K8bc7aeNC zDb{Fe>y3N^{mrD1+GyH{F?@9}YQ2Om3t`nt zQ(}MS8M?6Vk>B=*j*yibz6QCdR=ALgTUcKx61){O@1WkPp-v$$4}e#KgK`HG~2@#A?`BF8em`ah6+8hH-DNA2>@02WWk9(fzhL_iz|~H~qEViQ(*{ zV;3tjb<%&r!whm6B`XtWmmrMWi=#ZO&`{h9`->HVxQ)^_oOS{W z!BzVRjdx5@pCXl#87ovlp<^QU;s<*d$)+|vI;Ai(!8Tjll^mi6!o~CpnlgZAK>6=V zm38^kT`D$_$v@UYeFyVhnsMZI1m`E&8<{V07>bBEI1=fg3cji*N?7pBzuamD`X|^^ zm!)2v?s|6T&H-_^y`KM&$!0!9tai9x&)5<(&sY6B`3D{$$KMAX3@&`SW;X0 zB-}obt^I;|#o_bR>eOv?P>=UC6CGTXIM+lSu?Uy+R9~O;q|c2+FafBP;E)B5M9HJgRIpF|GvRi*E+JTBI~T?T*X}r) zefUd*(+3n_YHZZS(g8)+7=pNV9QR^>Qs8t+iEpbJS!9;wio&9rn=19C0G#Ax zM-tWHp_YlJvXWsUqJUr^`OYFA4wkgL`cSOV;w4?tp>GT1jq}-qPoN zp&G}*;+#+Zh&vqDOp>gRL#^O7;s2yWqs+U4_+R4`{l9rEt-ud(kZ*JZm#0M{4K(OH zb<7kgkgbakPE=G&!#cNkvSgpU{KLkc6)dNU$}BQelv+t+gemD5;)F-0(%cjYUFcm{ zxaUt??ycI({X5Gkk@KIR$WCqy4!wkeO_j)?O7=lFL@zJDfz zrJJRDePaPzCAB)hPOL%05T5D*hq|L5-GG&s5sB97pCT23toUrTxRB{!lejfX_xg(y z;VQ+X91I;EUOB;=mTkswkW0~F$ 
zS%M}ATlKkIg??F?I|%gdYBhU(h$LqkhE!Xx$7kPS{2U4wLujF_4O+d8^ej{ zgSo(;vA)|(KT8R_n_aQ$YqDQaI9Stqi7u=+l~~*u^3-WsfA$=w=VX6H%gf!6X|O#X z*U6Wg#naq%yrf&|`*$O!?cS94GD zk}Gx%{UU!kx|HFb+{f(RA2h+t#A!32`fxL}QlXUM{QF3m&{=7+hz@aXMq*FirZk?W zoQ~ZCOx>S?o>3`+tC&N0x4R`%m)%O$b@BkW;6zE+aBzeYi47~78w$d~uypaV*p$kQ zJf34Q+pp~vg6)yeTT&qWbnR2|SifwK2gA7fzy#W(DyM^bdCjnee42Ws>5mM9W6_`j zC(|n5Fa&=MT$$@?p~)!IlLezYa}=Uw21^Fz-I#?_AOk(7Ttxm;#>RDD_9EloqhvrS z&7fpbd$q_e21Al+bcz|o{(^p}AG>jX0B}ZZRfzk$WLbNLC{y|lZ|&a(=bOE6Mxum{ zM=Nd+-I2A-N&2giWM2oAH`O&QecJn6%uYl0GWlpx&2*)BIfl3h&2E(>#ODt4oG}Dq z__73?sw2-TOWq@d&gmYKdh`a}-_6YQ5```}bEBEmWLj))O z?*eUM4tw0Cwrr+4Ml^9JkKW9e4|_^oal0*sS-u_Xovjo8RJ18x_m7v!j$eR@-{2(Y z?&K4ZR8^T{MGHL#C(+ZAs6&k}r07Xqo1WzaMLo9V;I<9a6jx2wH2qeU?kv25MJxoj zJKzX`Un|;_e&KY%R2jU~<5lm-`$EjIJLDP~11_5?&W#t3I{~+0Ze++pOh2B4c1Mde zSgj$ODQQm7gk&w{wwfE1_@V(g!C=2Hd%Gwj{{-_K4S|nZu+vk}@k(?&13iccsLkQo z_t8#Ah$HVB-MRyzpab*OHOp zl`$tEcUcF9_=3*qh8KTaW$znGztA7Obzb`QW5IQN+8XC=l%+$FVgZ|*XCU?G4w)}! zmEY+2!(!%R5;h`>W(ACqB|7`GTSp4{d)eEC8O)Mhsr$dQG}WVBk$aN1->sTSV7E)K zBqr;^#^bZJJX4E_{9gdPo8e?Ry>ZrE&qM)zF5z20DP0`)IIm_!vm&s2mzl z2;EPI{HgFH-Mp&fIL^6f74>19^>o^AOj`uyL0+Nb##Slvi9K4LQSs>f+$j?cn9Z__C zAkyZ9C;#uRi3cDYoTA>AT<|*pt{K70oZKG*S1F$r?KE=$4~W3!u53yUvh~(kMrClS zXC?Dmgv4iS`>~wBPJJFL_C8x2tEg*PCDX2=rHQ@z+Zs)Kkr;FYG`GnbUXqdipzvHE z1aZ>G6|e`}Q#)Kru0)(SZnUCN#dN2H zd1}r&xGsaAeEed9#?|0HzMGA7pl2=aehy_zsRV8RKV6+^I8woDd%4J8v9hs$x{ zl*V61wSumovRVWtetd1eJ%i^#z`_~~^B;aeuD`6LgHL66F0b^G5@om^&_3REtGmhz z%j^9{U`BH7-~P_>c_yu9sE+kk)|2`C)-ygYhR?g~gH`OK@JFAGg0O)ng-JzSZMjw< z2f&vA7@qAhrVyoz64A!JaTVa>jb5=I0cbRuTv;gMF@4bX3DVV#!VWZEo>PWHeMQtU!!7ptMzb{H ze`E4ZG!rr4A8>j2AK(A0Vh6mNY0|*1BbLhs4?>jmi6fRaQwed-Z?0d=eT@Hg zLS(%af5#q%h@txY2KaYmJBu>}ZESUv-G02~cJ-(ADz6u8rLVECbAR7+KV~a!DI83H zd!Z(Ekz%vjA-|%4-YpgfymMzxm_RjZg%ruo zT4^x)f*%Ufvg_n`&55cK;~QChP6~Fy_Z67HA`UtdW)@$Xk-2+|opk6A@y0~3Qb;V% z%+B@ArKl|Q^DJW&xuBZD#~SurH7XXf*uE0@|ccNd&MA%Ts*1 zg7TU!xY}~*AOY+tAnFR(Fu)e@^9V!Rm65$;G$-?6e%7w7p9WT098%-R?u#J+zLot@ z4H7R>G8;q~_^uxC_Z=-548YRA`r`CsPDL!^$v0Yy<^KSoKwiJaCt&dlW?p^7Y_<9c z3n#cMWFUe@W@4ffE`}pQduRZ)I5v`G8On2RI zL)V5k)PMBq(Zfb6Ruig;_SMwaM9t)2JfUafW-6F8V+PjKM#9iD1~v!uOfWiNL=R_j z$xKbCPfuiw`kKN1U{W6p#s!Vo+Suw#*7O24y`hNTmrEqDkQvZ}tMO{2`r|3XNXJwC zSUqB-GdK(D8yYTd*bs~vM{3@r5;JMtW-c8ywtvPG2Gepg-QU=s)?*2y@n~8f95m96 z+pO1p_FIP@Pbnlb&AnDXqBkb=RDa{H-fN9$Rv{OYoWwrU{J??m#C~^HFtMrjN~Spz zt1SsVlTk=x^7b3q-DxumB4DxAv}x1?YHb=BBbrOcvqOzjVK#ZlL$frhpxI1I&JL^4 zTz{rnIH(26vL$9Zf7%ffyC7agUX3bg9@D~^pcIOgp^SvS@0_fS0rHL9Zq*vjT4ZZ-;< zjl1>i0E~DMlLHLFe*&dK6lIzW57ySu#Tu=qwMh#+h*$yk2HIFb z>nT*!OJPT$OPLhmOCaK*%WUy42dzuvsd)CXDdLTLrH7iRS)E$Zzgab4TrcDG#Hg058>HuG9V=$qMph{<;l?`Ri zEyGDUBkrQzLi1NJtvoj(mN?yl$vw8i+u{fXdFV>oD0cQS`6mT>G!chOCzE!M}POG4yVkcsa=D@;o&t554oCp+<>_TZ~ZFu!frP4 zU=Fl`17;Hbhh*q72kj_XUp7O8XXeU24I1gAe!Z;8OmghWKbAdr6WwUEq^k(Y&_8z zj%SeljzOqyBkQ*T{RNL0@|%7B?116lab<@;U^MhM_=By8;asX*oe`l13GJ8z5* z5VjTi4+vl>1TM8OFqzvHGm)^9If&dr@6zaY`cEcbpgfH2v+vgE7J84UMd4{&7eL;p z(c9_$OzU1R7?w91eP-GY=k8o@VPB!Un6?GZ;t-tik9u# zvqoC)70K;GOln-bWzDpZYO;db3+qtNN9djk`Y?U8NTp<7p^qb*p}pudj%BUzM(7UH zy%qEc`XuT^%33b1Ck5~E(5L7=0rzR9`q$N${pil>S#W+o{57c$^%{6jXLl7mylgTC zJD;ToHF|(P$0P-VDu1113cl`fO??oskdG7^5dmB%MB4r5SOQ*GRGZ)={o>ds z>9kPUQ%r0Ab$o@MK{hL}EBvA<4GAv_oC7bVTzr|H)#yv~6@O3*T%M^d=yP+!DwVzl zmBv#szT%!L@ zp@s&_ia!GxNcwyFgCOxoHX+X@7dgvR{(Rc?n~*xScUt%qyo=g)w5da7a@kfkHC5f{IFx%*o4ng~rPm)5Yw; zw2^`5jQ4|6i@zwi9u9D=8;Zrap%z2I!`5JN3kOAh$h0K~vqK(kg#U3hW2TTZ@#_r_ zuYrSM;o@m|cf2&M;Y$Pr=7tL7cfFCjZdTPi91>|OQHV-$Uwc{<^Jl;4rh{n0WYMi;%o-qsd8G>t` zQ-2D8(zo(95gXe{3}cf6_?9yO@>*O2@DnMi0IM0|s|7 
zttz7!JH98}Y&!xefmFwP>`Q>D`_oUYE!S7_mAp^my?hl~!ZN3Z&HjFI$bM0J_S;+@ z)c61&5|i&S#33B9Mvme=0gk(Yj(KKL8KhQ>V+m7_DV!+plI5r>jJ{+xCiSCc z`tY83(lA9*;dT!X@^x-D8ExhQ@OlJNOt(y3UP_9ldOS+k8hnRVig8sESest%o% z;j}Clsg_Ca5_>KG)G$OIMXfS(ocFQ<>%6$;u%x@EBc{_~MsPZjH3YcHB?RH<~ z;dk0a0@D>EH({DmGJ2n}HyvkMGJnIh%sA;g_+3K57^-Gv&8F^__Vz-f!0)!MQ5b`i zqoef_mEQ*sEWHiuFftjv-)N2Z8=|Bgx097+l$5w-TRn5KDo+Fae1PxP_%6mQq=HuS zP*%8{9H>3e?BNgbhlQLUK_uk{V@U3p*8>NdMN#@Fe@vi#yja%I#t$?$$AA0VQ(42x z0mDFwS%-M|lb{3O|He|F-NJ`0?$h{Q{SHul5z+L*m&!#!fJJqj;3jztr>O#Fy-E!z~0 zLOmUN3K~L8HkR|Nwiywi&40)E3vRgB<4otz96rleEBpjg`mCW*>Nn*WDNrlBS2nlV zdOxl4ll+uzZtGeG6`^DdE!@@cGyElu6#g>Yp&=1HtTN^eSMqQSqq&E_W@quQ!v*8$ z+|%d|%rshx=j?UN8s|+=?8>FG$a<4ngKuN*X)$w&m{snhX#>vXAAhv&&-}3>HGiL( z_9x8fVZXSs^sD>=(;RT!)SEFAxvXK^@SkiV<(^P-nfQ+mo2Io4{LcX;>*{6kT1 zf8-?bXHN4L2l2NaD^3zncNc1-nY1lw-EQ*FFcGJZs{9L$e=aJlCR8<`r&0!z{?fpt ztJbK!nz3wF0D;ur zV^Cy@9RmCxjK=X*#$+N#;gcRdLx}GuB`W$sS&0-$g7}56F@GLO#-t)SB+Mj^M7&p( z6cp|#ig#l@GT+ik-Xx2!!l_e8s;ehRK%E%3_0F#P1+Hc zYSW_5-U2TRC4ZkLEs)OhP@Dbhd?Cw$($5_;U|V4>EzzV(=>k+4Eezv|b9qyP_f% zJ<_EjASxvcKW!7qG9kWy8P-j=tyX_g&Hf!tUH*8gxIDQ$`d6;VtZYyv@r?#q71eqQ zuVwU8hJV-Mv?Dc1&FBmyML`_H0h2++J;ImVNPoF!}q{<%zspm zX8~m8`|*10*R2fZ&ze^H4}rQEqeM{`zr#4%AJ6!6_9qfm>cr6#TEf6N09|0P_S;v9 z5PmmirL$iSA{@-4#TOxVGx|!+=_0&Hxs(;xvNvL&VY_&!l9JH6|vKHhzEX6SO zrIYcL;g1S;8$`*n#4IE;{|-Iv?@OCWf7FZ_y^yVFseR%m<}9p51Z(??En=Zh=pMqj ze{7=8N(YOdYb_d`rseakM&DL5mx|f;i}F&b&b&8JY8k~4Uf_O$iai1BXmeU zNxJh9s*6M%Rncy_%IMBhysGXbnZ?!Xuz#8ntNV&8IjkHNE0L-p09L)>B;7blH;>WV zBO!T=Zixg>&~16TbA;YILdVDG1Cfw3=#xk2gAdWim_ja}>mfoTdz?@EoZ|Oqm>vV^ zkdmhp$NA$vr7ADPq{=ZG1+G9H8$Rw{GzH3e!l(4)>FGRuHRK#VbAKQ9 zzi#a}i2b>n^YpEC0Bo1` zLID4d1?(E8iZS|GWQ2ZxDhM<{hEz!HQ}gtz<1|mu62FVQ%?%c4hui|nZ9%=o=NzM# zB0hId)o(}WcX@g_Pk#}6PebTD{eS&9d5ePDY`pf24==BVoX&M>wd#YqUc2YDlRjs) zDqkZctyV2jL#jnqEg@?&^J)knJ~ada!)H#xPI@V`uZmNmGxAjcXcicGX7PKSPX<#g zkFwS|Mz@3W5w57p<$3lA_U3v1gte)?#MWM3nCC^2b?V(zDd>55ah{j%8-G6YoX--) zr#PxrA&nwmQ!ur){W+f;35p|ERz-!Lc=o;%TqhP9j#IY}4!Akwtcqei5^`BQtd?&Q zK4HJCl|M=ggxlfGk>~Yb22nFi#u#smczM$ZUwX>^d71e6Ah+!Ea@#1k^- zbokLQ!dK^6Kkj&9jH8iA{TMHcjBsp(`%m!UjxkOGJXn8%GqA)cAMF|8>&N(wkq$)O z7~cSr&bkqPb8v*;3iwFp34Vv5Pg}sSmv7DUZIN}#-NLbF`&`ww&VPmNynK6cPlHU# zFwOG09My_tnP3EDM)}S>zc-|M`Te8(!AQsrU*dc6{E0EX7fvLv!|SK2RWS6Kxy$qX zfaO~XUOx-Z5=Ya^J+_a96k$B|1fKvE=+#OBn$H<>55q^WVx(5L#`f>KZr zI>8T((-L7Jh(V!(nt%HQe?Ah@iqzabXIO}+6^X5^_qppP5js^$sPNM@PV)qRag3jg zgnbaxC)Y!tPv`krD+Nb7M37unh#gD59TthNj$>mx(wXOP+(oN{!k9D*k8fG|#6QN* zM+9ztkC(qA;*P&p#QXj!?&J_+?8o!?CrK~=^k#j%lS7J6d4G!b7FOpw-+ec2ALE}# ztl;`(JvjJPo_}k3(VrrnPtg*DIcU6szm@d#&7=IO+);m;_KZoDk%M7CROO}W4*3yU9C6flk4lU3(&7=xKPoN9$pNpl zDlau)w;~dDc%_TFz0zu|UxF0{E33L0Z=3ezrOQ4m^kyyZbkqTC%c@bSRj6zl^W1r= zsACw%D{Zxm^V7W4?v-{5E4xcnzA9MM);O9^>+wn*c7IOvO1mat#{t|k0PGYHUg?Te zBhsEzlQ^yi$5$3Po+8Or#dQlAm{o6SPc$)6{MSG`t;S{}Nwk|Bw4Y=$(D1~` zMMG$NZbZZLE;Ks#kVdGb^hxs2eKd>ir`hy1nnTagT-KhaQJDVV+HvfwRE0i9W8RS(D{ztwAe8~OMe_Gy1?;P@;lx^OC8^&8pq#gne3qD zvO+85Idq|1MJwe11>}0FmDkcLc|Fz1O;j&mMM3!xHONtFly9bsZp= z6aWB?DU;C^9FxIqIe*i8dz(GluG`YRvTlQ}ZQ8wBMi`H+11Xd;){T;FQf`ym_HIdT zxw%<4ULqnQiUNY#fhed{bPCKaEfg4_ZZJSmR31)Vg5U#DR8+vtbG{^9+GV)@e(AaA z`@Zu&-#O>ofAE2a0W1-#1$JC<#oFbUR(9&)Ek-<28LSLhbRSb2~R1VMjrsz%03% zbj)ad*oudfwr#|n`X(aNJEMjIl?b=$(fLs;tVcJPy=iF^TO^rj)iZvQKrx?*m$vcIFG^5a1P{u+&```@)4cGezkFUy zz(oF<;l(6O=C4@-?kc7$!yF9?`~n5!dh*|ts)a4%V@TF{bB$0iUtmJF;jGa)km+bm z&Jt!V^?%|x9Is&kssyGTX4&R&&aFzC(THIysMb)!;uT`os>h7+8l;aCvjFOtSv`50 zeGrcb1gefacqDB`6tP&0B`j?z8DD2@QPCivI#&9W7bmcQ8Y~x>mp6iAq)68VSs~6# zGeH?ij0XzQs=bD^bVyf2kC6uJu)YXwIG^r#mu^Or zwtsOB`9bfdlqt=ZFc%=i(l$_~$iq;0# 
zo#`-!DS0T2O;J6OAQ5AdRxXkX2DP1kIRVJqUWIC#Beg@3V)cqhED(^in`<%f%NlNF6p8k5w7f}}u^ z5$kofw-5#SIBTIi$!la_AGT@O3d;JTD6Oz~;#g9(aO3z|a49Zhd6#FSA-SxyZC$cg z@Cgl9avgB%k;u4kWQq{qs;lrRK6f?cz*t=rTto3N9fRCxQ4&oZqiu6$o%FaCpMNdJ zXK)=EbmYE*&r?!Re{D6kIbM7LrxfFQe36P{TrS**dAx8F`7vsBcN-*VM!q}LA~#9e z&A6qA9RFpqdNrpHrIkODEfszhU*$5=!DVNMfbXcB6x>FhA(39(&d0xouan2q2`PJF z$+#3?U)_N_Iq2V{;+>mMUVNLo!GC7lm96TTOi}P1s_KrlvaPAPIa?IJ%XR5)e2+Xz zGlJQ*eYMpWk6L=9DKmfwG~~HD$5KDPj~}pp_fR$`555d62BlN?n!g>VGn9BeK@e zWxskjn>ZPbvg?oJ34&}Ak7;-mKjI28x|^oS?Egf=9_*#$rK%KZp_$B!$Jv-YctXGv zj#>#?d6L`o9y~=!(qtv05r5or{9Szg{gkaeekuo)O+Te{%#%aekSTbEJd)76jP*8E znb}q23dMMD`~uHv_&I(#u7A;Huj5BH+Fx@{KPMpSRJ=gOk;w@w9wa4yldS-fa$S#Y z^`(cv-*UGwoJ>*o;$`;2OL&EJwi0!5nhjLEM$MLEZd+uSLuKcM&0B0 z+1`_`9Gr3_`Yi$1`nJ(NlCwvYf5e}P@CW>PY}b-}75s%1a;z4skALboP3MOd%H@$) zp}*p98s5RXWL}>ck63*P75^Yl(WvU^W}M3Cj9lBAdUU(ZxHxIV!|Ch&9{$Dj|0b_> zn(<7`RlF}S{V)|diid^KY3oBysUCU}s5nR!<%EU?8okLdZe)7gikqabyimd=2NL1t zQo8Xd1Ca1&_^+V(-hV?~-*&ic=bD-kev((HqKHpwbVrWZR)m*bpqtJaT)1g^YW9kW zVv;5%h{=@i*-O(L?@eZUcjnHCQfdRFdCm?^nmJ==&ITzlMU*qospO!lyhqYDP1i)3 z@QrCxq*zRM92Pl46Eo$sydbe4u8P^z3A*I2z=}Mnxbdj>W`8VWQqM2u5^qt-0+x@- zHM%2Yup$;vdCt6@(o5rK<@74?I$l(1;yAI8ngq=^G*u;g9j~aNB0{UR0@a6$NWyUZ z#x^6Ibodtf=~~6i1iu9nTvX`7iaHicj2)xZ=#!JISR{uBv6!aS!_wC#PH>XOr>8%D1|eI(Gogm5a)$j_o8sX^+C-p zv=ft!DSzlGMB1xEp-ps}PE2nd#LQp;kp(@2m>mih)~3+YK8RRQaW|@kjYR>;T`gDp zq16U_1u0zY^Q7SHK=Cjx3918VX8ej!P~Ate4!!MDM{s2*s14zh4>uOO8@=V;^5Q!& z$ETKimxO{7q|(Jc%|~CKZok?q1`fUA(}Jo`y?-B{6G(sDAkdGc{PiV)N5~~Xjr9Kt zJH)4Tl=ctdRx&f~ixj>wjBm9M9D0KED;&f?3OfTnWf=FeVuNJH0A6e_FDkqPdwt42 zJX$MHg@TG?r?7)l7-H|0pInr4lHx!P8Nr^=CZ>3lv>U>Y zhkvjyh5bP_g{OULP#Hig`>Dvs3wvrqSwobL(w~tb!}wJS&zHV9YE5=u?I=AU4SjWV zO9YjIMzy@iby29X=ytKFT-|Z-qHN^pH&Zg(nG=7i2(%pv7I0ike>aRbcj4_6{$Bde z6#mms5yO+xQcs}t1F}Z6j^Mwc!iVrqD1YShbcEcchuR9tglO|L7N$f&d0|J}kWf;h zm{KJrO8T*djc*+hWg#CeOdApvWc`SkN&7=$7P)ReIeIUue1&CVPEaj)2udhe+5W`X$bg@!MQ?OPnF&J6-okoFU`8T)QRCknthc6B1|0_*1TDCC-rX z7hEq%oFU_{xL%hyL&o29y(@8sj30EnCC-p=s)kKe88@Q>JiDAt)wLaNY+XbFz1BVS zL@dNLRAFy|io2*{eh7_dip6SpMK>mh7$&+JFv)c`CcD<5#I*sXt_xA-axlexD$3nw zVXAu#rn%Q+y88n7+?%8vx2)ps{{c`-2M9FbluW}5006p^;dxnq+e!m55QhI)wOUte zJ>7V>3ZA+y^#Dc18$lElK|$~`-JNcu*#pV8UWh)3Z{dXqUibh$lsH=z5gEwL{Q2fj zNZvnQ-vDf2PT=w3;k&^Ae^^@j$M1ODMq|d0-FZ_2|XiKHLhEB;^88I<+^6PSu7q?|oxD=%8&Ue1^o%27B&#!&!lh=u83+I?Fo;!DF z$CE8Xdghd2Wm~#iGQ%zHEg3sMe`e-%&$O*%-p(4BcZ{5&y9O3VbvKzAH8Q8%Lf&oZ z9@cZN(cUsPlFaL4NmFEG@6K-Cwq*#s&W_6d;X*El33pUaZpP5CMoh~v9Mc-X>}kVs zaTexxbZqU|k<1#WTb>FLGiif%!O0j8m^p)Kwe5^_jyQTYXLO!%^szC+f9dSETu;yC zg5+mfeo{ZJcjk0!r1QYgNh9M0sg9{GXOD~+4%3=cjr}RLxRWWAwa-{NThB7BtHrpx zybRXW#@S4+;F_nEUOkzN;kx^DOIN3K*4n&h!3_{scdu!g-Y%v`W4F-omO9m1Jg9r4 zJ+5oyhjQ57_Arw#*7k6if0oj6je^v`l>A?58l)zTR!~Ej!nCBG0<oPUP+Nxx!$(>=ko$io(N14La#|EhdE-=oTuIDNfJrbr3)T+^Xf4YmQS+N#8GuPQ? 
z=W@UlaOwsr##C?Q$Gq_r_Axb9PE?#ShXdo3(5Q{t!J5O29EKAbVr|D}-#bhl)G6n| zUQIJndK^br;)AqBqpjkw#iqO4bfARojE8AkNz3ifTF(Nu&9T(n0N5$F*+KWn{%)qF zvvmy8y-Y#V-6IzXf732%T}=1U{Y;NPs7xNsg2^$53UcY_##VP@G;14f)Uv&3#(fwb~OKgwcQ~c3ABsH``hMQBut0th^QhVpEHL-^bWxZ^lhtQ zj9%OJpr$^y4~h+Xy5kwnhRs1brqOZ1T-$7$SbAPkgC{Aa296(-lTI-0eQN~C@wy{d zoyJnM#xC4fe`i{W5@8OHR}x-dx&AP1tAUcYb|PRu_)t%B%eL(yf&{+ER1R_iIhUs1OZsGmziq=&(?k$+PtW<^X)#$tcrD2An z-|`GqF}@F`^X!L=v!y-r5IY^PKR`dI(f892Nx4RE;Ejgqhv|UC@Q+|hpkm>EYh!)$ zcb64`e~|amkBKhtLuFgoLksNufb4t*WyG^9x~_=TRQ1Q{L&E!EsT%Jrp!*5aMai(c z=_6u5^hq9U`q5HyewJw&u+uZ-+PQ*fNKFpYb0T3q{Ur0~!vbqFqgt(~JzOgQqQg3n zkiE0jYPHhnhHCQU_3`Mae%go*8HN@0^gKcve|hAL>5X=@T79-PY&!X!L1F`^r* zHxG{L2!z2xeq(gZv9Zw`k0Kh!<*ZV&NS2dDM|mB|3i$~-m@b0Xk<5fbkd-Y_-GOT5 zFonU?apmpNVaLuR$~~vxN|tj~Z`UCgi|($z%@HTp9c^`6txCK{Q+CNlrRnKBS?NQ& ze^qXQm}pPNgHPrygy^Txx6OF-P{H!dyn$}V7!$cc`k6TebXLNj(C7tv5rw?uUKHUP zq525ICa2ng=II(g8#*u1$Heg;57W=l&ueIxK7k-CSWlRU?K^7Lo|!x_s~5qJ&PU9# zQvY&AqpOk~f`;Wu9bt;hYDe~1g}mV?fAc|yNtzP=muJbVVhPeUU=~gOKHD+&m+#s2*K)+1CBJ974%so%*Jy3HzNWTt^5gPkZP{QifeO9B_f9SX6 zWOPw=`BSK}xa;qfV)qM3I29-K7KVo5d9q!qfY+= z?z-RuCP?3qcElbD(>Eoa{)zq>+4c|~l@iq<`qxT%Q$9L8>ey%WA%XY5LowKW{sP8e9jV>_n~qo~*gnHu*n%<7JA~&RICDgu;o;t?QVYd9(L!PI-dS%ggq9&d+y&sH zSryoqrsgK|(kwjrHtx~*e(uEv)0N)NaSCH7zhT~uOo^2}0g`{qiEt8ngb@e9DlbgK zl0S*ucdNf$Y}joKf9r*uR~a9ivmNL6^Ioyz0MpL@hoB(uL(QwSCV1(11-EY$7d2Gp zymzm7;{YGjct5`#nQXfEIHS8!bLQ3^As*D|O?nYJ5u$=Zd=#0?QBR}8c9_#r+t)MN zfrjebpqif$9|!8nEnRoiE4exv3-M#p-qvW2t0VexiDX{3@+VT%}0+Ra$dd!Ka?q z(z?xqH*%k(y;3l#N#nu6&8U;AKVZ+wa# z8n{M#(tN%9 zvvSp*zVO>1;x%OAdf4OmZigNp}k(KWD zCno8ge+|p&Q=#ra#4i>*liptUEHx%00bg@nk)E7@wdn)Rb&D>E*}syE_=|L|NZ*6~ z=dpj1p7w1IGzXH`pQnywb6{%&-8?r%?@4!K^N-@bizEK!n~L=QqY#g&4<0=qfJ45} zE^;oU_ZR6WE- z#SK`XnO4&_+-xn%v(PsDZkx8(Qg8%dulK=Tui?91+V3(td$HmJ-5yu|N`m}?xM_p$ zzO@P5X03QOo>;pDj-8^*7b)O->HH$-{suTNy;KG+nx(Rhx0j>i`D=7Fo!$pEi$(gR zf8g$h;O;y=evJW{&!qQ@WSBl#q~DmL&ne)1{sJwNOa1QAiJPCFpkwXHYxG6o{8Cyx zGf7{L1SaW^iu9Fke}jLHzdl0CD*k$X;^x9UjF!2gMx?;eQbq&IG~7wIoA%g+r& zsD^m$RTf&I=qidT+Cr_0#%Q~u_s}jyOZU)TMN@P@(L;1x(c^Ri)+N$uSkY0k6)n(v z6qR4$dp~_x(UM;@_ygF)>LTQhuT^Y_xuD7z2NUg6^w*cu`{U^=6cMB)PBeaflh243 ze@`_2n_~U%>6IHei{PI+WN*n<-$I0_6BhxXlDYUwdpxZ|c_2|_U+F|(yU4KQ2b;LA zBucsJ($Vrk?I)Tzgp;OtX^|T$I;`0*=0@gXpSY8|{oEZ;EUOR{;??e;xD^2TvUrr& z3EB}?@;@zc!FLvULlfV1qR8!6cvF$@e^$R;MegnnG{oTieMP=+yT86GRNtjV0__R~ zVMM4m#eGG7;37S~Qd=2n4nKXoE2MYfQ^&^&elTDE%ttA_Qfu}<{meyLm0T&4Mpx(x zr!cirEApX8u-(@j29QKTm(~@UxcS^bB-rhrAh%4ruhE<7CO$mLM{Xn{!AKx^e}x}z z;&;}6w@uRRP5&}0g@d1%2%RK{KxGFDW^?cAlt zLS>xcXOy0$xM&3W-wv!kMvFK_KF(mwDoZUQ-?sr!O9u!`Lm;-F4gdhY8;4O&V%U42cOzgT@++{5Rb_Y!~)Y_JT1+9)zb* zqnP-I58y)?&(IzX7bl6gWOQdQ<(RH>I^tfvvCW)~>#y zTcO`}J(;*+VECa;9FNE&852*oWNcV1vVZpD)Q|P`UFpTNqPHExmu^|J zwNdqq-%UM_193|l6&_OHxB*e*1`bCLDT>*Pb*8!6ELqrE-i8iy7Ij%u-2E|-0W*uxf<$W z`9N7d`evT{Ki4BcStVHJs&4Qp6v);2&~2rDlcKi@M}=#uL12{Myecx^iy{8c zVw`(}N3*!b4ak(=|HMS$2PVHlJ$X!Fx~nO4HM#P4Odcci4L6rhaQjTSgiAYJVW}(3 zcZ6dd;k|d|FB}wD<$jpIV3ES^cd=y*as#G1*to(L7Ee&T3=W)vrT%_}6Rcdu_!2Ox zdYK3HJOTg!9+QD19g|)V50gKZ2$Phk9FvcY6@RORP(={Ilb|T{zS&HZ zZ8w{+o7RKa2k|XD2_Ad^A4;5v9-M{w_q z=X}6rk(Ww~N);x^iv)>V)F>R%WhPu8Gn7lW${nB1g?2dLWg6t73{<@%IZZ~BaZFho z{msu;S`%=Y2!BRo(WJ^CT4hqAYqXBuA|4G-hEb5X+gsK4vi|+ax`Y)QE>yX5GbXw0?()rHg zp2v6Y?|;Ai6~Hta44Y4$EEhLYRc@>br(frOjAV;0o1acsC^@* zn3r)y+I>hF1TIxce;hk#yN!}<5g)5iP-2MryPTMe;_5#3Y?~{f39EjFts-NL=6`$fd!<&A)>c385EL}b_hc7TIt#4AVZQ2VNn8;C%V-97h_=;pxPGBN^ zxZEQv^u1TyF>`Dd|Y+WNVk^$vUz2S`^>>OG|rnzOP~h-%^w0;yXlW?LXSF zFAFN=d;B0nJdh6>c=m{s`j9&f&t2!$-EFF>xC?`>kKH9&>Z_j?I&y<d)Ov7vpfIa?C#9&uirm@0zd|~2z#gaHD7ORz-qEb_-YRO7fVmPlel~IFXuuP3)vCN9+M!jN)Dp22H6{lT-VJ 
zGgdUc&`&^+6vNb&LY?af1om1gjhU%`gWT>aQtk0gJTQUq-oH$Flkd1w_lBBf0;BCy z`7+HcE$8bM0^avZ&C0|*OB=uyFRJ?aTcyIPb&~+uB{0^Ysv=R7ZMP*l&{d2c6X;)4 zG{sye&>M>%3NQkre(=Ig+{%mG#`fOM=|O%cclvVw)s7Fw1@Oa-0qBDX0)tL}srdd3 zAKVr|u!4652w2`d0fsD36d(v8?%fw448z=eKw!vV=Ju7+g<@B0$2aAJ0j^IF7?!W< ztpbe1;%>zpHr&Lcv2JbrusgL?(as#!?0ARvZ(9Tyw9dPLBI6nnUO(iIo%Z>S_JI|# zma!w&AcT?E9qq-QVS__Pcf=Ea+vSIvKgxKI!0TcYM;pGp_iegD<(`iw?f*icdNCBX@kt!LzRTw1Yo($EO{91y)_~ zna_534W4x25$ukGuftOpJnG=jV8ac!8;kc6zdg|V2T)4~2x;QgE$@>LmS2BOn-Id% zPzQ28t;HPLr2p=wv3&Oj;JfT|seQL0nM~MJ-CF6-0jU9DeYR z@_64&(j;x_;hdb@dGFotF5i9czW2|+H~#{#7PlELoIc&#dNMd5B?h^g3~ml4Qo(RA zp=EQjBAK$LMzUIx)4a|VE*XEE7Bi9&No06p(8y7msI>(K*_+;xm6@}{P{;bNG3R2q_^ill$0qum2XdBSv~ zj!flrjWkV}8w?9NY@NI*E76{b`7I2yOInW8*^Z{HMa7sj>JplolG6-L9n;6tX6xj2 zn?nKGDyy>jD8s78N_*AgXzF9AX>98AVK(M^;YK|n@6nqZ^So$4y$?Rjnt@s@@WF!_ z;%ku)Ud$9Xi~Bio)1CH@sgE?7-s2Q zO70|>uI<+qhK9zbjuQPbQ&f114=b=z09Fwo&CMQ3=c?)OJGTfZGU7uMLc(z~Lu*;i zHb=5*a$S{_V&=AIc_1$mC;vnQ?IluiBSJ+^IKxRw46Caap*(-$LQE<*qx*Z?DW)h^ zd(nb5408-#VUeM}u~J*qZ5`H&Dr}$xlV!>~=nQ%A2*bQ|r4_N@!zMvf12!|v6f`-E zA159fr-nFf(3Q+@#Wuk_ZM}KMRF@3%tC$uEJdW)mlpT{2=#k8f2Ro-GAQpVs?IiHT zRBz6DyJPh!@>_pyHI|XqZrB*hXFcd(STxD>#HtTnj{R zI_co4MD?WI#m!+&AKWKrxt2HWBiimm8X2J@Gq@Vt#l(MB42sNXkJlShK|+a2t3nf~ z9K#Z_+$Sk=QZo6ZQ{saz&VK_8f$J9yVJq^&_z>ZYX>pD=c{zsT0)B$DOC{*dt0qOW z>sW&4oM!brL%2=LE6ISWnE}yg0)_4tD7E51O4qW1RV$2DEgqb%=t39~8?^CDDrIS&Wms6= zbK2Eh-Xx=3%DVAZsfQF>l4J92FV5i|>Z;Xl2{+y&vIS$bk4x|}%eIvd@Szv)LD%aOMWyPXmsD3iJHYjQVmo3Dol!SE z@M=&mE`Iu|7uUWm=}AD+4I&bA=>HbL+*kq^&HmjSY7T`%@iF*sp&=gc8pHfiEF8t+ zQ7pCa;CWn%gd*{&Kf;B_@vw!)P77iBTx)+}qra5~Tf#>yJZ7QIzl%ms7DjvgoiyqR zAE~hrv(V>%nuZ4pi--Ns(kM|Fr7Rq^khSof1=GT?g_BpXtn(I5#a*}Ij(62GJN`%C z<=Drl3ZC?LG0U$s-Dq50A)NbSTPi=_%})kwxho&E==wkE(LH}@{{)3qO|C%#YF=3$ zdiA?ni$9)wR*=E-zD>6#=i#B!N#gG&-1E6KkNw7xOU%m~-nh!XQ{HJ=8J4JS5MC7j80GfF1F!!W{h{y?1Y6gJv#Es?z-Mhy6*8qFYB=KY5fJ$eA5$JDWZC&|wm9Vh`;wc1 z=hdk(0FO+816Kit$%z66lMChx$ilBF2VOs5jG{_Fm|^llWu?h^^R#6V_b)Rr*r2Go zCJIq?W1a~s_?F7ag7Zb0%OoM9-t$dmLAMF|0NpViXalO=LkbX8`{$d;BCcg)V6a88 zp-~y6${p-l#0_8!3>GM=&ZvP@X-rJ1|U_6z{_d)L2hS-94p_r zNR&C&lwq=fmEz=Gi{xeDN1+4Vql040S4)s8GqAtmXGCMf(rRml$p-dPz{AsxWx*#7 z1I<|s^p_oqSz`7Kll2`vz-A#%!)0L5M^WYL$S|3)N@Q}Svnp66{FqRnt&S)votz;m zA;;+IfmI{UMr2?xK~eqK4W?QPtP=SQA4L?Exn2;JaX#W;mGFaPfWAVFN$n7b${>49 zkV+ZQVI((!sx|@ru8U%(NZ90nWgaq!b@vPmS||zvBY+B|C!b%YB@17*Bg(*_graC; zF33Ka$q#Y`z%B!>QGqN`0osXb-`Pr#N^_7ZX~ZBQ1A_vJd9x>9Ty7%^AMXK%usn+V z#4d>c>{h7C!iPA3cA@5E9CIF-wP*MN@ diff --git a/gradle/wrapper/gradle-wrapper.properties b/gradle/wrapper/gradle-wrapper.properties index f5bdef81deb70..c7f182843385d 100644 --- a/gradle/wrapper/gradle-wrapper.properties +++ b/gradle/wrapper/gradle-wrapper.properties @@ -11,7 +11,7 @@ distributionBase=GRADLE_USER_HOME distributionPath=wrapper/dists -distributionUrl=https\://services.gradle.org/distributions/gradle-8.8-all.zip +distributionUrl=https\://services.gradle.org/distributions/gradle-8.9-all.zip zipStoreBase=GRADLE_USER_HOME zipStorePath=wrapper/dists -distributionSha256Sum=f8b4f4772d302c8ff580bc40d0f56e715de69b163546944f787c87abf209c961 +distributionSha256Sum=258e722ec21e955201e31447b0aed14201765a3bfbae296a46cf60b70e66db70 diff --git a/gradlew b/gradlew index 1aa94a4269074..f5feea6d6b116 100755 --- a/gradlew +++ b/gradlew @@ -15,6 +15,8 @@ # See the License for the specific language governing permissions and # limitations under the License. # +# SPDX-License-Identifier: Apache-2.0 +# ############################################################################## # @@ -55,7 +57,7 @@ # Darwin, MinGW, and NonStop. 
# # (3) This script is generated from the Groovy template -# https://github.com/gradle/gradle/blob/HEAD/subprojects/plugins/src/main/resources/org/gradle/api/internal/plugins/unixStartScript.txt +# https://github.com/gradle/gradle/blob/HEAD/platforms/jvm/plugins-application/src/main/resources/org/gradle/api/internal/plugins/unixStartScript.txt # within the Gradle project. # # You can find Gradle at https://github.com/gradle/gradle/. @@ -84,7 +86,8 @@ done # shellcheck disable=SC2034 APP_BASE_NAME=${0##*/} # Discard cd standard output in case $CDPATH is set (https://github.com/gradle/gradle/issues/25036) -APP_HOME=$( cd "${APP_HOME:-./}" > /dev/null && pwd -P ) || exit +APP_HOME=$( cd -P "${APP_HOME:-./}" > /dev/null && printf '%s +' "$PWD" ) || exit # Use the maximum available, or set MAX_FD != -1 to use that value. MAX_FD=maximum diff --git a/gradlew.bat b/gradlew.bat index 7101f8e4676fc..9b42019c7915b 100644 --- a/gradlew.bat +++ b/gradlew.bat @@ -13,6 +13,8 @@ @rem See the License for the specific language governing permissions and @rem limitations under the License. @rem +@rem SPDX-License-Identifier: Apache-2.0 +@rem @if "%DEBUG%"=="" @echo off @rem ########################################################################## From 084852519ea884b9ada0acbb798facb006ab8ee0 Mon Sep 17 00:00:00 2001 From: Andriy Redko Date: Thu, 11 Jul 2024 17:43:33 -0400 Subject: [PATCH 058/167] Fix hdfs-fixture hadoop-minicluster dependencies are not being updated / false positive reports on CVEs (#14732) Signed-off-by: Andriy Redko --- test/fixtures/hdfs-fixture/build.gradle | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/test/fixtures/hdfs-fixture/build.gradle b/test/fixtures/hdfs-fixture/build.gradle index 6ab6d5acb8880..a3c2932be64c4 100644 --- a/test/fixtures/hdfs-fixture/build.gradle +++ b/test/fixtures/hdfs-fixture/build.gradle @@ -52,6 +52,8 @@ dependencies { exclude module: "logback-classic" exclude module: "avro" exclude group: 'org.apache.kerby' + exclude group: 'com.nimbusds' + exclude module: "commons-configuration2" } api "org.codehaus.jettison:jettison:${versions.jettison}" api "org.apache.commons:commons-compress:${versions.commonscompress}" @@ -75,6 +77,8 @@ dependencies { api "ch.qos.logback:logback-classic:1.2.13" api "org.jboss.xnio:xnio-nio:3.8.16.Final" api 'org.jline:jline:3.26.2' + api 'org.apache.commons:commons-configuration2:2.11.0' + api 'com.nimbusds:nimbus-jose-jwt:9.40' api ('org.apache.kerby:kerb-admin:2.0.3') { exclude group: "org.jboss.xnio" exclude group: "org.jline" From 6b8b3efe01a62c221f308a2e3b019d75a7f5ad8a Mon Sep 17 00:00:00 2001 From: gaobinlong Date: Fri, 12 Jul 2024 06:05:06 +0800 Subject: [PATCH 059/167] Add `strict_allow_templates` dynamic mapping option (#14555) * The dynamic mapping parameter supports strict_allow_templates Signed-off-by: Gao Binlong * Modify change log Signed-off-by: Gao Binlong * Modify skip version in yml test file Signed-off-by: Gao Binlong * Refactor some code Signed-off-by: Gao Binlong * Keep the old methods Signed-off-by: Gao Binlong * change public to private Signed-off-by: Gao Binlong * Optimize some code Signed-off-by: Gao Binlong * Do not override toString method for Dynamic Signed-off-by: Gao Binlong * Optimize some code and modify the changelog Signed-off-by: Gao Binlong --------- Signed-off-by: Gao Binlong --- CHANGELOG.md | 1 + .../test/index/110_strict_allow_templates.yml | 155 ++++++++ .../indices.put_mapping/all_path_options.yml | 31 ++ .../index/mapper/DocumentParser.java | 175 ++++++--- 
.../opensearch/index/mapper/ObjectMapper.java | 5 +- .../mapper/StrictDynamicMappingException.java | 4 +- .../index/mapper/CopyToMapperTests.java | 40 +++ .../index/mapper/DocumentParserTests.java | 334 ++++++++++++++++++ 8 files changed, 688 insertions(+), 57 deletions(-) create mode 100644 rest-api-spec/src/main/resources/rest-api-spec/test/index/110_strict_allow_templates.yml diff --git a/CHANGELOG.md b/CHANGELOG.md index cb8e6403aa47e..fbe2b7f50d446 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -12,6 +12,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), - [Workload Management] Add QueryGroup schema ([13669](https://github.com/opensearch-project/OpenSearch/pull/13669)) - Add batching supported processor base type AbstractBatchingProcessor ([#14554](https://github.com/opensearch-project/OpenSearch/pull/14554)) - Fix race condition while parsing derived fields from search definition ([14445](https://github.com/opensearch-project/OpenSearch/pull/14445)) +- Add `strict_allow_templates` dynamic mapping option ([#14555](https://github.com/opensearch-project/OpenSearch/pull/14555)) - Add allowlist setting for ingest-common and search-pipeline-common processors ([#14439](https://github.com/opensearch-project/OpenSearch/issues/14439)) - Create SystemIndexRegistry with helper method matchesSystemIndex ([#14415](https://github.com/opensearch-project/OpenSearch/pull/14415)) - Print reason why parent task was cancelled ([#14604](https://github.com/opensearch-project/OpenSearch/issues/14604)) diff --git a/rest-api-spec/src/main/resources/rest-api-spec/test/index/110_strict_allow_templates.yml b/rest-api-spec/src/main/resources/rest-api-spec/test/index/110_strict_allow_templates.yml new file mode 100644 index 0000000000000..b3899e295eb61 --- /dev/null +++ b/rest-api-spec/src/main/resources/rest-api-spec/test/index/110_strict_allow_templates.yml @@ -0,0 +1,155 @@ +--- +"Index documents with setting dynamic parameter to strict_allow_templates in the mapping of the index": + - skip: + version: " - 2.99.99" + reason: "introduced in 3.0.0" + + - do: + indices.create: + index: test_1 + body: + mappings: + dynamic: strict_allow_templates + dynamic_templates: [ + { + strings: { + "match": "stringField*", + "match_mapping_type": "string", + "mapping": { + "type": "keyword" + } + } + }, + { + object: { + "match": "objectField*", + "match_mapping_type": "object", + "mapping": { + "type": "object", + "properties": { + "bar1": { + "type": "keyword" + }, + "bar2": { + "type": "text" + } + } + } + } + }, + { + boolean: { + "match": "booleanField*", + "match_mapping_type": "boolean", + "mapping": { + "type": "boolean" + } + } + }, + { + double: { + "match": "doubleField*", + "match_mapping_type": "double", + "mapping": { + "type": "double" + } + } + }, + { + long: { + "match": "longField*", + "match_mapping_type": "long", + "mapping": { + "type": "long" + } + } + }, + { + array: { + "match": "arrayField*", + "mapping": { + "type": "keyword" + } + } + }, + { + date: { + "match": "dateField*", + "match_mapping_type": "date", + "mapping": { + "type": "date" + } + } + } + ] + properties: + test1: + type: text + + - do: + catch: /mapping set to strict_allow_templates, dynamic introduction of \[test2\] within \[\_doc\] is not allowed/ + index: + index: test_1 + id: 1 + body: { + stringField: bar, + objectField: { + bar1: "bar1", + bar2: "bar2" + }, + test1: test1, + test2: test2 + } + + - do: + index: + index: test_1 + id: 1 + body: { + 
stringField: bar, + objectField: { + bar1: "bar1", + bar2: "bar2" + }, + booleanField: true, + doubleField: 1.0, + longField: 100, + arrayField: ["1","2"], + dateField: "2024-06-25T05:11:51.243Z", + test1: test1 + } + + - do: + get: + index: test_1 + id: 1 + - match: { _source: { + stringField: bar, + objectField: { + bar1: "bar1", + bar2: "bar2" + }, + booleanField: true, + doubleField: 1.0, + longField: 100, + arrayField: [ "1","2" ], + dateField: "2024-06-25T05:11:51.243Z", + test1: test1 + } + } + + - do: + indices.get_mapping: { + index: test_1 + } + + - match: {test_1.mappings.dynamic: strict_allow_templates} + - match: {test_1.mappings.properties.stringField.type: keyword} + - match: {test_1.mappings.properties.objectField.properties.bar1.type: keyword} + - match: {test_1.mappings.properties.objectField.properties.bar2.type: text} + - match: {test_1.mappings.properties.booleanField.type: boolean} + - match: {test_1.mappings.properties.doubleField.type: double} + - match: {test_1.mappings.properties.longField.type: long} + - match: {test_1.mappings.properties.arrayField.type: keyword} + - match: {test_1.mappings.properties.dateField.type: date} + - match: {test_1.mappings.properties.test1.type: text} diff --git a/rest-api-spec/src/main/resources/rest-api-spec/test/indices.put_mapping/all_path_options.yml b/rest-api-spec/src/main/resources/rest-api-spec/test/indices.put_mapping/all_path_options.yml index ca7a21df20ea4..f579891478b19 100644 --- a/rest-api-spec/src/main/resources/rest-api-spec/test/indices.put_mapping/all_path_options.yml +++ b/rest-api-spec/src/main/resources/rest-api-spec/test/indices.put_mapping/all_path_options.yml @@ -159,3 +159,34 @@ setup: indices.get_mapping: {} - match: {test_index1.mappings.properties.text.type: text} + +--- +"post a mapping with setting dynamic to strict_allow_templates": + - skip: + version: " - 2.99.99" + reason: "introduced in 3.0.0" + - do: + indices.put_mapping: + index: test_index1 + body: + dynamic: strict_allow_templates + dynamic_templates: [ + { + strings: { + "match": "foo*", + "match_mapping_type": "string", + "mapping": { + "type": "keyword" + } + } + } + ] + properties: + test1: + type: text + + - do: + indices.get_mapping: {} + + - match: {test_index1.mappings.dynamic: strict_allow_templates} + - match: {test_index1.mappings.properties.test1.type: text} diff --git a/server/src/main/java/org/opensearch/index/mapper/DocumentParser.java b/server/src/main/java/org/opensearch/index/mapper/DocumentParser.java index f276d6ee2e579..c6815ebe8d91a 100644 --- a/server/src/main/java/org/opensearch/index/mapper/DocumentParser.java +++ b/server/src/main/java/org/opensearch/index/mapper/DocumentParser.java @@ -54,6 +54,7 @@ import java.util.Collections; import java.util.Iterator; import java.util.List; +import java.util.Locale; import static org.opensearch.index.mapper.FieldMapper.IGNORE_MALFORMED_SETTING; @@ -545,22 +546,32 @@ private static void parseObject(final ParseContext context, ObjectMapper mapper, Tuple parentMapperTuple = getDynamicParentMapper(context, paths, mapper); ObjectMapper parentMapper = parentMapperTuple.v2(); ObjectMapper.Dynamic dynamic = dynamicOrDefault(parentMapper, context); - if (dynamic == ObjectMapper.Dynamic.STRICT) { - throw new StrictDynamicMappingException(mapper.fullPath(), currentFieldName); - } else if (dynamic == ObjectMapper.Dynamic.TRUE) { - Mapper.Builder builder = context.root().findTemplateBuilder(context, currentFieldName, XContentFieldType.OBJECT); - if (builder == null) { - builder = new 
ObjectMapper.Builder(currentFieldName).enabled(true); - } - Mapper.BuilderContext builderContext = new Mapper.BuilderContext(context.indexSettings().getSettings(), context.path()); - objectMapper = builder.build(builderContext); - context.addDynamicMapper(objectMapper); - context.path().add(currentFieldName); - parseObjectOrField(context, objectMapper); - context.path().remove(); - } else { - // not dynamic, read everything up to end object - context.parser().skipChildren(); + switch (dynamic) { + case STRICT: + throw new StrictDynamicMappingException(dynamic.name().toLowerCase(Locale.ROOT), mapper.fullPath(), currentFieldName); + case TRUE: + case STRICT_ALLOW_TEMPLATES: + Mapper.Builder builder = findTemplateBuilder( + context, + currentFieldName, + XContentFieldType.OBJECT, + dynamic, + mapper.fullPath() + ); + + if (builder == null) { + builder = new ObjectMapper.Builder(currentFieldName).enabled(true); + } + Mapper.BuilderContext builderContext = new Mapper.BuilderContext(context.indexSettings().getSettings(), context.path()); + objectMapper = builder.build(builderContext); + context.addDynamicMapper(objectMapper); + context.path().add(currentFieldName); + parseObjectOrField(context, objectMapper); + context.path().remove(); + break; + case FALSE: + // not dynamic, read everything up to end object + context.parser().skipChildren(); } for (int i = 0; i < parentMapperTuple.v1(); i++) { context.path().remove(); @@ -591,31 +602,44 @@ private static void parseArray(ParseContext context, ObjectMapper parentMapper, Tuple parentMapperTuple = getDynamicParentMapper(context, paths, parentMapper); parentMapper = parentMapperTuple.v2(); ObjectMapper.Dynamic dynamic = dynamicOrDefault(parentMapper, context); - if (dynamic == ObjectMapper.Dynamic.STRICT) { - throw new StrictDynamicMappingException(parentMapper.fullPath(), arrayFieldName); - } else if (dynamic == ObjectMapper.Dynamic.TRUE) { - Mapper.Builder builder = context.root().findTemplateBuilder(context, arrayFieldName, XContentFieldType.OBJECT); - if (builder == null) { - parseNonDynamicArray(context, parentMapper, lastFieldName, arrayFieldName); - } else { - Mapper.BuilderContext builderContext = new Mapper.BuilderContext( - context.indexSettings().getSettings(), - context.path() + switch (dynamic) { + case STRICT: + throw new StrictDynamicMappingException( + dynamic.name().toLowerCase(Locale.ROOT), + parentMapper.fullPath(), + arrayFieldName ); - mapper = builder.build(builderContext); - assert mapper != null; - if (parsesArrayValue(mapper)) { - context.addDynamicMapper(mapper); - context.path().add(arrayFieldName); - parseObjectOrField(context, mapper); - context.path().remove(); - } else { + case TRUE: + case STRICT_ALLOW_TEMPLATES: + Mapper.Builder builder = findTemplateBuilder( + context, + arrayFieldName, + XContentFieldType.OBJECT, + dynamic, + parentMapper.fullPath() + ); + if (builder == null) { parseNonDynamicArray(context, parentMapper, lastFieldName, arrayFieldName); + } else { + Mapper.BuilderContext builderContext = new Mapper.BuilderContext( + context.indexSettings().getSettings(), + context.path() + ); + mapper = builder.build(builderContext); + assert mapper != null; + if (parsesArrayValue(mapper)) { + context.addDynamicMapper(mapper); + context.path().add(arrayFieldName); + parseObjectOrField(context, mapper); + context.path().remove(); + } else { + parseNonDynamicArray(context, parentMapper, lastFieldName, arrayFieldName); + } } - } - } else { - // TODO: shouldn't this skip, not parse? 
- parseNonDynamicArray(context, parentMapper, lastFieldName, arrayFieldName); + break; + case FALSE: + // TODO: shouldn't this skip, not parse? + parseNonDynamicArray(context, parentMapper, lastFieldName, arrayFieldName); } for (int i = 0; i < parentMapperTuple.v1(); i++) { context.path().remove(); @@ -692,11 +716,12 @@ private static void parseNullValue(ParseContext context, ObjectMapper parentMapp throws IOException { // we can only handle null values if we have mappings for them Mapper mapper = getMapper(context, parentMapper, lastFieldName, paths); + ObjectMapper.Dynamic dynamic = parentMapper.dynamic(); if (mapper != null) { // TODO: passing null to an object seems bogus? parseObjectOrField(context, mapper); - } else if (parentMapper.dynamic() == ObjectMapper.Dynamic.STRICT) { - throw new StrictDynamicMappingException(parentMapper.fullPath(), lastFieldName); + } else if (dynamic == ObjectMapper.Dynamic.STRICT || dynamic == ObjectMapper.Dynamic.STRICT_ALLOW_TEMPLATES) { + throw new StrictDynamicMappingException(dynamic.name().toLowerCase(Locale.ROOT), parentMapper.fullPath(), lastFieldName); } } @@ -711,7 +736,9 @@ private static Mapper.Builder newFloatBuilder(String name, Settings settings) private static Mapper.Builder createBuilderFromDynamicValue( final ParseContext context, XContentParser.Token token, - String currentFieldName + String currentFieldName, + ObjectMapper.Dynamic dynamic, + String fullPath ) throws IOException { if (token == XContentParser.Token.VALUE_STRING) { String text = context.parser().text(); @@ -733,13 +760,13 @@ private static Mapper.Builder createBuilderFromDynamicValue( } if (parseableAsLong && context.root().numericDetection()) { - Mapper.Builder builder = context.root().findTemplateBuilder(context, currentFieldName, XContentFieldType.LONG); + Mapper.Builder builder = findTemplateBuilder(context, currentFieldName, XContentFieldType.LONG, dynamic, fullPath); if (builder == null) { builder = newLongBuilder(currentFieldName, context.indexSettings().getSettings()); } return builder; } else if (parseableAsDouble && context.root().numericDetection()) { - Mapper.Builder builder = context.root().findTemplateBuilder(context, currentFieldName, XContentFieldType.DOUBLE); + Mapper.Builder builder = findTemplateBuilder(context, currentFieldName, XContentFieldType.DOUBLE, dynamic, fullPath); if (builder == null) { builder = newFloatBuilder(currentFieldName, context.indexSettings().getSettings()); } @@ -755,7 +782,7 @@ private static Mapper.Builder createBuilderFromDynamicValue( // failure to parse this, continue continue; } - Mapper.Builder builder = context.root().findTemplateBuilder(context, currentFieldName, dateTimeFormatter); + Mapper.Builder builder = findTemplateBuilder(context, currentFieldName, dateTimeFormatter, dynamic, fullPath); if (builder == null) { boolean ignoreMalformed = IGNORE_MALFORMED_SETTING.get(context.indexSettings().getSettings()); builder = new DateFieldMapper.Builder( @@ -771,7 +798,7 @@ private static Mapper.Builder createBuilderFromDynamicValue( } } - Mapper.Builder builder = context.root().findTemplateBuilder(context, currentFieldName, XContentFieldType.STRING); + Mapper.Builder builder = findTemplateBuilder(context, currentFieldName, XContentFieldType.STRING, dynamic, fullPath); if (builder == null) { builder = new TextFieldMapper.Builder(currentFieldName, context.mapperService().getIndexAnalyzers()).addMultiField( new KeywordFieldMapper.Builder("keyword").ignoreAbove(256) @@ -783,7 +810,7 @@ private static Mapper.Builder 
createBuilderFromDynamicValue( if (numberType == XContentParser.NumberType.INT || numberType == XContentParser.NumberType.LONG || numberType == XContentParser.NumberType.BIG_INTEGER) { - Mapper.Builder builder = context.root().findTemplateBuilder(context, currentFieldName, XContentFieldType.LONG); + Mapper.Builder builder = findTemplateBuilder(context, currentFieldName, XContentFieldType.LONG, dynamic, fullPath); if (builder == null) { builder = newLongBuilder(currentFieldName, context.indexSettings().getSettings()); } @@ -791,7 +818,7 @@ private static Mapper.Builder createBuilderFromDynamicValue( } else if (numberType == XContentParser.NumberType.FLOAT || numberType == XContentParser.NumberType.DOUBLE || numberType == XContentParser.NumberType.BIG_DECIMAL) { - Mapper.Builder builder = context.root().findTemplateBuilder(context, currentFieldName, XContentFieldType.DOUBLE); + Mapper.Builder builder = findTemplateBuilder(context, currentFieldName, XContentFieldType.DOUBLE, dynamic, fullPath); if (builder == null) { // no templates are defined, we use float by default instead of double // since this is much more space-efficient and should be enough most of @@ -801,19 +828,19 @@ private static Mapper.Builder createBuilderFromDynamicValue( return builder; } } else if (token == XContentParser.Token.VALUE_BOOLEAN) { - Mapper.Builder builder = context.root().findTemplateBuilder(context, currentFieldName, XContentFieldType.BOOLEAN); + Mapper.Builder builder = findTemplateBuilder(context, currentFieldName, XContentFieldType.BOOLEAN, dynamic, fullPath); if (builder == null) { builder = new BooleanFieldMapper.Builder(currentFieldName); } return builder; } else if (token == XContentParser.Token.VALUE_EMBEDDED_OBJECT) { - Mapper.Builder builder = context.root().findTemplateBuilder(context, currentFieldName, XContentFieldType.BINARY); + Mapper.Builder builder = findTemplateBuilder(context, currentFieldName, XContentFieldType.BINARY, dynamic, fullPath); if (builder == null) { builder = new BinaryFieldMapper.Builder(currentFieldName); } return builder; } else { - Mapper.Builder builder = context.root().findTemplateBuilder(context, currentFieldName, XContentFieldType.STRING); + Mapper.Builder builder = findTemplateBuilder(context, currentFieldName, XContentFieldType.STRING, dynamic, fullPath); if (builder != null) { return builder; } @@ -832,13 +859,13 @@ private static void parseDynamicValue( ) throws IOException { ObjectMapper.Dynamic dynamic = dynamicOrDefault(parentMapper, context); if (dynamic == ObjectMapper.Dynamic.STRICT) { - throw new StrictDynamicMappingException(parentMapper.fullPath(), currentFieldName); + throw new StrictDynamicMappingException(dynamic.name().toLowerCase(Locale.ROOT), parentMapper.fullPath(), currentFieldName); } if (dynamic == ObjectMapper.Dynamic.FALSE) { return; } final Mapper.BuilderContext builderContext = new Mapper.BuilderContext(context.indexSettings().getSettings(), context.path()); - final Mapper.Builder builder = createBuilderFromDynamicValue(context, token, currentFieldName); + final Mapper.Builder builder = createBuilderFromDynamicValue(context, token, currentFieldName, dynamic, parentMapper.fullPath()); Mapper mapper = builder.build(builderContext); context.addDynamicMapper(mapper); @@ -926,9 +953,16 @@ private static Tuple getDynamicParentMapper( switch (dynamic) { case STRICT: - throw new StrictDynamicMappingException(parent.fullPath(), paths[i]); + throw new StrictDynamicMappingException(dynamic.name().toLowerCase(Locale.ROOT), parent.fullPath(), paths[i]); + 
case STRICT_ALLOW_TEMPLATES: case TRUE: - Mapper.Builder builder = context.root().findTemplateBuilder(context, paths[i], XContentFieldType.OBJECT); + Mapper.Builder builder = findTemplateBuilder( + context, + paths[i], + XContentFieldType.OBJECT, + dynamic, + parent.fullPath() + ); if (builder == null) { builder = new ObjectMapper.Builder(paths[i]).enabled(true); } @@ -1010,4 +1044,37 @@ private static Mapper getMapper(final ParseContext context, ObjectMapper objectM } return objectMapper.getMapper(subfields[subfields.length - 1]); } + + // Throws exception if no dynamic templates found but `dynamic` is set to strict_allow_templates + @SuppressWarnings("rawtypes") + private static Mapper.Builder findTemplateBuilder( + ParseContext context, + String name, + XContentFieldType matchType, + ObjectMapper.Dynamic dynamic, + String fieldFullPath + ) { + Mapper.Builder builder = context.root().findTemplateBuilder(context, name, matchType); + if (builder == null && dynamic == ObjectMapper.Dynamic.STRICT_ALLOW_TEMPLATES) { + throw new StrictDynamicMappingException(dynamic.name().toLowerCase(Locale.ROOT), fieldFullPath, name); + } + + return builder; + } + + // Throws exception if no dynamic templates found but `dynamic` is set to strict_allow_templates + @SuppressWarnings("rawtypes") + private static Mapper.Builder findTemplateBuilder( + ParseContext context, + String name, + DateFormatter dateFormat, + ObjectMapper.Dynamic dynamic, + String fieldFullPath + ) { + Mapper.Builder builder = context.root().findTemplateBuilder(context, name, dateFormat); + if (builder == null && dynamic == ObjectMapper.Dynamic.STRICT_ALLOW_TEMPLATES) { + throw new StrictDynamicMappingException(dynamic.name().toLowerCase(Locale.ROOT), fieldFullPath, name); + } + return builder; + } } diff --git a/server/src/main/java/org/opensearch/index/mapper/ObjectMapper.java b/server/src/main/java/org/opensearch/index/mapper/ObjectMapper.java index be3adfe8b2c4e..533e6ca73d737 100644 --- a/server/src/main/java/org/opensearch/index/mapper/ObjectMapper.java +++ b/server/src/main/java/org/opensearch/index/mapper/ObjectMapper.java @@ -92,7 +92,8 @@ public static class Defaults { public enum Dynamic { TRUE, FALSE, - STRICT + STRICT, + STRICT_ALLOW_TEMPLATES } /** @@ -297,6 +298,8 @@ protected static boolean parseObjectOrDocumentTypeProperties( String value = fieldNode.toString(); if (value.equalsIgnoreCase("strict")) { builder.dynamic(Dynamic.STRICT); + } else if (value.equalsIgnoreCase("strict_allow_templates")) { + builder.dynamic(Dynamic.STRICT_ALLOW_TEMPLATES); } else { boolean dynamic = XContentMapValues.nodeBooleanValue(fieldNode, fieldName + ".dynamic"); builder.dynamic(dynamic ? 
Dynamic.TRUE : Dynamic.FALSE); diff --git a/server/src/main/java/org/opensearch/index/mapper/StrictDynamicMappingException.java b/server/src/main/java/org/opensearch/index/mapper/StrictDynamicMappingException.java index 9127641128dad..0524c672011c5 100644 --- a/server/src/main/java/org/opensearch/index/mapper/StrictDynamicMappingException.java +++ b/server/src/main/java/org/opensearch/index/mapper/StrictDynamicMappingException.java @@ -43,8 +43,8 @@ */ public class StrictDynamicMappingException extends MapperParsingException { - public StrictDynamicMappingException(String path, String fieldName) { - super("mapping set to strict, dynamic introduction of [" + fieldName + "] within [" + path + "] is not allowed"); + public StrictDynamicMappingException(String dynamic, String path, String fieldName) { + super("mapping set to " + dynamic + ", dynamic introduction of [" + fieldName + "] within [" + path + "] is not allowed"); } public StrictDynamicMappingException(StreamInput in) throws IOException { diff --git a/server/src/test/java/org/opensearch/index/mapper/CopyToMapperTests.java b/server/src/test/java/org/opensearch/index/mapper/CopyToMapperTests.java index b274cf28429e8..7a8c4ffe35021 100644 --- a/server/src/test/java/org/opensearch/index/mapper/CopyToMapperTests.java +++ b/server/src/test/java/org/opensearch/index/mapper/CopyToMapperTests.java @@ -247,6 +247,46 @@ public void testCopyToStrictDynamicInnerObjectParsing() throws Exception { assertThat(e.getMessage(), startsWith("mapping set to strict, dynamic introduction of [very] within [_doc] is not allowed")); } + public void testCopyToStrictAllowTemplatesDynamicInnerObjectParsing() throws Exception { + DocumentMapper docMapper = createDocumentMapper(topMapping(b -> { + b.field("dynamic", "strict_allow_templates"); + b.startArray("dynamic_templates"); + { + b.startObject(); + { + b.startObject("test"); + { + b.field("match", "test"); + b.startObject("mapping").field("type", "object").endObject(); + } + b.endObject(); + } + b.endObject(); + } + b.endArray(); + b.startObject("properties"); + { + b.startObject("copy_test"); + { + b.field("type", "text"); + b.field("copy_to", "very.inner.field"); + } + b.endObject(); + } + b.endObject(); + })); + + MapperParsingException e = expectThrows( + MapperParsingException.class, + () -> docMapper.parse(source(b -> b.field("copy_test", "foo"))) + ); + + assertThat( + e.getMessage(), + startsWith("mapping set to strict_allow_templates, dynamic introduction of [very] within [_doc] is not allowed") + ); + } + public void testCopyToInnerStrictDynamicInnerObjectParsing() throws Exception { DocumentMapper docMapper = createDocumentMapper(mapping(b -> { diff --git a/server/src/test/java/org/opensearch/index/mapper/DocumentParserTests.java b/server/src/test/java/org/opensearch/index/mapper/DocumentParserTests.java index ecab9da8c6b6c..15e2b6649b0be 100644 --- a/server/src/test/java/org/opensearch/index/mapper/DocumentParserTests.java +++ b/server/src/test/java/org/opensearch/index/mapper/DocumentParserTests.java @@ -878,6 +878,340 @@ public void testDynamicStrictDottedFieldNameLong() throws Exception { assertEquals("mapping set to strict, dynamic introduction of [foo] within [_doc] is not allowed", exception.getMessage()); } + public void testDynamicStrictAllowTemplatesDottedFieldNameLong() throws Exception { + DocumentMapper documentMapper = createDocumentMapper(topMapping(b -> b.field("dynamic", "strict_allow_templates"))); + StrictDynamicMappingException exception = expectThrows( + 
StrictDynamicMappingException.class, + () -> documentMapper.parse(source(b -> b.field("foo.bar.baz", 0))) + ); + assertEquals( + "mapping set to strict_allow_templates, dynamic introduction of [foo] within [_doc] is not allowed", + exception.getMessage() + ); + + DocumentMapper documentMapperWithDynamicTemplates = createDocumentMapper(topMapping(b -> { + b.field("dynamic", "strict_allow_templates"); + b.startArray("dynamic_templates"); + { + b.startObject(); + { + b.startObject("test"); + { + b.field("path_match", "foo.bar.baz"); + b.startObject("mapping").field("type", "long").endObject(); + } + b.endObject(); + } + b.endObject(); + } + b.endArray(); + })); + exception = expectThrows( + StrictDynamicMappingException.class, + () -> documentMapperWithDynamicTemplates.parse(source(b -> b.field("foo.bar.baz", 0))) + ); + assertEquals( + "mapping set to strict_allow_templates, dynamic introduction of [foo] within [_doc] is not allowed", + exception.getMessage() + ); + + DocumentMapper mapper = createDocumentMapper(topMapping(b -> { + b.field("dynamic", "strict_allow_templates"); + b.startArray("dynamic_templates"); + { + b.startObject(); + { + b.startObject("test"); + { + b.field("match", "foo"); + b.field("match_mapping_type", "object"); + b.startObject("mapping").field("type", "object").endObject(); + } + b.endObject(); + } + b.endObject(); + } + { + b.startObject(); + { + b.startObject("test1"); + { + b.field("match", "bar"); + b.field("match_mapping_type", "object"); + b.startObject("mapping").field("type", "object").endObject(); + } + b.endObject(); + } + b.endObject(); + } + { + b.startObject(); + { + b.startObject("test2"); + { + b.field("path_match", "foo.bar.baz"); + b.startObject("mapping").field("type", "long").endObject(); + } + b.endObject(); + } + b.endObject(); + } + b.endArray(); + })); + + ParsedDocument doc = mapper.parse(source(b -> b.field("foo.bar.baz", 0))); + assertEquals(2, doc.rootDoc().getFields("foo.bar.baz").length); + } + + public void testDynamicAllowTemplatesStrictLongArray() throws Exception { + DocumentMapper documentMapper = createDocumentMapper(topMapping(b -> b.field("dynamic", "strict_allow_templates"))); + StrictDynamicMappingException exception = expectThrows( + StrictDynamicMappingException.class, + () -> documentMapper.parse(source(b -> b.startArray("foo").value(0).value(1).endArray())) + ); + assertEquals( + "mapping set to strict_allow_templates, dynamic introduction of [foo] within [_doc] is not allowed", + exception.getMessage() + ); + + DocumentMapper documentMapperWithDynamicTemplates = createDocumentMapper(topMapping(b -> { + b.field("dynamic", "strict_allow_templates"); + b.startArray("dynamic_templates"); + { + b.startObject(); + { + b.startObject("test"); + { + b.field("match", "test"); + b.startObject("mapping").field("type", "long").endObject(); + } + b.endObject(); + } + b.endObject(); + } + b.endArray(); + })); + exception = expectThrows( + StrictDynamicMappingException.class, + () -> documentMapperWithDynamicTemplates.parse(source(b -> b.startArray("foo").value(0).value(1).endArray())) + ); + assertEquals( + "mapping set to strict_allow_templates, dynamic introduction of [foo] within [_doc] is not allowed", + exception.getMessage() + ); + + DocumentMapper mapper = createDocumentMapper(topMapping(b -> { + b.field("dynamic", "strict_allow_templates"); + b.startArray("dynamic_templates"); + { + b.startObject(); + { + b.startObject("test"); + { + b.field("match", "foo"); + b.startObject("mapping").field("type", "long").endObject(); + } + 
b.endObject(); + } + b.endObject(); + } + b.endArray(); + })); + + ParsedDocument doc = mapper.parse(source(b -> b.startArray("foo").value(0).value(1).endArray())); + assertEquals(4, doc.rootDoc().getFields("foo").length); + } + + public void testDynamicStrictAllowTemplatesDottedFieldNameObject() throws Exception { + DocumentMapper documentMapper = createDocumentMapper(topMapping(b -> b.field("dynamic", "strict_allow_templates"))); + StrictDynamicMappingException exception = expectThrows( + StrictDynamicMappingException.class, + () -> documentMapper.parse(source(b -> b.startObject("foo.bar.baz").field("a", 0).endObject())) + ); + assertEquals( + "mapping set to strict_allow_templates, dynamic introduction of [foo] within [_doc] is not allowed", + exception.getMessage() + ); + + DocumentMapper documentMapperWithDynamicTemplates = createDocumentMapper(topMapping(b -> { + b.field("dynamic", "strict_allow_templates"); + b.startArray("dynamic_templates"); + { + b.startObject(); + { + b.startObject("test"); + { + b.field("match", "test"); + b.startObject("mapping").field("type", "long").endObject(); + } + b.endObject(); + } + b.endObject(); + } + b.endArray(); + })); + exception = expectThrows( + StrictDynamicMappingException.class, + () -> documentMapperWithDynamicTemplates.parse(source(b -> b.startObject("foo.bar.baz").field("a", 0).endObject())) + ); + assertEquals( + "mapping set to strict_allow_templates, dynamic introduction of [foo] within [_doc] is not allowed", + exception.getMessage() + ); + + DocumentMapper mapper = createDocumentMapper(topMapping(b -> { + b.field("dynamic", "strict_allow_templates"); + b.startArray("dynamic_templates"); + { + b.startObject(); + { + b.startObject("test"); + { + b.field("match", "foo"); + b.field("match_mapping_type", "object"); + b.startObject("mapping").field("type", "object").endObject(); + } + b.endObject(); + } + b.endObject(); + } + { + b.startObject(); + { + b.startObject("test1"); + { + b.field("match", "bar"); + b.field("match_mapping_type", "object"); + b.startObject("mapping").field("type", "object").endObject(); + } + b.endObject(); + } + b.endObject(); + } + { + b.startObject(); + { + b.startObject("test2"); + { + b.field("match", "baz"); + b.field("match_mapping_type", "object"); + b.startObject("mapping").field("type", "object").endObject(); + } + b.endObject(); + } + b.endObject(); + } + { + b.startObject(); + { + b.startObject("test3"); + { + b.field("path_match", "foo.bar.baz.a"); + b.startObject("mapping").field("type", "long").endObject(); + } + b.endObject(); + } + b.endObject(); + } + b.endArray(); + })); + + ParsedDocument doc = mapper.parse(source(b -> b.startObject("foo.bar.baz").field("a", 0).endObject())); + assertEquals(2, doc.rootDoc().getFields("foo.bar.baz.a").length); + } + + public void testDynamicStrictAllowTemplatesObject() throws Exception { + DocumentMapper mapper = createDocumentMapper(topMapping(b -> { + b.field("dynamic", "strict_allow_templates"); + b.startArray("dynamic_templates"); + { + b.startObject(); + { + b.startObject("test"); + { + b.field("match", "test"); + b.field("match_mapping_type", "object"); + b.startObject("mapping").field("type", "object").endObject(); + } + b.endObject(); + } + b.endObject(); + } + { + b.startObject(); + { + b.startObject("test1"); + { + b.field("match", "test1"); + b.startObject("mapping").field("type", "keyword").endObject(); + } + b.endObject(); + } + b.endObject(); + } + b.endArray(); + } + + )); + StrictDynamicMappingException exception = expectThrows( + 
StrictDynamicMappingException.class, + () -> mapper.parse(source(b -> b.startObject("foo").field("bar", "baz").endObject())) + ); + assertEquals( + "mapping set to strict_allow_templates, dynamic introduction of [foo] within [_doc] is not allowed", + exception.getMessage() + ); + + ParsedDocument doc = mapper.parse(source(b -> b.startObject("test").field("test1", "baz").endObject())); + assertEquals(2, doc.rootDoc().getFields("test.test1").length); + } + + public void testDynamicStrictAllowTemplatesValue() throws Exception { + DocumentMapper mapper = createDocumentMapper(topMapping(b -> { + b.field("dynamic", "strict_allow_templates"); + b.startArray("dynamic_templates"); + { + b.startObject(); + { + b.startObject("test"); + { + b.field("match", "test*"); + b.field("match_mapping_type", "string"); + b.startObject("mapping").field("type", "keyword").endObject(); + } + b.endObject(); + } + b.endObject(); + } + b.endArray(); + } + + )); + StrictDynamicMappingException exception = expectThrows( + StrictDynamicMappingException.class, + () -> mapper.parse(source(b -> b.field("bar", "baz"))) + ); + assertEquals( + "mapping set to strict_allow_templates, dynamic introduction of [bar] within [_doc] is not allowed", + exception.getMessage() + ); + + ParsedDocument doc = mapper.parse(source(b -> b.field("test1", "baz"))); + assertEquals(2, doc.rootDoc().getFields("test1").length); + } + + public void testDynamicStrictAllowTemplatesNull() throws Exception { + DocumentMapper mapper = createDocumentMapper(topMapping(b -> b.field("dynamic", "strict_allow_templates"))); + StrictDynamicMappingException exception = expectThrows( + StrictDynamicMappingException.class, + () -> mapper.parse(source(b -> b.nullField("bar"))) + ); + assertEquals( + "mapping set to strict_allow_templates, dynamic introduction of [bar] within [_doc] is not allowed", + exception.getMessage() + ); + } + public void testDynamicDottedFieldNameObject() throws Exception { DocumentMapper mapper = createDocumentMapper(mapping(b -> {})); ParsedDocument doc = mapper.parse(source(b -> b.startObject("foo.bar.baz").field("a", 0).endObject())); From afa479b2c5ce9a22220bf2f4de49ae4ca69c3bc7 Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Mon, 15 Jul 2024 11:13:25 -0400 Subject: [PATCH 060/167] Bump net.minidev:json-smart from 2.5.0 to 2.5.1 in /plugins/repository-azure (#14748) * Bump net.minidev:json-smart in /plugins/repository-azure Bumps [net.minidev:json-smart](https://github.com/netplex/json-smart-v2) from 2.5.0 to 2.5.1. - [Release notes](https://github.com/netplex/json-smart-v2/releases) - [Commits](https://github.com/netplex/json-smart-v2/compare/2.5.0...2.5.1) --- updated-dependencies: - dependency-name: net.minidev:json-smart dependency-type: direct:production update-type: version-update:semver-patch ... 
Signed-off-by: dependabot[bot] * Updating SHAs Signed-off-by: dependabot[bot] * Update changelog Signed-off-by: dependabot[bot] --------- Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: dependabot[bot] --- CHANGELOG.md | 1 + plugins/repository-azure/build.gradle | 2 +- plugins/repository-azure/licenses/json-smart-2.5.0.jar.sha1 | 1 - plugins/repository-azure/licenses/json-smart-2.5.1.jar.sha1 | 1 + 4 files changed, 3 insertions(+), 2 deletions(-) delete mode 100644 plugins/repository-azure/licenses/json-smart-2.5.0.jar.sha1 create mode 100644 plugins/repository-azure/licenses/json-smart-2.5.1.jar.sha1 diff --git a/CHANGELOG.md b/CHANGELOG.md index fbe2b7f50d446..6c260f8be9ca3 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -36,6 +36,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), - Bump `com.github.spullara.mustache.java:compiler` from 0.9.13 to 0.9.14 ([#14672](https://github.com/opensearch-project/OpenSearch/pull/14672)) - Bump `net.minidev:accessors-smart` from 2.5.0 to 2.5.1 ([#14673](https://github.com/opensearch-project/OpenSearch/pull/14673)) - Bump `jackson` from 2.17.1 to 2.17.2 ([#14687](https://github.com/opensearch-project/OpenSearch/pull/14687)) +- Bump `net.minidev:json-smart` from 2.5.0 to 2.5.1 ([#14748](https://github.com/opensearch-project/OpenSearch/pull/14748)) ### Changed - [Tiered Caching] Move query recomputation logic outside write lock ([#14187](https://github.com/opensearch-project/OpenSearch/pull/14187)) diff --git a/plugins/repository-azure/build.gradle b/plugins/repository-azure/build.gradle index 0f822a02e05d8..980940e35b0b0 100644 --- a/plugins/repository-azure/build.gradle +++ b/plugins/repository-azure/build.gradle @@ -68,7 +68,7 @@ dependencies { api 'com.nimbusds:lang-tag:1.7' // Both msal4j:1.14.3 and oauth2-oidc-sdk:11.9.1 has compile dependency on different versions of json-smart, // selected the higher version which is 2.5.0 - api 'net.minidev:json-smart:2.5.0' + api 'net.minidev:json-smart:2.5.1' api 'net.minidev:accessors-smart:2.5.1' api "org.ow2.asm:asm:${versions.asm}" // End of transitive dependencies for azure-identity diff --git a/plugins/repository-azure/licenses/json-smart-2.5.0.jar.sha1 b/plugins/repository-azure/licenses/json-smart-2.5.0.jar.sha1 deleted file mode 100644 index 3ec055efa1255..0000000000000 --- a/plugins/repository-azure/licenses/json-smart-2.5.0.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -57a64f421b472849c40e77d2e7cce3a141b41e99 \ No newline at end of file diff --git a/plugins/repository-azure/licenses/json-smart-2.5.1.jar.sha1 b/plugins/repository-azure/licenses/json-smart-2.5.1.jar.sha1 new file mode 100644 index 0000000000000..fe23968afce1e --- /dev/null +++ b/plugins/repository-azure/licenses/json-smart-2.5.1.jar.sha1 @@ -0,0 +1 @@ +4c11d2808d009132dfbbf947ebf37de6bf266c8e \ No newline at end of file From 26ac836751fd91afe7c140c08c5fd0bce3b216bb Mon Sep 17 00:00:00 2001 From: Chenyang Ji Date: Mon, 15 Jul 2024 12:56:33 -0700 Subject: [PATCH 061/167] remove query insights plugin from core (#14743) Signed-off-by: Chenyang Ji --- plugins/query-insights/build.gradle | 18 - .../QueryInsightsPluginTransportIT.java | 274 ------------- .../plugin/insights/TopQueriesRestIT.java | 107 ----- .../plugin/insights/QueryInsightsPlugin.java | 125 ------ .../insights/core/exporter/DebugExporter.java | 61 --- .../core/exporter/LocalIndexExporter.java | 113 ------ 
.../core/exporter/QueryInsightsExporter.java | 26 -- .../QueryInsightsExporterFactory.java | 143 ------- .../insights/core/exporter/SinkType.java | 66 ---- .../insights/core/exporter/package-info.java | 12 - .../core/listener/QueryInsightsListener.java | 202 ---------- .../insights/core/listener/package-info.java | 12 - .../core/service/QueryInsightsService.java | 283 ------------- .../core/service/TopQueriesService.java | 372 ------------------ .../insights/core/service/package-info.java | 12 - .../plugin/insights/package-info.java | 12 - .../insights/rules/action/package-info.java | 12 - .../rules/action/top_queries/TopQueries.java | 77 ---- .../action/top_queries/TopQueriesAction.java | 32 -- .../action/top_queries/TopQueriesRequest.java | 62 --- .../top_queries/TopQueriesResponse.java | 143 ------- .../action/top_queries/package-info.java | 12 - .../insights/rules/model/Attribute.java | 82 ---- .../insights/rules/model/MetricType.java | 119 ------ .../rules/model/SearchQueryRecord.java | 183 --------- .../insights/rules/model/package-info.java | 12 - .../rules/resthandler/package-info.java | 12 - .../top_queries/RestTopQueriesAction.java | 99 ----- .../resthandler/top_queries/package-info.java | 12 - .../rules/transport/package-info.java | 12 - .../TransportTopQueriesAction.java | 148 ------- .../transport/top_queries/package-info.java | 12 - .../settings/QueryInsightsSettings.java | 304 -------------- .../insights/settings/package-info.java | 12 - .../insights/QueryInsightsPluginTests.java | 113 ------ .../insights/QueryInsightsTestUtils.java | 205 ---------- .../core/exporter/DebugExporterTests.java | 37 -- .../exporter/LocalIndexExporterTests.java | 99 ----- .../QueryInsightsExporterFactoryTests.java | 89 ----- .../listener/QueryInsightsListenerTests.java | 217 ---------- .../service/QueryInsightsServiceTests.java | 65 --- .../core/service/TopQueriesServiceTests.java | 112 ------ .../top_queries/TopQueriesRequestTests.java | 43 -- .../top_queries/TopQueriesResponseTests.java | 71 ---- .../action/top_queries/TopQueriesTests.java | 35 -- .../rules/model/SearchQueryRecordTests.java | 71 ---- .../RestTopQueriesActionTests.java | 70 ---- .../TransportTopQueriesActionTests.java | 85 ---- 48 files changed, 4495 deletions(-) delete mode 100644 plugins/query-insights/build.gradle delete mode 100644 plugins/query-insights/src/internalClusterTest/java/org/opensearch/plugin/insights/QueryInsightsPluginTransportIT.java delete mode 100644 plugins/query-insights/src/javaRestTest/java/org/opensearch/plugin/insights/TopQueriesRestIT.java delete mode 100644 plugins/query-insights/src/main/java/org/opensearch/plugin/insights/QueryInsightsPlugin.java delete mode 100644 plugins/query-insights/src/main/java/org/opensearch/plugin/insights/core/exporter/DebugExporter.java delete mode 100644 plugins/query-insights/src/main/java/org/opensearch/plugin/insights/core/exporter/LocalIndexExporter.java delete mode 100644 plugins/query-insights/src/main/java/org/opensearch/plugin/insights/core/exporter/QueryInsightsExporter.java delete mode 100644 plugins/query-insights/src/main/java/org/opensearch/plugin/insights/core/exporter/QueryInsightsExporterFactory.java delete mode 100644 plugins/query-insights/src/main/java/org/opensearch/plugin/insights/core/exporter/SinkType.java delete mode 100644 plugins/query-insights/src/main/java/org/opensearch/plugin/insights/core/exporter/package-info.java delete mode 100644 
plugins/query-insights/src/main/java/org/opensearch/plugin/insights/core/listener/QueryInsightsListener.java delete mode 100644 plugins/query-insights/src/main/java/org/opensearch/plugin/insights/core/listener/package-info.java delete mode 100644 plugins/query-insights/src/main/java/org/opensearch/plugin/insights/core/service/QueryInsightsService.java delete mode 100644 plugins/query-insights/src/main/java/org/opensearch/plugin/insights/core/service/TopQueriesService.java delete mode 100644 plugins/query-insights/src/main/java/org/opensearch/plugin/insights/core/service/package-info.java delete mode 100644 plugins/query-insights/src/main/java/org/opensearch/plugin/insights/package-info.java delete mode 100644 plugins/query-insights/src/main/java/org/opensearch/plugin/insights/rules/action/package-info.java delete mode 100644 plugins/query-insights/src/main/java/org/opensearch/plugin/insights/rules/action/top_queries/TopQueries.java delete mode 100644 plugins/query-insights/src/main/java/org/opensearch/plugin/insights/rules/action/top_queries/TopQueriesAction.java delete mode 100644 plugins/query-insights/src/main/java/org/opensearch/plugin/insights/rules/action/top_queries/TopQueriesRequest.java delete mode 100644 plugins/query-insights/src/main/java/org/opensearch/plugin/insights/rules/action/top_queries/TopQueriesResponse.java delete mode 100644 plugins/query-insights/src/main/java/org/opensearch/plugin/insights/rules/action/top_queries/package-info.java delete mode 100644 plugins/query-insights/src/main/java/org/opensearch/plugin/insights/rules/model/Attribute.java delete mode 100644 plugins/query-insights/src/main/java/org/opensearch/plugin/insights/rules/model/MetricType.java delete mode 100644 plugins/query-insights/src/main/java/org/opensearch/plugin/insights/rules/model/SearchQueryRecord.java delete mode 100644 plugins/query-insights/src/main/java/org/opensearch/plugin/insights/rules/model/package-info.java delete mode 100644 plugins/query-insights/src/main/java/org/opensearch/plugin/insights/rules/resthandler/package-info.java delete mode 100644 plugins/query-insights/src/main/java/org/opensearch/plugin/insights/rules/resthandler/top_queries/RestTopQueriesAction.java delete mode 100644 plugins/query-insights/src/main/java/org/opensearch/plugin/insights/rules/resthandler/top_queries/package-info.java delete mode 100644 plugins/query-insights/src/main/java/org/opensearch/plugin/insights/rules/transport/package-info.java delete mode 100644 plugins/query-insights/src/main/java/org/opensearch/plugin/insights/rules/transport/top_queries/TransportTopQueriesAction.java delete mode 100644 plugins/query-insights/src/main/java/org/opensearch/plugin/insights/rules/transport/top_queries/package-info.java delete mode 100644 plugins/query-insights/src/main/java/org/opensearch/plugin/insights/settings/QueryInsightsSettings.java delete mode 100644 plugins/query-insights/src/main/java/org/opensearch/plugin/insights/settings/package-info.java delete mode 100644 plugins/query-insights/src/test/java/org/opensearch/plugin/insights/QueryInsightsPluginTests.java delete mode 100644 plugins/query-insights/src/test/java/org/opensearch/plugin/insights/QueryInsightsTestUtils.java delete mode 100644 plugins/query-insights/src/test/java/org/opensearch/plugin/insights/core/exporter/DebugExporterTests.java delete mode 100644 plugins/query-insights/src/test/java/org/opensearch/plugin/insights/core/exporter/LocalIndexExporterTests.java delete mode 100644 
plugins/query-insights/src/test/java/org/opensearch/plugin/insights/core/exporter/QueryInsightsExporterFactoryTests.java delete mode 100644 plugins/query-insights/src/test/java/org/opensearch/plugin/insights/core/listener/QueryInsightsListenerTests.java delete mode 100644 plugins/query-insights/src/test/java/org/opensearch/plugin/insights/core/service/QueryInsightsServiceTests.java delete mode 100644 plugins/query-insights/src/test/java/org/opensearch/plugin/insights/core/service/TopQueriesServiceTests.java delete mode 100644 plugins/query-insights/src/test/java/org/opensearch/plugin/insights/rules/action/top_queries/TopQueriesRequestTests.java delete mode 100644 plugins/query-insights/src/test/java/org/opensearch/plugin/insights/rules/action/top_queries/TopQueriesResponseTests.java delete mode 100644 plugins/query-insights/src/test/java/org/opensearch/plugin/insights/rules/action/top_queries/TopQueriesTests.java delete mode 100644 plugins/query-insights/src/test/java/org/opensearch/plugin/insights/rules/model/SearchQueryRecordTests.java delete mode 100644 plugins/query-insights/src/test/java/org/opensearch/plugin/insights/rules/resthandler/top_queries/RestTopQueriesActionTests.java delete mode 100644 plugins/query-insights/src/test/java/org/opensearch/plugin/insights/rules/transport/top_queries/TransportTopQueriesActionTests.java diff --git a/plugins/query-insights/build.gradle b/plugins/query-insights/build.gradle deleted file mode 100644 index eabbd395bd3bd..0000000000000 --- a/plugins/query-insights/build.gradle +++ /dev/null @@ -1,18 +0,0 @@ -/* - * SPDX-License-Identifier: Apache-2.0 - * - * The OpenSearch Contributors require contributions made to - * this file be licensed under the Apache-2.0 license or a - * compatible open source license. - * - * Modifications Copyright OpenSearch Contributors. See - * GitHub history for details. - */ - -opensearchplugin { - description 'OpenSearch Query Insights Plugin.' - classname 'org.opensearch.plugin.insights.QueryInsightsPlugin' -} - -dependencies { -} diff --git a/plugins/query-insights/src/internalClusterTest/java/org/opensearch/plugin/insights/QueryInsightsPluginTransportIT.java b/plugins/query-insights/src/internalClusterTest/java/org/opensearch/plugin/insights/QueryInsightsPluginTransportIT.java deleted file mode 100644 index 04e715444f50a..0000000000000 --- a/plugins/query-insights/src/internalClusterTest/java/org/opensearch/plugin/insights/QueryInsightsPluginTransportIT.java +++ /dev/null @@ -1,274 +0,0 @@ -/* - * SPDX-License-Identifier: Apache-2.0 - * - * The OpenSearch Contributors require contributions made to - * this file be licensed under the Apache-2.0 license or a - * compatible open source license. 
- */ - -package org.opensearch.plugin.insights; - -import org.opensearch.action.admin.cluster.health.ClusterHealthResponse; -import org.opensearch.action.admin.cluster.node.info.NodeInfo; -import org.opensearch.action.admin.cluster.node.info.NodesInfoRequest; -import org.opensearch.action.admin.cluster.node.info.NodesInfoResponse; -import org.opensearch.action.admin.cluster.node.info.PluginsAndModules; -import org.opensearch.action.admin.cluster.settings.ClusterUpdateSettingsRequest; -import org.opensearch.action.index.IndexResponse; -import org.opensearch.action.search.SearchResponse; -import org.opensearch.common.settings.Settings; -import org.opensearch.index.query.QueryBuilders; -import org.opensearch.plugin.insights.rules.action.top_queries.TopQueriesAction; -import org.opensearch.plugin.insights.rules.action.top_queries.TopQueriesRequest; -import org.opensearch.plugin.insights.rules.action.top_queries.TopQueriesResponse; -import org.opensearch.plugin.insights.rules.model.MetricType; -import org.opensearch.plugins.Plugin; -import org.opensearch.plugins.PluginInfo; -import org.opensearch.test.OpenSearchIntegTestCase; -import org.junit.Assert; - -import java.util.Arrays; -import java.util.Collection; -import java.util.List; -import java.util.concurrent.ExecutionException; -import java.util.function.Function; -import java.util.stream.Collectors; -import java.util.stream.Stream; - -import static org.opensearch.plugin.insights.settings.QueryInsightsSettings.TOP_N_LATENCY_QUERIES_ENABLED; -import static org.opensearch.plugin.insights.settings.QueryInsightsSettings.TOP_N_LATENCY_QUERIES_SIZE; -import static org.opensearch.plugin.insights.settings.QueryInsightsSettings.TOP_N_LATENCY_QUERIES_WINDOW_SIZE; -import static org.opensearch.test.hamcrest.OpenSearchAssertions.assertAcked; - -/** - * Transport Action tests for Query Insights Plugin - */ - -@OpenSearchIntegTestCase.ClusterScope(numDataNodes = 0, scope = OpenSearchIntegTestCase.Scope.TEST) -public class QueryInsightsPluginTransportIT extends OpenSearchIntegTestCase { - - private final int TOTAL_NUMBER_OF_NODES = 2; - private final int TOTAL_SEARCH_REQUESTS = 5; - - @Override - protected Collection> nodePlugins() { - return Arrays.asList(QueryInsightsPlugin.class); - } - - /** - * Test Query Insights Plugin is installed - */ - public void testQueryInsightPluginInstalled() { - NodesInfoRequest nodesInfoRequest = new NodesInfoRequest(); - nodesInfoRequest.addMetric(NodesInfoRequest.Metric.PLUGINS.metricName()); - NodesInfoResponse nodesInfoResponse = OpenSearchIntegTestCase.client().admin().cluster().nodesInfo(nodesInfoRequest).actionGet(); - List pluginInfos = nodesInfoResponse.getNodes() - .stream() - .flatMap( - (Function>) nodeInfo -> nodeInfo.getInfo(PluginsAndModules.class).getPluginInfos().stream() - ) - .collect(Collectors.toList()); - Assert.assertTrue( - pluginInfos.stream().anyMatch(pluginInfo -> pluginInfo.getName().equals("org.opensearch.plugin.insights.QueryInsightsPlugin")) - ); - } - - /** - * Test get top queries when feature disabled - */ - public void testGetTopQueriesWhenFeatureDisabled() { - TopQueriesRequest request = new TopQueriesRequest(MetricType.LATENCY); - TopQueriesResponse response = OpenSearchIntegTestCase.client().execute(TopQueriesAction.INSTANCE, request).actionGet(); - Assert.assertNotEquals(0, response.failures().size()); - Assert.assertEquals( - "Cannot get top n queries for [latency] when it is not enabled.", - response.failures().get(0).getCause().getCause().getMessage() - ); - } - - /** - * Test 
update top query record when feature enabled - */ - public void testUpdateRecordWhenFeatureDisabledThenEnabled() throws ExecutionException, InterruptedException { - Settings commonSettings = Settings.builder().put(TOP_N_LATENCY_QUERIES_ENABLED.getKey(), "false").build(); - - logger.info("--> starting nodes for query insight testing"); - List nodes = internalCluster().startNodes(TOTAL_NUMBER_OF_NODES, Settings.builder().put(commonSettings).build()); - - logger.info("--> waiting for nodes to form a cluster"); - ClusterHealthResponse health = client().admin().cluster().prepareHealth().setWaitForNodes("2").execute().actionGet(); - assertFalse(health.isTimedOut()); - - assertAcked( - prepareCreate("test").setSettings(Settings.builder().put("index.number_of_shards", 2).put("index.number_of_replicas", 2)) - ); - ensureStableCluster(2); - logger.info("--> creating indices for query insight testing"); - for (int i = 0; i < 5; i++) { - IndexResponse response = client().prepareIndex("test_" + i).setId("" + i).setSource("field_" + i, "value_" + i).get(); - assertEquals("CREATED", response.status().toString()); - } - // making search requests to get top queries - for (int i = 0; i < TOTAL_SEARCH_REQUESTS; i++) { - SearchResponse searchResponse = internalCluster().client(randomFrom(nodes)) - .prepareSearch() - .setQuery(QueryBuilders.matchAllQuery()) - .get(); - assertEquals(searchResponse.getFailedShards(), 0); - } - - TopQueriesRequest request = new TopQueriesRequest(MetricType.LATENCY); - TopQueriesResponse response = OpenSearchIntegTestCase.client().execute(TopQueriesAction.INSTANCE, request).actionGet(); - Assert.assertNotEquals(0, response.failures().size()); - Assert.assertEquals( - "Cannot get top n queries for [latency] when it is not enabled.", - response.failures().get(0).getCause().getCause().getMessage() - ); - - ClusterUpdateSettingsRequest updateSettingsRequest = new ClusterUpdateSettingsRequest().persistentSettings( - Settings.builder().put(TOP_N_LATENCY_QUERIES_ENABLED.getKey(), "true").build() - ); - assertAcked(internalCluster().client().admin().cluster().updateSettings(updateSettingsRequest).get()); - TopQueriesRequest request2 = new TopQueriesRequest(MetricType.LATENCY); - TopQueriesResponse response2 = OpenSearchIntegTestCase.client().execute(TopQueriesAction.INSTANCE, request2).actionGet(); - Assert.assertEquals(0, response2.failures().size()); - Assert.assertEquals(TOTAL_NUMBER_OF_NODES, response2.getNodes().size()); - for (int i = 0; i < TOTAL_NUMBER_OF_NODES; i++) { - Assert.assertEquals(0, response2.getNodes().get(i).getTopQueriesRecord().size()); - } - - internalCluster().stopAllNodes(); - } - - /** - * Test get top queries when feature enabled - */ - public void testGetTopQueriesWhenFeatureEnabled() throws InterruptedException { - Settings commonSettings = Settings.builder() - .put(TOP_N_LATENCY_QUERIES_ENABLED.getKey(), "true") - .put(TOP_N_LATENCY_QUERIES_SIZE.getKey(), "100") - .put(TOP_N_LATENCY_QUERIES_WINDOW_SIZE.getKey(), "600s") - .build(); - - logger.info("--> starting nodes for query insight testing"); - List nodes = internalCluster().startNodes(TOTAL_NUMBER_OF_NODES, Settings.builder().put(commonSettings).build()); - - logger.info("--> waiting for nodes to form a cluster"); - ClusterHealthResponse health = client().admin().cluster().prepareHealth().setWaitForNodes("2").execute().actionGet(); - assertFalse(health.isTimedOut()); - - assertAcked( - prepareCreate("test").setSettings(Settings.builder().put("index.number_of_shards", 2).put("index.number_of_replicas", 
2)) - ); - ensureStableCluster(2); - logger.info("--> creating indices for query insight testing"); - for (int i = 0; i < 5; i++) { - IndexResponse response = client().prepareIndex("test_" + i).setId("" + i).setSource("field_" + i, "value_" + i).get(); - assertEquals("CREATED", response.status().toString()); - } - // making search requests to get top queries - for (int i = 0; i < TOTAL_SEARCH_REQUESTS; i++) { - SearchResponse searchResponse = internalCluster().client(randomFrom(nodes)) - .prepareSearch() - .setQuery(QueryBuilders.matchAllQuery()) - .get(); - assertEquals(searchResponse.getFailedShards(), 0); - } - // Sleep to wait for queue drained to top queries store - Thread.sleep(6000); - TopQueriesRequest request = new TopQueriesRequest(MetricType.LATENCY); - TopQueriesResponse response = OpenSearchIntegTestCase.client().execute(TopQueriesAction.INSTANCE, request).actionGet(); - Assert.assertEquals(0, response.failures().size()); - Assert.assertEquals(TOTAL_NUMBER_OF_NODES, response.getNodes().size()); - Assert.assertEquals(TOTAL_SEARCH_REQUESTS, response.getNodes().stream().mapToInt(o -> o.getTopQueriesRecord().size()).sum()); - - internalCluster().stopAllNodes(); - } - - /** - * Test get top queries with small top n size - */ - public void testGetTopQueriesWithSmallTopN() throws InterruptedException { - Settings commonSettings = Settings.builder() - .put(TOP_N_LATENCY_QUERIES_ENABLED.getKey(), "true") - .put(TOP_N_LATENCY_QUERIES_SIZE.getKey(), "1") - .put(TOP_N_LATENCY_QUERIES_WINDOW_SIZE.getKey(), "600s") - .build(); - - logger.info("--> starting nodes for query insight testing"); - List nodes = internalCluster().startNodes(TOTAL_NUMBER_OF_NODES, Settings.builder().put(commonSettings).build()); - - logger.info("--> waiting for nodes to form a cluster"); - ClusterHealthResponse health = client().admin().cluster().prepareHealth().setWaitForNodes("2").execute().actionGet(); - assertFalse(health.isTimedOut()); - - assertAcked( - prepareCreate("test").setSettings(Settings.builder().put("index.number_of_shards", 2).put("index.number_of_replicas", 2)) - ); - ensureStableCluster(2); - logger.info("--> creating indices for query insight testing"); - for (int i = 0; i < 5; i++) { - IndexResponse response = client().prepareIndex("test_" + i).setId("" + i).setSource("field_" + i, "value_" + i).get(); - assertEquals("CREATED", response.status().toString()); - } - // making search requests to get top queries - for (int i = 0; i < TOTAL_SEARCH_REQUESTS; i++) { - SearchResponse searchResponse = internalCluster().client(randomFrom(nodes)) - .prepareSearch() - .setQuery(QueryBuilders.matchAllQuery()) - .get(); - assertEquals(searchResponse.getFailedShards(), 0); - } - Thread.sleep(6000); - TopQueriesRequest request = new TopQueriesRequest(MetricType.LATENCY); - TopQueriesResponse response = OpenSearchIntegTestCase.client().execute(TopQueriesAction.INSTANCE, request).actionGet(); - Assert.assertEquals(0, response.failures().size()); - Assert.assertEquals(TOTAL_NUMBER_OF_NODES, response.getNodes().size()); - Assert.assertEquals(2, response.getNodes().stream().mapToInt(o -> o.getTopQueriesRecord().size()).sum()); - - internalCluster().stopAllNodes(); - } - - /** - * Test get top queries with small window size - */ - public void testGetTopQueriesWithSmallWindowSize() throws InterruptedException { - Settings commonSettings = Settings.builder() - .put(TOP_N_LATENCY_QUERIES_ENABLED.getKey(), "true") - .put(TOP_N_LATENCY_QUERIES_SIZE.getKey(), "100") - .put(TOP_N_LATENCY_QUERIES_WINDOW_SIZE.getKey(), "1m") 
- .build(); - - logger.info("--> starting nodes for query insight testing"); - List nodes = internalCluster().startNodes(TOTAL_NUMBER_OF_NODES, Settings.builder().put(commonSettings).build()); - - logger.info("--> waiting for nodes to form a cluster"); - ClusterHealthResponse health = client().admin().cluster().prepareHealth().setWaitForNodes("2").execute().actionGet(); - assertFalse(health.isTimedOut()); - - assertAcked( - prepareCreate("test").setSettings(Settings.builder().put("index.number_of_shards", 2).put("index.number_of_replicas", 2)) - ); - ensureStableCluster(2); - logger.info("--> creating indices for query insight testing"); - for (int i = 0; i < 5; i++) { - IndexResponse response = client().prepareIndex("test_" + i).setId("" + i).setSource("field_" + i, "value_" + i).get(); - assertEquals("CREATED", response.status().toString()); - } - // making search requests to get top queries - for (int i = 0; i < TOTAL_SEARCH_REQUESTS; i++) { - SearchResponse searchResponse = internalCluster().client(randomFrom(nodes)) - .prepareSearch() - .setQuery(QueryBuilders.matchAllQuery()) - .get(); - assertEquals(searchResponse.getFailedShards(), 0); - } - - TopQueriesRequest request = new TopQueriesRequest(MetricType.LATENCY); - TopQueriesResponse response = OpenSearchIntegTestCase.client().execute(TopQueriesAction.INSTANCE, request).actionGet(); - Assert.assertEquals(0, response.failures().size()); - Assert.assertEquals(TOTAL_NUMBER_OF_NODES, response.getNodes().size()); - Thread.sleep(6000); - internalCluster().stopAllNodes(); - } -} diff --git a/plugins/query-insights/src/javaRestTest/java/org/opensearch/plugin/insights/TopQueriesRestIT.java b/plugins/query-insights/src/javaRestTest/java/org/opensearch/plugin/insights/TopQueriesRestIT.java deleted file mode 100644 index 57dea6ad8d5ff..0000000000000 --- a/plugins/query-insights/src/javaRestTest/java/org/opensearch/plugin/insights/TopQueriesRestIT.java +++ /dev/null @@ -1,107 +0,0 @@ -/* - * SPDX-License-Identifier: Apache-2.0 - * - * The OpenSearch Contributors require contributions made to - * this file be licensed under the Apache-2.0 license or a - * compatible open source license. 
- */ - -package org.opensearch.plugin.insights; - -import org.opensearch.client.Request; -import org.opensearch.client.Response; -import org.opensearch.common.xcontent.LoggingDeprecationHandler; -import org.opensearch.common.xcontent.json.JsonXContent; -import org.opensearch.core.xcontent.NamedXContentRegistry; -import org.opensearch.test.rest.OpenSearchRestTestCase; -import org.junit.Assert; - -import java.io.IOException; -import java.nio.charset.StandardCharsets; -import java.util.List; -import java.util.Map; - -/** - * Rest Action tests for Query Insights - */ -public class TopQueriesRestIT extends OpenSearchRestTestCase { - - /** - * test Query Insights is installed - * @throws IOException IOException - */ - @SuppressWarnings("unchecked") - public void testQueryInsightsPluginInstalled() throws IOException { - Request request = new Request("GET", "/_cat/plugins?s=component&h=name,component,version,description&format=json"); - Response response = client().performRequest(request); - List pluginsList = JsonXContent.jsonXContent.createParser( - NamedXContentRegistry.EMPTY, - LoggingDeprecationHandler.INSTANCE, - response.getEntity().getContent() - ).list(); - Assert.assertTrue( - pluginsList.stream().map(o -> (Map) o).anyMatch(plugin -> plugin.get("component").equals("query-insights")) - ); - } - - /** - * test enabling top queries - * @throws IOException IOException - */ - public void testTopQueriesResponses() throws IOException { - // Enable Top N Queries feature - Request request = new Request("PUT", "/_cluster/settings"); - request.setJsonEntity(defaultTopQueriesSettings()); - Response response = client().performRequest(request); - - Assert.assertEquals(200, response.getStatusLine().getStatusCode()); - - // Create documents for search - request = new Request("POST", "/my-index-0/_doc"); - request.setJsonEntity(createDocumentsBody()); - response = client().performRequest(request); - - Assert.assertEquals(201, response.getStatusLine().getStatusCode()); - - // Do Search - request = new Request("GET", "/my-index-0/_search?size=20&pretty"); - request.setJsonEntity(searchBody()); - response = client().performRequest(request); - Assert.assertEquals(200, response.getStatusLine().getStatusCode()); - response = client().performRequest(request); - Assert.assertEquals(200, response.getStatusLine().getStatusCode()); - - // Get Top Queries - request = new Request("GET", "/_insights/top_queries?pretty"); - response = client().performRequest(request); - - Assert.assertEquals(200, response.getStatusLine().getStatusCode()); - String top_requests = new String(response.getEntity().getContent().readAllBytes(), StandardCharsets.UTF_8); - Assert.assertTrue(top_requests.contains("top_queries")); - Assert.assertEquals(2, top_requests.split("searchType", -1).length - 1); - } - - private String defaultTopQueriesSettings() { - return "{\n" - + " \"persistent\" : {\n" - + " \"search.top_n_queries.latency.enabled\" : \"true\",\n" - + " \"search.top_n_queries.latency.window_size\" : \"600s\",\n" - + " \"search.top_n_queries.latency.top_n_size\" : 5\n" - + " }\n" - + "}"; - } - - private String createDocumentsBody() { - return "{\n" - + " \"@timestamp\": \"2099-11-15T13:12:00\",\n" - + " \"message\": \"this is document 1\",\n" - + " \"user\": {\n" - + " \"id\": \"cyji\"\n" - + " }\n" - + "}"; - } - - private String searchBody() { - return "{}"; - } -} diff --git a/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/QueryInsightsPlugin.java 
b/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/QueryInsightsPlugin.java deleted file mode 100644 index bba676436c39a..0000000000000 --- a/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/QueryInsightsPlugin.java +++ /dev/null @@ -1,125 +0,0 @@ -/* - * SPDX-License-Identifier: Apache-2.0 - * - * The OpenSearch Contributors require contributions made to - * this file be licensed under the Apache-2.0 license or a - * compatible open source license. - */ - -package org.opensearch.plugin.insights; - -import org.opensearch.action.ActionRequest; -import org.opensearch.client.Client; -import org.opensearch.cluster.metadata.IndexNameExpressionResolver; -import org.opensearch.cluster.node.DiscoveryNodes; -import org.opensearch.cluster.service.ClusterService; -import org.opensearch.common.settings.ClusterSettings; -import org.opensearch.common.settings.IndexScopedSettings; -import org.opensearch.common.settings.Setting; -import org.opensearch.common.settings.Settings; -import org.opensearch.common.settings.SettingsFilter; -import org.opensearch.common.unit.TimeValue; -import org.opensearch.common.util.concurrent.OpenSearchExecutors; -import org.opensearch.core.action.ActionResponse; -import org.opensearch.core.common.io.stream.NamedWriteableRegistry; -import org.opensearch.core.xcontent.NamedXContentRegistry; -import org.opensearch.env.Environment; -import org.opensearch.env.NodeEnvironment; -import org.opensearch.plugin.insights.core.listener.QueryInsightsListener; -import org.opensearch.plugin.insights.core.service.QueryInsightsService; -import org.opensearch.plugin.insights.rules.action.top_queries.TopQueriesAction; -import org.opensearch.plugin.insights.rules.resthandler.top_queries.RestTopQueriesAction; -import org.opensearch.plugin.insights.rules.transport.top_queries.TransportTopQueriesAction; -import org.opensearch.plugin.insights.settings.QueryInsightsSettings; -import org.opensearch.plugins.ActionPlugin; -import org.opensearch.plugins.Plugin; -import org.opensearch.repositories.RepositoriesService; -import org.opensearch.rest.RestController; -import org.opensearch.rest.RestHandler; -import org.opensearch.script.ScriptService; -import org.opensearch.threadpool.ExecutorBuilder; -import org.opensearch.threadpool.ScalingExecutorBuilder; -import org.opensearch.threadpool.ThreadPool; -import org.opensearch.watcher.ResourceWatcherService; - -import java.util.Collection; -import java.util.List; -import java.util.function.Supplier; - -/** - * Plugin class for Query Insights. 
- */ -public class QueryInsightsPlugin extends Plugin implements ActionPlugin { - /** - * Default constructor - */ - public QueryInsightsPlugin() {} - - @Override - public Collection createComponents( - final Client client, - final ClusterService clusterService, - final ThreadPool threadPool, - final ResourceWatcherService resourceWatcherService, - final ScriptService scriptService, - final NamedXContentRegistry xContentRegistry, - final Environment environment, - final NodeEnvironment nodeEnvironment, - final NamedWriteableRegistry namedWriteableRegistry, - final IndexNameExpressionResolver indexNameExpressionResolver, - final Supplier repositoriesServiceSupplier - ) { - // create top n queries service - final QueryInsightsService queryInsightsService = new QueryInsightsService(clusterService.getClusterSettings(), threadPool, client); - return List.of(queryInsightsService, new QueryInsightsListener(clusterService, queryInsightsService)); - } - - @Override - public List> getExecutorBuilders(final Settings settings) { - return List.of( - new ScalingExecutorBuilder( - QueryInsightsSettings.QUERY_INSIGHTS_EXECUTOR, - 1, - Math.min((OpenSearchExecutors.allocatedProcessors(settings) + 1) / 2, QueryInsightsSettings.MAX_THREAD_COUNT), - TimeValue.timeValueMinutes(5) - ) - ); - } - - @Override - public List getRestHandlers( - final Settings settings, - final RestController restController, - final ClusterSettings clusterSettings, - final IndexScopedSettings indexScopedSettings, - final SettingsFilter settingsFilter, - final IndexNameExpressionResolver indexNameExpressionResolver, - final Supplier nodesInCluster - ) { - return List.of(new RestTopQueriesAction()); - } - - @Override - public List> getActions() { - return List.of(new ActionPlugin.ActionHandler<>(TopQueriesAction.INSTANCE, TransportTopQueriesAction.class)); - } - - @Override - public List> getSettings() { - return List.of( - // Settings for top N queries - QueryInsightsSettings.TOP_N_LATENCY_QUERIES_ENABLED, - QueryInsightsSettings.TOP_N_LATENCY_QUERIES_SIZE, - QueryInsightsSettings.TOP_N_LATENCY_QUERIES_WINDOW_SIZE, - QueryInsightsSettings.TOP_N_LATENCY_EXPORTER_SETTINGS, - QueryInsightsSettings.TOP_N_CPU_QUERIES_ENABLED, - QueryInsightsSettings.TOP_N_CPU_QUERIES_SIZE, - QueryInsightsSettings.TOP_N_CPU_QUERIES_WINDOW_SIZE, - QueryInsightsSettings.TOP_N_CPU_EXPORTER_SETTINGS, - QueryInsightsSettings.TOP_N_MEMORY_QUERIES_ENABLED, - QueryInsightsSettings.TOP_N_MEMORY_QUERIES_SIZE, - QueryInsightsSettings.TOP_N_MEMORY_QUERIES_WINDOW_SIZE, - QueryInsightsSettings.TOP_N_MEMORY_EXPORTER_SETTINGS - ); - } -} diff --git a/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/core/exporter/DebugExporter.java b/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/core/exporter/DebugExporter.java deleted file mode 100644 index 116bd26e1f9bc..0000000000000 --- a/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/core/exporter/DebugExporter.java +++ /dev/null @@ -1,61 +0,0 @@ -/* - * SPDX-License-Identifier: Apache-2.0 - * - * The OpenSearch Contributors require contributions made to - * this file be licensed under the Apache-2.0 license or a - * compatible open source license. 
diff --git a/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/core/exporter/DebugExporter.java b/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/core/exporter/DebugExporter.java
deleted file mode 100644
index 116bd26e1f9bc..0000000000000
--- a/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/core/exporter/DebugExporter.java
+++ /dev/null
@@ -1,61 +0,0 @@
-/*
- * SPDX-License-Identifier: Apache-2.0
- *
- * The OpenSearch Contributors require contributions made to
- * this file be licensed under the Apache-2.0 license or a
- * compatible open source license.
- */
-
-package org.opensearch.plugin.insights.core.exporter;
-
-import org.apache.logging.log4j.LogManager;
-import org.apache.logging.log4j.Logger;
-import org.opensearch.plugin.insights.rules.model.SearchQueryRecord;
-
-import java.util.List;
-
-/**
- * Debug exporter for development purpose
- */
-public final class DebugExporter implements QueryInsightsExporter {
-    /**
-     * Logger of the debug exporter
-     */
-    private final Logger logger = LogManager.getLogger();
-
-    /**
-     * Constructor of DebugExporter
-     */
-    private DebugExporter() {}
-
-    private static class InstanceHolder {
-        private static final DebugExporter INSTANCE = new DebugExporter();
-    }
-
-    /**
-     * Get the singleton instance of DebugExporter
-     *
-     * @return DebugExporter instance
-     */
-    public static DebugExporter getInstance() {
-        return InstanceHolder.INSTANCE;
-    }
-
-    /**
-     * Write the list of SearchQueryRecord to debug log
-     *
-     * @param records list of {@link SearchQueryRecord}
-     */
-    @Override
-    public void export(final List<SearchQueryRecord> records) {
-        logger.debug("QUERY_INSIGHTS_RECORDS: " + records.toString());
-    }
-
-    /**
-     * Close the debug exporter sink
-     */
-    @Override
-    public void close() {
-        logger.debug("Closing the DebugExporter..");
-    }
-}
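DebugExporter relies on the initialization-on-demand holder idiom: the JVM loads the nested holder class, and thus constructs the singleton, at most once on first use, without explicit locking. A self-contained sketch of the same idiom:

    // Illustrative sketch: holder-based lazy singleton, as used by DebugExporter.
    public final class HolderSingleton {
        private HolderSingleton() {}

        private static class InstanceHolder {
            // Initialized exactly once, when InstanceHolder is first loaded.
            private static final HolderSingleton INSTANCE = new HolderSingleton();
        }

        public static HolderSingleton getInstance() {
            return InstanceHolder.INSTANCE;
        }

        public static void main(String[] args) {
            System.out.println(getInstance() == getInstance()); // true: same instance
        }
    }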
diff --git a/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/core/exporter/LocalIndexExporter.java b/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/core/exporter/LocalIndexExporter.java
deleted file mode 100644
index c19fe3655098b..0000000000000
--- a/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/core/exporter/LocalIndexExporter.java
+++ /dev/null
@@ -1,113 +0,0 @@
-/*
- * SPDX-License-Identifier: Apache-2.0
- *
- * The OpenSearch Contributors require contributions made to
- * this file be licensed under the Apache-2.0 license or a
- * compatible open source license.
- */
-
-package org.opensearch.plugin.insights.core.exporter;
-
-import org.apache.logging.log4j.LogManager;
-import org.apache.logging.log4j.Logger;
-import org.opensearch.action.bulk.BulkRequestBuilder;
-import org.opensearch.action.bulk.BulkResponse;
-import org.opensearch.action.index.IndexRequest;
-import org.opensearch.client.Client;
-import org.opensearch.common.unit.TimeValue;
-import org.opensearch.common.xcontent.XContentFactory;
-import org.opensearch.core.action.ActionListener;
-import org.opensearch.core.xcontent.ToXContent;
-import org.opensearch.plugin.insights.rules.model.SearchQueryRecord;
-import org.joda.time.DateTime;
-import org.joda.time.DateTimeZone;
-import org.joda.time.format.DateTimeFormatter;
-
-import java.util.List;
-
-/**
- * Local index exporter for exporting query insights data to local OpenSearch indices.
- */
-public final class LocalIndexExporter implements QueryInsightsExporter {
-    /**
-     * Logger of the local index exporter
-     */
-    private final Logger logger = LogManager.getLogger();
-    private final Client client;
-    private DateTimeFormatter indexPattern;
-
-    /**
-     * Constructor of LocalIndexExporter
-     *
-     * @param client OS client
-     * @param indexPattern the pattern of index to export to
-     */
-    public LocalIndexExporter(final Client client, final DateTimeFormatter indexPattern) {
-        this.indexPattern = indexPattern;
-        this.client = client;
-    }
-
-    /**
-     * Getter of indexPattern
-     *
-     * @return indexPattern
-     */
-    public DateTimeFormatter getIndexPattern() {
-        return indexPattern;
-    }
-
-    /**
-     * Setter of indexPattern
-     *
-     * @param indexPattern index pattern
-     * @return the current LocalIndexExporter
-     */
-    public LocalIndexExporter setIndexPattern(DateTimeFormatter indexPattern) {
-        this.indexPattern = indexPattern;
-        return this;
-    }
-
-    /**
-     * Export a list of SearchQueryRecord to a local index
-     *
-     * @param records list of {@link SearchQueryRecord}
-     */
-    @Override
-    public void export(final List<SearchQueryRecord> records) {
-        if (records == null || records.size() == 0) {
-            return;
-        }
-        try {
-            final String index = getDateTimeFromFormat();
-            final BulkRequestBuilder bulkRequestBuilder = client.prepareBulk().setTimeout(TimeValue.timeValueMinutes(1));
-            for (SearchQueryRecord record : records) {
-                bulkRequestBuilder.add(
-                    new IndexRequest(index).source(record.toXContent(XContentFactory.jsonBuilder(), ToXContent.EMPTY_PARAMS))
-                );
-            }
-            bulkRequestBuilder.execute(new ActionListener<BulkResponse>() {
-                @Override
-                public void onResponse(BulkResponse bulkItemResponses) {}
-
-                @Override
-                public void onFailure(Exception e) {
-                    logger.error("Failed to execute bulk operation for query insights data: ", e);
-                }
-            });
-        } catch (final Exception e) {
-            logger.error("Unable to index query insights data: ", e);
-        }
-    }
-
-    /**
-     * Close the exporter sink
-     */
-    @Override
-    public void close() {
-        logger.debug("Closing the LocalIndexExporter..");
-    }
-
-    private String getDateTimeFromFormat() {
-        return indexPattern.print(DateTime.now(DateTimeZone.UTC));
-    }
-}
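LocalIndexExporter derives the target index name by printing the current UTC time through the configured pattern, yielding dated indices. A dependency-free sketch of the same idea using java.time instead of the Joda formatter the exporter uses; the 'top_queries-' prefix is an assumed example pattern:

    // Illustrative sketch: deriving a dated index name from a pattern.
    import java.time.ZonedDateTime;
    import java.time.ZoneOffset;
    import java.time.format.DateTimeFormatter;

    public class DatedIndexName {
        public static void main(String[] args) {
            // Produces "top_queries-2024.06.24" style names; prefix is an assumed example.
            DateTimeFormatter pattern = DateTimeFormatter.ofPattern("'top_queries-'yyyy.MM.dd");
            String index = ZonedDateTime.now(ZoneOffset.UTC).format(pattern);
            System.out.println("Exporting to index: " + index);
        }
    }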
diff --git a/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/core/exporter/QueryInsightsExporter.java b/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/core/exporter/QueryInsightsExporter.java
deleted file mode 100644
index 42e5354eb1640..0000000000000
--- a/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/core/exporter/QueryInsightsExporter.java
+++ /dev/null
@@ -1,26 +0,0 @@
-/*
- * SPDX-License-Identifier: Apache-2.0
- *
- * The OpenSearch Contributors require contributions made to
- * this file be licensed under the Apache-2.0 license or a
- * compatible open source license.
- */
-
-package org.opensearch.plugin.insights.core.exporter;
-
-import org.opensearch.plugin.insights.rules.model.SearchQueryRecord;
-
-import java.io.Closeable;
-import java.util.List;
-
-/**
- * Base interface for Query Insights exporters
- */
-public interface QueryInsightsExporter extends Closeable {
-    /**
-     * Export a list of SearchQueryRecord to the exporter sink
-     *
-     * @param records list of {@link SearchQueryRecord}
-     */
-    void export(final List<SearchQueryRecord> records);
-}
diff --git a/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/core/exporter/QueryInsightsExporterFactory.java b/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/core/exporter/QueryInsightsExporterFactory.java
deleted file mode 100644
index 016911761a3d0..0000000000000
--- a/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/core/exporter/QueryInsightsExporterFactory.java
+++ /dev/null
@@ -1,143 +0,0 @@
-/*
- * SPDX-License-Identifier: Apache-2.0
- *
- * The OpenSearch Contributors require contributions made to
- * this file be licensed under the Apache-2.0 license or a
- * compatible open source license.
- */
-
-package org.opensearch.plugin.insights.core.exporter;
-
-import org.apache.logging.log4j.LogManager;
-import org.apache.logging.log4j.Logger;
-import org.opensearch.client.Client;
-import org.opensearch.common.settings.Settings;
-import org.joda.time.format.DateTimeFormat;
-
-import java.io.IOException;
-import java.util.HashSet;
-import java.util.Locale;
-import java.util.Set;
-
-import static org.opensearch.plugin.insights.settings.QueryInsightsSettings.DEFAULT_TOP_N_QUERIES_INDEX_PATTERN;
-import static org.opensearch.plugin.insights.settings.QueryInsightsSettings.DEFAULT_TOP_QUERIES_EXPORTER_TYPE;
-import static org.opensearch.plugin.insights.settings.QueryInsightsSettings.EXPORTER_TYPE;
-import static org.opensearch.plugin.insights.settings.QueryInsightsSettings.EXPORT_INDEX;
-
-/**
- * Factory class for validating and creating exporters based on provided settings
- */
-public class QueryInsightsExporterFactory {
-    /**
-     * Logger of the query insights exporter factory
-     */
-    private final Logger logger = LogManager.getLogger();
-    final private Client client;
-    final private Set<QueryInsightsExporter> exporters;
-
-    /**
-     * Constructor of QueryInsightsExporterFactory
-     *
-     * @param client OS client
-     */
-    public QueryInsightsExporterFactory(final Client client) {
-        this.client = client;
-        this.exporters = new HashSet<>();
-    }
-
-    /**
-     * Validate exporter sink config
-     *
-     * @param settings exporter sink config {@link Settings}
-     * @throws IllegalArgumentException if provided exporter sink config settings are invalid
-     */
-    public void validateExporterConfig(final Settings settings) throws IllegalArgumentException {
-        // Disable exporter if the EXPORTER_TYPE setting is null
-        if (settings.get(EXPORTER_TYPE) == null) {
-            return;
-        }
-        SinkType type;
-        try {
-            type = SinkType.parse(settings.get(EXPORTER_TYPE, DEFAULT_TOP_QUERIES_EXPORTER_TYPE));
-        } catch (IllegalArgumentException e) {
-            throw new IllegalArgumentException(
-                String.format(
-                    Locale.ROOT,
-                    "Invalid exporter type [%s], type should be one of %s",
-                    settings.get(EXPORTER_TYPE),
-                    SinkType.allSinkTypes()
-                )
-            );
-        }
-        switch (type) {
-            case LOCAL_INDEX:
-                final String indexPattern = settings.get(EXPORT_INDEX, DEFAULT_TOP_N_QUERIES_INDEX_PATTERN);
-                if (indexPattern.length() == 0) {
-                    throw new IllegalArgumentException("Empty index pattern configured for the exporter");
-                }
-                try {
-                    DateTimeFormat.forPattern(indexPattern);
-                } catch (Exception e) {
-                    throw new IllegalArgumentException(
                        String.format(Locale.ROOT, "Invalid index pattern [%s] configured for the exporter", indexPattern)
-                    );
-                }
-        }
-    }
-
-    /**
-     * Create an exporter based on provided parameters
-     *
-     * @param type The type of exporter to create
-     * @param indexPattern the index pattern if creating an index exporter
-     * @return QueryInsightsExporter the created exporter sink
-     */
-    public QueryInsightsExporter createExporter(SinkType type, String indexPattern) {
-        if (SinkType.LOCAL_INDEX.equals(type)) {
-            QueryInsightsExporter exporter = new LocalIndexExporter(client, DateTimeFormat.forPattern(indexPattern));
-            this.exporters.add(exporter);
-            return exporter;
-        }
-        return DebugExporter.getInstance();
-    }
-
-    /**
-     * Update an exporter based on provided parameters
-     *
-     * @param exporter The exporter to update
-     * @param indexPattern the index pattern if creating an index exporter
-     * @return QueryInsightsExporter the updated exporter sink
-     */
-    public QueryInsightsExporter updateExporter(QueryInsightsExporter exporter, String indexPattern) {
-        if (exporter.getClass() == LocalIndexExporter.class) {
-            ((LocalIndexExporter) exporter).setIndexPattern(DateTimeFormat.forPattern(indexPattern));
-        }
-        return exporter;
-    }
-
-    /**
-     * Close an exporter
-     *
-     * @param exporter the exporter to close
-     */
-    public void closeExporter(QueryInsightsExporter exporter) throws IOException {
-        if (exporter != null) {
-            exporter.close();
-            this.exporters.remove(exporter);
-        }
-    }
-
-    /**
-     * Close all exporters
-     *
-     */
-    public void closeAllExporters() {
-        for (QueryInsightsExporter exporter : exporters) {
-            try {
-                closeExporter(exporter);
-            } catch (IOException e) {
-                logger.error("Failed to close query insights exporter, error: ", e);
-            }
-        }
-    }
-}
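The factory's validation leans on Enum.valueOf semantics: an unsupported type string surfaces as an IllegalArgumentException, which is rewrapped with a friendlier message. A small sketch of that parse-and-reject flow, with a simplified two-value enum:

    // Illustrative sketch: case-insensitive enum parsing for exporter types.
    import java.util.Locale;

    public class SinkTypeParsing {
        enum Sink { DEBUG, LOCAL_INDEX }

        static Sink parse(String raw) {
            // valueOf throws IllegalArgumentException for unknown names.
            return Sink.valueOf(raw.toUpperCase(Locale.ROOT));
        }

        public static void main(String[] args) {
            System.out.println(parse("local_index")); // LOCAL_INDEX
            try {
                parse("s3"); // unsupported type
            } catch (IllegalArgumentException e) {
                System.out.println("Invalid exporter type [s3]");
            }
        }
    }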
diff --git a/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/core/exporter/SinkType.java b/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/core/exporter/SinkType.java
deleted file mode 100644
index c90c9c76b6706..0000000000000
--- a/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/core/exporter/SinkType.java
+++ /dev/null
@@ -1,66 +0,0 @@
-/*
- * SPDX-License-Identifier: Apache-2.0
- *
- * The OpenSearch Contributors require contributions made to
- * this file be licensed under the Apache-2.0 license or a
- * compatible open source license.
- */
-
-package org.opensearch.plugin.insights.core.exporter;
-
-import java.util.Arrays;
-import java.util.Locale;
-import java.util.Set;
-import java.util.stream.Collectors;
-
-/**
- * Type of supported sinks
- */
-public enum SinkType {
-    /** debug exporter */
-    DEBUG("debug"),
-    /** local index exporter */
-    LOCAL_INDEX("local_index");
-
-    private final String type;
-
-    SinkType(String type) {
-        this.type = type;
-    }
-
-    @Override
-    public String toString() {
-        return type;
-    }
-
-    /**
-     * Parse SinkType from String
-     * @param type the String representation of the SinkType
-     * @return SinkType
-     */
-    public static SinkType parse(final String type) {
-        return valueOf(type.toUpperCase(Locale.ROOT));
-    }
-
-    /**
-     * Get all valid SinkTypes
-     *
-     * @return A set containing all valid SinkTypes
-     */
-    public static Set<SinkType> allSinkTypes() {
-        return Arrays.stream(values()).collect(Collectors.toSet());
-    }
-
-    /**
-     * Get Sink type from exporter
-     *
-     * @param exporter the {@link QueryInsightsExporter}
-     * @return SinkType associated with this exporter
-     */
-    public static SinkType getSinkTypeFromExporter(QueryInsightsExporter exporter) {
-        if (exporter.getClass().equals(LocalIndexExporter.class)) {
-            return SinkType.LOCAL_INDEX;
-        }
-        return SinkType.DEBUG;
-    }
-}
diff --git a/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/core/exporter/package-info.java b/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/core/exporter/package-info.java
deleted file mode 100644
index 7164411194f85..0000000000000
--- a/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/core/exporter/package-info.java
+++ /dev/null
@@ -1,12 +0,0 @@
-/*
- * SPDX-License-Identifier: Apache-2.0
- *
- * The OpenSearch Contributors require contributions made to
- * this file be licensed under the Apache-2.0 license or a
- * compatible open source license.
- */
-
-/**
- * Query Insights exporter
- */
-package org.opensearch.plugin.insights.core.exporter;
diff --git a/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/core/listener/QueryInsightsListener.java b/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/core/listener/QueryInsightsListener.java
deleted file mode 100644
index a1f810ad5987c..0000000000000
--- a/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/core/listener/QueryInsightsListener.java
+++ /dev/null
@@ -1,202 +0,0 @@
-/*
- * SPDX-License-Identifier: Apache-2.0
- *
- * The OpenSearch Contributors require contributions made to
- * this file be licensed under the Apache-2.0 license or a
- * compatible open source license.
- */
-
-package org.opensearch.plugin.insights.core.listener;
-
-import org.apache.logging.log4j.LogManager;
-import org.apache.logging.log4j.Logger;
-import org.opensearch.action.search.SearchPhaseContext;
-import org.opensearch.action.search.SearchRequest;
-import org.opensearch.action.search.SearchRequestContext;
-import org.opensearch.action.search.SearchRequestOperationsListener;
-import org.opensearch.action.search.SearchTask;
-import org.opensearch.cluster.service.ClusterService;
-import org.opensearch.common.inject.Inject;
-import org.opensearch.core.tasks.resourcetracker.TaskResourceInfo;
-import org.opensearch.core.xcontent.ToXContent;
-import org.opensearch.plugin.insights.core.service.QueryInsightsService;
-import org.opensearch.plugin.insights.rules.model.Attribute;
-import org.opensearch.plugin.insights.rules.model.MetricType;
-import org.opensearch.plugin.insights.rules.model.SearchQueryRecord;
-import org.opensearch.tasks.Task;
-
-import java.util.Collections;
-import java.util.HashMap;
-import java.util.List;
-import java.util.Locale;
-import java.util.Map;
-import java.util.concurrent.TimeUnit;
-
-import static org.opensearch.plugin.insights.settings.QueryInsightsSettings.getTopNEnabledSetting;
-import static org.opensearch.plugin.insights.settings.QueryInsightsSettings.getTopNSizeSetting;
-import static org.opensearch.plugin.insights.settings.QueryInsightsSettings.getTopNWindowSizeSetting;
-
-/**
- * The listener for query insights services.
- * It forwards query-related data to the appropriate query insights stores,
- * either for each request or for each phase.
- *
- * @opensearch.internal
- */
-public final class QueryInsightsListener extends SearchRequestOperationsListener {
-    private static final ToXContent.Params FORMAT_PARAMS = new ToXContent.MapParams(Collections.singletonMap("pretty", "false"));
-
-    private static final Logger log = LogManager.getLogger(QueryInsightsListener.class);
-
-    private final QueryInsightsService queryInsightsService;
-    private final ClusterService clusterService;
-
-    /**
-     * Constructor for QueryInsightsListener
-     *
-     * @param clusterService The Node's cluster service.
-     * @param queryInsightsService The topQueriesByLatencyService associated with this listener
-     */
-    @Inject
-    public QueryInsightsListener(final ClusterService clusterService, final QueryInsightsService queryInsightsService) {
-        this.clusterService = clusterService;
-        this.queryInsightsService = queryInsightsService;
-        // Set up consumers for the top n queries settings: enabling top n queries, window size and top n size.
-        // Expected metricTypes are Latency, CPU and Memory.
-        for (MetricType type : MetricType.allMetricTypes()) {
-            clusterService.getClusterSettings()
-                .addSettingsUpdateConsumer(getTopNEnabledSetting(type), v -> this.setEnableTopQueries(type, v));
-            clusterService.getClusterSettings()
-                .addSettingsUpdateConsumer(
-                    getTopNSizeSetting(type),
-                    v -> this.queryInsightsService.setTopNSize(type, v),
-                    v -> this.queryInsightsService.validateTopNSize(type, v)
-                );
-            clusterService.getClusterSettings()
-                .addSettingsUpdateConsumer(
-                    getTopNWindowSizeSetting(type),
-                    v -> this.queryInsightsService.setWindowSize(type, v),
-                    v -> this.queryInsightsService.validateWindowSize(type, v)
-                );
-
-            this.setEnableTopQueries(type, clusterService.getClusterSettings().get(getTopNEnabledSetting(type)));
-            this.queryInsightsService.validateTopNSize(type, clusterService.getClusterSettings().get(getTopNSizeSetting(type)));
-            this.queryInsightsService.setTopNSize(type, clusterService.getClusterSettings().get(getTopNSizeSetting(type)));
-            this.queryInsightsService.validateWindowSize(type, clusterService.getClusterSettings().get(getTopNWindowSizeSetting(type)));
-            this.queryInsightsService.setWindowSize(type, clusterService.getClusterSettings().get(getTopNWindowSizeSetting(type)));
-        }
-    }
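Each addSettingsUpdateConsumer registration above pairs a consumer with a validator; conceptually the validator runs first, so the consumer only ever observes values that passed validation. A rough sketch of that contract, with an assumed bound of 100 standing in for the real MAX_N_SIZE constant:

    // Illustrative sketch: validator-then-consumer ordering for a dynamic setting.
    import java.util.function.Consumer;

    public class SettingUpdate {
        static void applyUpdate(int newValue, Consumer<Integer> validator, Consumer<Integer> consumer) {
            validator.accept(newValue); // throws on invalid input
            consumer.accept(newValue);  // applied only after validation succeeds
        }

        public static void main(String[] args) {
            Consumer<Integer> validateTopN = v -> {
                if (v < 1 || v > 100) { // 100 is an assumed bound for illustration
                    throw new IllegalArgumentException("Top N size should be between 1 and 100, was (" + v + ")");
                }
            };
            applyUpdate(10, validateTopN, v -> System.out.println("top N size set to " + v));
            try {
                applyUpdate(0, validateTopN, v -> System.out.println("never reached"));
            } catch (IllegalArgumentException e) {
                System.out.println(e.getMessage());
            }
        }
    }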
-
-    /**
-     * Enable or disable top queries insights collection for {@link MetricType}
-     * This function will enable or disable the corresponding listeners
-     * and query insights services.
-     *
-     * @param metricType {@link MetricType}
-     * @param enabled boolean
-     */
-    public void setEnableTopQueries(final MetricType metricType, final boolean enabled) {
-        boolean isAllMetricsDisabled = !queryInsightsService.isEnabled();
-        this.queryInsightsService.enableCollection(metricType, enabled);
-        if (!enabled) {
-            // disable QueryInsightsListener only if all metrics collections are disabled now.
-            if (!queryInsightsService.isEnabled()) {
-                super.setEnabled(false);
-                this.queryInsightsService.stop();
-            }
-        } else {
-            super.setEnabled(true);
-            // restart QueryInsightsListener only if no metric collection was enabled before.
-            if (isAllMetricsDisabled) {
-                this.queryInsightsService.stop();
-                this.queryInsightsService.start();
-            }
-        }
-
-    }
-
-    @Override
-    public boolean isEnabled() {
-        return super.isEnabled();
-    }
-
-    @Override
-    public void onPhaseStart(SearchPhaseContext context) {}
-
-    @Override
-    public void onPhaseEnd(SearchPhaseContext context, SearchRequestContext searchRequestContext) {}
-
-    @Override
-    public void onPhaseFailure(SearchPhaseContext context, Throwable cause) {}
-
-    @Override
-    public void onRequestStart(SearchRequestContext searchRequestContext) {}
-
-    @Override
-    public void onRequestEnd(final SearchPhaseContext context, final SearchRequestContext searchRequestContext) {
-        constructSearchQueryRecord(context, searchRequestContext);
-    }
-
-    @Override
-    public void onRequestFailure(final SearchPhaseContext context, final SearchRequestContext searchRequestContext) {
-        constructSearchQueryRecord(context, searchRequestContext);
-    }
-
-    private void constructSearchQueryRecord(final SearchPhaseContext context, final SearchRequestContext searchRequestContext) {
-        SearchTask searchTask = context.getTask();
-        List<TaskResourceInfo> tasksResourceUsages = searchRequestContext.getPhaseResourceUsage();
-        tasksResourceUsages.add(
-            new TaskResourceInfo(
-                searchTask.getAction(),
-                searchTask.getId(),
-                searchTask.getParentTaskId().getId(),
-                clusterService.localNode().getId(),
-                searchTask.getTotalResourceStats()
-            )
-        );
-
-        final SearchRequest request = context.getRequest();
-        try {
-            Map<MetricType, Number> measurements = new HashMap<>();
-            if (queryInsightsService.isCollectionEnabled(MetricType.LATENCY)) {
-                measurements.put(
-                    MetricType.LATENCY,
-                    TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - searchRequestContext.getAbsoluteStartNanos())
-                );
-            }
-            if (queryInsightsService.isCollectionEnabled(MetricType.CPU)) {
-                measurements.put(
-                    MetricType.CPU,
-                    tasksResourceUsages.stream().map(a -> a.getTaskResourceUsage().getCpuTimeInNanos()).mapToLong(Long::longValue).sum()
-                );
-            }
-            if (queryInsightsService.isCollectionEnabled(MetricType.MEMORY)) {
-                measurements.put(
-                    MetricType.MEMORY,
-                    tasksResourceUsages.stream().map(a -> a.getTaskResourceUsage().getMemoryInBytes()).mapToLong(Long::longValue).sum()
-                );
-            }
-            Map<Attribute, Object> attributes = new HashMap<>();
-            attributes.put(Attribute.SEARCH_TYPE, request.searchType().toString().toLowerCase(Locale.ROOT));
-            attributes.put(Attribute.SOURCE, request.source().toString(FORMAT_PARAMS));
-            attributes.put(Attribute.TOTAL_SHARDS, context.getNumShards());
-            attributes.put(Attribute.INDICES, request.indices());
-            attributes.put(Attribute.PHASE_LATENCY_MAP, searchRequestContext.phaseTookMap());
-            attributes.put(Attribute.TASK_RESOURCE_USAGES, tasksResourceUsages);
-
-            Map<String, Object> labels = new HashMap<>();
-            // Retrieve user provided label if exists
-            String userProvidedLabel = context.getTask().getHeader(Task.X_OPAQUE_ID);
-            if (userProvidedLabel != null) {
-                labels.put(Task.X_OPAQUE_ID, userProvidedLabel);
-            }
-            attributes.put(Attribute.LABELS, labels);
-            // construct SearchQueryRecord from attributes and measurements
-            SearchQueryRecord record = new SearchQueryRecord(request.getOrCreateAbsoluteStartMillis(), measurements, attributes);
-            queryInsightsService.addRecord(record);
-        } catch (Exception e) {
-            log.error(String.format(Locale.ROOT, "failed to ingest query insight data, error: %s", e));
-        }
-    }
-
-}
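constructSearchQueryRecord folds per-task resource usage into single request-level CPU and memory measurements by summing over the task list. A sketch of the same stream-based aggregation with plain records:

    // Illustrative sketch: summing per-task usage into request-level totals.
    import java.util.List;

    public class UsageAggregation {
        record TaskUsage(long cpuNanos, long memoryBytes) {}

        public static void main(String[] args) {
            List<TaskUsage> tasks = List.of(new TaskUsage(1_000_000L, 2048L), new TaskUsage(3_000_000L, 4096L));
            long cpu = tasks.stream().mapToLong(TaskUsage::cpuNanos).sum();
            long memory = tasks.stream().mapToLong(TaskUsage::memoryBytes).sum();
            System.out.println("cpu=" + cpu + "ns, memory=" + memory + "B"); // cpu=4000000ns, memory=6144B
        }
    }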
diff --git a/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/core/listener/package-info.java b/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/core/listener/package-info.java
deleted file mode 100644
index 3cb9cacf7fd1c..0000000000000
--- a/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/core/listener/package-info.java
+++ /dev/null
@@ -1,12 +0,0 @@
-/*
- * SPDX-License-Identifier: Apache-2.0
- *
- * The OpenSearch Contributors require contributions made to
- * this file be licensed under the Apache-2.0 license or a
- * compatible open source license.
- */
-
-/**
- * Listeners for Query Insights
- */
-package org.opensearch.plugin.insights.core.listener;
diff --git a/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/core/service/QueryInsightsService.java b/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/core/service/QueryInsightsService.java
deleted file mode 100644
index c63430a1a726c..0000000000000
--- a/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/core/service/QueryInsightsService.java
+++ /dev/null
@@ -1,283 +0,0 @@
-/*
- * SPDX-License-Identifier: Apache-2.0
- *
- * The OpenSearch Contributors require contributions made to
- * this file be licensed under the Apache-2.0 license or a
- * compatible open source license.
- */
-
-package org.opensearch.plugin.insights.core.service;
-
-import org.opensearch.client.Client;
-import org.opensearch.common.inject.Inject;
-import org.opensearch.common.lifecycle.AbstractLifecycleComponent;
-import org.opensearch.common.settings.ClusterSettings;
-import org.opensearch.common.settings.Settings;
-import org.opensearch.common.unit.TimeValue;
-import org.opensearch.plugin.insights.core.exporter.QueryInsightsExporterFactory;
-import org.opensearch.plugin.insights.rules.model.MetricType;
-import org.opensearch.plugin.insights.rules.model.SearchQueryRecord;
-import org.opensearch.plugin.insights.settings.QueryInsightsSettings;
-import org.opensearch.threadpool.Scheduler;
-import org.opensearch.threadpool.ThreadPool;
-
-import java.io.IOException;
-import java.util.ArrayList;
-import java.util.Comparator;
-import java.util.HashMap;
-import java.util.List;
-import java.util.Map;
-import java.util.concurrent.LinkedBlockingQueue;
-
-import static org.opensearch.plugin.insights.settings.QueryInsightsSettings.getExporterSettings;
-
-/**
- * Service responsible for gathering, analyzing, storing and exporting
- * information related to search queries
- *
- * @opensearch.internal
- */
-public class QueryInsightsService extends AbstractLifecycleComponent {
-    /**
-     * The internal OpenSearch thread pool that execute async processing and exporting tasks
-     */
-    private final ThreadPool threadPool;
-
-    /**
-     * Services to capture top n queries for different metric types
-     */
-    private final Map<MetricType, TopQueriesService> topQueriesServices;
-
-    /**
-     * Flags for enabling insight data collection for different metric types
-     */
-    private final Map<MetricType, Boolean> enableCollect;
-
-    /**
-     * The internal thread-safe queue to ingest the search query data and subsequently forward to processors
-     */
-    private final LinkedBlockingQueue<SearchQueryRecord> queryRecordsQueue;
-
-    /**
-     * Holds a reference to delayed operation {@link Scheduler.Cancellable} so it can be cancelled when
-     * the service is closed concurrently.
-     */
-    protected volatile Scheduler.Cancellable scheduledFuture;
-
-    /**
-     * Query Insights exporter factory
-     */
-    final QueryInsightsExporterFactory queryInsightsExporterFactory;
-
-    /**
-     * Constructor of the QueryInsightsService
-     *
-     * @param clusterSettings OpenSearch cluster level settings
-     * @param threadPool The OpenSearch thread pool to run async tasks
-     * @param client OS client
-     */
-    @Inject
-    public QueryInsightsService(final ClusterSettings clusterSettings, final ThreadPool threadPool, final Client client) {
-        enableCollect = new HashMap<>();
-        queryRecordsQueue = new LinkedBlockingQueue<>(QueryInsightsSettings.QUERY_RECORD_QUEUE_CAPACITY);
-        this.threadPool = threadPool;
-        this.queryInsightsExporterFactory = new QueryInsightsExporterFactory(client);
-        // initialize top n queries services and configurations consumers
-        topQueriesServices = new HashMap<>();
-        for (MetricType metricType : MetricType.allMetricTypes()) {
-            enableCollect.put(metricType, false);
-            topQueriesServices.put(metricType, new TopQueriesService(metricType, threadPool, queryInsightsExporterFactory));
-        }
-        for (MetricType type : MetricType.allMetricTypes()) {
-            clusterSettings.addSettingsUpdateConsumer(
-                getExporterSettings(type),
-                (settings -> setExporter(type, settings)),
-                (settings -> validateExporterConfig(type, settings))
-            );
-        }
-    }
-
-    /**
-     * Ingest the query data into in-memory stores
-     *
-     * @param record the record to ingest
-     */
-    public boolean addRecord(final SearchQueryRecord record) {
-        boolean shouldAdd = false;
-        for (Map.Entry<MetricType, TopQueriesService> entry : topQueriesServices.entrySet()) {
-            if (!enableCollect.get(entry.getKey())) {
-                continue;
-            }
-            List<SearchQueryRecord> currentSnapshot = entry.getValue().getTopQueriesCurrentSnapshot();
-            // skip add to top N queries store if the incoming record is smaller than the Nth record
-            if (currentSnapshot.size() < entry.getValue().getTopNSize()
-                || SearchQueryRecord.compare(record, currentSnapshot.get(0), entry.getKey()) > 0) {
-                shouldAdd = true;
-                break;
-            }
-        }
-        if (shouldAdd) {
-            return queryRecordsQueue.offer(record);
-        }
-        return false;
-    }
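The admission check in addRecord works because each per-metric store is a min-heap of at most N records and the sorted snapshot's first element is the smallest of the current top N; a record that does not exceed it can never displace anything. A sketch with plain latencies:

    // Illustrative sketch: top-N admission with a min-heap.
    import java.util.PriorityQueue;

    public class TopNGate {
        public static void main(String[] args) {
            int topN = 3;
            PriorityQueue<Long> store = new PriorityQueue<>(); // min-heap of latencies
            for (long latency : new long[] { 120, 45, 300, 80, 500, 60 }) {
                if (store.size() < topN || latency > store.peek()) {
                    store.offer(latency);
                    while (store.size() > topN) {
                        store.poll(); // evict the current minimum
                    }
                } // else: smaller than the Nth record, skip it
            }
            System.out.println(store); // keeps the three largest: [120, 300, 500]
        }
    }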
-
-    /**
-     * Drain the queryRecordsQueue into internal stores and services
-     */
-    public void drainRecords() {
-        final List<SearchQueryRecord> records = new ArrayList<>();
-        queryRecordsQueue.drainTo(records);
-        records.sort(Comparator.comparingLong(SearchQueryRecord::getTimestamp));
-        for (MetricType metricType : MetricType.allMetricTypes()) {
-            if (enableCollect.get(metricType)) {
-                // ingest the records into topQueriesService
-                topQueriesServices.get(metricType).consumeRecords(records);
-            }
-        }
-    }
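drainRecords empties the ingest queue in one non-blocking bulk operation and orders the batch by timestamp before the per-metric services consume it. A sketch of that drain-and-sort step:

    // Illustrative sketch: bulk-draining a queue and sorting by timestamp.
    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.List;
    import java.util.concurrent.LinkedBlockingQueue;

    public class DrainAndSort {
        record Record(long timestamp) {}

        public static void main(String[] args) {
            LinkedBlockingQueue<Record> queue = new LinkedBlockingQueue<>();
            queue.offer(new Record(30));
            queue.offer(new Record(10));
            queue.offer(new Record(20));

            List<Record> batch = new ArrayList<>();
            queue.drainTo(batch); // non-blocking bulk removal
            batch.sort(Comparator.comparingLong(Record::timestamp));
            System.out.println(batch); // [Record[timestamp=10], Record[timestamp=20], Record[timestamp=30]]
        }
    }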
-
-    /**
-     * Get the top queries service based on metricType
-     * @param metricType {@link MetricType}
-     * @return {@link TopQueriesService}
-     */
-    public TopQueriesService getTopQueriesService(final MetricType metricType) {
-        return topQueriesServices.get(metricType);
-    }
-
-    /**
-     * Set flag to enable or disable Query Insights data collection
-     *
-     * @param metricType {@link MetricType}
-     * @param enable Flag to enable or disable Query Insights data collection
-     */
-    public void enableCollection(final MetricType metricType, final boolean enable) {
-        this.enableCollect.put(metricType, enable);
-        this.topQueriesServices.get(metricType).setEnabled(enable);
-    }
-
-    /**
-     * Get if the Query Insights data collection is enabled for a MetricType
-     *
-     * @param metricType {@link MetricType}
-     * @return if the Query Insights data collection is enabled
-     */
-    public boolean isCollectionEnabled(final MetricType metricType) {
-        return this.enableCollect.get(metricType);
-    }
-
-    /**
-     * Check if query insights service is enabled
-     *
-     * @return if query insights service is enabled
-     */
-    public boolean isEnabled() {
-        for (MetricType t : MetricType.allMetricTypes()) {
-            if (isCollectionEnabled(t)) {
-                return true;
-            }
-        }
-        return false;
-    }
-
-    /**
-     * Validate the window size config for a metricType
-     *
-     * @param type {@link MetricType}
-     * @param windowSize {@link TimeValue}
-     */
-    public void validateWindowSize(final MetricType type, final TimeValue windowSize) {
-        if (topQueriesServices.containsKey(type)) {
-            topQueriesServices.get(type).validateWindowSize(windowSize);
-        }
-    }
-
-    /**
-     * Set window size for a metricType
-     *
-     * @param type {@link MetricType}
-     * @param windowSize {@link TimeValue}
-     */
-    public void setWindowSize(final MetricType type, final TimeValue windowSize) {
-        if (topQueriesServices.containsKey(type)) {
-            topQueriesServices.get(type).setWindowSize(windowSize);
-        }
-    }
-
-    /**
-     * Validate the top n size config for a metricType
-     *
-     * @param type {@link MetricType}
-     * @param topNSize top n size
-     */
-    public void validateTopNSize(final MetricType type, final int topNSize) {
-        if (topQueriesServices.containsKey(type)) {
-            topQueriesServices.get(type).validateTopNSize(topNSize);
-        }
-    }
-
-    /**
-     * Set the top n size config for a metricType
-     *
-     * @param type {@link MetricType}
-     * @param topNSize top n size
-     */
-    public void setTopNSize(final MetricType type, final int topNSize) {
-        if (topQueriesServices.containsKey(type)) {
-            topQueriesServices.get(type).setTopNSize(topNSize);
-        }
-    }
-
-    /**
-     * Set the exporter config for a metricType
-     *
-     * @param type {@link MetricType}
-     * @param settings exporter settings
-     */
-    public void setExporter(final MetricType type, final Settings settings) {
-        if (topQueriesServices.containsKey(type)) {
-            topQueriesServices.get(type).setExporter(settings);
-        }
-    }
-
-    /**
-     * Validate the exporter config for a metricType
-     *
-     * @param type {@link MetricType}
-     * @param settings exporter settings
-     */
-    public void validateExporterConfig(final MetricType type, final Settings settings) {
-        if (topQueriesServices.containsKey(type)) {
-            topQueriesServices.get(type).validateExporterConfig(settings);
-        }
-    }
-
-    @Override
-    protected void doStart() {
-        if (isEnabled()) {
-            scheduledFuture = threadPool.scheduleWithFixedDelay(
-                this::drainRecords,
-                QueryInsightsSettings.QUERY_RECORD_QUEUE_DRAIN_INTERVAL,
-                QueryInsightsSettings.QUERY_INSIGHTS_EXECUTOR
-            );
-        }
-    }
-
-    @Override
-    protected void doStop() {
-        if (scheduledFuture != null) {
-            scheduledFuture.cancel();
-        }
-    }
-
-    @Override
-    protected void doClose() throws IOException {
-        // close all top n queries services
-        for (TopQueriesService topQueriesService : topQueriesServices.values()) {
-            topQueriesService.close();
-        }
-        // close any unclosed resources
-        queryInsightsExporterFactory.closeAllExporters();
-    }
-}
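The lifecycle methods above schedule the drain loop at a fixed delay on start and cancel it on stop. A sketch of the same lifecycle using a plain ScheduledExecutorService in place of the OpenSearch thread pool:

    // Illustrative sketch: fixed-delay drain loop with explicit cancellation.
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.ScheduledFuture;
    import java.util.concurrent.TimeUnit;

    public class DrainLoop {
        public static void main(String[] args) throws InterruptedException {
            ScheduledExecutorService pool = Executors.newSingleThreadScheduledExecutor();
            ScheduledFuture<?> scheduledFuture = pool.scheduleWithFixedDelay(
                () -> System.out.println("draining records..."),
                0, 100, TimeUnit.MILLISECONDS
            );
            Thread.sleep(350);             // let a few drains run
            scheduledFuture.cancel(false); // doStop equivalent
            pool.shutdown();
        }
    }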
diff --git a/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/core/service/TopQueriesService.java b/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/core/service/TopQueriesService.java
deleted file mode 100644
index bbe8b8fc40dac..0000000000000
--- a/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/core/service/TopQueriesService.java
+++ /dev/null
@@ -1,372 +0,0 @@
-/*
- * SPDX-License-Identifier: Apache-2.0
- *
- * The OpenSearch Contributors require contributions made to
- * this file be licensed under the Apache-2.0 license or a
- * compatible open source license.
- */
-
-package org.opensearch.plugin.insights.core.service;
-
-import org.apache.logging.log4j.LogManager;
-import org.apache.logging.log4j.Logger;
-import org.opensearch.common.settings.Settings;
-import org.opensearch.common.unit.TimeValue;
-import org.opensearch.plugin.insights.core.exporter.QueryInsightsExporter;
-import org.opensearch.plugin.insights.core.exporter.QueryInsightsExporterFactory;
-import org.opensearch.plugin.insights.core.exporter.SinkType;
-import org.opensearch.plugin.insights.rules.model.MetricType;
-import org.opensearch.plugin.insights.rules.model.SearchQueryRecord;
-import org.opensearch.plugin.insights.settings.QueryInsightsSettings;
-import org.opensearch.threadpool.ThreadPool;
-
-import java.io.IOException;
-import java.time.Instant;
-import java.time.LocalDateTime;
-import java.time.ZoneId;
-import java.time.ZoneOffset;
-import java.time.temporal.ChronoUnit;
-import java.util.ArrayList;
-import java.util.Collection;
-import java.util.List;
-import java.util.Locale;
-import java.util.PriorityQueue;
-import java.util.concurrent.atomic.AtomicReference;
-import java.util.stream.Collectors;
-import java.util.stream.Stream;
-
-import static org.opensearch.plugin.insights.settings.QueryInsightsSettings.DEFAULT_TOP_N_QUERIES_INDEX_PATTERN;
-import static org.opensearch.plugin.insights.settings.QueryInsightsSettings.DEFAULT_TOP_QUERIES_EXPORTER_TYPE;
-import static org.opensearch.plugin.insights.settings.QueryInsightsSettings.EXPORTER_TYPE;
-import static org.opensearch.plugin.insights.settings.QueryInsightsSettings.EXPORT_INDEX;
-import static org.opensearch.plugin.insights.settings.QueryInsightsSettings.QUERY_INSIGHTS_EXECUTOR;
-
-/**
- * Service responsible for gathering and storing top N queries
- * with high latency or resource usage
- *
- * @opensearch.internal
- */
-public class TopQueriesService {
-    /**
-     * Logger of the top queries service
-     */
-    private final Logger logger = LogManager.getLogger();
-    private boolean enabled;
-    /**
-     * The metric type to measure top n queries
-     */
-    private final MetricType metricType;
-    private int topNSize;
-    /**
-     * The window size to keep the top n queries
-     */
-    private TimeValue windowSize;
-    /**
-     * The current window start timestamp
-     */
-    private long windowStart;
-    /**
-     * The internal thread-safe store that holds the top n queries insight data
-     */
-    private final PriorityQueue<SearchQueryRecord> topQueriesStore;
-
-    /**
-     * The AtomicReference of a snapshot of the current window top queries for getters to consume
-     */
-    private final AtomicReference<List<SearchQueryRecord>> topQueriesCurrentSnapshot;
-
-    /**
-     * The AtomicReference of a snapshot of the last window top queries for getters to consume
-     */
-    private final AtomicReference<List<SearchQueryRecord>> topQueriesHistorySnapshot;
-
-    /**
-     * Factory for validating and creating exporters
-     */
-    private final QueryInsightsExporterFactory queryInsightsExporterFactory;
-
-    /**
-     * The internal OpenSearch thread pool that execute async processing and exporting tasks
-     */
-    private final ThreadPool threadPool;
-
-    /**
-     * Exporter for exporting top queries data
-     */
-    private QueryInsightsExporter exporter;
-
-    TopQueriesService(
-        final MetricType metricType,
-        final ThreadPool threadPool,
-        final QueryInsightsExporterFactory queryInsightsExporterFactory
-    ) {
-        this.enabled = false;
-        this.metricType = metricType;
-        this.threadPool = threadPool;
-        this.queryInsightsExporterFactory = queryInsightsExporterFactory;
-        this.topNSize = QueryInsightsSettings.DEFAULT_TOP_N_SIZE;
-        this.windowSize = QueryInsightsSettings.DEFAULT_WINDOW_SIZE;
-        this.windowStart = -1L;
-        this.exporter = null;
-        topQueriesStore = new PriorityQueue<>(topNSize, (a, b) -> SearchQueryRecord.compare(a, b, metricType));
-        topQueriesCurrentSnapshot = new AtomicReference<>(new ArrayList<>());
-        topQueriesHistorySnapshot = new AtomicReference<>(new ArrayList<>());
-    }
-
-    /**
-     * Set the top N size for TopQueriesService service.
-     *
-     * @param topNSize the top N size to set
-     */
-    public void setTopNSize(final int topNSize) {
-        this.topNSize = topNSize;
-    }
-
-    /**
-     * Get the current configured top n size
-     *
-     * @return top n size
-     */
-    public int getTopNSize() {
-        return topNSize;
-    }
-
-    /**
-     * Validate the top N size based on the internal constraints
-     *
-     * @param size the wanted top N size
-     */
-    public void validateTopNSize(final int size) {
-        if (size < 1 || size > QueryInsightsSettings.MAX_N_SIZE) {
-            throw new IllegalArgumentException(
-                "Top N size setting for ["
-                    + metricType
-                    + "]"
-                    + " should be between 1 and "
-                    + QueryInsightsSettings.MAX_N_SIZE
-                    + ", was ("
-                    + size
-                    + ")"
-            );
-        }
-    }
-
-    /**
-     * Set enable flag for the service
-     * @param enabled boolean
-     */
-    public void setEnabled(final boolean enabled) {
-        this.enabled = enabled;
-    }
-
-    /**
-     * Set the window size for top N queries service
-     *
-     * @param windowSize window size to set
-     */
-    public void setWindowSize(final TimeValue windowSize) {
-        this.windowSize = windowSize;
-        // reset the window start time since the window size has changed
-        this.windowStart = -1L;
-    }
-
-    /**
-     * Validate if the window size is valid, based on internal constraints.
-     *
-     * @param windowSize the window size to validate
-     */
-    public void validateWindowSize(final TimeValue windowSize) {
-        if (windowSize.compareTo(QueryInsightsSettings.MAX_WINDOW_SIZE) > 0
-            || windowSize.compareTo(QueryInsightsSettings.MIN_WINDOW_SIZE) < 0) {
-            throw new IllegalArgumentException(
-                "Window size setting for ["
-                    + metricType
-                    + "]"
-                    + " should be between ["
-                    + QueryInsightsSettings.MIN_WINDOW_SIZE
-                    + ","
-                    + QueryInsightsSettings.MAX_WINDOW_SIZE
-                    + "], "
-                    + "was ("
-                    + windowSize
-                    + ")"
-            );
-        }
-        if (!(QueryInsightsSettings.VALID_WINDOW_SIZES_IN_MINUTES.contains(windowSize) || windowSize.getMinutes() % 60 == 0)) {
-            throw new IllegalArgumentException(
-                "Window size setting for ["
-                    + metricType
-                    + "]"
-                    + " should be multiple of 1 hour, or one of "
-                    + QueryInsightsSettings.VALID_WINDOW_SIZES_IN_MINUTES
-                    + ", was ("
-                    + windowSize
-                    + ")"
-            );
-        }
-    }
-
-    /**
-     * Set up the top queries exporter based on provided settings
-     *
-     * @param settings exporter config {@link Settings}
-     */
-    public void setExporter(final Settings settings) {
-        if (settings.get(EXPORTER_TYPE) != null) {
-            SinkType expectedType = SinkType.parse(settings.get(EXPORTER_TYPE, DEFAULT_TOP_QUERIES_EXPORTER_TYPE));
-            if (exporter != null && expectedType == SinkType.getSinkTypeFromExporter(exporter)) {
-                queryInsightsExporterFactory.updateExporter(exporter, settings.get(EXPORT_INDEX, DEFAULT_TOP_N_QUERIES_INDEX_PATTERN));
-            } else {
-                try {
-                    queryInsightsExporterFactory.closeExporter(this.exporter);
-                } catch (IOException e) {
-                    logger.error("Failed to close the current exporter when updating exporter, error: ", e);
-                }
-                this.exporter = queryInsightsExporterFactory.createExporter(
-                    SinkType.parse(settings.get(EXPORTER_TYPE, DEFAULT_TOP_QUERIES_EXPORTER_TYPE)),
-                    settings.get(EXPORT_INDEX, DEFAULT_TOP_N_QUERIES_INDEX_PATTERN)
-                );
-            }
-        } else {
-            // Disable exporter if exporter type is set to null
-            try {
-                queryInsightsExporterFactory.closeExporter(this.exporter);
-                this.exporter = null;
-            } catch (IOException e) {
-                logger.error("Failed to close the current exporter when disabling exporter, error: ", e);
-            }
-        }
-    }
-
-    /**
-     * Validate provided settings for top queries exporter
-     *
-     * @param settings exporter config {@link Settings}
-     */
-    public void validateExporterConfig(Settings settings) {
-        queryInsightsExporterFactory.validateExporterConfig(settings);
-    }
-
-    /**
-     * Get all top queries records that are in the current top n queries store
-     * Optionally include top N records from the last window.
-     *
-     * By default, return the records in sorted order.
-     *
-     * @param includeLastWindow if the top N queries from the last window should be included
-     * @return List of the records that are in the query insight store
-     * @throws IllegalArgumentException if query insight is disabled in the cluster
-     */
-    public List<SearchQueryRecord> getTopQueriesRecords(final boolean includeLastWindow) throws IllegalArgumentException {
-        if (!enabled) {
-            throw new IllegalArgumentException(
-                String.format(Locale.ROOT, "Cannot get top n queries for [%s] when it is not enabled.", metricType.toString())
-            );
-        }
-        // read from window snapshots
-        final List<SearchQueryRecord> queries = new ArrayList<>(topQueriesCurrentSnapshot.get());
-        if (includeLastWindow) {
-            queries.addAll(topQueriesHistorySnapshot.get());
-        }
-        return Stream.of(queries)
-            .flatMap(Collection::stream)
-            .sorted((a, b) -> SearchQueryRecord.compare(a, b, metricType) * -1)
-            .collect(Collectors.toList());
-    }
-
-    /**
-     * Consume records to top queries stores
-     *
-     * @param records a list of {@link SearchQueryRecord}
-     */
-    void consumeRecords(final List<SearchQueryRecord> records) {
-        final long currentWindowStart = calculateWindowStart(System.currentTimeMillis());
-        List<SearchQueryRecord> recordsInLastWindow = new ArrayList<>();
-        List<SearchQueryRecord> recordsInThisWindow = new ArrayList<>();
-        for (SearchQueryRecord record : records) {
-            // skip the records that do not have the corresponding measurement
-            if (!record.getMeasurements().containsKey(metricType)) {
-                continue;
-            }
-            if (record.getTimestamp() < currentWindowStart) {
-                recordsInLastWindow.add(record);
-            } else {
-                recordsInThisWindow.add(record);
-            }
-        }
-        // add records in last window, if there are any, to the top n store
-        addToTopNStore(recordsInLastWindow);
-        // rotate window and reset window start if necessary
-        rotateWindowIfNecessary(currentWindowStart);
-        // add records in current window, if there are any, to the top n store
-        addToTopNStore(recordsInThisWindow);
-        // update the current window snapshot for getters to consume
-        final List<SearchQueryRecord> newSnapShot = new ArrayList<>(topQueriesStore);
-        newSnapShot.sort((a, b) -> SearchQueryRecord.compare(a, b, metricType));
-        topQueriesCurrentSnapshot.set(newSnapShot);
-    }
-
-    private void addToTopNStore(final List<SearchQueryRecord> records) {
-        topQueriesStore.addAll(records);
-        // evict the smallest elements to keep the priority queue at a fixed size
-        while (topQueriesStore.size() > topNSize) {
-            topQueriesStore.poll();
-        }
-    }
-
-    /**
-     * Reset the current window and rotate the data to history snapshot for top n queries.
-     * This function is invoked at most once per consumeRecords call.
-     *
-     * @param newWindowStart the new windowStart to set to
-     */
-    private void rotateWindowIfNecessary(final long newWindowStart) {
-        // reset window if the current window is outdated
-        if (windowStart < newWindowStart) {
-            final List<SearchQueryRecord> history = new ArrayList<>();
-            // rotate the current window to history store only if the data belongs to the last window
-            if (windowStart == newWindowStart - windowSize.getMillis()) {
-                history.addAll(topQueriesStore);
-            }
-            topQueriesHistorySnapshot.set(history);
-            topQueriesStore.clear();
-            topQueriesCurrentSnapshot.set(new ArrayList<>());
-            windowStart = newWindowStart;
-            // export to the configured sink
-            if (exporter != null) {
-                threadPool.executor(QUERY_INSIGHTS_EXECUTOR).execute(() -> exporter.export(history));
-            }
-        }
-    }
-
-    /**
-     * Calculate the window start for the given timestamp
-     *
-     * @param timestamp the given timestamp to calculate window start
-     */
-    private long calculateWindowStart(final long timestamp) {
-        final LocalDateTime currentTime = LocalDateTime.ofInstant(Instant.ofEpochMilli(timestamp), ZoneId.of("UTC"));
-        LocalDateTime windowStartTime = currentTime.truncatedTo(ChronoUnit.HOURS);
-        while (!windowStartTime.plusMinutes(windowSize.getMinutes()).isAfter(currentTime)) {
-            windowStartTime = windowStartTime.plusMinutes(windowSize.getMinutes());
-        }
-        return windowStartTime.toInstant(ZoneOffset.UTC).getEpochSecond() * 1000;
-    }
-
-    /**
-     * Get the current top queries snapshot from the AtomicReference.
-     *
-     * @return a list of {@link SearchQueryRecord}
-     */
-    public List<SearchQueryRecord> getTopQueriesCurrentSnapshot() {
-        return topQueriesCurrentSnapshot.get();
-    }
-
-    /**
-     * Close the top n queries service
-     */
-    public void close() throws IOException {
-        queryInsightsExporterFactory.closeExporter(this.exporter);
-    }
-}
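calculateWindowStart anchors at the top of the hour and advances in whole windows until the next step would pass the current time; with a 15-minute window, 10:37 maps to a window starting at 10:30. A sketch reproducing that computation:

    // Illustrative sketch: mapping a timestamp to its window start.
    import java.time.LocalDateTime;
    import java.time.temporal.ChronoUnit;

    public class WindowStart {
        static LocalDateTime windowStart(LocalDateTime now, long windowMinutes) {
            LocalDateTime start = now.truncatedTo(ChronoUnit.HOURS);
            while (!start.plusMinutes(windowMinutes).isAfter(now)) {
                start = start.plusMinutes(windowMinutes);
            }
            return start;
        }

        public static void main(String[] args) {
            LocalDateTime now = LocalDateTime.of(2024, 6, 24, 10, 37);
            System.out.println(windowStart(now, 15)); // 2024-06-24T10:30
        }
    }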
diff --git a/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/core/service/package-info.java b/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/core/service/package-info.java
deleted file mode 100644
index 5068f28234f6d..0000000000000
--- a/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/core/service/package-info.java
+++ /dev/null
@@ -1,12 +0,0 @@
-/*
- * SPDX-License-Identifier: Apache-2.0
- *
- * The OpenSearch Contributors require contributions made to
- * this file be licensed under the Apache-2.0 license or a
- * compatible open source license.
- */
-
-/**
- * Service Classes for Query Insights
- */
-package org.opensearch.plugin.insights.core.service;
diff --git a/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/package-info.java b/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/package-info.java
deleted file mode 100644
index 04d1f9bfff7e1..0000000000000
--- a/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/package-info.java
+++ /dev/null
@@ -1,12 +0,0 @@
-/*
- * SPDX-License-Identifier: Apache-2.0
- *
- * The OpenSearch Contributors require contributions made to
- * this file be licensed under the Apache-2.0 license or a
- * compatible open source license.
- */
-
-/**
- * Base Package of Query Insights
- */
-package org.opensearch.plugin.insights;
diff --git a/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/rules/action/package-info.java b/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/rules/action/package-info.java
deleted file mode 100644
index 9b6b5856f7d27..0000000000000
--- a/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/rules/action/package-info.java
+++ /dev/null
@@ -1,12 +0,0 @@
-/*
- * SPDX-License-Identifier: Apache-2.0
- *
- * The OpenSearch Contributors require contributions made to
- * this file be licensed under the Apache-2.0 license or a
- * compatible open source license.
- */
-
-/**
- * Transport Actions, Requests and Responses for Query Insights
- */
-package org.opensearch.plugin.insights.rules.action;
diff --git a/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/rules/action/top_queries/TopQueries.java b/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/rules/action/top_queries/TopQueries.java
deleted file mode 100644
index 26cff82aae52e..0000000000000
--- a/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/rules/action/top_queries/TopQueries.java
+++ /dev/null
@@ -1,77 +0,0 @@
-/*
- * SPDX-License-Identifier: Apache-2.0
- *
- * The OpenSearch Contributors require contributions made to
- * this file be licensed under the Apache-2.0 license or a
- * compatible open source license.
- */
-
-package org.opensearch.plugin.insights.rules.action.top_queries;
-
-import org.opensearch.action.support.nodes.BaseNodeResponse;
-import org.opensearch.cluster.node.DiscoveryNode;
-import org.opensearch.core.common.io.stream.StreamInput;
-import org.opensearch.core.common.io.stream.StreamOutput;
-import org.opensearch.core.xcontent.ToXContentObject;
-import org.opensearch.core.xcontent.XContentBuilder;
-import org.opensearch.plugin.insights.rules.model.SearchQueryRecord;
-
-import java.io.IOException;
-import java.util.List;
-
-/**
- * Holds all top queries records by resource usage or latency on a node
- * Mainly used in the top N queries node response workflow.
- *
- * @opensearch.internal
- */
-public class TopQueries extends BaseNodeResponse implements ToXContentObject {
-    /** The store to keep the top queries records */
-    private final List<SearchQueryRecord> topQueriesRecords;
-
-    /**
-     * Create the TopQueries Object from StreamInput
-     * @param in A {@link StreamInput} object.
-     * @throws IOException IOException
-     */
-    public TopQueries(final StreamInput in) throws IOException {
-        super(in);
-        topQueriesRecords = in.readList(SearchQueryRecord::new);
-    }
-
-    /**
-     * Create the TopQueries Object
-     * @param node A node that is part of the cluster.
-     * @param searchQueryRecords A list of SearchQueryRecord associated in this TopQueries.
-     */
-    public TopQueries(final DiscoveryNode node, final List<SearchQueryRecord> searchQueryRecords) {
-        super(node);
-        topQueriesRecords = searchQueryRecords;
-    }
-
-    @Override
-    public XContentBuilder toXContent(final XContentBuilder builder, final Params params) throws IOException {
-        if (topQueriesRecords != null) {
-            for (SearchQueryRecord record : topQueriesRecords) {
-                record.toXContent(builder, params);
-            }
-        }
-        return builder;
-    }
-
-    @Override
-    public void writeTo(final StreamOutput out) throws IOException {
-        super.writeTo(out);
-        out.writeList(topQueriesRecords);
-
-    }
-
-    /**
-     * Get all top queries records
-     *
-     * @return the top queries records in this node response
-     */
-    public List<SearchQueryRecord> getTopQueriesRecord() {
-        return topQueriesRecords;
-    }
-}
diff --git a/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/rules/action/top_queries/TopQueriesAction.java b/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/rules/action/top_queries/TopQueriesAction.java
deleted file mode 100644
index b8ed69fa5692b..0000000000000
--- a/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/rules/action/top_queries/TopQueriesAction.java
+++ /dev/null
@@ -1,32 +0,0 @@
-/*
- * SPDX-License-Identifier: Apache-2.0
- *
- * The OpenSearch Contributors require contributions made to
- * this file be licensed under the Apache-2.0 license or a
- * compatible open source license.
- */
-
-package org.opensearch.plugin.insights.rules.action.top_queries;
-
-import org.opensearch.action.ActionType;
-
-/**
- * Transport action for cluster/node level top queries information.
- *
- * @opensearch.internal
- */
-public class TopQueriesAction extends ActionType<TopQueriesResponse> {
-
-    /**
-     * The TopQueriesAction Instance.
-     */
-    public static final TopQueriesAction INSTANCE = new TopQueriesAction();
-    /**
-     * The name of this Action
-     */
-    public static final String NAME = "cluster:admin/opensearch/insights/top_queries";
-
-    private TopQueriesAction() {
-        super(NAME, TopQueriesResponse::new);
-    }
-}
diff --git a/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/rules/action/top_queries/TopQueriesRequest.java b/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/rules/action/top_queries/TopQueriesRequest.java
deleted file mode 100644
index 3bdff2c403161..0000000000000
--- a/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/rules/action/top_queries/TopQueriesRequest.java
+++ /dev/null
@@ -1,62 +0,0 @@
-/*
- * SPDX-License-Identifier: Apache-2.0
- *
- * The OpenSearch Contributors require contributions made to
- * this file be licensed under the Apache-2.0 license or a
- * compatible open source license.
- */
-
-package org.opensearch.plugin.insights.rules.action.top_queries;
-
-import org.opensearch.action.support.nodes.BaseNodesRequest;
-import org.opensearch.core.common.io.stream.StreamInput;
-import org.opensearch.core.common.io.stream.StreamOutput;
-import org.opensearch.plugin.insights.rules.model.MetricType;
-
-import java.io.IOException;
-
-/**
- * A request to get cluster/node level top queries information.
- *
- * @opensearch.internal
- */
-public class TopQueriesRequest extends BaseNodesRequest<TopQueriesRequest> {
-
-    final MetricType metricType;
-
-    /**
-     * Constructor for TopQueriesRequest
-     *
-     * @param in A {@link StreamInput} object.
-     * @throws IOException if the stream cannot be deserialized.
-     */
-    public TopQueriesRequest(final StreamInput in) throws IOException {
-        super(in);
-        this.metricType = MetricType.readFromStream(in);
-    }
-
-    /**
-     * Get top queries from nodes based on the nodes ids specified.
-     * If none are passed, cluster level top queries will be returned.
-     *
-     * @param metricType {@link MetricType}
-     * @param nodesIds the nodeIds specified in the request
-     */
-    public TopQueriesRequest(final MetricType metricType, final String... nodesIds) {
-        super(nodesIds);
-        this.metricType = metricType;
-    }
-
-    /**
-     * Get the type of requested metrics
-     */
-    public MetricType getMetricType() {
-        return metricType;
-    }
-
-    @Override
-    public void writeTo(final StreamOutput out) throws IOException {
-        super.writeTo(out);
-        out.writeString(metricType.toString());
-    }
-}
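On the wire, the metric type travels as its lowercase string form and is parsed back with an uppercase valueOf on read. A sketch of that round trip using plain data streams in place of OpenSearch's StreamInput/StreamOutput (whose string encoding differs in detail):

    // Illustrative sketch: string-based enum round trip over a byte stream.
    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.io.DataInputStream;
    import java.io.DataOutputStream;
    import java.io.IOException;
    import java.util.Locale;

    public class MetricTypeRoundTrip {
        enum MetricType { LATENCY, CPU, MEMORY }

        public static void main(String[] args) throws IOException {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            try (DataOutputStream out = new DataOutputStream(bytes)) {
                out.writeUTF(MetricType.LATENCY.name().toLowerCase(Locale.ROOT)); // writeTo side
            }
            try (DataInputStream in = new DataInputStream(new ByteArrayInputStream(bytes.toByteArray()))) {
                MetricType parsed = MetricType.valueOf(in.readUTF().toUpperCase(Locale.ROOT)); // read side
                System.out.println(parsed); // LATENCY
            }
        }
    }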
diff --git a/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/rules/action/top_queries/TopQueriesResponse.java b/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/rules/action/top_queries/TopQueriesResponse.java
deleted file mode 100644
index 2e66bb7f77baf..0000000000000
--- a/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/rules/action/top_queries/TopQueriesResponse.java
+++ /dev/null
@@ -1,143 +0,0 @@
-/*
- * SPDX-License-Identifier: Apache-2.0
- *
- * The OpenSearch Contributors require contributions made to
- * this file be licensed under the Apache-2.0 license or a
- * compatible open source license.
- */
-
-package org.opensearch.plugin.insights.rules.action.top_queries;
-
-import org.opensearch.action.FailedNodeException;
-import org.opensearch.action.support.nodes.BaseNodesResponse;
-import org.opensearch.cluster.ClusterName;
-import org.opensearch.common.xcontent.XContentFactory;
-import org.opensearch.core.common.io.stream.StreamInput;
-import org.opensearch.core.common.io.stream.StreamOutput;
-import org.opensearch.core.xcontent.ToXContentFragment;
-import org.opensearch.core.xcontent.XContentBuilder;
-import org.opensearch.plugin.insights.rules.model.Attribute;
-import org.opensearch.plugin.insights.rules.model.MetricType;
-import org.opensearch.plugin.insights.rules.model.SearchQueryRecord;
-
-import java.io.IOException;
-import java.util.Collection;
-import java.util.List;
-import java.util.stream.Collectors;
-
-/**
- * Transport response for cluster/node level top queries information.
- *
- * @opensearch.internal
- */
-public class TopQueriesResponse extends BaseNodesResponse<TopQueries> implements ToXContentFragment {
-
-    private static final String CLUSTER_LEVEL_RESULTS_KEY = "top_queries";
-    private final MetricType metricType;
-    private final int top_n_size;
-
-    /**
-     * Constructor for TopQueriesResponse.
-     *
-     * @param in A {@link StreamInput} object.
-     * @throws IOException if the stream cannot be deserialized.
-     */
-    public TopQueriesResponse(final StreamInput in) throws IOException {
-        super(in);
-        top_n_size = in.readInt();
-        metricType = in.readEnum(MetricType.class);
-    }
-
-    /**
-     * Constructor for TopQueriesResponse
-     *
-     * @param clusterName The current cluster name
-     * @param nodes A list that contains top queries results from all nodes
-     * @param failures A list that contains FailedNodeException
-     * @param top_n_size The top N size to return to the user
-     * @param metricType the {@link MetricType} to be returned in this response
-     */
-    public TopQueriesResponse(
-        final ClusterName clusterName,
-        final List<TopQueries> nodes,
-        final List<FailedNodeException> failures,
-        final int top_n_size,
-        final MetricType metricType
-    ) {
-        super(clusterName, nodes, failures);
-        this.top_n_size = top_n_size;
-        this.metricType = metricType;
-    }
-
-    @Override
-    protected List<TopQueries> readNodesFrom(final StreamInput in) throws IOException {
-        return in.readList(TopQueries::new);
-    }
-
-    @Override
-    protected void writeNodesTo(final StreamOutput out, final List<TopQueries> nodes) throws IOException {
-        out.writeList(nodes);
-        out.writeInt(top_n_size);
-        out.writeEnum(metricType);
-    }
-
-    @Override
-    public XContentBuilder toXContent(final XContentBuilder builder, final Params params) throws IOException {
-        final List<TopQueries> results = getNodes();
-        postProcess(results);
-        builder.startObject();
-        toClusterLevelResult(builder, params, results);
-        return builder.endObject();
-    }
-
-    @Override
-    public String toString() {
-        try {
-            final XContentBuilder builder = XContentFactory.jsonBuilder().prettyPrint();
-            builder.startObject();
-            this.toXContent(builder, EMPTY_PARAMS);
-            builder.endObject();
-            return builder.toString();
-        } catch (IOException e) {
-            return "{ \"error\" : \"" + e.getMessage() + "\"}";
-        }
-    }
-
-    /**
-     * Post process the top queries results to add customized attributes
-     *
-     * @param results the top queries results
-     */
-    private void postProcess(final List<TopQueries> results) {
-        for (TopQueries topQueries : results) {
-            final String nodeId = topQueries.getNode().getId();
-            for (SearchQueryRecord record : topQueries.getTopQueriesRecord()) {
-                record.addAttribute(Attribute.NODE_ID, nodeId);
-            }
-        }
-    }
-
-    /**
-     * Merge top n queries results from nodes into cluster level results in XContent format.
-     *
-     * @param builder XContent builder
-     * @param params serialization parameters
-     * @param results top queries results from all nodes
-     * @throws IOException if an error occurs
-     */
-    private void toClusterLevelResult(final XContentBuilder builder, final Params params, final List<TopQueries> results)
-        throws IOException {
-        final List<SearchQueryRecord> all_records = results.stream()
-            .map(TopQueries::getTopQueriesRecord)
-            .flatMap(Collection::stream)
-            .sorted((a, b) -> SearchQueryRecord.compare(a, b, metricType) * -1)
-            .limit(top_n_size)
-            .collect(Collectors.toList());
-        builder.startArray(CLUSTER_LEVEL_RESULTS_KEY);
-        for (SearchQueryRecord record : all_records) {
-            record.toXContent(builder, params);
-        }
-        builder.endArray();
-    }
-
-}
diff --git a/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/rules/action/top_queries/package-info.java b/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/rules/action/top_queries/package-info.java
deleted file mode 100644
index 3cc7900e5ce7d..0000000000000
--- a/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/rules/action/top_queries/package-info.java
+++ /dev/null
@@ -1,12 +0,0 @@
-/*
- * SPDX-License-Identifier: Apache-2.0
- *
- * The OpenSearch Contributors require contributions made to
- * this file be licensed under the Apache-2.0 license or a
- * compatible open source license.
- */
-
-/**
- * Transport Actions, Requests and Responses for Top N Queries
- */
-package org.opensearch.plugin.insights.rules.action.top_queries;
- */
-
-package org.opensearch.plugin.insights.rules.model;
-
-import org.opensearch.core.common.io.stream.StreamInput;
-import org.opensearch.core.common.io.stream.StreamOutput;
-
-import java.io.IOException;
-import java.util.Locale;
-
-/**
- * Valid attributes for a search query record
- *
- * @opensearch.internal
- */
-public enum Attribute {
-    /**
-     * The search query type
-     */
-    SEARCH_TYPE,
-    /**
-     * The search query source
-     */
-    SOURCE,
-    /**
-     * Total shards queried
-     */
-    TOTAL_SHARDS,
-    /**
-     * The indices involved
-     */
-    INDICES,
-    /**
-     * The per phase level latency map for a search query
-     */
-    PHASE_LATENCY_MAP,
-    /**
-     * The node id for this request
-     */
-    NODE_ID,
-    /**
-     * Tasks level resource usages in this request
-     */
-    TASK_RESOURCE_USAGES,
-    /**
-     * Custom search request labels
-     */
-    LABELS;
-
-    /**
-     * Read an Attribute from a StreamInput
-     *
-     * @param in the StreamInput to read from
-     * @return Attribute
-     * @throws IOException IOException
-     */
-    static Attribute readFromStream(final StreamInput in) throws IOException {
-        return Attribute.valueOf(in.readString().toUpperCase(Locale.ROOT));
-    }
-
-    /**
-     * Write Attribute to a StreamOutput
-     *
-     * @param out the StreamOutput to write
-     * @param attribute the Attribute to write
-     * @throws IOException IOException
-     */
-    static void writeTo(final StreamOutput out, final Attribute attribute) throws IOException {
-        out.writeString(attribute.toString());
-    }
-
-    @Override
-    public String toString() {
-        return this.name().toLowerCase(Locale.ROOT);
-    }
-}
diff --git a/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/rules/model/MetricType.java b/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/rules/model/MetricType.java
deleted file mode 100644
index 4694c757f4ef2..0000000000000
--- a/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/rules/model/MetricType.java
+++ /dev/null
@@ -1,119 +0,0 @@
-/*
- * SPDX-License-Identifier: Apache-2.0
- *
- * The OpenSearch Contributors require contributions made to
- * this file be licensed under the Apache-2.0 license or a
- * compatible open source license.
- */
-
-package org.opensearch.plugin.insights.rules.model;
-
-import org.opensearch.core.common.io.stream.StreamInput;
-import org.opensearch.core.common.io.stream.StreamOutput;
-
-import java.io.IOException;
-import java.util.Arrays;
-import java.util.Comparator;
-import java.util.Locale;
-import java.util.Set;
-import java.util.stream.Collectors;
-
-/**
- * Valid metric types for a search query record
- *
- * @opensearch.internal
- */
-public enum MetricType implements Comparator<Number> {
-    /**
-     * Latency metric type
-     */
-    LATENCY,
-    /**
-     * CPU usage metric type
-     */
-    CPU,
-    /**
-     * JVM heap usage metric type
-     */
-    MEMORY;
-
-    /**
-     * Read a MetricType from a StreamInput
-     *
-     * @param in the StreamInput to read from
-     * @return MetricType
-     * @throws IOException IOException
-     */
-    public static MetricType readFromStream(final StreamInput in) throws IOException {
-        return fromString(in.readString());
-    }
-
-    /**
-     * Create MetricType from String
-     *
-     * @param metricType the String representation of MetricType
-     * @return MetricType
-     */
-    public static MetricType fromString(final String metricType) {
-        return MetricType.valueOf(metricType.toUpperCase(Locale.ROOT));
-    }
-
-    /**
-     * Write MetricType to a StreamOutput
-     *
-     * @param out the StreamOutput to write
-     * @param metricType the MetricType to write
-     * @throws IOException IOException
-     */
-    static void writeTo(final StreamOutput out, final MetricType metricType) throws IOException {
-        out.writeString(metricType.toString());
-    }
-
-    @Override
-    public String toString() {
-        return this.name().toLowerCase(Locale.ROOT);
-    }
-
-    /**
-     * Get all valid metrics
-     *
-     * @return A set of String that contains all valid metrics
-     */
-    public static Set<MetricType> allMetricTypes() {
-        return Arrays.stream(values()).collect(Collectors.toSet());
-    }
-
-    /**
-     * Compare two numbers based on the metric type
-     *
-     * @param a the first Number to be compared.
-     * @param b the second Number to be compared.
-     * @return a negative integer, zero, or a positive integer as the first argument is less than, equal to, or greater than the second
-     */
-    public int compare(final Number a, final Number b) {
-        switch (this) {
-            case LATENCY:
-            case CPU:
-            case MEMORY:
-                return Long.compare(a.longValue(), b.longValue());
-        }
-        return -1;
-    }
-
-    /**
-     * Parse a value with the correct type based on MetricType
-     *
-     * @param o the generic object to parse
-     * @return {@link Number}
-     */
-    Number parseValue(final Object o) {
-        switch (this) {
-            case LATENCY:
-            case CPU:
-            case MEMORY:
-                return (Long) o;
-            default:
-                return (Number) o;
-        }
-    }
-}
diff --git a/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/rules/model/SearchQueryRecord.java b/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/rules/model/SearchQueryRecord.java
deleted file mode 100644
index fec00a680ae58..0000000000000
--- a/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/rules/model/SearchQueryRecord.java
+++ /dev/null
@@ -1,183 +0,0 @@
-/*
- * SPDX-License-Identifier: Apache-2.0
- *
- * The OpenSearch Contributors require contributions made to
- * this file be licensed under the Apache-2.0 license or a
- * compatible open source license.
- */
-
-package org.opensearch.plugin.insights.rules.model;
-
-import org.opensearch.core.common.Strings;
-import org.opensearch.core.common.io.stream.StreamInput;
-import org.opensearch.core.common.io.stream.StreamOutput;
-import org.opensearch.core.common.io.stream.Writeable;
-import org.opensearch.core.xcontent.MediaTypeRegistry;
-import org.opensearch.core.xcontent.ToXContent;
-import org.opensearch.core.xcontent.ToXContentObject;
-import org.opensearch.core.xcontent.XContentBuilder;
-
-import java.io.IOException;
-import java.util.HashMap;
-import java.util.Map;
-import java.util.Objects;
-
-/**
- * SearchQueryRecord represents a minimal atomic record stored in the Query Insight Framework,
- * which contains extensive information related to a search query.
- *
- * @opensearch.internal
- */
-public class SearchQueryRecord implements ToXContentObject, Writeable {
-    private final long timestamp;
-    private final Map<MetricType, Number> measurements;
-    private final Map<Attribute, Object> attributes;
-
-    /**
-     * Constructor of SearchQueryRecord
-     *
-     * @param in the StreamInput to read the SearchQueryRecord from
-     * @throws IOException IOException
-     * @throws ClassCastException ClassCastException
-     */
-    public SearchQueryRecord(final StreamInput in) throws IOException, ClassCastException {
-        this.timestamp = in.readLong();
-        measurements = new HashMap<>();
-        in.readMap(MetricType::readFromStream, StreamInput::readGenericValue)
-            .forEach(((metricType, o) -> measurements.put(metricType, metricType.parseValue(o))));
-        this.attributes = in.readMap(Attribute::readFromStream, StreamInput::readGenericValue);
-    }
-
-    /**
-     * Constructor of SearchQueryRecord
-     *
-     * @param timestamp The timestamp of the query.
-     * @param measurements A list of Measurement associated with this query
-     * @param attributes A list of Attributes associated with this query
-     */
-    public SearchQueryRecord(final long timestamp, Map<MetricType, Number> measurements, final Map<Attribute, Object> attributes) {
-        if (measurements == null) {
-            throw new IllegalArgumentException("Measurements cannot be null");
-        }
-        this.measurements = measurements;
-        this.attributes = attributes;
-        this.timestamp = timestamp;
-    }
-
-    /**
-     * Returns the observation time of the metric.
-     *
-     * @return the observation time in milliseconds
-     */
-    public long getTimestamp() {
-        return timestamp;
-    }
-
-    /**
-     * Returns the measurement associated with the specified name.
-     *
-     * @param name the name of the measurement
-     * @return the measurement object, or null if not found
-     */
-    public Number getMeasurement(final MetricType name) {
-        return measurements.get(name);
-    }
-
-    /**
-     * Returns a map of all the measurements associated with the metric.
-     *
-     * @return a map of measurement names to measurement objects
-     */
-    public Map<MetricType, Number> getMeasurements() {
-        return measurements;
-    }
-
-    /**
-     * Returns a map of the attributes associated with the metric.
-     *
-     * @return a map of attribute keys to attribute values
-     */
-    public Map<Attribute, Object> getAttributes() {
-        return attributes;
-    }
-
-    /**
-     * Add an attribute to this record
-     *
-     * @param attribute attribute to add
-     * @param value the value associated with the attribute
-     */
-    public void addAttribute(final Attribute attribute, final Object value) {
-        attributes.put(attribute, value);
-    }
-
-    @Override
-    public XContentBuilder toXContent(final XContentBuilder builder, final ToXContent.Params params) throws IOException {
-        builder.startObject();
-        builder.field("timestamp", timestamp);
-        for (Map.Entry<Attribute, Object> entry : attributes.entrySet()) {
-            builder.field(entry.getKey().toString(), entry.getValue());
-        }
-        for (Map.Entry<MetricType, Number> entry : measurements.entrySet()) {
-            builder.field(entry.getKey().toString(), entry.getValue());
-        }
-        return builder.endObject();
-    }
-
-    /**
-     * Write a SearchQueryRecord to a StreamOutput
-     *
-     * @param out the StreamOutput to write
-     * @throws IOException IOException
-     */
-    @Override
-    public void writeTo(final StreamOutput out) throws IOException {
-        out.writeLong(timestamp);
-        out.writeMap(measurements, (stream, metricType) -> MetricType.writeTo(out, metricType), StreamOutput::writeGenericValue);
-        out.writeMap(attributes, (stream, attribute) -> Attribute.writeTo(out, attribute), StreamOutput::writeGenericValue);
-    }
-
-    /**
-     * Compare two SearchQueryRecord, based on the given MetricType
-     *
-     * @param a the first SearchQueryRecord to compare
-     * @param b the second SearchQueryRecord to compare
-     * @param metricType the MetricType to compare on
-     * @return 0 if the first SearchQueryRecord is numerically equal to the second SearchQueryRecord;
-     *         -1 if the first SearchQueryRecord is numerically less than the second SearchQueryRecord;
-     *         1 if the first SearchQueryRecord is numerically greater than the second SearchQueryRecord.
-     */
-    public static int compare(final SearchQueryRecord a, final SearchQueryRecord b, final MetricType metricType) {
-        return metricType.compare(a.getMeasurement(metricType), b.getMeasurement(metricType));
-    }
-
-    /**
-     * Check if a SearchQueryRecord is deep equal to another record
-     *
-     * @param o the other SearchQueryRecord record
-     * @return true if two records are deep equal, false otherwise.
-     */
-    @Override
-    public boolean equals(final Object o) {
-        if (this == o) {
-            return true;
-        }
-        if (!(o instanceof SearchQueryRecord)) {
-            return false;
-        }
-        final SearchQueryRecord other = (SearchQueryRecord) o;
-        return timestamp == other.getTimestamp()
-            && measurements.equals(other.getMeasurements())
-            && attributes.size() == other.getAttributes().size();
-    }
-
-    @Override
-    public int hashCode() {
-        return Objects.hash(timestamp, measurements, attributes);
-    }
-
-    @Override
-    public String toString() {
-        return Strings.toString(MediaTypeRegistry.JSON, this);
-    }
-}
diff --git a/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/rules/model/package-info.java b/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/rules/model/package-info.java
deleted file mode 100644
index c59ec1550f54b..0000000000000
--- a/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/rules/model/package-info.java
+++ /dev/null
@@ -1,12 +0,0 @@
-/*
- * SPDX-License-Identifier: Apache-2.0
- *
- * The OpenSearch Contributors require contributions made to
- * this file be licensed under the Apache-2.0 license or a
- * compatible open source license.
- */
-
-/**
- * Data Models for Query Insight Records
- */
-package org.opensearch.plugin.insights.rules.model;
diff --git a/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/rules/resthandler/package-info.java b/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/rules/resthandler/package-info.java
deleted file mode 100644
index 3787f05f65552..0000000000000
--- a/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/rules/resthandler/package-info.java
+++ /dev/null
@@ -1,12 +0,0 @@
-/*
- * SPDX-License-Identifier: Apache-2.0
- *
- * The OpenSearch Contributors require contributions made to
- * this file be licensed under the Apache-2.0 license or a
- * compatible open source license.
- */
-
-/**
- * Rest Handlers for Query Insights
- */
-package org.opensearch.plugin.insights.rules.resthandler;
diff --git a/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/rules/resthandler/top_queries/RestTopQueriesAction.java b/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/rules/resthandler/top_queries/RestTopQueriesAction.java
deleted file mode 100644
index 6aa511c626ab1..0000000000000
--- a/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/rules/resthandler/top_queries/RestTopQueriesAction.java
+++ /dev/null
@@ -1,99 +0,0 @@
-/*
- * SPDX-License-Identifier: Apache-2.0
- *
- * The OpenSearch Contributors require contributions made to
- * this file be licensed under the Apache-2.0 license or a
- * compatible open source license.
- */
-
-package org.opensearch.plugin.insights.rules.resthandler.top_queries;
-
-import org.opensearch.client.node.NodeClient;
-import org.opensearch.common.settings.Settings;
-import org.opensearch.core.common.Strings;
-import org.opensearch.core.rest.RestStatus;
-import org.opensearch.core.xcontent.ToXContent;
-import org.opensearch.plugin.insights.rules.action.top_queries.TopQueriesAction;
-import org.opensearch.plugin.insights.rules.action.top_queries.TopQueriesRequest;
-import org.opensearch.plugin.insights.rules.action.top_queries.TopQueriesResponse;
-import org.opensearch.plugin.insights.rules.model.MetricType;
-import org.opensearch.rest.BaseRestHandler;
-import org.opensearch.rest.BytesRestResponse;
-import org.opensearch.rest.RestChannel;
-import org.opensearch.rest.RestRequest;
-import org.opensearch.rest.RestResponse;
-import org.opensearch.rest.action.RestResponseListener;
-
-import java.util.List;
-import java.util.Locale;
-import java.util.Set;
-import java.util.stream.Collectors;
-
-import static org.opensearch.plugin.insights.settings.QueryInsightsSettings.TOP_QUERIES_BASE_URI;
-import static org.opensearch.rest.RestRequest.Method.GET;
-
-/**
- * Rest action to get Top N queries by certain metric type
- *
- * @opensearch.api
- */
-public class RestTopQueriesAction extends BaseRestHandler {
-    /** The metric types that are allowed in top N queries */
-    static final Set<String> ALLOWED_METRICS = MetricType.allMetricTypes().stream().map(MetricType::toString).collect(Collectors.toSet());
-
-    /**
-     * Constructor for RestTopQueriesAction
-     */
-    public RestTopQueriesAction() {}
-
-    @Override
-    public List<Route> routes() {
-        return List.of(
-            new Route(GET, TOP_QUERIES_BASE_URI),
-            new Route(GET, String.format(Locale.ROOT, "%s/{nodeId}", TOP_QUERIES_BASE_URI))
-        );
-    }
-
-    @Override
-    public String getName() {
-        return "query_insights_top_queries_action";
-    }
-
-    @Override
-    public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) {
-        final TopQueriesRequest topQueriesRequest = prepareRequest(request);
-        topQueriesRequest.timeout(request.param("timeout"));
-
-        return channel -> client.execute(TopQueriesAction.INSTANCE, topQueriesRequest, topQueriesResponse(channel));
-    }
-
-    static TopQueriesRequest prepareRequest(final RestRequest request) {
-        final String[] nodesIds = Strings.splitStringByCommaToArray(request.param("nodeId"));
-        final String metricType = request.param("type", MetricType.LATENCY.toString());
-        if (!ALLOWED_METRICS.contains(metricType)) {
-            throw new IllegalArgumentException(
-                String.format(Locale.ROOT, "request [%s] contains invalid metric type [%s]", request.path(), metricType)
-            );
-        }
-        return new TopQueriesRequest(MetricType.fromString(metricType), nodesIds);
-    }
-
-    @Override
-    protected Set<String> responseParams() {
-        return Settings.FORMAT_PARAMS;
-    }
-
-    @Override
-    public boolean canTripCircuitBreaker() {
-        return false;
-    }
-
-    private RestResponseListener<TopQueriesResponse> topQueriesResponse(final RestChannel channel) {
-        return new RestResponseListener<>(channel) {
-            @Override
-            public RestResponse buildResponse(final TopQueriesResponse response) throws Exception {
-                return new BytesRestResponse(RestStatus.OK, response.toXContent(channel.newBuilder(), ToXContent.EMPTY_PARAMS));
-            }
-        };
-    }
-}
diff --git a/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/rules/resthandler/top_queries/package-info.java b/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/rules/resthandler/top_queries/package-info.java
deleted file mode 100644
index 087cf7d765f8c..0000000000000
--- a/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/rules/resthandler/top_queries/package-info.java
+++ /dev/null
@@ -1,12 +0,0 @@
-/*
- * SPDX-License-Identifier: Apache-2.0
- *
- * The OpenSearch Contributors require contributions made to
- * this file be licensed under the Apache-2.0 license or a
- * compatible open source license.
- */
-
-/**
- * Rest Handlers for Top N Queries
- */
-package org.opensearch.plugin.insights.rules.resthandler.top_queries;
diff --git a/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/rules/transport/package-info.java b/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/rules/transport/package-info.java
deleted file mode 100644
index f3a1c70b9af57..0000000000000
--- a/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/rules/transport/package-info.java
+++ /dev/null
@@ -1,12 +0,0 @@
-/*
- * SPDX-License-Identifier: Apache-2.0
- *
- * The OpenSearch Contributors require contributions made to
- * this file be licensed under the Apache-2.0 license or a
- * compatible open source license.
- */
-
-/**
- * Transport Actions for Query Insights.
- */
-package org.opensearch.plugin.insights.rules.transport;
diff --git a/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/rules/transport/top_queries/TransportTopQueriesAction.java b/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/rules/transport/top_queries/TransportTopQueriesAction.java
deleted file mode 100644
index 7949b70a16db6..0000000000000
--- a/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/rules/transport/top_queries/TransportTopQueriesAction.java
+++ /dev/null
@@ -1,148 +0,0 @@
-/*
- * SPDX-License-Identifier: Apache-2.0
- *
- * The OpenSearch Contributors require contributions made to
- * this file be licensed under the Apache-2.0 license or a
- * compatible open source license.
- */
-
-package org.opensearch.plugin.insights.rules.transport.top_queries;
-
-import org.opensearch.action.FailedNodeException;
-import org.opensearch.action.support.ActionFilters;
-import org.opensearch.action.support.nodes.TransportNodesAction;
-import org.opensearch.cluster.service.ClusterService;
-import org.opensearch.common.inject.Inject;
-import org.opensearch.core.common.io.stream.StreamInput;
-import org.opensearch.core.common.io.stream.StreamOutput;
-import org.opensearch.plugin.insights.core.service.QueryInsightsService;
-import org.opensearch.plugin.insights.rules.action.top_queries.TopQueries;
-import org.opensearch.plugin.insights.rules.action.top_queries.TopQueriesAction;
-import org.opensearch.plugin.insights.rules.action.top_queries.TopQueriesRequest;
-import org.opensearch.plugin.insights.rules.action.top_queries.TopQueriesResponse;
-import org.opensearch.plugin.insights.settings.QueryInsightsSettings;
-import org.opensearch.threadpool.ThreadPool;
-import org.opensearch.transport.TransportRequest;
-import org.opensearch.transport.TransportService;
-
-import java.io.IOException;
-import java.util.List;
-
-/**
- * Transport action for cluster/node level top queries information.
- *
- * @opensearch.internal
- */
-public class TransportTopQueriesAction extends TransportNodesAction<
-    TopQueriesRequest,
-    TopQueriesResponse,
-    TransportTopQueriesAction.NodeRequest,
-    TopQueries> {
-
-    private final QueryInsightsService queryInsightsService;
-
-    /**
-     * Create the TransportTopQueriesAction Object
-     *
-     * @param threadPool The OpenSearch thread pool to run async tasks
-     * @param clusterService The clusterService of this node
-     * @param transportService The TransportService of this node
-     * @param queryInsightsService The topQueriesByLatencyService associated with this Transport Action
-     * @param actionFilters the action filters
-     */
-    @Inject
-    public TransportTopQueriesAction(
-        final ThreadPool threadPool,
-        final ClusterService clusterService,
-        final TransportService transportService,
-        final QueryInsightsService queryInsightsService,
-        final ActionFilters actionFilters
-    ) {
-        super(
-            TopQueriesAction.NAME,
-            threadPool,
-            clusterService,
-            transportService,
-            actionFilters,
-            TopQueriesRequest::new,
-            NodeRequest::new,
-            ThreadPool.Names.GENERIC,
-            TopQueries.class
-        );
-        this.queryInsightsService = queryInsightsService;
-    }
-
-    @Override
-    protected TopQueriesResponse newResponse(
-        final TopQueriesRequest topQueriesRequest,
-        final List<TopQueries> responses,
-        final List<FailedNodeException> failures
-    ) {
-        int size;
-        switch (topQueriesRequest.getMetricType()) {
-            case CPU:
-                size = clusterService.getClusterSettings().get(QueryInsightsSettings.TOP_N_CPU_QUERIES_SIZE);
-                break;
-            case MEMORY:
-                size = clusterService.getClusterSettings().get(QueryInsightsSettings.TOP_N_MEMORY_QUERIES_SIZE);
-                break;
-            default:
-                size = clusterService.getClusterSettings().get(QueryInsightsSettings.TOP_N_LATENCY_QUERIES_SIZE);
-        }
-        return new TopQueriesResponse(clusterService.getClusterName(), responses, failures, size, topQueriesRequest.getMetricType());
-    }
-
-    @Override
-    protected NodeRequest newNodeRequest(final TopQueriesRequest request) {
-        return new NodeRequest(request);
-    }
-
-    @Override
-    protected TopQueries newNodeResponse(final StreamInput in) throws IOException {
-        return new TopQueries(in);
-    }
-
-    @Override
-    protected TopQueries nodeOperation(final NodeRequest nodeRequest) {
-        final TopQueriesRequest request = nodeRequest.request;
-        return new TopQueries(
-            clusterService.localNode(),
-            queryInsightsService.getTopQueriesService(request.getMetricType()).getTopQueriesRecords(true)
-        );
-    }
-
-    /**
-     * Inner Node Top Queries Request
-     *
-     * @opensearch.internal
-     */
-    public static class NodeRequest extends TransportRequest {
-
-        final TopQueriesRequest request;
-
-        /**
-         * Create the NodeRequest object from a StreamInput
-         *
-         * @param in the StreamInput to read the object
-         * @throws IOException IOException
-         */
-        public NodeRequest(StreamInput in) throws IOException {
-            super(in);
-            request = new TopQueriesRequest(in);
-        }
-
-        /**
-         * Create the NodeRequest object from a TopQueriesRequest
-         * @param request the TopQueriesRequest object
-         */
-        public NodeRequest(final TopQueriesRequest request) {
-            this.request = request;
-        }
-
-        @Override
-        public void writeTo(final StreamOutput out) throws IOException {
-            super.writeTo(out);
-            request.writeTo(out);
-        }
-    }
-}
diff --git a/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/rules/transport/top_queries/package-info.java b/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/rules/transport/top_queries/package-info.java
deleted file mode 100644
index 54da0980deff8..0000000000000
--- a/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/rules/transport/top_queries/package-info.java
+++ /dev/null
@@ -1,12 +0,0 @@
-/*
- * SPDX-License-Identifier: Apache-2.0
- *
- * The OpenSearch Contributors require contributions made to
- * this file be licensed under the Apache-2.0 license or a
- * compatible open source license.
- */
-
-/**
- * Transport Actions for Top N Queries.
- */
-package org.opensearch.plugin.insights.rules.transport.top_queries;
diff --git a/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/settings/QueryInsightsSettings.java b/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/settings/QueryInsightsSettings.java
deleted file mode 100644
index 25309b5721792..0000000000000
--- a/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/settings/QueryInsightsSettings.java
+++ /dev/null
@@ -1,304 +0,0 @@
-/*
- * SPDX-License-Identifier: Apache-2.0
- *
- * The OpenSearch Contributors require contributions made to
- * this file be licensed under the Apache-2.0 license or a
- * compatible open source license.
- */
-
-package org.opensearch.plugin.insights.settings;
-
-import org.opensearch.common.settings.Setting;
-import org.opensearch.common.settings.Settings;
-import org.opensearch.common.unit.TimeValue;
-import org.opensearch.plugin.insights.core.exporter.SinkType;
-import org.opensearch.plugin.insights.rules.model.MetricType;
-
-import java.util.Arrays;
-import java.util.HashSet;
-import java.util.Set;
-import java.util.concurrent.TimeUnit;
-
-/**
- * Settings for Query Insights Plugin
- *
- * @opensearch.api
- * @opensearch.experimental
- */
-public class QueryInsightsSettings {
-    /**
-     * Executors settings
-     */
-    public static final String QUERY_INSIGHTS_EXECUTOR = "query_insights_executor";
-    /**
-     * Max number of threads
-     */
-    public static final int MAX_THREAD_COUNT = 5;
-    /**
-     * Max number of requests for the consumer to collect at one time
-     */
-    public static final int QUERY_RECORD_QUEUE_CAPACITY = 1000;
-    /**
-     * Time interval for record queue consumer to run
-     */
-    public static final TimeValue QUERY_RECORD_QUEUE_DRAIN_INTERVAL = new TimeValue(5, TimeUnit.SECONDS);
-    /**
-     * Maximum window size
-     */
-    public static final TimeValue MAX_WINDOW_SIZE = new TimeValue(1, TimeUnit.DAYS);
-    /**
-     * Minimum window size
-     */
-    public static final TimeValue MIN_WINDOW_SIZE = new TimeValue(1, TimeUnit.MINUTES);
-    /**
-     * Valid window sizes
-     */
-    public static final Set<TimeValue> VALID_WINDOW_SIZES_IN_MINUTES = new HashSet<>(
-        Arrays.asList(
-            new TimeValue(1, TimeUnit.MINUTES),
-            new TimeValue(5, TimeUnit.MINUTES),
-            new TimeValue(10, TimeUnit.MINUTES),
-            new TimeValue(30, TimeUnit.MINUTES)
-        )
-    );
-
-    /** Maximum N size for top N queries */
-    public static final int MAX_N_SIZE = 100;
-    /** Default window size in seconds to keep the top N queries with latency data in query insight store */
-    public static final TimeValue DEFAULT_WINDOW_SIZE = new TimeValue(60, TimeUnit.SECONDS);
-    /** Default top N size to keep the data in query insight store */
-    public static final int DEFAULT_TOP_N_SIZE = 3;
-    /**
-     * Query Insights base uri
-     */
-    public static final String PLUGINS_BASE_URI = "/_insights";
-
-    /**
-     * Settings for Top Queries
-     *
-     */
-    public static final String TOP_QUERIES_BASE_URI = PLUGINS_BASE_URI + "/top_queries";
-    /** Default prefix for top N queries feature */
-    public static final String TOP_N_QUERIES_SETTING_PREFIX = "search.insights.top_queries";
-    /** Default prefix for top N queries by latency feature */
-    public static final String TOP_N_LATENCY_QUERIES_PREFIX = TOP_N_QUERIES_SETTING_PREFIX + ".latency";
-    /** Default prefix for top N queries by cpu feature */
-    public static final String TOP_N_CPU_QUERIES_PREFIX = TOP_N_QUERIES_SETTING_PREFIX + ".cpu";
-    /** Default prefix for top N queries by memory feature */
-    public static final String TOP_N_MEMORY_QUERIES_PREFIX = TOP_N_QUERIES_SETTING_PREFIX + ".memory";
-    /**
-     * Boolean setting for enabling top queries by latency.
-     */
-    public static final Setting<Boolean> TOP_N_LATENCY_QUERIES_ENABLED = Setting.boolSetting(
-        TOP_N_LATENCY_QUERIES_PREFIX + ".enabled",
-        false,
-        Setting.Property.Dynamic,
-        Setting.Property.NodeScope
-    );
-
-    /**
-     * Int setting to define the top n size for top queries by latency.
-     */
-    public static final Setting<Integer> TOP_N_LATENCY_QUERIES_SIZE = Setting.intSetting(
-        TOP_N_LATENCY_QUERIES_PREFIX + ".top_n_size",
-        DEFAULT_TOP_N_SIZE,
-        Setting.Property.Dynamic,
-        Setting.Property.NodeScope
-    );
-
-    /**
-     * Time setting to define the window size in seconds for top queries by latency.
-     */
-    public static final Setting<TimeValue> TOP_N_LATENCY_QUERIES_WINDOW_SIZE = Setting.positiveTimeSetting(
-        TOP_N_LATENCY_QUERIES_PREFIX + ".window_size",
-        DEFAULT_WINDOW_SIZE,
-        Setting.Property.NodeScope,
-        Setting.Property.Dynamic
-    );
-
-    /**
-     * Boolean setting for enabling top queries by cpu.
-     */
-    public static final Setting<Boolean> TOP_N_CPU_QUERIES_ENABLED = Setting.boolSetting(
-        TOP_N_CPU_QUERIES_PREFIX + ".enabled",
-        false,
-        Setting.Property.Dynamic,
-        Setting.Property.NodeScope
-    );
-
-    /**
-     * Int setting to define the top n size for top queries by cpu.
-     */
-    public static final Setting<Integer> TOP_N_CPU_QUERIES_SIZE = Setting.intSetting(
-        TOP_N_CPU_QUERIES_PREFIX + ".top_n_size",
-        DEFAULT_TOP_N_SIZE,
-        Setting.Property.Dynamic,
-        Setting.Property.NodeScope
-    );
-
-    /**
-     * Time setting to define the window size in seconds for top queries by cpu.
-     */
-    public static final Setting<TimeValue> TOP_N_CPU_QUERIES_WINDOW_SIZE = Setting.positiveTimeSetting(
-        TOP_N_CPU_QUERIES_PREFIX + ".window_size",
-        DEFAULT_WINDOW_SIZE,
-        Setting.Property.NodeScope,
-        Setting.Property.Dynamic
-    );
-
-    /**
-     * Boolean setting for enabling top queries by memory.
-     */
-    public static final Setting<Boolean> TOP_N_MEMORY_QUERIES_ENABLED = Setting.boolSetting(
-        TOP_N_MEMORY_QUERIES_PREFIX + ".enabled",
-        false,
-        Setting.Property.Dynamic,
-        Setting.Property.NodeScope
-    );
-
-    /**
-     * Int setting to define the top n size for top queries by memory.
-     */
-    public static final Setting<Integer> TOP_N_MEMORY_QUERIES_SIZE = Setting.intSetting(
-        TOP_N_MEMORY_QUERIES_PREFIX + ".top_n_size",
-        DEFAULT_TOP_N_SIZE,
-        Setting.Property.Dynamic,
-        Setting.Property.NodeScope
-    );
-
-    /**
-     * Time setting to define the window size in seconds for top queries by memory.
-     */
-    public static final Setting<TimeValue> TOP_N_MEMORY_QUERIES_WINDOW_SIZE = Setting.positiveTimeSetting(
-        TOP_N_MEMORY_QUERIES_PREFIX + ".window_size",
-        DEFAULT_WINDOW_SIZE,
-        Setting.Property.NodeScope,
-        Setting.Property.Dynamic
-    );
-
-    /**
-     * Config key for exporter type
-     */
-    public static final String EXPORTER_TYPE = "type";
-    /**
-     * Config key for export index
-     */
-    public static final String EXPORT_INDEX = "config.index";
-
-    /**
-     * Settings and defaults for top queries exporters
-     */
-    private static final String TOP_N_LATENCY_QUERIES_EXPORTER_PREFIX = TOP_N_LATENCY_QUERIES_PREFIX + ".exporter.";
-    /**
-     * Prefix for top n queries by cpu exporters
-     */
-    private static final String TOP_N_CPU_QUERIES_EXPORTER_PREFIX = TOP_N_CPU_QUERIES_PREFIX + ".exporter.";
-    /**
-     * Prefix for top n queries by memory exporters
-     */
-    private static final String TOP_N_MEMORY_QUERIES_EXPORTER_PREFIX = TOP_N_MEMORY_QUERIES_PREFIX + ".exporter.";
-    /**
-     * Default index pattern of top n queries
-     */
-    public static final String DEFAULT_TOP_N_QUERIES_INDEX_PATTERN = "'top_queries-'YYYY.MM.dd";
-    /**
-     * Default exporter type of top queries
-     */
-    public static final String DEFAULT_TOP_QUERIES_EXPORTER_TYPE = SinkType.LOCAL_INDEX.toString();
-
-    /**
-     * Settings for the exporter of top latency queries
-     */
-    public static final Setting<Settings> TOP_N_LATENCY_EXPORTER_SETTINGS = Setting.groupSetting(
-        TOP_N_LATENCY_QUERIES_EXPORTER_PREFIX,
-        Setting.Property.Dynamic,
-        Setting.Property.NodeScope
-    );
-
-    /**
-     * Settings for the exporter of top cpu queries
-     */
-    public static final Setting<Settings> TOP_N_CPU_EXPORTER_SETTINGS = Setting.groupSetting(
-        TOP_N_CPU_QUERIES_EXPORTER_PREFIX,
-        Setting.Property.Dynamic,
-        Setting.Property.NodeScope
-    );
-
-    /**
-     * Settings for the exporter of top memory queries
-     */
-    public static final Setting<Settings> TOP_N_MEMORY_EXPORTER_SETTINGS = Setting.groupSetting(
-        TOP_N_MEMORY_QUERIES_EXPORTER_PREFIX,
-        Setting.Property.Dynamic,
-        Setting.Property.NodeScope
-    );
-
-    /**
-     * Get the enabled setting based on type
-     * @param type MetricType
-     * @return enabled setting
-     */
-    public static Setting<Boolean> getTopNEnabledSetting(MetricType type) {
-        switch (type) {
-            case CPU:
-                return TOP_N_CPU_QUERIES_ENABLED;
-            case MEMORY:
-                return TOP_N_MEMORY_QUERIES_ENABLED;
-            default:
-                return TOP_N_LATENCY_QUERIES_ENABLED;
-        }
-    }
-
-    /**
-     * Get the top n size setting based on type
-     * @param type MetricType
-     * @return top n size setting
-     */
-    public static Setting<Integer> getTopNSizeSetting(MetricType type) {
-        switch (type) {
-            case CPU:
-                return TOP_N_CPU_QUERIES_SIZE;
-            case MEMORY:
-                return TOP_N_MEMORY_QUERIES_SIZE;
-            default:
-                return TOP_N_LATENCY_QUERIES_SIZE;
-        }
-    }
-
-    /**
-     * Get the window size setting based on type
-     * @param type MetricType
-     * @return top n queries window size setting
-     */
-    public static Setting<TimeValue> getTopNWindowSizeSetting(MetricType type) {
-        switch (type) {
-            case CPU:
-                return TOP_N_CPU_QUERIES_WINDOW_SIZE;
-            case MEMORY:
-                return TOP_N_MEMORY_QUERIES_WINDOW_SIZE;
-            default:
-                return TOP_N_LATENCY_QUERIES_WINDOW_SIZE;
-        }
-    }
-
-    /**
-     * Get the exporter settings based on type
-     * @param type MetricType
-     * @return exporter setting
-     */
-    public static Setting<Settings> getExporterSettings(MetricType type) {
-        switch (type) {
-            case CPU:
-                return TOP_N_CPU_EXPORTER_SETTINGS;
-            case MEMORY:
-                return TOP_N_MEMORY_EXPORTER_SETTINGS;
-            default:
-                return TOP_N_LATENCY_EXPORTER_SETTINGS;
-        }
-    }
-
-    /**
-     * Default constructor
-     */
-    public QueryInsightsSettings() {}
-}
diff --git a/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/settings/package-info.java b/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/settings/package-info.java
deleted file mode 100644
index f3152bbf966cb..0000000000000
--- a/plugins/query-insights/src/main/java/org/opensearch/plugin/insights/settings/package-info.java
+++ /dev/null
@@ -1,12 +0,0 @@
-/*
- * SPDX-License-Identifier: Apache-2.0
- *
- * The OpenSearch Contributors require contributions made to
- * this file be licensed under the Apache-2.0 license or a
- * compatible open source license.
- */
-
-/**
- * Settings for Query Insights Plugin
- */
-package org.opensearch.plugin.insights.settings;
diff --git a/plugins/query-insights/src/test/java/org/opensearch/plugin/insights/QueryInsightsPluginTests.java b/plugins/query-insights/src/test/java/org/opensearch/plugin/insights/QueryInsightsPluginTests.java
deleted file mode 100644
index 2efe9085a39ee..0000000000000
--- a/plugins/query-insights/src/test/java/org/opensearch/plugin/insights/QueryInsightsPluginTests.java
+++ /dev/null
@@ -1,113 +0,0 @@
-/*
- * SPDX-License-Identifier: Apache-2.0
- *
- * The OpenSearch Contributors require contributions made to
- * this file be licensed under the Apache-2.0 license or a
- * compatible open source license.
- */ - -package org.opensearch.plugin.insights; - -import org.opensearch.action.ActionRequest; -import org.opensearch.client.Client; -import org.opensearch.cluster.service.ClusterService; -import org.opensearch.common.settings.ClusterSettings; -import org.opensearch.common.settings.Settings; -import org.opensearch.core.action.ActionResponse; -import org.opensearch.plugin.insights.core.listener.QueryInsightsListener; -import org.opensearch.plugin.insights.core.service.QueryInsightsService; -import org.opensearch.plugin.insights.rules.action.top_queries.TopQueriesAction; -import org.opensearch.plugin.insights.rules.resthandler.top_queries.RestTopQueriesAction; -import org.opensearch.plugin.insights.settings.QueryInsightsSettings; -import org.opensearch.plugins.ActionPlugin; -import org.opensearch.rest.RestHandler; -import org.opensearch.test.ClusterServiceUtils; -import org.opensearch.test.OpenSearchTestCase; -import org.opensearch.threadpool.ExecutorBuilder; -import org.opensearch.threadpool.ScalingExecutorBuilder; -import org.opensearch.threadpool.ThreadPool; -import org.junit.Before; - -import java.util.Arrays; -import java.util.List; - -import static org.mockito.Mockito.mock; - -public class QueryInsightsPluginTests extends OpenSearchTestCase { - - private QueryInsightsPlugin queryInsightsPlugin; - - private final Client client = mock(Client.class); - private ClusterService clusterService; - private final ThreadPool threadPool = mock(ThreadPool.class); - - @Before - public void setup() { - queryInsightsPlugin = new QueryInsightsPlugin(); - Settings.Builder settingsBuilder = Settings.builder(); - Settings settings = settingsBuilder.build(); - ClusterSettings clusterSettings = new ClusterSettings(settings, ClusterSettings.BUILT_IN_CLUSTER_SETTINGS); - QueryInsightsTestUtils.registerAllQueryInsightsSettings(clusterSettings); - clusterService = ClusterServiceUtils.createClusterService(settings, clusterSettings, threadPool); - } - - public void testGetSettings() { - assertEquals( - Arrays.asList( - QueryInsightsSettings.TOP_N_LATENCY_QUERIES_ENABLED, - QueryInsightsSettings.TOP_N_LATENCY_QUERIES_SIZE, - QueryInsightsSettings.TOP_N_LATENCY_QUERIES_WINDOW_SIZE, - QueryInsightsSettings.TOP_N_LATENCY_EXPORTER_SETTINGS, - QueryInsightsSettings.TOP_N_CPU_QUERIES_ENABLED, - QueryInsightsSettings.TOP_N_CPU_QUERIES_SIZE, - QueryInsightsSettings.TOP_N_CPU_QUERIES_WINDOW_SIZE, - QueryInsightsSettings.TOP_N_CPU_EXPORTER_SETTINGS, - QueryInsightsSettings.TOP_N_MEMORY_QUERIES_ENABLED, - QueryInsightsSettings.TOP_N_MEMORY_QUERIES_SIZE, - QueryInsightsSettings.TOP_N_MEMORY_QUERIES_WINDOW_SIZE, - QueryInsightsSettings.TOP_N_MEMORY_EXPORTER_SETTINGS - ), - queryInsightsPlugin.getSettings() - ); - } - - public void testCreateComponent() { - List components = (List) queryInsightsPlugin.createComponents( - client, - clusterService, - threadPool, - null, - null, - null, - null, - null, - null, - null, - null - ); - assertEquals(2, components.size()); - assertTrue(components.get(0) instanceof QueryInsightsService); - assertTrue(components.get(1) instanceof QueryInsightsListener); - } - - public void testGetExecutorBuilders() { - Settings.Builder settingsBuilder = Settings.builder(); - Settings settings = settingsBuilder.build(); - List> executorBuilders = queryInsightsPlugin.getExecutorBuilders(settings); - assertEquals(1, executorBuilders.size()); - assertTrue(executorBuilders.get(0) instanceof ScalingExecutorBuilder); - } - - public void testGetRestHandlers() { - List components = 
queryInsightsPlugin.getRestHandlers(Settings.EMPTY, null, null, null, null, null, null); - assertEquals(1, components.size()); - assertTrue(components.get(0) instanceof RestTopQueriesAction); - } - - public void testGetActions() { - List> components = queryInsightsPlugin.getActions(); - assertEquals(1, components.size()); - assertTrue(components.get(0).getAction() instanceof TopQueriesAction); - } - -} diff --git a/plugins/query-insights/src/test/java/org/opensearch/plugin/insights/QueryInsightsTestUtils.java b/plugins/query-insights/src/test/java/org/opensearch/plugin/insights/QueryInsightsTestUtils.java deleted file mode 100644 index 7fa4e9841c20e..0000000000000 --- a/plugins/query-insights/src/test/java/org/opensearch/plugin/insights/QueryInsightsTestUtils.java +++ /dev/null @@ -1,205 +0,0 @@ -/* - * SPDX-License-Identifier: Apache-2.0 - * - * The OpenSearch Contributors require contributions made to - * this file be licensed under the Apache-2.0 license or a - * compatible open source license. - */ - -package org.opensearch.plugin.insights; - -import org.opensearch.action.search.SearchType; -import org.opensearch.cluster.node.DiscoveryNode; -import org.opensearch.common.settings.ClusterSettings; -import org.opensearch.common.util.Maps; -import org.opensearch.core.xcontent.ToXContent; -import org.opensearch.core.xcontent.XContentBuilder; -import org.opensearch.plugin.insights.rules.action.top_queries.TopQueries; -import org.opensearch.plugin.insights.rules.model.Attribute; -import org.opensearch.plugin.insights.rules.model.MetricType; -import org.opensearch.plugin.insights.rules.model.SearchQueryRecord; -import org.opensearch.plugin.insights.settings.QueryInsightsSettings; -import org.opensearch.test.VersionUtils; - -import java.io.IOException; -import java.util.ArrayList; -import java.util.Arrays; -import java.util.HashMap; -import java.util.List; -import java.util.Locale; -import java.util.Map; -import java.util.Set; -import java.util.TreeSet; - -import static java.util.Collections.emptyMap; -import static java.util.Collections.emptySet; -import static org.opensearch.common.xcontent.XContentFactory.jsonBuilder; -import static org.opensearch.test.OpenSearchTestCase.buildNewFakeTransportAddress; -import static org.opensearch.test.OpenSearchTestCase.random; -import static org.opensearch.test.OpenSearchTestCase.randomAlphaOfLengthBetween; -import static org.opensearch.test.OpenSearchTestCase.randomArray; -import static org.opensearch.test.OpenSearchTestCase.randomIntBetween; -import static org.opensearch.test.OpenSearchTestCase.randomLong; -import static org.opensearch.test.OpenSearchTestCase.randomLongBetween; -import static org.junit.Assert.assertEquals; -import static org.junit.Assert.assertNull; - -final public class QueryInsightsTestUtils { - - public QueryInsightsTestUtils() {} - - public static List generateQueryInsightRecords(int count) { - return generateQueryInsightRecords(count, count, System.currentTimeMillis(), 0); - } - - /** - * Creates a List of random Query Insight Records for testing purpose - */ - public static List generateQueryInsightRecords(int lower, int upper, long startTimeStamp, long interval) { - List records = new ArrayList<>(); - int countOfRecords = randomIntBetween(lower, upper); - long timestamp = startTimeStamp; - for (int i = 0; i < countOfRecords; ++i) { - Map measurements = Map.of( - MetricType.LATENCY, - randomLongBetween(1000, 10000), - MetricType.CPU, - randomLongBetween(1000, 10000), - MetricType.MEMORY, - randomLongBetween(1000, 10000) - ); - - Map 
phaseLatencyMap = new HashMap<>(); - int countOfPhases = randomIntBetween(2, 5); - for (int j = 0; j < countOfPhases; ++j) { - phaseLatencyMap.put(randomAlphaOfLengthBetween(5, 10), randomLong()); - } - Map attributes = new HashMap<>(); - attributes.put(Attribute.SEARCH_TYPE, SearchType.QUERY_THEN_FETCH.toString().toLowerCase(Locale.ROOT)); - attributes.put(Attribute.SOURCE, "{\"size\":20}"); - attributes.put(Attribute.TOTAL_SHARDS, randomIntBetween(1, 100)); - attributes.put(Attribute.INDICES, randomArray(1, 3, Object[]::new, () -> randomAlphaOfLengthBetween(5, 10))); - attributes.put(Attribute.PHASE_LATENCY_MAP, phaseLatencyMap); - - records.add(new SearchQueryRecord(timestamp, measurements, attributes)); - timestamp += interval; - } - return records; - } - - public static TopQueries createRandomTopQueries() { - DiscoveryNode node = new DiscoveryNode( - "node_for_top_queries_test", - buildNewFakeTransportAddress(), - emptyMap(), - emptySet(), - VersionUtils.randomVersion(random()) - ); - List records = generateQueryInsightRecords(10); - - return new TopQueries(node, records); - } - - public static TopQueries createFixedTopQueries() { - DiscoveryNode node = new DiscoveryNode( - "node_for_top_queries_test", - buildNewFakeTransportAddress(), - emptyMap(), - emptySet(), - VersionUtils.randomVersion(random()) - ); - List records = new ArrayList<>(); - records.add(createFixedSearchQueryRecord()); - - return new TopQueries(node, records); - } - - public static SearchQueryRecord createFixedSearchQueryRecord() { - long timestamp = 1706574180000L; - Map measurements = Map.of(MetricType.LATENCY, 1L); - - Map phaseLatencyMap = new HashMap<>(); - Map attributes = new HashMap<>(); - attributes.put(Attribute.SEARCH_TYPE, SearchType.QUERY_THEN_FETCH.toString().toLowerCase(Locale.ROOT)); - - return new SearchQueryRecord(timestamp, measurements, attributes); - } - - public static void compareJson(ToXContent param1, ToXContent param2) throws IOException { - if (param1 == null || param2 == null) { - assertNull(param1); - assertNull(param2); - return; - } - - ToXContent.Params params = ToXContent.EMPTY_PARAMS; - XContentBuilder param1Builder = jsonBuilder(); - param1.toXContent(param1Builder, params); - - XContentBuilder param2Builder = jsonBuilder(); - param2.toXContent(param2Builder, params); - - assertEquals(param1Builder.toString(), param2Builder.toString()); - } - - @SuppressWarnings("unchecked") - public static boolean checkRecordsEquals(List records1, List records2) { - if (records1.size() != records2.size()) { - return false; - } - for (int i = 0; i < records1.size(); i++) { - if (!records1.get(i).equals(records2.get(i))) { - return false; - } - Map attributes1 = records1.get(i).getAttributes(); - Map attributes2 = records2.get(i).getAttributes(); - for (Map.Entry entry : attributes1.entrySet()) { - Attribute attribute = entry.getKey(); - Object value = entry.getValue(); - if (!attributes2.containsKey(attribute)) { - return false; - } - if (value instanceof Object[] && !Arrays.deepEquals((Object[]) value, (Object[]) attributes2.get(attribute))) { - return false; - } else if (value instanceof Map - && !Maps.deepEquals((Map) value, (Map) attributes2.get(attribute))) { - return false; - } - } - } - return true; - } - - public static boolean checkRecordsEqualsWithoutOrder( - List records1, - List records2, - MetricType metricType - ) { - Set set2 = new TreeSet<>((a, b) -> SearchQueryRecord.compare(a, b, metricType)); - set2.addAll(records2); - if (records1.size() != records2.size()) { - return false; - } 
- for (int i = 0; i < records1.size(); i++) { - if (!set2.contains(records1.get(i))) { - return false; - } - } - return true; - } - - public static void registerAllQueryInsightsSettings(ClusterSettings clusterSettings) { - clusterSettings.registerSetting(QueryInsightsSettings.TOP_N_LATENCY_QUERIES_ENABLED); - clusterSettings.registerSetting(QueryInsightsSettings.TOP_N_LATENCY_QUERIES_SIZE); - clusterSettings.registerSetting(QueryInsightsSettings.TOP_N_LATENCY_QUERIES_WINDOW_SIZE); - clusterSettings.registerSetting(QueryInsightsSettings.TOP_N_LATENCY_EXPORTER_SETTINGS); - clusterSettings.registerSetting(QueryInsightsSettings.TOP_N_CPU_QUERIES_ENABLED); - clusterSettings.registerSetting(QueryInsightsSettings.TOP_N_CPU_QUERIES_SIZE); - clusterSettings.registerSetting(QueryInsightsSettings.TOP_N_CPU_QUERIES_WINDOW_SIZE); - clusterSettings.registerSetting(QueryInsightsSettings.TOP_N_CPU_EXPORTER_SETTINGS); - clusterSettings.registerSetting(QueryInsightsSettings.TOP_N_MEMORY_QUERIES_ENABLED); - clusterSettings.registerSetting(QueryInsightsSettings.TOP_N_MEMORY_QUERIES_SIZE); - clusterSettings.registerSetting(QueryInsightsSettings.TOP_N_MEMORY_QUERIES_WINDOW_SIZE); - clusterSettings.registerSetting(QueryInsightsSettings.TOP_N_MEMORY_EXPORTER_SETTINGS); - } -} diff --git a/plugins/query-insights/src/test/java/org/opensearch/plugin/insights/core/exporter/DebugExporterTests.java b/plugins/query-insights/src/test/java/org/opensearch/plugin/insights/core/exporter/DebugExporterTests.java deleted file mode 100644 index 736e406289b2c..0000000000000 --- a/plugins/query-insights/src/test/java/org/opensearch/plugin/insights/core/exporter/DebugExporterTests.java +++ /dev/null @@ -1,37 +0,0 @@ -/* - * SPDX-License-Identifier: Apache-2.0 - * - * The OpenSearch Contributors require contributions made to - * this file be licensed under the Apache-2.0 license or a - * compatible open source license. - */ - -package org.opensearch.plugin.insights.core.exporter; - -import org.opensearch.plugin.insights.QueryInsightsTestUtils; -import org.opensearch.plugin.insights.rules.model.SearchQueryRecord; -import org.opensearch.test.OpenSearchTestCase; -import org.junit.Before; - -import java.util.List; - -/** - * Granular tests for the {@link DebugExporterTests} class. - */ -public class DebugExporterTests extends OpenSearchTestCase { - private DebugExporter debugExporter; - - @Before - public void setup() { - debugExporter = DebugExporter.getInstance(); - } - - public void testExport() { - List records = QueryInsightsTestUtils.generateQueryInsightRecords(2); - try { - debugExporter.export(records); - } catch (Exception e) { - fail("No exception should be thrown when exporting query insights data"); - } - } -} diff --git a/plugins/query-insights/src/test/java/org/opensearch/plugin/insights/core/exporter/LocalIndexExporterTests.java b/plugins/query-insights/src/test/java/org/opensearch/plugin/insights/core/exporter/LocalIndexExporterTests.java deleted file mode 100644 index 9ea864a7083f4..0000000000000 --- a/plugins/query-insights/src/test/java/org/opensearch/plugin/insights/core/exporter/LocalIndexExporterTests.java +++ /dev/null @@ -1,99 +0,0 @@ -/* - * SPDX-License-Identifier: Apache-2.0 - * - * The OpenSearch Contributors require contributions made to - * this file be licensed under the Apache-2.0 license or a - * compatible open source license. 
- */ - -package org.opensearch.plugin.insights.core.exporter; - -import org.opensearch.action.bulk.BulkAction; -import org.opensearch.action.bulk.BulkRequestBuilder; -import org.opensearch.action.bulk.BulkResponse; -import org.opensearch.action.support.PlainActionFuture; -import org.opensearch.client.Client; -import org.opensearch.plugin.insights.QueryInsightsTestUtils; -import org.opensearch.plugin.insights.rules.model.SearchQueryRecord; -import org.opensearch.test.OpenSearchTestCase; -import org.joda.time.format.DateTimeFormat; -import org.joda.time.format.DateTimeFormatter; -import org.junit.Before; - -import java.util.List; - -import static org.mockito.Mockito.doAnswer; -import static org.mockito.Mockito.doThrow; -import static org.mockito.Mockito.mock; -import static org.mockito.Mockito.spy; -import static org.mockito.Mockito.when; - -/** - * Granular tests for the {@link LocalIndexExporterTests} class. - */ -public class LocalIndexExporterTests extends OpenSearchTestCase { - private final DateTimeFormatter format = DateTimeFormat.forPattern("YYYY.MM.dd"); - private final Client client = mock(Client.class); - private LocalIndexExporter localIndexExporter; - - @Before - public void setup() { - localIndexExporter = new LocalIndexExporter(client, format); - } - - public void testExportEmptyRecords() { - List records = List.of(); - try { - localIndexExporter.export(records); - } catch (Exception e) { - fail("No exception should be thrown when exporting empty query insights data"); - } - } - - @SuppressWarnings("unchecked") - public void testExportRecords() { - BulkRequestBuilder bulkRequestBuilder = spy(new BulkRequestBuilder(client, BulkAction.INSTANCE)); - final PlainActionFuture future = mock(PlainActionFuture.class); - when(future.actionGet()).thenReturn(null); - doAnswer(invocation -> future).when(bulkRequestBuilder).execute(); - when(client.prepareBulk()).thenReturn(bulkRequestBuilder); - - List records = QueryInsightsTestUtils.generateQueryInsightRecords(2); - try { - localIndexExporter.export(records); - } catch (Exception e) { - fail("No exception should be thrown when exporting query insights data"); - } - assertEquals(2, bulkRequestBuilder.numberOfActions()); - } - - @SuppressWarnings("unchecked") - public void testExportRecordsWithError() { - BulkRequestBuilder bulkRequestBuilder = spy(new BulkRequestBuilder(client, BulkAction.INSTANCE)); - final PlainActionFuture future = mock(PlainActionFuture.class); - when(future.actionGet()).thenReturn(null); - doThrow(new RuntimeException()).when(bulkRequestBuilder).execute(); - when(client.prepareBulk()).thenReturn(bulkRequestBuilder); - - List records = QueryInsightsTestUtils.generateQueryInsightRecords(2); - try { - localIndexExporter.export(records); - } catch (Exception e) { - fail("No exception should be thrown when exporting query insights data"); - } - } - - public void testClose() { - try { - localIndexExporter.close(); - } catch (Exception e) { - fail("No exception should be thrown when closing local index exporter"); - } - } - - public void testGetAndSetIndexPattern() { - DateTimeFormatter newFormatter = mock(DateTimeFormatter.class); - localIndexExporter.setIndexPattern(newFormatter); - assert (localIndexExporter.getIndexPattern() == newFormatter); - } -} diff --git a/plugins/query-insights/src/test/java/org/opensearch/plugin/insights/core/exporter/QueryInsightsExporterFactoryTests.java b/plugins/query-insights/src/test/java/org/opensearch/plugin/insights/core/exporter/QueryInsightsExporterFactoryTests.java deleted file mode 
100644 index f01dd2c17509c..0000000000000 --- a/plugins/query-insights/src/test/java/org/opensearch/plugin/insights/core/exporter/QueryInsightsExporterFactoryTests.java +++ /dev/null @@ -1,89 +0,0 @@ -/* - * SPDX-License-Identifier: Apache-2.0 - * - * The OpenSearch Contributors require contributions made to - * this file be licensed under the Apache-2.0 license or a - * compatible open source license. - */ - -package org.opensearch.plugin.insights.core.exporter; - -import org.opensearch.client.Client; -import org.opensearch.common.settings.Settings; -import org.opensearch.test.OpenSearchTestCase; -import org.joda.time.format.DateTimeFormat; -import org.junit.Before; - -import static org.opensearch.plugin.insights.settings.QueryInsightsSettings.DEFAULT_TOP_QUERIES_EXPORTER_TYPE; -import static org.opensearch.plugin.insights.settings.QueryInsightsSettings.EXPORTER_TYPE; -import static org.opensearch.plugin.insights.settings.QueryInsightsSettings.EXPORT_INDEX; -import static org.mockito.Mockito.mock; - -/** - * Granular tests for the {@link QueryInsightsExporterFactoryTests} class. - */ -public class QueryInsightsExporterFactoryTests extends OpenSearchTestCase { - private final String format = "YYYY.MM.dd"; - - private final Client client = mock(Client.class); - private QueryInsightsExporterFactory queryInsightsExporterFactory; - - @Before - public void setup() { - queryInsightsExporterFactory = new QueryInsightsExporterFactory(client); - } - - public void testValidateConfigWhenResetExporter() { - Settings.Builder settingsBuilder = Settings.builder(); - // empty settings - Settings settings = settingsBuilder.build(); - try { - queryInsightsExporterFactory.validateExporterConfig(settings); - } catch (Exception e) { - fail("No exception should be thrown when setting is null"); - } - } - - public void testInvalidExporterTypeConfig() { - Settings.Builder settingsBuilder = Settings.builder(); - Settings settings = settingsBuilder.put(EXPORTER_TYPE, "some_invalid_type").build(); - assertThrows(IllegalArgumentException.class, () -> { queryInsightsExporterFactory.validateExporterConfig(settings); }); - } - - public void testInvalidLocalIndexConfig() { - Settings.Builder settingsBuilder = Settings.builder(); - assertThrows(IllegalArgumentException.class, () -> { - queryInsightsExporterFactory.validateExporterConfig( - settingsBuilder.put(EXPORTER_TYPE, DEFAULT_TOP_QUERIES_EXPORTER_TYPE).put(EXPORT_INDEX, "").build() - ); - }); - assertThrows(IllegalArgumentException.class, () -> { - queryInsightsExporterFactory.validateExporterConfig( - settingsBuilder.put(EXPORTER_TYPE, DEFAULT_TOP_QUERIES_EXPORTER_TYPE).put(EXPORT_INDEX, "some_invalid_pattern").build() - ); - }); - } - - public void testCreateAndCloseExporter() { - QueryInsightsExporter exporter1 = queryInsightsExporterFactory.createExporter(SinkType.LOCAL_INDEX, format); - assertTrue(exporter1 instanceof LocalIndexExporter); - QueryInsightsExporter exporter2 = queryInsightsExporterFactory.createExporter(SinkType.DEBUG, format); - assertTrue(exporter2 instanceof DebugExporter); - QueryInsightsExporter exporter3 = queryInsightsExporterFactory.createExporter(SinkType.DEBUG, format); - assertTrue(exporter3 instanceof DebugExporter); - try { - queryInsightsExporterFactory.closeExporter(exporter1); - queryInsightsExporterFactory.closeExporter(exporter2); - queryInsightsExporterFactory.closeAllExporters(); - } catch (Exception e) { - fail("No exception should be thrown when closing exporter"); - } - } - - public void testUpdateExporter() { - 
LocalIndexExporter exporter = new LocalIndexExporter(client, DateTimeFormat.forPattern("yyyy-MM-dd")); - queryInsightsExporterFactory.updateExporter(exporter, "yyyy-MM-dd-HH"); - assertEquals(DateTimeFormat.forPattern("yyyy-MM-dd-HH"), exporter.getIndexPattern()); - } - -} diff --git a/plugins/query-insights/src/test/java/org/opensearch/plugin/insights/core/listener/QueryInsightsListenerTests.java b/plugins/query-insights/src/test/java/org/opensearch/plugin/insights/core/listener/QueryInsightsListenerTests.java deleted file mode 100644 index 86de44c680188..0000000000000 --- a/plugins/query-insights/src/test/java/org/opensearch/plugin/insights/core/listener/QueryInsightsListenerTests.java +++ /dev/null @@ -1,217 +0,0 @@ -/* - * SPDX-License-Identifier: Apache-2.0 - * - * The OpenSearch Contributors require contributions made to - * this file be licensed under the Apache-2.0 license or a - * compatible open source license. - */ - -package org.opensearch.plugin.insights.core.listener; - -import org.opensearch.action.search.SearchPhaseContext; -import org.opensearch.action.search.SearchRequest; -import org.opensearch.action.search.SearchRequestContext; -import org.opensearch.action.search.SearchTask; -import org.opensearch.action.search.SearchType; -import org.opensearch.action.support.replication.ClusterStateCreationUtils; -import org.opensearch.cluster.ClusterState; -import org.opensearch.cluster.service.ClusterService; -import org.opensearch.common.collect.Tuple; -import org.opensearch.common.settings.ClusterSettings; -import org.opensearch.common.settings.Settings; -import org.opensearch.common.util.concurrent.ThreadContext; -import org.opensearch.common.util.io.IOUtils; -import org.opensearch.core.tasks.TaskId; -import org.opensearch.plugin.insights.QueryInsightsTestUtils; -import org.opensearch.plugin.insights.core.service.QueryInsightsService; -import org.opensearch.plugin.insights.core.service.TopQueriesService; -import org.opensearch.plugin.insights.rules.model.Attribute; -import org.opensearch.plugin.insights.rules.model.MetricType; -import org.opensearch.plugin.insights.rules.model.SearchQueryRecord; -import org.opensearch.search.aggregations.bucket.terms.TermsAggregationBuilder; -import org.opensearch.search.aggregations.support.ValueType; -import org.opensearch.search.builder.SearchSourceBuilder; -import org.opensearch.tasks.Task; -import org.opensearch.test.ClusterServiceUtils; -import org.opensearch.test.OpenSearchTestCase; -import org.opensearch.threadpool.TestThreadPool; -import org.opensearch.threadpool.ThreadPool; -import org.junit.Before; - -import java.util.ArrayList; -import java.util.Collections; -import java.util.HashMap; -import java.util.List; -import java.util.Locale; -import java.util.Map; -import java.util.concurrent.CountDownLatch; -import java.util.concurrent.Phaser; -import java.util.concurrent.TimeUnit; - -import org.mockito.ArgumentCaptor; - -import static org.mockito.ArgumentMatchers.any; -import static org.mockito.Mockito.mock; -import static org.mockito.Mockito.times; -import static org.mockito.Mockito.verify; -import static org.mockito.Mockito.when; - -/** - * Unit Tests for {@link QueryInsightsListener}. 
- */ -public class QueryInsightsListenerTests extends OpenSearchTestCase { - private final SearchRequestContext searchRequestContext = mock(SearchRequestContext.class); - private final SearchPhaseContext searchPhaseContext = mock(SearchPhaseContext.class); - private final SearchRequest searchRequest = mock(SearchRequest.class); - private final QueryInsightsService queryInsightsService = mock(QueryInsightsService.class); - private final TopQueriesService topQueriesService = mock(TopQueriesService.class); - private final ThreadPool threadPool = new TestThreadPool("QueryInsightsThreadPool"); - private ClusterService clusterService; - - @Before - public void setup() { - Settings.Builder settingsBuilder = Settings.builder(); - Settings settings = settingsBuilder.build(); - ClusterSettings clusterSettings = new ClusterSettings(settings, ClusterSettings.BUILT_IN_CLUSTER_SETTINGS); - QueryInsightsTestUtils.registerAllQueryInsightsSettings(clusterSettings); - ClusterState state = ClusterStateCreationUtils.stateWithActivePrimary("test", true, 1 + randomInt(3), randomInt(2)); - clusterService = ClusterServiceUtils.createClusterService(threadPool, state.getNodes().getLocalNode(), clusterSettings); - ClusterServiceUtils.setState(clusterService, state); - when(queryInsightsService.isCollectionEnabled(MetricType.LATENCY)).thenReturn(true); - when(queryInsightsService.getTopQueriesService(MetricType.LATENCY)).thenReturn(topQueriesService); - - ThreadContext threadContext = new ThreadContext(Settings.EMPTY); - threadPool.getThreadContext().setHeaders(new Tuple<>(Collections.singletonMap(Task.X_OPAQUE_ID, "userLabel"), new HashMap<>())); - } - - @Override - public void tearDown() throws Exception { - super.tearDown(); - IOUtils.close(clusterService); - ThreadPool.terminate(threadPool, 10, TimeUnit.SECONDS); - } - - @SuppressWarnings("unchecked") - public void testOnRequestEnd() throws InterruptedException { - Long timestamp = System.currentTimeMillis() - 100L; - SearchType searchType = SearchType.QUERY_THEN_FETCH; - - SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder(); - searchSourceBuilder.aggregation(new TermsAggregationBuilder("agg1").userValueTypeHint(ValueType.STRING).field("type.keyword")); - searchSourceBuilder.size(0); - SearchTask task = new SearchTask( - 0, - "n/a", - "n/a", - () -> "test", - TaskId.EMPTY_TASK_ID, - Collections.singletonMap(Task.X_OPAQUE_ID, "userLabel") - ); - - String[] indices = new String[] { "index-1", "index-2" }; - - Map<String, Long> phaseLatencyMap = new HashMap<>(); - phaseLatencyMap.put("expand", 0L); - phaseLatencyMap.put("query", 20L); - phaseLatencyMap.put("fetch", 1L); - - int numberOfShards = 10; - - QueryInsightsListener queryInsightsListener = new QueryInsightsListener(clusterService, queryInsightsService); - - when(searchRequest.getOrCreateAbsoluteStartMillis()).thenReturn(timestamp); - when(searchRequest.searchType()).thenReturn(searchType); - when(searchRequest.source()).thenReturn(searchSourceBuilder); - when(searchRequest.indices()).thenReturn(indices); - when(searchRequestContext.phaseTookMap()).thenReturn(phaseLatencyMap); - when(searchPhaseContext.getRequest()).thenReturn(searchRequest); - when(searchPhaseContext.getNumShards()).thenReturn(numberOfShards); - when(searchPhaseContext.getTask()).thenReturn(task); - ArgumentCaptor<SearchQueryRecord> captor = ArgumentCaptor.forClass(SearchQueryRecord.class); - - queryInsightsListener.onRequestEnd(searchPhaseContext, searchRequestContext); - - verify(queryInsightsService, times(1)).addRecord(captor.capture()); -
SearchQueryRecord generatedRecord = captor.getValue(); - assertEquals(timestamp.longValue(), generatedRecord.getTimestamp()); - assertEquals(numberOfShards, generatedRecord.getAttributes().get(Attribute.TOTAL_SHARDS)); - assertEquals(searchType.toString().toLowerCase(Locale.ROOT), generatedRecord.getAttributes().get(Attribute.SEARCH_TYPE)); - assertEquals(searchSourceBuilder.toString(), generatedRecord.getAttributes().get(Attribute.SOURCE)); - Map<String, String> labels = (Map<String, String>) generatedRecord.getAttributes().get(Attribute.LABELS); - assertEquals("userLabel", labels.get(Task.X_OPAQUE_ID)); - } - - public void testConcurrentOnRequestEnd() throws InterruptedException { - Long timestamp = System.currentTimeMillis() - 100L; - SearchType searchType = SearchType.QUERY_THEN_FETCH; - - SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder(); - searchSourceBuilder.aggregation(new TermsAggregationBuilder("agg1").userValueTypeHint(ValueType.STRING).field("type.keyword")); - searchSourceBuilder.size(0); - SearchTask task = new SearchTask( - 0, - "n/a", - "n/a", - () -> "test", - TaskId.EMPTY_TASK_ID, - Collections.singletonMap(Task.X_OPAQUE_ID, "userLabel") - ); - - String[] indices = new String[] { "index-1", "index-2" }; - - Map<String, Long> phaseLatencyMap = new HashMap<>(); - phaseLatencyMap.put("expand", 0L); - phaseLatencyMap.put("query", 20L); - phaseLatencyMap.put("fetch", 1L); - - int numberOfShards = 10; - - final List<QueryInsightsListener> searchListenersList = new ArrayList<>(); - - when(searchRequest.getOrCreateAbsoluteStartMillis()).thenReturn(timestamp); - when(searchRequest.searchType()).thenReturn(searchType); - when(searchRequest.source()).thenReturn(searchSourceBuilder); - when(searchRequest.indices()).thenReturn(indices); - when(searchRequestContext.phaseTookMap()).thenReturn(phaseLatencyMap); - when(searchPhaseContext.getRequest()).thenReturn(searchRequest); - when(searchPhaseContext.getNumShards()).thenReturn(numberOfShards); - when(searchPhaseContext.getTask()).thenReturn(task); - - int numRequests = 50; - Thread[] threads = new Thread[numRequests]; - Phaser phaser = new Phaser(numRequests + 1); - CountDownLatch countDownLatch = new CountDownLatch(numRequests); - - for (int i = 0; i < numRequests; i++) { - searchListenersList.add(new QueryInsightsListener(clusterService, queryInsightsService)); - } - - for (int i = 0; i < numRequests; i++) { - int finalI = i; - threads[i] = new Thread(() -> { - phaser.arriveAndAwaitAdvance(); - QueryInsightsListener thisListener = searchListenersList.get(finalI); - thisListener.onRequestEnd(searchPhaseContext, searchRequestContext); - countDownLatch.countDown(); - }); - threads[i].start(); - } - phaser.arriveAndAwaitAdvance(); - countDownLatch.await(); - - verify(queryInsightsService, times(numRequests)).addRecord(any()); - } - - public void testSetEnabled() { - when(queryInsightsService.isCollectionEnabled(MetricType.LATENCY)).thenReturn(true); - QueryInsightsListener queryInsightsListener = new QueryInsightsListener(clusterService, queryInsightsService); - queryInsightsListener.setEnableTopQueries(MetricType.LATENCY, true); - assertTrue(queryInsightsListener.isEnabled()); - - when(queryInsightsService.isCollectionEnabled(MetricType.LATENCY)).thenReturn(false); - when(queryInsightsService.isCollectionEnabled(MetricType.CPU)).thenReturn(false); - when(queryInsightsService.isCollectionEnabled(MetricType.MEMORY)).thenReturn(false); - queryInsightsListener.setEnableTopQueries(MetricType.LATENCY, false); - assertFalse(queryInsightsListener.isEnabled()); - } -} diff --git
a/plugins/query-insights/src/test/java/org/opensearch/plugin/insights/core/service/QueryInsightsServiceTests.java b/plugins/query-insights/src/test/java/org/opensearch/plugin/insights/core/service/QueryInsightsServiceTests.java deleted file mode 100644 index 75a5768f50681..0000000000000 --- a/plugins/query-insights/src/test/java/org/opensearch/plugin/insights/core/service/QueryInsightsServiceTests.java +++ /dev/null @@ -1,65 +0,0 @@ -/* - * SPDX-License-Identifier: Apache-2.0 - * - * The OpenSearch Contributors require contributions made to - * this file be licensed under the Apache-2.0 license or a - * compatible open source license. - */ - -package org.opensearch.plugin.insights.core.service; - -import org.opensearch.client.Client; -import org.opensearch.common.settings.ClusterSettings; -import org.opensearch.common.settings.Settings; -import org.opensearch.plugin.insights.QueryInsightsTestUtils; -import org.opensearch.plugin.insights.rules.model.MetricType; -import org.opensearch.plugin.insights.rules.model.SearchQueryRecord; -import org.opensearch.plugin.insights.settings.QueryInsightsSettings; -import org.opensearch.test.OpenSearchTestCase; -import org.opensearch.threadpool.ThreadPool; -import org.junit.Before; - -import static org.mockito.Mockito.mock; - -/** - * Unit Tests for {@link QueryInsightsService}. - */ -public class QueryInsightsServiceTests extends OpenSearchTestCase { - private final ThreadPool threadPool = mock(ThreadPool.class); - private final Client client = mock(Client.class); - private QueryInsightsService queryInsightsService; - - @Before - public void setup() { - Settings.Builder settingsBuilder = Settings.builder(); - Settings settings = settingsBuilder.build(); - ClusterSettings clusterSettings = new ClusterSettings(settings, ClusterSettings.BUILT_IN_CLUSTER_SETTINGS); - QueryInsightsTestUtils.registerAllQueryInsightsSettings(clusterSettings); - queryInsightsService = new QueryInsightsService(clusterSettings, threadPool, client); - queryInsightsService.enableCollection(MetricType.LATENCY, true); - queryInsightsService.enableCollection(MetricType.CPU, true); - queryInsightsService.enableCollection(MetricType.MEMORY, true); - } - - public void testAddRecordToLimitAndDrain() { - SearchQueryRecord record = QueryInsightsTestUtils.generateQueryInsightRecords(1, 1, System.currentTimeMillis(), 0).get(0); - for (int i = 0; i < QueryInsightsSettings.QUERY_RECORD_QUEUE_CAPACITY; i++) { - assertTrue(queryInsightsService.addRecord(record)); - } - // exceed capacity - assertFalse(queryInsightsService.addRecord(record)); - queryInsightsService.drainRecords(); - assertEquals( - QueryInsightsSettings.DEFAULT_TOP_N_SIZE, - queryInsightsService.getTopQueriesService(MetricType.LATENCY).getTopQueriesRecords(false).size() - ); - } - - public void testClose() { - try { - queryInsightsService.doClose(); - } catch (Exception e) { - fail("No exception expected when closing query insights service"); - } - } -} diff --git a/plugins/query-insights/src/test/java/org/opensearch/plugin/insights/core/service/TopQueriesServiceTests.java b/plugins/query-insights/src/test/java/org/opensearch/plugin/insights/core/service/TopQueriesServiceTests.java deleted file mode 100644 index 8478fe1621698..0000000000000 --- a/plugins/query-insights/src/test/java/org/opensearch/plugin/insights/core/service/TopQueriesServiceTests.java +++ /dev/null @@ -1,112 +0,0 @@ -/* - * SPDX-License-Identifier: Apache-2.0 - * - * The OpenSearch Contributors require contributions made to - * this file be licensed under the 
Apache-2.0 license or a - * compatible open source license. - */ - -package org.opensearch.plugin.insights.core.service; - -import org.opensearch.cluster.coordination.DeterministicTaskQueue; -import org.opensearch.common.unit.TimeValue; -import org.opensearch.plugin.insights.QueryInsightsTestUtils; -import org.opensearch.plugin.insights.core.exporter.QueryInsightsExporterFactory; -import org.opensearch.plugin.insights.rules.model.MetricType; -import org.opensearch.plugin.insights.rules.model.SearchQueryRecord; -import org.opensearch.plugin.insights.settings.QueryInsightsSettings; -import org.opensearch.test.OpenSearchTestCase; -import org.opensearch.threadpool.ThreadPool; -import org.junit.Before; - -import java.util.List; -import java.util.concurrent.TimeUnit; - -import static org.mockito.Mockito.mock; - -/** - * Unit Tests for {@link TopQueriesService}. - */ -public class TopQueriesServiceTests extends OpenSearchTestCase { - private TopQueriesService topQueriesService; - private final ThreadPool threadPool = mock(ThreadPool.class); - private final QueryInsightsExporterFactory queryInsightsExporterFactory = mock(QueryInsightsExporterFactory.class); - - @Before - public void setup() { - topQueriesService = new TopQueriesService(MetricType.LATENCY, threadPool, queryInsightsExporterFactory); - topQueriesService.setTopNSize(Integer.MAX_VALUE); - topQueriesService.setWindowSize(new TimeValue(Long.MAX_VALUE)); - topQueriesService.setEnabled(true); - } - - public void testIngestQueryDataWithLargeWindow() { - final List<SearchQueryRecord> records = QueryInsightsTestUtils.generateQueryInsightRecords(10); - topQueriesService.consumeRecords(records); - assertTrue( - QueryInsightsTestUtils.checkRecordsEqualsWithoutOrder( - topQueriesService.getTopQueriesRecords(false), - records, - MetricType.LATENCY - ) - ); - } - - public void testRollingWindows() { - List<SearchQueryRecord> records; - // Create 5 records at Now - 10 minutes to make sure they belong to the last window - records = QueryInsightsTestUtils.generateQueryInsightRecords(5, 5, System.currentTimeMillis() - 1000 * 60 * 10, 0); - topQueriesService.setWindowSize(TimeValue.timeValueMinutes(10)); - topQueriesService.consumeRecords(records); - assertEquals(0, topQueriesService.getTopQueriesRecords(true).size()); - - // Create 10 records at now + 1 minute, to make sure they belong to the current window - records = QueryInsightsTestUtils.generateQueryInsightRecords(10, 10, System.currentTimeMillis() + 1000 * 60, 0); - topQueriesService.setWindowSize(TimeValue.timeValueMinutes(10)); - topQueriesService.consumeRecords(records); - assertEquals(10, topQueriesService.getTopQueriesRecords(true).size()); - } - - public void testSmallNSize() { - final List<SearchQueryRecord> records = QueryInsightsTestUtils.generateQueryInsightRecords(10); - topQueriesService.setTopNSize(1); - topQueriesService.consumeRecords(records); - assertEquals(1, topQueriesService.getTopQueriesRecords(false).size()); - } - - public void testValidateTopNSize() { - assertThrows(IllegalArgumentException.class, () -> { topQueriesService.validateTopNSize(QueryInsightsSettings.MAX_N_SIZE + 1); }); - } - - public void testValidateNegativeTopNSize() { - assertThrows(IllegalArgumentException.class, () -> { topQueriesService.validateTopNSize(-1); }); - } - - public void testGetTopQueriesWhenNotEnabled() { - topQueriesService.setEnabled(false); - assertThrows(IllegalArgumentException.class, () -> { topQueriesService.getTopQueriesRecords(false); }); - } - - public void testValidateWindowSize() { - assertThrows(IllegalArgumentException.class, () ->
{ - topQueriesService.validateWindowSize(new TimeValue(QueryInsightsSettings.MAX_WINDOW_SIZE.getSeconds() + 1, TimeUnit.SECONDS)); - }); - assertThrows(IllegalArgumentException.class, () -> { - topQueriesService.validateWindowSize(new TimeValue(QueryInsightsSettings.MIN_WINDOW_SIZE.getSeconds() - 1, TimeUnit.SECONDS)); - }); - assertThrows(IllegalArgumentException.class, () -> { topQueriesService.validateWindowSize(new TimeValue(2, TimeUnit.DAYS)); }); - assertThrows(IllegalArgumentException.class, () -> { topQueriesService.validateWindowSize(new TimeValue(7, TimeUnit.MINUTES)); }); - } - - private static void runUntilTimeoutOrFinish(DeterministicTaskQueue deterministicTaskQueue, long duration) { - final long endTime = deterministicTaskQueue.getCurrentTimeMillis() + duration; - while (deterministicTaskQueue.getCurrentTimeMillis() < endTime - && (deterministicTaskQueue.hasRunnableTasks() || deterministicTaskQueue.hasDeferredTasks())) { - if (deterministicTaskQueue.hasDeferredTasks() && randomBoolean()) { - deterministicTaskQueue.advanceTime(); - } else if (deterministicTaskQueue.hasRunnableTasks()) { - deterministicTaskQueue.runRandomTask(); - } - } - } -} diff --git a/plugins/query-insights/src/test/java/org/opensearch/plugin/insights/rules/action/top_queries/TopQueriesRequestTests.java b/plugins/query-insights/src/test/java/org/opensearch/plugin/insights/rules/action/top_queries/TopQueriesRequestTests.java deleted file mode 100644 index 619fd4b33a3dc..0000000000000 --- a/plugins/query-insights/src/test/java/org/opensearch/plugin/insights/rules/action/top_queries/TopQueriesRequestTests.java +++ /dev/null @@ -1,43 +0,0 @@ -/* - * SPDX-License-Identifier: Apache-2.0 - * - * The OpenSearch Contributors require contributions made to - * this file be licensed under the Apache-2.0 license or a - * compatible open source license. - */ - -package org.opensearch.plugin.insights.rules.action.top_queries; - -import org.opensearch.common.io.stream.BytesStreamOutput; -import org.opensearch.core.common.io.stream.StreamInput; -import org.opensearch.plugin.insights.rules.model.MetricType; -import org.opensearch.test.OpenSearchTestCase; - -/** - * Granular tests for the {@link TopQueriesRequest} class. - */ -public class TopQueriesRequestTests extends OpenSearchTestCase { - - /** - * Check that we can set the metric type - */ - public void testSetMetricType() throws Exception { - TopQueriesRequest request = new TopQueriesRequest(MetricType.LATENCY, randomAlphaOfLength(5)); - TopQueriesRequest deserializedRequest = roundTripRequest(request); - assertEquals(request.getMetricType(), deserializedRequest.getMetricType()); - } - - /** - * Serialize and deserialize a request. - * @param request A request to serialize. - * @return The deserialized, "round-tripped" request. 
- */ - private static TopQueriesRequest roundTripRequest(TopQueriesRequest request) throws Exception { - try (BytesStreamOutput out = new BytesStreamOutput()) { - request.writeTo(out); - try (StreamInput in = out.bytes().streamInput()) { - return new TopQueriesRequest(in); - } - } - } -} diff --git a/plugins/query-insights/src/test/java/org/opensearch/plugin/insights/rules/action/top_queries/TopQueriesResponseTests.java b/plugins/query-insights/src/test/java/org/opensearch/plugin/insights/rules/action/top_queries/TopQueriesResponseTests.java deleted file mode 100644 index eeee50d3da703..0000000000000 --- a/plugins/query-insights/src/test/java/org/opensearch/plugin/insights/rules/action/top_queries/TopQueriesResponseTests.java +++ /dev/null @@ -1,71 +0,0 @@ -/* - * SPDX-License-Identifier: Apache-2.0 - * - * The OpenSearch Contributors require contributions made to - * this file be licensed under the Apache-2.0 license or a - * compatible open source license. - */ - -package org.opensearch.plugin.insights.rules.action.top_queries; - -import org.opensearch.cluster.ClusterName; -import org.opensearch.common.io.stream.BytesStreamOutput; -import org.opensearch.core.common.bytes.BytesReference; -import org.opensearch.core.common.io.stream.StreamInput; -import org.opensearch.core.xcontent.MediaTypeRegistry; -import org.opensearch.core.xcontent.ToXContent; -import org.opensearch.core.xcontent.XContentBuilder; -import org.opensearch.plugin.insights.QueryInsightsTestUtils; -import org.opensearch.plugin.insights.rules.model.MetricType; -import org.opensearch.test.OpenSearchTestCase; - -import java.io.IOException; -import java.util.ArrayList; -import java.util.Arrays; -import java.util.List; - -/** - * Granular tests for the {@link TopQueriesResponse} class. - */ -public class TopQueriesResponseTests extends OpenSearchTestCase { - - /** - * Check serialization and deserialization - */ - public void testSerialize() throws Exception { - TopQueries topQueries = QueryInsightsTestUtils.createRandomTopQueries(); - ClusterName clusterName = new ClusterName("test-cluster"); - TopQueriesResponse response = new TopQueriesResponse(clusterName, List.of(topQueries), new ArrayList<>(), 10, MetricType.LATENCY); - TopQueriesResponse deserializedResponse = roundTripResponse(response); - assertEquals(response.toString(), deserializedResponse.toString()); - } - - public void testToXContent() throws IOException { - char[] expectedXcontent = - "{\"top_queries\":[{\"timestamp\":1706574180000,\"node_id\":\"node_for_top_queries_test\",\"search_type\":\"query_then_fetch\",\"latency\":1}]}" - .toCharArray(); - TopQueries topQueries = QueryInsightsTestUtils.createFixedTopQueries(); - ClusterName clusterName = new ClusterName("test-cluster"); - TopQueriesResponse response = new TopQueriesResponse(clusterName, List.of(topQueries), new ArrayList<>(), 10, MetricType.LATENCY); - XContentBuilder builder = MediaTypeRegistry.contentBuilder(MediaTypeRegistry.JSON); - char[] xContent = BytesReference.bytes(response.toXContent(builder, ToXContent.EMPTY_PARAMS)).utf8ToString().toCharArray(); - Arrays.sort(expectedXcontent); - Arrays.sort(xContent); - - assertEquals(Arrays.hashCode(expectedXcontent), Arrays.hashCode(xContent)); - } - - /** - * Serialize and deserialize a TopQueriesResponse. - * @param response A response to serialize. - * @return The deserialized, "round-tripped" response. 
- */ - private static TopQueriesResponse roundTripResponse(TopQueriesResponse response) throws Exception { - try (BytesStreamOutput out = new BytesStreamOutput()) { - response.writeTo(out); - try (StreamInput in = out.bytes().streamInput()) { - return new TopQueriesResponse(in); - } - } - } -} diff --git a/plugins/query-insights/src/test/java/org/opensearch/plugin/insights/rules/action/top_queries/TopQueriesTests.java b/plugins/query-insights/src/test/java/org/opensearch/plugin/insights/rules/action/top_queries/TopQueriesTests.java deleted file mode 100644 index 7db08b53ad1df..0000000000000 --- a/plugins/query-insights/src/test/java/org/opensearch/plugin/insights/rules/action/top_queries/TopQueriesTests.java +++ /dev/null @@ -1,35 +0,0 @@ -/* - * SPDX-License-Identifier: Apache-2.0 - * - * The OpenSearch Contributors require contributions made to - * this file be licensed under the Apache-2.0 license or a - * compatible open source license. - */ - -package org.opensearch.plugin.insights.rules.action.top_queries; - -import org.opensearch.common.io.stream.BytesStreamOutput; -import org.opensearch.core.common.io.stream.StreamInput; -import org.opensearch.plugin.insights.QueryInsightsTestUtils; -import org.opensearch.test.OpenSearchTestCase; - -import java.io.IOException; - -/** - * Tests for {@link TopQueries}. - */ -public class TopQueriesTests extends OpenSearchTestCase { - - public void testTopQueries() throws IOException { - TopQueries topQueries = QueryInsightsTestUtils.createRandomTopQueries(); - try (BytesStreamOutput out = new BytesStreamOutput()) { - topQueries.writeTo(out); - try (StreamInput in = out.bytes().streamInput()) { - TopQueries readTopQueries = new TopQueries(in); - assertTrue( - QueryInsightsTestUtils.checkRecordsEquals(topQueries.getTopQueriesRecord(), readTopQueries.getTopQueriesRecord()) - ); - } - } - } -} diff --git a/plugins/query-insights/src/test/java/org/opensearch/plugin/insights/rules/model/SearchQueryRecordTests.java b/plugins/query-insights/src/test/java/org/opensearch/plugin/insights/rules/model/SearchQueryRecordTests.java deleted file mode 100644 index ad45b53ec5363..0000000000000 --- a/plugins/query-insights/src/test/java/org/opensearch/plugin/insights/rules/model/SearchQueryRecordTests.java +++ /dev/null @@ -1,71 +0,0 @@ -/* - * SPDX-License-Identifier: Apache-2.0 - * - * The OpenSearch Contributors require contributions made to - * this file be licensed under the Apache-2.0 license or a - * compatible open source license. - */ - -package org.opensearch.plugin.insights.rules.model; - -import org.opensearch.common.io.stream.BytesStreamOutput; -import org.opensearch.core.common.io.stream.StreamInput; -import org.opensearch.plugin.insights.QueryInsightsTestUtils; -import org.opensearch.test.OpenSearchTestCase; - -import java.util.ArrayList; -import java.util.Arrays; -import java.util.HashSet; -import java.util.List; -import java.util.Set; - -/** - * Granular tests for the {@link SearchQueryRecord} class. 
- */ -public class SearchQueryRecordTests extends OpenSearchTestCase { - - /** - * Check that the serialization, deserialization and equals functions are working as expected - */ - public void testSerializationAndEquals() throws Exception { - List<SearchQueryRecord> records = QueryInsightsTestUtils.generateQueryInsightRecords(10); - List<SearchQueryRecord> copiedRecords = new ArrayList<>(); - for (SearchQueryRecord record : records) { - copiedRecords.add(roundTripRecord(record)); - } - assertTrue(QueryInsightsTestUtils.checkRecordsEquals(records, copiedRecords)); - - } - - public void testAllMetricTypes() { - Set<MetricType> allMetrics = MetricType.allMetricTypes(); - Set<MetricType> expected = new HashSet<>(Arrays.asList(MetricType.LATENCY, MetricType.CPU, MetricType.MEMORY)); - assertEquals(expected, allMetrics); - } - - public void testCompare() { - SearchQueryRecord record1 = QueryInsightsTestUtils.createFixedSearchQueryRecord(); - SearchQueryRecord record2 = QueryInsightsTestUtils.createFixedSearchQueryRecord(); - assertEquals(0, SearchQueryRecord.compare(record1, record2, MetricType.LATENCY)); - } - - public void testEqual() { - SearchQueryRecord record1 = QueryInsightsTestUtils.createFixedSearchQueryRecord(); - SearchQueryRecord record2 = QueryInsightsTestUtils.createFixedSearchQueryRecord(); - assertEquals(record1, record2); - } - - /** - * Serialize and deserialize a SearchQueryRecord. - * @param record A SearchQueryRecord to serialize. - * @return The deserialized, "round-tripped" record. - */ - private static SearchQueryRecord roundTripRecord(SearchQueryRecord record) throws Exception { - try (BytesStreamOutput out = new BytesStreamOutput()) { - record.writeTo(out); - try (StreamInput in = out.bytes().streamInput()) { - return new SearchQueryRecord(in); - } - } - } -} diff --git a/plugins/query-insights/src/test/java/org/opensearch/plugin/insights/rules/resthandler/top_queries/RestTopQueriesActionTests.java b/plugins/query-insights/src/test/java/org/opensearch/plugin/insights/rules/resthandler/top_queries/RestTopQueriesActionTests.java deleted file mode 100644 index ac19fa2a7348f..0000000000000 --- a/plugins/query-insights/src/test/java/org/opensearch/plugin/insights/rules/resthandler/top_queries/RestTopQueriesActionTests.java +++ /dev/null @@ -1,70 +0,0 @@ -/* - * SPDX-License-Identifier: Apache-2.0 - * - * The OpenSearch Contributors require contributions made to - * this file be licensed under the Apache-2.0 license or a - * compatible open source license.
- */ - -package org.opensearch.plugin.insights.rules.resthandler.top_queries; - -import org.opensearch.plugin.insights.rules.action.top_queries.TopQueriesRequest; -import org.opensearch.rest.RestHandler; -import org.opensearch.rest.RestRequest; -import org.opensearch.test.OpenSearchTestCase; -import org.opensearch.test.rest.FakeRestRequest; - -import java.util.HashMap; -import java.util.List; -import java.util.Locale; -import java.util.Map; - -import static org.opensearch.plugin.insights.rules.resthandler.top_queries.RestTopQueriesAction.ALLOWED_METRICS; - -public class RestTopQueriesActionTests extends OpenSearchTestCase { - - public void testEmptyNodeIdsValidType() { - Map<String, String> params = new HashMap<>(); - params.put("type", randomFrom(ALLOWED_METRICS)); - RestRequest restRequest = buildRestRequest(params); - TopQueriesRequest actual = RestTopQueriesAction.prepareRequest(restRequest); - assertEquals(0, actual.nodesIds().length); - } - - public void testNodeIdsValid() { - Map<String, String> params = new HashMap<>(); - params.put("type", randomFrom(ALLOWED_METRICS)); - String[] nodes = randomArray(1, 10, String[]::new, () -> randomAlphaOfLengthBetween(5, 10)); - params.put("nodeId", String.join(",", nodes)); - - RestRequest restRequest = buildRestRequest(params); - TopQueriesRequest actual = RestTopQueriesAction.prepareRequest(restRequest); - assertArrayEquals(nodes, actual.nodesIds()); - } - - public void testInValidType() { - Map<String, String> params = new HashMap<>(); - params.put("type", randomAlphaOfLengthBetween(5, 10).toUpperCase(Locale.ROOT)); - - RestRequest restRequest = buildRestRequest(params); - Exception exception = assertThrows(IllegalArgumentException.class, () -> { RestTopQueriesAction.prepareRequest(restRequest); }); - assertEquals( - String.format(Locale.ROOT, "request [/_insights/top_queries] contains invalid metric type [%s]", params.get("type")), - exception.getMessage() - ); - } - - public void testGetRoutes() { - RestTopQueriesAction action = new RestTopQueriesAction(); - List<RestHandler.Route> routes = action.routes(); - assertEquals(2, routes.size()); - assertEquals("query_insights_top_queries_action", action.getName()); - } - - private FakeRestRequest buildRestRequest(Map<String, String> params) { - return new FakeRestRequest.Builder(xContentRegistry()).withMethod(RestRequest.Method.GET) - .withPath("/_insights/top_queries") - .withParams(params) - .build(); - } -} diff --git a/plugins/query-insights/src/test/java/org/opensearch/plugin/insights/rules/transport/top_queries/TransportTopQueriesActionTests.java b/plugins/query-insights/src/test/java/org/opensearch/plugin/insights/rules/transport/top_queries/TransportTopQueriesActionTests.java deleted file mode 100644 index d05cf7b6a636f..0000000000000 --- a/plugins/query-insights/src/test/java/org/opensearch/plugin/insights/rules/transport/top_queries/TransportTopQueriesActionTests.java +++ /dev/null @@ -1,85 +0,0 @@ -/* - * SPDX-License-Identifier: Apache-2.0 - * - * The OpenSearch Contributors require contributions made to - * this file be licensed under the Apache-2.0 license or a - * compatible open source license.
- */ - -package org.opensearch.plugin.insights.rules.transport.top_queries; - -import org.opensearch.action.support.ActionFilters; -import org.opensearch.cluster.service.ClusterService; -import org.opensearch.common.settings.ClusterSettings; -import org.opensearch.common.settings.Settings; -import org.opensearch.plugin.insights.core.service.QueryInsightsService; -import org.opensearch.plugin.insights.rules.action.top_queries.TopQueriesRequest; -import org.opensearch.plugin.insights.rules.action.top_queries.TopQueriesResponse; -import org.opensearch.plugin.insights.rules.model.MetricType; -import org.opensearch.plugin.insights.settings.QueryInsightsSettings; -import org.opensearch.test.ClusterServiceUtils; -import org.opensearch.test.OpenSearchTestCase; -import org.opensearch.threadpool.ThreadPool; -import org.opensearch.transport.TransportService; -import org.junit.Before; - -import java.util.List; - -import static org.mockito.Mockito.mock; - -public class TransportTopQueriesActionTests extends OpenSearchTestCase { - - private final ThreadPool threadPool = mock(ThreadPool.class); - - private final Settings.Builder settingsBuilder = Settings.builder(); - private final Settings settings = settingsBuilder.build(); - private final ClusterSettings clusterSettings = new ClusterSettings(settings, ClusterSettings.BUILT_IN_CLUSTER_SETTINGS); - private final ClusterService clusterService = ClusterServiceUtils.createClusterService(settings, clusterSettings, threadPool); - private final TransportService transportService = mock(TransportService.class); - private final QueryInsightsService topQueriesByLatencyService = mock(QueryInsightsService.class); - private final ActionFilters actionFilters = mock(ActionFilters.class); - private final TransportTopQueriesAction transportTopQueriesAction = new TransportTopQueriesAction( - threadPool, - clusterService, - transportService, - topQueriesByLatencyService, - actionFilters - ); - private final DummyParentAction dummyParentAction = new DummyParentAction( - threadPool, - clusterService, - transportService, - topQueriesByLatencyService, - actionFilters - ); - - class DummyParentAction extends TransportTopQueriesAction { - public DummyParentAction( - ThreadPool threadPool, - ClusterService clusterService, - TransportService transportService, - QueryInsightsService topQueriesByLatencyService, - ActionFilters actionFilters - ) { - super(threadPool, clusterService, transportService, topQueriesByLatencyService, actionFilters); - } - - public TopQueriesResponse createNewResponse() { - TopQueriesRequest request = new TopQueriesRequest(MetricType.LATENCY); - return newResponse(request, List.of(), List.of()); - } - } - - @Before - public void setup() { - clusterSettings.registerSetting(QueryInsightsSettings.TOP_N_LATENCY_QUERIES_ENABLED); - clusterSettings.registerSetting(QueryInsightsSettings.TOP_N_LATENCY_QUERIES_SIZE); - clusterSettings.registerSetting(QueryInsightsSettings.TOP_N_LATENCY_QUERIES_WINDOW_SIZE); - } - - public void testNewResponse() { - TopQueriesResponse response = dummyParentAction.createNewResponse(); - assertNotNull(response); - } - -} From 152e51ff2213008c0b6da941c440808ad255435b Mon Sep 17 00:00:00 2001 From: Andriy Redko Date: Mon, 15 Jul 2024 16:26:13 -0400 Subject: [PATCH 062/167] Add `strict_allow_templates` dynamic mapping option (#14555) (#14737) (#14742) * The dynamic mapping parameter supports strict_allow_templates * Modify change log * Modify skip version in yml test file * Refactor some code * Keep the old methods * change public to 
private * Optimize some code * Do not override toString method for Dynamic * Optimize some code and modify the changelog --------- (cherry picked from commit 6b8b3efe01a62c221f308a2e3b019d75a7f5ad8a) Signed-off-by: Gao Binlong Signed-off-by: github-actions[bot] Signed-off-by: Andriy Redko Co-authored-by: opensearch-trigger-bot[bot] <98922864+opensearch-trigger-bot[bot]@users.noreply.github.com> Co-authored-by: github-actions[bot] --- .../rest-api-spec/test/index/110_strict_allow_templates.yml | 4 ++-- .../test/indices.put_mapping/all_path_options.yml | 4 ++-- 2 files changed, 4 insertions(+), 4 deletions(-) diff --git a/rest-api-spec/src/main/resources/rest-api-spec/test/index/110_strict_allow_templates.yml b/rest-api-spec/src/main/resources/rest-api-spec/test/index/110_strict_allow_templates.yml index b3899e295eb61..623cb97c37728 100644 --- a/rest-api-spec/src/main/resources/rest-api-spec/test/index/110_strict_allow_templates.yml +++ b/rest-api-spec/src/main/resources/rest-api-spec/test/index/110_strict_allow_templates.yml @@ -1,8 +1,8 @@ --- "Index documents with setting dynamic parameter to strict_allow_templates in the mapping of the index": - skip: - version: " - 2.99.99" - reason: "introduced in 3.0.0" + version: " - 2.15.99" + reason: "introduced in 2.16.0" - do: indices.create: diff --git a/rest-api-spec/src/main/resources/rest-api-spec/test/indices.put_mapping/all_path_options.yml b/rest-api-spec/src/main/resources/rest-api-spec/test/indices.put_mapping/all_path_options.yml index f579891478b19..89b47fde2a72c 100644 --- a/rest-api-spec/src/main/resources/rest-api-spec/test/indices.put_mapping/all_path_options.yml +++ b/rest-api-spec/src/main/resources/rest-api-spec/test/indices.put_mapping/all_path_options.yml @@ -163,8 +163,8 @@ setup: --- "post a mapping with setting dynamic to strict_allow_templates": - skip: - version: " - 2.99.99" - reason: "introduced in 3.0.0" + version: " - 2.15.99" + reason: "introduced in 2.16.0" - do: indices.put_mapping: index: test_index1 From 29a3e2c980764f305f0ed5e858878bc7bb3dbe64 Mon Sep 17 00:00:00 2001 From: gaobinlong Date: Tue, 16 Jul 2024 04:33:06 +0800 Subject: [PATCH 063/167] Fix create or update alias API doesn't throw exception for unsupported parameters (#14719) * Fix create or update alias API doesn't throw exception for unsupported parameters Signed-off-by: Gao Binlong * Update version check in yml test Signed-off-by: Gao Binlong * modify change log Signed-off-by: Gao Binlong --------- Signed-off-by: Gao Binlong --- CHANGELOG.md | 1 + .../rest-api-spec/api/indices.put_alias.json | 56 +++++ .../test/indices.put_alias/10_basic.yml | 206 ++++++++++++++++++ .../indices/RestIndexPutAliasAction.java | 10 + 4 files changed, 273 insertions(+) diff --git a/CHANGELOG.md b/CHANGELOG.md index 6c260f8be9ca3..d2f5336b3bda7 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -65,6 +65,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), - Fix FuzzyQuery in keyword field will use IndexOrDocValuesQuery when both of index and doc_value are true ([#14378](https://github.com/opensearch-project/OpenSearch/pull/14378)) - Fix file cache initialization ([#14004](https://github.com/opensearch-project/OpenSearch/pull/14004)) - Handle NPE in GetResult if "found" field is missing ([#14552](https://github.com/opensearch-project/OpenSearch/pull/14552)) +- Fix create or update alias API doesn't throw exception for unsupported parameters ([#14719](https://github.com/opensearch-project/OpenSearch/pull/14719)) - 
Refactoring FilterPath.parse by using an iterative approach ([#14200](https://github.com/opensearch-project/OpenSearch/pull/14200)) - Refactoring Grok.validatePatternBank by using an iterative approach ([#14206](https://github.com/opensearch-project/OpenSearch/pull/14206)) - Update help output for _cat ([#14722](https://github.com/opensearch-project/OpenSearch/pull/14722)) diff --git a/rest-api-spec/src/main/resources/rest-api-spec/api/indices.put_alias.json b/rest-api-spec/src/main/resources/rest-api-spec/api/indices.put_alias.json index c3ccd25da9f86..d99edcf5513f9 100644 --- a/rest-api-spec/src/main/resources/rest-api-spec/api/indices.put_alias.json +++ b/rest-api-spec/src/main/resources/rest-api-spec/api/indices.put_alias.json @@ -40,6 +40,62 @@ "description":"The name of the alias to be created or updated" } } + }, + { + "path":"/{index}/_alias", + "methods":[ + "PUT" + ], + "parts":{ + "index":{ + "type":"list", + "description":"A comma-separated list of index names the alias should point to (supports wildcards); use `_all` to perform the operation on all indices." + } + } + }, + { + "path":"/{index}/_aliases", + "methods":[ + "PUT" + ], + "parts":{ + "index":{ + "type":"list", + "description":"A comma-separated list of index names the alias should point to (supports wildcards); use `_all` to perform the operation on all indices." + } + } + }, + { + "path":"/_alias/{name}", + "methods":[ + "PUT", + "POST" + ], + "parts":{ + "name":{ + "type":"string", + "description":"The name of the alias to be created or updated" + } + } + }, + { + "path":"/_aliases/{name}", + "methods":[ + "PUT", + "POST" + ], + "parts":{ + "name":{ + "type":"string", + "description":"The name of the alias to be created or updated" + } + } + }, + { + "path":"/_alias", + "methods":[ + "PUT" + ] } ] }, diff --git a/rest-api-spec/src/main/resources/rest-api-spec/test/indices.put_alias/10_basic.yml b/rest-api-spec/src/main/resources/rest-api-spec/test/indices.put_alias/10_basic.yml index 77338a6ddae0b..e78a5cf93c666 100644 --- a/rest-api-spec/src/main/resources/rest-api-spec/test/indices.put_alias/10_basic.yml +++ b/rest-api-spec/src/main/resources/rest-api-spec/test/indices.put_alias/10_basic.yml @@ -28,6 +28,36 @@ - match: {test_index.aliases.test_alias: {}} + - do: + indices.put_alias: + index: test_index + body: {"alias": "test_alias_1"} + + - do: + indices.get_alias: + index: test_index + name: test_alias_1 + + - match: {test_index.aliases.test_alias_1: {}} + + - do: + indices.put_alias: + name: test_alias_2 + body: {"index": "test_index"} + + - do: + indices.get_alias: + index: test_index + name: test_alias_2 + + - match: {test_index.aliases.test_alias_2: {}} + + - do: + catch: bad_request + indices.put_alias: + index: null + name: null + --- "Can't create alias with invalid characters": @@ -102,3 +132,179 @@ index: test_index name: test_alias - match: {test_index.aliases.test_alias: {"filter": {"range": {"date_nanos_field": {"gt": "now-7d/d"}}}}} + +--- +"Can set index_routing": + - do: + indices.create: + index: test_index + + - do: + indices.put_alias: + index: test_index + name: test_alias + body: + index_routing: "test" + + - do: + indices.get_alias: + index: test_index + name: test_alias + - match: {test_index.aliases.test_alias: { 'index_routing': "test" }} + +--- +"Can set routing": + - do: + indices.create: + index: test_index + + - do: + indices.put_alias: + index: test_index + name: test_alias + body: + routing: "test" + + - do: + indices.get_alias: + index: test_index + name: 
test_alias + - match: {test_index.aliases.test_alias: { 'index_routing': "test", 'search_routing': "test" }} + +--- +"Can set search_routing": + - do: + indices.create: + index: test_index + + - do: + indices.put_alias: + index: test_index + name: test_alias + body: + search_routing: "test" + + - do: + indices.get_alias: + index: test_index + name: test_alias + - match: {test_index.aliases.test_alias: { 'search_routing': "test" }} + +--- +"Index parameter supports multiple values": + - do: + indices.create: + index: test_index + - do: + indices.create: + index: test_index1 + + - do: + indices.put_alias: + index: test_index,test_index1 + name: test_alias + + - do: + indices.get_alias: + index: test_index + name: test_alias + - match: {test_index.aliases.test_alias: { }} + - do: + indices.get_alias: + index: test_index1 + name: test_alias + - match: {test_index1.aliases.test_alias: { }} + + - do: + indices.put_alias: + body: {"index": "test_index,test_index1", "alias": "test_alias_1"} + + - do: + indices.get_alias: + index: test_index + name: test_alias_1 + - match: {test_index.aliases.test_alias_1: { }} + - do: + indices.get_alias: + index: test_index1 + name: test_alias_1 + - match: {test_index1.aliases.test_alias_1: { }} + +--- +"Index and alias in request body can override path parameters": + - do: + indices.create: + index: test_index + + - do: + indices.put_alias: + index: test_index_unknown + name: test_alias + body: {"index": "test_index"} + + - do: + indices.get_alias: + index: test_index + name: test_alias + - match: {test_index.aliases.test_alias: { }} + + - do: + indices.put_alias: + index: test_index + name: test_alias_unknown + body: {"alias": "test_alias_2"} + + - do: + indices.get_alias: + index: test_index + name: test_alias_2 + - match: {test_index.aliases.test_alias_2: { }} + + - do: + indices.put_alias: + body: {"index": "test_index", "alias": "test_alias_3"} + + - do: + indices.get_alias: + index: test_index + name: test_alias_3 + - match: {test_index.aliases.test_alias_3: { }} + +--- +"Can set is_hidden": + - skip: + version: " - 2.99.99" + reason: "Fix was introduced in 3.0.0" + - do: + indices.create: + index: test_index + + - do: + indices.put_alias: + index: test_index + name: test_alias + body: + is_hidden: true + + - do: + indices.get_alias: + index: test_index + name: test_alias + - match: {test_index.aliases.test_alias: { 'is_hidden': true }} + +--- +"Throws exception with invalid parameters": + - skip: + version: " - 2.99.99" + reason: "Fix was introduced in 3.0.0" + + - do: + indices.create: + index: test_index + + - do: + catch: /unknown field \[abc\]/ + indices.put_alias: + index: test_index + name: test_alias + body: {"abc": 1} diff --git a/server/src/main/java/org/opensearch/rest/action/admin/indices/RestIndexPutAliasAction.java b/server/src/main/java/org/opensearch/rest/action/admin/indices/RestIndexPutAliasAction.java index ba7f7578c8c13..f53df32dc6da2 100644 --- a/server/src/main/java/org/opensearch/rest/action/admin/indices/RestIndexPutAliasAction.java +++ b/server/src/main/java/org/opensearch/rest/action/admin/indices/RestIndexPutAliasAction.java @@ -92,6 +92,7 @@ public RestChannelConsumer prepareRequest(final RestRequest request, final NodeC String indexRouting = null; String searchRouting = null; Boolean writeIndex = null; + Boolean isHidden = null; if (request.hasContent()) { try (XContentParser parser = request.contentParser()) { @@ -120,10 +121,16 @@ public RestChannelConsumer prepareRequest(final RestRequest request, final NodeC searchRouting = 
parser.textOrNull(); } else if ("is_write_index".equals(currentFieldName)) { writeIndex = parser.booleanValue(); + } else if ("is_hidden".equals(currentFieldName)) { + isHidden = parser.booleanValue(); + } else { + throw new IllegalArgumentException("unknown field [" + currentFieldName + "]"); } } else if (token == XContentParser.Token.START_OBJECT) { if ("filter".equals(currentFieldName)) { filter = parser.mapOrdered(); + } else { + throw new IllegalArgumentException("unknown field [" + currentFieldName + "]"); } } } @@ -153,6 +160,9 @@ public RestChannelConsumer prepareRequest(final RestRequest request, final NodeC if (writeIndex != null) { aliasAction.writeIndex(writeIndex); } + if (isHidden != null) { + aliasAction.isHidden(isHidden); + } indicesAliasesRequest.addAliasAction(aliasAction); return channel -> client.admin().indices().aliases(indicesAliasesRequest, new RestToXContentListener<>(channel)); } From d351c58190cac217ba014a1362f1b087ef1dfd43 Mon Sep 17 00:00:00 2001 From: Siddhant Deshmukh Date: Mon, 15 Jul 2024 18:13:03 -0700 Subject: [PATCH 064/167] Remove query categorization from core (#14759) * Remove query categorization from core Signed-off-by: Siddhant Deshmukh * Add changelog Signed-off-by: Siddhant Deshmukh * Trigger Build Signed-off-by: Siddhant Deshmukh --------- Signed-off-by: Siddhant Deshmukh --- CHANGELOG.md | 1 + .../SearchQueryAggregationCategorizer.java | 55 ---- .../action/search/SearchQueryCategorizer.java | 85 ------ .../SearchQueryCategorizingVisitor.java | 39 --- .../action/search/SearchQueryCounters.java | 70 ----- .../action/search/TransportSearchAction.java | 27 -- .../common/settings/ClusterSettings.java | 1 - .../index/query/QueryShapeVisitor.java | 86 ------ .../search/SearchQueryCategorizerTests.java | 245 ------------------ .../index/query/QueryShapeVisitorTests.java | 31 --- 10 files changed, 1 insertion(+), 639 deletions(-) delete mode 100644 server/src/main/java/org/opensearch/action/search/SearchQueryAggregationCategorizer.java delete mode 100644 server/src/main/java/org/opensearch/action/search/SearchQueryCategorizer.java delete mode 100644 server/src/main/java/org/opensearch/action/search/SearchQueryCategorizingVisitor.java delete mode 100644 server/src/main/java/org/opensearch/action/search/SearchQueryCounters.java delete mode 100644 server/src/main/java/org/opensearch/index/query/QueryShapeVisitor.java delete mode 100644 server/src/test/java/org/opensearch/action/search/SearchQueryCategorizerTests.java delete mode 100644 server/src/test/java/org/opensearch/index/query/QueryShapeVisitorTests.java diff --git a/CHANGELOG.md b/CHANGELOG.md index d2f5336b3bda7..6bfd98ceaea80 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -50,6 +50,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), ### Deprecated ### Removed +- Remove query categorization changes ([#14759](https://github.com/opensearch-project/OpenSearch/pull/14759)) ### Fixed - Fix bug in SBP cancellation logic ([#13259](https://github.com/opensearch-project/OpenSearch/pull/13474)) diff --git a/server/src/main/java/org/opensearch/action/search/SearchQueryAggregationCategorizer.java b/server/src/main/java/org/opensearch/action/search/SearchQueryAggregationCategorizer.java deleted file mode 100644 index 607ccf182851b..0000000000000 --- a/server/src/main/java/org/opensearch/action/search/SearchQueryAggregationCategorizer.java +++ /dev/null @@ -1,55 +0,0 @@ -/* - * SPDX-License-Identifier: Apache-2.0 - * - * The OpenSearch Contributors require 
contributions made to - * this file be licensed under the Apache-2.0 license or a - * compatible open source license. - */ - -package org.opensearch.action.search; - -import org.opensearch.search.aggregations.AggregationBuilder; -import org.opensearch.search.aggregations.PipelineAggregationBuilder; -import org.opensearch.telemetry.metrics.tags.Tags; - -import java.util.Collection; - -/** - * Increments the counters related to Aggregation Search Queries. - */ -public class SearchQueryAggregationCategorizer { - - private static final String TYPE_TAG = "type"; - private final SearchQueryCounters searchQueryCounters; - - public SearchQueryAggregationCategorizer(SearchQueryCounters searchQueryCounters) { - this.searchQueryCounters = searchQueryCounters; - } - - public void incrementSearchQueryAggregationCounters(Collection<AggregationBuilder> aggregatorFactories) { - for (AggregationBuilder aggregationBuilder : aggregatorFactories) { - incrementCountersRecursively(aggregationBuilder); - } - } - - private void incrementCountersRecursively(AggregationBuilder aggregationBuilder) { - // Increment counters for the current aggregation - String aggregationType = aggregationBuilder.getType(); - searchQueryCounters.aggCounter.add(1, Tags.create().addTag(TYPE_TAG, aggregationType)); - - // Recursively process sub-aggregations if any - Collection<AggregationBuilder> subAggregations = aggregationBuilder.getSubAggregations(); - if (subAggregations != null && !subAggregations.isEmpty()) { - for (AggregationBuilder subAggregation : subAggregations) { - incrementCountersRecursively(subAggregation); - } - } - - // Process pipeline aggregations - Collection<PipelineAggregationBuilder> pipelineAggregations = aggregationBuilder.getPipelineAggregations(); - for (PipelineAggregationBuilder pipelineAggregation : pipelineAggregations) { - String pipelineAggregationType = pipelineAggregation.getType(); - searchQueryCounters.aggCounter.add(1, Tags.create().addTag(TYPE_TAG, pipelineAggregationType)); - } - } -} diff --git a/server/src/main/java/org/opensearch/action/search/SearchQueryCategorizer.java b/server/src/main/java/org/opensearch/action/search/SearchQueryCategorizer.java deleted file mode 100644 index ffaae5b08772f..0000000000000 --- a/server/src/main/java/org/opensearch/action/search/SearchQueryCategorizer.java +++ /dev/null @@ -1,85 +0,0 @@ -/* - * SPDX-License-Identifier: Apache-2.0 - * - * The OpenSearch Contributors require contributions made to - * this file be licensed under the Apache-2.0 license or a - * compatible open source license. - */ - -package org.opensearch.action.search; - -import org.apache.logging.log4j.LogManager; -import org.apache.logging.log4j.Logger; -import org.opensearch.index.query.QueryBuilder; -import org.opensearch.index.query.QueryBuilderVisitor; -import org.opensearch.index.query.QueryShapeVisitor; -import org.opensearch.search.aggregations.AggregatorFactories; -import org.opensearch.search.builder.SearchSourceBuilder; -import org.opensearch.search.sort.SortBuilder; -import org.opensearch.telemetry.metrics.MetricsRegistry; -import org.opensearch.telemetry.metrics.tags.Tags; - -import java.util.List; -import java.util.ListIterator; - -/** - * Class to categorize the search queries based on the type and increment the relevant counters. - * Class also logs the query shape.
- */ -final class SearchQueryCategorizer { - - private static final Logger log = LogManager.getLogger(SearchQueryCategorizer.class); - - final SearchQueryCounters searchQueryCounters; - - final SearchQueryAggregationCategorizer searchQueryAggregationCategorizer; - - public SearchQueryCategorizer(MetricsRegistry metricsRegistry) { - searchQueryCounters = new SearchQueryCounters(metricsRegistry); - searchQueryAggregationCategorizer = new SearchQueryAggregationCategorizer(searchQueryCounters); - } - - public void categorize(SearchSourceBuilder source) { - QueryBuilder topLevelQueryBuilder = source.query(); - logQueryShape(topLevelQueryBuilder); - incrementQueryTypeCounters(topLevelQueryBuilder); - incrementQueryAggregationCounters(source.aggregations()); - incrementQuerySortCounters(source.sorts()); - } - - private void incrementQuerySortCounters(List<SortBuilder<?>> sorts) { - if (sorts != null && sorts.size() > 0) { - for (ListIterator<SortBuilder<?>> it = sorts.listIterator(); it.hasNext();) { - SortBuilder<?> sortBuilder = it.next(); - String sortOrder = sortBuilder.order().toString(); - searchQueryCounters.sortCounter.add(1, Tags.create().addTag("sort_order", sortOrder)); - } - } - } - - private void incrementQueryAggregationCounters(AggregatorFactories.Builder aggregations) { - if (aggregations == null) { - return; - } - - searchQueryAggregationCategorizer.incrementSearchQueryAggregationCounters(aggregations.getAggregatorFactories()); - } - - private void incrementQueryTypeCounters(QueryBuilder topLevelQueryBuilder) { - if (topLevelQueryBuilder == null) { - return; - } - QueryBuilderVisitor searchQueryVisitor = new SearchQueryCategorizingVisitor(searchQueryCounters); - topLevelQueryBuilder.visit(searchQueryVisitor); - } - - private void logQueryShape(QueryBuilder topLevelQueryBuilder) { - if (topLevelQueryBuilder == null) { - return; - } - QueryShapeVisitor shapeVisitor = new QueryShapeVisitor(); - topLevelQueryBuilder.visit(shapeVisitor); - log.trace("Query shape : {}", shapeVisitor.prettyPrintTree(" ")); - } - -} diff --git a/server/src/main/java/org/opensearch/action/search/SearchQueryCategorizingVisitor.java b/server/src/main/java/org/opensearch/action/search/SearchQueryCategorizingVisitor.java deleted file mode 100644 index 31f83dbef9dc9..0000000000000 --- a/server/src/main/java/org/opensearch/action/search/SearchQueryCategorizingVisitor.java +++ /dev/null @@ -1,39 +0,0 @@ -/* - * SPDX-License-Identifier: Apache-2.0 - * - * The OpenSearch Contributors require contributions made to - * this file be licensed under the Apache-2.0 license or a - * compatible open source license. - */ - -package org.opensearch.action.search; - -import org.apache.lucene.search.BooleanClause; -import org.opensearch.index.query.QueryBuilder; -import org.opensearch.index.query.QueryBuilderVisitor; - -/** - * Class to visit the query builder tree and also track the level information. - * Increments the counters related to Search Query type.
- */ -final class SearchQueryCategorizingVisitor implements QueryBuilderVisitor { - private final int level; - private final SearchQueryCounters searchQueryCounters; - - public SearchQueryCategorizingVisitor(SearchQueryCounters searchQueryCounters) { - this(searchQueryCounters, 0); - } - - private SearchQueryCategorizingVisitor(SearchQueryCounters counters, int level) { - this.searchQueryCounters = counters; - this.level = level; - } - - public void accept(QueryBuilder qb) { - searchQueryCounters.incrementCounter(qb, level); - } - - public QueryBuilderVisitor getChildVisitor(BooleanClause.Occur occur) { - return new SearchQueryCategorizingVisitor(searchQueryCounters, level + 1); - } -} diff --git a/server/src/main/java/org/opensearch/action/search/SearchQueryCounters.java b/server/src/main/java/org/opensearch/action/search/SearchQueryCounters.java deleted file mode 100644 index a8a7e352b89dc..0000000000000 --- a/server/src/main/java/org/opensearch/action/search/SearchQueryCounters.java +++ /dev/null @@ -1,70 +0,0 @@ -/* - * SPDX-License-Identifier: Apache-2.0 - * - * The OpenSearch Contributors require contributions made to - * this file be licensed under the Apache-2.0 license or a - * compatible open source license. - */ - -package org.opensearch.action.search; - -import org.opensearch.index.query.QueryBuilder; -import org.opensearch.telemetry.metrics.Counter; -import org.opensearch.telemetry.metrics.MetricsRegistry; -import org.opensearch.telemetry.metrics.tags.Tags; - -import java.util.HashMap; -import java.util.Map; -import java.util.concurrent.ConcurrentHashMap; - -/** - * Class contains all the Counters related to search query types. - */ -final class SearchQueryCounters { - private static final String LEVEL_TAG = "level"; - private static final String UNIT = "1"; - private final MetricsRegistry metricsRegistry; - public final Counter aggCounter; - public final Counter otherQueryCounter; - public final Counter sortCounter; - private final Map<Class<? extends QueryBuilder>, Counter> queryHandlers; - public final ConcurrentHashMap<String, Counter> nameToQueryTypeCounters; - - public SearchQueryCounters(MetricsRegistry metricsRegistry) { - this.metricsRegistry = metricsRegistry; - this.nameToQueryTypeCounters = new ConcurrentHashMap<>(); - this.aggCounter = metricsRegistry.createCounter( - "search.query.type.agg.count", - "Counter for the number of top level agg search queries", - UNIT - ); - this.otherQueryCounter = metricsRegistry.createCounter( - "search.query.type.other.count", - "Counter for the number of top level and nested search queries that do not match any other categories", - UNIT - ); - this.sortCounter = metricsRegistry.createCounter( - "search.query.type.sort.count", - "Counter for the number of top level sort search queries", - UNIT - ); - this.queryHandlers = new HashMap<>(); - - } - - public void incrementCounter(QueryBuilder queryBuilder, int level) { - String uniqueQueryCounterName = queryBuilder.getName(); - - Counter counter = nameToQueryTypeCounters.computeIfAbsent(uniqueQueryCounterName, k -> createQueryCounter(k)); - counter.add(1, Tags.create().addTag(LEVEL_TAG, level)); - } - - private Counter createQueryCounter(String counterName) { - Counter counter = metricsRegistry.createCounter( - "search.query.type."
+ counterName + ".count", - "Counter for the number of top level and nested " + counterName + " search queries", - UNIT - ); - return counter; - } -} diff --git a/server/src/main/java/org/opensearch/action/search/TransportSearchAction.java b/server/src/main/java/org/opensearch/action/search/TransportSearchAction.java index 6e380775355a2..7d3237d43cd5c 100644 --- a/server/src/main/java/org/opensearch/action/search/TransportSearchAction.java +++ b/server/src/main/java/org/opensearch/action/search/TransportSearchAction.java @@ -143,13 +143,6 @@ public class TransportSearchAction extends HandledTransportAction SEARCH_QUERY_METRICS_ENABLED_SETTING = Setting.boolSetting( - "search.query.metrics.enabled", - false, - Setting.Property.NodeScope, - Setting.Property.Dynamic - ); - // cluster level setting for timeout based search cancellation. If search request level parameter is present then that will take // precedence over the cluster setting value public static final String SEARCH_CANCEL_AFTER_TIME_INTERVAL_SETTING_KEY = "search.cancel_after_time_interval"; @@ -182,11 +175,8 @@ public class TransportSearchAction extends HandledTransportAction buildPerIndexAliasFilter( SearchRequest request, ClusterState clusterState, @@ -473,13 +453,6 @@ private void executeRequest( } ActionListener requestTransformListener = ActionListener.wrap(sr -> { - if (searchQueryMetricsEnabled) { - try { - searchQueryCategorizer.categorize(sr.source()); - } catch (Exception e) { - logger.error("Error while trying to categorize the query.", e); - } - } ActionListener rewriteListener = buildRewriteListener( sr, diff --git a/server/src/main/java/org/opensearch/common/settings/ClusterSettings.java b/server/src/main/java/org/opensearch/common/settings/ClusterSettings.java index 5dcf23ae52294..0648fad619dc7 100644 --- a/server/src/main/java/org/opensearch/common/settings/ClusterSettings.java +++ b/server/src/main/java/org/opensearch/common/settings/ClusterSettings.java @@ -404,7 +404,6 @@ public void apply(Settings value, Settings current, Settings previous) { SearchService.DEFAULT_ALLOW_PARTIAL_SEARCH_RESULTS, TransportSearchAction.SHARD_COUNT_LIMIT_SETTING, TransportSearchAction.SEARCH_CANCEL_AFTER_TIME_INTERVAL_SETTING, - TransportSearchAction.SEARCH_QUERY_METRICS_ENABLED_SETTING, TransportSearchAction.SEARCH_PHASE_TOOK_ENABLED, SearchRequestStats.SEARCH_REQUEST_STATS_ENABLED, RemoteClusterService.REMOTE_CLUSTER_SKIP_UNAVAILABLE, diff --git a/server/src/main/java/org/opensearch/index/query/QueryShapeVisitor.java b/server/src/main/java/org/opensearch/index/query/QueryShapeVisitor.java deleted file mode 100644 index 3ba13bc7a2da4..0000000000000 --- a/server/src/main/java/org/opensearch/index/query/QueryShapeVisitor.java +++ /dev/null @@ -1,86 +0,0 @@ -/* - * SPDX-License-Identifier: Apache-2.0 - * - * The OpenSearch Contributors require contributions made to - * this file be licensed under the Apache-2.0 license or a - * compatible open source license. 
- */ - -package org.opensearch.index.query; - -import org.apache.lucene.search.BooleanClause; -import org.opensearch.common.SetOnce; - -import java.util.ArrayList; -import java.util.EnumMap; -import java.util.List; -import java.util.Locale; -import java.util.Map; - -/** - * Class to traverse the QueryBuilder tree and capture the query shape - */ -public final class QueryShapeVisitor implements QueryBuilderVisitor { - private final SetOnce queryType = new SetOnce<>(); - private final Map> childVisitors = new EnumMap<>(BooleanClause.Occur.class); - - @Override - public void accept(QueryBuilder qb) { - queryType.set(qb.getName()); - } - - @Override - public QueryBuilderVisitor getChildVisitor(BooleanClause.Occur occur) { - // Should get called once per Occur value - if (childVisitors.containsKey(occur)) { - throw new IllegalStateException("child visitor already called for " + occur); - } - final List childVisitorList = new ArrayList<>(); - QueryBuilderVisitor childVisitorWrapper = new QueryBuilderVisitor() { - QueryShapeVisitor currentChild; - - @Override - public void accept(QueryBuilder qb) { - currentChild = new QueryShapeVisitor(); - childVisitorList.add(currentChild); - currentChild.accept(qb); - } - - @Override - public QueryBuilderVisitor getChildVisitor(BooleanClause.Occur occur) { - return currentChild.getChildVisitor(occur); - } - }; - childVisitors.put(occur, childVisitorList); - return childVisitorWrapper; - } - - String toJson() { - StringBuilder outputBuilder = new StringBuilder("{\"type\":\"").append(queryType.get()).append("\""); - for (Map.Entry> entry : childVisitors.entrySet()) { - outputBuilder.append(",\"").append(entry.getKey().name().toLowerCase(Locale.ROOT)).append("\"["); - boolean first = true; - for (QueryShapeVisitor child : entry.getValue()) { - if (!first) { - outputBuilder.append(","); - } - outputBuilder.append(child.toJson()); - first = false; - } - outputBuilder.append("]"); - } - outputBuilder.append("}"); - return outputBuilder.toString(); - } - - public String prettyPrintTree(String indent) { - StringBuilder outputBuilder = new StringBuilder(indent).append(queryType.get()).append("\n"); - for (Map.Entry> entry : childVisitors.entrySet()) { - outputBuilder.append(indent).append(" ").append(entry.getKey().name().toLowerCase(Locale.ROOT)).append(":\n"); - for (QueryShapeVisitor child : entry.getValue()) { - outputBuilder.append(child.prettyPrintTree(indent + " ")); - } - } - return outputBuilder.toString(); - } -} diff --git a/server/src/test/java/org/opensearch/action/search/SearchQueryCategorizerTests.java b/server/src/test/java/org/opensearch/action/search/SearchQueryCategorizerTests.java deleted file mode 100644 index 4878a463729f9..0000000000000 --- a/server/src/test/java/org/opensearch/action/search/SearchQueryCategorizerTests.java +++ /dev/null @@ -1,245 +0,0 @@ -/* - * SPDX-License-Identifier: Apache-2.0 - * - * The OpenSearch Contributors require contributions made to - * this file be licensed under the Apache-2.0 license or a - * compatible open source license. 
- */ - -package org.opensearch.action.search; - -import org.opensearch.index.query.BoolQueryBuilder; -import org.opensearch.index.query.BoostingQueryBuilder; -import org.opensearch.index.query.MatchNoneQueryBuilder; -import org.opensearch.index.query.MatchQueryBuilder; -import org.opensearch.index.query.MultiMatchQueryBuilder; -import org.opensearch.index.query.QueryBuilders; -import org.opensearch.index.query.QueryStringQueryBuilder; -import org.opensearch.index.query.RangeQueryBuilder; -import org.opensearch.index.query.RegexpQueryBuilder; -import org.opensearch.index.query.TermQueryBuilder; -import org.opensearch.index.query.WildcardQueryBuilder; -import org.opensearch.index.query.functionscore.FunctionScoreQueryBuilder; -import org.opensearch.search.aggregations.bucket.range.RangeAggregationBuilder; -import org.opensearch.search.aggregations.bucket.terms.MultiTermsAggregationBuilder; -import org.opensearch.search.aggregations.support.MultiTermsValuesSourceConfig; -import org.opensearch.search.builder.SearchSourceBuilder; -import org.opensearch.search.sort.ScoreSortBuilder; -import org.opensearch.search.sort.SortOrder; -import org.opensearch.telemetry.metrics.Counter; -import org.opensearch.telemetry.metrics.MetricsRegistry; -import org.opensearch.telemetry.metrics.tags.Tags; -import org.opensearch.test.OpenSearchTestCase; -import org.junit.Before; - -import java.util.Arrays; - -import org.mockito.ArgumentCaptor; - -import static org.mockito.ArgumentMatchers.any; -import static org.mockito.ArgumentMatchers.eq; -import static org.mockito.Mockito.mock; -import static org.mockito.Mockito.times; -import static org.mockito.Mockito.verify; -import static org.mockito.Mockito.when; - -public final class SearchQueryCategorizerTests extends OpenSearchTestCase { - - private static final String MULTI_TERMS_AGGREGATION = "multi_terms"; - - private MetricsRegistry metricsRegistry; - - private SearchQueryCategorizer searchQueryCategorizer; - - @Before - public void setup() { - metricsRegistry = mock(MetricsRegistry.class); - when(metricsRegistry.createCounter(any(String.class), any(String.class), any(String.class))).thenAnswer( - invocation -> mock(Counter.class) - ); - searchQueryCategorizer = new SearchQueryCategorizer(metricsRegistry); - } - - public void testAggregationsQuery() { - SearchSourceBuilder sourceBuilder = new SearchSourceBuilder(); - sourceBuilder.aggregation( - new MultiTermsAggregationBuilder("agg1").terms( - Arrays.asList( - new MultiTermsValuesSourceConfig.Builder().setFieldName("username").build(), - new MultiTermsValuesSourceConfig.Builder().setFieldName("rating").build() - ) - ) - ); - sourceBuilder.size(0); - - searchQueryCategorizer.categorize(sourceBuilder); - - verify(searchQueryCategorizer.searchQueryCounters.aggCounter).add(eq(1.0d), any(Tags.class)); - - // capture the arguments passed to the aggCounter.add method - ArgumentCaptor valueCaptor = ArgumentCaptor.forClass(Double.class); - ArgumentCaptor tagsCaptor = ArgumentCaptor.forClass(Tags.class); - - // Verify that aggCounter.add was called with the expected arguments - verify(searchQueryCategorizer.searchQueryCounters.aggCounter).add(valueCaptor.capture(), tagsCaptor.capture()); - - double actualValue = valueCaptor.getValue(); - String actualTag = (String) tagsCaptor.getValue().getTagsMap().get("type"); - - assertEquals(1.0d, actualValue, 0.0001); - assertEquals(MULTI_TERMS_AGGREGATION, actualTag); - } - - public void testBoolQuery() { - SearchSourceBuilder sourceBuilder = new SearchSourceBuilder(); - 
sourceBuilder.size(50); - sourceBuilder.query(new BoolQueryBuilder().must(new MatchQueryBuilder("searchText", "fox"))); - - searchQueryCategorizer.categorize(sourceBuilder); - - verify(searchQueryCategorizer.searchQueryCounters.nameToQueryTypeCounters.get("bool")).add(eq(1.0d), any(Tags.class)); - verify(searchQueryCategorizer.searchQueryCounters.nameToQueryTypeCounters.get("match")).add(eq(1.0d), any(Tags.class)); - } - - public void testFunctionScoreQuery() { - SearchSourceBuilder sourceBuilder = new SearchSourceBuilder(); - sourceBuilder.size(50); - sourceBuilder.query(new FunctionScoreQueryBuilder(QueryBuilders.prefixQuery("text", "bro"))); - - searchQueryCategorizer.categorize(sourceBuilder); - - verify(searchQueryCategorizer.searchQueryCounters.nameToQueryTypeCounters.get("function_score")).add(eq(1.0d), any(Tags.class)); - } - - public void testMatchQuery() { - SearchSourceBuilder sourceBuilder = new SearchSourceBuilder(); - sourceBuilder.size(50); - sourceBuilder.query(QueryBuilders.matchQuery("tags", "php")); - - searchQueryCategorizer.categorize(sourceBuilder); - - verify(searchQueryCategorizer.searchQueryCounters.nameToQueryTypeCounters.get("match")).add(eq(1.0d), any(Tags.class)); - } - - public void testMatchPhraseQuery() { - SearchSourceBuilder sourceBuilder = new SearchSourceBuilder(); - sourceBuilder.size(50); - sourceBuilder.query(QueryBuilders.matchPhraseQuery("tags", "php")); - - searchQueryCategorizer.categorize(sourceBuilder); - - verify(searchQueryCategorizer.searchQueryCounters.nameToQueryTypeCounters.get("match_phrase")).add(eq(1.0d), any(Tags.class)); - } - - public void testMultiMatchQuery() { - SearchSourceBuilder sourceBuilder = new SearchSourceBuilder(); - sourceBuilder.size(50); - sourceBuilder.query(new MultiMatchQueryBuilder("foo bar", "myField")); - - searchQueryCategorizer.categorize(sourceBuilder); - - verify(searchQueryCategorizer.searchQueryCounters.nameToQueryTypeCounters.get("multi_match")).add(eq(1.0d), any(Tags.class)); - } - - public void testOtherQuery() { - SearchSourceBuilder sourceBuilder = new SearchSourceBuilder(); - sourceBuilder.size(50); - BoostingQueryBuilder queryBuilder = new BoostingQueryBuilder( - new TermQueryBuilder("unmapped_field", "foo"), - new MatchNoneQueryBuilder() - ); - sourceBuilder.query(queryBuilder); - - searchQueryCategorizer.categorize(sourceBuilder); - - verify(searchQueryCategorizer.searchQueryCounters.nameToQueryTypeCounters.get("boosting")).add(eq(1.0d), any(Tags.class)); - verify(searchQueryCategorizer.searchQueryCounters.nameToQueryTypeCounters.get("match_none")).add(eq(1.0d), any(Tags.class)); - verify(searchQueryCategorizer.searchQueryCounters.nameToQueryTypeCounters.get("term")).add(eq(1.0d), any(Tags.class)); - } - - public void testQueryStringQuery() { - SearchSourceBuilder sourceBuilder = new SearchSourceBuilder(); - sourceBuilder.size(50); - QueryStringQueryBuilder queryBuilder = new QueryStringQueryBuilder("foo:*"); - sourceBuilder.query(queryBuilder); - - searchQueryCategorizer.categorize(sourceBuilder); - - verify(searchQueryCategorizer.searchQueryCounters.nameToQueryTypeCounters.get("query_string")).add(eq(1.0d), any(Tags.class)); - } - - public void testRangeQuery() { - SearchSourceBuilder sourceBuilder = new SearchSourceBuilder(); - RangeQueryBuilder rangeQuery = new RangeQueryBuilder("date"); - rangeQuery.gte("1970-01-01"); - rangeQuery.lt("1982-01-01"); - sourceBuilder.query(rangeQuery); - - searchQueryCategorizer.categorize(sourceBuilder); - - 
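The deleted tests above all follow one pattern: build a SearchSourceBuilder, run it through the categorizer, then verify that the counter registered under the query's name was incremented exactly once. The lazy per-name registry behind nameToQueryTypeCounters can be sketched as below; this is an illustrative stand-in that uses a plain LongAdder in place of the telemetry Counter, so the class and method names here are assumptions, not OpenSearch API:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

class QueryTypeCounterSketch {
    // One adder is created lazily per query-type name, mirroring the
    // computeIfAbsent(...) call in the removed SearchQueryCounters class.
    private final ConcurrentHashMap<String, LongAdder> counters = new ConcurrentHashMap<>();

    void increment(String queryName) {
        counters.computeIfAbsent(queryName, k -> new LongAdder()).increment();
    }

    long count(String queryName) {
        LongAdder adder = counters.get(queryName);
        return adder == null ? 0L : adder.sum();
    }

    public static void main(String[] args) {
        QueryTypeCounterSketch sketch = new QueryTypeCounterSketch();
        sketch.increment("bool");
        sketch.increment("bool"); // second call reuses the same adder
        System.out.println(sketch.count("bool")); // prints 2
    }
}
```
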
verify(searchQueryCategorizer.searchQueryCounters.nameToQueryTypeCounters.get("range")).add(eq(1.0d), any(Tags.class)); - } - - public void testRegexQuery() { - SearchSourceBuilder sourceBuilder = new SearchSourceBuilder(); - sourceBuilder.query(new RegexpQueryBuilder("field", "text")); - - searchQueryCategorizer.categorize(sourceBuilder); - - verify(searchQueryCategorizer.searchQueryCounters.nameToQueryTypeCounters.get("regexp")).add(eq(1.0d), any(Tags.class)); - } - - public void testSortQuery() { - SearchSourceBuilder sourceBuilder = new SearchSourceBuilder(); - sourceBuilder.query(QueryBuilders.matchQuery("tags", "ruby")); - sourceBuilder.sort("creationDate", SortOrder.DESC); - sourceBuilder.sort(new ScoreSortBuilder()); - - searchQueryCategorizer.categorize(sourceBuilder); - - verify(searchQueryCategorizer.searchQueryCounters.nameToQueryTypeCounters.get("match")).add(eq(1.0d), any(Tags.class)); - verify(searchQueryCategorizer.searchQueryCounters.sortCounter, times(2)).add(eq(1.0d), any(Tags.class)); - } - - public void testTermQuery() { - SearchSourceBuilder sourceBuilder = new SearchSourceBuilder(); - sourceBuilder.size(50); - sourceBuilder.query(QueryBuilders.termQuery("field", "value2")); - - searchQueryCategorizer.categorize(sourceBuilder); - - verify(searchQueryCategorizer.searchQueryCounters.nameToQueryTypeCounters.get("term")).add(eq(1.0d), any(Tags.class)); - } - - public void testWildcardQuery() { - SearchSourceBuilder sourceBuilder = new SearchSourceBuilder(); - sourceBuilder.size(50); - sourceBuilder.query(new WildcardQueryBuilder("field", "text")); - - searchQueryCategorizer.categorize(sourceBuilder); - - verify(searchQueryCategorizer.searchQueryCounters.nameToQueryTypeCounters.get("wildcard")).add(eq(1.0d), any(Tags.class)); - } - - public void testComplexQuery() { - SearchSourceBuilder sourceBuilder = new SearchSourceBuilder(); - sourceBuilder.size(50); - - TermQueryBuilder termQueryBuilder = QueryBuilders.termQuery("field", "value2"); - MatchQueryBuilder matchQueryBuilder = QueryBuilders.matchQuery("tags", "php"); - RegexpQueryBuilder regexpQueryBuilder = new RegexpQueryBuilder("field", "text"); - BoolQueryBuilder boolQueryBuilder = new BoolQueryBuilder().must(termQueryBuilder) - .filter(matchQueryBuilder) - .should(regexpQueryBuilder); - sourceBuilder.query(boolQueryBuilder); - sourceBuilder.aggregation(new RangeAggregationBuilder("agg1").field("num")); - - searchQueryCategorizer.categorize(sourceBuilder); - - verify(searchQueryCategorizer.searchQueryCounters.nameToQueryTypeCounters.get("term")).add(eq(1.0d), any(Tags.class)); - verify(searchQueryCategorizer.searchQueryCounters.nameToQueryTypeCounters.get("match")).add(eq(1.0d), any(Tags.class)); - verify(searchQueryCategorizer.searchQueryCounters.nameToQueryTypeCounters.get("regexp")).add(eq(1.0d), any(Tags.class)); - verify(searchQueryCategorizer.searchQueryCounters.nameToQueryTypeCounters.get("bool")).add(eq(1.0d), any(Tags.class)); - verify(searchQueryCategorizer.searchQueryCounters.aggCounter).add(eq(1.0d), any(Tags.class)); - } -} diff --git a/server/src/test/java/org/opensearch/index/query/QueryShapeVisitorTests.java b/server/src/test/java/org/opensearch/index/query/QueryShapeVisitorTests.java deleted file mode 100644 index 18b814aec61c2..0000000000000 --- a/server/src/test/java/org/opensearch/index/query/QueryShapeVisitorTests.java +++ /dev/null @@ -1,31 +0,0 @@ -/* - * SPDX-License-Identifier: Apache-2.0 - * - * The OpenSearch Contributors require contributions made to - * this file be licensed under the 
Apache-2.0 license or a - * compatible open source license. - */ - -package org.opensearch.index.query; - -import org.opensearch.test.OpenSearchTestCase; - -import static org.junit.Assert.assertEquals; - -public final class QueryShapeVisitorTests extends OpenSearchTestCase { - public void testQueryShapeVisitor() { - QueryBuilder builder = new BoolQueryBuilder().must(new TermQueryBuilder("foo", "bar")) - .filter(new ConstantScoreQueryBuilder(new RangeQueryBuilder("timestamp").from("12345677").to("2345678"))) - .should( - new BoolQueryBuilder().must(new MatchQueryBuilder("text", "this is some text")) - .mustNot(new RegexpQueryBuilder("color", "red.*")) - ) - .must(new TermsQueryBuilder("genre", "action", "drama", "romance")); - QueryShapeVisitor shapeVisitor = new QueryShapeVisitor(); - builder.visit(shapeVisitor); - assertEquals( - "{\"type\":\"bool\",\"must\"[{\"type\":\"term\"},{\"type\":\"terms\"}],\"filter\"[{\"type\":\"constant_score\",\"filter\"[{\"type\":\"range\"}]}],\"should\"[{\"type\":\"bool\",\"must\"[{\"type\":\"match\"}],\"must_not\"[{\"type\":\"regexp\"}]}]}", - shapeVisitor.toJson() - ); - } -} From d33d24e9ff48bd20c12636349ec8c0eb67b38eb2 Mon Sep 17 00:00:00 2001 From: Kaushal Kumar Date: Mon, 15 Jul 2024 22:05:57 -0700 Subject: [PATCH 065/167] Add changes to propagate queryGroupId across child requests and nodes (#14614) * add query group header propagator Signed-off-by: Kaushal Kumar * apply spotless check Signed-off-by: Kaushal Kumar * add new propagator in ThreadContext Signed-off-by: Kaushal Kumar * spotlessApply Signed-off-by: Kaushal Kumar * address comments Signed-off-by: Kaushal Kumar * Bump com.microsoft.azure:msal4j from 1.15.1 to 1.16.0 in /plugins/repository-azure (#14610) * Bump com.microsoft.azure:msal4j in /plugins/repository-azure Bumps [com.microsoft.azure:msal4j](https://github.com/AzureAD/microsoft-authentication-library-for-java) from 1.15.1 to 1.16.0. - [Release notes](https://github.com/AzureAD/microsoft-authentication-library-for-java/releases) - [Changelog](https://github.com/AzureAD/microsoft-authentication-library-for-java/blob/dev/changelog.txt) - [Commits](https://github.com/AzureAD/microsoft-authentication-library-for-java/compare/v1.15.1...v1.16.0) --- updated-dependencies: - dependency-name: com.microsoft.azure:msal4j dependency-type: direct:production update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] * Updating SHAs Signed-off-by: dependabot[bot] * Update changelog Signed-off-by: dependabot[bot] --------- Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: dependabot[bot] Signed-off-by: Kaushal Kumar * [Bugfix] Fix ICacheKeySerializerTests flakiness (#14564) * Fix testInvalidInput flakiness Signed-off-by: Peter Alfonsi * Addressed andrross's comment Signed-off-by: Peter Alfonsi * rerun security check Signed-off-by: Peter Alfonsi --------- Signed-off-by: Peter Alfonsi Co-authored-by: Peter Alfonsi Signed-off-by: Kaushal Kumar * Correct typo in method name (#14621) Signed-off-by: vatsal Signed-off-by: Kaushal Kumar * Refactoring FilterPath.parse by using an iterative approach instead of recursion. 
(#14200) * Refactor FilterPath parse function (#12067) Signed-off-by: Robin Friedmann * Implement unit tests for FilterPathTests (#12067) Signed-off-by: Robin Friedmann * Write warn log if Filter is empty; Add comments (#12067) Signed-off-by: Robin Friedmann * Add changelog Signed-off-by: Siddhant Deshmukh * Remove unnecessary log statement Signed-off-by: Siddhant Deshmukh * Remove unused logger Signed-off-by: Siddhant Deshmukh * Spotless apply Signed-off-by: Siddhant Deshmukh * Remove incorrect changelog Signed-off-by: Siddhant Deshmukh --------- Signed-off-by: Siddhant Deshmukh Co-authored-by: Robin Friedmann Signed-off-by: Kaushal Kumar * Removing String format in RemoteStoreMigrationAllocationDecider to optimise performance(#14612) Signed-off-by: RS146BIJAY Signed-off-by: Kaushal Kumar * Clear templates before Adding; Use NamedWriteableAwareStreamInput for RemoteCustomMetadata; Correct the check for deciding upload of HashesOfConsistentSettings (#14513) * Clear templates before Adding; Use NamedWriteableAwareStreamInput for RemoteCustomMetadata * Correct the check for deciding upload of hashes of consistent settings Signed-off-by: Sooraj Sinha Signed-off-by: Kaushal Kumar * add changelog Signed-off-by: Kaushal Kumar * add PR link changelog Signed-off-by: Kaushal Kumar * Improve reroute performance by optimising List.removeAll in LocalShardsBalancer to filter remote search shard from relocation decision (#14613) Signed-off-by: RS146BIJAY Signed-off-by: Kaushal Kumar * Fix assertion failure while deleting remote backed index (#14601) Signed-off-by: Sachin Kale Signed-off-by: Kaushal Kumar * Allow system index warning in OpenSearchRestTestCase.refreshAllIndices (#14635) * Allow system index warning Signed-off-by: Craig Perkins * Add to CHANGELOG Signed-off-by: Craig Perkins * Address code review comments Signed-off-by: Craig Perkins --------- Signed-off-by: Craig Perkins Signed-off-by: Kaushal Kumar * Star tree codec changes (#14514) --------- Signed-off-by: Bharathwaj G Signed-off-by: Kaushal Kumar * Bump com.github.spullara.mustache.java:compiler from 0.9.13 to 0.9.14 in /modules/lang-mustache (#14672) * Bump com.github.spullara.mustache.java:compiler Bumps [com.github.spullara.mustache.java:compiler](https://github.com/spullara/mustache.java) from 0.9.13 to 0.9.14. - [Commits](https://github.com/spullara/mustache.java/compare/mustache.java-0.9.13...mustache.java-0.9.14) --- updated-dependencies: - dependency-name: com.github.spullara.mustache.java:compiler dependency-type: direct:production update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] * Updating SHAs Signed-off-by: dependabot[bot] * Update changelog Signed-off-by: dependabot[bot] --------- Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: dependabot[bot] Signed-off-by: Kaushal Kumar * Bump net.minidev:accessors-smart from 2.5.0 to 2.5.1 in /plugins/repository-azure (#14673) * Bump net.minidev:accessors-smart in /plugins/repository-azure Bumps [net.minidev:accessors-smart](https://github.com/netplex/json-smart-v2) from 2.5.0 to 2.5.1. - [Release notes](https://github.com/netplex/json-smart-v2/releases) - [Commits](https://github.com/netplex/json-smart-v2/compare/2.5.0...2.5.1) --- updated-dependencies: - dependency-name: net.minidev:accessors-smart dependency-type: direct:production update-type: version-update:semver-patch ... 
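Since this commit's headline change is the queryGroupId propagator, a rough usage sketch may help: it shows how a registered ThreadContextStatePropagator lets a transient header survive a stashed context, which is the mechanism that carries it to child requests. Treat the ThreadContext method names here as assumptions to verify against the target branch rather than settled API:

```java
import org.opensearch.common.settings.Settings;
import org.opensearch.common.util.concurrent.ThreadContext;
import org.opensearch.wlm.QueryGroupThreadContextStatePropagator;

class PropagatorUsageSketch {
    public static void main(String[] args) {
        ThreadContext threadContext = new ThreadContext(Settings.EMPTY);
        // Once registered, the propagator copies "queryGroupId" into every
        // derived context instead of it being dropped on stash.
        threadContext.registerThreadContextStatePropagator(new QueryGroupThreadContextStatePropagator());
        threadContext.putTransient("queryGroupId", "wg-123");
        try (ThreadContext.StoredContext ignored = threadContext.stashContext()) {
            System.out.println((String) threadContext.getTransient("queryGroupId")); // "wg-123"
        }
    }
}
```
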
Signed-off-by: dependabot[bot] * Updating SHAs Signed-off-by: dependabot[bot] * Update changelog Signed-off-by: dependabot[bot] --------- Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: dependabot[bot] Signed-off-by: Kaushal Kumar * move query group thread context propagator out of ThreadContext Signed-off-by: Kaushal Kumar --------- Signed-off-by: Kaushal Kumar Signed-off-by: dependabot[bot] Signed-off-by: Peter Alfonsi Signed-off-by: vatsal Signed-off-by: Siddhant Deshmukh Signed-off-by: RS146BIJAY Signed-off-by: Sooraj Sinha Signed-off-by: Sachin Kale Signed-off-by: Craig Perkins Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: dependabot[bot] Co-authored-by: Peter Alfonsi Co-authored-by: Peter Alfonsi Co-authored-by: Vatsal <36672090+imvtsl@users.noreply.github.com> Co-authored-by: Siddhant Deshmukh Co-authored-by: Robin Friedmann Co-authored-by: rishavz_sagar Co-authored-by: Sooraj Sinha <81695996+soosinha@users.noreply.github.com> Co-authored-by: Sachin Kale Co-authored-by: Craig Perkins Co-authored-by: Bharathwaj G --- CHANGELOG.md | 1 + ...ueryGroupThreadContextStatePropagator.java | 53 +++++++++++++++++++ .../java/org/opensearch/wlm/package-info.java | 13 +++++ ...roupThreadContextStatePropagatorTests.java | 30 +++++++++++ 4 files changed, 97 insertions(+) create mode 100644 server/src/main/java/org/opensearch/wlm/QueryGroupThreadContextStatePropagator.java create mode 100644 server/src/main/java/org/opensearch/wlm/package-info.java create mode 100644 server/src/test/java/org/opensearch/wlm/QueryGroupThreadContextStatePropagatorTests.java diff --git a/CHANGELOG.md b/CHANGELOG.md index 6bfd98ceaea80..6f666dbf3b8d5 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -14,6 +14,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), - Fix race condition while parsing derived fields from search definition ([14445](https://github.com/opensearch-project/OpenSearch/pull/14445)) - Add `strict_allow_templates` dynamic mapping option ([#14555](https://github.com/opensearch-project/OpenSearch/pull/14555)) - Add allowlist setting for ingest-common and search-pipeline-common processors ([#14439](https://github.com/opensearch-project/OpenSearch/issues/14439)) +- [Workload Management] add queryGroupId header propagator across requests and nodes ([#14614](https://github.com/opensearch-project/OpenSearch/pull/14614)) - Create SystemIndexRegistry with helper method matchesSystemIndex ([#14415](https://github.com/opensearch-project/OpenSearch/pull/14415)) - Print reason why parent task was cancelled ([#14604](https://github.com/opensearch-project/OpenSearch/issues/14604)) diff --git a/server/src/main/java/org/opensearch/wlm/QueryGroupThreadContextStatePropagator.java b/server/src/main/java/org/opensearch/wlm/QueryGroupThreadContextStatePropagator.java new file mode 100644 index 0000000000000..06d223907082e --- /dev/null +++ b/server/src/main/java/org/opensearch/wlm/QueryGroupThreadContextStatePropagator.java @@ -0,0 +1,53 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. 
+ */
+
+package org.opensearch.wlm;
+
+import org.opensearch.common.util.concurrent.ThreadContextStatePropagator;
+
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+/**
+ * This class is used to propagate QueryGroup related headers to request and nodes
+ */
+public class QueryGroupThreadContextStatePropagator implements ThreadContextStatePropagator {
+    // TODO: move this constant to QueryGroupService class once the QueryGroup monitoring framework PR is ready
+    public static List<String> PROPAGATED_HEADERS = List.of("queryGroupId");
+
+    /**
+     * @param source current context transient headers
+     * @return the map of header and their values to be propagated across request threadContexts
+     */
+    @Override
+    @SuppressWarnings("removal")
+    public Map<String, Object> transients(Map<String, Object> source) {
+        final Map<String, Object> transientHeaders = new HashMap<>();
+
+        for (String headerName : PROPAGATED_HEADERS) {
+            transientHeaders.compute(headerName, (k, v) -> source.get(headerName));
+        }
+        return transientHeaders;
+    }
+
+    /**
+     * @param source current context headers
+     * @return map of header and their values to be propagated across nodes
+     */
+    @Override
+    @SuppressWarnings("removal")
+    public Map<String, String> headers(Map<String, Object> source) {
+        final Map<String, String> propagatedHeaders = new HashMap<>();
+
+        for (String headerName : PROPAGATED_HEADERS) {
+            propagatedHeaders.compute(headerName, (k, v) -> (String) source.get(headerName));
+        }
+        return propagatedHeaders;
+    }
+}
diff --git a/server/src/main/java/org/opensearch/wlm/package-info.java b/server/src/main/java/org/opensearch/wlm/package-info.java
new file mode 100644
index 0000000000000..fa4731d95cc34
--- /dev/null
+++ b/server/src/main/java/org/opensearch/wlm/package-info.java
@@ -0,0 +1,13 @@
+/*
+ * SPDX-License-Identifier: Apache-2.0
+ *
+ * The OpenSearch Contributors require contributions made to
+ * this file be licensed under the Apache-2.0 license or a
+ * compatible open source license.
+ */
+
+/**
+ * This package contains workload management constructs
+ */
+
+package org.opensearch.wlm;
diff --git a/server/src/test/java/org/opensearch/wlm/QueryGroupThreadContextStatePropagatorTests.java b/server/src/test/java/org/opensearch/wlm/QueryGroupThreadContextStatePropagatorTests.java
new file mode 100644
index 0000000000000..ad5d7f569a56e
--- /dev/null
+++ b/server/src/test/java/org/opensearch/wlm/QueryGroupThreadContextStatePropagatorTests.java
@@ -0,0 +1,30 @@
+/*
+ * SPDX-License-Identifier: Apache-2.0
+ *
+ * The OpenSearch Contributors require contributions made to
+ * this file be licensed under the Apache-2.0 license or a
+ * compatible open source license.
+ */
+
+package org.opensearch.wlm;
+
+import org.opensearch.test.OpenSearchTestCase;
+
+import java.util.Map;
+
+public class QueryGroupThreadContextStatePropagatorTests extends OpenSearchTestCase {
+
+    public void testTransients() {
+        QueryGroupThreadContextStatePropagator sut = new QueryGroupThreadContextStatePropagator();
+        Map<String, Object> source = Map.of("queryGroupId", "adgarja0r235te");
+        Map<String, Object> transients = sut.transients(source);
+        assertEquals("adgarja0r235te", transients.get("queryGroupId"));
+    }
+
+    public void testHeaders() {
+        QueryGroupThreadContextStatePropagator sut = new QueryGroupThreadContextStatePropagator();
+        Map<String, Object> source = Map.of("queryGroupId", "adgarja0r235te");
+        Map<String, String> headers = sut.headers(source);
+        assertEquals("adgarja0r235te", headers.get("queryGroupId"));
+    }
+}

From ba9bdacc05928c82a4244b06c5c0e8283406dc7d Mon Sep 17 00:00:00 2001
From: Shourya Dutta Biswas <114977491+shourya035@users.noreply.github.com>
Date: Tue, 16 Jul 2024 13:15:05 +0530
Subject: [PATCH 066/167] Add consumers to remote store based index settings
 (#14764)

Signed-off-by: Shourya Dutta Biswas <114977491+shourya035@users.noreply.github.com>
---
 .../MigrationBaseTestCase.java              | 16 +++++++++++
 .../RemoteStoreMigrationTestCase.java       |  9 +++++++
 .../org/opensearch/index/IndexSettings.java | 27 ++++++++++++++++---
 .../blobstore/BlobStoreRepository.java      |  2 +-
 4 files changed, 50 insertions(+), 4 deletions(-)

diff --git a/server/src/internalClusterTest/java/org/opensearch/remotemigration/MigrationBaseTestCase.java b/server/src/internalClusterTest/java/org/opensearch/remotemigration/MigrationBaseTestCase.java
index 5be9b25512704..2bea36ed80c9f 100644
--- a/server/src/internalClusterTest/java/org/opensearch/remotemigration/MigrationBaseTestCase.java
+++ b/server/src/internalClusterTest/java/org/opensearch/remotemigration/MigrationBaseTestCase.java
@@ -13,6 +13,8 @@
 import org.opensearch.action.admin.cluster.health.ClusterHealthResponse;
 import org.opensearch.action.admin.cluster.repositories.get.GetRepositoriesRequest;
 import org.opensearch.action.admin.cluster.repositories.get.GetRepositoriesResponse;
+import org.opensearch.action.admin.indices.get.GetIndexRequest;
+import org.opensearch.action.admin.indices.get.GetIndexResponse;
 import org.opensearch.action.bulk.BulkRequest;
 import org.opensearch.action.bulk.BulkResponse;
 import org.opensearch.action.delete.DeleteResponse;
@@ -21,12 +23,17 @@
 import org.opensearch.client.Requests;
 import org.opensearch.cluster.ClusterState;
 import org.opensearch.cluster.health.ClusterHealthStatus;
+import org.opensearch.cluster.metadata.IndexMetadata;
 import org.opensearch.cluster.metadata.RepositoryMetadata;
 import org.opensearch.cluster.routing.RoutingNode;
 import org.opensearch.common.Priority;
 import org.opensearch.common.UUIDs;
 import org.opensearch.common.settings.Settings;
 import org.opensearch.common.unit.TimeValue;
+import org.opensearch.core.index.Index;
+import org.opensearch.index.IndexService;
+import org.opensearch.index.shard.IndexShard;
+import org.opensearch.indices.IndicesService;
 import org.opensearch.repositories.fs.ReloadableFsRepository;
 import org.opensearch.test.OpenSearchIntegTestCase;
 import org.junit.Before;
@@ -261,4 +268,13 @@ public ClusterHealthStatus waitForRelocation(TimeValue t) {
         }
         return actionGet.getStatus();
     }
+
+    protected IndexShard getIndexShard(String dataNode, String indexName) throws ExecutionException, InterruptedException {
+        String clusterManagerName = internalCluster().getClusterManagerName();
+        IndicesService indicesService = 
internalCluster().getInstance(IndicesService.class, dataNode); + GetIndexResponse getIndexResponse = client(clusterManagerName).admin().indices().getIndex(new GetIndexRequest()).get(); + String uuid = getIndexResponse.getSettings().get(indexName).get(IndexMetadata.SETTING_INDEX_UUID); + IndexService indexService = indicesService.indexService(new Index(indexName, uuid)); + return indexService.getShard(0); + } } diff --git a/server/src/internalClusterTest/java/org/opensearch/remotemigration/RemoteStoreMigrationTestCase.java b/server/src/internalClusterTest/java/org/opensearch/remotemigration/RemoteStoreMigrationTestCase.java index e0e25db4ca722..4d37b2a1feb88 100644 --- a/server/src/internalClusterTest/java/org/opensearch/remotemigration/RemoteStoreMigrationTestCase.java +++ b/server/src/internalClusterTest/java/org/opensearch/remotemigration/RemoteStoreMigrationTestCase.java @@ -17,6 +17,7 @@ import org.opensearch.common.unit.TimeValue; import org.opensearch.common.util.FeatureFlags; import org.opensearch.index.query.QueryBuilders; +import org.opensearch.index.shard.IndexShard; import org.opensearch.repositories.blobstore.BlobStoreRepository; import org.opensearch.snapshots.SnapshotInfo; import org.opensearch.test.OpenSearchIntegTestCase; @@ -216,4 +217,12 @@ public void testEndToEndRemoteMigration() throws Exception { asyncIndexingService.getIndexedDocs() ); } + + public void testRemoteSettingPropagatedToIndexShardAfterMigration() throws Exception { + testEndToEndRemoteMigration(); + IndexShard indexShard = getIndexShard(primaryNodeName("test"), "test"); + assertTrue(indexShard.indexSettings().isRemoteStoreEnabled()); + assertEquals(MigrationBaseTestCase.REPOSITORY_NAME, indexShard.indexSettings().getRemoteStoreRepository()); + assertEquals(MigrationBaseTestCase.REPOSITORY_2_NAME, indexShard.indexSettings().getRemoteStoreTranslogRepository()); + } } diff --git a/server/src/main/java/org/opensearch/index/IndexSettings.java b/server/src/main/java/org/opensearch/index/IndexSettings.java index 96458ecc49ddc..a833d66fab5d9 100644 --- a/server/src/main/java/org/opensearch/index/IndexSettings.java +++ b/server/src/main/java/org/opensearch/index/IndexSettings.java @@ -732,11 +732,11 @@ public static IndexMergePolicy fromString(String text) { private final Settings nodeSettings; private final int numberOfShards; private final ReplicationType replicationType; - private final boolean isRemoteStoreEnabled; + private volatile boolean isRemoteStoreEnabled; private final boolean isStoreLocalityPartial; private volatile TimeValue remoteTranslogUploadBufferInterval; - private final String remoteStoreTranslogRepository; - private final String remoteStoreRepository; + private volatile String remoteStoreTranslogRepository; + private volatile String remoteStoreRepository; private int remoteTranslogKeepExtraGen; private Version extendedCompatibilitySnapshotVersion; @@ -1132,6 +1132,15 @@ public IndexSettings(final IndexMetadata indexMetadata, final Settings nodeSetti this::setDocIdFuzzySetFalsePositiveProbability ); scopedSettings.addSettingsUpdateConsumer(ALLOW_DERIVED_FIELDS, this::setAllowDerivedField); + scopedSettings.addSettingsUpdateConsumer(IndexMetadata.INDEX_REMOTE_STORE_ENABLED_SETTING, this::setRemoteStoreEnabled); + scopedSettings.addSettingsUpdateConsumer( + IndexMetadata.INDEX_REMOTE_SEGMENT_STORE_REPOSITORY_SETTING, + this::setRemoteStoreRepository + ); + scopedSettings.addSettingsUpdateConsumer( + IndexMetadata.INDEX_REMOTE_TRANSLOG_REPOSITORY_SETTING, + this::setRemoteStoreTranslogRepository + 
); } private void setSearchIdleAfter(TimeValue searchIdleAfter) { @@ -1950,4 +1959,16 @@ public RemoteStorePathStrategy getRemoteStorePathStrategy() { public boolean isTranslogMetadataEnabled() { return isTranslogMetadataEnabled; } + + public void setRemoteStoreEnabled(boolean isRemoteStoreEnabled) { + this.isRemoteStoreEnabled = isRemoteStoreEnabled; + } + + public void setRemoteStoreRepository(String remoteStoreRepository) { + this.remoteStoreRepository = remoteStoreRepository; + } + + public void setRemoteStoreTranslogRepository(String remoteStoreTranslogRepository) { + this.remoteStoreTranslogRepository = remoteStoreTranslogRepository; + } } diff --git a/server/src/main/java/org/opensearch/repositories/blobstore/BlobStoreRepository.java b/server/src/main/java/org/opensearch/repositories/blobstore/BlobStoreRepository.java index 53c44f743c781..02290b6a5e566 100644 --- a/server/src/main/java/org/opensearch/repositories/blobstore/BlobStoreRepository.java +++ b/server/src/main/java/org/opensearch/repositories/blobstore/BlobStoreRepository.java @@ -2678,7 +2678,7 @@ public void snapshotRemoteStoreIndexShard( final ShardId shardId = store.shardId(); try { final String generation = snapshotStatus.generation(); - logger.info("[{}] [{}] snapshot to [{}] [{}] ...", shardId, snapshotId, metadata.name(), generation); + logger.info("[{}] [{}] shallow copy snapshot to [{}] [{}] ...", shardId, snapshotId, metadata.name(), generation); final BlobContainer shardContainer = shardContainer(indexId, shardId); long indexTotalFileSize = 0; From 6ad57bf393bda06da1c463d7c276d4a61d3cb299 Mon Sep 17 00:00:00 2001 From: Craig Perkins Date: Tue, 16 Jul 2024 08:16:11 -0400 Subject: [PATCH 067/167] Add matchesPluginSystemIndexPattern to SystemIndexRegistry (#14750) * Add matchesPluginSystemIndexPattern to SystemIndexRegistry Signed-off-by: Craig Perkins * Add to CHANGELOG Signed-off-by: Craig Perkins * Use single data structure to keep track of system indices Signed-off-by: Craig Perkins * Address code review comments Signed-off-by: Craig Perkins * Add test for getAllDescriptors Signed-off-by: Craig Perkins * Update server/src/main/java/org/opensearch/indices/SystemIndexRegistry.java Co-authored-by: Andriy Redko Signed-off-by: Craig Perkins --------- Signed-off-by: Craig Perkins Signed-off-by: Craig Perkins Co-authored-by: Andriy Redko --- CHANGELOG.md | 1 + .../indices/SystemIndexRegistry.java | 36 ++++--- .../org/opensearch/indices/SystemIndices.java | 5 +- .../main/java/org/opensearch/node/Node.java | 5 +- .../indices/SystemIndicesTests.java | 100 ++++++++++++++---- 5 files changed, 112 insertions(+), 35 deletions(-) diff --git a/CHANGELOG.md b/CHANGELOG.md index 6f666dbf3b8d5..7050e7d23b24d 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -17,6 +17,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), - [Workload Management] add queryGroupId header propagator across requests and nodes ([#14614](https://github.com/opensearch-project/OpenSearch/pull/14614)) - Create SystemIndexRegistry with helper method matchesSystemIndex ([#14415](https://github.com/opensearch-project/OpenSearch/pull/14415)) - Print reason why parent task was cancelled ([#14604](https://github.com/opensearch-project/OpenSearch/issues/14604)) +- Add matchesPluginSystemIndexPattern to SystemIndexRegistry ([#14750](https://github.com/opensearch-project/OpenSearch/pull/14750)) ### Dependencies - Bump `org.gradle.test-retry` from 1.5.8 to 1.5.9 
([#13442](https://github.com/opensearch-project/OpenSearch/pull/13442))
diff --git a/server/src/main/java/org/opensearch/indices/SystemIndexRegistry.java b/server/src/main/java/org/opensearch/indices/SystemIndexRegistry.java
index d9608e220d924..ab2cbd4ef1a73 100644
--- a/server/src/main/java/org/opensearch/indices/SystemIndexRegistry.java
+++ b/server/src/main/java/org/opensearch/indices/SystemIndexRegistry.java
@@ -15,13 +15,13 @@
 import org.opensearch.common.regex.Regex;
 import org.opensearch.tasks.TaskResultsService;
 
-import java.util.Arrays;
 import java.util.Collection;
 import java.util.Collections;
 import java.util.Comparator;
 import java.util.HashMap;
 import java.util.List;
 import java.util.Map;
+import java.util.Set;
 import java.util.stream.Collectors;
 
 import static java.util.Collections.singletonList;
@@ -45,25 +45,35 @@ public class SystemIndexRegistry
     (
     );
 
     private volatile static String[] SYSTEM_INDEX_PATTERNS = new String[0];
-    volatile static Collection<SystemIndexDescriptor> SYSTEM_INDEX_DESCRIPTORS = Collections.emptyList();
+    private volatile static Map<String, Collection<SystemIndexDescriptor>> SYSTEM_INDEX_DESCRIPTORS_MAP = Collections.emptyMap();
 
     static void register(Map<String, Collection<SystemIndexDescriptor>> pluginAndModulesDescriptors) {
         final Map<String, Collection<SystemIndexDescriptor>> descriptorsMap = buildSystemIndexDescriptorMap(pluginAndModulesDescriptors);
         checkForOverlappingPatterns(descriptorsMap);
-        List<SystemIndexDescriptor> descriptors = pluginAndModulesDescriptors.values()
-            .stream()
-            .flatMap(Collection::stream)
-            .collect(Collectors.toList());
-        descriptors.add(TASK_INDEX_DESCRIPTOR);
-        SYSTEM_INDEX_DESCRIPTORS = descriptors.stream().collect(Collectors.toUnmodifiableList());
-        SYSTEM_INDEX_PATTERNS = descriptors.stream().map(SystemIndexDescriptor::getIndexPattern).toArray(String[]::new);
+        SYSTEM_INDEX_DESCRIPTORS_MAP = descriptorsMap;
+        SYSTEM_INDEX_PATTERNS = getAllDescriptors().stream().map(SystemIndexDescriptor::getIndexPattern).toArray(String[]::new);
     }
 
-    public static List<String> matchesSystemIndexPattern(String...
indexExpressions) { - return Arrays.stream(indexExpressions) - .filter(pattern -> Regex.simpleMatch(SYSTEM_INDEX_PATTERNS, pattern)) - .collect(Collectors.toList()); + public static Set matchesSystemIndexPattern(Set indexExpressions) { + return indexExpressions.stream().filter(pattern -> Regex.simpleMatch(SYSTEM_INDEX_PATTERNS, pattern)).collect(Collectors.toSet()); + } + + public static Set matchesPluginSystemIndexPattern(String pluginClassName, Set indexExpressions) { + if (!SYSTEM_INDEX_DESCRIPTORS_MAP.containsKey(pluginClassName)) { + return Collections.emptySet(); + } + String[] pluginSystemIndexPatterns = SYSTEM_INDEX_DESCRIPTORS_MAP.get(pluginClassName) + .stream() + .map(SystemIndexDescriptor::getIndexPattern) + .toArray(String[]::new); + return indexExpressions.stream() + .filter(pattern -> Regex.simpleMatch(pluginSystemIndexPatterns, pattern)) + .collect(Collectors.toSet()); + } + + static List getAllDescriptors() { + return SYSTEM_INDEX_DESCRIPTORS_MAP.values().stream().flatMap(Collection::stream).collect(Collectors.toList()); } /** diff --git a/server/src/main/java/org/opensearch/indices/SystemIndices.java b/server/src/main/java/org/opensearch/indices/SystemIndices.java index bbf58fe91512f..6e9e5e7707877 100644 --- a/server/src/main/java/org/opensearch/indices/SystemIndices.java +++ b/server/src/main/java/org/opensearch/indices/SystemIndices.java @@ -63,7 +63,7 @@ public class SystemIndices { public SystemIndices(Map> pluginAndModulesDescriptors) { SystemIndexRegistry.register(pluginAndModulesDescriptors); - this.runAutomaton = buildCharacterRunAutomaton(SystemIndexRegistry.SYSTEM_INDEX_DESCRIPTORS); + this.runAutomaton = buildCharacterRunAutomaton(SystemIndexRegistry.getAllDescriptors()); } /** @@ -91,7 +91,8 @@ public boolean isSystemIndex(String indexName) { * @throws IllegalStateException if multiple descriptors match the name */ public @Nullable SystemIndexDescriptor findMatchingDescriptor(String name) { - final List matchingDescriptors = SystemIndexRegistry.SYSTEM_INDEX_DESCRIPTORS.stream() + final List matchingDescriptors = SystemIndexRegistry.getAllDescriptors() + .stream() .filter(descriptor -> descriptor.matchesIndexPattern(name)) .collect(Collectors.toList()); diff --git a/server/src/main/java/org/opensearch/node/Node.java b/server/src/main/java/org/opensearch/node/Node.java index 96a716af7f1a1..281f697d9bb79 100644 --- a/server/src/main/java/org/opensearch/node/Node.java +++ b/server/src/main/java/org/opensearch/node/Node.java @@ -699,7 +699,10 @@ protected Node( pluginsService.filterPlugins(SystemIndexPlugin.class) .stream() .collect( - Collectors.toMap(plugin -> plugin.getClass().getSimpleName(), plugin -> plugin.getSystemIndexDescriptors(settings)) + Collectors.toMap( + plugin -> plugin.getClass().getCanonicalName(), + plugin -> plugin.getSystemIndexDescriptors(settings) + ) ) ); final SystemIndices systemIndices = new SystemIndices(systemIndexDescriptorMap); diff --git a/server/src/test/java/org/opensearch/indices/SystemIndicesTests.java b/server/src/test/java/org/opensearch/indices/SystemIndicesTests.java index 8ac457c32d53a..ca9370645dec3 100644 --- a/server/src/test/java/org/opensearch/indices/SystemIndicesTests.java +++ b/server/src/test/java/org/opensearch/indices/SystemIndicesTests.java @@ -44,6 +44,8 @@ import java.util.HashMap; import java.util.List; import java.util.Map; +import java.util.Set; +import java.util.stream.Collectors; import static java.util.Collections.emptyMap; import static java.util.Collections.singletonList; @@ -155,32 +157,61 @@ public 
void testSystemIndexMatching() { ); assertThat( - SystemIndexRegistry.matchesSystemIndexPattern(".system-index1", ".system-index2"), - equalTo(List.of(SystemIndexPlugin1.SYSTEM_INDEX_1, SystemIndexPlugin2.SYSTEM_INDEX_2)) + SystemIndexRegistry.matchesSystemIndexPattern(Set.of(".system-index1", ".system-index2")), + equalTo(Set.of(SystemIndexPlugin1.SYSTEM_INDEX_1, SystemIndexPlugin2.SYSTEM_INDEX_2)) ); - assertThat(SystemIndexRegistry.matchesSystemIndexPattern(".system-index1"), equalTo(List.of(SystemIndexPlugin1.SYSTEM_INDEX_1))); - assertThat(SystemIndexRegistry.matchesSystemIndexPattern(".system-index2"), equalTo(List.of(SystemIndexPlugin2.SYSTEM_INDEX_2))); - assertThat(SystemIndexRegistry.matchesSystemIndexPattern(".system-index-pattern1"), equalTo(List.of(".system-index-pattern1"))); assertThat( - SystemIndexRegistry.matchesSystemIndexPattern(".system-index-pattern-sub*"), - equalTo(List.of(".system-index-pattern-sub*")) + SystemIndexRegistry.matchesSystemIndexPattern(Set.of(".system-index1")), + equalTo(Set.of(SystemIndexPlugin1.SYSTEM_INDEX_1)) ); assertThat( - SystemIndexRegistry.matchesSystemIndexPattern(".system-index-pattern1", ".system-index-pattern2"), - equalTo(List.of(".system-index-pattern1", ".system-index-pattern2")) + SystemIndexRegistry.matchesSystemIndexPattern(Set.of(".system-index2")), + equalTo(Set.of(SystemIndexPlugin2.SYSTEM_INDEX_2)) ); assertThat( - SystemIndexRegistry.matchesSystemIndexPattern(".system-index1", ".system-index-pattern1"), - equalTo(List.of(".system-index1", ".system-index-pattern1")) + SystemIndexRegistry.matchesSystemIndexPattern(Set.of(".system-index-pattern1")), + equalTo(Set.of(".system-index-pattern1")) ); assertThat( - SystemIndexRegistry.matchesSystemIndexPattern(".system-index1", ".system-index-pattern1", ".not-system"), - equalTo(List.of(".system-index1", ".system-index-pattern1")) + SystemIndexRegistry.matchesSystemIndexPattern(Set.of(".system-index-pattern-sub*")), + equalTo(Set.of(".system-index-pattern-sub*")) + ); + assertThat( + SystemIndexRegistry.matchesSystemIndexPattern(Set.of(".system-index-pattern1", ".system-index-pattern2")), + equalTo(Set.of(".system-index-pattern1", ".system-index-pattern2")) + ); + assertThat( + SystemIndexRegistry.matchesSystemIndexPattern(Set.of(".system-index1", ".system-index-pattern1")), + equalTo(Set.of(".system-index1", ".system-index-pattern1")) + ); + assertThat( + SystemIndexRegistry.matchesSystemIndexPattern(Set.of(".system-index1", ".system-index-pattern1", ".not-system")), + equalTo(Set.of(".system-index1", ".system-index-pattern1")) + ); + assertThat(SystemIndexRegistry.matchesSystemIndexPattern(Set.of(".not-system")), equalTo(Collections.emptySet())); + } + + public void testRegisteredSystemIndexGetAllDescriptors() { + SystemIndexPlugin plugin1 = new SystemIndexPlugin1(); + SystemIndexPlugin plugin2 = new SystemIndexPlugin2(); + SystemIndices pluginSystemIndices = new SystemIndices( + Map.of( + SystemIndexPlugin1.class.getCanonicalName(), + plugin1.getSystemIndexDescriptors(Settings.EMPTY), + SystemIndexPlugin2.class.getCanonicalName(), + plugin2.getSystemIndexDescriptors(Settings.EMPTY) + ) + ); + assertEquals( + SystemIndexRegistry.getAllDescriptors() + .stream() + .map(SystemIndexDescriptor::getIndexPattern) + .collect(Collectors.toUnmodifiableList()), + List.of(SystemIndexPlugin1.SYSTEM_INDEX_1, SystemIndexPlugin2.SYSTEM_INDEX_2, TASK_INDEX + "*") ); - assertThat(SystemIndexRegistry.matchesSystemIndexPattern(".not-system"), equalTo(Collections.emptyList())); } - public void 
testRegisteredSystemIndexExpansion() { + public void testRegisteredSystemIndexMatching() { SystemIndexPlugin plugin1 = new SystemIndexPlugin1(); SystemIndexPlugin plugin2 = new SystemIndexPlugin2(); SystemIndices pluginSystemIndices = new SystemIndices( @@ -191,12 +222,43 @@ public void testRegisteredSystemIndexExpansion() { plugin2.getSystemIndexDescriptors(Settings.EMPTY) ) ); - List systemIndices = SystemIndexRegistry.matchesSystemIndexPattern( - SystemIndexPlugin1.SYSTEM_INDEX_1, - SystemIndexPlugin2.SYSTEM_INDEX_2 + Set systemIndices = SystemIndexRegistry.matchesSystemIndexPattern( + Set.of(SystemIndexPlugin1.SYSTEM_INDEX_1, SystemIndexPlugin2.SYSTEM_INDEX_2) ); assertEquals(2, systemIndices.size()); - assertTrue(systemIndices.containsAll(List.of(SystemIndexPlugin1.SYSTEM_INDEX_1, SystemIndexPlugin2.SYSTEM_INDEX_2))); + assertTrue(systemIndices.containsAll(Set.of(SystemIndexPlugin1.SYSTEM_INDEX_1, SystemIndexPlugin2.SYSTEM_INDEX_2))); + } + + public void testRegisteredSystemIndexMatchingForPlugin() { + SystemIndexPlugin plugin1 = new SystemIndexPlugin1(); + SystemIndexPlugin plugin2 = new SystemIndexPlugin2(); + SystemIndices pluginSystemIndices = new SystemIndices( + Map.of( + SystemIndexPlugin1.class.getCanonicalName(), + plugin1.getSystemIndexDescriptors(Settings.EMPTY), + SystemIndexPlugin2.class.getCanonicalName(), + plugin2.getSystemIndexDescriptors(Settings.EMPTY) + ) + ); + Set systemIndicesForPlugin1 = SystemIndexRegistry.matchesPluginSystemIndexPattern( + SystemIndexPlugin1.class.getCanonicalName(), + Set.of(SystemIndexPlugin1.SYSTEM_INDEX_1, SystemIndexPlugin2.SYSTEM_INDEX_2, "other-index") + ); + assertEquals(1, systemIndicesForPlugin1.size()); + assertTrue(systemIndicesForPlugin1.contains(SystemIndexPlugin1.SYSTEM_INDEX_1)); + + Set systemIndicesForPlugin2 = SystemIndexRegistry.matchesPluginSystemIndexPattern( + SystemIndexPlugin2.class.getCanonicalName(), + Set.of(SystemIndexPlugin1.SYSTEM_INDEX_1, SystemIndexPlugin2.SYSTEM_INDEX_2, "other-index") + ); + assertEquals(1, systemIndicesForPlugin2.size()); + assertTrue(systemIndicesForPlugin2.contains(SystemIndexPlugin2.SYSTEM_INDEX_2)); + + Set noMatchingSystemIndices = SystemIndexRegistry.matchesPluginSystemIndexPattern( + SystemIndexPlugin2.class.getCanonicalName(), + Set.of("other-index") + ); + assertEquals(0, noMatchingSystemIndices.size()); } static final class SystemIndexPlugin1 extends Plugin implements SystemIndexPlugin { From 54af34ec80ab9eb3c4831c9e0c4b1bcd5983c3a3 Mon Sep 17 00:00:00 2001 From: Mohit Godwani <81609427+mgodwan@users.noreply.github.com> Date: Tue, 16 Jul 2024 22:17:27 +0530 Subject: [PATCH 068/167] SPI for loading ABC templates (#14659) * SPI for loading ABC templates Signed-off-by: mgodwan --- CHANGELOG.md | 1 + .../ClusterStateSystemTemplateLoader.java | 108 +++++++++++ .../applicationtemplates/SystemTemplate.java | 43 ++++ .../SystemTemplateLoader.java | 26 +++ .../SystemTemplateMetadata.java | 68 +++++++ .../SystemTemplateRepository.java | 37 ++++ .../SystemTemplatesPlugin.java | 31 +++ .../SystemTemplatesService.java | 183 ++++++++++++++++++ .../TemplateRepositoryMetadata.java | 34 ++++ .../applicationtemplates/package-info.java | 10 + .../common/settings/ClusterSettings.java | 5 +- .../common/settings/FeatureFlagSettings.java | 3 +- .../opensearch/common/util/FeatureFlags.java | 14 +- .../main/java/org/opensearch/node/Node.java | 13 +- ...ClusterStateSystemTemplateLoaderTests.java | 148 ++++++++++++++ .../SystemTemplatesServiceTests.java | 90 +++++++++ 
.../TestSystemTemplatesRepositoryPlugin.java | 72 +++++++ .../test/OpenSearchIntegTestCase.java | 3 + .../test/OpenSearchSingleNodeTestCase.java | 1 + 19 files changed, 886 insertions(+), 4 deletions(-) create mode 100644 server/src/main/java/org/opensearch/cluster/applicationtemplates/ClusterStateSystemTemplateLoader.java create mode 100644 server/src/main/java/org/opensearch/cluster/applicationtemplates/SystemTemplate.java create mode 100644 server/src/main/java/org/opensearch/cluster/applicationtemplates/SystemTemplateLoader.java create mode 100644 server/src/main/java/org/opensearch/cluster/applicationtemplates/SystemTemplateMetadata.java create mode 100644 server/src/main/java/org/opensearch/cluster/applicationtemplates/SystemTemplateRepository.java create mode 100644 server/src/main/java/org/opensearch/cluster/applicationtemplates/SystemTemplatesPlugin.java create mode 100644 server/src/main/java/org/opensearch/cluster/applicationtemplates/SystemTemplatesService.java create mode 100644 server/src/main/java/org/opensearch/cluster/applicationtemplates/TemplateRepositoryMetadata.java create mode 100644 server/src/main/java/org/opensearch/cluster/applicationtemplates/package-info.java create mode 100644 server/src/test/java/org/opensearch/cluster/applicationtemplates/ClusterStateSystemTemplateLoaderTests.java create mode 100644 server/src/test/java/org/opensearch/cluster/applicationtemplates/SystemTemplatesServiceTests.java create mode 100644 test/framework/src/main/java/org/opensearch/cluster/service/applicationtemplates/TestSystemTemplatesRepositoryPlugin.java diff --git a/CHANGELOG.md b/CHANGELOG.md index 7050e7d23b24d..f885261f404ae 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -18,6 +18,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), - Create SystemIndexRegistry with helper method matchesSystemIndex ([#14415](https://github.com/opensearch-project/OpenSearch/pull/14415)) - Print reason why parent task was cancelled ([#14604](https://github.com/opensearch-project/OpenSearch/issues/14604)) - Add matchesPluginSystemIndexPattern to SystemIndexRegistry ([#14750](https://github.com/opensearch-project/OpenSearch/pull/14750)) +- Add Plugin interface for loading application based configuration templates (([#14659](https://github.com/opensearch-project/OpenSearch/issues/14659))) ### Dependencies - Bump `org.gradle.test-retry` from 1.5.8 to 1.5.9 ([#13442](https://github.com/opensearch-project/OpenSearch/pull/13442)) diff --git a/server/src/main/java/org/opensearch/cluster/applicationtemplates/ClusterStateSystemTemplateLoader.java b/server/src/main/java/org/opensearch/cluster/applicationtemplates/ClusterStateSystemTemplateLoader.java new file mode 100644 index 0000000000000..332960ef49064 --- /dev/null +++ b/server/src/main/java/org/opensearch/cluster/applicationtemplates/ClusterStateSystemTemplateLoader.java @@ -0,0 +1,108 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. 
+ */
+
+package org.opensearch.cluster.applicationtemplates;
+
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+import org.opensearch.OpenSearchCorruptionException;
+import org.opensearch.action.admin.indices.template.put.PutComponentTemplateAction;
+import org.opensearch.client.Client;
+import org.opensearch.client.OriginSettingClient;
+import org.opensearch.cluster.ClusterState;
+import org.opensearch.cluster.metadata.ComponentTemplate;
+import org.opensearch.common.annotation.ExperimentalApi;
+import org.opensearch.common.unit.TimeValue;
+import org.opensearch.common.xcontent.json.JsonXContent;
+import org.opensearch.core.xcontent.DeprecationHandler;
+import org.opensearch.core.xcontent.NamedXContentRegistry;
+import org.opensearch.core.xcontent.XContentParser;
+
+import java.io.IOException;
+import java.util.Objects;
+import java.util.function.Supplier;
+
+/**
+ * Class responsible for loading the component templates provided by a repository into the cluster state.
+ */
+@ExperimentalApi
+public class ClusterStateSystemTemplateLoader implements SystemTemplateLoader {
+
+    private final Client client;
+
+    private final Supplier<ClusterState> clusterStateSupplier;
+
+    private static final Logger logger = LogManager.getLogger(SystemTemplateLoader.class);
+
+    public static final String TEMPLATE_LOADER_IDENTIFIER = "system_template_loader";
+    public static final String TEMPLATE_TYPE_KEY = "_type";
+
+    public ClusterStateSystemTemplateLoader(Client client, Supplier<ClusterState> clusterStateSupplier) {
+        this.client = new OriginSettingClient(client, TEMPLATE_LOADER_IDENTIFIER);
+        this.clusterStateSupplier = clusterStateSupplier;
+    }
+
+    @Override
+    public boolean loadTemplate(SystemTemplate template) throws IOException {
+        final ComponentTemplate existingTemplate = clusterStateSupplier.get()
+            .metadata()
+            .componentTemplates()
+            .get(template.templateMetadata().fullyQualifiedName());
+
+        if (existingTemplate != null
+            && !SystemTemplateMetadata.COMPONENT_TEMPLATE_TYPE.equals(
+                Objects.toString(existingTemplate.metadata().get(TEMPLATE_TYPE_KEY))
+            )) {
+            throw new OpenSearchCorruptionException(
+                "Attempting to create " + template.templateMetadata().name() + " which has already been created through some other source."
+            );
+        }
+
+        if (existingTemplate != null && existingTemplate.version() >= template.templateMetadata().version()) {
+            logger.debug(
+                "Skipping putting template {} as its existing version [{}] is >= fetched version [{}]",
+                template.templateMetadata().fullyQualifiedName(),
+                existingTemplate.version(),
+                template.templateMetadata().version()
+            );
+            return false;
+        }
+
+        ComponentTemplate newTemplate = null;
+        try (
+            XContentParser contentParser = JsonXContent.jsonXContent.createParser(
+                NamedXContentRegistry.EMPTY,
+                DeprecationHandler.IGNORE_DEPRECATIONS,
+                template.templateContent().utf8ToString()
+            )
+        ) {
+            newTemplate = ComponentTemplate.parse(contentParser);
+        }
+
+        if (!Objects.equals(newTemplate.version(), template.templateMetadata().version())) {
+            throw new OpenSearchCorruptionException(
+                "Template version mismatch for " + template.templateMetadata().name() + ". 
Version in metadata: " + + template.templateMetadata().version() + + " , Version in content: " + + newTemplate.version() + ); + } + + final PutComponentTemplateAction.Request request = new PutComponentTemplateAction.Request( + template.templateMetadata().fullyQualifiedName() + ).componentTemplate(newTemplate); + + return client.admin() + .indices() + .execute(PutComponentTemplateAction.INSTANCE, request) + .actionGet(TimeValue.timeValueMillis(30000)) + .isAcknowledged(); + } +} diff --git a/server/src/main/java/org/opensearch/cluster/applicationtemplates/SystemTemplate.java b/server/src/main/java/org/opensearch/cluster/applicationtemplates/SystemTemplate.java new file mode 100644 index 0000000000000..e11ded7ef5546 --- /dev/null +++ b/server/src/main/java/org/opensearch/cluster/applicationtemplates/SystemTemplate.java @@ -0,0 +1,43 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +package org.opensearch.cluster.applicationtemplates; + +import org.opensearch.common.annotation.ExperimentalApi; +import org.opensearch.core.common.bytes.BytesReference; + +/** + * Encapsulates the information and content about a system template available within a repository. + */ +@ExperimentalApi +public class SystemTemplate { + + private final BytesReference templateContent; + + private final SystemTemplateMetadata templateMetadata; + + private final TemplateRepositoryMetadata repositoryMetadata; + + public SystemTemplate(BytesReference templateContent, SystemTemplateMetadata templateInfo, TemplateRepositoryMetadata repositoryInfo) { + this.templateContent = templateContent; + this.templateMetadata = templateInfo; + this.repositoryMetadata = repositoryInfo; + } + + public BytesReference templateContent() { + return templateContent; + } + + public SystemTemplateMetadata templateMetadata() { + return templateMetadata; + } + + public TemplateRepositoryMetadata repositoryMetadata() { + return repositoryMetadata; + } +} diff --git a/server/src/main/java/org/opensearch/cluster/applicationtemplates/SystemTemplateLoader.java b/server/src/main/java/org/opensearch/cluster/applicationtemplates/SystemTemplateLoader.java new file mode 100644 index 0000000000000..077580aed5a64 --- /dev/null +++ b/server/src/main/java/org/opensearch/cluster/applicationtemplates/SystemTemplateLoader.java @@ -0,0 +1,26 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +package org.opensearch.cluster.applicationtemplates; + +import org.opensearch.common.annotation.ExperimentalApi; + +import java.io.IOException; + +/** + * Interface to load template into the OpenSearch runtime. 
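+ * <p>
+ * A sketch of the expected call pattern (illustrative only; the concrete implementation added in
+ * this change is {@link ClusterStateSystemTemplateLoader}):
+ * <pre>
+ * SystemTemplateLoader loader = plugin.loaderFor(templateMetadata);
+ * boolean applied = loader.loadTemplate(template); // false when the loader skips, e.g. an equal or newer version exists
+ * </pre>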
+ */
+@ExperimentalApi
+public interface SystemTemplateLoader {
+
+    /**
+     * @param template Template to be loaded
+     * @return True if the template was loaded successfully, false otherwise
+     * @throws IOException If an exceptional situation is encountered while parsing/loading the template
+     */
+    boolean loadTemplate(SystemTemplate template) throws IOException;
+}
diff --git a/server/src/main/java/org/opensearch/cluster/applicationtemplates/SystemTemplateMetadata.java b/server/src/main/java/org/opensearch/cluster/applicationtemplates/SystemTemplateMetadata.java
new file mode 100644
index 0000000000000..9bbe27ac0e281
--- /dev/null
+++ b/server/src/main/java/org/opensearch/cluster/applicationtemplates/SystemTemplateMetadata.java
@@ -0,0 +1,68 @@
+/*
+ * SPDX-License-Identifier: Apache-2.0
+ *
+ * The OpenSearch Contributors require contributions made to
+ * this file be licensed under the Apache-2.0 license or a
+ * compatible open source license.
+ */
+
+package org.opensearch.cluster.applicationtemplates;
+
+import org.opensearch.common.annotation.ExperimentalApi;
+
+/**
+ * Metadata information about a template available in a template repository.
+ */
+@ExperimentalApi
+public class SystemTemplateMetadata {
+
+    private final long version;
+    private final String type;
+    private final String name;
+
+    private static final String DELIMITER = "@";
+
+    public static final String COMPONENT_TEMPLATE_TYPE = "@abc_template";
+
+    public SystemTemplateMetadata(long version, String type, String name) {
+        this.version = version;
+        this.type = type;
+        this.name = name;
+    }
+
+    public String type() {
+        return type;
+    }
+
+    public String name() {
+        return name;
+    }
+
+    public long version() {
+        return version;
+    }
+
+    /**
+     * Builds the metadata from the fully qualified name of the template
+     * @param fullyQualifiedName fully qualified template name (e.g. @abc_template@logs@1)
+     * @return Metadata object based on the name
+     */
+    public static SystemTemplateMetadata fromComponentTemplate(String fullyQualifiedName) {
+        assert fullyQualifiedName.length() > 1 : "System template name must have at least one component";
+        assert fullyQualifiedName.substring(1, fullyQualifiedName.indexOf(DELIMITER, 1)).equals(COMPONENT_TEMPLATE_TYPE);
+
+        return new SystemTemplateMetadata(
+            Long.parseLong(fullyQualifiedName.substring(fullyQualifiedName.lastIndexOf(DELIMITER))),
+            COMPONENT_TEMPLATE_TYPE,
+            fullyQualifiedName.substring(0, fullyQualifiedName.lastIndexOf(DELIMITER))
+        );
+    }
+
+    public static SystemTemplateMetadata fromComponentTemplateInfo(String name, long version) {
+        return new SystemTemplateMetadata(version, COMPONENT_TEMPLATE_TYPE, name);
+    }
+
+    public final String fullyQualifiedName() {
+        return type + DELIMITER + name + DELIMITER + version;
+    }
+}
diff --git a/server/src/main/java/org/opensearch/cluster/applicationtemplates/SystemTemplateRepository.java b/server/src/main/java/org/opensearch/cluster/applicationtemplates/SystemTemplateRepository.java
new file mode 100644
index 0000000000000..9cf302b8874f2
--- /dev/null
+++ b/server/src/main/java/org/opensearch/cluster/applicationtemplates/SystemTemplateRepository.java
@@ -0,0 +1,37 @@
+/*
+ * SPDX-License-Identifier: Apache-2.0
+ *
+ * The OpenSearch Contributors require contributions made to
+ * this file be licensed under the Apache-2.0 license or a
+ * compatible open source license.
+ */
+
+package org.opensearch.cluster.applicationtemplates;
+
+import org.opensearch.common.annotation.ExperimentalApi;
+
+import java.io.IOException;
+
+/**
+ * Repository interface around the templates provided by a store (e.g. code repo, remote file store, etc.)
+ */
+@ExperimentalApi
+public interface SystemTemplateRepository extends AutoCloseable {
+
+    /**
+     * @return Metadata about the repository
+     */
+    TemplateRepositoryMetadata metadata();
+
+    /**
+     * @return Metadata for all available templates
+     */
+    Iterable<SystemTemplateMetadata> listTemplates() throws IOException;
+
+    /**
+     * @param template Metadata about the template to be fetched
+     * @return The actual template content
+     */
+    SystemTemplate getTemplate(SystemTemplateMetadata template) throws IOException;
+}
diff --git a/server/src/main/java/org/opensearch/cluster/applicationtemplates/SystemTemplatesPlugin.java b/server/src/main/java/org/opensearch/cluster/applicationtemplates/SystemTemplatesPlugin.java
new file mode 100644
index 0000000000000..54871e6db7010
--- /dev/null
+++ b/server/src/main/java/org/opensearch/cluster/applicationtemplates/SystemTemplatesPlugin.java
@@ -0,0 +1,31 @@
+/*
+ * SPDX-License-Identifier: Apache-2.0
+ *
+ * The OpenSearch Contributors require contributions made to
+ * this file be licensed under the Apache-2.0 license or a
+ * compatible open source license.
+ */
+
+package org.opensearch.cluster.applicationtemplates;
+
+import org.opensearch.common.annotation.ExperimentalApi;
+
+import java.io.IOException;
+
+/**
+ * Plugin interface to expose the template maintaining logic.
+ */
+@ExperimentalApi
+public interface SystemTemplatesPlugin {
+
+    /**
+     * @return repository implementation from which templates are to be fetched.
+     */
+    SystemTemplateRepository loadRepository() throws IOException;
+
+    /**
+     * @param templateInfo Metadata about the template to load
+     * @return Implementation of {@link SystemTemplateLoader} which determines how to make the template available at runtime.
+     */
+    SystemTemplateLoader loaderFor(SystemTemplateMetadata templateInfo);
+}
diff --git a/server/src/main/java/org/opensearch/cluster/applicationtemplates/SystemTemplatesService.java b/server/src/main/java/org/opensearch/cluster/applicationtemplates/SystemTemplatesService.java
new file mode 100644
index 0000000000000..ccb9272fa57b1
--- /dev/null
+++ b/server/src/main/java/org/opensearch/cluster/applicationtemplates/SystemTemplatesService.java
@@ -0,0 +1,183 @@
+/*
+ * SPDX-License-Identifier: Apache-2.0
+ *
+ * The OpenSearch Contributors require contributions made to
+ * this file be licensed under the Apache-2.0 license or a
+ * compatible open source license.
+ */
+
+package org.opensearch.cluster.applicationtemplates;
+
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+import org.apache.logging.log4j.message.ParameterizedMessage;
+import org.opensearch.cluster.LocalNodeClusterManagerListener;
+import org.opensearch.common.annotation.ExperimentalApi;
+import org.opensearch.common.settings.ClusterSettings;
+import org.opensearch.common.settings.Setting;
+import org.opensearch.common.settings.Settings;
+import org.opensearch.common.util.FeatureFlags;
+import org.opensearch.threadpool.ThreadPool;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.concurrent.atomic.AtomicBoolean;
+
+/**
+ * Service class to orchestrate execution around available templates' management.
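+ * <p>
+ * The refresh runs at most once per node (guarded by an {@code AtomicBoolean}), either during
+ * startup verification or on election as cluster-manager, and only when the dynamic setting
+ * {@code cluster.application_templates.enabled} is true, which in turn requires the experimental
+ * {@code opensearch.experimental.feature.application_templates.enabled} feature flag.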
+ */
+@ExperimentalApi
+public class SystemTemplatesService implements LocalNodeClusterManagerListener {
+
+    public static final Setting<Boolean> SETTING_APPLICATION_BASED_CONFIGURATION_TEMPLATES_ENABLED = Setting.boolSetting(
+        "cluster.application_templates.enabled",
+        false,
+        Setting.Property.Dynamic,
+        Setting.Property.NodeScope
+    );
+
+    private final List<SystemTemplatesPlugin> systemTemplatesPluginList;
+    private final ThreadPool threadPool;
+
+    private final AtomicBoolean loaded = new AtomicBoolean(false);
+
+    private volatile boolean enabledTemplates;
+
+    private volatile Stats latestStats;
+
+    private static final Logger logger = LogManager.getLogger(SystemTemplatesService.class);
+
+    public SystemTemplatesService(
+        List<SystemTemplatesPlugin> systemTemplatesPluginList,
+        ThreadPool threadPool,
+        ClusterSettings clusterSettings,
+        Settings settings
+    ) {
+        this.systemTemplatesPluginList = systemTemplatesPluginList;
+        this.threadPool = threadPool;
+        if (settings.getAsBoolean(SETTING_APPLICATION_BASED_CONFIGURATION_TEMPLATES_ENABLED.getKey(), false)) {
+            setEnabledTemplates(settings.getAsBoolean(SETTING_APPLICATION_BASED_CONFIGURATION_TEMPLATES_ENABLED.getKey(), false));
+        }
+        clusterSettings.addSettingsUpdateConsumer(SETTING_APPLICATION_BASED_CONFIGURATION_TEMPLATES_ENABLED, this::setEnabledTemplates);
+    }
+
+    @Override
+    public void onClusterManager() {
+        threadPool.generic().execute(() -> refreshTemplates(false));
+    }
+
+    @Override
+    public void offClusterManager() {
+        // do nothing
+    }
+
+    public void verifyRepositories() {
+        refreshTemplates(true);
+    }
+
+    public Stats stats() {
+        return latestStats;
+    }
+
+    void refreshTemplates(boolean verification) {
+        int templatesLoaded = 0;
+        int failedLoadingTemplates = 0;
+        int failedLoadingRepositories = 0;
+        List<Exception> exceptions = new ArrayList<>();
+
+        if (loaded.compareAndSet(false, true) && enabledTemplates) {
+            for (SystemTemplatesPlugin plugin : systemTemplatesPluginList) {
+                try (SystemTemplateRepository repository = plugin.loadRepository()) {
+
+                    final TemplateRepositoryMetadata repositoryMetadata = repository.metadata();
+                    logger.debug(
+                        "Loading templates from repository: {} at version {}",
+                        repositoryMetadata.id(),
+                        repositoryMetadata.version()
+                    );
+
+                    for (SystemTemplateMetadata templateMetadata : repository.listTemplates()) {
+                        try {
+                            final SystemTemplate template = repository.getTemplate(templateMetadata);
+
+                            // Load the template only if we are not in the verification phase.
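+                            // (In verification mode the template is still fetched above, so repository
+                            // and parsing failures surface without mutating the cluster state.)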
+ if (!verification && plugin.loaderFor(templateMetadata).loadTemplate(template)) { + templatesLoaded++; + } + + } catch (Exception ex) { + exceptions.add(ex); + logger.error( + new ParameterizedMessage( + "Failed loading template {} from repository: {}", + templateMetadata.fullyQualifiedName(), + repositoryMetadata.id() + ), + ex + ); + failedLoadingTemplates++; + } + } + } catch (Exception ex) { + exceptions.add(ex); + failedLoadingRepositories++; + logger.error(new ParameterizedMessage("Failed loading repository from plugin: {}", plugin.getClass().getName()), ex); + } + } + + logger.debug( + "Stats: Total Loaded Templates: [{}], Failed Loading Templates: [{}], Failed Loading Repositories: [{}]", + templatesLoaded, + failedLoadingTemplates, + failedLoadingRepositories + ); + + // End exceptionally if invoked in verification context + if (verification && (failedLoadingRepositories > 0 || failedLoadingTemplates > 0)) { + latestStats = new Stats(templatesLoaded, failedLoadingTemplates, failedLoadingRepositories); + throw new IllegalStateException("Some of the repositories could not be loaded or are corrupted: " + exceptions); + } + } + + latestStats = new Stats(templatesLoaded, failedLoadingTemplates, failedLoadingRepositories); + } + + private void setEnabledTemplates(boolean enabled) { + if (!FeatureFlags.isEnabled(FeatureFlags.APPLICATION_BASED_CONFIGURATION_TEMPLATES_SETTING)) { + throw new IllegalArgumentException( + "Application Based Configuration Templates is under an experimental feature and can be activated only by enabling " + + FeatureFlags.APPLICATION_BASED_CONFIGURATION_TEMPLATES_SETTING.getKey() + + " feature flag." + ); + } + enabledTemplates = enabled; + } + + /** + * Class to record stats for templates loaded through the listener in a single iteration. + */ + @ExperimentalApi + public static class Stats { + private final long templatesLoaded; + private final long failedLoadingTemplates; + private final long failedLoadingRepositories; + + public Stats(long templatesLoaded, long failedLoadingTemplates, long failedLoadingRepositories) { + this.templatesLoaded = templatesLoaded; + this.failedLoadingTemplates = failedLoadingTemplates; + this.failedLoadingRepositories = failedLoadingRepositories; + } + + public long getTemplatesLoaded() { + return templatesLoaded; + } + + public long getFailedLoadingTemplates() { + return failedLoadingTemplates; + } + + public long getFailedLoadingRepositories() { + return failedLoadingRepositories; + } + } +} diff --git a/server/src/main/java/org/opensearch/cluster/applicationtemplates/TemplateRepositoryMetadata.java b/server/src/main/java/org/opensearch/cluster/applicationtemplates/TemplateRepositoryMetadata.java new file mode 100644 index 0000000000000..7ab4553aade0e --- /dev/null +++ b/server/src/main/java/org/opensearch/cluster/applicationtemplates/TemplateRepositoryMetadata.java @@ -0,0 +1,34 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +package org.opensearch.cluster.applicationtemplates; + +import org.opensearch.common.annotation.ExperimentalApi; + +/** + * The information to uniquely identify a template repository. 
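+ * <p>
+ * For example (hypothetical values), {@code new TemplateRepositoryMetadata("my-app-repo", 2L)}
+ * identifies the second version of a repository named "my-app-repo"; the version is what the
+ * service logs when it loads templates from the repository.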
+ */ +@ExperimentalApi +public class TemplateRepositoryMetadata { + + private final String id; + private final long version; + + public TemplateRepositoryMetadata(String id, long version) { + this.id = id; + this.version = version; + } + + public String id() { + return id; + } + + public long version() { + return version; + } +} diff --git a/server/src/main/java/org/opensearch/cluster/applicationtemplates/package-info.java b/server/src/main/java/org/opensearch/cluster/applicationtemplates/package-info.java new file mode 100644 index 0000000000000..3fef2aab07d43 --- /dev/null +++ b/server/src/main/java/org/opensearch/cluster/applicationtemplates/package-info.java @@ -0,0 +1,10 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +/** Core classes responsible for handling all application based configuration templates related operations. */ +package org.opensearch.cluster.applicationtemplates; diff --git a/server/src/main/java/org/opensearch/common/settings/ClusterSettings.java b/server/src/main/java/org/opensearch/common/settings/ClusterSettings.java index 0648fad619dc7..b4826e1a59428 100644 --- a/server/src/main/java/org/opensearch/common/settings/ClusterSettings.java +++ b/server/src/main/java/org/opensearch/common/settings/ClusterSettings.java @@ -49,6 +49,7 @@ import org.opensearch.cluster.NodeConnectionsService; import org.opensearch.cluster.action.index.MappingUpdatedAction; import org.opensearch.cluster.action.shard.ShardStateAction; +import org.opensearch.cluster.applicationtemplates.SystemTemplatesService; import org.opensearch.cluster.coordination.ClusterBootstrapService; import org.opensearch.cluster.coordination.ClusterFormationFailureHelper; import org.opensearch.cluster.coordination.Coordinator; @@ -757,7 +758,9 @@ public void apply(Settings value, Settings current, Settings previous) { SearchService.CLUSTER_ALLOW_DERIVED_FIELD_SETTING, // Composite index settings - CompositeIndexSettings.STAR_TREE_INDEX_ENABLED_SETTING + CompositeIndexSettings.STAR_TREE_INDEX_ENABLED_SETTING, + + SystemTemplatesService.SETTING_APPLICATION_BASED_CONFIGURATION_TEMPLATES_ENABLED ) ) ); diff --git a/server/src/main/java/org/opensearch/common/settings/FeatureFlagSettings.java b/server/src/main/java/org/opensearch/common/settings/FeatureFlagSettings.java index b6166f5d3cce1..d893d8d92be3b 100644 --- a/server/src/main/java/org/opensearch/common/settings/FeatureFlagSettings.java +++ b/server/src/main/java/org/opensearch/common/settings/FeatureFlagSettings.java @@ -38,6 +38,7 @@ protected FeatureFlagSettings( FeatureFlags.REMOTE_STORE_MIGRATION_EXPERIMENTAL_SETTING, FeatureFlags.PLUGGABLE_CACHE_SETTING, FeatureFlags.REMOTE_PUBLICATION_EXPERIMENTAL_SETTING, - FeatureFlags.STAR_TREE_INDEX_SETTING + FeatureFlags.STAR_TREE_INDEX_SETTING, + FeatureFlags.APPLICATION_BASED_CONFIGURATION_TEMPLATES_SETTING ); } diff --git a/server/src/main/java/org/opensearch/common/util/FeatureFlags.java b/server/src/main/java/org/opensearch/common/util/FeatureFlags.java index ceb2559a0e16c..9d57e6939e3ae 100644 --- a/server/src/main/java/org/opensearch/common/util/FeatureFlags.java +++ b/server/src/main/java/org/opensearch/common/util/FeatureFlags.java @@ -107,6 +107,16 @@ public class FeatureFlags { public static final String STAR_TREE_INDEX = "opensearch.experimental.feature.composite_index.star_tree.enabled"; public static final Setting STAR_TREE_INDEX_SETTING = 
Setting.boolSetting(STAR_TREE_INDEX, false, Property.NodeScope); + /** + * Gates the functionality of application based configuration templates. + */ + public static final String APPLICATION_BASED_CONFIGURATION_TEMPLATES = "opensearch.experimental.feature.application_templates.enabled"; + public static final Setting APPLICATION_BASED_CONFIGURATION_TEMPLATES_SETTING = Setting.boolSetting( + APPLICATION_BASED_CONFIGURATION_TEMPLATES, + false, + Property.NodeScope + ); + private static final List> ALL_FEATURE_FLAG_SETTINGS = List.of( REMOTE_STORE_MIGRATION_EXPERIMENTAL_SETTING, EXTENSIONS_SETTING, @@ -116,8 +126,10 @@ public class FeatureFlags { TIERED_REMOTE_INDEX_SETTING, PLUGGABLE_CACHE_SETTING, REMOTE_PUBLICATION_EXPERIMENTAL_SETTING, - STAR_TREE_INDEX_SETTING + STAR_TREE_INDEX_SETTING, + APPLICATION_BASED_CONFIGURATION_TEMPLATES_SETTING ); + /** * Should store the settings from opensearch.yml. */ diff --git a/server/src/main/java/org/opensearch/node/Node.java b/server/src/main/java/org/opensearch/node/Node.java index 281f697d9bb79..d91b2a45a48c6 100644 --- a/server/src/main/java/org/opensearch/node/Node.java +++ b/server/src/main/java/org/opensearch/node/Node.java @@ -68,6 +68,8 @@ import org.opensearch.cluster.InternalClusterInfoService; import org.opensearch.cluster.NodeConnectionsService; import org.opensearch.cluster.action.index.MappingUpdatedAction; +import org.opensearch.cluster.applicationtemplates.SystemTemplatesPlugin; +import org.opensearch.cluster.applicationtemplates.SystemTemplatesService; import org.opensearch.cluster.coordination.PersistedStateRegistry; import org.opensearch.cluster.metadata.AliasValidator; import org.opensearch.cluster.metadata.IndexTemplateMetadata; @@ -669,11 +671,20 @@ protected Node( resourcesToClose.add(clusterService); final Set> consistentSettings = settingsModule.getConsistentSettings(); if (consistentSettings.isEmpty() == false) { - clusterService.addLocalNodeMasterListener( + clusterService.addLocalNodeClusterManagerListener( new ConsistentSettingsService(settings, clusterService, consistentSettings).newHashPublisher() ); } + SystemTemplatesService systemTemplatesService = new SystemTemplatesService( + pluginsService.filterPlugins(SystemTemplatesPlugin.class), + threadPool, + clusterService.getClusterSettings(), + settings + ); + systemTemplatesService.verifyRepositories(); + clusterService.addLocalNodeClusterManagerListener(systemTemplatesService); + final ClusterInfoService clusterInfoService = newClusterInfoService(settings, clusterService, threadPool, client); final UsageService usageService = new UsageService(); diff --git a/server/src/test/java/org/opensearch/cluster/applicationtemplates/ClusterStateSystemTemplateLoaderTests.java b/server/src/test/java/org/opensearch/cluster/applicationtemplates/ClusterStateSystemTemplateLoaderTests.java new file mode 100644 index 0000000000000..63caccc87e67a --- /dev/null +++ b/server/src/test/java/org/opensearch/cluster/applicationtemplates/ClusterStateSystemTemplateLoaderTests.java @@ -0,0 +1,148 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. 
+ */ + +package org.opensearch.cluster.applicationtemplates; + +import org.opensearch.OpenSearchCorruptionException; +import org.opensearch.cluster.metadata.ComponentTemplate; +import org.opensearch.cluster.service.ClusterService; +import org.opensearch.common.xcontent.json.JsonXContent; +import org.opensearch.core.common.bytes.BytesArray; +import org.opensearch.core.xcontent.DeprecationHandler; +import org.opensearch.core.xcontent.NamedXContentRegistry; +import org.opensearch.core.xcontent.XContentParser; +import org.opensearch.test.OpenSearchSingleNodeTestCase; + +import java.io.IOException; +import java.util.UUID; + +public class ClusterStateSystemTemplateLoaderTests extends OpenSearchSingleNodeTestCase { + + public static final String SAMPLE_TEMPLATE = "{\n" + + " \"template\": {\n" + + " \"settings\": {\n" + + " \"index\": {\n" + + " \"codec\": \"best_compression\",\n" + + " \"merge.policy\": \"log_byte_size\",\n" + + " \"refresh_interval\": \"60s\"\n" + + " }\n" + + " }\n" + + " },\n" + + " \"_meta\": {\n" + + " \"_type\": \"@abc_template\",\n" + + " \"_version\": 1\n" + + " },\n" + + " \"version\": 1\n" + + "}"; + + public static final String SAMPLE_TEMPLATE_V2 = "{\n" + + " \"template\": {\n" + + " \"settings\": {\n" + + " \"index\": {\n" + + " \"codec\": \"best_compression\",\n" + + " \"merge.policy\": \"log_byte_size\",\n" + + " \"refresh_interval\": \"60s\"\n" + + " }\n" + + " }\n" + + " },\n" + + " \"_meta\": {\n" + + " \"_type\": \"@abc_template\",\n" + + " \"_version\": 2\n" + + " },\n" + + " \"version\": 2\n" + + "}"; + + public void testLoadTemplate() throws IOException { + ClusterStateSystemTemplateLoader loader = new ClusterStateSystemTemplateLoader( + node().client(), + () -> node().injector().getInstance(ClusterService.class).state() + ); + + TemplateRepositoryMetadata repositoryMetadata = new TemplateRepositoryMetadata(UUID.randomUUID().toString(), 1L); + SystemTemplateMetadata metadata = SystemTemplateMetadata.fromComponentTemplateInfo("dummy", 1L); + + // Load for the first time + assertTrue( + loader.loadTemplate( + new SystemTemplate( + new BytesArray(SAMPLE_TEMPLATE), + metadata, + new TemplateRepositoryMetadata(UUID.randomUUID().toString(), 1L) + ) + ) + ); + assertTrue( + node().injector() + .getInstance(ClusterService.class) + .state() + .metadata() + .componentTemplates() + .containsKey(metadata.fullyQualifiedName()) + ); + XContentParser parser = JsonXContent.jsonXContent.createParser( + NamedXContentRegistry.EMPTY, + DeprecationHandler.IGNORE_DEPRECATIONS, + SAMPLE_TEMPLATE + ); + assertEquals( + node().injector().getInstance(ClusterService.class).state().metadata().componentTemplates().get(metadata.fullyQualifiedName()), + ComponentTemplate.parse(parser) + ); + + // Retry and ensure loading does not happen again with same version + assertFalse( + loader.loadTemplate( + new SystemTemplate( + new BytesArray(SAMPLE_TEMPLATE), + metadata, + new TemplateRepositoryMetadata(UUID.randomUUID().toString(), 1L) + ) + ) + ); + + // Retry with new template version + SystemTemplateMetadata newVersionMetadata = SystemTemplateMetadata.fromComponentTemplateInfo("dummy", 2L); + assertTrue(loader.loadTemplate(new SystemTemplate(new BytesArray(SAMPLE_TEMPLATE_V2), newVersionMetadata, repositoryMetadata))); + parser = JsonXContent.jsonXContent.createParser( + NamedXContentRegistry.EMPTY, + DeprecationHandler.IGNORE_DEPRECATIONS, + SAMPLE_TEMPLATE_V2 + ); + assertEquals( + node().injector() + .getInstance(ClusterService.class) + .state() + .metadata() + .componentTemplates() + 
.get(newVersionMetadata.fullyQualifiedName()), + ComponentTemplate.parse(parser) + ); + } + + public void testLoadTemplateVersionMismatch() throws IOException { + ClusterStateSystemTemplateLoader loader = new ClusterStateSystemTemplateLoader( + node().client(), + () -> node().injector().getInstance(ClusterService.class).state() + ); + + TemplateRepositoryMetadata repositoryMetadata = new TemplateRepositoryMetadata(UUID.randomUUID().toString(), 1L); + SystemTemplateMetadata metadata = SystemTemplateMetadata.fromComponentTemplateInfo("dummy", 2L); + + // Load for the first time + assertThrows( + OpenSearchCorruptionException.class, + () -> loader.loadTemplate( + new SystemTemplate( + new BytesArray(SAMPLE_TEMPLATE), + metadata, + new TemplateRepositoryMetadata(UUID.randomUUID().toString(), 1L) + ) + ) + ); + } +} diff --git a/server/src/test/java/org/opensearch/cluster/applicationtemplates/SystemTemplatesServiceTests.java b/server/src/test/java/org/opensearch/cluster/applicationtemplates/SystemTemplatesServiceTests.java new file mode 100644 index 0000000000000..4addf3802b40d --- /dev/null +++ b/server/src/test/java/org/opensearch/cluster/applicationtemplates/SystemTemplatesServiceTests.java @@ -0,0 +1,90 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +package org.opensearch.cluster.applicationtemplates; + +import org.opensearch.cluster.service.applicationtemplates.TestSystemTemplatesRepositoryPlugin; +import org.opensearch.common.settings.ClusterSettings; +import org.opensearch.common.settings.Settings; +import org.opensearch.common.util.FeatureFlags; +import org.opensearch.common.util.concurrent.OpenSearchExecutors; +import org.opensearch.test.OpenSearchTestCase; +import org.opensearch.threadpool.ThreadPool; + +import java.io.IOException; +import java.util.ArrayList; +import java.util.List; + +import org.mockito.Mockito; + +import static org.opensearch.common.settings.ClusterSettings.BUILT_IN_CLUSTER_SETTINGS; +import static org.mockito.Mockito.when; + +public class SystemTemplatesServiceTests extends OpenSearchTestCase { + + private SystemTemplatesService systemTemplatesService; + + public void testSystemTemplatesLoaded() throws IOException { + setupService(true); + + systemTemplatesService.onClusterManager(); + SystemTemplatesService.Stats stats = systemTemplatesService.stats(); + assertNotNull(stats); + assertEquals(stats.getTemplatesLoaded(), 1L); + assertEquals(stats.getFailedLoadingTemplates(), 0L); + assertEquals(stats.getFailedLoadingRepositories(), 1L); + } + + public void testSystemTemplatesVerify() throws IOException { + setupService(false); + + systemTemplatesService.verifyRepositories(); + + SystemTemplatesService.Stats stats = systemTemplatesService.stats(); + assertNotNull(stats); + assertEquals(stats.getTemplatesLoaded(), 0L); + assertEquals(stats.getFailedLoadingTemplates(), 0L); + assertEquals(stats.getFailedLoadingRepositories(), 0L); + } + + public void testSystemTemplatesVerifyWithFailingRepository() throws IOException { + setupService(true); + + assertThrows(IllegalStateException.class, () -> systemTemplatesService.verifyRepositories()); + + SystemTemplatesService.Stats stats = systemTemplatesService.stats(); + assertNotNull(stats); + assertEquals(stats.getTemplatesLoaded(), 0L); + assertEquals(stats.getFailedLoadingTemplates(), 0L); + assertEquals(stats.getFailedLoadingRepositories(), 1L); + } + + void 
setupService(boolean errorFromMockPlugin) throws IOException { + FeatureFlags.initializeFeatureFlags(Settings.builder().put(FeatureFlags.APPLICATION_BASED_CONFIGURATION_TEMPLATES, true).build()); + + ThreadPool mockPool = Mockito.mock(ThreadPool.class); + when(mockPool.generic()).thenReturn(OpenSearchExecutors.newDirectExecutorService()); + + List plugins = new ArrayList<>(); + plugins.add(new TestSystemTemplatesRepositoryPlugin()); + + if (errorFromMockPlugin) { + SystemTemplatesPlugin mockPlugin = Mockito.mock(SystemTemplatesPlugin.class); + when(mockPlugin.loadRepository()).thenThrow(new IOException()); + plugins.add(mockPlugin); + } + + ClusterSettings mockSettings = new ClusterSettings(Settings.EMPTY, BUILT_IN_CLUSTER_SETTINGS); + systemTemplatesService = new SystemTemplatesService( + plugins, + mockPool, + mockSettings, + Settings.builder().put(SystemTemplatesService.SETTING_APPLICATION_BASED_CONFIGURATION_TEMPLATES_ENABLED.getKey(), true).build() + ); + } +} diff --git a/test/framework/src/main/java/org/opensearch/cluster/service/applicationtemplates/TestSystemTemplatesRepositoryPlugin.java b/test/framework/src/main/java/org/opensearch/cluster/service/applicationtemplates/TestSystemTemplatesRepositoryPlugin.java new file mode 100644 index 0000000000000..c5245c7109d8f --- /dev/null +++ b/test/framework/src/main/java/org/opensearch/cluster/service/applicationtemplates/TestSystemTemplatesRepositoryPlugin.java @@ -0,0 +1,72 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +package org.opensearch.cluster.service.applicationtemplates; + +import org.opensearch.cluster.applicationtemplates.SystemTemplate; +import org.opensearch.cluster.applicationtemplates.SystemTemplateLoader; +import org.opensearch.cluster.applicationtemplates.SystemTemplateMetadata; +import org.opensearch.cluster.applicationtemplates.SystemTemplateRepository; +import org.opensearch.cluster.applicationtemplates.SystemTemplatesPlugin; +import org.opensearch.cluster.applicationtemplates.TemplateRepositoryMetadata; +import org.opensearch.core.common.bytes.BytesReference; +import org.opensearch.plugins.Plugin; + +import java.io.IOException; +import java.nio.ByteBuffer; +import java.nio.charset.StandardCharsets; +import java.util.List; + +public class TestSystemTemplatesRepositoryPlugin extends Plugin implements SystemTemplatesPlugin { + + private final SystemTemplateMetadata templateMetadata = SystemTemplateMetadata.fromComponentTemplateInfo("dummy", 1); + + private final TemplateRepositoryMetadata repoMetadata = new TemplateRepositoryMetadata("test", 1); + + private final SystemTemplate systemTemplate = new SystemTemplate( + BytesReference.fromByteBuffer(ByteBuffer.wrap("content".getBytes(StandardCharsets.UTF_8))), + templateMetadata, + repoMetadata + ); + + @Override + public SystemTemplateRepository loadRepository() throws IOException { + return new SystemTemplateRepository() { + @Override + public TemplateRepositoryMetadata metadata() { + return repoMetadata; + } + + @Override + public List listTemplates() throws IOException { + return List.of(templateMetadata); + } + + @Override + public SystemTemplate getTemplate(SystemTemplateMetadata template) throws IOException { + return systemTemplate; + } + + @Override + public void close() throws Exception {} + }; + } + + @Override + public SystemTemplateLoader loaderFor(SystemTemplateMetadata templateMetadata) { + 
return new SystemTemplateLoader() { // Asserting Loader + @Override + public boolean loadTemplate(SystemTemplate template) throws IOException { + assert template.templateMetadata() == TestSystemTemplatesRepositoryPlugin.this.templateMetadata; + assert template.repositoryMetadata() == repoMetadata; + assert template.templateContent() == systemTemplate.templateContent(); + return true; + } + }; + } +} diff --git a/test/framework/src/main/java/org/opensearch/test/OpenSearchIntegTestCase.java b/test/framework/src/main/java/org/opensearch/test/OpenSearchIntegTestCase.java index ca5ddf21710af..7a50502e418e2 100644 --- a/test/framework/src/main/java/org/opensearch/test/OpenSearchIntegTestCase.java +++ b/test/framework/src/main/java/org/opensearch/test/OpenSearchIntegTestCase.java @@ -90,6 +90,7 @@ import org.opensearch.cluster.routing.allocation.decider.AwarenessAllocationDecider; import org.opensearch.cluster.routing.allocation.decider.EnableAllocationDecider; import org.opensearch.cluster.service.ClusterService; +import org.opensearch.cluster.service.applicationtemplates.TestSystemTemplatesRepositoryPlugin; import org.opensearch.common.Nullable; import org.opensearch.common.Priority; import org.opensearch.common.collect.Tuple; @@ -682,6 +683,7 @@ protected Settings featureFlagSettings() { } // Enabling Telemetry setting by default featureSettings.put(FeatureFlags.TELEMETRY_SETTING.getKey(), true); + featureSettings.put(FeatureFlags.APPLICATION_BASED_CONFIGURATION_TEMPLATES_SETTING.getKey(), true); return featureSettings.build(); } @@ -2168,6 +2170,7 @@ protected Collection> getMockPlugins() { if (addMockTelemetryPlugin()) { mocks.add(MockTelemetryPlugin.class); } + mocks.add(TestSystemTemplatesRepositoryPlugin.class); return Collections.unmodifiableList(mocks); } diff --git a/test/framework/src/main/java/org/opensearch/test/OpenSearchSingleNodeTestCase.java b/test/framework/src/main/java/org/opensearch/test/OpenSearchSingleNodeTestCase.java index 45ea63e862df6..1dfad60c04155 100644 --- a/test/framework/src/main/java/org/opensearch/test/OpenSearchSingleNodeTestCase.java +++ b/test/framework/src/main/java/org/opensearch/test/OpenSearchSingleNodeTestCase.java @@ -438,6 +438,7 @@ protected Settings featureFlagSettings() { featureSettings.put(builtInFlag.getKey(), builtInFlag.getDefaultRaw(Settings.EMPTY)); } featureSettings.put(FeatureFlags.TELEMETRY_SETTING.getKey(), true); + featureSettings.put(FeatureFlags.APPLICATION_BASED_CONFIGURATION_TEMPLATES_SETTING.getKey(), true); return featureSettings.build(); } From 8ae728c5610900888449af2eae2ef8b23d664538 Mon Sep 17 00:00:00 2001 From: gaobinlong Date: Wed, 17 Jul 2024 04:34:09 +0800 Subject: [PATCH 069/167] Fix bulk upsert ignores the default_pipeline and final_pipeline when the auto-created index matches the index template (#12891) * Fix bulk upsert ignores the default_pipeline and final_pipeline when auto-created index matches with the index template Signed-off-by: Gao Binlong * Modify changelog & comment Signed-off-by: Gao Binlong * Use new approach Signed-off-by: Gao Binlong * Fix test failure Signed-off-by: Gao Binlong --------- Signed-off-by: Gao Binlong --- CHANGELOG.md | 1 + .../rest-api-spec/test/ingest/70_bulk.yml | 38 +++++++++++++++++++ .../action/update/UpdateRequest.java | 4 +- .../action/update/UpdateRequestTests.java | 4 +- .../opensearch/ingest/IngestServiceTests.java | 14 +++++++ 5 files changed, 58 insertions(+), 3 deletions(-) diff --git a/CHANGELOG.md b/CHANGELOG.md index f885261f404ae..b863b9d13e789 100644 --- a/CHANGELOG.md 
+++ b/CHANGELOG.md @@ -73,6 +73,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), - Refactoring FilterPath.parse by using an iterative approach ([#14200](https://github.com/opensearch-project/OpenSearch/pull/14200)) - Refactoring Grok.validatePatternBank by using an iterative approach ([#14206](https://github.com/opensearch-project/OpenSearch/pull/14206)) - Update help output for _cat ([#14722](https://github.com/opensearch-project/OpenSearch/pull/14722)) +- Fix bulk upsert ignores the default_pipeline and final_pipeline when auto-created index matches the index template ([#12891](https://github.com/opensearch-project/OpenSearch/pull/12891)) ### Security diff --git a/modules/ingest-common/src/yamlRestTest/resources/rest-api-spec/test/ingest/70_bulk.yml b/modules/ingest-common/src/yamlRestTest/resources/rest-api-spec/test/ingest/70_bulk.yml index edb7b77eb8d28..8830503940f4d 100644 --- a/modules/ingest-common/src/yamlRestTest/resources/rest-api-spec/test/ingest/70_bulk.yml +++ b/modules/ingest-common/src/yamlRestTest/resources/rest-api-spec/test/ingest/70_bulk.yml @@ -41,6 +41,10 @@ teardown: ingest.delete_pipeline: id: "pipeline2" ignore: 404 + - do: + indices.delete_index_template: + name: test_index_template_for_bulk + ignore: 404 --- "Test bulk request without default pipeline": @@ -168,6 +172,40 @@ teardown: id: test_id3 - match: { _source: {"f1": "v2", "f2": 47, "field1": "value1"}} +# related issue: https://github.com/opensearch-project/OpenSearch/issues/12888 +--- +"Test bulk upsert honors default_pipeline and final_pipeline when the auto-created index matches with the index template": + - skip: + version: " - 2.99.99" + reason: "fixed in 3.0.0" + - do: + indices.put_index_template: + name: test_for_bulk_upsert_index_template + body: + index_patterns: test_bulk_upsert_* + template: + settings: + number_of_shards: 1 + number_of_replicas: 0 + default_pipeline: pipeline1 + final_pipeline: pipeline2 + + - do: + bulk: + refresh: true + body: + - '{"update": {"_index": "test_bulk_upsert_index", "_id": "test_id3"}}' + - '{"upsert": {"f1": "v2", "f2": 47}, "doc": {"x": 1}}' + + - match: { errors: false } + - match: { items.0.update.result: created } + + - do: + get: + index: test_bulk_upsert_index + id: test_id3 + - match: { _source: {"f1": "v2", "f2": 47, "field1": "value1", "field2": "value2"}} + --- "Test bulk API with batch enabled happy case": - skip: diff --git a/server/src/main/java/org/opensearch/action/update/UpdateRequest.java b/server/src/main/java/org/opensearch/action/update/UpdateRequest.java index 9654bd1c114ba..6cb5e049e0f1e 100644 --- a/server/src/main/java/org/opensearch/action/update/UpdateRequest.java +++ b/server/src/main/java/org/opensearch/action/update/UpdateRequest.java @@ -717,7 +717,7 @@ public IndexRequest doc() { private IndexRequest safeDoc() { if (doc == null) { - doc = new IndexRequest(); + doc = new IndexRequest(index); } return doc; } @@ -803,7 +803,7 @@ public IndexRequest upsertRequest() { private IndexRequest safeUpsertRequest() { if (upsertRequest == null) { - upsertRequest = new IndexRequest(); + upsertRequest = new IndexRequest(index); } return upsertRequest; } diff --git a/server/src/test/java/org/opensearch/action/update/UpdateRequestTests.java b/server/src/test/java/org/opensearch/action/update/UpdateRequestTests.java index b70fda0d86240..e85dfa8cca556 100644 --- a/server/src/test/java/org/opensearch/action/update/UpdateRequestTests.java +++ 
b/server/src/test/java/org/opensearch/action/update/UpdateRequestTests.java @@ -247,6 +247,7 @@ public void testFromXContent() throws Exception { assertThat(params, notNullValue()); assertThat(params.size(), equalTo(1)); assertThat(params.get("param1").toString(), equalTo("value1")); + assertThat(request.upsertRequest().index(), equalTo("test")); Map upsertDoc = XContentHelper.convertToMap( request.upsertRequest().source(), true, @@ -304,6 +305,7 @@ public void testFromXContent() throws Exception { ) ); Map doc = request.doc().sourceAsMap(); + assertThat(request.doc().index(), equalTo("test")); assertThat(doc.get("field1").toString(), equalTo("value1")); assertThat(((Map) doc.get("compound")).get("field2").toString(), equalTo("value2")); } @@ -662,7 +664,7 @@ public void testToString() throws IOException { request.toString(), equalTo( "update {[test][1], doc_as_upsert[false], " - + "doc[index {[null][null], source[{\"body\":\"bar\"}]}], scripted_upsert[false], detect_noop[true]}" + + "doc[index {[test][null], source[{\"body\":\"bar\"}]}], scripted_upsert[false], detect_noop[true]}" ) ); } diff --git a/server/src/test/java/org/opensearch/ingest/IngestServiceTests.java b/server/src/test/java/org/opensearch/ingest/IngestServiceTests.java index 684297c11c140..e61fbb6e1dbff 100644 --- a/server/src/test/java/org/opensearch/ingest/IngestServiceTests.java +++ b/server/src/test/java/org/opensearch/ingest/IngestServiceTests.java @@ -1605,6 +1605,13 @@ public void testResolveRequiredOrDefaultPipelineDefaultPipeline() { assertThat(result, is(true)); assertThat(indexRequest.isPipelineResolved(), is(true)); assertThat(indexRequest.getPipeline(), equalTo("default-pipeline")); + + // index name matches with ITMD for bulk upsert + UpdateRequest updateRequest = new UpdateRequest("idx", "id1").upsert(emptyMap()).script(mockScript("1")); + result = IngestService.resolvePipelines(updateRequest, TransportBulkAction.getIndexWriteRequest(updateRequest), metadata); + assertThat(result, is(true)); + assertThat(updateRequest.upsertRequest().isPipelineResolved(), is(true)); + assertThat(updateRequest.upsertRequest().getPipeline(), equalTo("default-pipeline")); } public void testResolveFinalPipeline() { @@ -1642,6 +1649,13 @@ public void testResolveFinalPipeline() { assertThat(indexRequest.isPipelineResolved(), is(true)); assertThat(indexRequest.getPipeline(), equalTo("_none")); assertThat(indexRequest.getFinalPipeline(), equalTo("final-pipeline")); + + // index name matches with ITMD for bulk upsert: + UpdateRequest updateRequest = new UpdateRequest("idx", "id1").upsert(emptyMap()).script(mockScript("1")); + result = IngestService.resolvePipelines(updateRequest, TransportBulkAction.getIndexWriteRequest(updateRequest), metadata); + assertThat(result, is(true)); + assertThat(updateRequest.upsertRequest().isPipelineResolved(), is(true)); + assertThat(updateRequest.upsertRequest().getFinalPipeline(), equalTo("final-pipeline")); } public void testResolveRequestOrDefaultPipelineAndFinalPipeline() { From 0e31c02585059db726ebaa71b57dd8dbad7f6007 Mon Sep 17 00:00:00 2001 From: Mohit Godwani <81609427+mgodwan@users.noreply.github.com> Date: Wed, 17 Jul 2024 13:57:20 +0530 Subject: [PATCH 070/167] Fix flaky test due to node being used across all tests (#14787) Signed-off-by: Mohit Godwani --- .../ClusterStateSystemTemplateLoaderTests.java | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/server/src/test/java/org/opensearch/cluster/applicationtemplates/ClusterStateSystemTemplateLoaderTests.java 
b/server/src/test/java/org/opensearch/cluster/applicationtemplates/ClusterStateSystemTemplateLoaderTests.java index 63caccc87e67a..c7cfab6d38e04 100644 --- a/server/src/test/java/org/opensearch/cluster/applicationtemplates/ClusterStateSystemTemplateLoaderTests.java +++ b/server/src/test/java/org/opensearch/cluster/applicationtemplates/ClusterStateSystemTemplateLoaderTests.java @@ -145,4 +145,9 @@ public void testLoadTemplateVersionMismatch() throws IOException { ) ); } + + @Override + protected boolean resetNodeAfterTest() { + return true; + } } From a2cef8fcdeeaed4ca11f6c6d69a9c9cb5cd4d2c5 Mon Sep 17 00:00:00 2001 From: Sarthak Aggarwal Date: Wed, 17 Jul 2024 15:26:49 +0530 Subject: [PATCH 071/167] Star Tree Implementation [OnHeap] (#14512) --------- Signed-off-by: Sarthak Aggarwal --- .../composite/Composite99DocValuesWriter.java | 5 +- .../datacube/startree/StarTreeDocument.java | 34 + .../aggregators/CountValueAggregator.java | 66 ++ .../aggregators/MetricAggregatorInfo.java | 130 ++++ .../aggregators/SumValueAggregator.java | 97 +++ .../startree/aggregators/ValueAggregator.java | 64 ++ .../aggregators/ValueAggregatorFactory.java | 56 ++ .../numerictype/StarTreeNumericType.java | 66 ++ .../StarTreeNumericTypeConverters.java | 58 ++ .../aggregators/numerictype/package-info.java | 14 + .../startree/aggregators/package-info.java | 14 + .../startree/builder/BaseStarTreeBuilder.java | 668 +++++++++++++++++ .../builder/OnHeapStarTreeBuilder.java | 213 ++++++ .../startree/builder/StarTreeBuilder.java | 29 + .../StarTreeDocValuesIteratorAdapter.java | 82 ++ .../startree/builder/StarTreesBuilder.java | 114 +++ .../startree/builder/package-info.java | 14 + .../datacube/startree/package-info.java | 2 + .../utils/SequentialDocValuesIterator.java | 137 ++++ .../datacube/startree/utils/TreeNode.java | 65 ++ .../datacube/startree/utils/package-info.java | 14 + .../StarTreeDocValuesFormatTests.java | 36 - .../CountValueAggregatorTests.java | 53 ++ .../MetricAggregatorInfoTests.java | 123 +++ .../aggregators/SumValueAggregatorTests.java | 72 ++ .../ValueAggregatorFactoryTests.java | 27 + .../builder/BaseStarTreeBuilderTests.java | 216 ++++++ .../builder/OnHeapStarTreeBuilderTests.java | 706 ++++++++++++++++++ ...StarTreeDocValuesIteratorAdapterTests.java | 139 ++++ .../StarTreeValuesIteratorFactoryTests.java | 131 ++++ .../builder/StarTreesBuilderTests.java | 132 ++++ .../SequentialDocValuesIteratorTests.java | 46 ++ 32 files changed, 3586 insertions(+), 37 deletions(-) create mode 100644 server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/StarTreeDocument.java create mode 100644 server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/CountValueAggregator.java create mode 100644 server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/MetricAggregatorInfo.java create mode 100644 server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/SumValueAggregator.java create mode 100644 server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/ValueAggregator.java create mode 100644 server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/ValueAggregatorFactory.java create mode 100644 server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/numerictype/StarTreeNumericType.java create mode 100644 
server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/numerictype/StarTreeNumericTypeConverters.java create mode 100644 server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/numerictype/package-info.java create mode 100644 server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/package-info.java create mode 100644 server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/builder/BaseStarTreeBuilder.java create mode 100644 server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/builder/OnHeapStarTreeBuilder.java create mode 100644 server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/builder/StarTreeBuilder.java create mode 100644 server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/builder/StarTreeDocValuesIteratorAdapter.java create mode 100644 server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/builder/StarTreesBuilder.java create mode 100644 server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/builder/package-info.java create mode 100644 server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/utils/SequentialDocValuesIterator.java create mode 100644 server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/utils/TreeNode.java create mode 100644 server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/utils/package-info.java create mode 100644 server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/CountValueAggregatorTests.java create mode 100644 server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/MetricAggregatorInfoTests.java create mode 100644 server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/SumValueAggregatorTests.java create mode 100644 server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/ValueAggregatorFactoryTests.java create mode 100644 server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/builder/BaseStarTreeBuilderTests.java create mode 100644 server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/builder/OnHeapStarTreeBuilderTests.java create mode 100644 server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/builder/StarTreeDocValuesIteratorAdapterTests.java create mode 100644 server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/builder/StarTreeValuesIteratorFactoryTests.java create mode 100644 server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/builder/StarTreesBuilderTests.java create mode 100644 server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/utils/SequentialDocValuesIteratorTests.java diff --git a/server/src/main/java/org/opensearch/index/codec/composite/Composite99DocValuesWriter.java b/server/src/main/java/org/opensearch/index/codec/composite/Composite99DocValuesWriter.java index 75bbf78dbdad2..3753b20a8bea3 100644 --- a/server/src/main/java/org/opensearch/index/codec/composite/Composite99DocValuesWriter.java +++ b/server/src/main/java/org/opensearch/index/codec/composite/Composite99DocValuesWriter.java @@ -14,6 +14,7 @@ import org.apache.lucene.index.MergeState; import org.apache.lucene.index.SegmentWriteState; import org.opensearch.common.annotation.ExperimentalApi; +import 
org.opensearch.index.compositeindex.datacube.startree.builder.StarTreesBuilder; import org.opensearch.index.mapper.CompositeMappedFieldType; import org.opensearch.index.mapper.MapperService; import org.opensearch.index.mapper.StarTreeMapper; @@ -98,7 +99,9 @@ private void createCompositeIndicesIfPossible(DocValuesProducer valuesProducer, if (compositeFieldSet.isEmpty()) { for (CompositeMappedFieldType mappedType : compositeMappedFieldTypes) { if (mappedType instanceof StarTreeMapper.StarTreeFieldType) { - // TODO : Call StarTree builder + try (StarTreesBuilder starTreesBuilder = new StarTreesBuilder(fieldProducerMap, state, mapperService)) { + starTreesBuilder.build(); + } } } } diff --git a/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/StarTreeDocument.java b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/StarTreeDocument.java new file mode 100644 index 0000000000000..0ce2b3a5cdac5 --- /dev/null +++ b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/StarTreeDocument.java @@ -0,0 +1,34 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +package org.opensearch.index.compositeindex.datacube.startree; + +import org.opensearch.common.annotation.ExperimentalApi; + +import java.util.Arrays; + +/** + * Star tree document + * + * @opensearch.experimental + */ +@ExperimentalApi +public class StarTreeDocument { + public final Long[] dimensions; + public final Object[] metrics; + + public StarTreeDocument(Long[] dimensions, Object[] metrics) { + this.dimensions = dimensions; + this.metrics = metrics; + } + + @Override + public String toString() { + return Arrays.toString(dimensions) + " | " + Arrays.toString(metrics); + } +} diff --git a/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/CountValueAggregator.java b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/CountValueAggregator.java new file mode 100644 index 0000000000000..d72f4a292dc0a --- /dev/null +++ b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/CountValueAggregator.java @@ -0,0 +1,66 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. 
+ */ +package org.opensearch.index.compositeindex.datacube.startree.aggregators; + +import org.opensearch.index.compositeindex.datacube.MetricStat; +import org.opensearch.index.compositeindex.datacube.startree.aggregators.numerictype.StarTreeNumericType; + +/** + * Count value aggregator for star tree + * + * @opensearch.experimental + */ +public class CountValueAggregator implements ValueAggregator { + public static final StarTreeNumericType VALUE_AGGREGATOR_TYPE = StarTreeNumericType.LONG; + public static final long DEFAULT_INITIAL_VALUE = 1L; + + @Override + public MetricStat getAggregationType() { + return MetricStat.COUNT; + } + + @Override + public StarTreeNumericType getAggregatedValueType() { + return VALUE_AGGREGATOR_TYPE; + } + + @Override + public Long getInitialAggregatedValueForSegmentDocValue(Long segmentDocValue, StarTreeNumericType starTreeNumericType) { + return DEFAULT_INITIAL_VALUE; + } + + @Override + public Long mergeAggregatedValueAndSegmentValue(Long value, Long segmentDocValue, StarTreeNumericType starTreeNumericType) { + return value + 1; + } + + @Override + public Long mergeAggregatedValues(Long value, Long aggregatedValue) { + return value + aggregatedValue; + } + + @Override + public Long getInitialAggregatedValue(Long value) { + return value; + } + + @Override + public int getMaxAggregatedValueByteSize() { + return Long.BYTES; + } + + @Override + public Long toLongValue(Long value) { + return value; + } + + @Override + public Long toStarTreeNumericTypeValue(Long value, StarTreeNumericType type) { + return value; + } +} diff --git a/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/MetricAggregatorInfo.java b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/MetricAggregatorInfo.java new file mode 100644 index 0000000000000..46f1b1ac11063 --- /dev/null +++ b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/MetricAggregatorInfo.java @@ -0,0 +1,130 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. 
+ */ +package org.opensearch.index.compositeindex.datacube.startree.aggregators; + +import org.opensearch.index.compositeindex.datacube.MetricStat; +import org.opensearch.index.compositeindex.datacube.startree.aggregators.numerictype.StarTreeNumericType; +import org.opensearch.index.compositeindex.datacube.startree.utils.SequentialDocValuesIterator; +import org.opensearch.index.fielddata.IndexNumericFieldData; + +import java.util.Comparator; +import java.util.Objects; + +/** + * Builds aggregation function and doc values field pair to support various aggregations + * + * @opensearch.experimental + */ +public class MetricAggregatorInfo implements Comparable { + + public static final String DELIMITER = "_"; + private final String metric; + private final String starFieldName; + private final MetricStat metricStat; + private final String field; + private final ValueAggregator valueAggregators; + private final StarTreeNumericType starTreeNumericType; + private final SequentialDocValuesIterator metricStatReader; + + /** + * Constructor for MetricAggregatorInfo + */ + public MetricAggregatorInfo( + MetricStat metricStat, + String field, + String starFieldName, + IndexNumericFieldData.NumericType numericType, + SequentialDocValuesIterator metricStatReader + ) { + this.metricStat = metricStat; + this.valueAggregators = ValueAggregatorFactory.getValueAggregator(metricStat); + this.starTreeNumericType = StarTreeNumericType.fromNumericType(numericType); + this.metricStatReader = metricStatReader; + this.field = field; + this.starFieldName = starFieldName; + this.metric = toFieldName(); + } + + /** + * @return metric type + */ + public MetricStat getMetricStat() { + return metricStat; + } + + /** + * @return field Name + */ + public String getField() { + return field; + } + + /** + * @return the metric stat name + */ + public String getMetric() { + return metric; + } + + /** + * @return aggregator for the field value + */ + public ValueAggregator getValueAggregators() { + return valueAggregators; + } + + /** + * @return star tree aggregated value type + */ + public StarTreeNumericType getAggregatedValueType() { + return starTreeNumericType; + } + + /** + * @return metric value reader iterator + */ + public SequentialDocValuesIterator getMetricStatReader() { + return metricStatReader; + } + + /** + * @return field name with metric type and field + */ + public String toFieldName() { + return starFieldName + DELIMITER + field + DELIMITER + metricStat.getTypeName(); + } + + @Override + public int hashCode() { + return Objects.hashCode(toFieldName()); + } + + @Override + public boolean equals(Object obj) { + if (this == obj) { + return true; + } + if (obj instanceof MetricAggregatorInfo) { + MetricAggregatorInfo anotherPair = (MetricAggregatorInfo) obj; + return metricStat == anotherPair.metricStat && field.equals(anotherPair.field); + } + return false; + } + + @Override + public String toString() { + return toFieldName(); + } + + @Override + public int compareTo(MetricAggregatorInfo other) { + return Comparator.comparing((MetricAggregatorInfo o) -> o.field) + .thenComparing((MetricAggregatorInfo o) -> o.metricStat) + .compare(this, other); + } +} diff --git a/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/SumValueAggregator.java b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/SumValueAggregator.java new file mode 100644 index 0000000000000..543b0f7f42374 --- /dev/null +++ 
b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/SumValueAggregator.java @@ -0,0 +1,97 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ +package org.opensearch.index.compositeindex.datacube.startree.aggregators; + +import org.apache.lucene.util.NumericUtils; +import org.opensearch.index.compositeindex.datacube.MetricStat; +import org.opensearch.index.compositeindex.datacube.startree.aggregators.numerictype.StarTreeNumericType; +import org.opensearch.search.aggregations.metrics.CompensatedSum; + +/** + * Sum value aggregator for star tree + * + * @opensearch.experimental + */ +public class SumValueAggregator implements ValueAggregator { + + public static final StarTreeNumericType VALUE_AGGREGATOR_TYPE = StarTreeNumericType.DOUBLE; + private double sum = 0; + private double compensation = 0; + private CompensatedSum kahanSummation = new CompensatedSum(0, 0); + + @Override + public MetricStat getAggregationType() { + return MetricStat.SUM; + } + + @Override + public StarTreeNumericType getAggregatedValueType() { + return VALUE_AGGREGATOR_TYPE; + } + + @Override + public Double getInitialAggregatedValueForSegmentDocValue(Long segmentDocValue, StarTreeNumericType starTreeNumericType) { + kahanSummation.reset(0, 0); + kahanSummation.add(starTreeNumericType.getDoubleValue(segmentDocValue)); + compensation = kahanSummation.delta(); + sum = kahanSummation.value(); + return kahanSummation.value(); + } + + @Override + public Double mergeAggregatedValueAndSegmentValue(Double value, Long segmentDocValue, StarTreeNumericType starTreeNumericType) { + assert kahanSummation.value() == value; + kahanSummation.reset(sum, compensation); + kahanSummation.add(starTreeNumericType.getDoubleValue(segmentDocValue)); + compensation = kahanSummation.delta(); + sum = kahanSummation.value(); + return kahanSummation.value(); + } + + @Override + public Double mergeAggregatedValues(Double value, Double aggregatedValue) { + assert kahanSummation.value() == aggregatedValue; + kahanSummation.reset(sum, compensation); + kahanSummation.add(value); + compensation = kahanSummation.delta(); + sum = kahanSummation.value(); + return kahanSummation.value(); + } + + @Override + public Double getInitialAggregatedValue(Double value) { + kahanSummation.reset(0, 0); + kahanSummation.add(value); + compensation = kahanSummation.delta(); + sum = kahanSummation.value(); + return kahanSummation.value(); + } + + @Override + public int getMaxAggregatedValueByteSize() { + return Double.BYTES; + } + + @Override + public Long toLongValue(Double value) { + try { + return NumericUtils.doubleToSortableLong(value); + } catch (Exception e) { + throw new IllegalStateException("Cannot convert " + value + " to sortable long", e); + } + } + + @Override + public Double toStarTreeNumericTypeValue(Long value, StarTreeNumericType type) { + try { + return type.getDoubleValue(value); + } catch (Exception e) { + throw new IllegalStateException("Cannot convert " + value + " to sortable aggregation type", e); + } + } +} diff --git a/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/ValueAggregator.java b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/ValueAggregator.java new file mode 100644 index 0000000000000..3dd1f85845c17 --- /dev/null +++ 
b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/ValueAggregator.java @@ -0,0 +1,64 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ +package org.opensearch.index.compositeindex.datacube.startree.aggregators; + +import org.opensearch.index.compositeindex.datacube.MetricStat; +import org.opensearch.index.compositeindex.datacube.startree.aggregators.numerictype.StarTreeNumericType; + +/** + * A value aggregator that pre-aggregates on the input values for a specific type of aggregation. + * + * @opensearch.experimental + */ +public interface ValueAggregator { + + /** + * Returns the type of the aggregation. + */ + MetricStat getAggregationType(); + + /** + * Returns the data type of the aggregated value. + */ + StarTreeNumericType getAggregatedValueType(); + + /** + * Returns the initial aggregated value. + */ + A getInitialAggregatedValueForSegmentDocValue(Long segmentDocValue, StarTreeNumericType starTreeNumericType); + + /** + * Applies a segment doc value to the current aggregated value. + */ + A mergeAggregatedValueAndSegmentValue(A value, Long segmentDocValue, StarTreeNumericType starTreeNumericType); + + /** + * Applies an aggregated value to the current aggregated value. + */ + A mergeAggregatedValues(A value, A aggregatedValue); + + /** + * Clones an aggregated value. + */ + A getInitialAggregatedValue(A value); + + /** + * Returns the maximum size in bytes of the aggregated values seen so far. + */ + int getMaxAggregatedValueByteSize(); + + /** + * Converts an aggregated value into a Long type. + */ + Long toLongValue(A value); + + /** + * Converts an aggregated value from a Long type. + */ + A toStarTreeNumericTypeValue(Long rawValue, StarTreeNumericType type); +} diff --git a/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/ValueAggregatorFactory.java b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/ValueAggregatorFactory.java new file mode 100644 index 0000000000000..4ee0b0b5b13f8 --- /dev/null +++ b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/ValueAggregatorFactory.java @@ -0,0 +1,56 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ +package org.opensearch.index.compositeindex.datacube.startree.aggregators; + +import org.opensearch.index.compositeindex.datacube.MetricStat; +import org.opensearch.index.compositeindex.datacube.startree.aggregators.numerictype.StarTreeNumericType; + +/** + * Value aggregator factory for a given aggregation type + * + * @opensearch.experimental + */ +public class ValueAggregatorFactory { + private ValueAggregatorFactory() {} + + /** + * Returns a new instance of value aggregator for the given aggregation type. 
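+ * As an illustrative sketch only (this call sequence is hypothetical, not part of this change): obtain an aggregator via + * {@code ValueAggregator agg = ValueAggregatorFactory.getValueAggregator(MetricStat.SUM);} and then fold in each segment value with + * {@code agg.mergeAggregatedValueAndSegmentValue(value, segmentDocValue, starTreeNumericType)} while scanning the segment.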
+ * + * @param aggregationType Aggregation type + * @return Value aggregator + */ + public static ValueAggregator getValueAggregator(MetricStat aggregationType) { + switch (aggregationType) { + // other metric types (min, max, avg) will be supported in the future + case SUM: + return new SumValueAggregator(); + case COUNT: + return new CountValueAggregator(); + default: + throw new IllegalStateException("Unsupported aggregation type: " + aggregationType); + } + } + + /** + * Returns the data type of the aggregated value for the given aggregation type. + * + * @param aggregationType Aggregation type + * @return Data type of the aggregated value + */ + public static StarTreeNumericType getAggregatedValueType(MetricStat aggregationType) { + switch (aggregationType) { + // other metric types (min, max, avg) will be supported in the future + case SUM: + return SumValueAggregator.VALUE_AGGREGATOR_TYPE; + case COUNT: + return CountValueAggregator.VALUE_AGGREGATOR_TYPE; + default: + throw new IllegalStateException("Unsupported aggregation type: " + aggregationType); + } + } +} diff --git a/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/numerictype/StarTreeNumericType.java b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/numerictype/StarTreeNumericType.java new file mode 100644 index 0000000000000..57fe573a6a93c --- /dev/null +++ b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/numerictype/StarTreeNumericType.java @@ -0,0 +1,66 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. 
+ */ + +package org.opensearch.index.compositeindex.datacube.startree.aggregators.numerictype; + +import org.opensearch.index.fielddata.IndexNumericFieldData; + +import java.util.function.Function; + +/** + * Enum to map Star Tree Numeric Types to Lucene's Numeric Type + * + * @opensearch.experimental + */ +public enum StarTreeNumericType { + + // TODO: Handle scaled floats + HALF_FLOAT(IndexNumericFieldData.NumericType.HALF_FLOAT, StarTreeNumericTypeConverters::halfFloatPointToDouble), + FLOAT(IndexNumericFieldData.NumericType.FLOAT, StarTreeNumericTypeConverters::floatPointToDouble), + LONG(IndexNumericFieldData.NumericType.LONG, StarTreeNumericTypeConverters::longToDouble), + DOUBLE(IndexNumericFieldData.NumericType.DOUBLE, StarTreeNumericTypeConverters::sortableLongtoDouble), + INT(IndexNumericFieldData.NumericType.INT, StarTreeNumericTypeConverters::intToDouble), + SHORT(IndexNumericFieldData.NumericType.SHORT, StarTreeNumericTypeConverters::shortToDouble), + BYTE(IndexNumericFieldData.NumericType.BYTE, StarTreeNumericTypeConverters::bytesToDouble), + UNSIGNED_LONG(IndexNumericFieldData.NumericType.UNSIGNED_LONG, StarTreeNumericTypeConverters::unsignedlongToDouble); + + final IndexNumericFieldData.NumericType numericType; + final Function converter; + + StarTreeNumericType(IndexNumericFieldData.NumericType numericType, Function converter) { + this.numericType = numericType; + this.converter = converter; + } + + public double getDoubleValue(long rawValue) { + return this.converter.apply(rawValue); + } + + public static StarTreeNumericType fromNumericType(IndexNumericFieldData.NumericType numericType) { + switch (numericType) { + case HALF_FLOAT: + return StarTreeNumericType.HALF_FLOAT; + case FLOAT: + return StarTreeNumericType.FLOAT; + case LONG: + return StarTreeNumericType.LONG; + case DOUBLE: + return StarTreeNumericType.DOUBLE; + case INT: + return StarTreeNumericType.INT; + case SHORT: + return StarTreeNumericType.SHORT; + case UNSIGNED_LONG: + return StarTreeNumericType.UNSIGNED_LONG; + case BYTE: + return StarTreeNumericType.BYTE; + default: + throw new UnsupportedOperationException("Unknown numeric type [" + numericType + "]"); + } + } +} diff --git a/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/numerictype/StarTreeNumericTypeConverters.java b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/numerictype/StarTreeNumericTypeConverters.java new file mode 100644 index 0000000000000..eb7647c4f9851 --- /dev/null +++ b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/numerictype/StarTreeNumericTypeConverters.java @@ -0,0 +1,58 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. 
+ */ + +package org.opensearch.index.compositeindex.datacube.startree.aggregators.numerictype; + +import org.apache.lucene.sandbox.document.HalfFloatPoint; +import org.apache.lucene.util.NumericUtils; +import org.opensearch.common.Numbers; +import org.opensearch.common.annotation.ExperimentalApi; + +/** + * Numeric converters used during aggregations of metric values + * + * @opensearch.experimental + */ +@ExperimentalApi +public class StarTreeNumericTypeConverters { + + public static double halfFloatPointToDouble(Long value) { + return HalfFloatPoint.sortableShortToHalfFloat((short) value.longValue()); + } + + public static double floatPointToDouble(Long value) { + return NumericUtils.sortableIntToFloat((int) value.longValue()); + } + + public static double longToDouble(Long value) { + return (double) value; + } + + public static double intToDouble(Long value) { + return (double) value; + } + + public static double shortToDouble(Long value) { + return (double) value; + } + + public static Double sortableLongtoDouble(Long value) { + return NumericUtils.sortableLongToDouble(value); + } + + public static double unsignedlongToDouble(Long value) { + return Numbers.unsignedLongToDouble(value); + } + + public static double bytesToDouble(Long value) { + byte[] bytes = new byte[8]; + NumericUtils.longToSortableBytes(value, bytes, 0); + return NumericUtils.sortableLongToDouble(NumericUtils.sortableBytesToLong(bytes, 0)); + } + +} diff --git a/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/numerictype/package-info.java b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/numerictype/package-info.java new file mode 100644 index 0000000000000..fe5c2a7ceb254 --- /dev/null +++ b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/numerictype/package-info.java @@ -0,0 +1,14 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +/** + * Numeric Types for Composite Index Star Tree + * + * @opensearch.experimental + */ +package org.opensearch.index.compositeindex.datacube.startree.aggregators.numerictype; diff --git a/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/package-info.java b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/package-info.java new file mode 100644 index 0000000000000..bddd6a46fbbe8 --- /dev/null +++ b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/package-info.java @@ -0,0 +1,14 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. 
+ */ + +/** + * Aggregators for Composite Index Star Tree + * + * @opensearch.experimental + */ +package org.opensearch.index.compositeindex.datacube.startree.aggregators; diff --git a/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/builder/BaseStarTreeBuilder.java b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/builder/BaseStarTreeBuilder.java new file mode 100644 index 0000000000000..0a363bfad8fe1 --- /dev/null +++ b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/builder/BaseStarTreeBuilder.java @@ -0,0 +1,668 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ +package org.opensearch.index.compositeindex.datacube.startree.builder; + +import org.apache.logging.log4j.LogManager; +import org.apache.logging.log4j.Logger; +import org.apache.lucene.codecs.DocValuesProducer; +import org.apache.lucene.index.DocValuesType; +import org.apache.lucene.index.FieldInfo; +import org.apache.lucene.index.SegmentWriteState; +import org.opensearch.index.compositeindex.datacube.Dimension; +import org.opensearch.index.compositeindex.datacube.Metric; +import org.opensearch.index.compositeindex.datacube.MetricStat; +import org.opensearch.index.compositeindex.datacube.startree.StarTreeDocument; +import org.opensearch.index.compositeindex.datacube.startree.StarTreeField; +import org.opensearch.index.compositeindex.datacube.startree.StarTreeFieldConfiguration; +import org.opensearch.index.compositeindex.datacube.startree.aggregators.MetricAggregatorInfo; +import org.opensearch.index.compositeindex.datacube.startree.aggregators.ValueAggregator; +import org.opensearch.index.compositeindex.datacube.startree.aggregators.numerictype.StarTreeNumericType; +import org.opensearch.index.compositeindex.datacube.startree.utils.SequentialDocValuesIterator; +import org.opensearch.index.compositeindex.datacube.startree.utils.TreeNode; +import org.opensearch.index.fielddata.IndexNumericFieldData; +import org.opensearch.index.mapper.Mapper; +import org.opensearch.index.mapper.MapperService; +import org.opensearch.index.mapper.NumberFieldMapper; + +import java.io.IOException; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.HashMap; +import java.util.HashSet; +import java.util.Iterator; +import java.util.List; +import java.util.Map; +import java.util.Set; + +import static org.opensearch.index.compositeindex.datacube.startree.utils.TreeNode.ALL; + +/** + * Builder for star tree. 
Defines the algorithm to construct star-tree + * See {@link StarTreesBuilder} for information around the construction of star-trees based on star-tree fields + * + * @opensearch.experimental + */ +public abstract class BaseStarTreeBuilder implements StarTreeBuilder { + + private static final Logger logger = LogManager.getLogger(BaseStarTreeBuilder.class); + + /** + * Default value for star node + */ + public static final int STAR_IN_DOC_VALUES_INDEX = -1; + + protected final Set skipStarNodeCreationForDimensions; + + protected final List metricAggregatorInfos; + protected final int numMetrics; + protected final int numDimensions; + protected int numStarTreeDocs; + protected int totalSegmentDocs; + protected int numStarTreeNodes; + protected final int maxLeafDocuments; + + protected final TreeNode rootNode = getNewNode(); + + protected SequentialDocValuesIterator[] dimensionReaders; + + // We do not close these producers as they are empty doc value producers (where close() is unsupported) + protected Map fieldProducerMap; + + private final StarTreeDocValuesIteratorAdapter starTreeDocValuesIteratorAdapter; + private final StarTreeField starTreeField; + + /** + * Reads all the configuration related to dimensions and metrics, builds a star-tree based on the different construction parameters. + * + * @param starTreeField holds the configuration for the star tree + * @param fieldProducerMap helps return the doc values iterator for each type based on field name + * @param state stores the segment write state + * @param mapperService helps to find the original type of the field + */ + protected BaseStarTreeBuilder( + StarTreeField starTreeField, + Map fieldProducerMap, + SegmentWriteState state, + MapperService mapperService + ) throws IOException { + + logger.debug("Building in base star tree builder"); + + this.starTreeField = starTreeField; + StarTreeFieldConfiguration starTreeFieldSpec = starTreeField.getStarTreeConfig(); + this.fieldProducerMap = fieldProducerMap; + this.starTreeDocValuesIteratorAdapter = new StarTreeDocValuesIteratorAdapter(); + + List dimensionsSplitOrder = starTreeField.getDimensionsOrder(); + this.numDimensions = dimensionsSplitOrder.size(); + + this.skipStarNodeCreationForDimensions = new HashSet<>(); + this.totalSegmentDocs = state.segmentInfo.maxDoc(); + this.dimensionReaders = new SequentialDocValuesIterator[numDimensions]; + Set skipStarNodeCreationForDimensions = starTreeFieldSpec.getSkipStarNodeCreationInDims(); + + for (int i = 0; i < numDimensions; i++) { + String dimension = dimensionsSplitOrder.get(i).getField(); + if (skipStarNodeCreationForDimensions.contains(dimensionsSplitOrder.get(i).getField())) { + this.skipStarNodeCreationForDimensions.add(i); + } + FieldInfo dimensionFieldInfos = state.fieldInfos.fieldInfo(dimension); + DocValuesType dimensionDocValuesType = dimensionFieldInfos.getDocValuesType(); + dimensionReaders[i] = starTreeDocValuesIteratorAdapter.getDocValuesIterator( + dimensionDocValuesType, + dimensionFieldInfos, + fieldProducerMap.get(dimensionFieldInfos.name) + ); + } + + this.metricAggregatorInfos = generateMetricAggregatorInfos(mapperService, state); + this.numMetrics = metricAggregatorInfos.size(); + this.maxLeafDocuments = starTreeFieldSpec.maxLeafDocs(); + } + + /** + * Generates the configuration required to perform aggregation for all the metrics on a field + * + * @return list of MetricAggregatorInfo + */ + public List generateMetricAggregatorInfos(MapperService mapperService, SegmentWriteState state) + throws IOException { + List 
metricAggregatorInfos = new ArrayList<>(); + for (Metric metric : this.starTreeField.getMetrics()) { + for (MetricStat metricStat : metric.getMetrics()) { + IndexNumericFieldData.NumericType numericType; + SequentialDocValuesIterator metricStatReader; + Mapper fieldMapper = mapperService.documentMapper().mappers().getMapper(metric.getField()); + if (fieldMapper instanceof NumberFieldMapper) { + numericType = ((NumberFieldMapper) fieldMapper).fieldType().numericType(); + } else { + logger.error("unsupported mapper type"); + throw new IllegalStateException("unsupported mapper type"); + } + + FieldInfo metricFieldInfos = state.fieldInfos.fieldInfo(metric.getField()); + DocValuesType metricDocValuesType = metricFieldInfos.getDocValuesType(); + if (metricStat != MetricStat.COUNT) { + metricStatReader = starTreeDocValuesIteratorAdapter.getDocValuesIterator( + metricDocValuesType, + metricFieldInfos, + fieldProducerMap.get(metricFieldInfos.name) + ); + } else { + metricStatReader = new SequentialDocValuesIterator(); + } + + MetricAggregatorInfo metricAggregatorInfo = new MetricAggregatorInfo( + metricStat, + metric.getField(), + starTreeField.getName(), + numericType, + metricStatReader + ); + metricAggregatorInfos.add(metricAggregatorInfo); + } + } + return metricAggregatorInfos; + } + + /** + * Adds a document to the star-tree. + * + * @param starTreeDocument star tree document to be added + * @throws IOException if an I/O error occurs while adding the document + */ + public abstract void appendStarTreeDocument(StarTreeDocument starTreeDocument) throws IOException; + + /** + * Returns the document of the given document id in the star-tree. + * + * @param docId document id + * @return star tree document + * @throws IOException if an I/O error occurs while fetching the star-tree document + */ + public abstract StarTreeDocument getStarTreeDocument(int docId) throws IOException; + + /** + * Retrieves the list of star-tree documents in the star-tree. + * + * @return Star tree documents + */ + public abstract List getStarTreeDocuments(); + + /** + * Returns the value of the dimension for the given dimension id and document in the star-tree. + * + * @param docId document id + * @param dimensionId dimension id + * @return dimension value + */ + public abstract Long getDimensionValue(int docId, int dimensionId) throws IOException; + + /** + * Sorts and aggregates the star-tree documents in the segment, and returns a star-tree document iterator for all the + aggregated star-tree documents. + * + * @return Iterator for the aggregated star-tree documents + */ + public abstract Iterator sortAndAggregateStarTreeDocuments() throws IOException; + + /** + * Generates aggregated star-tree documents for a star-node. 
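+ * For illustration (hypothetical values): with dimensions (hour, status) and a star-node on hour, sorted documents [8, 200] and [9, 200] + * collapse into a single star document [-1, 200] (the star value, see {@code STAR_IN_DOC_VALUES_INDEX}) whose metrics aggregate both inputs.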
+ * + * @param startDocId start document id (inclusive) in the star-tree + * @param endDocId end document id (exclusive) in the star-tree + * @param dimensionId dimension id of the star-node + * @return Iterator for the aggregated star-tree documents + */ + public abstract Iterator generateStarTreeDocumentsForStarNode(int startDocId, int endDocId, int dimensionId) + throws IOException; + + /** + * Returns the star-tree document from the segment + * + * @throws IOException when we are unable to build a star tree document from the segment + */ + protected StarTreeDocument getSegmentStarTreeDocument(int currentDocId) throws IOException { + Long[] dimensions = getStarTreeDimensionsFromSegment(currentDocId); + Object[] metrics = getStarTreeMetricsFromSegment(currentDocId); + return new StarTreeDocument(dimensions, metrics); + } + + /** + * Returns the dimension values for the next document from the segment + * + * @return dimension values for each of the star-tree dimension + * @throws IOException when we are unable to iterate to the next doc for the given dimension readers + */ + private Long[] getStarTreeDimensionsFromSegment(int currentDocId) throws IOException { + Long[] dimensions = new Long[numDimensions]; + for (int i = 0; i < numDimensions; i++) { + try { + dimensions[i] = getValuesFromSegment(dimensionReaders[i], currentDocId); + } catch (Exception e) { + logger.error("unable to read the dimension values from the segment", e); + throw new IllegalStateException("unable to read the dimension values from the segment", e); + } + + } + return dimensions; + } + + /** + * Returns the next value from the iterator of respective field + * + * @param iterator respective field iterator + * @param currentDocId current document id + * @return the next value for the field + * @throws IOException when we are unable to iterate to the next doc for the given iterator + */ + private Long getValuesFromSegment(SequentialDocValuesIterator iterator, int currentDocId) throws IOException { + try { + starTreeDocValuesIteratorAdapter.nextDoc(iterator, currentDocId); + } catch (IOException e) { + logger.error("unable to iterate to next doc", e); + throw new RuntimeException("unable to iterate to next doc", e); + } + return starTreeDocValuesIteratorAdapter.getNextValue(iterator, currentDocId); + } + + /** + * Returns the metric values for the next document from the segment + * + * @return metric values for each of the star-tree metric + * @throws IOException when we are unable to iterate to the next doc for the given metric readers + */ + private Object[] getStarTreeMetricsFromSegment(int currentDocId) throws IOException { + Object[] metrics = new Object[numMetrics]; + for (int i = 0; i < numMetrics; i++) { + SequentialDocValuesIterator metricStatReader = metricAggregatorInfos.get(i).getMetricStatReader(); + if (metricStatReader != null) { + try { + metrics[i] = getValuesFromSegment(metricStatReader, currentDocId); + } catch (Exception e) { + logger.error("unable to read the metric values from the segment", e); + throw new IllegalStateException("unable to read the metric values from the segment", e); + } + } else { + throw new IllegalStateException("metric readers are empty"); + } + } + return metrics; + } + + /** + * Merges a star-tree document from the segment into an aggregated star-tree document. + * A new aggregated star-tree document is created if the aggregated segment document is null. 
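+ * For example (illustrative values only): reducing segment document {dimensions: [3, 7], metrics: [10]} into the aggregate + * {dimensions: [3, 7], metrics: [25.0]} yields {dimensions: [3, 7], metrics: [35.0]} for a SUM metric; the raw segment long + * is first converted through the metric's {@code StarTreeNumericType}.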
+ * + * @param aggregatedSegmentDocument aggregated star-tree document + * @param segmentDocument segment star-tree document + * @return merged star-tree document + */ + @SuppressWarnings({ "unchecked", "rawtypes" }) + protected StarTreeDocument reduceSegmentStarTreeDocuments( + StarTreeDocument aggregatedSegmentDocument, + StarTreeDocument segmentDocument + ) { + if (aggregatedSegmentDocument == null) { + Long[] dimensions = Arrays.copyOf(segmentDocument.dimensions, numDimensions); + Object[] metrics = new Object[numMetrics]; + for (int i = 0; i < numMetrics; i++) { + try { + ValueAggregator metricValueAggregator = metricAggregatorInfos.get(i).getValueAggregators(); + StarTreeNumericType starTreeNumericType = metricAggregatorInfos.get(i).getAggregatedValueType(); + metrics[i] = metricValueAggregator.getInitialAggregatedValueForSegmentDocValue( + getLong(segmentDocument.metrics[i]), + starTreeNumericType + ); + } catch (Exception e) { + logger.error("Cannot parse initial segment doc value", e); + throw new IllegalStateException("Cannot parse initial segment doc value [" + segmentDocument.metrics[i] + "]"); + } + } + return new StarTreeDocument(dimensions, metrics); + } else { + for (int i = 0; i < numMetrics; i++) { + try { + ValueAggregator metricValueAggregator = metricAggregatorInfos.get(i).getValueAggregators(); + StarTreeNumericType starTreeNumericType = metricAggregatorInfos.get(i).getAggregatedValueType(); + aggregatedSegmentDocument.metrics[i] = metricValueAggregator.mergeAggregatedValueAndSegmentValue( + aggregatedSegmentDocument.metrics[i], + getLong(segmentDocument.metrics[i]), + starTreeNumericType + ); + } catch (Exception e) { + logger.error("Cannot apply segment doc value for aggregation", e); + throw new IllegalStateException("Cannot apply segment doc value for aggregation [" + segmentDocument.metrics[i] + "]"); + } + } + return aggregatedSegmentDocument; + } + } + + /** + * Safely converts the metric value of object type to long. + * + * @param metric value of the metric + * @return converted metric value to long + */ + private static long getLong(Object metric) { + + Long metricValue = null; + try { + if (metric instanceof Long) { + metricValue = (long) metric; + } else if (metric != null) { + metricValue = Long.valueOf(String.valueOf(metric)); + } + } catch (Exception e) { + throw new IllegalStateException("unable to cast segment metric", e); + } + + if (metricValue == null) { + throw new IllegalStateException("unable to cast segment metric"); + } + return metricValue; + } + + /** + * Merges a star-tree document into an aggregated star-tree document. + * A new aggregated star-tree document is created if the aggregated document is null. 
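+ * Unlike {@code reduceSegmentStarTreeDocuments}, both inputs here already hold aggregated values, so no numeric-type + * conversion is applied; e.g. (illustrative values) two SUM metrics 12.5 and 7.5 reduce to 20.0.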
+ * + * @param aggregatedDocument aggregated star-tree document + * @param starTreeDocument segment star-tree document + * @return merged star-tree document + */ + @SuppressWarnings("unchecked") + public StarTreeDocument reduceStarTreeDocuments(StarTreeDocument aggregatedDocument, StarTreeDocument starTreeDocument) { + // aggregate the documents + if (aggregatedDocument == null) { + Long[] dimensions = Arrays.copyOf(starTreeDocument.dimensions, numDimensions); + Object[] metrics = new Object[numMetrics]; + for (int i = 0; i < numMetrics; i++) { + try { + metrics[i] = metricAggregatorInfos.get(i).getValueAggregators().getInitialAggregatedValue(starTreeDocument.metrics[i]); + } catch (Exception e) { + logger.error("Cannot get value for aggregation", e); + throw new IllegalStateException("Cannot get value for aggregation[" + starTreeDocument.metrics[i] + "]"); + } + } + return new StarTreeDocument(dimensions, metrics); + } else { + for (int i = 0; i < numMetrics; i++) { + try { + aggregatedDocument.metrics[i] = metricAggregatorInfos.get(i) + .getValueAggregators() + .mergeAggregatedValues(starTreeDocument.metrics[i], aggregatedDocument.metrics[i]); + } catch (Exception e) { + logger.error("Cannot apply value to aggregated document for aggregation", e); + throw new IllegalStateException( + "Cannot apply value to aggregated document for aggregation [" + starTreeDocument.metrics[i] + "]" + ); + } + } + return aggregatedDocument; + } + } + + /** + * Builds the star tree using total segment documents + * + * @throws IOException when we are unable to build star-tree + */ + public void build() throws IOException { + long startTime = System.currentTimeMillis(); + logger.debug("Star-tree build is a go with star tree field {}", starTreeField.getName()); + + if (totalSegmentDocs == 0) { + logger.debug("No documents found in the segment"); + return; + } + + Iterator starTreeDocumentIterator = sortAndAggregateStarTreeDocuments(); + logger.debug("Sorting and aggregating star-tree in ms : {}", (System.currentTimeMillis() - startTime)); + build(starTreeDocumentIterator); + logger.debug("Finished Building star-tree in ms : {}", (System.currentTimeMillis() - startTime)); + } + + /** + * Builds the star tree using Star-Tree Document + * + * @param starTreeDocumentIterator contains the sorted and aggregated documents + * @throws IOException when we are unable to build star-tree + */ + void build(Iterator starTreeDocumentIterator) throws IOException { + int numSegmentStarTreeDocument = totalSegmentDocs; + + while (starTreeDocumentIterator.hasNext()) { + appendToStarTree(starTreeDocumentIterator.next()); + } + int numStarTreeDocument = numStarTreeDocs; + logger.debug("Generated star tree docs : [{}] from segment docs : [{}]", numStarTreeDocument, numSegmentStarTreeDocument); + + if (numStarTreeDocs == 0) { + // TODO: Uncomment when segment codec and file formats is ready + // StarTreeBuilderUtils.serializeTree(indexOutput, rootNode, dimensionsSplitOrder, numNodes); + return; + } + + constructStarTree(rootNode, 0, numStarTreeDocs); + int numStarTreeDocumentUnderStarNode = numStarTreeDocs - numStarTreeDocument; + logger.debug( + "Finished constructing star-tree, got [ {} ] tree nodes and [ {} ] starTreeDocument under star-node", + numStarTreeNodes, + numStarTreeDocumentUnderStarNode + ); + + createAggregatedDocs(rootNode); + int numAggregatedStarTreeDocument = numStarTreeDocs - numStarTreeDocument - numStarTreeDocumentUnderStarNode; + logger.debug("Finished creating aggregated documents : {}", 
numAggregatedStarTreeDocument); + + // TODO: When StarTree Codec is ready + // Create doc values indices in disk + // Serialize and save in disk + // Write star tree metadata for off heap implementation + + } + + /** + * Adds a document to star-tree + * + * @param starTreeDocument star-tree document + * @throws IOException throws an exception if we are unable to add the doc + */ + private void appendToStarTree(StarTreeDocument starTreeDocument) throws IOException { + appendStarTreeDocument(starTreeDocument); + numStarTreeDocs++; + } + + /** + * Returns a new star-tree node + * + * @return return new star-tree node + */ + private TreeNode getNewNode() { + numStarTreeNodes++; + return new TreeNode(); + } + + /** + * Implements the algorithm to construct a star-tree + * + * @param node star-tree node + * @param startDocId start document id + * @param endDocId end document id + * @throws IOException throws an exception if we are unable to construct the tree + */ + private void constructStarTree(TreeNode node, int startDocId, int endDocId) throws IOException { + + int childDimensionId = node.dimensionId + 1; + if (childDimensionId == numDimensions) { + return; + } + + // Construct all non-star children nodes + node.childDimensionId = childDimensionId; + Map children = constructNonStarNodes(startDocId, endDocId, childDimensionId); + node.children = children; + + // Construct star-node if required + if (!skipStarNodeCreationForDimensions.contains(childDimensionId) && children.size() > 1) { + children.put((long) ALL, constructStarNode(startDocId, endDocId, childDimensionId)); + } + + // Further split on child nodes if required + for (TreeNode child : children.values()) { + if (child.endDocId - child.startDocId > maxLeafDocuments) { + constructStarTree(child, child.startDocId, child.endDocId); + } + } + } + + /** + * Constructs non star tree nodes + * + * @param startDocId start document id (inclusive) + * @param endDocId end document id (exclusive) + * @param dimensionId id of the dimension in the star tree + * @return root node with non-star nodes constructed + * @throws IOException throws an exception if we are unable to construct non-star nodes + */ + private Map constructNonStarNodes(int startDocId, int endDocId, int dimensionId) throws IOException { + Map nodes = new HashMap<>(); + int nodeStartDocId = startDocId; + Long nodeDimensionValue = getDimensionValue(startDocId, dimensionId); + for (int i = startDocId + 1; i < endDocId; i++) { + Long dimensionValue = getDimensionValue(i, dimensionId); + if (!dimensionValue.equals(nodeDimensionValue)) { + TreeNode child = getNewNode(); + child.dimensionId = dimensionId; + child.dimensionValue = nodeDimensionValue; + child.startDocId = nodeStartDocId; + child.endDocId = i; + nodes.put(nodeDimensionValue, child); + + nodeStartDocId = i; + nodeDimensionValue = dimensionValue; + } + } + TreeNode lastNode = getNewNode(); + lastNode.dimensionId = dimensionId; + lastNode.dimensionValue = nodeDimensionValue; + lastNode.startDocId = nodeStartDocId; + lastNode.endDocId = endDocId; + nodes.put(nodeDimensionValue, lastNode); + return nodes; + } + + /** + * Constructs star tree nodes + * + * @param startDocId start document id (inclusive) + * @param endDocId end document id (exclusive) + * @param dimensionId id of the dimension in the star tree + * @return root node with star nodes constructed + * @throws IOException throws an exception if we are unable to construct non-star nodes + */ + private TreeNode constructStarNode(int startDocId, int endDocId, int 
dimensionId) throws IOException { + TreeNode starNode = getNewNode(); + starNode.dimensionId = dimensionId; + starNode.dimensionValue = ALL; + starNode.isStarNode = true; + starNode.startDocId = numStarTreeDocs; + Iterator starTreeDocumentIterator = generateStarTreeDocumentsForStarNode(startDocId, endDocId, dimensionId); + while (starTreeDocumentIterator.hasNext()) { + appendToStarTree(starTreeDocumentIterator.next()); + } + starNode.endDocId = numStarTreeDocs; + return starNode; + } + + /** + * Returns aggregated star-tree document + * + * @param node star-tree node + * @return aggregated star-tree documents + * @throws IOException throws an exception upon failing to create new aggregated docs based on star tree + */ + private StarTreeDocument createAggregatedDocs(TreeNode node) throws IOException { + StarTreeDocument aggregatedStarTreeDocument = null; + if (node.children == null) { + + // For leaf node + if (node.startDocId == node.endDocId - 1) { + // If it has only one document, use it as the aggregated document + aggregatedStarTreeDocument = getStarTreeDocument(node.startDocId); + node.aggregatedDocId = node.startDocId; + } else { + // If it has multiple documents, aggregate all of them + for (int i = node.startDocId; i < node.endDocId; i++) { + aggregatedStarTreeDocument = reduceStarTreeDocuments(aggregatedStarTreeDocument, getStarTreeDocument(i)); + } + if (null == aggregatedStarTreeDocument) { + throw new IllegalStateException("aggregated star-tree document is null after reducing the documents"); + } + for (int i = node.dimensionId + 1; i < numDimensions; i++) { + aggregatedStarTreeDocument.dimensions[i] = Long.valueOf(STAR_IN_DOC_VALUES_INDEX); + } + node.aggregatedDocId = numStarTreeDocs; + appendToStarTree(aggregatedStarTreeDocument); + } + } else { + // For non-leaf node + if (node.children.containsKey((long) ALL)) { + // If it has star child, use the star child aggregated document directly + for (TreeNode child : node.children.values()) { + if (child.isStarNode) { + aggregatedStarTreeDocument = createAggregatedDocs(child); + node.aggregatedDocId = child.aggregatedDocId; + } else { + createAggregatedDocs(child); + } + } + } else { + // If no star child exists, aggregate all aggregated documents from non-star children + if (node.children.values().size() == 1) { + for (TreeNode child : node.children.values()) { + aggregatedStarTreeDocument = reduceStarTreeDocuments(aggregatedStarTreeDocument, createAggregatedDocs(child)); + node.aggregatedDocId = child.aggregatedDocId; + } + } else { + for (TreeNode child : node.children.values()) { + aggregatedStarTreeDocument = reduceStarTreeDocuments(aggregatedStarTreeDocument, createAggregatedDocs(child)); + } + if (null == aggregatedStarTreeDocument) { + throw new IllegalStateException("aggregated star-tree document is null after reducing the documents"); + } + for (int i = node.dimensionId + 1; i < numDimensions; i++) { + aggregatedStarTreeDocument.dimensions[i] = Long.valueOf(STAR_IN_DOC_VALUES_INDEX); + } + node.aggregatedDocId = numStarTreeDocs; + appendToStarTree(aggregatedStarTreeDocument); + } + } + } + return aggregatedStarTreeDocument; + } + + /** + * Handles the dimension of date time field type + * + * @param fieldName name of the field + * @param val value of the field + * @return returns the converted dimension of the field to a particular granularity + */ + private long handleDateDimension(final String fieldName, final long val) { + // TODO: handle timestamp granularity + return val; + } + + public void close() throws 
IOException { + + } + +} diff --git a/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/builder/OnHeapStarTreeBuilder.java b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/builder/OnHeapStarTreeBuilder.java new file mode 100644 index 0000000000000..caeb24838da62 --- /dev/null +++ b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/builder/OnHeapStarTreeBuilder.java @@ -0,0 +1,213 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ +package org.opensearch.index.compositeindex.datacube.startree.builder; + +import org.apache.lucene.codecs.DocValuesProducer; +import org.apache.lucene.index.SegmentWriteState; +import org.opensearch.common.annotation.ExperimentalApi; +import org.opensearch.index.compositeindex.datacube.startree.StarTreeDocument; +import org.opensearch.index.compositeindex.datacube.startree.StarTreeField; +import org.opensearch.index.mapper.MapperService; + +import java.io.IOException; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.Iterator; +import java.util.List; +import java.util.Map; +import java.util.Objects; + +/** + * On heap single tree builder + * + * @opensearch.experimental + */ +@ExperimentalApi +public class OnHeapStarTreeBuilder extends BaseStarTreeBuilder { + + private final List starTreeDocuments = new ArrayList<>(); + + /** + * Constructor for OnHeapStarTreeBuilder + * + * @param starTreeField star-tree field + * @param fieldProducerMap helps with document values producer for a particular field + * @param segmentWriteState segment write state + * @param mapperService helps with the numeric type of field + * @throws IOException throws an exception we are unable to construct an onheap star-tree + */ + public OnHeapStarTreeBuilder( + StarTreeField starTreeField, + Map fieldProducerMap, + SegmentWriteState segmentWriteState, + MapperService mapperService + ) throws IOException { + super(starTreeField, fieldProducerMap, segmentWriteState, mapperService); + } + + @Override + public void appendStarTreeDocument(StarTreeDocument starTreeDocument) throws IOException { + starTreeDocuments.add(starTreeDocument); + } + + @Override + public StarTreeDocument getStarTreeDocument(int docId) throws IOException { + return starTreeDocuments.get(docId); + } + + @Override + public List getStarTreeDocuments() { + return starTreeDocuments; + } + + @Override + public Long getDimensionValue(int docId, int dimensionId) throws IOException { + return starTreeDocuments.get(docId).dimensions[dimensionId]; + } + + @Override + public Iterator sortAndAggregateStarTreeDocuments() throws IOException { + int numDocs = totalSegmentDocs; + StarTreeDocument[] starTreeDocuments = new StarTreeDocument[numDocs]; + for (int currentDocId = 0; currentDocId < numDocs; currentDocId++) { + starTreeDocuments[currentDocId] = getSegmentStarTreeDocument(currentDocId); + } + + return sortAndAggregateStarTreeDocuments(starTreeDocuments); + } + + /** + * Sort, aggregates and merges the star-tree documents + * + * @param starTreeDocuments star-tree documents + * @return iterator for star-tree documents + */ + Iterator sortAndAggregateStarTreeDocuments(StarTreeDocument[] starTreeDocuments) { + + // sort all the documents + sortStarTreeDocumentsFromDimensionId(starTreeDocuments, 0); + + // merge the documents + return mergeStarTreeDocuments(starTreeDocuments); 
+ } + + /** + * Merges the star-tree documents + * + * @param starTreeDocuments star-tree documents + * @return iterator to aggregate star-tree documents + */ + private Iterator mergeStarTreeDocuments(StarTreeDocument[] starTreeDocuments) { + return new Iterator<>() { + boolean hasNext = true; + StarTreeDocument currentStarTreeDocument = starTreeDocuments[0]; + // starting from 1 since we have already fetched the 0th document + int docId = 1; + + @Override + public boolean hasNext() { + return hasNext; + } + + @Override + public StarTreeDocument next() { + // aggregate as we move on to the next doc + StarTreeDocument next = reduceSegmentStarTreeDocuments(null, currentStarTreeDocument); + while (docId < starTreeDocuments.length) { + StarTreeDocument starTreeDocument = starTreeDocuments[docId]; + docId++; + if (Arrays.equals(starTreeDocument.dimensions, next.dimensions) == false) { + currentStarTreeDocument = starTreeDocument; + return next; + } else { + next = reduceSegmentStarTreeDocuments(next, starTreeDocument); + } + } + hasNext = false; + return next; + } + }; + } + + /** + * Generates a star-tree for a given star-node + * + * @param startDocId Start document id in the star-tree + * @param endDocId End document id (exclusive) in the star-tree + * @param dimensionId Dimension id of the star-node + * @return iterator for star-tree documents of star-node + * @throws IOException throws when unable to generate star-tree for star-node + */ + @Override + public Iterator generateStarTreeDocumentsForStarNode(int startDocId, int endDocId, int dimensionId) + throws IOException { + int numDocs = endDocId - startDocId; + StarTreeDocument[] starTreeDocuments = new StarTreeDocument[numDocs]; + for (int i = 0; i < numDocs; i++) { + starTreeDocuments[i] = getStarTreeDocument(startDocId + i); + } + + // sort star tree documents from given dimension id (as previous dimension ids have already been processed) + sortStarTreeDocumentsFromDimensionId(starTreeDocuments, dimensionId + 1); + + return new Iterator() { + boolean hasNext = true; + StarTreeDocument currentStarTreeDocument = starTreeDocuments[0]; + int docId = 1; + + private boolean hasSameDimensions(StarTreeDocument starTreeDocument1, StarTreeDocument starTreeDocument2) { + for (int i = dimensionId + 1; i < numDimensions; i++) { + if (!Objects.equals(starTreeDocument1.dimensions[i], starTreeDocument2.dimensions[i])) { + return false; + } + } + return true; + } + + @Override + public boolean hasNext() { + return hasNext; + } + + @Override + public StarTreeDocument next() { + StarTreeDocument next = reduceStarTreeDocuments(null, currentStarTreeDocument); + next.dimensions[dimensionId] = Long.valueOf(STAR_IN_DOC_VALUES_INDEX); + while (docId < numDocs) { + StarTreeDocument starTreeDocument = starTreeDocuments[docId]; + docId++; + if (!hasSameDimensions(starTreeDocument, currentStarTreeDocument)) { + currentStarTreeDocument = starTreeDocument; + return next; + } else { + next = reduceStarTreeDocuments(next, starTreeDocument); + } + } + hasNext = false; + return next; + } + }; + } + + /** + * Sorts the star-tree documents from the given dimension id + * + * @param starTreeDocuments star-tree documents + * @param dimensionId id of the dimension + */ + private void sortStarTreeDocumentsFromDimensionId(StarTreeDocument[] starTreeDocuments, int dimensionId) { + Arrays.sort(starTreeDocuments, (o1, o2) -> { + for (int i = dimensionId; i < numDimensions; i++) { + if (!Objects.equals(o1.dimensions[i], o2.dimensions[i])) { + return Long.compare(o1.dimensions[i], 
o2.dimensions[i]); + } + } + return 0; + }); + } +} diff --git a/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/builder/StarTreeBuilder.java b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/builder/StarTreeBuilder.java new file mode 100644 index 0000000000000..20af1b3bc7935 --- /dev/null +++ b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/builder/StarTreeBuilder.java @@ -0,0 +1,29 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +package org.opensearch.index.compositeindex.datacube.startree.builder; + +import org.opensearch.common.annotation.ExperimentalApi; + +import java.io.Closeable; +import java.io.IOException; + +/** + * A star-tree builder that builds a single star-tree. + * + * @opensearch.experimental + */ +@ExperimentalApi +public interface StarTreeBuilder extends Closeable { + + /** + * Builds the star tree based on the star-tree field + * @throws IOException when we are unable to build the star-tree + */ + void build() throws IOException; +} diff --git a/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/builder/StarTreeDocValuesIteratorAdapter.java b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/builder/StarTreeDocValuesIteratorAdapter.java new file mode 100644 index 0000000000000..cb0350bb110b0 --- /dev/null +++ b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/builder/StarTreeDocValuesIteratorAdapter.java @@ -0,0 +1,82 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +package org.opensearch.index.compositeindex.datacube.startree.builder; + +import org.apache.lucene.codecs.DocValuesProducer; +import org.apache.lucene.index.DocValuesType; +import org.apache.lucene.index.FieldInfo; +import org.apache.lucene.index.SortedNumericDocValues; +import org.apache.lucene.search.DocIdSetIterator; +import org.opensearch.common.annotation.ExperimentalApi; +import org.opensearch.index.compositeindex.datacube.startree.utils.SequentialDocValuesIterator; + +import java.io.IOException; + +/** + * A factory class to return the respective doc values iterator based on the doc values type. 
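+ * Only {@code SORTED_NUMERIC} doc values are supported at present; a hypothetical call for illustration: + * {@code adapter.getDocValuesIterator(DocValuesType.SORTED_NUMERIC, fieldInfo, producer)}.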
+ * + * @opensearch.experimental + */ +@ExperimentalApi +public class StarTreeDocValuesIteratorAdapter { + + /** + * Creates an iterator for the given doc values type and field using the doc values producer + */ + public SequentialDocValuesIterator getDocValuesIterator(DocValuesType type, FieldInfo field, DocValuesProducer producer) + throws IOException { + switch (type) { + case SORTED_NUMERIC: + return new SequentialDocValuesIterator(producer.getSortedNumeric(field)); + default: + throw new IllegalArgumentException("Unsupported DocValuesType: " + type); + } + } + + /** + * Returns the next value for the given iterator + */ + public Long getNextValue(SequentialDocValuesIterator sequentialDocValuesIterator, int currentDocId) throws IOException { + if (sequentialDocValuesIterator.getDocIdSetIterator() instanceof SortedNumericDocValues) { + SortedNumericDocValues sortedNumericDocValues = (SortedNumericDocValues) sequentialDocValuesIterator.getDocIdSetIterator(); + if (sequentialDocValuesIterator.getDocId() < 0 || sequentialDocValuesIterator.getDocId() == DocIdSetIterator.NO_MORE_DOCS) { + throw new IllegalStateException("invalid doc id to fetch the next value"); + } + + if (sequentialDocValuesIterator.getDocValue() == null) { + sequentialDocValuesIterator.setDocValue(sortedNumericDocValues.nextValue()); + return sequentialDocValuesIterator.getDocValue(); + } + + if (sequentialDocValuesIterator.getDocId() == currentDocId) { + Long nextValue = sequentialDocValuesIterator.getDocValue(); + sequentialDocValuesIterator.setDocValue(null); + return nextValue; + } else { + return null; + } + } else { + throw new IllegalStateException("Unsupported Iterator: " + sequentialDocValuesIterator.getDocIdSetIterator().toString()); + } + } + + /** + * Moves to the next doc in the iterator + * Returns the doc id for the next document from the given iterator + */ + public int nextDoc(SequentialDocValuesIterator iterator, int currentDocId) throws IOException { + if (iterator.getDocValue() != null) { + return iterator.getDocId(); + } + iterator.setDocId(iterator.getDocIdSetIterator().nextDoc()); + iterator.setDocValue(this.getNextValue(iterator, currentDocId)); + return iterator.getDocId(); + } + +} diff --git a/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/builder/StarTreesBuilder.java b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/builder/StarTreesBuilder.java new file mode 100644 index 0000000000000..eaf9ae1dcdaa1 --- /dev/null +++ b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/builder/StarTreesBuilder.java @@ -0,0 +1,114 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. 
+ */ + +package org.opensearch.index.compositeindex.datacube.startree.builder; + +import org.apache.logging.log4j.LogManager; +import org.apache.logging.log4j.Logger; +import org.apache.lucene.codecs.DocValuesProducer; +import org.apache.lucene.index.SegmentWriteState; +import org.opensearch.common.annotation.ExperimentalApi; +import org.opensearch.index.compositeindex.datacube.startree.StarTreeField; +import org.opensearch.index.mapper.CompositeMappedFieldType; +import org.opensearch.index.mapper.MapperService; +import org.opensearch.index.mapper.StarTreeMapper; + +import java.io.Closeable; +import java.io.IOException; +import java.util.ArrayList; +import java.util.List; +import java.util.Locale; +import java.util.Map; + +/** + * Builder to construct star-trees based on multiple star-tree fields. + * + * @opensearch.experimental + */ +@ExperimentalApi +public class StarTreesBuilder implements Closeable { + + private static final Logger logger = LogManager.getLogger(StarTreesBuilder.class); + + private final List starTreeFields; + private final SegmentWriteState state; + private final Map fieldProducerMap; + private final MapperService mapperService; + + public StarTreesBuilder( + Map fieldProducerMap, + SegmentWriteState segmentWriteState, + MapperService mapperService + ) { + List starTreeFields = new ArrayList<>(); + for (CompositeMappedFieldType compositeMappedFieldType : mapperService.getCompositeFieldTypes()) { + if (compositeMappedFieldType instanceof StarTreeMapper.StarTreeFieldType) { + StarTreeMapper.StarTreeFieldType starTreeFieldType = (StarTreeMapper.StarTreeFieldType) compositeMappedFieldType; + starTreeFields.add( + new StarTreeField( + starTreeFieldType.name(), + starTreeFieldType.getDimensions(), + starTreeFieldType.getMetrics(), + starTreeFieldType.getStarTreeConfig() + ) + ); + } + } + + this.starTreeFields = starTreeFields; + this.fieldProducerMap = fieldProducerMap; + this.state = segmentWriteState; + this.mapperService = mapperService; + } + + /** + * Builds the star-trees. 
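+ * One {@code StarTreeBuilder} is created and closed per star-tree field; an illustrative (hypothetical) invocation from a + * doc values consumer: {@code new StarTreesBuilder(fieldProducerMap, state, mapperService).build()}.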
+ */ + public void build() throws IOException { + if (starTreeFields.isEmpty()) { + logger.debug("no star-tree fields found, returning from star-tree builder"); + return; + } + long startTime = System.currentTimeMillis(); + int numStarTrees = starTreeFields.size(); + logger.debug("Starting to build {} star-trees", numStarTrees); + + // Build all star-trees + for (StarTreeField starTreeField : starTreeFields) { + try (StarTreeBuilder starTreeBuilder = getStarTreeBuilder(starTreeField, fieldProducerMap, state, mapperService)) { + starTreeBuilder.build(); + } + } + logger.debug("Took {} ms to build {} star-trees", System.currentTimeMillis() - startTime, numStarTrees); + } + + @Override + public void close() throws IOException { + + } + + StarTreeBuilder getStarTreeBuilder( + StarTreeField starTreeField, + Map fieldProducerMap, + SegmentWriteState state, + MapperService mapperService + ) throws IOException { + switch (starTreeField.getStarTreeConfig().getBuildMode()) { + case ON_HEAP: + return new OnHeapStarTreeBuilder(starTreeField, fieldProducerMap, state, mapperService); + default: + throw new IllegalArgumentException( + String.format( + Locale.ROOT, + "No star tree implementation is available for [%s] build mode", + starTreeField.getStarTreeConfig().getBuildMode() + ) + ); + } + } +} diff --git a/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/builder/package-info.java b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/builder/package-info.java new file mode 100644 index 0000000000000..9c97b076371a3 --- /dev/null +++ b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/builder/package-info.java @@ -0,0 +1,14 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +/** + * Builders for Composite Index Star Tree + * + * @opensearch.experimental + */ +package org.opensearch.index.compositeindex.datacube.startree.builder; diff --git a/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/package-info.java b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/package-info.java index 4f4e670478e2f..6d6cb420f4a9e 100644 --- a/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/package-info.java +++ b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/package-info.java @@ -7,5 +7,7 @@ */ /** * Core classes for handling star tree index. + * + * @opensearch.experimental */ package org.opensearch.index.compositeindex.datacube.startree; diff --git a/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/utils/SequentialDocValuesIterator.java b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/utils/SequentialDocValuesIterator.java new file mode 100644 index 0000000000000..cf5f3e94c1ca6 --- /dev/null +++ b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/utils/SequentialDocValuesIterator.java @@ -0,0 +1,137 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. 
+ */
+
+package org.opensearch.index.compositeindex.datacube.startree.utils;
+
+import org.apache.lucene.index.SortedNumericDocValues;
+import org.apache.lucene.search.DocIdSetIterator;
+import org.opensearch.common.annotation.ExperimentalApi;
+
+import java.io.IOException;
+
+/**
+ * Coordinates the reading of documents across multiple DocIdSetIterators.
+ * It encapsulates a single DocIdSetIterator and maintains the latest document ID and its associated value.
+ *
+ * @opensearch.experimental
+ */
+@ExperimentalApi
+public class SequentialDocValuesIterator {
+
+    /**
+     * The doc id set iterator associated with each field.
+     */
+    private final DocIdSetIterator docIdSetIterator;
+
+    /**
+     * The value associated with the latest document.
+     */
+    private Long docValue;
+
+    /**
+     * The id of the latest document.
+     */
+    private int docId;
+
+    /**
+     * Constructs a new SequentialDocValuesIterator instance with the given DocIdSetIterator.
+     *
+     * @param docIdSetIterator the DocIdSetIterator to be associated with this instance
+     */
+    public SequentialDocValuesIterator(DocIdSetIterator docIdSetIterator) {
+        this.docIdSetIterator = docIdSetIterator;
+    }
+
+    /**
+     * Constructs a new SequentialDocValuesIterator instance backed by an empty SortedNumericDocValues stub,
+     * for use when no doc values are available for a field.
+     */
+    public SequentialDocValuesIterator() {
+        // empty stub: every method reports document id 0, value 0 and zero cost
+        this.docIdSetIterator = new SortedNumericDocValues() {
+            @Override
+            public long nextValue() throws IOException {
+                return 0;
+            }
+
+            @Override
+            public int docValueCount() {
+                return 0;
+            }
+
+            @Override
+            public boolean advanceExact(int i) throws IOException {
+                return false;
+            }
+
+            @Override
+            public int docID() {
+                return 0;
+            }
+
+            @Override
+            public int nextDoc() throws IOException {
+                return 0;
+            }
+
+            @Override
+            public int advance(int i) throws IOException {
+                return 0;
+            }
+
+            @Override
+            public long cost() {
+                return 0;
+            }
+        };
+    }
+
+    /**
+     * Returns the value associated with the latest document.
+     *
+     * @return the value associated with the latest document
+     */
+    public Long getDocValue() {
+        return docValue;
+    }
+
+    /**
+     * Sets the value associated with the latest document.
+     *
+     * @param docValue the value to be associated with the latest document
+     */
+    public void setDocValue(Long docValue) {
+        this.docValue = docValue;
+    }
+
+    /**
+     * Returns the id of the latest document.
+     *
+     * @return the id of the latest document
+     */
+    public int getDocId() {
+        return docId;
+    }
+
+    /**
+     * Sets the id of the latest document.
+     *
+     * @param docId the ID of the latest document
+     */
+    public void setDocId(int docId) {
+        this.docId = docId;
+    }
+
+    /**
+     * Returns the DocIdSetIterator associated with this instance.
+     *
+     * @return the DocIdSetIterator associated with this instance
+     */
+    public DocIdSetIterator getDocIdSetIterator() {
+        return docIdSetIterator;
+    }
+}
diff --git a/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/utils/TreeNode.java b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/utils/TreeNode.java
new file mode 100644
index 0000000000000..5cf737c61ab2d
--- /dev/null
+++ b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/utils/TreeNode.java
@@ -0,0 +1,65 @@
+/*
+ * SPDX-License-Identifier: Apache-2.0
+ *
+ * The OpenSearch Contributors require contributions made to
+ * this file be licensed under the Apache-2.0 license or a
+ * compatible open source license.
+ */
+package org.opensearch.index.compositeindex.datacube.startree.utils;
+
+import org.opensearch.common.annotation.ExperimentalApi;
+
+import java.util.Map;
+
+/**
+ * Represents a node in a tree data structure, specifically designed for a star-tree implementation.
+ * A star-tree node will represent both star and non-star nodes.
+ *
+ * @opensearch.experimental
+ */
+@ExperimentalApi
+public class TreeNode {
+
+    public static final int ALL = -1;
+
+    /**
+     * The dimension id for the dimension (field) associated with this star-tree node.
+     */
+    public int dimensionId = ALL;
+
+    /**
+     * The starting document id (inclusive) associated with this star-tree node.
+     */
+    public int startDocId = ALL;
+
+    /**
+     * The ending document id (exclusive) associated with this star-tree node.
+     */
+    public int endDocId = ALL;
+
+    /**
+     * The aggregated document id associated with this star-tree node.
+     */
+    public int aggregatedDocId = ALL;
+
+    /**
+     * The child dimension identifier associated with this star-tree node.
+     */
+    public int childDimensionId = ALL;
+
+    /**
+     * The value of the dimension associated with this star-tree node.
+     */
+    public long dimensionValue = ALL;
+
+    /**
+     * A flag indicating whether this node is a star node (a node that represents an aggregation of all dimensions).
+     */
+    public boolean isStarNode = false;
+
+    /**
+     * A map containing the child nodes of this star-tree node, keyed by the child's dimension value.
+     */
+    public Map<Long, TreeNode> children;
+}
diff --git a/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/utils/package-info.java b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/utils/package-info.java
new file mode 100644
index 0000000000000..c7e8b04d42178
--- /dev/null
+++ b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/utils/package-info.java
@@ -0,0 +1,14 @@
+/*
+ * SPDX-License-Identifier: Apache-2.0
+ *
+ * The OpenSearch Contributors require contributions made to
+ * this file be licensed under the Apache-2.0 license or a
+ * compatible open source license.
+ */ + +/** + * Utility to support Composite Index Star Tree + * + * @opensearch.experimental + */ +package org.opensearch.index.compositeindex.datacube.startree.utils; diff --git a/server/src/test/java/org/opensearch/index/codec/composite/datacube/startree/StarTreeDocValuesFormatTests.java b/server/src/test/java/org/opensearch/index/codec/composite/datacube/startree/StarTreeDocValuesFormatTests.java index 6c6d26656e4de..31df9a49bebfb 100644 --- a/server/src/test/java/org/opensearch/index/codec/composite/datacube/startree/StarTreeDocValuesFormatTests.java +++ b/server/src/test/java/org/opensearch/index/codec/composite/datacube/startree/StarTreeDocValuesFormatTests.java @@ -12,12 +12,7 @@ import org.apache.logging.log4j.Logger; import org.apache.lucene.codecs.Codec; import org.apache.lucene.codecs.lucene99.Lucene99Codec; -import org.apache.lucene.document.Document; -import org.apache.lucene.document.SortedNumericDocValuesField; -import org.apache.lucene.index.IndexWriterConfig; -import org.apache.lucene.store.Directory; import org.apache.lucene.tests.index.BaseDocValuesFormatTestCase; -import org.apache.lucene.tests.index.RandomIndexWriter; import org.apache.lucene.tests.util.LuceneTestCase; import org.opensearch.common.Rounding; import org.opensearch.index.codec.composite.Composite99Codec; @@ -31,7 +26,6 @@ import org.opensearch.index.mapper.MapperService; import org.opensearch.index.mapper.StarTreeMapper; -import java.io.IOException; import java.util.ArrayList; import java.util.Collections; import java.util.List; @@ -77,34 +71,4 @@ private static StarTreeField getStarTreeField(List d1Cale return new StarTreeField("starTree", dims, metrics, config); } - - public void testStarTreeDocValues() throws IOException { - Directory directory = newDirectory(); - IndexWriterConfig conf = newIndexWriterConfig(null); - conf.setMergePolicy(newLogMergePolicy()); - RandomIndexWriter iw = new RandomIndexWriter(random(), directory, conf); - Document doc = new Document(); - doc.add(new SortedNumericDocValuesField("sndv", 1)); - doc.add(new SortedNumericDocValuesField("dv", 1)); - doc.add(new SortedNumericDocValuesField("field", 1)); - iw.addDocument(doc); - doc.add(new SortedNumericDocValuesField("sndv", 1)); - doc.add(new SortedNumericDocValuesField("dv", 1)); - doc.add(new SortedNumericDocValuesField("field", 1)); - iw.addDocument(doc); - iw.forceMerge(1); - doc.add(new SortedNumericDocValuesField("sndv", 2)); - doc.add(new SortedNumericDocValuesField("dv", 2)); - doc.add(new SortedNumericDocValuesField("field", 2)); - iw.addDocument(doc); - doc.add(new SortedNumericDocValuesField("sndv", 2)); - doc.add(new SortedNumericDocValuesField("dv", 2)); - doc.add(new SortedNumericDocValuesField("field", 2)); - iw.addDocument(doc); - iw.forceMerge(1); - iw.close(); - - // TODO : validate star tree structures that got created - directory.close(); - } } diff --git a/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/CountValueAggregatorTests.java b/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/CountValueAggregatorTests.java new file mode 100644 index 0000000000000..e30e203406a6c --- /dev/null +++ b/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/CountValueAggregatorTests.java @@ -0,0 +1,53 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. 
+ */ + +package org.opensearch.index.compositeindex.datacube.startree.aggregators; + +import org.opensearch.index.compositeindex.datacube.MetricStat; +import org.opensearch.index.compositeindex.datacube.startree.aggregators.numerictype.StarTreeNumericType; +import org.opensearch.test.OpenSearchTestCase; + +public class CountValueAggregatorTests extends OpenSearchTestCase { + private final CountValueAggregator aggregator = new CountValueAggregator(); + + public void testGetAggregationType() { + assertEquals(MetricStat.COUNT.getTypeName(), aggregator.getAggregationType().getTypeName()); + } + + public void testGetAggregatedValueType() { + assertEquals(CountValueAggregator.VALUE_AGGREGATOR_TYPE, aggregator.getAggregatedValueType()); + } + + public void testGetInitialAggregatedValueForSegmentDocValue() { + assertEquals(1L, aggregator.getInitialAggregatedValueForSegmentDocValue(randomLong(), StarTreeNumericType.LONG), 0.0); + } + + public void testMergeAggregatedValueAndSegmentValue() { + assertEquals(3L, aggregator.mergeAggregatedValueAndSegmentValue(2L, 3L, StarTreeNumericType.LONG), 0.0); + } + + public void testMergeAggregatedValues() { + assertEquals(5L, aggregator.mergeAggregatedValues(2L, 3L), 0.0); + } + + public void testGetInitialAggregatedValue() { + assertEquals(3L, aggregator.getInitialAggregatedValue(3L), 0.0); + } + + public void testGetMaxAggregatedValueByteSize() { + assertEquals(Long.BYTES, aggregator.getMaxAggregatedValueByteSize()); + } + + public void testToLongValue() { + assertEquals(3L, aggregator.toLongValue(3L), 0.0); + } + + public void testToStarTreeNumericTypeValue() { + assertEquals(3L, aggregator.toStarTreeNumericTypeValue(3L, StarTreeNumericType.LONG), 0.0); + } +} diff --git a/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/MetricAggregatorInfoTests.java b/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/MetricAggregatorInfoTests.java new file mode 100644 index 0000000000000..d08f637a3f0a9 --- /dev/null +++ b/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/MetricAggregatorInfoTests.java @@ -0,0 +1,123 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. 
+ */ + +package org.opensearch.index.compositeindex.datacube.startree.aggregators; + +import org.opensearch.index.compositeindex.datacube.MetricStat; +import org.opensearch.index.fielddata.IndexNumericFieldData; +import org.opensearch.test.OpenSearchTestCase; + +public class MetricAggregatorInfoTests extends OpenSearchTestCase { + + public void testConstructor() { + MetricAggregatorInfo pair = new MetricAggregatorInfo( + MetricStat.SUM, + "column1", + "star_tree_field", + IndexNumericFieldData.NumericType.DOUBLE, + null + ); + assertEquals(MetricStat.SUM, pair.getMetricStat()); + assertEquals("column1", pair.getField()); + } + + public void testCountStarConstructor() { + MetricAggregatorInfo pair = new MetricAggregatorInfo( + MetricStat.COUNT, + "anything", + "star_tree_field", + IndexNumericFieldData.NumericType.DOUBLE, + null + ); + assertEquals(MetricStat.COUNT, pair.getMetricStat()); + assertEquals("anything", pair.getField()); + } + + public void testToFieldName() { + MetricAggregatorInfo pair = new MetricAggregatorInfo( + MetricStat.SUM, + "column2", + "star_tree_field", + IndexNumericFieldData.NumericType.DOUBLE, + null + ); + assertEquals("star_tree_field_column2_sum", pair.toFieldName()); + } + + public void testEquals() { + MetricAggregatorInfo pair1 = new MetricAggregatorInfo( + MetricStat.SUM, + "column1", + "star_tree_field", + IndexNumericFieldData.NumericType.DOUBLE, + null + ); + MetricAggregatorInfo pair2 = new MetricAggregatorInfo( + MetricStat.SUM, + "column1", + "star_tree_field", + IndexNumericFieldData.NumericType.DOUBLE, + null + ); + assertEquals(pair1, pair2); + assertNotEquals( + pair1, + new MetricAggregatorInfo(MetricStat.COUNT, "column1", "star_tree_field", IndexNumericFieldData.NumericType.DOUBLE, null) + ); + assertNotEquals( + pair1, + new MetricAggregatorInfo(MetricStat.SUM, "column2", "star_tree_field", IndexNumericFieldData.NumericType.DOUBLE, null) + ); + } + + public void testHashCode() { + MetricAggregatorInfo pair1 = new MetricAggregatorInfo( + MetricStat.SUM, + "column1", + "star_tree_field", + IndexNumericFieldData.NumericType.DOUBLE, + null + ); + MetricAggregatorInfo pair2 = new MetricAggregatorInfo( + MetricStat.SUM, + "column1", + "star_tree_field", + IndexNumericFieldData.NumericType.DOUBLE, + null + ); + assertEquals(pair1.hashCode(), pair2.hashCode()); + } + + public void testCompareTo() { + MetricAggregatorInfo pair1 = new MetricAggregatorInfo( + MetricStat.SUM, + "column1", + "star_tree_field", + IndexNumericFieldData.NumericType.DOUBLE, + null + ); + MetricAggregatorInfo pair2 = new MetricAggregatorInfo( + MetricStat.SUM, + "column2", + "star_tree_field", + IndexNumericFieldData.NumericType.DOUBLE, + null + ); + MetricAggregatorInfo pair3 = new MetricAggregatorInfo( + MetricStat.COUNT, + "column1", + "star_tree_field", + IndexNumericFieldData.NumericType.DOUBLE, + null + ); + assertTrue(pair1.compareTo(pair2) < 0); + assertTrue(pair2.compareTo(pair1) > 0); + assertTrue(pair1.compareTo(pair3) > 0); + assertTrue(pair3.compareTo(pair1) < 0); + } +} diff --git a/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/SumValueAggregatorTests.java b/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/SumValueAggregatorTests.java new file mode 100644 index 0000000000000..3fb627e7cd434 --- /dev/null +++ b/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/SumValueAggregatorTests.java @@ -0,0 +1,72 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * 
+ * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +package org.opensearch.index.compositeindex.datacube.startree.aggregators; + +import org.apache.lucene.util.NumericUtils; +import org.opensearch.index.compositeindex.datacube.MetricStat; +import org.opensearch.index.compositeindex.datacube.startree.aggregators.numerictype.StarTreeNumericType; +import org.opensearch.test.OpenSearchTestCase; +import org.junit.Before; + +public class SumValueAggregatorTests extends OpenSearchTestCase { + + private SumValueAggregator aggregator; + + @Before + public void setup() { + aggregator = new SumValueAggregator(); + } + + public void testGetAggregationType() { + assertEquals(MetricStat.SUM.getTypeName(), aggregator.getAggregationType().getTypeName()); + } + + public void testGetAggregatedValueType() { + assertEquals(SumValueAggregator.VALUE_AGGREGATOR_TYPE, aggregator.getAggregatedValueType()); + } + + public void testGetInitialAggregatedValueForSegmentDocValue() { + assertEquals(1.0, aggregator.getInitialAggregatedValueForSegmentDocValue(1L, StarTreeNumericType.LONG), 0.0); + assertThrows( + NullPointerException.class, + () -> aggregator.getInitialAggregatedValueForSegmentDocValue(null, StarTreeNumericType.DOUBLE) + ); + } + + public void testMergeAggregatedValueAndSegmentValue() { + aggregator.getInitialAggregatedValue(2.0); + assertEquals(5.0, aggregator.mergeAggregatedValueAndSegmentValue(2.0, 3L, StarTreeNumericType.LONG), 0.0); + } + + public void testMergeAggregatedValueAndSegmentValue_nullSegmentDocValue() { + aggregator.getInitialAggregatedValue(2.0); + assertThrows(NullPointerException.class, () -> aggregator.mergeAggregatedValueAndSegmentValue(2.0, null, StarTreeNumericType.LONG)); + } + + public void testMergeAggregatedValues() { + aggregator.getInitialAggregatedValue(3.0); + assertEquals(5.0, aggregator.mergeAggregatedValues(2.0, 3.0), 0.0); + } + + public void testGetInitialAggregatedValue() { + assertEquals(3.14, aggregator.getInitialAggregatedValue(3.14), 0.0); + } + + public void testGetMaxAggregatedValueByteSize() { + assertEquals(Double.BYTES, aggregator.getMaxAggregatedValueByteSize()); + } + + public void testToLongValue() { + assertEquals(NumericUtils.doubleToSortableLong(3.14), aggregator.toLongValue(3.14), 0.0); + } + + public void testToStarTreeNumericTypeValue() { + assertEquals(NumericUtils.sortableLongToDouble(3L), aggregator.toStarTreeNumericTypeValue(3L, StarTreeNumericType.DOUBLE), 0.0); + } +} diff --git a/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/ValueAggregatorFactoryTests.java b/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/ValueAggregatorFactoryTests.java new file mode 100644 index 0000000000000..ce61ab839cc61 --- /dev/null +++ b/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/ValueAggregatorFactoryTests.java @@ -0,0 +1,27 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. 
+ */ + +package org.opensearch.index.compositeindex.datacube.startree.aggregators; + +import org.opensearch.index.compositeindex.datacube.MetricStat; +import org.opensearch.index.compositeindex.datacube.startree.aggregators.numerictype.StarTreeNumericType; +import org.opensearch.test.OpenSearchTestCase; + +public class ValueAggregatorFactoryTests extends OpenSearchTestCase { + + public void testGetValueAggregatorForSumType() { + ValueAggregator aggregator = ValueAggregatorFactory.getValueAggregator(MetricStat.SUM); + assertNotNull(aggregator); + assertEquals(SumValueAggregator.class, aggregator.getClass()); + } + + public void testGetAggregatedValueTypeForSumType() { + StarTreeNumericType starTreeNumericType = ValueAggregatorFactory.getAggregatedValueType(MetricStat.SUM); + assertEquals(SumValueAggregator.VALUE_AGGREGATOR_TYPE, starTreeNumericType); + } +} diff --git a/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/builder/BaseStarTreeBuilderTests.java b/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/builder/BaseStarTreeBuilderTests.java new file mode 100644 index 0000000000000..b78130e72aba1 --- /dev/null +++ b/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/builder/BaseStarTreeBuilderTests.java @@ -0,0 +1,216 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +package org.opensearch.index.compositeindex.datacube.startree.builder; + +import org.apache.lucene.codecs.DocValuesProducer; +import org.apache.lucene.codecs.lucene99.Lucene99Codec; +import org.apache.lucene.index.DocValuesType; +import org.apache.lucene.index.FieldInfo; +import org.apache.lucene.index.FieldInfos; +import org.apache.lucene.index.IndexOptions; +import org.apache.lucene.index.SegmentInfo; +import org.apache.lucene.index.SegmentWriteState; +import org.apache.lucene.index.VectorEncoding; +import org.apache.lucene.index.VectorSimilarityFunction; +import org.apache.lucene.store.Directory; +import org.apache.lucene.util.InfoStream; +import org.apache.lucene.util.Version; +import org.opensearch.common.settings.Settings; +import org.opensearch.index.compositeindex.datacube.Dimension; +import org.opensearch.index.compositeindex.datacube.Metric; +import org.opensearch.index.compositeindex.datacube.MetricStat; +import org.opensearch.index.compositeindex.datacube.NumericDimension; +import org.opensearch.index.compositeindex.datacube.startree.StarTreeDocument; +import org.opensearch.index.compositeindex.datacube.startree.StarTreeField; +import org.opensearch.index.compositeindex.datacube.startree.StarTreeFieldConfiguration; +import org.opensearch.index.compositeindex.datacube.startree.aggregators.MetricAggregatorInfo; +import org.opensearch.index.fielddata.IndexNumericFieldData; +import org.opensearch.index.mapper.ContentPath; +import org.opensearch.index.mapper.DocumentMapper; +import org.opensearch.index.mapper.Mapper; +import org.opensearch.index.mapper.MapperService; +import org.opensearch.index.mapper.MappingLookup; +import org.opensearch.index.mapper.NumberFieldMapper; +import org.opensearch.test.OpenSearchTestCase; +import org.junit.BeforeClass; + +import java.io.IOException; +import java.nio.charset.StandardCharsets; +import java.util.Collections; +import java.util.HashMap; +import java.util.Iterator; +import java.util.List; +import java.util.Map; +import java.util.Set; +import 
java.util.UUID;
+
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.when;
+
+public class BaseStarTreeBuilderTests extends OpenSearchTestCase {
+
+    private static BaseStarTreeBuilder builder;
+    private static MapperService mapperService;
+    private static List<Dimension> dimensionsOrder;
+    private static List<String> fields = List.of(
+        "field1",
+        "field2",
+        "field3",
+        "field4",
+        "field5",
+        "field6",
+        "field7",
+        "field8",
+        "field9",
+        "field10"
+    );
+    private static List<Metric> metrics;
+    private static Directory directory;
+    private static FieldInfo[] fieldsInfo;
+    private static SegmentWriteState state;
+    private static StarTreeField starTreeField;
+
+    @BeforeClass
+    public static void setup() throws IOException {
+
+        dimensionsOrder = List.of(
+            new NumericDimension("field1"),
+            new NumericDimension("field3"),
+            new NumericDimension("field5"),
+            new NumericDimension("field8")
+        );
+        metrics = List.of(new Metric("field2", List.of(MetricStat.SUM)), new Metric("field4", List.of(MetricStat.SUM)));
+
+        starTreeField = new StarTreeField(
+            "test",
+            dimensionsOrder,
+            metrics,
+            new StarTreeFieldConfiguration(1, Set.of("field8"), StarTreeFieldConfiguration.StarTreeBuildMode.ON_HEAP)
+        );
+
+        DocValuesProducer docValuesProducer = mock(DocValuesProducer.class);
+        directory = newFSDirectory(createTempDir());
+        SegmentInfo segmentInfo = new SegmentInfo(
+            directory,
+            Version.LATEST,
+            Version.LUCENE_9_11_0,
+            "test_segment",
+            5,
+            false,
+            false,
+            new Lucene99Codec(),
+            new HashMap<>(),
+            UUID.randomUUID().toString().substring(0, 16).getBytes(StandardCharsets.UTF_8),
+            new HashMap<>(),
+            null
+        );
+
+        fieldsInfo = new FieldInfo[fields.size()];
+        Map<String, DocValuesProducer> fieldProducerMap = new HashMap<>();
+        for (int i = 0; i < fieldsInfo.length; i++) {
+            fieldsInfo[i] = new FieldInfo(
+                fields.get(i),
+                i,
+                false,
+                false,
+                true,
+                IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS,
+                DocValuesType.SORTED_NUMERIC,
+                -1,
+                Collections.emptyMap(),
+                0,
+                0,
+                0,
+                0,
+                VectorEncoding.FLOAT32,
+                VectorSimilarityFunction.EUCLIDEAN,
+                false,
+                false
+            );
+            fieldProducerMap.put(fields.get(i), docValuesProducer);
+        }
+        FieldInfos fieldInfos = new FieldInfos(fieldsInfo);
+        state = new SegmentWriteState(InfoStream.getDefault(), segmentInfo.dir, segmentInfo, fieldInfos, null, newIOContext(random()));
+
+        mapperService = mock(MapperService.class);
+        DocumentMapper documentMapper = mock(DocumentMapper.class);
+        when(mapperService.documentMapper()).thenReturn(documentMapper);
+        Settings settings = Settings.builder().put(settings(org.opensearch.Version.CURRENT).build()).build();
+        NumberFieldMapper numberFieldMapper1 = new NumberFieldMapper.Builder("field2", NumberFieldMapper.NumberType.DOUBLE, false, true)
+            .build(new Mapper.BuilderContext(settings, new ContentPath()));
+        NumberFieldMapper numberFieldMapper2 = new NumberFieldMapper.Builder("field4", NumberFieldMapper.NumberType.DOUBLE, false, true)
+            .build(new Mapper.BuilderContext(settings, new ContentPath()));
+        MappingLookup fieldMappers = new MappingLookup(
+            Set.of(numberFieldMapper1, numberFieldMapper2),
+            Collections.emptyList(),
+            Collections.emptyList(),
+            0,
+            null
+        );
+        when(documentMapper.mappers()).thenReturn(fieldMappers);
+
+        builder = new BaseStarTreeBuilder(starTreeField, fieldProducerMap, state, mapperService) {
+            @Override
+            public void appendStarTreeDocument(StarTreeDocument starTreeDocument) throws IOException {}
+
+            @Override
+            public StarTreeDocument getStarTreeDocument(int docId) throws IOException {
+                return null;
+            }
+
+            @Override
public List<StarTreeDocument> getStarTreeDocuments() {
+                return List.of();
+            }
+
+            @Override
+            public Long getDimensionValue(int docId, int dimensionId) throws IOException {
+                return 0L;
+            }
+
+            @Override
+            public Iterator<StarTreeDocument> sortAndAggregateStarTreeDocuments() throws IOException {
+                return null;
+            }
+
+            @Override
+            public Iterator<StarTreeDocument> generateStarTreeDocumentsForStarNode(int startDocId, int endDocId, int dimensionId)
+                throws IOException {
+                return null;
+            }
+        };
+    }
+
+    public void test_generateMetricAggregatorInfos() throws IOException {
+        List<MetricAggregatorInfo> metricAggregatorInfos = builder.generateMetricAggregatorInfos(mapperService, state);
+        List<MetricAggregatorInfo> expectedMetricAggregatorInfos = List.of(
+            new MetricAggregatorInfo(MetricStat.SUM, "field2", starTreeField.getName(), IndexNumericFieldData.NumericType.DOUBLE, null),
+            new MetricAggregatorInfo(MetricStat.SUM, "field4", starTreeField.getName(), IndexNumericFieldData.NumericType.DOUBLE, null)
+        );
+        assertEquals(expectedMetricAggregatorInfos, metricAggregatorInfos);
+    }
+
+    public void test_reduceStarTreeDocuments() {
+        StarTreeDocument starTreeDocument1 = new StarTreeDocument(new Long[] { 1L, 3L, 5L, 8L }, new Double[] { 4.0, 8.0 });
+        StarTreeDocument starTreeDocument2 = new StarTreeDocument(new Long[] { 1L, 3L, 5L, 8L }, new Double[] { 10.0, 6.0 });
+
+        StarTreeDocument expectedMergedStarTreeDocument = new StarTreeDocument(new Long[] { 1L, 3L, 5L, 8L }, new Double[] { 14.0, 14.0 });
+        StarTreeDocument mergedStarTreeDocument = builder.reduceStarTreeDocuments(null, starTreeDocument1);
+        StarTreeDocument resultStarTreeDocument = builder.reduceStarTreeDocuments(mergedStarTreeDocument, starTreeDocument2);
+
+        assertEquals(expectedMergedStarTreeDocument.metrics[0], resultStarTreeDocument.metrics[0]);
+        assertEquals(expectedMergedStarTreeDocument.metrics[1], resultStarTreeDocument.metrics[1]);
+    }
+
+    @Override
+    public void tearDown() throws Exception {
+        super.tearDown();
+        directory.close();
+    }
+}
diff --git a/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/builder/OnHeapStarTreeBuilderTests.java b/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/builder/OnHeapStarTreeBuilderTests.java
new file mode 100644
index 0000000000000..4e107e78d27be
--- /dev/null
+++ b/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/builder/OnHeapStarTreeBuilderTests.java
@@ -0,0 +1,706 @@
+/*
+ * SPDX-License-Identifier: Apache-2.0
+ *
+ * The OpenSearch Contributors require contributions made to
+ * this file be licensed under the Apache-2.0 license or a
+ * compatible open source license.
+ */ + +package org.opensearch.index.compositeindex.datacube.startree.builder; + +import org.apache.lucene.codecs.DocValuesProducer; +import org.apache.lucene.codecs.lucene99.Lucene99Codec; +import org.apache.lucene.index.DocValuesType; +import org.apache.lucene.index.FieldInfo; +import org.apache.lucene.index.FieldInfos; +import org.apache.lucene.index.IndexOptions; +import org.apache.lucene.index.SegmentInfo; +import org.apache.lucene.index.SegmentWriteState; +import org.apache.lucene.index.VectorEncoding; +import org.apache.lucene.index.VectorSimilarityFunction; +import org.apache.lucene.sandbox.document.HalfFloatPoint; +import org.apache.lucene.store.Directory; +import org.apache.lucene.util.InfoStream; +import org.apache.lucene.util.NumericUtils; +import org.apache.lucene.util.Version; +import org.opensearch.common.settings.Settings; +import org.opensearch.index.compositeindex.datacube.Dimension; +import org.opensearch.index.compositeindex.datacube.Metric; +import org.opensearch.index.compositeindex.datacube.MetricStat; +import org.opensearch.index.compositeindex.datacube.NumericDimension; +import org.opensearch.index.compositeindex.datacube.startree.StarTreeDocument; +import org.opensearch.index.compositeindex.datacube.startree.StarTreeField; +import org.opensearch.index.compositeindex.datacube.startree.StarTreeFieldConfiguration; +import org.opensearch.index.mapper.ContentPath; +import org.opensearch.index.mapper.DocumentMapper; +import org.opensearch.index.mapper.Mapper; +import org.opensearch.index.mapper.MapperService; +import org.opensearch.index.mapper.MappingLookup; +import org.opensearch.index.mapper.NumberFieldMapper; +import org.opensearch.test.OpenSearchTestCase; +import org.junit.Before; + +import java.io.IOException; +import java.nio.charset.StandardCharsets; +import java.util.Collections; +import java.util.HashMap; +import java.util.Iterator; +import java.util.List; +import java.util.Map; +import java.util.Set; +import java.util.UUID; + +import static org.mockito.Mockito.mock; +import static org.mockito.Mockito.when; + +public class OnHeapStarTreeBuilderTests extends OpenSearchTestCase { + + private OnHeapStarTreeBuilder builder; + private MapperService mapperService; + private List dimensionsOrder; + private List fields = List.of(); + private List metrics; + private Directory directory; + private FieldInfo[] fieldsInfo; + private StarTreeField compositeField; + private Map fieldProducerMap; + private SegmentWriteState writeState; + + @Before + public void setup() throws IOException { + fields = List.of("field1", "field2", "field3", "field4", "field5", "field6", "field7", "field8", "field9", "field10"); + + dimensionsOrder = List.of( + new NumericDimension("field1"), + new NumericDimension("field3"), + new NumericDimension("field5"), + new NumericDimension("field8") + ); + metrics = List.of( + new Metric("field2", List.of(MetricStat.SUM)), + new Metric("field4", List.of(MetricStat.SUM)), + new Metric("field6", List.of(MetricStat.COUNT)) + ); + + DocValuesProducer docValuesProducer = mock(DocValuesProducer.class); + + compositeField = new StarTreeField( + "test", + dimensionsOrder, + metrics, + new StarTreeFieldConfiguration(1, Set.of("field8"), StarTreeFieldConfiguration.StarTreeBuildMode.ON_HEAP) + ); + directory = newFSDirectory(createTempDir()); + SegmentInfo segmentInfo = new SegmentInfo( + directory, + Version.LATEST, + Version.LUCENE_9_11_0, + "test_segment", + 5, + false, + false, + new Lucene99Codec(), + new HashMap<>(), + 
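+            // SegmentInfo requires a 16-byte segment id; the trimmed random UUID below is a test-only stand-in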
UUID.randomUUID().toString().substring(0, 16).getBytes(StandardCharsets.UTF_8), + new HashMap<>(), + null + ); + + fieldsInfo = new FieldInfo[fields.size()]; + fieldProducerMap = new HashMap<>(); + for (int i = 0; i < fieldsInfo.length; i++) { + fieldsInfo[i] = new FieldInfo( + fields.get(i), + i, + false, + false, + true, + IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS, + DocValuesType.SORTED_NUMERIC, + -1, + Collections.emptyMap(), + 0, + 0, + 0, + 0, + VectorEncoding.FLOAT32, + VectorSimilarityFunction.EUCLIDEAN, + false, + false + ); + fieldProducerMap.put(fields.get(i), docValuesProducer); + } + FieldInfos fieldInfos = new FieldInfos(fieldsInfo); + writeState = new SegmentWriteState(InfoStream.getDefault(), segmentInfo.dir, segmentInfo, fieldInfos, null, newIOContext(random())); + + mapperService = mock(MapperService.class); + DocumentMapper documentMapper = mock(DocumentMapper.class); + when(mapperService.documentMapper()).thenReturn(documentMapper); + Settings settings = Settings.builder().put(settings(org.opensearch.Version.CURRENT).build()).build(); + NumberFieldMapper numberFieldMapper1 = new NumberFieldMapper.Builder("field2", NumberFieldMapper.NumberType.DOUBLE, false, true) + .build(new Mapper.BuilderContext(settings, new ContentPath())); + NumberFieldMapper numberFieldMapper2 = new NumberFieldMapper.Builder("field4", NumberFieldMapper.NumberType.DOUBLE, false, true) + .build(new Mapper.BuilderContext(settings, new ContentPath())); + NumberFieldMapper numberFieldMapper3 = new NumberFieldMapper.Builder("field6", NumberFieldMapper.NumberType.DOUBLE, false, true) + .build(new Mapper.BuilderContext(settings, new ContentPath())); + MappingLookup fieldMappers = new MappingLookup( + Set.of(numberFieldMapper1, numberFieldMapper2, numberFieldMapper3), + Collections.emptyList(), + Collections.emptyList(), + 0, + null + ); + when(documentMapper.mappers()).thenReturn(fieldMappers); + builder = new OnHeapStarTreeBuilder(compositeField, fieldProducerMap, writeState, mapperService); + } + + public void test_sortAndAggregateStarTreeDocuments() throws IOException { + + int noOfStarTreeDocuments = 5; + StarTreeDocument[] starTreeDocuments = new StarTreeDocument[noOfStarTreeDocuments]; + + starTreeDocuments[0] = new StarTreeDocument(new Long[] { 2L, 4L, 3L, 4L }, new Double[] { 12.0, 10.0, randomDouble() }); + starTreeDocuments[1] = new StarTreeDocument(new Long[] { 3L, 4L, 2L, 1L }, new Double[] { 10.0, 6.0, randomDouble() }); + starTreeDocuments[2] = new StarTreeDocument(new Long[] { 3L, 4L, 2L, 1L }, new Double[] { 14.0, 12.0, randomDouble() }); + starTreeDocuments[3] = new StarTreeDocument(new Long[] { 2L, 4L, 3L, 4L }, new Double[] { 9.0, 4.0, randomDouble() }); + starTreeDocuments[4] = new StarTreeDocument(new Long[] { 3L, 4L, 2L, 1L }, new Double[] { 11.0, 16.0, randomDouble() }); + + List inorderStarTreeDocuments = List.of( + new StarTreeDocument(new Long[] { 2L, 4L, 3L, 4L }, new Object[] { 21.0, 14.0, 2L }), + new StarTreeDocument(new Long[] { 3L, 4L, 2L, 1L }, new Object[] { 35.0, 34.0, 3L }) + ); + Iterator expectedStarTreeDocumentIterator = inorderStarTreeDocuments.iterator(); + + StarTreeDocument[] segmentStarTreeDocuments = new StarTreeDocument[noOfStarTreeDocuments]; + for (int i = 0; i < noOfStarTreeDocuments; i++) { + long metric1 = NumericUtils.doubleToSortableLong((Double) starTreeDocuments[i].metrics[0]); + long metric2 = NumericUtils.doubleToSortableLong((Double) starTreeDocuments[i].metrics[1]); + long metric3 = NumericUtils.doubleToSortableLong((Double) 
starTreeDocuments[i].metrics[2]); + segmentStarTreeDocuments[i] = new StarTreeDocument(starTreeDocuments[i].dimensions, new Long[] { metric1, metric2, metric3 }); + } + + Iterator segmentStarTreeDocumentIterator = builder.sortAndAggregateStarTreeDocuments(segmentStarTreeDocuments); + int numOfAggregatedDocuments = 0; + while (segmentStarTreeDocumentIterator.hasNext() && expectedStarTreeDocumentIterator.hasNext()) { + StarTreeDocument resultStarTreeDocument = segmentStarTreeDocumentIterator.next(); + StarTreeDocument expectedStarTreeDocument = expectedStarTreeDocumentIterator.next(); + + assertEquals(expectedStarTreeDocument.dimensions[0], resultStarTreeDocument.dimensions[0]); + assertEquals(expectedStarTreeDocument.dimensions[1], resultStarTreeDocument.dimensions[1]); + assertEquals(expectedStarTreeDocument.dimensions[2], resultStarTreeDocument.dimensions[2]); + assertEquals(expectedStarTreeDocument.dimensions[3], resultStarTreeDocument.dimensions[3]); + assertEquals(expectedStarTreeDocument.metrics[0], resultStarTreeDocument.metrics[0]); + assertEquals(expectedStarTreeDocument.metrics[1], resultStarTreeDocument.metrics[1]); + assertEquals(expectedStarTreeDocument.metrics[2], resultStarTreeDocument.metrics[2]); + + numOfAggregatedDocuments++; + } + + assertEquals(inorderStarTreeDocuments.size(), numOfAggregatedDocuments); + + } + + public void test_sortAndAggregateStarTreeDocuments_nullMetric() throws IOException { + + int noOfStarTreeDocuments = 5; + StarTreeDocument[] starTreeDocuments = new StarTreeDocument[noOfStarTreeDocuments]; + + starTreeDocuments[0] = new StarTreeDocument(new Long[] { 2L, 4L, 3L, 4L }, new Double[] { 12.0, 10.0, randomDouble() }); + starTreeDocuments[1] = new StarTreeDocument(new Long[] { 3L, 4L, 2L, 1L }, new Double[] { 10.0, 6.0, randomDouble() }); + starTreeDocuments[2] = new StarTreeDocument(new Long[] { 3L, 4L, 2L, 1L }, new Double[] { 14.0, 12.0, randomDouble() }); + starTreeDocuments[3] = new StarTreeDocument(new Long[] { 2L, 4L, 3L, 4L }, new Double[] { 9.0, 4.0, randomDouble() }); + starTreeDocuments[4] = new StarTreeDocument(new Long[] { 3L, 4L, 2L, 1L }, new Double[] { 11.0, null, randomDouble() }); + StarTreeDocument expectedStarTreeDocument = new StarTreeDocument(new Long[] { 2L, 4L, 3L, 4L }, new Double[] { 21.0, 14.0, 2.0 }); + + StarTreeDocument[] segmentStarTreeDocuments = new StarTreeDocument[noOfStarTreeDocuments]; + for (int i = 0; i < noOfStarTreeDocuments; i++) { + Long metric1 = NumericUtils.doubleToSortableLong((Double) starTreeDocuments[i].metrics[0]); + Long metric2 = starTreeDocuments[i].metrics[1] != null + ? 
NumericUtils.doubleToSortableLong((Double) starTreeDocuments[i].metrics[1]) + : null; + Long metric3 = NumericUtils.doubleToSortableLong((Double) starTreeDocuments[i].metrics[2]); + segmentStarTreeDocuments[i] = new StarTreeDocument(starTreeDocuments[i].dimensions, new Object[] { metric1, metric2, metric3 }); + } + + Iterator segmentStarTreeDocumentIterator = builder.sortAndAggregateStarTreeDocuments(segmentStarTreeDocuments); + + StarTreeDocument resultStarTreeDocument = segmentStarTreeDocumentIterator.next(); + assertEquals(expectedStarTreeDocument.dimensions[0], resultStarTreeDocument.dimensions[0]); + assertEquals(expectedStarTreeDocument.dimensions[1], resultStarTreeDocument.dimensions[1]); + assertEquals(expectedStarTreeDocument.dimensions[2], resultStarTreeDocument.dimensions[2]); + assertEquals(expectedStarTreeDocument.dimensions[3], resultStarTreeDocument.dimensions[3]); + assertEquals(expectedStarTreeDocument.metrics[0], resultStarTreeDocument.metrics[0]); + assertEquals(expectedStarTreeDocument.metrics[1], resultStarTreeDocument.metrics[1]); + + assertThrows( + "Null metric should have resulted in IllegalStateException", + IllegalStateException.class, + segmentStarTreeDocumentIterator::next + ); + + } + + public void test_sortAndAggregateStarTreeDocument_longMaxAndLongMinDimensions() throws IOException { + + int noOfStarTreeDocuments = 5; + StarTreeDocument[] starTreeDocuments = new StarTreeDocument[noOfStarTreeDocuments]; + + starTreeDocuments[0] = new StarTreeDocument(new Long[] { Long.MIN_VALUE, 4L, 3L, 4L }, new Double[] { 12.0, 10.0, randomDouble() }); + starTreeDocuments[1] = new StarTreeDocument(new Long[] { 3L, 4L, 2L, Long.MAX_VALUE }, new Double[] { 10.0, 6.0, randomDouble() }); + starTreeDocuments[2] = new StarTreeDocument(new Long[] { 3L, 4L, 2L, Long.MAX_VALUE }, new Double[] { 14.0, 12.0, randomDouble() }); + starTreeDocuments[3] = new StarTreeDocument(new Long[] { Long.MIN_VALUE, 4L, 3L, 4L }, new Double[] { 9.0, 4.0, randomDouble() }); + starTreeDocuments[4] = new StarTreeDocument(new Long[] { 3L, 4L, 2L, Long.MAX_VALUE }, new Double[] { 11.0, 16.0, randomDouble() }); + + List inorderStarTreeDocuments = List.of( + new StarTreeDocument(new Long[] { Long.MIN_VALUE, 4L, 3L, 4L }, new Object[] { 21.0, 14.0, 2L }), + new StarTreeDocument(new Long[] { 3L, 4L, 2L, Long.MAX_VALUE }, new Object[] { 35.0, 34.0, 3L }) + ); + Iterator expectedStarTreeDocumentIterator = inorderStarTreeDocuments.iterator(); + + StarTreeDocument[] segmentStarTreeDocuments = new StarTreeDocument[noOfStarTreeDocuments]; + for (int i = 0; i < noOfStarTreeDocuments; i++) { + long metric1 = NumericUtils.doubleToSortableLong((Double) starTreeDocuments[i].metrics[0]); + long metric2 = NumericUtils.doubleToSortableLong((Double) starTreeDocuments[i].metrics[1]); + long metric3 = NumericUtils.doubleToSortableLong((Double) starTreeDocuments[i].metrics[2]); + segmentStarTreeDocuments[i] = new StarTreeDocument(starTreeDocuments[i].dimensions, new Long[] { metric1, metric2, metric3 }); + } + + Iterator segmentStarTreeDocumentIterator = builder.sortAndAggregateStarTreeDocuments(segmentStarTreeDocuments); + int numOfAggregatedDocuments = 0; + while (segmentStarTreeDocumentIterator.hasNext() && expectedStarTreeDocumentIterator.hasNext()) { + StarTreeDocument resultStarTreeDocument = segmentStarTreeDocumentIterator.next(); + StarTreeDocument expectedStarTreeDocument = expectedStarTreeDocumentIterator.next(); + + assertEquals(expectedStarTreeDocument.dimensions[0], resultStarTreeDocument.dimensions[0]); + 
assertEquals(expectedStarTreeDocument.dimensions[1], resultStarTreeDocument.dimensions[1]); + assertEquals(expectedStarTreeDocument.dimensions[2], resultStarTreeDocument.dimensions[2]); + assertEquals(expectedStarTreeDocument.dimensions[3], resultStarTreeDocument.dimensions[3]); + assertEquals(expectedStarTreeDocument.metrics[0], resultStarTreeDocument.metrics[0]); + assertEquals(expectedStarTreeDocument.metrics[1], resultStarTreeDocument.metrics[1]); + assertEquals(expectedStarTreeDocument.metrics[2], resultStarTreeDocument.metrics[2]); + + numOfAggregatedDocuments++; + } + + assertEquals(inorderStarTreeDocuments.size(), numOfAggregatedDocuments); + + } + + public void test_sortAndAggregateStarTreeDocument_DoubleMaxAndDoubleMinMetrics() throws IOException { + + int noOfStarTreeDocuments = 5; + StarTreeDocument[] starTreeDocuments = new StarTreeDocument[noOfStarTreeDocuments]; + + starTreeDocuments[0] = new StarTreeDocument(new Long[] { 2L, 4L, 3L, 4L }, new Double[] { Double.MAX_VALUE, 10.0, randomDouble() }); + starTreeDocuments[1] = new StarTreeDocument(new Long[] { 3L, 4L, 2L, 1L }, new Double[] { 10.0, 6.0, randomDouble() }); + starTreeDocuments[2] = new StarTreeDocument(new Long[] { 3L, 4L, 2L, 1L }, new Double[] { 14.0, Double.MIN_VALUE, randomDouble() }); + starTreeDocuments[3] = new StarTreeDocument(new Long[] { 2L, 4L, 3L, 4L }, new Double[] { 9.0, 4.0, randomDouble() }); + starTreeDocuments[4] = new StarTreeDocument(new Long[] { 3L, 4L, 2L, 1L }, new Double[] { 11.0, 16.0, randomDouble() }); + + List inorderStarTreeDocuments = List.of( + new StarTreeDocument(new Long[] { 2L, 4L, 3L, 4L }, new Object[] { Double.MAX_VALUE + 9, 14.0, 2L }), + new StarTreeDocument(new Long[] { 3L, 4L, 2L, 1L }, new Object[] { 35.0, Double.MIN_VALUE + 22, 3L }) + ); + Iterator expectedStarTreeDocumentIterator = inorderStarTreeDocuments.iterator(); + + StarTreeDocument[] segmentStarTreeDocuments = new StarTreeDocument[noOfStarTreeDocuments]; + for (int i = 0; i < noOfStarTreeDocuments; i++) { + long metric1 = NumericUtils.doubleToSortableLong((Double) starTreeDocuments[i].metrics[0]); + long metric2 = NumericUtils.doubleToSortableLong((Double) starTreeDocuments[i].metrics[1]); + long metric3 = NumericUtils.doubleToSortableLong((Double) starTreeDocuments[i].metrics[2]); + segmentStarTreeDocuments[i] = new StarTreeDocument(starTreeDocuments[i].dimensions, new Long[] { metric1, metric2, metric3 }); + } + + Iterator segmentStarTreeDocumentIterator = builder.sortAndAggregateStarTreeDocuments(segmentStarTreeDocuments); + int numOfAggregatedDocuments = 0; + while (segmentStarTreeDocumentIterator.hasNext() && expectedStarTreeDocumentIterator.hasNext()) { + StarTreeDocument resultStarTreeDocument = segmentStarTreeDocumentIterator.next(); + StarTreeDocument expectedStarTreeDocument = expectedStarTreeDocumentIterator.next(); + + assertEquals(expectedStarTreeDocument.dimensions[0], resultStarTreeDocument.dimensions[0]); + assertEquals(expectedStarTreeDocument.dimensions[1], resultStarTreeDocument.dimensions[1]); + assertEquals(expectedStarTreeDocument.dimensions[2], resultStarTreeDocument.dimensions[2]); + assertEquals(expectedStarTreeDocument.dimensions[3], resultStarTreeDocument.dimensions[3]); + assertEquals(expectedStarTreeDocument.metrics[0], resultStarTreeDocument.metrics[0]); + assertEquals(expectedStarTreeDocument.metrics[1], resultStarTreeDocument.metrics[1]); + assertEquals(expectedStarTreeDocument.metrics[2], resultStarTreeDocument.metrics[2]); + + numOfAggregatedDocuments++; + } + + 
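+        // guard against the iterator producing fewer aggregated documents than expected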
assertEquals(inorderStarTreeDocuments.size(), numOfAggregatedDocuments); + + } + + public void test_build_halfFloatMetrics() throws IOException { + + mapperService = mock(MapperService.class); + DocumentMapper documentMapper = mock(DocumentMapper.class); + when(mapperService.documentMapper()).thenReturn(documentMapper); + Settings settings = Settings.builder().put(settings(org.opensearch.Version.CURRENT).build()).build(); + NumberFieldMapper numberFieldMapper1 = new NumberFieldMapper.Builder("field2", NumberFieldMapper.NumberType.HALF_FLOAT, false, true) + .build(new Mapper.BuilderContext(settings, new ContentPath())); + NumberFieldMapper numberFieldMapper2 = new NumberFieldMapper.Builder("field4", NumberFieldMapper.NumberType.HALF_FLOAT, false, true) + .build(new Mapper.BuilderContext(settings, new ContentPath())); + NumberFieldMapper numberFieldMapper3 = new NumberFieldMapper.Builder("field6", NumberFieldMapper.NumberType.HALF_FLOAT, false, true) + .build(new Mapper.BuilderContext(settings, new ContentPath())); + MappingLookup fieldMappers = new MappingLookup( + Set.of(numberFieldMapper1, numberFieldMapper2, numberFieldMapper3), + Collections.emptyList(), + Collections.emptyList(), + 0, + null + ); + when(documentMapper.mappers()).thenReturn(fieldMappers); + builder = new OnHeapStarTreeBuilder(compositeField, fieldProducerMap, writeState, mapperService); + + int noOfStarTreeDocuments = 5; + StarTreeDocument[] starTreeDocuments = new StarTreeDocument[noOfStarTreeDocuments]; + + starTreeDocuments[0] = new StarTreeDocument( + new Long[] { 2L, 4L, 3L, 4L }, + new HalfFloatPoint[] { new HalfFloatPoint("hf1", 12), new HalfFloatPoint("hf6", 10), new HalfFloatPoint("field6", 10) } + ); + starTreeDocuments[1] = new StarTreeDocument( + new Long[] { 3L, 4L, 2L, 1L }, + new HalfFloatPoint[] { new HalfFloatPoint("hf2", 10), new HalfFloatPoint("hf7", 6), new HalfFloatPoint("field6", 10) } + ); + starTreeDocuments[2] = new StarTreeDocument( + new Long[] { 3L, 4L, 2L, 1L }, + new HalfFloatPoint[] { new HalfFloatPoint("hf3", 14), new HalfFloatPoint("hf8", 12), new HalfFloatPoint("field6", 10) } + ); + starTreeDocuments[3] = new StarTreeDocument( + new Long[] { 2L, 4L, 3L, 4L }, + new HalfFloatPoint[] { new HalfFloatPoint("hf4", 9), new HalfFloatPoint("hf9", 4), new HalfFloatPoint("field6", 10) } + ); + starTreeDocuments[4] = new StarTreeDocument( + new Long[] { 3L, 4L, 2L, 1L }, + new HalfFloatPoint[] { new HalfFloatPoint("hf5", 11), new HalfFloatPoint("hf10", 16), new HalfFloatPoint("field6", 10) } + ); + + StarTreeDocument[] segmentStarTreeDocuments = new StarTreeDocument[noOfStarTreeDocuments]; + for (int i = 0; i < noOfStarTreeDocuments; i++) { + long metric1 = HalfFloatPoint.halfFloatToSortableShort( + ((HalfFloatPoint) starTreeDocuments[i].metrics[0]).numericValue().floatValue() + ); + long metric2 = HalfFloatPoint.halfFloatToSortableShort( + ((HalfFloatPoint) starTreeDocuments[i].metrics[1]).numericValue().floatValue() + ); + long metric3 = HalfFloatPoint.halfFloatToSortableShort( + ((HalfFloatPoint) starTreeDocuments[i].metrics[2]).numericValue().floatValue() + ); + segmentStarTreeDocuments[i] = new StarTreeDocument(starTreeDocuments[i].dimensions, new Long[] { metric1, metric2, metric3 }); + } + + Iterator segmentStarTreeDocumentIterator = builder.sortAndAggregateStarTreeDocuments(segmentStarTreeDocuments); + builder.build(segmentStarTreeDocumentIterator); + + List resultStarTreeDocuments = builder.getStarTreeDocuments(); + assertEquals(7, resultStarTreeDocuments.size()); + + Iterator 
expectedStarTreeDocumentIterator = getExpectedStarTreeDocumentIterator(); + assertStarTreeDocuments(resultStarTreeDocuments, expectedStarTreeDocumentIterator); + } + + public void test_build_floatMetrics() throws IOException { + + mapperService = mock(MapperService.class); + DocumentMapper documentMapper = mock(DocumentMapper.class); + when(mapperService.documentMapper()).thenReturn(documentMapper); + Settings settings = Settings.builder().put(settings(org.opensearch.Version.CURRENT).build()).build(); + NumberFieldMapper numberFieldMapper1 = new NumberFieldMapper.Builder("field2", NumberFieldMapper.NumberType.FLOAT, false, true) + .build(new Mapper.BuilderContext(settings, new ContentPath())); + NumberFieldMapper numberFieldMapper2 = new NumberFieldMapper.Builder("field4", NumberFieldMapper.NumberType.FLOAT, false, true) + .build(new Mapper.BuilderContext(settings, new ContentPath())); + NumberFieldMapper numberFieldMapper3 = new NumberFieldMapper.Builder("field6", NumberFieldMapper.NumberType.FLOAT, false, true) + .build(new Mapper.BuilderContext(settings, new ContentPath())); + MappingLookup fieldMappers = new MappingLookup( + Set.of(numberFieldMapper1, numberFieldMapper2, numberFieldMapper3), + Collections.emptyList(), + Collections.emptyList(), + 0, + null + ); + when(documentMapper.mappers()).thenReturn(fieldMappers); + builder = new OnHeapStarTreeBuilder(compositeField, fieldProducerMap, writeState, mapperService); + + int noOfStarTreeDocuments = 5; + StarTreeDocument[] starTreeDocuments = new StarTreeDocument[noOfStarTreeDocuments]; + + starTreeDocuments[0] = new StarTreeDocument(new Long[] { 2L, 4L, 3L, 4L }, new Float[] { 12.0F, 10.0F, randomFloat() }); + starTreeDocuments[1] = new StarTreeDocument(new Long[] { 3L, 4L, 2L, 1L }, new Float[] { 10.0F, 6.0F, randomFloat() }); + starTreeDocuments[2] = new StarTreeDocument(new Long[] { 3L, 4L, 2L, 1L }, new Float[] { 14.0F, 12.0F, randomFloat() }); + starTreeDocuments[3] = new StarTreeDocument(new Long[] { 2L, 4L, 3L, 4L }, new Float[] { 9.0F, 4.0F, randomFloat() }); + starTreeDocuments[4] = new StarTreeDocument(new Long[] { 3L, 4L, 2L, 1L }, new Float[] { 11.0F, 16.0F, randomFloat() }); + + StarTreeDocument[] segmentStarTreeDocuments = new StarTreeDocument[noOfStarTreeDocuments]; + for (int i = 0; i < noOfStarTreeDocuments; i++) { + long metric1 = NumericUtils.floatToSortableInt((Float) starTreeDocuments[i].metrics[0]); + long metric2 = NumericUtils.floatToSortableInt((Float) starTreeDocuments[i].metrics[1]); + long metric3 = NumericUtils.floatToSortableInt((Float) starTreeDocuments[i].metrics[2]); + segmentStarTreeDocuments[i] = new StarTreeDocument(starTreeDocuments[i].dimensions, new Long[] { metric1, metric2, metric3 }); + } + + Iterator segmentStarTreeDocumentIterator = builder.sortAndAggregateStarTreeDocuments(segmentStarTreeDocuments); + builder.build(segmentStarTreeDocumentIterator); + + List resultStarTreeDocuments = builder.getStarTreeDocuments(); + assertEquals(7, resultStarTreeDocuments.size()); + + Iterator expectedStarTreeDocumentIterator = getExpectedStarTreeDocumentIterator(); + assertStarTreeDocuments(resultStarTreeDocuments, expectedStarTreeDocumentIterator); + } + + public void test_build_longMetrics() throws IOException { + + mapperService = mock(MapperService.class); + DocumentMapper documentMapper = mock(DocumentMapper.class); + when(mapperService.documentMapper()).thenReturn(documentMapper); + Settings settings = Settings.builder().put(settings(org.opensearch.Version.CURRENT).build()).build(); + 
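+        // the metric fields are mapped as LONG below, so their raw values flow through unconverted,
+        // unlike the double/float variants above that round-trip through NumericUtils sortable encodings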
NumberFieldMapper numberFieldMapper1 = new NumberFieldMapper.Builder("field2", NumberFieldMapper.NumberType.LONG, false, true) + .build(new Mapper.BuilderContext(settings, new ContentPath())); + NumberFieldMapper numberFieldMapper2 = new NumberFieldMapper.Builder("field4", NumberFieldMapper.NumberType.LONG, false, true) + .build(new Mapper.BuilderContext(settings, new ContentPath())); + NumberFieldMapper numberFieldMapper3 = new NumberFieldMapper.Builder("field6", NumberFieldMapper.NumberType.LONG, false, true) + .build(new Mapper.BuilderContext(settings, new ContentPath())); + MappingLookup fieldMappers = new MappingLookup( + Set.of(numberFieldMapper1, numberFieldMapper2, numberFieldMapper3), + Collections.emptyList(), + Collections.emptyList(), + 0, + null + ); + when(documentMapper.mappers()).thenReturn(fieldMappers); + builder = new OnHeapStarTreeBuilder(compositeField, fieldProducerMap, writeState, mapperService); + + int noOfStarTreeDocuments = 5; + StarTreeDocument[] starTreeDocuments = new StarTreeDocument[noOfStarTreeDocuments]; + + starTreeDocuments[0] = new StarTreeDocument(new Long[] { 2L, 4L, 3L, 4L }, new Long[] { 12L, 10L, randomLong() }); + starTreeDocuments[1] = new StarTreeDocument(new Long[] { 3L, 4L, 2L, 1L }, new Long[] { 10L, 6L, randomLong() }); + starTreeDocuments[2] = new StarTreeDocument(new Long[] { 3L, 4L, 2L, 1L }, new Long[] { 14L, 12L, randomLong() }); + starTreeDocuments[3] = new StarTreeDocument(new Long[] { 2L, 4L, 3L, 4L }, new Long[] { 9L, 4L, randomLong() }); + starTreeDocuments[4] = new StarTreeDocument(new Long[] { 3L, 4L, 2L, 1L }, new Long[] { 11L, 16L, randomLong() }); + + StarTreeDocument[] segmentStarTreeDocuments = new StarTreeDocument[noOfStarTreeDocuments]; + for (int i = 0; i < noOfStarTreeDocuments; i++) { + long metric1 = (Long) starTreeDocuments[i].metrics[0]; + long metric2 = (Long) starTreeDocuments[i].metrics[1]; + long metric3 = (Long) starTreeDocuments[i].metrics[2]; + segmentStarTreeDocuments[i] = new StarTreeDocument(starTreeDocuments[i].dimensions, new Long[] { metric1, metric2, metric3 }); + } + + Iterator segmentStarTreeDocumentIterator = builder.sortAndAggregateStarTreeDocuments(segmentStarTreeDocuments); + builder.build(segmentStarTreeDocumentIterator); + + List resultStarTreeDocuments = builder.getStarTreeDocuments(); + assertEquals(7, resultStarTreeDocuments.size()); + + Iterator expectedStarTreeDocumentIterator = getExpectedStarTreeDocumentIterator(); + assertStarTreeDocuments(resultStarTreeDocuments, expectedStarTreeDocumentIterator); + } + + private static Iterator getExpectedStarTreeDocumentIterator() { + List expectedStarTreeDocuments = List.of( + new StarTreeDocument(new Long[] { 2L, 4L, 3L, 4L }, new Object[] { 21.0, 14.0, 2L }), + new StarTreeDocument(new Long[] { 3L, 4L, 2L, 1L }, new Object[] { 35.0, 34.0, 3L }), + new StarTreeDocument(new Long[] { -1L, 4L, 2L, 1L }, new Object[] { 35.0, 34.0, 3L }), + new StarTreeDocument(new Long[] { -1L, 4L, 3L, 4L }, new Object[] { 21.0, 14.0, 2L }), + new StarTreeDocument(new Long[] { -1L, 4L, -1L, 1L }, new Object[] { 35.0, 34.0, 3L }), + new StarTreeDocument(new Long[] { -1L, 4L, -1L, 4L }, new Object[] { 21.0, 14.0, 2L }), + new StarTreeDocument(new Long[] { -1L, 4L, -1L, -1L }, new Object[] { 56.0, 48.0, 5L }) + ); + return expectedStarTreeDocuments.iterator(); + } + + public void test_build() throws IOException { + + int noOfStarTreeDocuments = 5; + StarTreeDocument[] starTreeDocuments = new StarTreeDocument[noOfStarTreeDocuments]; + + starTreeDocuments[0] = new 
StarTreeDocument(new Long[] { 2L, 4L, 3L, 4L }, new Double[] { 12.0, 10.0, randomDouble() }); + starTreeDocuments[1] = new StarTreeDocument(new Long[] { 3L, 4L, 2L, 1L }, new Double[] { 10.0, 6.0, randomDouble() }); + starTreeDocuments[2] = new StarTreeDocument(new Long[] { 3L, 4L, 2L, 1L }, new Double[] { 14.0, 12.0, randomDouble() }); + starTreeDocuments[3] = new StarTreeDocument(new Long[] { 2L, 4L, 3L, 4L }, new Double[] { 9.0, 4.0, randomDouble() }); + starTreeDocuments[4] = new StarTreeDocument(new Long[] { 3L, 4L, 2L, 1L }, new Double[] { 11.0, 16.0, randomDouble() }); + + StarTreeDocument[] segmentStarTreeDocuments = new StarTreeDocument[noOfStarTreeDocuments]; + for (int i = 0; i < noOfStarTreeDocuments; i++) { + long metric1 = NumericUtils.doubleToSortableLong((Double) starTreeDocuments[i].metrics[0]); + long metric2 = NumericUtils.doubleToSortableLong((Double) starTreeDocuments[i].metrics[1]); + long metric3 = NumericUtils.doubleToSortableLong((Double) starTreeDocuments[i].metrics[2]); + segmentStarTreeDocuments[i] = new StarTreeDocument(starTreeDocuments[i].dimensions, new Long[] { metric1, metric2, metric3 }); + } + + Iterator segmentStarTreeDocumentIterator = builder.sortAndAggregateStarTreeDocuments(segmentStarTreeDocuments); + builder.build(segmentStarTreeDocumentIterator); + + List resultStarTreeDocuments = builder.getStarTreeDocuments(); + assertEquals(7, resultStarTreeDocuments.size()); + + Iterator expectedStarTreeDocumentIterator = getExpectedStarTreeDocumentIterator(); + assertStarTreeDocuments(resultStarTreeDocuments, expectedStarTreeDocumentIterator); + } + + private void assertStarTreeDocuments( + List resultStarTreeDocuments, + Iterator expectedStarTreeDocumentIterator + ) { + Iterator resultStarTreeDocumentIterator = resultStarTreeDocuments.iterator(); + while (resultStarTreeDocumentIterator.hasNext() && expectedStarTreeDocumentIterator.hasNext()) { + StarTreeDocument resultStarTreeDocument = resultStarTreeDocumentIterator.next(); + StarTreeDocument expectedStarTreeDocument = expectedStarTreeDocumentIterator.next(); + + assertEquals(expectedStarTreeDocument.dimensions[0], resultStarTreeDocument.dimensions[0]); + assertEquals(expectedStarTreeDocument.dimensions[1], resultStarTreeDocument.dimensions[1]); + assertEquals(expectedStarTreeDocument.dimensions[2], resultStarTreeDocument.dimensions[2]); + assertEquals(expectedStarTreeDocument.dimensions[3], resultStarTreeDocument.dimensions[3]); + assertEquals(expectedStarTreeDocument.metrics[0], resultStarTreeDocument.metrics[0]); + assertEquals(expectedStarTreeDocument.metrics[1], resultStarTreeDocument.metrics[1]); + assertEquals(expectedStarTreeDocument.metrics[2], resultStarTreeDocument.metrics[2]); + } + } + + public void test_build_starTreeDataset() throws IOException { + + fields = List.of("fieldC", "fieldB", "fieldL", "fieldI"); + + dimensionsOrder = List.of(new NumericDimension("fieldC"), new NumericDimension("fieldB"), new NumericDimension("fieldL")); + metrics = List.of(new Metric("fieldI", List.of(MetricStat.SUM))); + + DocValuesProducer docValuesProducer = mock(DocValuesProducer.class); + + compositeField = new StarTreeField( + "test", + dimensionsOrder, + metrics, + new StarTreeFieldConfiguration(1, Set.of(), StarTreeFieldConfiguration.StarTreeBuildMode.ON_HEAP) + ); + SegmentInfo segmentInfo = new SegmentInfo( + directory, + Version.LATEST, + Version.LUCENE_9_11_0, + "test_segment", + 7, + false, + false, + new Lucene99Codec(), + new HashMap<>(), + UUID.randomUUID().toString().substring(0, 
16).getBytes(StandardCharsets.UTF_8), + new HashMap<>(), + null + ); + + fieldsInfo = new FieldInfo[fields.size()]; + fieldProducerMap = new HashMap<>(); + for (int i = 0; i < fieldsInfo.length; i++) { + fieldsInfo[i] = new FieldInfo( + fields.get(i), + i, + false, + false, + true, + IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS, + DocValuesType.SORTED_NUMERIC, + -1, + Collections.emptyMap(), + 0, + 0, + 0, + 0, + VectorEncoding.FLOAT32, + VectorSimilarityFunction.EUCLIDEAN, + false, + false + ); + fieldProducerMap.put(fields.get(i), docValuesProducer); + } + FieldInfos fieldInfos = new FieldInfos(fieldsInfo); + writeState = new SegmentWriteState(InfoStream.getDefault(), segmentInfo.dir, segmentInfo, fieldInfos, null, newIOContext(random())); + + mapperService = mock(MapperService.class); + DocumentMapper documentMapper = mock(DocumentMapper.class); + when(mapperService.documentMapper()).thenReturn(documentMapper); + Settings settings = Settings.builder().put(settings(org.opensearch.Version.CURRENT).build()).build(); + NumberFieldMapper numberFieldMapper1 = new NumberFieldMapper.Builder("fieldI", NumberFieldMapper.NumberType.DOUBLE, false, true) + .build(new Mapper.BuilderContext(settings, new ContentPath())); + MappingLookup fieldMappers = new MappingLookup( + Set.of(numberFieldMapper1), + Collections.emptyList(), + Collections.emptyList(), + 0, + null + ); + when(documentMapper.mappers()).thenReturn(fieldMappers); + builder = new OnHeapStarTreeBuilder(compositeField, fieldProducerMap, writeState, mapperService); + + int noOfStarTreeDocuments = 7; + StarTreeDocument[] starTreeDocuments = new StarTreeDocument[noOfStarTreeDocuments]; + starTreeDocuments[0] = new StarTreeDocument(new Long[] { 1L, 11L, 21L }, new Double[] { 400.0 }); + starTreeDocuments[1] = new StarTreeDocument(new Long[] { 1L, 12L, 22L }, new Double[] { 200.0 }); + starTreeDocuments[2] = new StarTreeDocument(new Long[] { 2L, 13L, 23L }, new Double[] { 300.0 }); + starTreeDocuments[3] = new StarTreeDocument(new Long[] { 2L, 13L, 21L }, new Double[] { 100.0 }); + starTreeDocuments[4] = new StarTreeDocument(new Long[] { 3L, 11L, 21L }, new Double[] { 600.0 }); + starTreeDocuments[5] = new StarTreeDocument(new Long[] { 3L, 12L, 23L }, new Double[] { 200.0 }); + starTreeDocuments[6] = new StarTreeDocument(new Long[] { 3L, 12L, 21L }, new Double[] { 400.0 }); + + StarTreeDocument[] segmentStarTreeDocuments = new StarTreeDocument[noOfStarTreeDocuments]; + for (int i = 0; i < noOfStarTreeDocuments; i++) { + long metric1 = NumericUtils.doubleToSortableLong((Double) starTreeDocuments[i].metrics[0]); + segmentStarTreeDocuments[i] = new StarTreeDocument(starTreeDocuments[i].dimensions, new Long[] { metric1 }); + } + + Iterator segmentStarTreeDocumentIterator = builder.sortAndAggregateStarTreeDocuments(segmentStarTreeDocuments); + builder.build(segmentStarTreeDocumentIterator); + + List resultStarTreeDocuments = builder.getStarTreeDocuments(); + List expectedStarTreeDocuments = List.of( + new StarTreeDocument(new Long[] { 1L, 11L, 21L }, new Object[] { 400.0 }), + new StarTreeDocument(new Long[] { 1L, 12L, 22L }, new Object[] { 200.0 }), + new StarTreeDocument(new Long[] { 2L, 13L, 21L }, new Object[] { 100.0 }), + new StarTreeDocument(new Long[] { 2L, 13L, 23L }, new Object[] { 300.0 }), + new StarTreeDocument(new Long[] { 3L, 11L, 21L }, new Object[] { 600.0 }), + new StarTreeDocument(new Long[] { 3L, 12L, 21L }, new Object[] { 400.0 }), + new StarTreeDocument(new Long[] { 3L, 12L, 23L }, new Object[] { 200.0 }), + new 
StarTreeDocument(new Long[] { -1L, 11L, 21L }, new Object[] { 1000.0 }), + new StarTreeDocument(new Long[] { -1L, 12L, 21L }, new Object[] { 400.0 }), + new StarTreeDocument(new Long[] { -1L, 12L, 22L }, new Object[] { 200.0 }), + new StarTreeDocument(new Long[] { -1L, 12L, 23L }, new Object[] { 200.0 }), + new StarTreeDocument(new Long[] { -1L, 13L, 21L }, new Object[] { 100.0 }), + new StarTreeDocument(new Long[] { -1L, 13L, 23L }, new Object[] { 300.0 }), + new StarTreeDocument(new Long[] { -1L, -1L, 21L }, new Object[] { 1500.0 }), + new StarTreeDocument(new Long[] { -1L, -1L, 22L }, new Object[] { 200.0 }), + new StarTreeDocument(new Long[] { -1L, -1L, 23L }, new Object[] { 500.0 }), + new StarTreeDocument(new Long[] { -1L, -1L, -1L }, new Object[] { 2200.0 }), + new StarTreeDocument(new Long[] { -1L, 12L, -1L }, new Object[] { 800.0 }), + new StarTreeDocument(new Long[] { -1L, 13L, -1L }, new Object[] { 400.0 }), + new StarTreeDocument(new Long[] { 1L, -1L, 21L }, new Object[] { 400.0 }), + new StarTreeDocument(new Long[] { 1L, -1L, 22L }, new Object[] { 200.0 }), + new StarTreeDocument(new Long[] { 1L, -1L, -1L }, new Object[] { 600.0 }), + new StarTreeDocument(new Long[] { 2L, 13L, -1L }, new Object[] { 400.0 }), + new StarTreeDocument(new Long[] { 3L, -1L, 21L }, new Object[] { 1000.0 }), + new StarTreeDocument(new Long[] { 3L, -1L, 23L }, new Object[] { 200.0 }), + new StarTreeDocument(new Long[] { 3L, -1L, -1L }, new Object[] { 1200.0 }), + new StarTreeDocument(new Long[] { 3L, 12L, -1L }, new Object[] { 600.0 }) + ); + + Iterator expectedStarTreeDocumentIterator = expectedStarTreeDocuments.iterator(); + Iterator resultStarTreeDocumentIterator = resultStarTreeDocuments.iterator(); + while (resultStarTreeDocumentIterator.hasNext() && expectedStarTreeDocumentIterator.hasNext()) { + StarTreeDocument resultStarTreeDocument = resultStarTreeDocumentIterator.next(); + StarTreeDocument expectedStarTreeDocument = expectedStarTreeDocumentIterator.next(); + + assertEquals(expectedStarTreeDocument.dimensions[0], resultStarTreeDocument.dimensions[0]); + assertEquals(expectedStarTreeDocument.dimensions[1], resultStarTreeDocument.dimensions[1]); + assertEquals(expectedStarTreeDocument.dimensions[2], resultStarTreeDocument.dimensions[2]); + assertEquals(expectedStarTreeDocument.metrics[0], resultStarTreeDocument.metrics[0]); + } + + } + + @Override + public void tearDown() throws Exception { + super.tearDown(); + directory.close(); + } +} diff --git a/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/builder/StarTreeDocValuesIteratorAdapterTests.java b/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/builder/StarTreeDocValuesIteratorAdapterTests.java new file mode 100644 index 0000000000000..9c2621401faa4 --- /dev/null +++ b/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/builder/StarTreeDocValuesIteratorAdapterTests.java @@ -0,0 +1,139 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. 
+ */ + +package org.opensearch.index.compositeindex.datacube.startree.builder; + +import org.apache.lucene.codecs.DocValuesProducer; +import org.apache.lucene.index.DocValuesType; +import org.apache.lucene.index.FieldInfo; +import org.apache.lucene.index.IndexOptions; +import org.apache.lucene.index.SortedNumericDocValues; +import org.apache.lucene.index.VectorEncoding; +import org.apache.lucene.index.VectorSimilarityFunction; +import org.apache.lucene.search.DocIdSetIterator; +import org.opensearch.index.compositeindex.datacube.startree.utils.SequentialDocValuesIterator; +import org.opensearch.test.OpenSearchTestCase; + +import java.io.IOException; +import java.util.Collections; + +import static org.mockito.ArgumentMatchers.any; +import static org.mockito.Mockito.mock; +import static org.mockito.Mockito.when; + +public class StarTreeDocValuesIteratorAdapterTests extends OpenSearchTestCase { + + private StarTreeDocValuesIteratorAdapter adapter; + + @Override + public void setUp() throws Exception { + super.setUp(); + adapter = new StarTreeDocValuesIteratorAdapter(); + } + + public void testGetDocValuesIterator() throws IOException { + DocValuesProducer mockProducer = mock(DocValuesProducer.class); + SortedNumericDocValues mockSortedNumericDocValues = mock(SortedNumericDocValues.class); + + when(mockProducer.getSortedNumeric(any())).thenReturn(mockSortedNumericDocValues); + + SequentialDocValuesIterator iterator = adapter.getDocValuesIterator(DocValuesType.SORTED_NUMERIC, any(), mockProducer); + + assertNotNull(iterator); + assertEquals(mockSortedNumericDocValues, iterator.getDocIdSetIterator()); + } + + public void testGetDocValuesIteratorWithUnsupportedType() { + DocValuesProducer mockProducer = mock(DocValuesProducer.class); + FieldInfo fieldInfo = new FieldInfo( + "random_field", + 0, + false, + false, + true, + IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS, + DocValuesType.SORTED_NUMERIC, + -1, + Collections.emptyMap(), + 0, + 0, + 0, + 0, + VectorEncoding.FLOAT32, + VectorSimilarityFunction.EUCLIDEAN, + false, + false + ); + IllegalArgumentException exception = expectThrows(IllegalArgumentException.class, () -> { + adapter.getDocValuesIterator(DocValuesType.BINARY, fieldInfo, mockProducer); + }); + + assertEquals("Unsupported DocValuesType: BINARY", exception.getMessage()); + } + + public void testGetNextValue() throws IOException { + SortedNumericDocValues mockSortedNumericDocValues = mock(SortedNumericDocValues.class); + SequentialDocValuesIterator iterator = new SequentialDocValuesIterator(mockSortedNumericDocValues); + iterator.setDocId(1); + when(mockSortedNumericDocValues.nextValue()).thenReturn(42L); + + Long nextValue = adapter.getNextValue(iterator, 1); + + assertEquals(Long.valueOf(42L), nextValue); + assertEquals(Long.valueOf(42L), iterator.getDocValue()); + } + + public void testGetNextValueWithInvalidDocId() { + SortedNumericDocValues mockSortedNumericDocValues = mock(SortedNumericDocValues.class); + SequentialDocValuesIterator iterator = new SequentialDocValuesIterator(mockSortedNumericDocValues); + iterator.setDocId(DocIdSetIterator.NO_MORE_DOCS); + + IllegalStateException exception = expectThrows(IllegalStateException.class, () -> { adapter.getNextValue(iterator, 1); }); + + assertEquals("invalid doc id to fetch the next value", exception.getMessage()); + } + + public void testGetNextValueWithUnsupportedIterator() { + DocIdSetIterator mockIterator = mock(DocIdSetIterator.class); + SequentialDocValuesIterator iterator = new 
SequentialDocValuesIterator(mockIterator); + + IllegalStateException exception = expectThrows(IllegalStateException.class, () -> { adapter.getNextValue(iterator, 1); }); + + assertEquals("Unsupported Iterator: " + mockIterator.toString(), exception.getMessage()); + } + + public void testNextDoc() throws IOException { + SortedNumericDocValues mockSortedNumericDocValues = mock(SortedNumericDocValues.class); + SequentialDocValuesIterator iterator = new SequentialDocValuesIterator(mockSortedNumericDocValues); + when(mockSortedNumericDocValues.nextDoc()).thenReturn(2, 3, DocIdSetIterator.NO_MORE_DOCS); + when(mockSortedNumericDocValues.nextValue()).thenReturn(42L, 32L); + + int nextDocId = adapter.nextDoc(iterator, 1); + assertEquals(2, nextDocId); + assertEquals(Long.valueOf(42L), adapter.getNextValue(iterator, nextDocId)); + + nextDocId = adapter.nextDoc(iterator, 2); + assertEquals(3, nextDocId); + when(mockSortedNumericDocValues.nextValue()).thenReturn(42L, 32L); + + } + + public void testNextDoc_noMoreDocs() throws IOException { + SortedNumericDocValues mockSortedNumericDocValues = mock(SortedNumericDocValues.class); + SequentialDocValuesIterator iterator = new SequentialDocValuesIterator(mockSortedNumericDocValues); + when(mockSortedNumericDocValues.nextDoc()).thenReturn(2, DocIdSetIterator.NO_MORE_DOCS); + when(mockSortedNumericDocValues.nextValue()).thenReturn(42L, 32L); + + int nextDocId = adapter.nextDoc(iterator, 1); + assertEquals(2, nextDocId); + assertEquals(Long.valueOf(42L), adapter.getNextValue(iterator, nextDocId)); + + assertThrows(IllegalStateException.class, () -> adapter.nextDoc(iterator, 2)); + + } +} diff --git a/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/builder/StarTreeValuesIteratorFactoryTests.java b/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/builder/StarTreeValuesIteratorFactoryTests.java new file mode 100644 index 0000000000000..1aba67533d52e --- /dev/null +++ b/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/builder/StarTreeValuesIteratorFactoryTests.java @@ -0,0 +1,131 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. 
+ */ + +package org.opensearch.index.compositeindex.datacube.startree.builder; + +import org.apache.lucene.codecs.DocValuesProducer; +import org.apache.lucene.index.DocValuesType; +import org.apache.lucene.index.FieldInfo; +import org.apache.lucene.index.IndexOptions; +import org.apache.lucene.index.SortedNumericDocValues; +import org.apache.lucene.index.VectorEncoding; +import org.apache.lucene.index.VectorSimilarityFunction; +import org.apache.lucene.search.DocIdSetIterator; +import org.opensearch.index.compositeindex.datacube.startree.utils.SequentialDocValuesIterator; +import org.opensearch.test.OpenSearchTestCase; +import org.junit.BeforeClass; + +import java.io.IOException; +import java.util.Collections; + +import org.mockito.Mockito; + +import static org.mockito.Mockito.when; + +public class StarTreeValuesIteratorFactoryTests extends OpenSearchTestCase { + + private static StarTreeDocValuesIteratorAdapter starTreeDocValuesIteratorAdapter; + private static FieldInfo mockFieldInfo; + + @BeforeClass + public static void setup() { + starTreeDocValuesIteratorAdapter = new StarTreeDocValuesIteratorAdapter(); + mockFieldInfo = new FieldInfo( + "field", + 1, + false, + false, + true, + IndexOptions.NONE, + DocValuesType.NONE, + -1, + Collections.emptyMap(), + 0, + 0, + 0, + 0, + VectorEncoding.FLOAT32, + VectorSimilarityFunction.EUCLIDEAN, + false, + false + ); + } + + public void testCreateIterator_SortedNumeric() throws IOException { + DocValuesProducer producer = Mockito.mock(DocValuesProducer.class); + SortedNumericDocValues iterator = Mockito.mock(SortedNumericDocValues.class); + when(producer.getSortedNumeric(mockFieldInfo)).thenReturn(iterator); + SequentialDocValuesIterator result = starTreeDocValuesIteratorAdapter.getDocValuesIterator( + DocValuesType.SORTED_NUMERIC, + mockFieldInfo, + producer + ); + assertEquals(iterator.getClass(), result.getDocIdSetIterator().getClass()); + } + + public void testCreateIterator_UnsupportedType() { + DocValuesProducer producer = Mockito.mock(DocValuesProducer.class); + IllegalArgumentException exception = expectThrows(IllegalArgumentException.class, () -> { + starTreeDocValuesIteratorAdapter.getDocValuesIterator(DocValuesType.BINARY, mockFieldInfo, producer); + }); + assertEquals("Unsupported DocValuesType: BINARY", exception.getMessage()); + } + + public void testGetNextValue_SortedNumeric() throws IOException { + SortedNumericDocValues iterator = Mockito.mock(SortedNumericDocValues.class); + when(iterator.nextDoc()).thenReturn(0); + when(iterator.nextValue()).thenReturn(123L); + SequentialDocValuesIterator sequentialDocValuesIterator = new SequentialDocValuesIterator(iterator); + sequentialDocValuesIterator.getDocIdSetIterator().nextDoc(); + long result = starTreeDocValuesIteratorAdapter.getNextValue(sequentialDocValuesIterator, 0); + assertEquals(123L, result); + } + + public void testGetNextValue_UnsupportedIterator() { + DocIdSetIterator iterator = Mockito.mock(DocIdSetIterator.class); + SequentialDocValuesIterator sequentialDocValuesIterator = new SequentialDocValuesIterator(iterator); + + IllegalStateException exception = expectThrows(IllegalStateException.class, () -> { + starTreeDocValuesIteratorAdapter.getNextValue(sequentialDocValuesIterator, 0); + }); + assertEquals("Unsupported Iterator: " + iterator.toString(), exception.getMessage()); + } + + public void testNextDoc() throws IOException { + SortedNumericDocValues iterator = Mockito.mock(SortedNumericDocValues.class); + SequentialDocValuesIterator sequentialDocValuesIterator = new 
SequentialDocValuesIterator(iterator); + when(iterator.nextDoc()).thenReturn(5); + + int result = starTreeDocValuesIteratorAdapter.nextDoc(sequentialDocValuesIterator, 5); + assertEquals(5, result); + } + + public void test_multipleCoordinatedDocumentReader() throws IOException { + SortedNumericDocValues iterator1 = Mockito.mock(SortedNumericDocValues.class); + SortedNumericDocValues iterator2 = Mockito.mock(SortedNumericDocValues.class); + + SequentialDocValuesIterator sequentialDocValuesIterator1 = new SequentialDocValuesIterator(iterator1); + SequentialDocValuesIterator sequentialDocValuesIterator2 = new SequentialDocValuesIterator(iterator2); + + when(iterator1.nextDoc()).thenReturn(0); + when(iterator2.nextDoc()).thenReturn(1); + + when(iterator1.nextValue()).thenReturn(9L); + when(iterator2.nextValue()).thenReturn(9L); + + starTreeDocValuesIteratorAdapter.nextDoc(sequentialDocValuesIterator1, 0); + starTreeDocValuesIteratorAdapter.nextDoc(sequentialDocValuesIterator2, 0); + assertEquals(0, sequentialDocValuesIterator1.getDocId()); + assertEquals(9L, (long) sequentialDocValuesIterator1.getDocValue()); + assertNotEquals(0, sequentialDocValuesIterator2.getDocId()); + assertEquals(1, sequentialDocValuesIterator2.getDocId()); + assertEquals(9L, (long) sequentialDocValuesIterator2.getDocValue()); + + } + +} diff --git a/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/builder/StarTreesBuilderTests.java b/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/builder/StarTreesBuilderTests.java new file mode 100644 index 0000000000000..518c6729c2e1a --- /dev/null +++ b/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/builder/StarTreesBuilderTests.java @@ -0,0 +1,132 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. 
+ */ + +package org.opensearch.index.compositeindex.datacube.startree.builder; + +import org.apache.lucene.codecs.DocValuesProducer; +import org.apache.lucene.codecs.lucene99.Lucene99Codec; +import org.apache.lucene.index.FieldInfo; +import org.apache.lucene.index.FieldInfos; +import org.apache.lucene.index.SegmentInfo; +import org.apache.lucene.index.SegmentWriteState; +import org.apache.lucene.store.Directory; +import org.apache.lucene.util.InfoStream; +import org.apache.lucene.util.Version; +import org.opensearch.index.compositeindex.datacube.startree.StarTreeField; +import org.opensearch.index.compositeindex.datacube.startree.StarTreeFieldConfiguration; +import org.opensearch.index.mapper.MapperService; +import org.opensearch.index.mapper.StarTreeMapper; +import org.opensearch.test.OpenSearchTestCase; + +import java.io.IOException; +import java.nio.charset.StandardCharsets; +import java.util.ArrayList; +import java.util.HashMap; +import java.util.HashSet; +import java.util.Map; +import java.util.Set; +import java.util.UUID; + +import static org.mockito.Mockito.mock; +import static org.mockito.Mockito.verifyNoInteractions; +import static org.mockito.Mockito.when; + +public class StarTreesBuilderTests extends OpenSearchTestCase { + + private MapperService mapperService; + private SegmentWriteState segmentWriteState; + private DocValuesProducer docValuesProducer; + private StarTreeMapper.StarTreeFieldType starTreeFieldType; + private StarTreeField starTreeField; + private Map fieldProducerMap; + private Directory directory; + + public void setUp() throws Exception { + super.setUp(); + mapperService = mock(MapperService.class); + directory = newFSDirectory(createTempDir()); + SegmentInfo segmentInfo = new SegmentInfo( + directory, + Version.LATEST, + Version.LUCENE_9_11_0, + "test_segment", + 5, + false, + false, + new Lucene99Codec(), + new HashMap<>(), + UUID.randomUUID().toString().substring(0, 16).getBytes(StandardCharsets.UTF_8), + new HashMap<>(), + null + ); + FieldInfos fieldInfos = new FieldInfos(new FieldInfo[0]); + segmentWriteState = new SegmentWriteState( + InfoStream.getDefault(), + segmentInfo.dir, + segmentInfo, + fieldInfos, + null, + newIOContext(random()) + ); + docValuesProducer = mock(DocValuesProducer.class); + StarTreeFieldConfiguration starTreeFieldConfiguration = new StarTreeFieldConfiguration( + 1, + new HashSet<>(), + StarTreeFieldConfiguration.StarTreeBuildMode.ON_HEAP + ); + starTreeField = new StarTreeField("star_tree", new ArrayList<>(), new ArrayList<>(), starTreeFieldConfiguration); + starTreeFieldType = new StarTreeMapper.StarTreeFieldType("star_tree", starTreeField); + fieldProducerMap = new HashMap<>(); + fieldProducerMap.put("field1", docValuesProducer); + } + + public void test_buildWithNoStarTreeFields() throws IOException { + when(mapperService.getCompositeFieldTypes()).thenReturn(new HashSet<>()); + + StarTreesBuilder starTreesBuilder = new StarTreesBuilder(fieldProducerMap, segmentWriteState, mapperService); + starTreesBuilder.build(); + + verifyNoInteractions(docValuesProducer); + } + + public void test_getStarTreeBuilder() throws IOException { + when(mapperService.getCompositeFieldTypes()).thenReturn(Set.of(starTreeFieldType)); + StarTreesBuilder starTreesBuilder = new StarTreesBuilder(fieldProducerMap, segmentWriteState, mapperService); + StarTreeBuilder starTreeBuilder = starTreesBuilder.getStarTreeBuilder(starTreeField, fieldProducerMap, segmentWriteState, mapperService); + assertTrue(starTreeBuilder instanceof OnHeapStarTreeBuilder); + } + + 
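The test above and the one below together pin down how StarTreesBuilder is expected to choose a concrete builder from the configured build mode: ON_HEAP must yield an OnHeapStarTreeBuilder, while OFF_HEAP is not supported yet and must fail fast. A minimal sketch of that dispatch, assuming the accessor names getStarTreeConfig() and getBuildMode() (the OnHeapStarTreeBuilder constructor signature matches the one exercised elsewhere in these tests); this is illustrative, not the verbatim production method:

    // Illustrative dispatch implied by the two surrounding tests.
    StarTreeBuilder getStarTreeBuilder(
        StarTreeField starTreeField,
        Map<String, DocValuesProducer> fieldProducerMap,
        SegmentWriteState state,
        MapperService mapperService
    ) throws IOException {
        switch (starTreeField.getStarTreeConfig().getBuildMode()) {
            case ON_HEAP:
                return new OnHeapStarTreeBuilder(starTreeField, fieldProducerMap, state, mapperService);
            default:
                // OFF_HEAP (and anything else) is rejected until an implementation lands.
                throw new IllegalArgumentException(
                    "No star tree implementation is available for build mode: "
                        + starTreeField.getStarTreeConfig().getBuildMode()
                );
        }
    }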
public void test_getStarTreeBuilder_illegalArgument() { + when(mapperService.getCompositeFieldTypes()).thenReturn(Set.of(starTreeFieldType)); + StarTreeFieldConfiguration starTreeFieldConfiguration = new StarTreeFieldConfiguration(1, new HashSet<>(), StarTreeFieldConfiguration.StarTreeBuildMode.OFF_HEAP); + StarTreeField starTreeField = new StarTreeField("star_tree", new ArrayList<>(), new ArrayList<>(), starTreeFieldConfiguration); + StarTreesBuilder starTreesBuilder = new StarTreesBuilder(fieldProducerMap, segmentWriteState, mapperService); + assertThrows(IllegalArgumentException.class, () -> starTreesBuilder.getStarTreeBuilder(starTreeField, fieldProducerMap, segmentWriteState, mapperService)); + } + + public void test_closeWithNoStarTreeFields() throws IOException { + StarTreeFieldConfiguration starTreeFieldConfiguration = new StarTreeFieldConfiguration( + 1, + new HashSet<>(), + StarTreeFieldConfiguration.StarTreeBuildMode.OFF_HEAP + ); + StarTreeField starTreeField = new StarTreeField("star_tree", new ArrayList<>(), new ArrayList<>(), starTreeFieldConfiguration); + starTreeFieldType = new StarTreeMapper.StarTreeFieldType("star_tree", starTreeField); + when(mapperService.getCompositeFieldTypes()).thenReturn(Set.of(starTreeFieldType)); + StarTreesBuilder starTreesBuilder = new StarTreesBuilder(fieldProducerMap, segmentWriteState, mapperService); + starTreesBuilder.close(); + + verifyNoInteractions(docValuesProducer); + } + + @Override + public void tearDown() throws Exception { + super.tearDown(); + directory.close(); + } +} diff --git a/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/utils/SequentialDocValuesIteratorTests.java b/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/utils/SequentialDocValuesIteratorTests.java new file mode 100644 index 0000000000000..76b612e3677f7 --- /dev/null +++ b/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/utils/SequentialDocValuesIteratorTests.java @@ -0,0 +1,46 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. 
+ */ + +package org.opensearch.index.compositeindex.datacube.startree.utils; + +import org.apache.lucene.index.SortedNumericDocValues; +import org.opensearch.index.fielddata.AbstractNumericDocValues; +import org.opensearch.test.OpenSearchTestCase; + +import java.io.IOException; + +public class SequentialDocValuesIteratorTests extends OpenSearchTestCase { + + public void test_sequentialDocValuesIterator() { + SequentialDocValuesIterator sequentialDocValuesIterator = new SequentialDocValuesIterator(new AbstractNumericDocValues() { + @Override + public long longValue() throws IOException { + return 0; + } + + @Override + public boolean advanceExact(int i) throws IOException { + return false; + } + + @Override + public int docID() { + return 0; + } + }); + + assertTrue(sequentialDocValuesIterator.getDocIdSetIterator() instanceof AbstractNumericDocValues); + assertEquals(sequentialDocValuesIterator.getDocId(), 0); + } + + public void test_sequentialDocValuesIterator_default() { + SequentialDocValuesIterator sequentialDocValuesIterator = new SequentialDocValuesIterator(); + assertTrue(sequentialDocValuesIterator.getDocIdSetIterator() instanceof SortedNumericDocValues); + } + +} From fca520fe5af3ee496520ab59d7a1da8e0a5703a7 Mon Sep 17 00:00:00 2001 From: Andriy Redko Date: Wed, 17 Jul 2024 15:34:07 -0400 Subject: [PATCH 072/167] Add Gao Binlong as maintainer (#14796) Signed-off-by: Andriy Redko --- .github/CODEOWNERS | 2 +- MAINTAINERS.md | 1 + 2 files changed, 2 insertions(+), 1 deletion(-) diff --git a/.github/CODEOWNERS b/.github/CODEOWNERS index 8ceecb3abb4a2..1aefeee710f47 100644 --- a/.github/CODEOWNERS +++ b/.github/CODEOWNERS @@ -24,4 +24,4 @@ /.github/ @peternied -/MAINTAINERS.md @anasalkouz @andrross @ashking94 @Bukhtawar @CEHENKLE @dblock @dbwiddis @gbbafna @jed326 @kotwanikunal @mch2 @msfroh @nknize @owaiskazi19 @peternied @reta @Rishikesh1159 @sachinpkale @saratvemulapalli @shwetathareja @sohami @VachaShah +/MAINTAINERS.md @anasalkouz @andrross @ashking94 @Bukhtawar @CEHENKLE @dblock @dbwiddis @gaobinlong @gbbafna @jed326 @kotwanikunal @mch2 @msfroh @nknize @owaiskazi19 @peternied @reta @Rishikesh1159 @sachinpkale @saratvemulapalli @shwetathareja @sohami @VachaShah diff --git a/MAINTAINERS.md b/MAINTAINERS.md index 3298ceb15463c..f77c69ddeff2a 100644 --- a/MAINTAINERS.md +++ b/MAINTAINERS.md @@ -14,6 +14,7 @@ This document contains a list of maintainers in this repo. See [opensearch-proje | Charlotte Henkle | [CEHENKLE](https://github.com/CEHENKLE) | Amazon | | Dan Widdis | [dbwiddis](https://github.com/dbwiddis) | Amazon | | Daniel "dB." 
Doubrovkine | [dblock](https://github.com/dblock) | Amazon | +| Gao Binlong | [gaobinlong](https://github.com/gaobinlong) | Amazon | | Gaurav Bafna | [gbbafna](https://github.com/gbbafna) | Amazon | | Jay Deng | [jed326](https://github.com/jed326) | Amazon | | Kunal Kotwani | [kotwanikunal](https://github.com/kotwanikunal) | Amazon | From b3b743d69cc4fc10dbd43f90df9731e51b63991b Mon Sep 17 00:00:00 2001 From: Sagar <99425694+sgup432@users.noreply.github.com> Date: Wed, 17 Jul 2024 19:02:05 -0700 Subject: [PATCH 073/167] Clear ehcache disk cache files during initialization (#14738) * Clear ehcache disk cache files during initialization Signed-off-by: Sagar Upadhyaya * Adding UT to fix line coverage Signed-off-by: Sagar Upadhyaya * Addressing comment Signed-off-by: Sagar Upadhyaya * Adding more UTs for better line coverage Signed-off-by: Sagar Upadhyaya * Throwing exception in case we fail to clear cache files during startup Signed-off-by: Sagar Upadhyaya * Adding more UTs Signed-off-by: Sagar Upadhyaya * Adding a UT for more coverage Signed-off-by: Sagar Upadhyaya * Fixing gradle build Signed-off-by: Sagar Upadhyaya * Update ehcache disk cache close() logic Signed-off-by: Sagar Upadhyaya --------- Signed-off-by: Sagar Upadhyaya --- .../cache/store/disk/EhcacheDiskCache.java | 50 ++- .../store/disk/EhCacheDiskCacheTests.java | 293 ++++++++++++++++++ 2 files changed, 328 insertions(+), 15 deletions(-) diff --git a/plugins/cache-ehcache/src/main/java/org/opensearch/cache/store/disk/EhcacheDiskCache.java b/plugins/cache-ehcache/src/main/java/org/opensearch/cache/store/disk/EhcacheDiskCache.java index b4c62fbf85cb8..4a95b04de3952 100644 --- a/plugins/cache-ehcache/src/main/java/org/opensearch/cache/store/disk/EhcacheDiskCache.java +++ b/plugins/cache-ehcache/src/main/java/org/opensearch/cache/store/disk/EhcacheDiskCache.java @@ -60,7 +60,6 @@ import java.util.function.ToLongBiFunction; import org.ehcache.Cache; -import org.ehcache.CachePersistenceException; import org.ehcache.PersistentCacheManager; import org.ehcache.config.builders.CacheConfigurationBuilder; import org.ehcache.config.builders.CacheEventListenerConfigurationBuilder; @@ -104,8 +103,6 @@ public class EhcacheDiskCache implements ICache { // Unique id associated with this cache. private final static String UNIQUE_ID = UUID.randomUUID().toString(); private final static String THREAD_POOL_ALIAS_PREFIX = "ehcachePool"; - private final static int MINIMUM_MAX_SIZE_IN_BYTES = 1024 * 100; // 100KB - // A Cache manager can create many caches. private final PersistentCacheManager cacheManager; @@ -127,13 +124,18 @@ public class EhcacheDiskCache implements ICache { private final Serializer keySerializer; private final Serializer valueSerializer; + final static int MINIMUM_MAX_SIZE_IN_BYTES = 1024 * 100; // 100KB + final static String CACHE_DATA_CLEANUP_DURING_INITIALIZATION_EXCEPTION = "Failed to delete ehcache disk cache under " + + "path: %s during initialization. Please clean this up manually and restart the process"; + /** * Used in computeIfAbsent to synchronize loading of a given key. This is needed as ehcache doesn't provide a * computeIfAbsent method. */ Map, CompletableFuture, V>>> completableFutureMap = new ConcurrentHashMap<>(); - private EhcacheDiskCache(Builder builder) { + @SuppressForbidden(reason = "Ehcache uses File.io") + EhcacheDiskCache(Builder builder) { this.keyType = Objects.requireNonNull(builder.keyType, "Key type shouldn't be null"); this.valueType = Objects.requireNonNull(builder.valueType, "Value type shouldn't be null"); this.expireAfterAccess = Objects.requireNonNull(builder.getExpireAfterAcess(), "ExpireAfterAccess value shouldn't " + "be null"); @@ -151,6 +153,18 @@ private EhcacheDiskCache(Builder builder) { if (this.storagePath == null || this.storagePath.isBlank()) { throw new IllegalArgumentException("Storage path shouldn't be null or empty"); } + // Delete all the previous disk cache related files/data. We don't persist data between process restarts for + // now, which is why we need to do this. Clean up in case there was a non-graceful restart and we had older disk + // cache data still lying around. + Path ehcacheDirectory = Paths.get(this.storagePath); + if (Files.exists(ehcacheDirectory)) { + try { + logger.info("Found older disk cache data lying around during initialization under path: {}", this.storagePath); + IOUtils.rm(ehcacheDirectory); + } catch (IOException e) { + throw new OpenSearchException(String.format(CACHE_DATA_CLEANUP_DURING_INITIALIZATION_EXCEPTION, this.storagePath), e); + } + }
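The cleanup block above is the heart of this patch: the disk tier is not meant to survive a process restart, so any directory found at the configured storage path is treated as stale and removed before the cache manager is created, and startup aborts if the removal fails. A minimal, self-contained sketch of that clear-then-fail-fast pattern using only JDK APIs (the class and method names here are illustrative, not part of the patch; the patch itself relies on OpenSearch's IOUtils.rm):

    import java.io.IOException;
    import java.io.UncheckedIOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.Comparator;
    import java.util.stream.Stream;

    final class StaleCacheDirCleaner {
        // Recursively delete a leftover cache directory: children first, then parents.
        static void clearIfPresent(Path cacheDir) throws IOException {
            if (Files.notExists(cacheDir)) {
                return; // nothing left over from a previous (possibly non-graceful) run
            }
            try (Stream<Path> paths = Files.walk(cacheDir)) {
                paths.sorted(Comparator.reverseOrder()).forEach(p -> {
                    try {
                        Files.delete(p);
                    } catch (IOException e) {
                        // Surface the failure so the caller can abort startup, mirroring
                        // the OpenSearchException thrown by the constructor above.
                        throw new UncheckedIOException(e);
                    }
                });
            }
        }
    }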
if (builder.threadPoolAlias == null || builder.threadPoolAlias.isBlank()) { this.threadPoolAlias = THREAD_POOL_ALIAS_PREFIX + "DiskWrite#" + UNIQUE_ID; } else { @@ -175,6 +189,11 @@ private EhcacheDiskCache(Builder builder) { } } + // Package private for testing + PersistentCacheManager getCacheManager() { + return this.cacheManager; + } + @SuppressWarnings({ "rawtypes" }) private Cache buildCache(Duration expireAfterAccess, Builder builder) { // Creating the cache requires permissions specified in plugin-security.policy @@ -255,7 +274,7 @@ Map, CompletableFuture, V>>> getCompletableFutur } @SuppressForbidden(reason = "Ehcache uses File.io") - private PersistentCacheManager buildCacheManager() { + PersistentCacheManager buildCacheManager() { // In case we use multiple ehCaches, we can define this cache manager at a global level.
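The hunk below cuts off inside the doPrivileged lambda, so the actual manager construction is not visible here. A plausible shape of that call, assuming Ehcache's standard persistent-cache-manager builder API (the privileged block matters because Ehcache touches the filesystem, and a plugin's file permissions are granted via plugin-security.policy):

    // Illustrative sketch, not the verbatim method body from this patch.
    PersistentCacheManager manager = AccessController.doPrivileged(
        (PrivilegedAction<PersistentCacheManager>) () -> CacheManagerBuilder.newCacheManagerBuilder()
            .with(CacheManagerBuilder.persistence(new File(storagePath)))
            .build(true) // true: initialize the manager eagerly
    );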
// Creating the cache manager also requires permissions specified in plugin-security.policy return AccessController.doPrivileged((PrivilegedAction) () -> { @@ -444,20 +463,21 @@ public void refresh() { @Override @SuppressForbidden(reason = "Ehcache uses File.io") public void close() { - cacheManager.removeCache(this.diskCacheAlias); - cacheManager.close(); try { - cacheManager.destroyCache(this.diskCacheAlias); - // Delete all the disk cache related files/data - Path ehcacheDirectory = Paths.get(this.storagePath); - if (Files.exists(ehcacheDirectory)) { + cacheManager.close(); + } catch (Exception e) { + logger.error(() -> new ParameterizedMessage("Exception occurred while trying to close ehcache manager"), e); + } + // Delete all the disk cache related files/data in case it is present + Path ehcacheDirectory = Paths.get(this.storagePath); + if (Files.exists(ehcacheDirectory)) { + try { IOUtils.rm(ehcacheDirectory); + } catch (IOException e) { + logger.error(() -> new ParameterizedMessage("Failed to delete ehcache disk cache data under path: {}", this.storagePath)); } - } catch (CachePersistenceException e) { - throw new OpenSearchException("Exception occurred while destroying ehcache and associated data", e); - } catch (IOException e) { - logger.error(() -> new ParameterizedMessage("Failed to delete ehcache disk cache data under path: {}", this.storagePath)); } + } /** diff --git a/plugins/cache-ehcache/src/test/java/org/opensearch/cache/store/disk/EhCacheDiskCacheTests.java b/plugins/cache-ehcache/src/test/java/org/opensearch/cache/store/disk/EhCacheDiskCacheTests.java index 29551befd3e9f..2bc24227bb513 100644 --- a/plugins/cache-ehcache/src/test/java/org/opensearch/cache/store/disk/EhCacheDiskCacheTests.java +++ b/plugins/cache-ehcache/src/test/java/org/opensearch/cache/store/disk/EhCacheDiskCacheTests.java @@ -25,6 +25,7 @@ import org.opensearch.common.metrics.CounterMetric; import org.opensearch.common.settings.Settings; import org.opensearch.common.unit.TimeValue; +import org.opensearch.common.util.io.IOUtils; import org.opensearch.core.common.bytes.BytesArray; import org.opensearch.core.common.bytes.BytesReference; import org.opensearch.core.common.bytes.CompositeBytesReference; @@ -34,6 +35,8 @@ import java.io.IOException; import java.nio.charset.Charset; import java.nio.charset.StandardCharsets; +import java.nio.file.Files; +import java.nio.file.Path; import java.util.ArrayList; import java.util.HashMap; import java.util.Iterator; @@ -47,10 +50,17 @@ import java.util.concurrent.Phaser; import java.util.function.ToLongBiFunction; +import org.ehcache.PersistentCacheManager; + import static org.opensearch.cache.EhcacheDiskCacheSettings.DISK_LISTENER_MODE_SYNC_KEY; import static org.opensearch.cache.EhcacheDiskCacheSettings.DISK_MAX_SIZE_IN_BYTES_KEY; import static org.opensearch.cache.EhcacheDiskCacheSettings.DISK_STORAGE_PATH_KEY; +import static org.opensearch.cache.store.disk.EhcacheDiskCache.MINIMUM_MAX_SIZE_IN_BYTES; import static org.hamcrest.CoreMatchers.instanceOf; +import static org.mockito.ArgumentMatchers.anyString; +import static org.mockito.Mockito.doNothing; +import static org.mockito.Mockito.doThrow; +import static org.mockito.Mockito.mock; @ThreadLeakFilters(filters = { EhcacheThreadLeakFilter.class }) public class EhCacheDiskCacheTests extends OpenSearchSingleNodeTestCase { @@ -882,6 +892,289 @@ public void testStatsTrackingDisabled() throws Exception { } } + public void testDiskCacheFilesAreClearedUpDuringCloseAndInitialization() throws Exception { + Settings 
settings = Settings.builder().build(); + MockRemovalListener removalListener = new MockRemovalListener<>(); + ToLongBiFunction, String> weigher = getWeigher(); + try (NodeEnvironment env = newNodeEnvironment(settings)) { + String path = env.nodePaths()[0].path.toString() + "/request_cache"; + // Create a dummy file to simulate a scenario where the data is already in the disk cache storage path + // beforehand. + Files.createDirectory(Path.of(path)); + Path dummyFilePath = Files.createFile(Path.of(path + "/testing.txt")); + assertTrue(Files.exists(dummyFilePath)); + ICache ehcacheTest = new EhcacheDiskCache.Builder().setThreadPoolAlias("ehcacheTest") + .setStoragePath(path) + .setIsEventListenerModeSync(true) + .setKeyType(String.class) + .setValueType(String.class) + .setKeySerializer(new StringSerializer()) + .setDiskCacheAlias("test1") + .setValueSerializer(new StringSerializer()) + .setDimensionNames(List.of(dimensionName)) + .setCacheType(CacheType.INDICES_REQUEST_CACHE) + .setThreadPoolAlias("") + .setSettings(settings) + .setExpireAfterAccess(TimeValue.MAX_VALUE) + .setMaximumWeightInBytes(CACHE_SIZE_IN_BYTES) + .setRemovalListener(removalListener) + .setWeigher(weigher) + .setStatsTrackingEnabled(false) + .build(); + int randomKeys = randomIntBetween(10, 100); + for (int i = 0; i < randomKeys; i++) { + ICacheKey iCacheKey = getICacheKey(UUID.randomUUID().toString()); + ehcacheTest.put(iCacheKey, UUID.randomUUID().toString()); + assertEquals(0, ehcacheTest.count()); // Expect count of 0 if NoopCacheStatsHolder is used + assertEquals(new ImmutableCacheStats(0, 0, 0, 0, 0), ehcacheTest.stats().getTotalStats()); + } + // Verify that older data was wiped out after initialization + assertFalse(Files.exists(dummyFilePath)); + + // Verify that there is data present under desired path by explicitly verifying the folder name by prefix + // (used from disk cache alias) + assertTrue(Files.exists(Path.of(path))); + boolean folderExists = Files.walk(Path.of(path)) + .filter(Files::isDirectory) + .anyMatch(path1 -> path1.getFileName().toString().startsWith("test1")); + assertTrue(folderExists); + ehcacheTest.close(); + assertFalse(Files.exists(Path.of(path))); // Verify everything is cleared up now after close() + } + } + + public void testDiskCacheCloseCalledTwiceAndVerifyDiskDataIsCleanedUp() throws Exception { + Settings settings = Settings.builder().build(); + MockRemovalListener removalListener = new MockRemovalListener<>(); + ToLongBiFunction, String> weigher = getWeigher(); + try (NodeEnvironment env = newNodeEnvironment(settings)) { + String path = env.nodePaths()[0].path.toString() + "/request_cache"; + ICache ehcacheTest = new EhcacheDiskCache.Builder().setThreadPoolAlias(null) + .setStoragePath(path) + .setIsEventListenerModeSync(true) + .setKeyType(String.class) + .setValueType(String.class) + .setKeySerializer(new StringSerializer()) + .setDiskCacheAlias("test1") + .setValueSerializer(new StringSerializer()) + .setDimensionNames(List.of(dimensionName)) + .setCacheType(CacheType.INDICES_REQUEST_CACHE) + .setSettings(settings) + .setExpireAfterAccess(TimeValue.MAX_VALUE) + .setMaximumWeightInBytes(CACHE_SIZE_IN_BYTES) + .setRemovalListener(removalListener) + .setWeigher(weigher) + .setStatsTrackingEnabled(false) + .build(); + int randomKeys = randomIntBetween(10, 100); + for (int i = 0; i < randomKeys; i++) { + ICacheKey iCacheKey = getICacheKey(UUID.randomUUID().toString()); + ehcacheTest.put(iCacheKey, UUID.randomUUID().toString()); + assertEquals(0, ehcacheTest.count()); // Expect 
count of 0 if NoopCacheStatsHolder is used + assertEquals(new ImmutableCacheStats(0, 0, 0, 0, 0), ehcacheTest.stats().getTotalStats()); + } + ehcacheTest.close(); + assertFalse(Files.exists(Path.of(path))); // Verify everything is cleared up now after close() + // Call it again. The second close() should be handled gracefully, not propagate an exception. + ehcacheTest.close(); + } + } + + public void testDiskCacheCloseAfterCleaningUpFilesManually() throws Exception { + Settings settings = Settings.builder().build(); + MockRemovalListener removalListener = new MockRemovalListener<>(); + ToLongBiFunction, String> weigher = getWeigher(); + try (NodeEnvironment env = newNodeEnvironment(settings)) { + String path = env.nodePaths()[0].path.toString() + "/request_cache"; + ICache ehcacheTest = new EhcacheDiskCache.Builder().setThreadPoolAlias(null) + .setStoragePath(path) + .setIsEventListenerModeSync(true) + .setKeyType(String.class) + .setValueType(String.class) + .setKeySerializer(new StringSerializer()) + .setDiskCacheAlias("test1") + .setValueSerializer(new StringSerializer()) + .setDimensionNames(List.of(dimensionName)) + .setCacheType(CacheType.INDICES_REQUEST_CACHE) + .setSettings(settings) + .setExpireAfterAccess(TimeValue.MAX_VALUE) + .setMaximumWeightInBytes(CACHE_SIZE_IN_BYTES) + .setRemovalListener(removalListener) + .setWeigher(weigher) + .setStatsTrackingEnabled(false) + .build(); + int randomKeys = randomIntBetween(10, 100); + for (int i = 0; i < randomKeys; i++) { + ICacheKey iCacheKey = getICacheKey(UUID.randomUUID().toString()); + ehcacheTest.put(iCacheKey, UUID.randomUUID().toString()); + assertEquals(0, ehcacheTest.count()); // Expect count of 0 if NoopCacheStatsHolder is used + assertEquals(new ImmutableCacheStats(0, 0, 0, 0, 0), ehcacheTest.stats().getTotalStats()); + } + IOUtils.rm(Path.of(path)); + ehcacheTest.close(); + } + } + + public void testEhcacheDiskCacheWithoutStoragePathDefined() throws Exception { + Settings settings = Settings.builder().build(); + MockRemovalListener removalListener = new MockRemovalListener<>(); + ToLongBiFunction, String> weigher = getWeigher(); + try (NodeEnvironment env = newNodeEnvironment(settings)) { + assertThrows( + IllegalArgumentException.class, + () -> new EhcacheDiskCache.Builder().setThreadPoolAlias("ehcacheTest") + .setIsEventListenerModeSync(true) + .setKeyType(String.class) + .setValueType(String.class) + .setKeySerializer(new StringSerializer()) + .setDiskCacheAlias("test1") + .setValueSerializer(new StringSerializer()) + .setDimensionNames(List.of(dimensionName)) + .setCacheType(CacheType.INDICES_REQUEST_CACHE) + .setSettings(settings) + .setExpireAfterAccess(TimeValue.MAX_VALUE) + .setMaximumWeightInBytes(CACHE_SIZE_IN_BYTES) + .setRemovalListener(removalListener) + .setWeigher(weigher) + .setStatsTrackingEnabled(false) + .build() + ); + } + } + + public void testEhcacheDiskCacheWithoutStoragePathNull() throws Exception { + Settings settings = Settings.builder().build(); + MockRemovalListener removalListener = new MockRemovalListener<>(); + ToLongBiFunction, String> weigher = getWeigher(); + try (NodeEnvironment env = newNodeEnvironment(settings)) { + assertThrows( + IllegalArgumentException.class, + () -> new EhcacheDiskCache.Builder().setThreadPoolAlias("ehcacheTest") + .setStoragePath(null) + .setIsEventListenerModeSync(true) + .setKeyType(String.class) + .setValueType(String.class) + .setKeySerializer(new StringSerializer()) + .setDiskCacheAlias("test1") + .setValueSerializer(new StringSerializer()) + .setDimensionNames(List.of(dimensionName)) + .setCacheType(CacheType.INDICES_REQUEST_CACHE) + .setSettings(settings) + .setExpireAfterAccess(TimeValue.MAX_VALUE) + .setMaximumWeightInBytes(CACHE_SIZE_IN_BYTES) + .setRemovalListener(removalListener) + .setWeigher(weigher) + .setStatsTrackingEnabled(false) + .build() + ); + } + } + +
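Taken together, the storage-path tests above and the size tests below fix the constructor's validation contract: a null or blank storage path is rejected, and so is any maximum weight at or below MINIMUM_MAX_SIZE_IN_BYTES (1024 * 100, i.e. 100KB), including zero; the boundary value itself fails, so the guard must be a non-strict comparison. A sketch of those guards, assuming parameter names that mirror the builder's setters (the path message is taken verbatim from the constructor shown earlier; the size message is illustrative):

    // Illustrative guards matching the behavior the surrounding tests assert.
    private static void validate(String storagePath, long maxWeightInBytes) {
        if (storagePath == null || storagePath.isBlank()) {
            throw new IllegalArgumentException("Storage path shouldn't be null or empty");
        }
        if (maxWeightInBytes <= MINIMUM_MAX_SIZE_IN_BYTES) { // rejects 0 and the 100KB boundary alike
            throw new IllegalArgumentException(
                "Disk cache size should be greater than " + MINIMUM_MAX_SIZE_IN_BYTES + " bytes"
            );
        }
    }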
public void testEhcacheWithStorageSizeLowerThanMinimumExpected() throws Exception { + Settings settings = Settings.builder().build(); + MockRemovalListener removalListener = new MockRemovalListener<>(); + ToLongBiFunction, String> weigher = getWeigher(); + try (NodeEnvironment env = newNodeEnvironment(settings)) { + assertThrows( + IllegalArgumentException.class, + () -> new EhcacheDiskCache.Builder().setThreadPoolAlias("ehcacheTest") + .setIsEventListenerModeSync(true) + .setKeyType(String.class) + .setValueType(String.class) + .setKeySerializer(new StringSerializer()) + .setDiskCacheAlias("test1") + .setValueSerializer(new StringSerializer()) + .setDimensionNames(List.of(dimensionName)) + .setCacheType(CacheType.INDICES_REQUEST_CACHE) + .setSettings(settings) + .setExpireAfterAccess(TimeValue.MAX_VALUE) + .setMaximumWeightInBytes(MINIMUM_MAX_SIZE_IN_BYTES) + .setRemovalListener(removalListener) + .setWeigher(weigher) + .setStatsTrackingEnabled(false) + .build() + ); + } + } + + public void testEhcacheWithStorageSizeZero() throws Exception { + Settings settings = Settings.builder().build(); + MockRemovalListener removalListener = new MockRemovalListener<>(); + ToLongBiFunction, String> weigher = getWeigher(); + try (NodeEnvironment env = newNodeEnvironment(settings)) { + assertThrows( + IllegalArgumentException.class, + () -> new EhcacheDiskCache.Builder().setThreadPoolAlias("ehcacheTest") + .setIsEventListenerModeSync(true) + .setKeyType(String.class) + .setValueType(String.class) + .setKeySerializer(new StringSerializer()) + .setDiskCacheAlias("test1") + .setValueSerializer(new StringSerializer()) + .setDimensionNames(List.of(dimensionName)) + .setCacheType(CacheType.INDICES_REQUEST_CACHE) + .setSettings(settings) + .setExpireAfterAccess(TimeValue.MAX_VALUE) + .setMaximumWeightInBytes(0) + .setRemovalListener(removalListener) + .setWeigher(weigher) + .setStatsTrackingEnabled(false) + .build() + ); + } + } + + public void testEhcacheCloseWithDestroyCacheMethodThrowingException() throws Exception { + EhcacheDiskCache ehcacheDiskCache = new MockEhcacheDiskCache(createDummyBuilder(null)); + PersistentCacheManager cacheManager = ehcacheDiskCache.getCacheManager(); + doNothing().when(cacheManager).removeCache(anyString()); + doNothing().when(cacheManager).close(); + doThrow(new RuntimeException("test")).when(cacheManager).destroyCache(anyString()); + ehcacheDiskCache.close(); + } + + static class MockEhcacheDiskCache extends EhcacheDiskCache { + + public MockEhcacheDiskCache(Builder builder) { + super(builder); + } + + @Override + PersistentCacheManager buildCacheManager() { + PersistentCacheManager cacheManager = mock(PersistentCacheManager.class); + return cacheManager; + } + } + + private EhcacheDiskCache.Builder createDummyBuilder(String storagePath) throws IOException { + Settings settings = Settings.builder().build(); + MockRemovalListener removalListener = new MockRemovalListener<>(); + ToLongBiFunction, String> weigher = getWeigher(); + try (NodeEnvironment env = newNodeEnvironment(settings)) { + if (storagePath == null || storagePath.isBlank()) { + storagePath = env.nodePaths()[0].path.toString() + "/request_cache"; + } + return (EhcacheDiskCache.Builder) new
EhcacheDiskCache.Builder().setThreadPoolAlias( + "ehcacheTest" + ) + .setIsEventListenerModeSync(true) + .setStoragePath(storagePath) + .setKeyType(String.class) + .setValueType(String.class) + .setKeySerializer(new StringSerializer()) + .setDiskCacheAlias("test1") + .setValueSerializer(new StringSerializer()) + .setDimensionNames(List.of(dimensionName)) + .setCacheType(CacheType.INDICES_REQUEST_CACHE) + .setSettings(settings) + .setExpireAfterAccess(TimeValue.MAX_VALUE) + .setMaximumWeightInBytes(CACHE_SIZE_IN_BYTES) + .setRemovalListener(removalListener) + .setWeigher(weigher) + .setStatsTrackingEnabled(false); + } + } + private List getRandomDimensions(List dimensionNames) { Random rand = Randomness.get(); int bound = 3; From 4abcf395897fda619a186afe53fbb5cfaa34b841 Mon Sep 17 00:00:00 2001 From: Arpit-Bandejiya Date: Thu, 18 Jul 2024 14:33:13 +0530 Subject: [PATCH 074/167] Refactor remote-routing-table service inline with remote state interfaces (#14668) --------- Signed-off-by: Arpit Bandejiya Signed-off-by: Arpit-Bandejiya --- CHANGELOG.md | 1 + .../InternalRemoteRoutingTableService.java | 232 +++---------- .../remote/NoopRemoteRoutingTableService.java | 14 +- .../remote/RemoteRoutingTableService.java | 14 +- .../RemoteRoutingTableServiceFactory.java | 5 +- .../AbstractRemoteWritableBlobEntity.java | 4 + .../common/settings/ClusterSettings.java | 6 +- .../remote/RemoteClusterStateService.java | 45 ++- .../model/RemoteClusterStateBlobStore.java | 22 +- .../model/RemoteRoutingTableBlobStore.java | 108 ++++++ .../routingtable/IndexRoutingTableHeader.java | 81 ----- .../routingtable/RemoteIndexRoutingTable.java | 136 ++++---- ...RemoteRoutingTableServiceFactoryTests.java | 6 +- .../RemoteRoutingTableServiceTests.java | 303 ++++++----------- .../remote/ClusterMetadataManifestTests.java | 4 +- .../RemoteClusterStateServiceTests.java | 9 +- .../RemoteRoutingTableBlobStoreTests.java | 133 ++++++++ .../IndexRoutingTableHeaderTests.java | 32 -- .../RemoteIndexRoutingTableTests.java | 307 +++++++++++++++--- 19 files changed, 799 insertions(+), 663 deletions(-) create mode 100644 server/src/main/java/org/opensearch/gateway/remote/model/RemoteRoutingTableBlobStore.java delete mode 100644 server/src/main/java/org/opensearch/gateway/remote/routingtable/IndexRoutingTableHeader.java create mode 100644 server/src/test/java/org/opensearch/gateway/remote/model/RemoteRoutingTableBlobStoreTests.java delete mode 100644 server/src/test/java/org/opensearch/gateway/remote/routingtable/IndexRoutingTableHeaderTests.java diff --git a/CHANGELOG.md b/CHANGELOG.md index b863b9d13e789..0417cc14ee86f 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -19,6 +19,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), - Print reason why parent task was cancelled ([#14604](https://github.com/opensearch-project/OpenSearch/issues/14604)) - Add matchesPluginSystemIndexPattern to SystemIndexRegistry ([#14750](https://github.com/opensearch-project/OpenSearch/pull/14750)) - Add Plugin interface for loading application based configuration templates (([#14659](https://github.com/opensearch-project/OpenSearch/issues/14659))) +- Refactor remote-routing-table service inline with remote state interfaces([#14668](https://github.com/opensearch-project/OpenSearch/pull/14668)) ### Dependencies - Bump `org.gradle.test-retry` from 1.5.8 to 1.5.9 ([#13442](https://github.com/opensearch-project/OpenSearch/pull/13442)) diff --git 
a/server/src/main/java/org/opensearch/cluster/routing/remote/InternalRemoteRoutingTableService.java b/server/src/main/java/org/opensearch/cluster/routing/remote/InternalRemoteRoutingTableService.java index cc1b0713393f3..f3f245ee9f8f0 100644 --- a/server/src/main/java/org/opensearch/cluster/routing/remote/InternalRemoteRoutingTableService.java +++ b/server/src/main/java/org/opensearch/cluster/routing/remote/InternalRemoteRoutingTableService.java @@ -11,35 +11,24 @@ import org.apache.logging.log4j.LogManager; import org.apache.logging.log4j.Logger; import org.apache.logging.log4j.message.ParameterizedMessage; -import org.apache.lucene.store.IndexInput; import org.opensearch.action.LatchedActionListener; -import org.opensearch.cluster.ClusterState; import org.opensearch.cluster.DiffableUtils; import org.opensearch.cluster.routing.IndexRoutingTable; import org.opensearch.cluster.routing.RoutingTable; import org.opensearch.common.CheckedRunnable; -import org.opensearch.common.blobstore.AsyncMultiStreamBlobContainer; -import org.opensearch.common.blobstore.BlobContainer; import org.opensearch.common.blobstore.BlobPath; -import org.opensearch.common.blobstore.stream.write.WritePriority; -import org.opensearch.common.blobstore.transfer.RemoteTransferContainer; -import org.opensearch.common.blobstore.transfer.stream.OffsetRangeIndexInputStream; -import org.opensearch.common.io.stream.BytesStreamOutput; import org.opensearch.common.lifecycle.AbstractLifecycleComponent; -import org.opensearch.common.lucene.store.ByteArrayIndexInput; +import org.opensearch.common.remote.RemoteWritableEntityStore; import org.opensearch.common.settings.ClusterSettings; -import org.opensearch.common.settings.Setting; import org.opensearch.common.settings.Settings; import org.opensearch.common.util.io.IOUtils; import org.opensearch.core.action.ActionListener; -import org.opensearch.core.common.bytes.BytesReference; -import org.opensearch.core.index.Index; +import org.opensearch.core.compress.Compressor; import org.opensearch.gateway.remote.ClusterMetadataManifest; import org.opensearch.gateway.remote.RemoteStateTransferException; +import org.opensearch.gateway.remote.model.RemoteRoutingTableBlobStore; import org.opensearch.gateway.remote.routingtable.RemoteIndexRoutingTable; -import org.opensearch.index.remote.RemoteStoreEnums; -import org.opensearch.index.remote.RemoteStorePathStrategy; -import org.opensearch.index.remote.RemoteStoreUtils; +import org.opensearch.index.translog.transfer.BlobStoreTransferService; import org.opensearch.node.Node; import org.opensearch.node.remotestore.RemoteStoreNodeAttribute; import org.opensearch.repositories.RepositoriesService; @@ -52,12 +41,10 @@ import java.util.List; import java.util.Map; import java.util.Optional; -import java.util.concurrent.ExecutorService; import java.util.function.Function; import java.util.function.Supplier; import java.util.stream.Collectors; -import static org.opensearch.gateway.remote.RemoteClusterStateUtils.DELIMITER; import static org.opensearch.node.remotestore.RemoteStoreNodeAttribute.isRemoteRoutingTableEnabled; /** @@ -67,64 +54,29 @@ */ public class InternalRemoteRoutingTableService extends AbstractLifecycleComponent implements RemoteRoutingTableService { - /** - * This setting is used to set the remote routing table store blob store path type strategy. 
- */ - public static final Setting REMOTE_ROUTING_TABLE_PATH_TYPE_SETTING = new Setting<>( - "cluster.remote_store.routing_table.path_type", - RemoteStoreEnums.PathType.HASHED_PREFIX.toString(), - RemoteStoreEnums.PathType::parseString, - Setting.Property.NodeScope, - Setting.Property.Dynamic - ); - - /** - * This setting is used to set the remote routing table store blob store path hash algorithm strategy. - * This setting will come to effect if the {@link #REMOTE_ROUTING_TABLE_PATH_TYPE_SETTING} - * is either {@code HASHED_PREFIX} or {@code HASHED_INFIX}. - */ - public static final Setting REMOTE_ROUTING_TABLE_PATH_HASH_ALGO_SETTING = new Setting<>( - "cluster.remote_store.routing_table.path_hash_algo", - RemoteStoreEnums.PathHashAlgorithm.FNV_1A_BASE64.toString(), - RemoteStoreEnums.PathHashAlgorithm::parseString, - Setting.Property.NodeScope, - Setting.Property.Dynamic - ); - - public static final String INDEX_ROUTING_PATH_TOKEN = "index-routing"; - public static final String INDEX_ROUTING_FILE_PREFIX = "index_routing"; - public static final String INDEX_ROUTING_METADATA_PREFIX = "indexRouting--"; - private static final Logger logger = LogManager.getLogger(InternalRemoteRoutingTableService.class); private final Settings settings; private final Supplier repositoriesService; + private Compressor compressor; + private RemoteWritableEntityStore remoteIndexRoutingTableStore; + private final ClusterSettings clusterSettings; private BlobStoreRepository blobStoreRepository; - private RemoteStoreEnums.PathType pathType; - private RemoteStoreEnums.PathHashAlgorithm pathHashAlgo; - private ThreadPool threadPool; + private final ThreadPool threadPool; + private final String clusterName; public InternalRemoteRoutingTableService( Supplier repositoriesService, Settings settings, ClusterSettings clusterSettings, - ThreadPool threadpool + ThreadPool threadpool, + String clusterName ) { assert isRemoteRoutingTableEnabled(settings) : "Remote routing table is not enabled"; this.repositoriesService = repositoriesService; this.settings = settings; - this.pathType = clusterSettings.get(REMOTE_ROUTING_TABLE_PATH_TYPE_SETTING); - this.pathHashAlgo = clusterSettings.get(REMOTE_ROUTING_TABLE_PATH_HASH_ALGO_SETTING); - clusterSettings.addSettingsUpdateConsumer(REMOTE_ROUTING_TABLE_PATH_TYPE_SETTING, this::setPathTypeSetting); - clusterSettings.addSettingsUpdateConsumer(REMOTE_ROUTING_TABLE_PATH_HASH_ALGO_SETTING, this::setPathHashAlgoSetting); this.threadPool = threadpool; - } - - private void setPathTypeSetting(RemoteStoreEnums.PathType pathType) { - this.pathType = pathType; - } - - private void setPathHashAlgoSetting(RemoteStoreEnums.PathHashAlgorithm pathHashAlgo) { - this.pathHashAlgo = pathHashAlgo; + this.clusterName = clusterName; + this.clusterSettings = clusterSettings; } public List getIndicesRouting(RoutingTable routingTable) { @@ -151,43 +103,32 @@ public DiffableUtils.MapDiff getIndexRoutingAsyncAction( - ClusterState clusterState, + @Override + public CheckedRunnable getAsyncIndexRoutingWriteAction( + String clusterUUID, + long term, + long version, IndexRoutingTable indexRouting, - LatchedActionListener latchedActionListener, - BlobPath clusterBasePath + LatchedActionListener latchedActionListener ) { - BlobPath indexRoutingPath = clusterBasePath.add(INDEX_ROUTING_PATH_TOKEN); - BlobPath path = pathType.path( - RemoteStorePathStrategy.PathInput.builder().basePath(indexRoutingPath).indexUUID(indexRouting.getIndex().getUUID()).build(), - pathHashAlgo - ); - final BlobContainer blobContainer = 
blobStoreRepository.blobStore().blobContainer(path); - - final String fileName = getIndexRoutingFileName(clusterState.term(), clusterState.version()); + RemoteIndexRoutingTable remoteIndexRoutingTable = new RemoteIndexRoutingTable(indexRouting, clusterUUID, compressor, term, version); ActionListener completionListener = ActionListener.wrap( - resp -> latchedActionListener.onResponse( - new ClusterMetadataManifest.UploadedIndexMetadata( - indexRouting.getIndex().getName(), - indexRouting.getIndex().getUUID(), - path.buildAsString() + fileName, - INDEX_ROUTING_METADATA_PREFIX - ) - ), + resp -> latchedActionListener.onResponse(remoteIndexRoutingTable.getUploadedMetadata()), ex -> latchedActionListener.onFailure( new RemoteStateTransferException("Exception in writing index to remote store: " + indexRouting.getIndex().toString(), ex) ) ); - return () -> uploadIndex(indexRouting, fileName, blobContainer, completionListener); + return () -> remoteIndexRoutingTableStore.writeAsync(remoteIndexRoutingTable, completionListener); } /** @@ -214,111 +155,21 @@ public List getAllUploadedIndices return new ArrayList<>(allUploadedIndicesRouting.values()); } - private void uploadIndex( - IndexRoutingTable indexRouting, - String fileName, - BlobContainer blobContainer, - ActionListener completionListener - ) { - RemoteIndexRoutingTable indexRoutingInput = new RemoteIndexRoutingTable(indexRouting); - BytesReference bytesInput = null; - try (BytesStreamOutput streamOutput = new BytesStreamOutput()) { - indexRoutingInput.writeTo(streamOutput); - bytesInput = streamOutput.bytes(); - } catch (IOException e) { - logger.error("Failed to serialize IndexRoutingTable for [{}]: [{}]", indexRouting, e); - completionListener.onFailure(e); - return; - } - - if (blobContainer instanceof AsyncMultiStreamBlobContainer == false) { - try { - blobContainer.writeBlob(fileName, bytesInput.streamInput(), bytesInput.length(), true); - completionListener.onResponse(null); - } catch (IOException e) { - logger.error("Failed to write IndexRoutingTable to remote store for indexRouting [{}]: [{}]", indexRouting, e); - completionListener.onFailure(e); - } - return; - } - - try (IndexInput input = new ByteArrayIndexInput("indexrouting", BytesReference.toBytes(bytesInput))) { - try ( - RemoteTransferContainer remoteTransferContainer = new RemoteTransferContainer( - fileName, - fileName, - input.length(), - true, - WritePriority.URGENT, - (size, position) -> new OffsetRangeIndexInputStream(input, size, position), - null, - false - ) - ) { - ((AsyncMultiStreamBlobContainer) blobContainer).asyncBlobUpload( - remoteTransferContainer.createWriteContext(), - completionListener - ); - } catch (IOException e) { - logger.error("Failed to write IndexRoutingTable to remote store for indexRouting [{}]: [{}]", indexRouting, e); - completionListener.onFailure(e); - } - } catch (IOException e) { - logger.error( - "Failed to create transfer object for IndexRoutingTable for remote store upload for indexRouting [{}]: [{}]", - indexRouting, - e - ); - completionListener.onFailure(e); - } - } - @Override public CheckedRunnable getAsyncIndexRoutingReadAction( + String clusterUUID, String uploadedFilename, - Index index, LatchedActionListener latchedActionListener ) { - int idx = uploadedFilename.lastIndexOf("/"); - String blobFileName = uploadedFilename.substring(idx + 1); - BlobContainer blobContainer = blobStoreRepository.blobStore() - .blobContainer(BlobPath.cleanPath().add(uploadedFilename.substring(0, idx))); - return () -> readAsync( - blobContainer, - 
blobFileName, - index, - threadPool.executor(ThreadPool.Names.REMOTE_STATE_READ), - ActionListener.wrap( - response -> latchedActionListener.onResponse(response.getIndexRoutingTable()), - latchedActionListener::onFailure - ) + ActionListener actionListener = ActionListener.wrap( + latchedActionListener::onResponse, + latchedActionListener::onFailure ); - } - private void readAsync( - BlobContainer blobContainer, - String name, - Index index, - ExecutorService executorService, - ActionListener listener - ) { - executorService.execute(() -> { - try { - listener.onResponse(read(blobContainer, name, index)); - } catch (Exception e) { - listener.onFailure(e); - } - }); - } + RemoteIndexRoutingTable remoteIndexRoutingTable = new RemoteIndexRoutingTable(uploadedFilename, clusterUUID, compressor); - private RemoteIndexRoutingTable read(BlobContainer blobContainer, String path, Index index) { - try { - return new RemoteIndexRoutingTable(blobContainer.readBlob(path), index); - } catch (IOException | AssertionError e) { - logger.error(() -> new ParameterizedMessage("RoutingTable read failed for path {}", path), e); - throw new RemoteStateTransferException("Failed to read RemoteRoutingTable from Manifest with error ", e); - } + return () -> remoteIndexRoutingTableStore.readAsync(remoteIndexRoutingTable, actionListener); } @Override @@ -335,16 +186,6 @@ public List getUpdatedIndexRoutin }).collect(Collectors.toList()); } - private String getIndexRoutingFileName(long term, long version) { - return String.join( - DELIMITER, - INDEX_ROUTING_FILE_PREFIX, - RemoteStoreUtils.invertLong(term), - RemoteStoreUtils.invertLong(version), - RemoteStoreUtils.invertLong(System.currentTimeMillis()) - ); - } - @Override protected void doClose() throws IOException { if (blobStoreRepository != null) { @@ -362,6 +203,16 @@ protected void doStart() { final Repository repository = repositoriesService.get().repository(remoteStoreRepo); assert repository instanceof BlobStoreRepository : "Repository should be instance of BlobStoreRepository"; blobStoreRepository = (BlobStoreRepository) repository; + compressor = blobStoreRepository.getCompressor(); + + this.remoteIndexRoutingTableStore = new RemoteRoutingTableBlobStore<>( + new BlobStoreTransferService(blobStoreRepository.blobStore(), threadPool), + blobStoreRepository, + clusterName, + threadPool, + ThreadPool.Names.REMOTE_STATE_READ, + clusterSettings + ); } @Override @@ -377,5 +228,4 @@ public void deleteStaleIndexRoutingPaths(List stalePaths) throws IOExcep throw e; } } - } diff --git a/server/src/main/java/org/opensearch/cluster/routing/remote/NoopRemoteRoutingTableService.java b/server/src/main/java/org/opensearch/cluster/routing/remote/NoopRemoteRoutingTableService.java index 6236d107d0220..4636e492df28f 100644 --- a/server/src/main/java/org/opensearch/cluster/routing/remote/NoopRemoteRoutingTableService.java +++ b/server/src/main/java/org/opensearch/cluster/routing/remote/NoopRemoteRoutingTableService.java @@ -9,14 +9,11 @@ package org.opensearch.cluster.routing.remote; import org.opensearch.action.LatchedActionListener; -import org.opensearch.cluster.ClusterState; import org.opensearch.cluster.DiffableUtils; import org.opensearch.cluster.routing.IndexRoutingTable; import org.opensearch.cluster.routing.RoutingTable; import org.opensearch.common.CheckedRunnable; -import org.opensearch.common.blobstore.BlobPath; import org.opensearch.common.lifecycle.AbstractLifecycleComponent; -import org.opensearch.core.index.Index; import 
org.opensearch.gateway.remote.ClusterMetadataManifest; import java.io.IOException; @@ -42,11 +39,12 @@ public DiffableUtils.MapDiff getIndexRoutingAsyncAction( - ClusterState clusterState, + public CheckedRunnable getAsyncIndexRoutingWriteAction( + String clusterUUID, + long term, + long version, IndexRoutingTable indexRouting, - LatchedActionListener latchedActionListener, - BlobPath clusterBasePath + LatchedActionListener latchedActionListener ) { // noop return () -> {}; @@ -64,8 +62,8 @@ public List getAllUploadedIndices @Override public CheckedRunnable getAsyncIndexRoutingReadAction( + String clusterUUID, String uploadedFilename, - Index index, LatchedActionListener latchedActionListener ) { // noop diff --git a/server/src/main/java/org/opensearch/cluster/routing/remote/RemoteRoutingTableService.java b/server/src/main/java/org/opensearch/cluster/routing/remote/RemoteRoutingTableService.java index d455dfb58eabc..d319123bc2cee 100644 --- a/server/src/main/java/org/opensearch/cluster/routing/remote/RemoteRoutingTableService.java +++ b/server/src/main/java/org/opensearch/cluster/routing/remote/RemoteRoutingTableService.java @@ -9,16 +9,13 @@ package org.opensearch.cluster.routing.remote; import org.opensearch.action.LatchedActionListener; -import org.opensearch.cluster.ClusterState; import org.opensearch.cluster.DiffableUtils; import org.opensearch.cluster.routing.IndexRoutingTable; import org.opensearch.cluster.routing.RoutingTable; import org.opensearch.common.CheckedRunnable; -import org.opensearch.common.blobstore.BlobPath; import org.opensearch.common.lifecycle.LifecycleComponent; import org.opensearch.core.common.io.stream.StreamInput; import org.opensearch.core.common.io.stream.StreamOutput; -import org.opensearch.core.index.Index; import org.opensearch.gateway.remote.ClusterMetadataManifest; import java.io.IOException; @@ -47,8 +44,8 @@ public IndexRoutingTable read(StreamInput in, String key) throws IOException { List getIndicesRouting(RoutingTable routingTable); CheckedRunnable getAsyncIndexRoutingReadAction( + String clusterUUID, String uploadedFilename, - Index index, LatchedActionListener latchedActionListener ); @@ -62,11 +59,12 @@ DiffableUtils.MapDiff> RoutingTable after ); - CheckedRunnable getIndexRoutingAsyncAction( - ClusterState clusterState, + CheckedRunnable getAsyncIndexRoutingWriteAction( + String clusterUUID, + long term, + long version, IndexRoutingTable indexRouting, - LatchedActionListener latchedActionListener, - BlobPath clusterBasePath + LatchedActionListener latchedActionListener ); List getAllUploadedIndicesRouting( diff --git a/server/src/main/java/org/opensearch/cluster/routing/remote/RemoteRoutingTableServiceFactory.java b/server/src/main/java/org/opensearch/cluster/routing/remote/RemoteRoutingTableServiceFactory.java index 82837191a30b7..56dfa03215a64 100644 --- a/server/src/main/java/org/opensearch/cluster/routing/remote/RemoteRoutingTableServiceFactory.java +++ b/server/src/main/java/org/opensearch/cluster/routing/remote/RemoteRoutingTableServiceFactory.java @@ -34,10 +34,11 @@ public static RemoteRoutingTableService getService( Supplier repositoriesService, Settings settings, ClusterSettings clusterSettings, - ThreadPool threadPool + ThreadPool threadPool, + String clusterName ) { if (isRemoteRoutingTableEnabled(settings)) { - return new InternalRemoteRoutingTableService(repositoriesService, settings, clusterSettings, threadPool); + return new InternalRemoteRoutingTableService(repositoriesService, settings, clusterSettings, threadPool, 
clusterName); } return new NoopRemoteRoutingTableService(); } diff --git a/server/src/main/java/org/opensearch/common/remote/AbstractRemoteWritableBlobEntity.java b/server/src/main/java/org/opensearch/common/remote/AbstractRemoteWritableBlobEntity.java index 23fc9d3ad77cb..237c077cb673c 100644 --- a/server/src/main/java/org/opensearch/common/remote/AbstractRemoteWritableBlobEntity.java +++ b/server/src/main/java/org/opensearch/common/remote/AbstractRemoteWritableBlobEntity.java @@ -40,6 +40,10 @@ public AbstractRemoteWritableBlobEntity( this.namedXContentRegistry = namedXContentRegistry; } + public AbstractRemoteWritableBlobEntity(final String clusterUUID, final Compressor compressor) { + this(clusterUUID, compressor, null); + } + public abstract BlobPathParameters getBlobPathParameters(); public abstract String getType(); diff --git a/server/src/main/java/org/opensearch/common/settings/ClusterSettings.java b/server/src/main/java/org/opensearch/common/settings/ClusterSettings.java index b4826e1a59428..49801fd3834b8 100644 --- a/server/src/main/java/org/opensearch/common/settings/ClusterSettings.java +++ b/server/src/main/java/org/opensearch/common/settings/ClusterSettings.java @@ -78,7 +78,6 @@ import org.opensearch.cluster.routing.allocation.decider.SameShardAllocationDecider; import org.opensearch.cluster.routing.allocation.decider.ShardsLimitAllocationDecider; import org.opensearch.cluster.routing.allocation.decider.ThrottlingAllocationDecider; -import org.opensearch.cluster.routing.remote.InternalRemoteRoutingTableService; import org.opensearch.cluster.service.ClusterApplierService; import org.opensearch.cluster.service.ClusterManagerService; import org.opensearch.cluster.service.ClusterManagerTaskThrottler; @@ -108,6 +107,7 @@ import org.opensearch.gateway.ShardsBatchGatewayAllocator; import org.opensearch.gateway.remote.RemoteClusterStateCleanupManager; import org.opensearch.gateway.remote.RemoteClusterStateService; +import org.opensearch.gateway.remote.model.RemoteRoutingTableBlobStore; import org.opensearch.http.HttpTransportSettings; import org.opensearch.index.IndexModule; import org.opensearch.index.IndexSettings; @@ -730,8 +730,8 @@ public void apply(Settings value, Settings current, Settings previous) { RemoteStoreNodeService.MIGRATION_DIRECTION_SETTING, IndicesService.CLUSTER_REMOTE_INDEX_RESTRICT_ASYNC_DURABILITY_SETTING, IndicesService.CLUSTER_INDEX_RESTRICT_REPLICATION_TYPE_SETTING, - InternalRemoteRoutingTableService.REMOTE_ROUTING_TABLE_PATH_TYPE_SETTING, - InternalRemoteRoutingTableService.REMOTE_ROUTING_TABLE_PATH_HASH_ALGO_SETTING, + RemoteRoutingTableBlobStore.REMOTE_ROUTING_TABLE_PATH_TYPE_SETTING, + RemoteRoutingTableBlobStore.REMOTE_ROUTING_TABLE_PATH_HASH_ALGO_SETTING, // Admission Control Settings AdmissionControlSettings.ADMISSION_CONTROL_TRANSPORT_LAYER_MODE, diff --git a/server/src/main/java/org/opensearch/gateway/remote/RemoteClusterStateService.java b/server/src/main/java/org/opensearch/gateway/remote/RemoteClusterStateService.java index 3e63f9114ea16..7e7a93e1d42ec 100644 --- a/server/src/main/java/org/opensearch/gateway/remote/RemoteClusterStateService.java +++ b/server/src/main/java/org/opensearch/gateway/remote/RemoteClusterStateService.java @@ -26,7 +26,6 @@ import org.opensearch.cluster.node.DiscoveryNodes.Builder; import org.opensearch.cluster.routing.IndexRoutingTable; import org.opensearch.cluster.routing.RoutingTable; -import org.opensearch.cluster.routing.remote.InternalRemoteRoutingTableService; import 
org.opensearch.cluster.routing.remote.RemoteRoutingTableService; import org.opensearch.cluster.routing.remote.RemoteRoutingTableServiceFactory; import org.opensearch.cluster.service.ClusterService; @@ -43,7 +42,6 @@ import org.opensearch.common.util.io.IOUtils; import org.opensearch.core.action.ActionListener; import org.opensearch.core.common.io.stream.NamedWriteableRegistry; -import org.opensearch.core.index.Index; import org.opensearch.core.xcontent.ToXContent; import org.opensearch.gateway.remote.ClusterMetadataManifest.UploadedIndexMetadata; import org.opensearch.gateway.remote.ClusterMetadataManifest.UploadedMetadataAttribute; @@ -98,7 +96,6 @@ import static org.opensearch.gateway.remote.RemoteClusterStateUtils.DELIMITER; import static org.opensearch.gateway.remote.RemoteClusterStateUtils.UploadedMetadataResults; import static org.opensearch.gateway.remote.RemoteClusterStateUtils.clusterUUIDContainer; -import static org.opensearch.gateway.remote.RemoteClusterStateUtils.getClusterMetadataBasePath; import static org.opensearch.gateway.remote.model.RemoteClusterStateCustoms.CLUSTER_STATE_CUSTOM; import static org.opensearch.gateway.remote.model.RemoteCoordinationMetadata.COORDINATION_METADATA; import static org.opensearch.gateway.remote.model.RemoteCustomMetadata.CUSTOM_DELIMITER; @@ -107,6 +104,7 @@ import static org.opensearch.gateway.remote.model.RemotePersistentSettingsMetadata.SETTING_METADATA; import static org.opensearch.gateway.remote.model.RemoteTemplatesMetadata.TEMPLATES_METADATA; import static org.opensearch.gateway.remote.model.RemoteTransientSettingsMetadata.TRANSIENT_SETTING_METADATA; +import static org.opensearch.gateway.remote.routingtable.RemoteIndexRoutingTable.INDEX_ROUTING_METADATA_PREFIX; import static org.opensearch.node.remotestore.RemoteStoreNodeAttribute.isRemoteStoreClusterStateEnabled; /** @@ -146,7 +144,7 @@ public class RemoteClusterStateService implements Closeable { private final List indexMetadataUploadListeners; private BlobStoreRepository blobStoreRepository; private BlobStoreTransferService blobStoreTransferService; - private final RemoteRoutingTableService remoteRoutingTableService; + private RemoteRoutingTableService remoteRoutingTableService; private volatile TimeValue slowWriteLoggingThreshold; private final RemotePersistenceStats remoteStateStats; @@ -197,16 +195,17 @@ public RemoteClusterStateService( this.remoteStateStats = new RemotePersistenceStats(); this.namedWriteableRegistry = namedWriteableRegistry; this.indexMetadataUploadListeners = indexMetadataUploadListeners; + this.isPublicationEnabled = FeatureFlags.isEnabled(REMOTE_PUBLICATION_EXPERIMENTAL) + && RemoteStoreNodeAttribute.isRemoteStoreClusterStateEnabled(settings) + && RemoteStoreNodeAttribute.isRemoteRoutingTableEnabled(settings); this.remoteRoutingTableService = RemoteRoutingTableServiceFactory.getService( repositoriesService, settings, clusterSettings, - threadPool + threadpool, + ClusterName.CLUSTER_NAME_SETTING.get(settings).value() ); - this.remoteClusterStateCleanupManager = new RemoteClusterStateCleanupManager(this, clusterService, remoteRoutingTableService); - this.isPublicationEnabled = FeatureFlags.isEnabled(REMOTE_PUBLICATION_EXPERIMENTAL) - && RemoteStoreNodeAttribute.isRemoteStoreClusterStateEnabled(settings) - && RemoteStoreNodeAttribute.isRemoteRoutingTableEnabled(settings); + remoteClusterStateCleanupManager = new RemoteClusterStateCleanupManager(this, clusterService, remoteRoutingTableService); } /** @@ -663,16 +662,13 @@ UploadedMetadataResults 
writeMetadataInParallel( }); indicesRoutingToUpload.forEach(indexRoutingTable -> { uploadTasks.put( - InternalRemoteRoutingTableService.INDEX_ROUTING_METADATA_PREFIX + indexRoutingTable.getIndex().getName(), - remoteRoutingTableService.getIndexRoutingAsyncAction( - clusterState, + INDEX_ROUTING_METADATA_PREFIX + indexRoutingTable.getIndex().getName(), + remoteRoutingTableService.getAsyncIndexRoutingWriteAction( + clusterState.metadata().clusterUUID(), + clusterState.term(), + clusterState.version(), indexRoutingTable, - listener, - getClusterMetadataBasePath( - blobStoreRepository, - clusterState.getClusterName().value(), - clusterState.metadata().clusterUUID() - ) + listener ) ); }); @@ -723,7 +719,7 @@ UploadedMetadataResults writeMetadataInParallel( UploadedMetadataResults response = new UploadedMetadataResults(); results.forEach((name, uploadedMetadata) -> { if (uploadedMetadata.getClass().equals(UploadedIndexMetadata.class) - && uploadedMetadata.getComponent().contains(InternalRemoteRoutingTableService.INDEX_ROUTING_METADATA_PREFIX)) { + && uploadedMetadata.getComponent().contains(INDEX_ROUTING_METADATA_PREFIX)) { response.uploadedIndicesRoutingMetadata.add((UploadedIndexMetadata) uploadedMetadata); } else if (name.startsWith(CUSTOM_METADATA)) { // component name for custom metadata will look like custom-- @@ -897,9 +893,8 @@ public void start() { final Repository repository = repositoriesService.get().repository(remoteStoreRepo); assert repository instanceof BlobStoreRepository : "Repository should be instance of BlobStoreRepository"; blobStoreRepository = (BlobStoreRepository) repository; - this.remoteRoutingTableService.start(); - blobStoreTransferService = new BlobStoreTransferService(getBlobStore(), threadpool); String clusterName = ClusterName.CLUSTER_NAME_SETTING.get(settings).value(); + blobStoreTransferService = new BlobStoreTransferService(getBlobStore(), threadpool); remoteGlobalMetadataManager = new RemoteGlobalMetadataManager( clusterSettings, @@ -931,6 +926,8 @@ public void start() { namedWriteableRegistry, threadpool ); + + remoteRoutingTableService.start(); remoteClusterStateCleanupManager.start(); } @@ -1022,10 +1019,10 @@ ClusterState readClusterStateInParallel( LatchedActionListener routingTableLatchedActionListener = new LatchedActionListener<>( ActionListener.wrap(response -> { - logger.debug("Successfully read cluster state component from remote"); + logger.debug(() -> new ParameterizedMessage("Successfully read index-routing for index {}", response.getIndex().getName())); readIndexRoutingTableResults.add(response); }, ex -> { - logger.error("Failed to read cluster state from remote", ex); + logger.error(() -> new ParameterizedMessage("Failed to read index-routing from remote"), ex); exceptionList.add(ex); }), latch @@ -1034,8 +1031,8 @@ ClusterState readClusterStateInParallel( for (UploadedIndexMetadata indexRouting : indicesRoutingToRead) { asyncMetadataReadActions.add( remoteRoutingTableService.getAsyncIndexRoutingReadAction( + clusterUUID, indexRouting.getUploadedFilename(), - new Index(indexRouting.getIndexName(), indexRouting.getIndexUUID()), routingTableLatchedActionListener ) ); diff --git a/server/src/main/java/org/opensearch/gateway/remote/model/RemoteClusterStateBlobStore.java b/server/src/main/java/org/opensearch/gateway/remote/model/RemoteClusterStateBlobStore.java index 1dd23443f1252..cd8b8aa41ad65 100644 --- a/server/src/main/java/org/opensearch/gateway/remote/model/RemoteClusterStateBlobStore.java +++ 
b/server/src/main/java/org/opensearch/gateway/remote/model/RemoteClusterStateBlobStore.java @@ -23,6 +23,8 @@ import java.io.InputStream; import java.util.concurrent.ExecutorService; +import static org.opensearch.gateway.remote.RemoteClusterStateUtils.CLUSTER_STATE_PATH_TOKEN; + /** * Abstract class for a blob type storage * @@ -88,18 +90,26 @@ public void readAsync(final U entity, final ActionListener<U> listener) { }); } - private BlobPath getBlobPathForUpload(final AbstractRemoteWritableBlobEntity<T> obj) { - BlobPath blobPath = blobStoreRepository.basePath() - .add(RemoteClusterStateUtils.encodeString(clusterName)) - .add("cluster-state") - .add(obj.clusterUUID()); + public String getClusterName() { + return clusterName; + } + + public BlobPath getBlobPathPrefix(String clusterUUID) { + return blobStoreRepository.basePath() + .add(RemoteClusterStateUtils.encodeString(getClusterName())) + .add(CLUSTER_STATE_PATH_TOKEN) + .add(clusterUUID); + } + + public BlobPath getBlobPathForUpload(final AbstractRemoteWritableBlobEntity<T> obj) { + BlobPath blobPath = getBlobPathPrefix(obj.clusterUUID()); for (String token : obj.getBlobPathParameters().getPathTokens()) { blobPath = blobPath.add(token); } return blobPath; } - private BlobPath getBlobPathForDownload(final AbstractRemoteWritableBlobEntity<T> obj) { + public BlobPath getBlobPathForDownload(final AbstractRemoteWritableBlobEntity<T> obj) { String[] pathTokens = obj.getBlobPathTokens(); BlobPath blobPath = new BlobPath(); if (pathTokens == null || pathTokens.length < 1) { diff --git a/server/src/main/java/org/opensearch/gateway/remote/model/RemoteRoutingTableBlobStore.java b/server/src/main/java/org/opensearch/gateway/remote/model/RemoteRoutingTableBlobStore.java new file mode 100644 index 0000000000000..7c4a5bf2236a1 --- /dev/null +++ b/server/src/main/java/org/opensearch/gateway/remote/model/RemoteRoutingTableBlobStore.java @@ -0,0 +1,108 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +package org.opensearch.gateway.remote.model; + +import org.opensearch.common.blobstore.BlobPath; +import org.opensearch.common.remote.AbstractRemoteWritableBlobEntity; +import org.opensearch.common.remote.RemoteWriteableEntity; +import org.opensearch.common.settings.ClusterSettings; +import org.opensearch.common.settings.Setting; +import org.opensearch.gateway.remote.routingtable.RemoteIndexRoutingTable; +import org.opensearch.index.remote.RemoteStoreEnums; +import org.opensearch.index.remote.RemoteStorePathStrategy; +import org.opensearch.index.translog.transfer.BlobStoreTransferService; +import org.opensearch.repositories.blobstore.BlobStoreRepository; +import org.opensearch.threadpool.ThreadPool; + +import static org.opensearch.gateway.remote.routingtable.RemoteIndexRoutingTable.INDEX_ROUTING_TABLE; + +/** + * Extends the RemoteClusterStateBlobStore to support {@link RemoteIndexRoutingTable} + * + * @param <IndexRoutingTable> which can be uploaded to / downloaded from blob store + * @param <U> The concrete class implementing {@link RemoteWriteableEntity} which is used as a wrapper for IndexRoutingTable entity. + */ +public class RemoteRoutingTableBlobStore<IndexRoutingTable, U extends AbstractRemoteWritableBlobEntity<IndexRoutingTable>> extends + RemoteClusterStateBlobStore<IndexRoutingTable, U> { + + /** + * This setting is used to set the remote routing table store blob store path type strategy.
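+ * With the default {@code HASHED_PREFIX} type, index routing blobs land under a hashed prefix computed over a base path of the form + * {@code <repo-base-path>/<encoded-cluster-name>/cluster-state/<cluster-uuid>/index-routing/} (illustrative layout; see {@link #getBlobPathForUpload} below).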
+ */ + public static final Setting<RemoteStoreEnums.PathType> REMOTE_ROUTING_TABLE_PATH_TYPE_SETTING = new Setting<>( + "cluster.remote_store.routing_table.path_type", + RemoteStoreEnums.PathType.HASHED_PREFIX.toString(), + RemoteStoreEnums.PathType::parseString, + Setting.Property.NodeScope, + Setting.Property.Dynamic + ); + + /** + * This setting is used to set the remote routing table store blob store path hash algorithm strategy. + * This setting will come into effect if the {@link #REMOTE_ROUTING_TABLE_PATH_TYPE_SETTING} + * is either {@code HASHED_PREFIX} or {@code HASHED_INFIX}. + */ + public static final Setting<RemoteStoreEnums.PathHashAlgorithm> REMOTE_ROUTING_TABLE_PATH_HASH_ALGO_SETTING = new Setting<>( + "cluster.remote_store.routing_table.path_hash_algo", + RemoteStoreEnums.PathHashAlgorithm.FNV_1A_BASE64.toString(), + RemoteStoreEnums.PathHashAlgorithm::parseString, + Setting.Property.NodeScope, + Setting.Property.Dynamic + ); + + private RemoteStoreEnums.PathType pathType; + private RemoteStoreEnums.PathHashAlgorithm pathHashAlgo; + + public RemoteRoutingTableBlobStore( + BlobStoreTransferService blobStoreTransferService, + BlobStoreRepository blobStoreRepository, + String clusterName, + ThreadPool threadPool, + String executor, + ClusterSettings clusterSettings + ) { + super(blobStoreTransferService, blobStoreRepository, clusterName, threadPool, executor); + this.pathType = clusterSettings.get(REMOTE_ROUTING_TABLE_PATH_TYPE_SETTING); + this.pathHashAlgo = clusterSettings.get(REMOTE_ROUTING_TABLE_PATH_HASH_ALGO_SETTING); + clusterSettings.addSettingsUpdateConsumer(REMOTE_ROUTING_TABLE_PATH_TYPE_SETTING, this::setPathTypeSetting); + clusterSettings.addSettingsUpdateConsumer(REMOTE_ROUTING_TABLE_PATH_HASH_ALGO_SETTING, this::setPathHashAlgoSetting); + } + + @Override + public BlobPath getBlobPathForUpload(final AbstractRemoteWritableBlobEntity<IndexRoutingTable> obj) { + assert obj.getBlobPathParameters().getPathTokens().size() == 1 : "Unexpected tokens in RemoteRoutingTableObject"; + BlobPath indexRoutingPath = getBlobPathPrefix(obj.clusterUUID()).add(INDEX_ROUTING_TABLE); + + BlobPath path = pathType.path( + RemoteStorePathStrategy.PathInput.builder() + .basePath(indexRoutingPath) + .indexUUID(String.join("", obj.getBlobPathParameters().getPathTokens())) + .build(), + pathHashAlgo + ); + return path; + } + + private void setPathTypeSetting(RemoteStoreEnums.PathType pathType) { + this.pathType = pathType; + } + + private void setPathHashAlgoSetting(RemoteStoreEnums.PathHashAlgorithm pathHashAlgo) { + this.pathHashAlgo = pathHashAlgo; + } + + // For testing only + protected RemoteStoreEnums.PathType getPathTypeSetting() { + return pathType; + } + + // For testing only + protected RemoteStoreEnums.PathHashAlgorithm getPathHashAlgoSetting() { + return pathHashAlgo; + } +} diff --git a/server/src/main/java/org/opensearch/gateway/remote/routingtable/IndexRoutingTableHeader.java b/server/src/main/java/org/opensearch/gateway/remote/routingtable/IndexRoutingTableHeader.java deleted file mode 100644 index 5baea6adba0c7..0000000000000 --- a/server/src/main/java/org/opensearch/gateway/remote/routingtable/IndexRoutingTableHeader.java +++ /dev/null @@ -1,81 +0,0 @@ -/* - * SPDX-License-Identifier: Apache-2.0 - * - * The OpenSearch Contributors require contributions made to - * this file be licensed under the Apache-2.0 license or a - * compatible open source license.
- */ - -package org.opensearch.gateway.remote.routingtable; - -import org.apache.lucene.codecs.CodecUtil; -import org.apache.lucene.index.CorruptIndexException; -import org.apache.lucene.index.IndexFormatTooNewException; -import org.apache.lucene.index.IndexFormatTooOldException; -import org.apache.lucene.store.InputStreamDataInput; -import org.apache.lucene.store.OutputStreamDataOutput; -import org.opensearch.core.common.io.stream.StreamInput; -import org.opensearch.core.common.io.stream.StreamOutput; -import org.opensearch.core.common.io.stream.Writeable; - -import java.io.EOFException; -import java.io.IOException; - -/** - * The stored header information for the individual index routing table - */ -public class IndexRoutingTableHeader implements Writeable { - - public static final String INDEX_ROUTING_HEADER_CODEC = "index_routing_header_codec"; - public static final int INITIAL_VERSION = 1; - public static final int CURRENT_VERSION = INITIAL_VERSION; - private final String indexName; - - public IndexRoutingTableHeader(String indexName) { - this.indexName = indexName; - } - - /** - * Reads the contents on the stream into the corresponding {@link IndexRoutingTableHeader} - * - * @param in streamInput - * @throws IOException exception thrown on failing to read from stream. - */ - public IndexRoutingTableHeader(StreamInput in) throws IOException { - try { - readHeaderVersion(in); - indexName = in.readString(); - } catch (EOFException e) { - throw new IOException("index routing header truncated", e); - } - } - - private void readHeaderVersion(final StreamInput in) throws IOException { - try { - CodecUtil.checkHeader(new InputStreamDataInput(in), INDEX_ROUTING_HEADER_CODEC, INITIAL_VERSION, CURRENT_VERSION); - } catch (CorruptIndexException | IndexFormatTooOldException | IndexFormatTooNewException e) { - throw new IOException("index routing table header corrupted", e); - } - } - - /** - * Write the IndexRoutingTable to given stream. - * - * @param out stream to write - * @throws IOException exception thrown on failing to write to stream. 
- */ - public void writeTo(StreamOutput out) throws IOException { - try { - CodecUtil.writeHeader(new OutputStreamDataOutput(out), INDEX_ROUTING_HEADER_CODEC, CURRENT_VERSION); - out.writeString(indexName); - out.flush(); - } catch (IOException e) { - throw new IOException("Failed to write IndexRoutingTable header", e); - } - } - - public String getIndexName() { - return indexName; - } - -} diff --git a/server/src/main/java/org/opensearch/gateway/remote/routingtable/RemoteIndexRoutingTable.java b/server/src/main/java/org/opensearch/gateway/remote/routingtable/RemoteIndexRoutingTable.java index 17c55190da07f..40b5bafde2b13 100644 --- a/server/src/main/java/org/opensearch/gateway/remote/routingtable/RemoteIndexRoutingTable.java +++ b/server/src/main/java/org/opensearch/gateway/remote/routingtable/RemoteIndexRoutingTable.java @@ -9,92 +9,106 @@ package org.opensearch.gateway.remote.routingtable; import org.opensearch.cluster.routing.IndexRoutingTable; -import org.opensearch.cluster.routing.IndexShardRoutingTable; -import org.opensearch.core.common.io.stream.BufferedChecksumStreamInput; -import org.opensearch.core.common.io.stream.BufferedChecksumStreamOutput; -import org.opensearch.core.common.io.stream.InputStreamStreamInput; -import org.opensearch.core.common.io.stream.StreamOutput; -import org.opensearch.core.common.io.stream.Writeable; +import org.opensearch.common.io.Streams; +import org.opensearch.common.remote.AbstractRemoteWritableBlobEntity; +import org.opensearch.common.remote.BlobPathParameters; +import org.opensearch.core.compress.Compressor; import org.opensearch.core.index.Index; +import org.opensearch.gateway.remote.ClusterMetadataManifest; +import org.opensearch.index.remote.RemoteStoreUtils; +import org.opensearch.repositories.blobstore.ChecksumWritableBlobStoreFormat; -import java.io.EOFException; import java.io.IOException; import java.io.InputStream; +import java.util.List; + +import static org.opensearch.gateway.remote.RemoteClusterStateUtils.DELIMITER; /** * Remote store object for IndexRoutingTable */ -public class RemoteIndexRoutingTable implements Writeable { +public class RemoteIndexRoutingTable extends AbstractRemoteWritableBlobEntity<IndexRoutingTable> { - private final IndexRoutingTable indexRoutingTable; + public static final String INDEX_ROUTING_TABLE = "index-routing"; + public static final String INDEX_ROUTING_METADATA_PREFIX = "indexRouting--"; + public static final String INDEX_ROUTING_FILE = "index_routing"; + private IndexRoutingTable indexRoutingTable; + private final Index index; + private long term; + private long version; + private BlobPathParameters blobPathParameters; + public static final ChecksumWritableBlobStoreFormat<IndexRoutingTable> INDEX_ROUTING_TABLE_FORMAT = + new ChecksumWritableBlobStoreFormat<>("index-routing-table", IndexRoutingTable::readFrom); - public RemoteIndexRoutingTable(IndexRoutingTable indexRoutingTable) { + public RemoteIndexRoutingTable( + IndexRoutingTable indexRoutingTable, + String clusterUUID, + Compressor compressor, + long term, + long version + ) { + super(clusterUUID, compressor); + this.index = indexRoutingTable.getIndex(); this.indexRoutingTable = indexRoutingTable; + this.term = term; + this.version = version; } /** * Reads data from inputStream and creates RemoteIndexRoutingTable object with the {@link IndexRoutingTable} - * @param inputStream input stream with index routing data - * @param index index for the current routing data - * @throws IOException exception thrown on failing to read from stream.
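+ * Note: an entity created through this read path carries no term or version (both default to -1); the blob name alone identifies the remote object.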
+ * @param blobName name of the blob, which contains the index routing data + * @param clusterUUID UUID of the cluster + * @param compressor Compressor object */ - public RemoteIndexRoutingTable(InputStream inputStream, Index index) throws IOException { - try { - try (BufferedChecksumStreamInput in = new BufferedChecksumStreamInput(new InputStreamStreamInput(inputStream), "assertion")) { - // Read the Table Header first and confirm the index - IndexRoutingTableHeader indexRoutingTableHeader = new IndexRoutingTableHeader(in); - assert indexRoutingTableHeader.getIndexName().equals(index.getName()); + public RemoteIndexRoutingTable(String blobName, String clusterUUID, Compressor compressor) { + super(clusterUUID, compressor); + this.index = null; + this.term = -1; + this.version = -1; + this.blobName = blobName; + } - int numberOfShardRouting = in.readVInt(); - IndexRoutingTable.Builder indicesRoutingTable = IndexRoutingTable.builder(index); - for (int idx = 0; idx < numberOfShardRouting; idx++) { - IndexShardRoutingTable indexShardRoutingTable = IndexShardRoutingTable.Builder.readFrom(in); - indicesRoutingTable.addIndexShard(indexShardRoutingTable); - } - verifyCheckSum(in); - indexRoutingTable = indicesRoutingTable.build(); - } - } catch (EOFException e) { - throw new IOException("Indices Routing table is corrupted", e); + @Override + public BlobPathParameters getBlobPathParameters() { + if (blobPathParameters == null) { + blobPathParameters = new BlobPathParameters(List.of(indexRoutingTable.getIndex().getUUID()), INDEX_ROUTING_FILE); } + return blobPathParameters; } - public IndexRoutingTable getIndexRoutingTable() { - return indexRoutingTable; + @Override + public String getType() { + return INDEX_ROUTING_TABLE; } - /** - * Writes {@link IndexRoutingTable} to the given stream - * @param streamOutput output stream to write - * @throws IOException exception thrown on failing to write to stream. 
- */ @Override - public void writeTo(StreamOutput streamOutput) throws IOException { - try { - BufferedChecksumStreamOutput out = new BufferedChecksumStreamOutput(streamOutput); - IndexRoutingTableHeader indexRoutingTableHeader = new IndexRoutingTableHeader(indexRoutingTable.getIndex().getName()); - indexRoutingTableHeader.writeTo(out); - out.writeVInt(indexRoutingTable.shards().size()); - for (IndexShardRoutingTable next : indexRoutingTable) { - IndexShardRoutingTable.Builder.writeTo(next, out); - } - out.writeLong(out.getChecksum()); - out.flush(); - } catch (IOException e) { - throw new IOException("Failed to write IndexRoutingTable to stream", e); + public String generateBlobFileName() { + if (blobFileName == null) { + blobFileName = String.join( + DELIMITER, + getBlobPathParameters().getFilePrefix(), + RemoteStoreUtils.invertLong(term), + RemoteStoreUtils.invertLong(version), + RemoteStoreUtils.invertLong(System.currentTimeMillis()) + ); } + return blobFileName; } - private void verifyCheckSum(BufferedChecksumStreamInput in) throws IOException { - long expectedChecksum = in.getChecksum(); - long readChecksum = in.readLong(); - if (readChecksum != expectedChecksum) { - throw new IOException( - "checksum verification failed - expected: 0x" - + Long.toHexString(expectedChecksum) - + ", got: 0x" - + Long.toHexString(readChecksum) - ); - } + @Override + public ClusterMetadataManifest.UploadedMetadata getUploadedMetadata() { + assert blobName != null; + assert index != null; + return new ClusterMetadataManifest.UploadedIndexMetadata(index.getName(), index.getUUID(), blobName, INDEX_ROUTING_METADATA_PREFIX); + } + + @Override + public InputStream serialize() throws IOException { + return INDEX_ROUTING_TABLE_FORMAT.serialize(indexRoutingTable, generateBlobFileName(), getCompressor()).streamInput(); + } + + @Override + public IndexRoutingTable deserialize(InputStream in) throws IOException { + return INDEX_ROUTING_TABLE_FORMAT.deserialize(blobName, Streams.readFully(in)); } } diff --git a/server/src/test/java/org/opensearch/cluster/routing/remote/RemoteRoutingTableServiceFactoryTests.java b/server/src/test/java/org/opensearch/cluster/routing/remote/RemoteRoutingTableServiceFactoryTests.java index 39294ee8da41e..86f4b9502d6ab 100644 --- a/server/src/test/java/org/opensearch/cluster/routing/remote/RemoteRoutingTableServiceFactoryTests.java +++ b/server/src/test/java/org/opensearch/cluster/routing/remote/RemoteRoutingTableServiceFactoryTests.java @@ -40,7 +40,8 @@ public void testGetServiceWhenRemoteRoutingDisabled() { repositoriesService, settings, new ClusterSettings(settings, ClusterSettings.BUILT_IN_CLUSTER_SETTINGS), - threadPool + threadPool, + "test-cluster" ); assertTrue(service instanceof NoopRemoteRoutingTableService); } @@ -56,7 +57,8 @@ public void testGetServiceWhenRemoteRoutingEnabled() { repositoriesService, settings, new ClusterSettings(settings, ClusterSettings.BUILT_IN_CLUSTER_SETTINGS), - threadPool + threadPool, + "test-cluster" ); assertTrue(service instanceof InternalRemoteRoutingTableService); } diff --git a/server/src/test/java/org/opensearch/cluster/routing/remote/RemoteRoutingTableServiceTests.java b/server/src/test/java/org/opensearch/cluster/routing/remote/RemoteRoutingTableServiceTests.java index 839ebe1ff8301..564c7f7aed304 100644 --- a/server/src/test/java/org/opensearch/cluster/routing/remote/RemoteRoutingTableServiceTests.java +++ b/server/src/test/java/org/opensearch/cluster/routing/remote/RemoteRoutingTableServiceTests.java @@ -19,27 +19,25 @@ import 
org.opensearch.cluster.routing.IndexRoutingTable; import org.opensearch.cluster.routing.RoutingTable; import org.opensearch.cluster.service.ClusterService; -import org.opensearch.common.CheckedRunnable; -import org.opensearch.common.blobstore.AsyncMultiStreamBlobContainer; import org.opensearch.common.blobstore.BlobContainer; import org.opensearch.common.blobstore.BlobPath; import org.opensearch.common.blobstore.BlobStore; -import org.opensearch.common.blobstore.stream.write.WriteContext; import org.opensearch.common.blobstore.stream.write.WritePriority; import org.opensearch.common.compress.DeflateCompressor; -import org.opensearch.common.io.stream.BytesStreamOutput; import org.opensearch.common.settings.ClusterSettings; import org.opensearch.common.settings.Settings; import org.opensearch.common.util.FeatureFlags; +import org.opensearch.common.util.TestCapturingListener; import org.opensearch.core.action.ActionListener; -import org.opensearch.core.common.io.stream.StreamInput; +import org.opensearch.core.compress.Compressor; +import org.opensearch.core.compress.NoneCompressor; import org.opensearch.core.index.Index; import org.opensearch.gateway.remote.ClusterMetadataManifest; -import org.opensearch.gateway.remote.RemoteStateTransferException; -import org.opensearch.gateway.remote.routingtable.RemoteIndexRoutingTable; +import org.opensearch.gateway.remote.RemoteClusterStateUtils; import org.opensearch.index.remote.RemoteStoreEnums; import org.opensearch.index.remote.RemoteStorePathStrategy; import org.opensearch.index.remote.RemoteStoreUtils; +import org.opensearch.index.translog.transfer.BlobStoreTransferService; import org.opensearch.repositories.FilterRepository; import org.opensearch.repositories.RepositoriesService; import org.opensearch.repositories.RepositoryMissingException; @@ -51,33 +49,37 @@ import org.junit.Before; import java.io.IOException; +import java.io.InputStream; import java.util.ArrayList; import java.util.Arrays; import java.util.List; import java.util.Locale; import java.util.Map; -import java.util.concurrent.ConcurrentHashMap; +import java.util.concurrent.CountDownLatch; import java.util.function.Supplier; -import org.mockito.ArgumentCaptor; import org.mockito.Mockito; -import static org.opensearch.cluster.routing.remote.InternalRemoteRoutingTableService.INDEX_ROUTING_FILE_PREFIX; -import static org.opensearch.cluster.routing.remote.InternalRemoteRoutingTableService.INDEX_ROUTING_PATH_TOKEN; import static org.opensearch.common.util.FeatureFlags.REMOTE_PUBLICATION_EXPERIMENTAL; import static org.opensearch.gateway.remote.ClusterMetadataManifestTests.randomUploadedIndexMetadataList; +import static org.opensearch.gateway.remote.RemoteClusterStateUtils.CLUSTER_STATE_PATH_TOKEN; import static org.opensearch.gateway.remote.RemoteClusterStateUtils.DELIMITER; +import static org.opensearch.gateway.remote.RemoteClusterStateUtils.PATH_DELIMITER; +import static org.opensearch.gateway.remote.routingtable.RemoteIndexRoutingTable.INDEX_ROUTING_FILE; +import static org.opensearch.gateway.remote.routingtable.RemoteIndexRoutingTable.INDEX_ROUTING_METADATA_PREFIX; +import static org.opensearch.gateway.remote.routingtable.RemoteIndexRoutingTable.INDEX_ROUTING_TABLE; +import static org.opensearch.gateway.remote.routingtable.RemoteIndexRoutingTable.INDEX_ROUTING_TABLE_FORMAT; +import static org.opensearch.index.remote.RemoteStoreEnums.PathHashAlgorithm.FNV_1A_BASE64; +import static org.opensearch.index.remote.RemoteStoreEnums.PathType.HASHED_PREFIX; import static 
org.opensearch.node.remotestore.RemoteStoreNodeAttribute.REMOTE_STORE_ROUTING_TABLE_REPOSITORY_NAME_ATTRIBUTE_KEY; -import static org.mockito.ArgumentMatchers.anyLong; +import static org.hamcrest.Matchers.lessThanOrEqualTo; import static org.mockito.ArgumentMatchers.anyString; import static org.mockito.ArgumentMatchers.eq; -import static org.mockito.ArgumentMatchers.startsWith; import static org.mockito.Mockito.any; import static org.mockito.Mockito.doAnswer; import static org.mockito.Mockito.doNothing; import static org.mockito.Mockito.doThrow; import static org.mockito.Mockito.mock; -import static org.mockito.Mockito.times; import static org.mockito.Mockito.verify; import static org.mockito.Mockito.when; @@ -92,6 +94,8 @@ public class RemoteRoutingTableServiceTests extends OpenSearchTestCase { private BlobPath basePath; private ClusterSettings clusterSettings; private ClusterService clusterService; + private Compressor compressor; + private BlobStoreTransferService blobStoreTransferService; private final ThreadPool threadPool = new TestThreadPool(getClass().getName()); @Before @@ -105,6 +109,7 @@ public void setup() { .build(); clusterSettings = new ClusterSettings(settings, ClusterSettings.BUILT_IN_CLUSTER_SETTINGS); clusterService = mock(ClusterService.class); + blobStoreTransferService = mock(BlobStoreTransferService.class); when(clusterService.getClusterSettings()).thenReturn(clusterSettings); blobStoreRepository = mock(BlobStoreRepository.class); when(blobStoreRepository.getCompressor()).thenReturn(new DeflateCompressor()); @@ -112,18 +117,20 @@ public void setup() { blobContainer = mock(BlobContainer.class); when(repositoriesService.repository("routing_repository")).thenReturn(blobStoreRepository); when(blobStoreRepository.blobStore()).thenReturn(blobStore); + when(blobStore.blobContainer(any())).thenReturn(blobContainer); Settings nodeSettings = Settings.builder().put(REMOTE_PUBLICATION_EXPERIMENTAL, "true").build(); FeatureFlags.initializeFeatureFlags(nodeSettings); - + compressor = new NoneCompressor(); basePath = BlobPath.cleanPath().add("base-path"); - + when(blobStoreRepository.basePath()).thenReturn(basePath); remoteRoutingTableService = new InternalRemoteRoutingTableService( repositoriesServiceSupplier, settings, new ClusterSettings(settings, ClusterSettings.BUILT_IN_CLUSTER_SETTINGS), - threadPool + threadPool, + "test-cluster" ); - + remoteRoutingTableService.doStart(); } @After @@ -141,7 +148,8 @@ public void testFailInitializationWhenRemoteRoutingDisabled() { repositoriesServiceSupplier, settings, new ClusterSettings(settings, ClusterSettings.BUILT_IN_CLUSTER_SETTINGS), - threadPool + threadPool, + "test-cluster" ) ); } @@ -347,136 +355,13 @@ public void testGetIndicesRoutingMapDiffIndexDeleted() { assertEquals(indexName, diff.getDeletes().get(0)); } - public void testGetIndexRoutingAsyncAction() throws IOException { - String indexName = randomAlphaOfLength(randomIntBetween(1, 50)); - ClusterState clusterState = createClusterState(indexName); - BlobPath expectedPath = getPath(); - - LatchedActionListener listener = mock(LatchedActionListener.class); - when(blobStore.blobContainer(expectedPath)).thenReturn(blobContainer); - - remoteRoutingTableService.start(); - CheckedRunnable runnable = remoteRoutingTableService.getIndexRoutingAsyncAction( - clusterState, - clusterState.routingTable().getIndicesRouting().get(indexName), - listener, - basePath - ); - assertNotNull(runnable); - runnable.run(); - - String expectedFilePrefix = String.join( - DELIMITER, - 
INDEX_ROUTING_FILE_PREFIX, - RemoteStoreUtils.invertLong(clusterState.term()), - RemoteStoreUtils.invertLong(clusterState.version()) - ); - verify(blobContainer, times(1)).writeBlob(startsWith(expectedFilePrefix), any(StreamInput.class), anyLong(), eq(true)); - verify(listener, times(1)).onResponse(any(ClusterMetadataManifest.UploadedMetadata.class)); - } - - public void testGetIndexRoutingAsyncActionFailureInBlobRepo() throws IOException { - String indexName = randomAlphaOfLength(randomIntBetween(1, 50)); - ClusterState clusterState = createClusterState(indexName); - BlobPath expectedPath = getPath(); - - LatchedActionListener listener = mock(LatchedActionListener.class); - when(blobStore.blobContainer(expectedPath)).thenReturn(blobContainer); - doThrow(new IOException("testing failure")).when(blobContainer).writeBlob(anyString(), any(StreamInput.class), anyLong(), eq(true)); - - remoteRoutingTableService.start(); - CheckedRunnable runnable = remoteRoutingTableService.getIndexRoutingAsyncAction( - clusterState, - clusterState.routingTable().getIndicesRouting().get(indexName), - listener, - basePath - ); - assertNotNull(runnable); - runnable.run(); - String expectedFilePrefix = String.join( - DELIMITER, - INDEX_ROUTING_FILE_PREFIX, - RemoteStoreUtils.invertLong(clusterState.term()), - RemoteStoreUtils.invertLong(clusterState.version()) - ); - verify(blobContainer, times(1)).writeBlob(startsWith(expectedFilePrefix), any(StreamInput.class), anyLong(), eq(true)); - verify(listener, times(1)).onFailure(any(RemoteStateTransferException.class)); - } - - public void testGetIndexRoutingAsyncActionAsyncRepo() throws IOException { - String indexName = randomAlphaOfLength(randomIntBetween(1, 50)); - ClusterState clusterState = createClusterState(indexName); - BlobPath expectedPath = getPath(); - - LatchedActionListener listener = mock(LatchedActionListener.class); - blobContainer = mock(AsyncMultiStreamBlobContainer.class); - when(blobStore.blobContainer(expectedPath)).thenReturn(blobContainer); - ArgumentCaptor> actionListenerArgumentCaptor = ArgumentCaptor.forClass(ActionListener.class); - ArgumentCaptor writeContextArgumentCaptor = ArgumentCaptor.forClass(WriteContext.class); - ConcurrentHashMap capturedWriteContext = new ConcurrentHashMap<>(); - - doAnswer((i) -> { - actionListenerArgumentCaptor.getValue().onResponse(null); - WriteContext writeContext = writeContextArgumentCaptor.getValue(); - capturedWriteContext.put(writeContext.getFileName().split(DELIMITER)[0], writeContextArgumentCaptor.getValue()); - return null; - }).when((AsyncMultiStreamBlobContainer) blobContainer) - .asyncBlobUpload(writeContextArgumentCaptor.capture(), actionListenerArgumentCaptor.capture()); - - remoteRoutingTableService.start(); - CheckedRunnable runnable = remoteRoutingTableService.getIndexRoutingAsyncAction( - clusterState, - clusterState.routingTable().getIndicesRouting().get(indexName), - listener, - basePath - ); - assertNotNull(runnable); - runnable.run(); - - String expectedFilePrefix = String.join( - DELIMITER, - INDEX_ROUTING_FILE_PREFIX, - RemoteStoreUtils.invertLong(clusterState.term()), - RemoteStoreUtils.invertLong(clusterState.version()) - ); - assertEquals(1, actionListenerArgumentCaptor.getAllValues().size()); - assertEquals(1, writeContextArgumentCaptor.getAllValues().size()); - assertNotNull(capturedWriteContext.get("index_routing")); - assertEquals(capturedWriteContext.get("index_routing").getWritePriority(), WritePriority.URGENT); - 
assertTrue(capturedWriteContext.get("index_routing").getFileName().startsWith(expectedFilePrefix)); - } - - public void testGetIndexRoutingAsyncActionAsyncRepoFailureInRepo() throws IOException { - String indexName = randomAlphaOfLength(randomIntBetween(1, 50)); - ClusterState clusterState = createClusterState(indexName); - BlobPath expectedPath = getPath(); - - LatchedActionListener listener = mock(LatchedActionListener.class); - blobContainer = mock(AsyncMultiStreamBlobContainer.class); - when(blobStore.blobContainer(expectedPath)).thenReturn(blobContainer); - - doThrow(new IOException("Testing failure")).when((AsyncMultiStreamBlobContainer) blobContainer) - .asyncBlobUpload(any(WriteContext.class), any(ActionListener.class)); - - remoteRoutingTableService.start(); - CheckedRunnable runnable = remoteRoutingTableService.getIndexRoutingAsyncAction( - clusterState, - clusterState.routingTable().getIndicesRouting().get(indexName), - listener, - basePath - ); - assertNotNull(runnable); - runnable.run(); - verify(listener, times(1)).onFailure(any(RemoteStateTransferException.class)); - } - public void testGetAllUploadedIndicesRouting() { final ClusterMetadataManifest previousManifest = ClusterMetadataManifest.builder().build(); final ClusterMetadataManifest.UploadedIndexMetadata uploadedIndexMetadata = new ClusterMetadataManifest.UploadedIndexMetadata( "test-index", "index-uuid", "index-filename", - InternalRemoteRoutingTableService.INDEX_ROUTING_METADATA_PREFIX + INDEX_ROUTING_METADATA_PREFIX ); List allIndiceRoutingMetadata = remoteRoutingTableService @@ -491,7 +376,7 @@ public void testGetAllUploadedIndicesRoutingExistingIndexInManifest() { "test-index", "index-uuid", "index-filename", - InternalRemoteRoutingTableService.INDEX_ROUTING_METADATA_PREFIX + INDEX_ROUTING_METADATA_PREFIX ); final ClusterMetadataManifest previousManifest = ClusterMetadataManifest.builder() .indicesRouting(List.of(uploadedIndexMetadata)) @@ -509,7 +394,7 @@ public void testGetAllUploadedIndicesRoutingNewIndexFromManifest() { "test-index", "index-uuid", "index-filename", - InternalRemoteRoutingTableService.INDEX_ROUTING_METADATA_PREFIX + INDEX_ROUTING_METADATA_PREFIX ); final ClusterMetadataManifest previousManifest = ClusterMetadataManifest.builder() .indicesRouting(List.of(uploadedIndexMetadata)) @@ -518,7 +403,7 @@ public void testGetAllUploadedIndicesRoutingNewIndexFromManifest() { "test-index2", "index-uuid", "index-filename", - InternalRemoteRoutingTableService.INDEX_ROUTING_METADATA_PREFIX + INDEX_ROUTING_METADATA_PREFIX ); List allIndiceRoutingMetadata = remoteRoutingTableService @@ -534,13 +419,13 @@ public void testGetAllUploadedIndicesRoutingIndexDeleted() { "test-index", "index-uuid", "index-filename", - InternalRemoteRoutingTableService.INDEX_ROUTING_METADATA_PREFIX + INDEX_ROUTING_METADATA_PREFIX ); final ClusterMetadataManifest.UploadedIndexMetadata uploadedIndexMetadata2 = new ClusterMetadataManifest.UploadedIndexMetadata( "test-index2", "index-uuid", "index-filename", - InternalRemoteRoutingTableService.INDEX_ROUTING_METADATA_PREFIX + INDEX_ROUTING_METADATA_PREFIX ); final ClusterMetadataManifest previousManifest = ClusterMetadataManifest.builder() .indicesRouting(List.of(uploadedIndexMetadata, uploadedIndexMetadata2)) @@ -558,13 +443,13 @@ public void testGetAllUploadedIndicesRoutingNoChange() { "test-index", "index-uuid", "index-filename", - InternalRemoteRoutingTableService.INDEX_ROUTING_METADATA_PREFIX + INDEX_ROUTING_METADATA_PREFIX ); final ClusterMetadataManifest.UploadedIndexMetadata 
uploadedIndexMetadata2 = new ClusterMetadataManifest.UploadedIndexMetadata( "test-index2", "index-uuid", "index-filename", - InternalRemoteRoutingTableService.INDEX_ROUTING_METADATA_PREFIX + INDEX_ROUTING_METADATA_PREFIX ); final ClusterMetadataManifest previousManifest = ClusterMetadataManifest.builder() .indicesRouting(List.of(uploadedIndexMetadata, uploadedIndexMetadata2)) @@ -640,69 +525,83 @@ public void testIndicesRoutingDiffWhenIndexDeletedAndAdded() { ); } - public void testGetAsyncIndexMetadataReadAction() throws Exception { + public void testGetAsyncIndexRoutingReadAction() throws Exception { String indexName = randomAlphaOfLength(randomIntBetween(1, 50)); ClusterState clusterState = createClusterState(indexName); String uploadedFileName = String.format(Locale.ROOT, "index-routing/" + indexName); - Index index = new Index(indexName, "uuid-01"); - - LatchedActionListener listener = mock(LatchedActionListener.class); - when(blobStore.blobContainer(any())).thenReturn(blobContainer); - BytesStreamOutput streamOutput = new BytesStreamOutput(); - RemoteIndexRoutingTable remoteIndexRoutingTable = new RemoteIndexRoutingTable( - clusterState.routingTable().getIndicesRouting().get(indexName) + when(blobContainer.readBlob(indexName)).thenReturn( + INDEX_ROUTING_TABLE_FORMAT.serialize( + clusterState.getRoutingTable().getIndicesRouting().get(indexName), + uploadedFileName, + compressor + ).streamInput() ); - remoteIndexRoutingTable.writeTo(streamOutput); - when(blobContainer.readBlob(indexName)).thenReturn(streamOutput.bytes().streamInput()); - remoteRoutingTableService.start(); + TestCapturingListener listener = new TestCapturingListener<>(); + CountDownLatch latch = new CountDownLatch(1); - CheckedRunnable runnable = remoteRoutingTableService.getAsyncIndexRoutingReadAction(uploadedFileName, index, listener); - assertNotNull(runnable); - runnable.run(); + remoteRoutingTableService.getAsyncIndexRoutingReadAction( + "cluster-uuid", + uploadedFileName, + new LatchedActionListener<>(listener, latch) + ).run(); + latch.await(); - assertBusy(() -> verify(blobContainer, times(1)).readBlob(any())); - assertBusy(() -> verify(listener, times(1)).onResponse(any(IndexRoutingTable.class))); + assertNull(listener.getFailure()); + assertNotNull(listener.getResult()); + IndexRoutingTable indexRoutingTable = listener.getResult(); + assertEquals(clusterState.getRoutingTable().getIndicesRouting().get(indexName), indexRoutingTable); } - public void testGetAsyncIndexMetadataReadActionFailureForIncorrectIndex() throws Exception { + public void testGetAsyncIndexRoutingWriteAction() throws Exception { String indexName = randomAlphaOfLength(randomIntBetween(1, 50)); ClusterState clusterState = createClusterState(indexName); - String uploadedFileName = String.format(Locale.ROOT, "index-routing/" + indexName); - Index index = new Index("incorrect-index", "uuid-01"); - - LatchedActionListener listener = mock(LatchedActionListener.class); - when(blobStore.blobContainer(any())).thenReturn(blobContainer); - BytesStreamOutput streamOutput = new BytesStreamOutput(); - RemoteIndexRoutingTable remoteIndexRoutingTable = new RemoteIndexRoutingTable( - clusterState.routingTable().getIndicesRouting().get(indexName) + Iterable remotePath = HASHED_PREFIX.path( + RemoteStorePathStrategy.PathInput.builder() + .basePath( + new BlobPath().add("base-path") + .add(RemoteClusterStateUtils.encodeString(ClusterName.DEFAULT.toString())) + .add(CLUSTER_STATE_PATH_TOKEN) + .add(clusterState.metadata().clusterUUID()) + 
.add(INDEX_ROUTING_TABLE) + ) + .indexUUID(clusterState.getRoutingTable().indicesRouting().get(indexName).getIndex().getUUID()) + .build(), + FNV_1A_BASE64 ); - remoteIndexRoutingTable.writeTo(streamOutput); - when(blobContainer.readBlob(anyString())).thenReturn(streamOutput.bytes().streamInput()); - remoteRoutingTableService.doStart(); - - CheckedRunnable runnable = remoteRoutingTableService.getAsyncIndexRoutingReadAction(uploadedFileName, index, listener); - assertNotNull(runnable); - runnable.run(); - - assertBusy(() -> verify(blobContainer, times(1)).readBlob(any())); - assertBusy(() -> verify(listener, times(1)).onFailure(any(Exception.class))); - } - - public void testGetAsyncIndexMetadataReadActionFailureInBlobRepo() throws Exception { - String indexName = randomAlphaOfLength(randomIntBetween(1, 50)); - String uploadedFileName = String.format(Locale.ROOT, "index-routing/" + indexName); - Index index = new Index(indexName, "uuid-01"); - - LatchedActionListener listener = mock(LatchedActionListener.class); - when(blobStore.blobContainer(any())).thenReturn(blobContainer); - doThrow(new IOException("testing failure")).when(blobContainer).readBlob(indexName); - remoteRoutingTableService.doStart(); - - CheckedRunnable runnable = remoteRoutingTableService.getAsyncIndexRoutingReadAction(uploadedFileName, index, listener); - assertNotNull(runnable); - runnable.run(); - assertBusy(() -> verify(listener, times(1)).onFailure(any(RemoteStateTransferException.class))); + doAnswer(invocationOnMock -> { + invocationOnMock.getArgument(4, ActionListener.class).onResponse(null); + return null; + }).when(blobStoreTransferService) + .uploadBlob(any(InputStream.class), eq(remotePath), anyString(), eq(WritePriority.URGENT), any(ActionListener.class)); + + TestCapturingListener listener = new TestCapturingListener<>(); + CountDownLatch latch = new CountDownLatch(1); + + remoteRoutingTableService.getAsyncIndexRoutingWriteAction( + clusterState.metadata().clusterUUID(), + clusterState.term(), + clusterState.version(), + clusterState.getRoutingTable().indicesRouting().get(indexName), + new LatchedActionListener<>(listener, latch) + ).run(); + latch.await(); + assertNull(listener.getFailure()); + assertNotNull(listener.getResult()); + ClusterMetadataManifest.UploadedMetadata uploadedMetadata = listener.getResult(); + + assertEquals(INDEX_ROUTING_METADATA_PREFIX + indexName, uploadedMetadata.getComponent()); + String uploadedFileName = uploadedMetadata.getUploadedFilename(); + String[] pathTokens = uploadedFileName.split(PATH_DELIMITER); + assertEquals(8, pathTokens.length); + assertEquals(pathTokens[1], "base-path"); + String[] fileNameTokens = pathTokens[7].split(DELIMITER); + + assertEquals(4, fileNameTokens.length); + assertEquals(fileNameTokens[0], INDEX_ROUTING_FILE); + assertEquals(fileNameTokens[1], RemoteStoreUtils.invertLong(1L)); + assertEquals(fileNameTokens[2], RemoteStoreUtils.invertLong(2L)); + assertThat(RemoteStoreUtils.invertLong(fileNameTokens[3]), lessThanOrEqualTo(System.currentTimeMillis())); } public void testGetUpdatedIndexRoutingTableMetadataWhenNoChange() { @@ -758,7 +657,7 @@ private ClusterState createClusterState(String indexName) { } private BlobPath getPath() { - BlobPath indexRoutingPath = basePath.add(INDEX_ROUTING_PATH_TOKEN); + BlobPath indexRoutingPath = basePath.add(INDEX_ROUTING_TABLE); return RemoteStoreEnums.PathType.HASHED_PREFIX.path( RemoteStorePathStrategy.PathInput.builder().basePath(indexRoutingPath).indexUUID("uuid").build(), 
RemoteStoreEnums.PathHashAlgorithm.FNV_1A_BASE64 diff --git a/server/src/test/java/org/opensearch/gateway/remote/ClusterMetadataManifestTests.java b/server/src/test/java/org/opensearch/gateway/remote/ClusterMetadataManifestTests.java index 152a6dba6c032..256161af1a3e2 100644 --- a/server/src/test/java/org/opensearch/gateway/remote/ClusterMetadataManifestTests.java +++ b/server/src/test/java/org/opensearch/gateway/remote/ClusterMetadataManifestTests.java @@ -13,7 +13,6 @@ import org.opensearch.cluster.metadata.IndexGraveyard; import org.opensearch.cluster.metadata.RepositoriesMetadata; import org.opensearch.cluster.metadata.WeightedRoutingMetadata; -import org.opensearch.cluster.routing.remote.InternalRemoteRoutingTableService; import org.opensearch.common.xcontent.json.JsonXContent; import org.opensearch.core.common.bytes.BytesReference; import org.opensearch.core.common.io.stream.NamedWriteableRegistry; @@ -44,6 +43,7 @@ import static org.opensearch.gateway.remote.model.RemotePersistentSettingsMetadata.SETTING_METADATA; import static org.opensearch.gateway.remote.model.RemoteTemplatesMetadata.TEMPLATES_METADATA; import static org.opensearch.gateway.remote.model.RemoteTransientSettingsMetadata.TRANSIENT_SETTING_METADATA; +import static org.opensearch.gateway.remote.routingtable.RemoteIndexRoutingTable.INDEX_ROUTING_METADATA_PREFIX; public class ClusterMetadataManifestTests extends OpenSearchTestCase { @@ -545,7 +545,7 @@ public void testClusterMetadataManifestXContentV2WithoutEphemeral() throws IOExc "test-index", "test-uuid", "routing-path", - InternalRemoteRoutingTableService.INDEX_ROUTING_METADATA_PREFIX + INDEX_ROUTING_METADATA_PREFIX ); ClusterMetadataManifest originalManifest = ClusterMetadataManifest.builder() .clusterTerm(1L) diff --git a/server/src/test/java/org/opensearch/gateway/remote/RemoteClusterStateServiceTests.java b/server/src/test/java/org/opensearch/gateway/remote/RemoteClusterStateServiceTests.java index 6cd9cbbf13848..ebd3488d06007 100644 --- a/server/src/test/java/org/opensearch/gateway/remote/RemoteClusterStateServiceTests.java +++ b/server/src/test/java/org/opensearch/gateway/remote/RemoteClusterStateServiceTests.java @@ -117,6 +117,7 @@ import static org.opensearch.gateway.remote.RemoteClusterStateTestUtils.TestClusterStateCustom1; import static org.opensearch.gateway.remote.RemoteClusterStateTestUtils.TestClusterStateCustom2; import static org.opensearch.gateway.remote.RemoteClusterStateTestUtils.TestClusterStateCustom3; +import static org.opensearch.gateway.remote.RemoteClusterStateUtils.CUSTOM_DELIMITER; import static org.opensearch.gateway.remote.RemoteClusterStateUtils.DELIMITER; import static org.opensearch.gateway.remote.RemoteClusterStateUtils.FORMAT_PARAMS; import static org.opensearch.gateway.remote.RemoteClusterStateUtils.getFormattedIndexFileName; @@ -126,7 +127,6 @@ import static org.opensearch.gateway.remote.model.RemoteClusterStateCustoms.CLUSTER_STATE_CUSTOM; import static org.opensearch.gateway.remote.model.RemoteCoordinationMetadata.COORDINATION_METADATA; import static org.opensearch.gateway.remote.model.RemoteCoordinationMetadata.COORDINATION_METADATA_FORMAT; -import static org.opensearch.gateway.remote.model.RemoteCustomMetadata.CUSTOM_DELIMITER; import static org.opensearch.gateway.remote.model.RemoteCustomMetadata.CUSTOM_METADATA; import static org.opensearch.gateway.remote.model.RemoteCustomMetadata.readFrom; import static org.opensearch.gateway.remote.model.RemoteDiscoveryNodes.DISCOVERY_NODES; @@ -144,6 +144,7 @@ import static 
org.opensearch.gateway.remote.model.RemoteTemplatesMetadata.TEMPLATES_METADATA_FORMAT; import static org.opensearch.gateway.remote.model.RemoteTemplatesMetadataTests.getTemplatesMetadata; import static org.opensearch.gateway.remote.model.RemoteTransientSettingsMetadata.TRANSIENT_SETTING_METADATA; +import static org.opensearch.gateway.remote.routingtable.RemoteIndexRoutingTable.INDEX_ROUTING_METADATA_PREFIX; import static org.opensearch.node.remotestore.RemoteStoreNodeAttribute.REMOTE_STORE_CLUSTER_STATE_REPOSITORY_NAME_ATTRIBUTE_KEY; import static org.opensearch.node.remotestore.RemoteStoreNodeAttribute.REMOTE_STORE_REPOSITORY_SETTINGS_ATTRIBUTE_KEY_PREFIX; import static org.opensearch.node.remotestore.RemoteStoreNodeAttribute.REMOTE_STORE_REPOSITORY_TYPE_ATTRIBUTE_KEY_FORMAT; @@ -2603,7 +2604,7 @@ public void testWriteFullMetadataSuccessWithRoutingTable() throws IOException { "test-index", "index-uuid", "routing-filename", - InternalRemoteRoutingTableService.INDEX_ROUTING_METADATA_PREFIX + INDEX_ROUTING_METADATA_PREFIX ); final ClusterMetadataManifest expectedManifest = ClusterMetadataManifest.builder() .indices(List.of(uploadedIndexMetadata)) @@ -2654,7 +2655,7 @@ public void testWriteFullMetadataInParallelSuccessWithRoutingTable() throws IOEx "test-index", "index-uuid", "routing-filename", - InternalRemoteRoutingTableService.INDEX_ROUTING_METADATA_PREFIX + INDEX_ROUTING_METADATA_PREFIX ); final ClusterMetadataManifest expectedManifest = ClusterMetadataManifest.builder() @@ -2710,7 +2711,7 @@ public void testWriteIncrementalMetadataSuccessWithRoutingTable() throws IOExcep "test-index", "index-uuid", "routing-filename", - InternalRemoteRoutingTableService.INDEX_ROUTING_METADATA_PREFIX + INDEX_ROUTING_METADATA_PREFIX ); final ClusterMetadataManifest expectedManifest = ClusterMetadataManifest.builder() .indices(List.of(uploadedIndexMetadata)) diff --git a/server/src/test/java/org/opensearch/gateway/remote/model/RemoteRoutingTableBlobStoreTests.java b/server/src/test/java/org/opensearch/gateway/remote/model/RemoteRoutingTableBlobStoreTests.java new file mode 100644 index 0000000000000..ea0f3264cfe4f --- /dev/null +++ b/server/src/test/java/org/opensearch/gateway/remote/model/RemoteRoutingTableBlobStoreTests.java @@ -0,0 +1,133 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. 
+ */ + +package org.opensearch.gateway.remote.model; + +import org.opensearch.Version; +import org.opensearch.cluster.metadata.IndexMetadata; +import org.opensearch.cluster.routing.IndexRoutingTable; +import org.opensearch.common.blobstore.BlobPath; +import org.opensearch.common.compress.DeflateCompressor; +import org.opensearch.common.settings.ClusterSettings; +import org.opensearch.common.settings.Settings; +import org.opensearch.core.index.Index; +import org.opensearch.gateway.remote.RemoteClusterStateUtils; +import org.opensearch.gateway.remote.routingtable.RemoteIndexRoutingTable; +import org.opensearch.index.remote.RemoteStoreEnums; +import org.opensearch.index.remote.RemoteStorePathStrategy; +import org.opensearch.index.translog.transfer.BlobStoreTransferService; +import org.opensearch.repositories.blobstore.BlobStoreRepository; +import org.opensearch.test.OpenSearchTestCase; +import org.opensearch.threadpool.TestThreadPool; +import org.opensearch.threadpool.ThreadPool; +import org.junit.After; +import org.junit.Before; + +import static org.opensearch.gateway.remote.RemoteClusterStateUtils.CLUSTER_STATE_PATH_TOKEN; +import static org.opensearch.gateway.remote.routingtable.RemoteIndexRoutingTable.INDEX_ROUTING_TABLE; +import static org.opensearch.index.remote.RemoteStoreEnums.PathHashAlgorithm.FNV_1A_BASE64; +import static org.opensearch.index.remote.RemoteStoreEnums.PathType.HASHED_PREFIX; +import static org.mockito.Mockito.mock; +import static org.mockito.Mockito.when; + +public class RemoteRoutingTableBlobStoreTests extends OpenSearchTestCase { + + private RemoteRoutingTableBlobStore remoteIndexRoutingTableStore; + ClusterSettings clusterSettings; + ThreadPool threadPool; + + @Before + public void setup() { + clusterSettings = new ClusterSettings(Settings.EMPTY, ClusterSettings.BUILT_IN_CLUSTER_SETTINGS); + BlobStoreTransferService blobStoreTransferService = mock(BlobStoreTransferService.class); + BlobStoreRepository blobStoreRepository = mock(BlobStoreRepository.class); + BlobPath blobPath = new BlobPath().add("base-path"); + when(blobStoreRepository.basePath()).thenReturn(blobPath); + + threadPool = new TestThreadPool(getClass().getName()); + this.remoteIndexRoutingTableStore = new RemoteRoutingTableBlobStore<>( + blobStoreTransferService, + blobStoreRepository, + "test-cluster", + threadPool, + ThreadPool.Names.REMOTE_STATE_READ, + clusterSettings + ); + } + + @After + public void tearDown() throws Exception { + super.tearDown(); + threadPool.shutdown(); + + } + + public void testRemoteRoutingTablePathTypeSetting() { + // Assert the default is HASHED_PREFIX + assertEquals(HASHED_PREFIX.toString(), remoteIndexRoutingTableStore.getPathTypeSetting().toString()); + + Settings newSettings = Settings.builder() + .put("cluster.remote_store.routing_table.path_type", RemoteStoreEnums.PathType.FIXED.toString()) + .build(); + clusterSettings.applySettings(newSettings); + assertEquals(RemoteStoreEnums.PathType.FIXED.toString(), remoteIndexRoutingTableStore.getPathTypeSetting().toString()); + } + + public void testRemoteRoutingTableHashAlgoSetting() { + // Assert the default is FNV_1A_BASE64 + assertEquals(FNV_1A_BASE64.toString(), remoteIndexRoutingTableStore.getPathHashAlgoSetting().toString()); + + Settings newSettings = Settings.builder() + .put("cluster.remote_store.routing_table.path_hash_algo", RemoteStoreEnums.PathHashAlgorithm.FNV_1A_COMPOSITE_1.toString()) + .build(); + clusterSettings.applySettings(newSettings); + assertEquals( + 
RemoteStoreEnums.PathHashAlgorithm.FNV_1A_COMPOSITE_1.toString(), + remoteIndexRoutingTableStore.getPathHashAlgoSetting().toString() + ); + } + + public void testGetBlobPathForUpload() { + + Index index = new Index("test-idx", "index-uuid"); + Settings idxSettings = Settings.builder() + .put(IndexMetadata.SETTING_VERSION_CREATED, Version.CURRENT) + .put(IndexMetadata.SETTING_INDEX_UUID, index.getUUID()) + .build(); + + IndexMetadata indexMetadata = new IndexMetadata.Builder(index.getName()).settings(idxSettings) + .numberOfShards(1) + .numberOfReplicas(0) + .build(); + + IndexRoutingTable indexRoutingTable = new IndexRoutingTable.Builder(index).initializeAsNew(indexMetadata).build(); + + RemoteIndexRoutingTable remoteObjectForUpload = new RemoteIndexRoutingTable( + indexRoutingTable, + "cluster-uuid", + new DeflateCompressor(), + 2L, + 3L + ); + BlobPath blobPath = remoteIndexRoutingTableStore.getBlobPathForUpload(remoteObjectForUpload); + BlobPath expectedPath = HASHED_PREFIX.path( + RemoteStorePathStrategy.PathInput.builder() + .basePath( + new BlobPath().add("base-path") + .add(RemoteClusterStateUtils.encodeString("test-cluster")) + .add(CLUSTER_STATE_PATH_TOKEN) + .add("cluster-uuid") + .add(INDEX_ROUTING_TABLE) + ) + .indexUUID(index.getUUID()) + .build(), + FNV_1A_BASE64 + ); + assertEquals(expectedPath, blobPath); + } +} diff --git a/server/src/test/java/org/opensearch/gateway/remote/routingtable/IndexRoutingTableHeaderTests.java b/server/src/test/java/org/opensearch/gateway/remote/routingtable/IndexRoutingTableHeaderTests.java deleted file mode 100644 index a3f0ac36a40f1..0000000000000 --- a/server/src/test/java/org/opensearch/gateway/remote/routingtable/IndexRoutingTableHeaderTests.java +++ /dev/null @@ -1,32 +0,0 @@ -/* - * SPDX-License-Identifier: Apache-2.0 - * - * The OpenSearch Contributors require contributions made to - * this file be licensed under the Apache-2.0 license or a - * compatible open source license. 
- */ - -package org.opensearch.gateway.remote.routingtable; - -import org.opensearch.common.io.stream.BytesStreamOutput; -import org.opensearch.core.common.io.stream.BytesStreamInput; -import org.opensearch.test.OpenSearchTestCase; - -import java.io.IOException; - -public class IndexRoutingTableHeaderTests extends OpenSearchTestCase { - - public void testIndexRoutingTableHeader() throws IOException { - String indexName = randomAlphaOfLength(randomIntBetween(1, 50)); - IndexRoutingTableHeader header = new IndexRoutingTableHeader(indexName); - try (BytesStreamOutput out = new BytesStreamOutput()) { - header.writeTo(out); - - BytesStreamInput in = new BytesStreamInput(out.bytes().toBytesRef().bytes); - IndexRoutingTableHeader headerRead = new IndexRoutingTableHeader(in); - assertEquals(indexName, headerRead.getIndexName()); - - } - } - -} diff --git a/server/src/test/java/org/opensearch/gateway/remote/routingtable/RemoteIndexRoutingTableTests.java b/server/src/test/java/org/opensearch/gateway/remote/routingtable/RemoteIndexRoutingTableTests.java index 72066d8afb45b..29d4ffa978851 100644 --- a/server/src/test/java/org/opensearch/gateway/remote/routingtable/RemoteIndexRoutingTableTests.java +++ b/server/src/test/java/org/opensearch/gateway/remote/routingtable/RemoteIndexRoutingTableTests.java @@ -13,16 +13,136 @@ import org.opensearch.cluster.metadata.Metadata; import org.opensearch.cluster.routing.IndexRoutingTable; import org.opensearch.cluster.routing.RoutingTable; -import org.opensearch.cluster.routing.ShardRoutingState; -import org.opensearch.common.io.stream.BytesStreamOutput; +import org.opensearch.common.blobstore.BlobPath; +import org.opensearch.common.compress.DeflateCompressor; +import org.opensearch.common.remote.BlobPathParameters; +import org.opensearch.common.settings.ClusterSettings; +import org.opensearch.common.settings.Settings; +import org.opensearch.core.common.io.stream.NamedWriteableRegistry; +import org.opensearch.core.compress.Compressor; +import org.opensearch.core.compress.NoneCompressor; +import org.opensearch.gateway.remote.ClusterMetadataManifest; +import org.opensearch.gateway.remote.RemoteClusterStateUtils; +import org.opensearch.index.remote.RemoteStoreUtils; +import org.opensearch.index.translog.transfer.BlobStoreTransferService; +import org.opensearch.repositories.blobstore.BlobStoreRepository; import org.opensearch.test.OpenSearchTestCase; +import org.opensearch.threadpool.TestThreadPool; +import org.opensearch.threadpool.ThreadPool; +import org.junit.After; +import org.junit.Before; import java.io.IOException; -import java.util.concurrent.atomic.AtomicInteger; +import java.io.InputStream; +import java.util.List; + +import static org.opensearch.gateway.remote.routingtable.RemoteIndexRoutingTable.INDEX_ROUTING_FILE; +import static org.opensearch.gateway.remote.routingtable.RemoteIndexRoutingTable.INDEX_ROUTING_METADATA_PREFIX; +import static org.hamcrest.Matchers.greaterThan; +import static org.hamcrest.Matchers.is; +import static org.hamcrest.Matchers.lessThanOrEqualTo; +import static org.hamcrest.Matchers.nullValue; +import static org.mockito.Mockito.mock; +import static org.mockito.Mockito.when; public class RemoteIndexRoutingTableTests extends OpenSearchTestCase { - public void testRoutingTableInput() { + private static final String TEST_BLOB_NAME = "/test-path/test-blob-name"; + private static final String TEST_BLOB_PATH = "test-path"; + private static final String TEST_BLOB_FILE_NAME = "test-blob-name"; + private static final String 
INDEX_ROUTING_TABLE_TYPE = "test-index-routing-table"; + private static final long STATE_VERSION = 3L; + private static final long STATE_TERM = 2L; + private String clusterUUID; + private BlobStoreTransferService blobStoreTransferService; + private BlobStoreRepository blobStoreRepository; + private String clusterName; + private ClusterSettings clusterSettings; + private Compressor compressor; + private NamedWriteableRegistry namedWriteableRegistry; + private final ThreadPool threadPool = new TestThreadPool(getClass().getName()); + + @Before + public void setup() { + clusterSettings = new ClusterSettings(Settings.EMPTY, ClusterSettings.BUILT_IN_CLUSTER_SETTINGS); + this.clusterUUID = "test-cluster-uuid"; + this.blobStoreTransferService = mock(BlobStoreTransferService.class); + this.blobStoreRepository = mock(BlobStoreRepository.class); + BlobPath blobPath = new BlobPath().add("/path"); + when(blobStoreRepository.basePath()).thenReturn(blobPath); + when(blobStoreRepository.getCompressor()).thenReturn(new DeflateCompressor()); + compressor = new NoneCompressor(); + namedWriteableRegistry = writableRegistry(); + this.clusterName = "test-cluster-name"; + } + + @After + public void tearDown() throws Exception { + super.tearDown(); + threadPool.shutdown(); + } + + public void testClusterUUID() { + String indexName = randomAlphaOfLength(randomIntBetween(1, 50)); + int numberOfShards = randomIntBetween(1, 10); + int numberOfReplicas = randomIntBetween(1, 10); + Metadata metadata = Metadata.builder() + .put( + IndexMetadata.builder(indexName) + .settings(settings(Version.CURRENT)) + .numberOfShards(numberOfShards) + .numberOfReplicas(numberOfReplicas) + ) + .build(); + + RoutingTable initialRoutingTable = RoutingTable.builder().addAsNew(metadata.index(indexName)).build(); + + initialRoutingTable.getIndicesRouting().values().forEach(indexRoutingTable -> { + RemoteIndexRoutingTable remoteObjectForUpload = new RemoteIndexRoutingTable( + indexRoutingTable, + clusterUUID, + compressor, + STATE_TERM, + STATE_VERSION + ); + assertEquals(remoteObjectForUpload.clusterUUID(), clusterUUID); + + RemoteIndexRoutingTable remoteObjectForDownload = new RemoteIndexRoutingTable(TEST_BLOB_NAME, clusterUUID, compressor); + assertEquals(remoteObjectForDownload.clusterUUID(), clusterUUID); + }); + } + + public void testFullBlobName() { + String indexName = randomAlphaOfLength(randomIntBetween(1, 50)); + int numberOfShards = randomIntBetween(1, 10); + int numberOfReplicas = randomIntBetween(1, 10); + Metadata metadata = Metadata.builder() + .put( + IndexMetadata.builder(indexName) + .settings(settings(Version.CURRENT)) + .numberOfShards(numberOfShards) + .numberOfReplicas(numberOfReplicas) + ) + .build(); + + RoutingTable initialRoutingTable = RoutingTable.builder().addAsNew(metadata.index(indexName)).build(); + + initialRoutingTable.getIndicesRouting().values().forEach(indexRoutingTable -> { + RemoteIndexRoutingTable remoteObjectForUpload = new RemoteIndexRoutingTable( + indexRoutingTable, + clusterUUID, + compressor, + STATE_TERM, + STATE_VERSION + ); + assertThat(remoteObjectForUpload.getFullBlobName(), nullValue()); + + RemoteIndexRoutingTable remoteObjectForDownload = new RemoteIndexRoutingTable(TEST_BLOB_NAME, clusterUUID, compressor); + assertThat(remoteObjectForDownload.getFullBlobName(), is(TEST_BLOB_NAME)); + }); + } + + public void testBlobFileName() { String indexName = randomAlphaOfLength(randomIntBetween(1, 50)); int numberOfShards = randomIntBetween(1, 10); int numberOfReplicas = randomIntBetween(1, 10); @@ 
-37,51 +157,164 @@ public void testRoutingTableInput() { RoutingTable initialRoutingTable = RoutingTable.builder().addAsNew(metadata.index(indexName)).build(); - initialRoutingTable.getIndicesRouting().values().forEach(indexShardRoutingTables -> { - RemoteIndexRoutingTable indexRouting = new RemoteIndexRoutingTable(indexShardRoutingTables); - try (BytesStreamOutput streamOutput = new BytesStreamOutput();) { - indexRouting.writeTo(streamOutput); - RemoteIndexRoutingTable remoteIndexRoutingTable = new RemoteIndexRoutingTable( - streamOutput.bytes().streamInput(), - metadata.index(indexName).getIndex() - ); - IndexRoutingTable indexRoutingTable = remoteIndexRoutingTable.getIndexRoutingTable(); - assertEquals(numberOfShards, indexRoutingTable.getShards().size()); - assertEquals(metadata.index(indexName).getIndex(), indexRoutingTable.getIndex()); - assertEquals( - numberOfShards * (1 + numberOfReplicas), - indexRoutingTable.shardsWithState(ShardRoutingState.UNASSIGNED).size() - ); + initialRoutingTable.getIndicesRouting().values().forEach(indexRoutingTable -> { + RemoteIndexRoutingTable remoteObjectForUpload = new RemoteIndexRoutingTable( + indexRoutingTable, + clusterUUID, + compressor, + STATE_TERM, + STATE_VERSION + ); + assertThat(remoteObjectForUpload.getBlobFileName(), nullValue()); + + RemoteIndexRoutingTable remoteObjectForDownload = new RemoteIndexRoutingTable(TEST_BLOB_NAME, clusterUUID, compressor); + assertThat(remoteObjectForDownload.getBlobFileName(), is(TEST_BLOB_FILE_NAME)); + }); + } + + public void testBlobPathTokens() { + String uploadedFile = "user/local/opensearch/routingTable"; + RemoteIndexRoutingTable remoteObjectForDownload = new RemoteIndexRoutingTable(uploadedFile, clusterUUID, compressor); + assertThat(remoteObjectForDownload.getBlobPathTokens(), is(new String[] { "user", "local", "opensearch", "routingTable" })); + } + + public void testBlobPathParameters() { + String indexName = randomAlphaOfLength(randomIntBetween(1, 50)); + int numberOfShards = randomIntBetween(1, 10); + int numberOfReplicas = randomIntBetween(1, 10); + Metadata metadata = Metadata.builder() + .put( + IndexMetadata.builder(indexName) + .settings(settings(Version.CURRENT)) + .numberOfShards(numberOfShards) + .numberOfReplicas(numberOfReplicas) + ) + .build(); + + RoutingTable initialRoutingTable = RoutingTable.builder().addAsNew(metadata.index(indexName)).build(); + + initialRoutingTable.getIndicesRouting().values().forEach(indexRoutingTable -> { + RemoteIndexRoutingTable remoteObjectForUpload = new RemoteIndexRoutingTable( + indexRoutingTable, + clusterUUID, + compressor, + STATE_TERM, + STATE_VERSION + ); + assertThat(remoteObjectForUpload.getBlobFileName(), nullValue()); + + BlobPathParameters params = remoteObjectForUpload.getBlobPathParameters(); + assertThat(params.getPathTokens(), is(List.of(indexRoutingTable.getIndex().getUUID()))); + String expectedPrefix = INDEX_ROUTING_FILE; + assertThat(params.getFilePrefix(), is(expectedPrefix)); + }); + } + + public void testGenerateBlobFileName() { + String indexName = randomAlphaOfLength(randomIntBetween(1, 50)); + int numberOfShards = randomIntBetween(1, 10); + int numberOfReplicas = randomIntBetween(1, 10); + Metadata metadata = Metadata.builder() + .put( + IndexMetadata.builder(indexName) + .settings(settings(Version.CURRENT)) + .numberOfShards(numberOfShards) + .numberOfReplicas(numberOfReplicas) + ) + .build(); + + RoutingTable initialRoutingTable = RoutingTable.builder().addAsNew(metadata.index(indexName)).build(); + + 
initialRoutingTable.getIndicesRouting().values().forEach(indexRoutingTable -> { + RemoteIndexRoutingTable remoteObjectForUpload = new RemoteIndexRoutingTable( + indexRoutingTable, + clusterUUID, + compressor, + STATE_TERM, + STATE_VERSION + ); + + String blobFileName = remoteObjectForUpload.generateBlobFileName(); + String[] nameTokens = blobFileName.split(RemoteClusterStateUtils.DELIMITER); + assertEquals(nameTokens[0], INDEX_ROUTING_FILE); + assertEquals(nameTokens[1], RemoteStoreUtils.invertLong(STATE_TERM)); + assertEquals(nameTokens[2], RemoteStoreUtils.invertLong(STATE_VERSION)); + assertThat(RemoteStoreUtils.invertLong(nameTokens[3]), lessThanOrEqualTo(System.currentTimeMillis())); + }); + } + + public void testGetUploadedMetadata() throws IOException { + String indexName = randomAlphaOfLength(randomIntBetween(1, 50)); + int numberOfShards = randomIntBetween(1, 10); + int numberOfReplicas = randomIntBetween(1, 10); + Metadata metadata = Metadata.builder() + .put( + IndexMetadata.builder(indexName) + .settings(settings(Version.CURRENT)) + .numberOfShards(numberOfShards) + .numberOfReplicas(numberOfReplicas) + ) + .build(); + + RoutingTable initialRoutingTable = RoutingTable.builder().addAsNew(metadata.index(indexName)).build(); + + initialRoutingTable.getIndicesRouting().values().forEach(indexRoutingTable -> { + RemoteIndexRoutingTable remoteObjectForUpload = new RemoteIndexRoutingTable( + indexRoutingTable, + clusterUUID, + compressor, + STATE_TERM, + STATE_VERSION + ); + + assertThrows(AssertionError.class, remoteObjectForUpload::getUploadedMetadata); + + try (InputStream inputStream = remoteObjectForUpload.serialize()) { + remoteObjectForUpload.setFullBlobName(new BlobPath().add(TEST_BLOB_PATH)); + ClusterMetadataManifest.UploadedMetadata uploadedMetadata = remoteObjectForUpload.getUploadedMetadata(); + String expectedPrefix = INDEX_ROUTING_METADATA_PREFIX + indexRoutingTable.getIndex().getName(); + assertThat(uploadedMetadata.getComponent(), is(expectedPrefix)); + assertThat(uploadedMetadata.getUploadedFilename(), is(remoteObjectForUpload.getFullBlobName())); } catch (IOException e) { throw new RuntimeException(e); } }); } - public void testRoutingTableInputStreamWithInvalidIndex() { + public void testSerDe() throws IOException { + String indexName = randomAlphaOfLength(randomIntBetween(1, 50)); + int numberOfShards = randomIntBetween(1, 10); + int numberOfReplicas = randomIntBetween(1, 10); Metadata metadata = Metadata.builder() - .put(IndexMetadata.builder("test").settings(settings(Version.CURRENT)).numberOfShards(1).numberOfReplicas(1)) - .put(IndexMetadata.builder("invalid-index").settings(settings(Version.CURRENT)).numberOfShards(1).numberOfReplicas(1)) + .put( + IndexMetadata.builder(indexName) + .settings(settings(Version.CURRENT)) + .numberOfShards(numberOfShards) + .numberOfReplicas(numberOfReplicas) + ) .build(); - RoutingTable initialRoutingTable = RoutingTable.builder().addAsNew(metadata.index("test")).build(); - AtomicInteger assertionError = new AtomicInteger(); - initialRoutingTable.getIndicesRouting().values().forEach(indexShardRoutingTables -> { - RemoteIndexRoutingTable indexRouting = new RemoteIndexRoutingTable(indexShardRoutingTables); - try (BytesStreamOutput streamOutput = new BytesStreamOutput()) { - indexRouting.writeTo(streamOutput); - RemoteIndexRoutingTable remoteIndexRoutingTable = new RemoteIndexRoutingTable( - streamOutput.bytes().streamInput(), - metadata.index("invalid-index").getIndex() - ); - } catch (AssertionError e) { - 
assertionError.getAndIncrement(); + RoutingTable initialRoutingTable = RoutingTable.builder().addAsNew(metadata.index(indexName)).build(); + + initialRoutingTable.getIndicesRouting().values().forEach(indexRoutingTable -> { + RemoteIndexRoutingTable remoteObjectForUpload = new RemoteIndexRoutingTable( + indexRoutingTable, + clusterUUID, + compressor, + STATE_TERM, + STATE_VERSION + ); + + assertThrows(AssertionError.class, remoteObjectForUpload::getUploadedMetadata); + + try (InputStream inputStream = remoteObjectForUpload.serialize()) { + remoteObjectForUpload.setFullBlobName(BlobPath.cleanPath()); + assertThat(inputStream.available(), greaterThan(0)); + IndexRoutingTable readIndexRoutingTable = remoteObjectForUpload.deserialize(inputStream); + assertEquals(readIndexRoutingTable, indexRoutingTable); } catch (IOException e) { throw new RuntimeException(e); } }); - - assertEquals(1, assertionError.get()); } - } From 3853c919b67e9812ca9e9fd3dba1a0c38c45795e Mon Sep 17 00:00:00 2001 From: Sandeep Kumawat <2025sandeepkumawat@gmail.com> Date: Thu, 18 Jul 2024 19:36:38 +0530 Subject: [PATCH 075/167] Set version to 2.15 for determining metadata during migration to remote store Signed-off-by: Sandeep Kumawat Co-authored-by: Sandeep Kumawat --- .../java/org/opensearch/index/remote/RemoteStoreUtils.java | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/server/src/main/java/org/opensearch/index/remote/RemoteStoreUtils.java b/server/src/main/java/org/opensearch/index/remote/RemoteStoreUtils.java index 9a9de6c819424..654e554c96bf0 100644 --- a/server/src/main/java/org/opensearch/index/remote/RemoteStoreUtils.java +++ b/server/src/main/java/org/opensearch/index/remote/RemoteStoreUtils.java @@ -214,13 +214,13 @@ public static Map determineRemoteStoreCustomMetadataDuringMigrat // does not support custom metadata. // https://github.com/opensearch-project/OpenSearch/issues/13745 boolean blobStoreMetadataEnabled = false; - boolean translogMetadata = Version.CURRENT.compareTo(minNodeVersion) <= 0 + boolean translogMetadata = Version.V_2_15_0.compareTo(minNodeVersion) <= 0 && CLUSTER_REMOTE_STORE_TRANSLOG_METADATA.get(clusterSettings) && blobStoreMetadataEnabled; remoteCustomData.put(IndexMetadata.TRANSLOG_METADATA_KEY, Boolean.toString(translogMetadata)); - RemoteStoreEnums.PathType pathType = Version.CURRENT.compareTo(minNodeVersion) <= 0 + RemoteStoreEnums.PathType pathType = Version.V_2_15_0.compareTo(minNodeVersion) <= 0 ? 
CLUSTER_REMOTE_STORE_PATH_TYPE_SETTING.get(clusterSettings) : RemoteStoreEnums.PathType.FIXED; RemoteStoreEnums.PathHashAlgorithm pathHashAlgorithm = pathType == RemoteStoreEnums.PathType.FIXED From 345fa40080fc939ce9fa34d955658c5e14848e02 Mon Sep 17 00:00:00 2001 From: Andriy Redko Date: Thu, 18 Jul 2024 10:28:46 -0400 Subject: [PATCH 076/167] Fix bulk upsert ignores the default_pipeline and final_pipeline when the auto-created index matches the index template (#14793) Signed-off-by: Andriy Redko --- .../resources/rest-api-spec/test/ingest/70_bulk.yml | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/modules/ingest-common/src/yamlRestTest/resources/rest-api-spec/test/ingest/70_bulk.yml b/modules/ingest-common/src/yamlRestTest/resources/rest-api-spec/test/ingest/70_bulk.yml index 8830503940f4d..36b2b5351dcad 100644 --- a/modules/ingest-common/src/yamlRestTest/resources/rest-api-spec/test/ingest/70_bulk.yml +++ b/modules/ingest-common/src/yamlRestTest/resources/rest-api-spec/test/ingest/70_bulk.yml @@ -176,8 +176,8 @@ teardown: --- "Test bulk upsert honors default_pipeline and final_pipeline when the auto-created index matches with the index template": - skip: - version: " - 2.99.99" - reason: "fixed in 3.0.0" + version: " - 2.15.99" + reason: "fixed in 2.16.0" - do: indices.put_index_template: name: test_for_bulk_upsert_index_template From fba482db6875447f3e74674721eb95eee26bb6d5 Mon Sep 17 00:00:00 2001 From: Andriy Redko Date: Thu, 18 Jul 2024 10:29:02 -0400 Subject: [PATCH 077/167] Fix create or update alias API doesn't throw exception for unsupported parameters (#14769) Signed-off-by: Andriy Redko --- .../rest-api-spec/test/indices.put_alias/10_basic.yml | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/rest-api-spec/src/main/resources/rest-api-spec/test/indices.put_alias/10_basic.yml b/rest-api-spec/src/main/resources/rest-api-spec/test/indices.put_alias/10_basic.yml index e78a5cf93c666..41f87c1df28ed 100644 --- a/rest-api-spec/src/main/resources/rest-api-spec/test/indices.put_alias/10_basic.yml +++ b/rest-api-spec/src/main/resources/rest-api-spec/test/indices.put_alias/10_basic.yml @@ -273,8 +273,8 @@ --- "Can set is_hidden": - skip: - version: " - 2.99.99" - reason: "Fix was introduced in 3.0.0" + version: " - 2.15.99" + reason: "Fix was introduced in 2.16.0" - do: indices.create: index: test_index @@ -295,8 +295,8 @@ --- "Throws exception with invalid parameters": - skip: - version: " - 2.99.99" - reason: "Fix was introduced in 3.0.0" + version: " - 2.15.99" + reason: "Fix was introduced in 2.16.0" - do: indices.create: From 1299919a8b66916bb942cf91012dc3754438b1a1 Mon Sep 17 00:00:00 2001 From: Shivansh Arora Date: Thu, 18 Jul 2024 20:18:03 +0530 Subject: [PATCH 078/167] Change RCSS info logs to debug (#14814) Signed-off-by: Shivansh Arora --- .../gateway/remote/RemoteClusterStateService.java | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/server/src/main/java/org/opensearch/gateway/remote/RemoteClusterStateService.java b/server/src/main/java/org/opensearch/gateway/remote/RemoteClusterStateService.java index 7e7a93e1d42ec..e7ca3e8aa7594 100644 --- a/server/src/main/java/org/opensearch/gateway/remote/RemoteClusterStateService.java +++ b/server/src/main/java/org/opensearch/gateway/remote/RemoteClusterStateService.java @@ -258,7 +258,7 @@ public RemoteClusterStateManifestInfo writeFullMetadata(ClusterState clusterStat uploadedMetadataResults.uploadedIndicesRoutingMetadata.size() ); } else { - logger.info( + 
logger.debug( "writing cluster state took [{}ms]; " + "wrote full state with [{}] indices, [{}] indicesRouting and global metadata", durationMillis, uploadedMetadataResults.uploadedIndexMetadata.size(), @@ -457,8 +457,8 @@ public RemoteClusterStateManifestInfo writeIncrementalMetadata( customsDiff.getUpserts().size() ); } else { - logger.info("{}; {}", clusterStateUploadTimeMessage, metadataUpdateMessage); - logger.info( + logger.debug("{}; {}", clusterStateUploadTimeMessage, metadataUpdateMessage); + logger.debug( "writing cluster state for version [{}] took [{}ms]; " + "wrote metadata for [{}] indices and skipped [{}] unchanged indices, coordination metadata updated : [{}], " + "settings metadata updated : [{}], templates metadata updated : [{}], custom metadata updated : [{}]", From cf0d6ccda670652f90af3813de0019713b55976c Mon Sep 17 00:00:00 2001 From: Daniil Roman Date: Thu, 18 Jul 2024 19:46:36 +0200 Subject: [PATCH 079/167] [Bugfix] Fix NPE in ReplicaShardAllocator (#13993) (#14385) * [Bugfix] Fix NPE in ReplicaShardAllocator (#13993) Signed-off-by: Daniil Roman * Add fix info to CHANGELOG.md Signed-off-by: Daniil Roman --------- Signed-off-by: Daniil Roman Signed-off-by: Daniil Roman --- CHANGELOG.md | 1 + .../gateway/ReplicaShardAllocator.java | 5 +- .../ReplicaShardBatchAllocatorTests.java | 54 +++++++++++++++++++ 3 files changed, 59 insertions(+), 1 deletion(-) diff --git a/CHANGELOG.md b/CHANGELOG.md index 0417cc14ee86f..ac459b52383b1 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -75,6 +75,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), - Refactoring Grok.validatePatternBank by using an iterative approach ([#14206](https://github.com/opensearch-project/OpenSearch/pull/14206)) - Update help output for _cat ([#14722](https://github.com/opensearch-project/OpenSearch/pull/14722)) - Fix bulk upsert ignores the default_pipeline and final_pipeline when auto-created index matches the index template ([#12891](https://github.com/opensearch-project/OpenSearch/pull/12891)) +- Fix NPE in ReplicaShardAllocator ([#14385](https://github.com/opensearch-project/OpenSearch/pull/14385)) ### Security diff --git a/server/src/main/java/org/opensearch/gateway/ReplicaShardAllocator.java b/server/src/main/java/org/opensearch/gateway/ReplicaShardAllocator.java index d9474b32bdbf6..aaf0d696e1444 100644 --- a/server/src/main/java/org/opensearch/gateway/ReplicaShardAllocator.java +++ b/server/src/main/java/org/opensearch/gateway/ReplicaShardAllocator.java @@ -100,7 +100,10 @@ protected Runnable cancelExistingRecoveryForBetterMatch( Metadata metadata = allocation.metadata(); RoutingNodes routingNodes = allocation.routingNodes(); ShardRouting primaryShard = allocation.routingNodes().activePrimary(shard.shardId()); - assert primaryShard != null : "the replica shard can be allocated on at least one node, so there must be an active primary"; + if (primaryShard == null) { + logger.trace("{}: no active primary shard found or allocated, letting actual allocation figure it out", shard); + return null; + } assert primaryShard.currentNodeId() != null; final DiscoveryNode primaryNode = allocation.nodes().get(primaryShard.currentNodeId()); diff --git a/server/src/test/java/org/opensearch/gateway/ReplicaShardBatchAllocatorTests.java b/server/src/test/java/org/opensearch/gateway/ReplicaShardBatchAllocatorTests.java index 2e148c2bc8130..526a3990955b8 100644 --- a/server/src/test/java/org/opensearch/gateway/ReplicaShardBatchAllocatorTests.java +++ 
b/server/src/test/java/org/opensearch/gateway/ReplicaShardBatchAllocatorTests.java @@ -644,6 +644,25 @@ public void testDoNotCancelForBrokenNode() { assertThat(allocation.routingNodes().shardsWithState(ShardRoutingState.UNASSIGNED), empty()); } + public void testDoNotCancelForInactivePrimaryNode() { + RoutingAllocation allocation = oneInactivePrimaryOnNode1And1ReplicaRecovering(yesAllocationDeciders(), null); + testBatchAllocator.addData( + node1, + null, + "MATCH", + null, + new StoreFileMetadata("file1", 10, "MATCH_CHECKSUM", MIN_SUPPORTED_LUCENE_VERSION) + ).addData(node2, randomSyncId(), null, new StoreFileMetadata("file1", 10, "MATCH_CHECKSUM", MIN_SUPPORTED_LUCENE_VERSION)); + + testBatchAllocator.processExistingRecoveries( + allocation, + Collections.singletonList(new ArrayList<>(allocation.routingNodes().shardsWithState(ShardRoutingState.INITIALIZING))) + ); + + assertThat(allocation.routingNodesChanged(), equalTo(false)); + assertThat(allocation.routingNodes().shardsWithState(ShardRoutingState.UNASSIGNED), empty()); + } + public void testAllocateUnassignedBatchThrottlingAllocationDeciderIsHonoured() throws InterruptedException { ClusterSettings clusterSettings = new ClusterSettings(Settings.EMPTY, ClusterSettings.BUILT_IN_CLUSTER_SETTINGS); AllocationDeciders allocationDeciders = randomAllocationDeciders( @@ -872,6 +891,41 @@ private RoutingAllocation onePrimaryOnNode1And1ReplicaRecovering(AllocationDecid ); } + private RoutingAllocation oneInactivePrimaryOnNode1And1ReplicaRecovering(AllocationDeciders deciders, UnassignedInfo unassignedInfo) { + ShardRouting primaryShard = TestShardRouting.newShardRouting(shardId, node1.getId(), true, ShardRoutingState.INITIALIZING); + RoutingTable routingTable = RoutingTable.builder() + .add( + IndexRoutingTable.builder(shardId.getIndex()) + .addIndexShard( + new IndexShardRoutingTable.Builder(shardId).addShard(primaryShard) + .addShard( + TestShardRouting.newShardRouting( + shardId, + node2.getId(), + null, + false, + ShardRoutingState.INITIALIZING, + unassignedInfo + ) + ) + .build() + ) + ) + .build(); + ClusterState state = ClusterState.builder(org.opensearch.cluster.ClusterName.CLUSTER_NAME_SETTING.getDefault(Settings.EMPTY)) + .routingTable(routingTable) + .nodes(DiscoveryNodes.builder().add(node1).add(node2)) + .build(); + return new RoutingAllocation( + deciders, + new RoutingNodes(state, false), + state, + ClusterInfo.EMPTY, + SnapshotShardSizeInfo.EMPTY, + System.nanoTime() + ); + } + private RoutingAllocation onePrimaryOnNode1And1ReplicaRecovering(AllocationDeciders deciders) { return onePrimaryOnNode1And1ReplicaRecovering(deciders, new UnassignedInfo(UnassignedInfo.Reason.CLUSTER_RECOVERED, null)); } From 71aefa51b84750042b1698ed2b549d4f92209e1b Mon Sep 17 00:00:00 2001 From: Rishabh Singh Date: Thu, 18 Jul 2024 11:36:03 -0700 Subject: [PATCH 080/167] Run performance benchmark on pull requests (#14760) * add performance benchmark workflow for pull requests Signed-off-by: Rishabh Singh * Update PERFORMANCE_BENCHMARKS.md Co-authored-by: Andriy Redko Signed-off-by: Rishabh Singh * Update PERFORMANCE_BENCHMARKS.md Co-authored-by: Andriy Redko Signed-off-by: Rishabh Singh * Update .github/workflows/benchmark-pull-request.yml Co-authored-by: Andriy Redko Signed-off-by: Rishabh Singh * Update .github/workflows/benchmark-pull-request.yml Co-authored-by: Andriy Redko Signed-off-by: Rishabh Singh * Update .github/workflows/benchmark-pull-request.yml Co-authored-by: Andriy Redko Signed-off-by: Rishabh Singh * Update 
.github/workflows/benchmark-pull-request.yml Co-authored-by: Andriy Redko Signed-off-by: Rishabh Singh --------- Signed-off-by: Rishabh Singh Signed-off-by: Rishabh Singh Co-authored-by: Andriy Redko --- .github/benchmark-configs.json | 155 ++++++++++++++++++ .github/workflows/add-performance-comment.yml | 25 +++ .github/workflows/benchmark-pull-request.yml | 142 ++++++++++++++++ PERFORMANCE_BENCHMARKS.md | 112 +++++++++++++ 4 files changed, 434 insertions(+) create mode 100644 .github/benchmark-configs.json create mode 100644 .github/workflows/add-performance-comment.yml create mode 100644 .github/workflows/benchmark-pull-request.yml create mode 100644 PERFORMANCE_BENCHMARKS.md diff --git a/.github/benchmark-configs.json b/.github/benchmark-configs.json new file mode 100644 index 0000000000000..a5b1951d2240c --- /dev/null +++ b/.github/benchmark-configs.json @@ -0,0 +1,155 @@ +{ + "name": "Cluster and opensearch-benchmark configurations", + "id_1": { + "description": "Indexing only configuration for NYC_TAXIS workload", + "supported_major_versions": ["2", "3"], + "cluster-benchmark-configs": { + "SINGLE_NODE_CLUSTER": "true", + "MIN_DISTRIBUTION": "true", + "TEST_WORKLOAD": "nyc_taxis", + "WORKLOAD_PARAMS": "{\"number_of_replicas\":\"0\",\"number_of_shards\":\"1\"}", + "EXCLUDE_TASKS": "type:search", + "CAPTURE_NODE_STAT": "true" + }, + "cluster_configuration": { + "size": "Single-Node", + "data_instance_config": "4vCPU, 32G Mem, 16G Heap" + } + }, + "id_2": { + "description": "Indexing only configuration for HTTP_LOGS workload", + "supported_major_versions": ["2", "3"], + "cluster-benchmark-configs": { + "SINGLE_NODE_CLUSTER": "true", + "MIN_DISTRIBUTION": "true", + "TEST_WORKLOAD": "http_logs", + "WORKLOAD_PARAMS": "{\"number_of_replicas\":\"0\",\"number_of_shards\":\"1\"}", + "EXCLUDE_TASKS": "type:search", + "CAPTURE_NODE_STAT": "true" + }, + "cluster_configuration": { + "size": "Single-Node", + "data_instance_config": "4vCPU, 32G Mem, 16G Heap" + } + }, + "id_3": { + "description": "Search only test-procedure for NYC_TAXIS, uses snapshot to restore the data for OS-3.0.0", + "supported_major_versions": ["3"], + "cluster-benchmark-configs": { + "SINGLE_NODE_CLUSTER": "true", + "MIN_DISTRIBUTION": "true", + "TEST_WORKLOAD": "nyc_taxis", + "WORKLOAD_PARAMS": "{\"snapshot_repo_name\":\"benchmark-workloads-repo-300\",\"snapshot_bucket_name\":\"benchmark-workload-snapshots\",\"snapshot_region\":\"us-east-1\",\"snapshot_base_path\":\"workload-snapshots-300\",\"snapshot_name\":\"nyc_taxis_1_shard\"}", + "CAPTURE_NODE_STAT": "true" + }, + "cluster_configuration": { + "size": "Single-Node", + "data_instance_config": "4vCPU, 32G Mem, 16G Heap" + } + }, + "id_4": { + "description": "Search only test-procedure for HTTP_LOGS, uses snapshot to restore the data for OS-3.0.0", + "supported_major_versions": ["3"], + "cluster-benchmark-configs": { + "SINGLE_NODE_CLUSTER": "true", + "MIN_DISTRIBUTION": "true", + "TEST_WORKLOAD": "http_logs", + "WORKLOAD_PARAMS": "{\"snapshot_repo_name\":\"benchmark-workloads-repo-300\",\"snapshot_bucket_name\":\"benchmark-workload-snapshots\",\"snapshot_region\":\"us-east-1\",\"snapshot_base_path\":\"workload-snapshots-300\",\"snapshot_name\":\"http_logs_1_shard\"}", + "CAPTURE_NODE_STAT": "true" + }, + "cluster_configuration": { + "size": "Single-Node", + "data_instance_config": "4vCPU, 32G Mem, 16G Heap" + } + }, + "id_5": { + "description": "Search only test-procedure for HTTP_LOGS, uses snapshot to restore the data for OS-3.0.0", + "supported_major_versions": ["3"], 
+ "cluster-benchmark-configs": { + "SINGLE_NODE_CLUSTER": "true", + "MIN_DISTRIBUTION": "true", + "TEST_WORKLOAD": "big5", + "WORKLOAD_PARAMS": "{\"snapshot_repo_name\":\"benchmark-workloads-repo-300\",\"snapshot_bucket_name\":\"benchmark-workload-snapshots\",\"snapshot_region\":\"us-east-1\",\"snapshot_base_path\":\"workload-snapshots-300\",\"snapshot_name\":\"big5_1_shard\"}", + "CAPTURE_NODE_STAT": "true" + }, + "cluster_configuration": { + "size": "Single-Node", + "data_instance_config": "4vCPU, 32G Mem, 16G Heap" + } + }, + "id_6": { + "description": "Search only test-procedure for NYC_TAXIS, uses snapshot to restore the data for OS-2.x", + "supported_major_versions": ["2"], + "cluster-benchmark-configs": { + "SINGLE_NODE_CLUSTER": "true", + "MIN_DISTRIBUTION": "true", + "TEST_WORKLOAD": "nyc_taxis", + "WORKLOAD_PARAMS": "{\"snapshot_repo_name\":\"benchmark-workloads-repo\",\"snapshot_bucket_name\":\"benchmark-workload-snapshots\",\"snapshot_region\":\"us-east-1\",\"snapshot_base_path\":\"workload-snapshots\",\"snapshot_name\":\"nyc_taxis_1_shard\"}", + "CAPTURE_NODE_STAT": "true" + }, + "cluster_configuration": { + "size": "Single-Node", + "data_instance_config": "4vCPU, 32G Mem, 16G Heap" + } + }, + "id_7": { + "description": "Search only test-procedure for HTTP_LOGS, uses snapshot to restore the data for OS-2.x", + "supported_major_versions": ["2"], + "cluster-benchmark-configs": { + "SINGLE_NODE_CLUSTER": "true", + "MIN_DISTRIBUTION": "true", + "TEST_WORKLOAD": "http_logs", + "WORKLOAD_PARAMS": "{\"snapshot_repo_name\":\"benchmark-workloads-repo\",\"snapshot_bucket_name\":\"benchmark-workload-snapshots\",\"snapshot_region\":\"us-east-1\",\"snapshot_base_path\":\"workload-snapshots\",\"snapshot_name\":\"http_logs_1_shard\"}", + "CAPTURE_NODE_STAT": "true" + }, + "cluster_configuration": { + "size": "Single-Node", + "data_instance_config": "4vCPU, 32G Mem, 16G Heap" + } + }, + "id_8": { + "description": "Search only test-procedure for HTTP_LOGS, uses snapshot to restore the data for OS-2.x", + "supported_major_versions": ["2"], + "cluster-benchmark-configs": { + "SINGLE_NODE_CLUSTER": "true", + "MIN_DISTRIBUTION": "true", + "TEST_WORKLOAD": "big5", + "WORKLOAD_PARAMS": "{\"snapshot_repo_name\":\"benchmark-workloads-repo\",\"snapshot_bucket_name\":\"benchmark-workload-snapshots\",\"snapshot_region\":\"us-east-1\",\"snapshot_base_path\":\"workload-snapshots\",\"snapshot_name\":\"big5_1_shard\"}", + "CAPTURE_NODE_STAT": "true" + }, + "cluster_configuration": { + "size": "Single-Node", + "data_instance_config": "4vCPU, 32G Mem, 16G Heap" + } + }, + "id_9": { + "description": "Indexing and search configuration for pmc workload", + "supported_major_versions": ["2", "3"], + "cluster-benchmark-configs": { + "SINGLE_NODE_CLUSTER": "true", + "MIN_DISTRIBUTION": "true", + "TEST_WORKLOAD": "pmc", + "WORKLOAD_PARAMS": "{\"number_of_replicas\":\"0\",\"number_of_shards\":\"1\"}", + "CAPTURE_NODE_STAT": "true" + }, + "cluster_configuration": { + "size": "Single-Node", + "data_instance_config": "4vCPU, 32G Mem, 16G Heap" + } + }, + "id_10": { + "description": "Indexing only configuration for stack-overflow workload", + "supported_major_versions": ["2", "3"], + "cluster-benchmark-configs": { + "SINGLE_NODE_CLUSTER": "true", + "MIN_DISTRIBUTION": "true", + "TEST_WORKLOAD": "so", + "WORKLOAD_PARAMS": "{\"number_of_replicas\":\"0\",\"number_of_shards\":\"1\"}", + "CAPTURE_NODE_STAT": "true" + }, + "cluster_configuration": { + "size": "Single-Node", + "data_instance_config": "4vCPU, 32G Mem, 16G Heap" + 
} + } +} diff --git a/.github/workflows/add-performance-comment.yml b/.github/workflows/add-performance-comment.yml new file mode 100644 index 0000000000000..3939de25e4cbe --- /dev/null +++ b/.github/workflows/add-performance-comment.yml @@ -0,0 +1,25 @@ +name: Performance Label Action + +on: + pull_request: + types: [labeled] + +jobs: + add-comment: + if: github.event.label.name == 'Performance' + runs-on: ubuntu-latest + permissions: + pull-requests: write + + steps: + - name: Add comment to PR + uses: actions/github-script@v6 + with: + github-token: ${{secrets.GITHUB_TOKEN}} + script: | + github.rest.issues.createComment({ + issue_number: context.issue.number, + owner: context.repo.owner, + repo: context.repo.repo, + body: "Hello!\nWe have added a performance benchmark workflow that runs by adding a comment on the PR.\n Please refer https://github.com/opensearch-project/OpenSearch/blob/main/PERFORMANCE_BENCHMARKS.md on how to run benchmarks on pull requests." + }) diff --git a/.github/workflows/benchmark-pull-request.yml b/.github/workflows/benchmark-pull-request.yml new file mode 100644 index 0000000000000..0de50981fa3d7 --- /dev/null +++ b/.github/workflows/benchmark-pull-request.yml @@ -0,0 +1,142 @@ +name: Run performance benchmark on pull request +on: + issue_comment: + types: [created] +jobs: + run-performance-benchmark-on-pull-request: + if: ${{ (github.event.issue.pull_request) && (contains(github.event.comment.body, '"run-benchmark-test"')) }} + runs-on: ubuntu-latest + permissions: + id-token: write + contents: read + issues: write + pull-requests: write + steps: + - name: Checkout Repository + uses: actions/checkout@v3 + - name: Set up required env vars + run: | + echo "PR_NUMBER=${{ github.event.issue.number }}" >> $GITHUB_ENV + echo "REPOSITORY=${{ github.event.repository.full_name }}" >> $GITHUB_ENV + OPENSEARCH_VERSION=$(awk -F '=' '/^opensearch[[:space:]]*=/ {gsub(/[[:space:]]/, "", $2); print $2}' buildSrc/version.properties) + echo "OPENSEARCH_VERSION=$OPENSEARCH_VERSION" >> $GITHUB_ENV + major_version=$(echo $OPENSEARCH_VERSION | cut -d'.' -f1) + echo "OPENSEARCH_MAJOR_VERSION=$major_version" >> $GITHUB_ENV + echo "USER_TAGS=pull_request_number:${{ github.event.issue.number }},repository:OpenSearch" >> $GITHUB_ENV + - name: Check comment format + id: check_comment + run: | + comment='${{ github.event.comment.body }}' + if echo "$comment" | jq -e 'has("run-benchmark-test")'; then + echo "Valid comment format detected, check if valid config id is provided" + config_id=$(echo $comment | jq -r '.["run-benchmark-test"]') + benchmark_configs=$(cat .github/benchmark-configs.json) + if echo $benchmark_configs | jq -e --arg id "$config_id" 'has($id)' && echo "$benchmark_configs" | jq -e --arg version "$OPENSEARCH_MAJOR_VERSION" --arg id "$config_id" '.[$id].supported_major_versions | index($version) != null' > /dev/null; then + echo $benchmark_configs | jq -r --arg id "$config_id" '.[$id]."cluster-benchmark-configs" | to_entries[] | "\(.key)=\(.value)"' >> $GITHUB_ENV + else + echo "invalid=true" >> $GITHUB_OUTPUT + fi + else + echo "invalid=true" >> $GITHUB_OUTPUT + fi + - name: Post invalid format comment + if: steps.check_comment.outputs.invalid == 'true' + uses: actions/github-script@v6 + with: + github-token: ${{secrets.GITHUB_TOKEN}} + script: | + github.rest.issues.createComment({ + issue_number: context.issue.number, + owner: context.repo.owner, + repo: context.repo.repo, + body: 'Invalid comment format or config id. 
Please refer to https://github.com/opensearch-project/OpenSearch/blob/main/PERFORMANCE_BENCHMARKS.md on how to run benchmarks on pull requests.' + }) + - name: Fail workflow for invalid comment + if: steps.check_comment.outputs.invalid == 'true' + run: | + echo "Invalid comment format detected. Failing the workflow." + exit 1 + - id: get_approvers + run: | + echo "approvers=$(cat .github/CODEOWNERS | grep '^\*' | tr -d '* ' | sed 's/@/,/g' | sed 's/,//1')" >> $GITHUB_OUTPUT + - uses: trstringer/manual-approval@v1 + if: (!contains(steps.get_approvers.outputs.approvers, github.event.comment.user.login)) + with: + secret: ${{ github.TOKEN }} + approvers: ${{ steps.get_approvers.outputs.approvers }} + minimum-approvals: 1 + issue-title: 'Request to approve/deny benchmark run for PR #${{ env.PR_NUMBER }}' + issue-body: "Please approve or deny the benchmark run for PR #${{ env.PR_NUMBER }}" + exclude-workflow-initiator-as-approver: false + - name: Get PR Details + id: get_pr + uses: actions/github-script@v7 + with: + script: | + const issue = context.payload.issue; + const prNumber = issue.number; + console.log(`Pull Request Number: ${prNumber}`); + + const { data: pull_request } = await github.rest.pulls.get({ + owner: context.repo.owner, + repo: context.repo.repo, + pull_number: prNumber, + }); + + return { + "headRepoFullName": pull_request.head.repo.full_name, + "headRef": pull_request.head.ref + }; + - name: Set pr details env vars + run: | + echo '${{ steps.get_pr.outputs.result }}' | jq -r '.headRepoFullName' + echo '${{ steps.get_pr.outputs.result }}' | jq -r '.headRef' + headRepo=$(echo '${{ steps.get_pr.outputs.result }}' | jq -r '.headRepoFullName') + headRef=$(echo '${{ steps.get_pr.outputs.result }}' | jq -r '.headRef') + echo "prHeadRepo=$headRepo" >> $GITHUB_ENV + echo "prheadRef=$headRef" >> $GITHUB_ENV + - name: Checkout PR Repo + uses: actions/checkout@v2 + with: + repository: ${{ env.prHeadRepo }} + ref: ${{ env.prHeadRef }} + token: ${{ secrets.GITHUB_TOKEN }} + - name: Setup Java + uses: actions/setup-java@v1 + with: + java-version: 21 + - name: Build and Assemble OpenSearch from PR + run: | + ./gradlew :distribution:archives:linux-tar:assemble -Dbuild.snapshot=false + - name: Configure AWS credentials + uses: aws-actions/configure-aws-credentials@v4 + with: + role-to-assume: ${{ secrets.UPLOAD_ARCHIVE_ARTIFACT_ROLE }} + role-session-name: publish-to-s3 + aws-region: us-west-2 + - name: Push to S3 + run: | + aws s3 cp distribution/archives/linux-tar/build/distributions/opensearch-min-$OPENSEARCH_VERSION-linux-x64.tar.gz s3://${{ secrets.ARCHIVE_ARTIFACT_BUCKET_NAME }}/PR-$PR_NUMBER/ + echo "DISTRIBUTION_URL=${{ secrets.ARTIFACT_BUCKET_CLOUDFRONT_URL }}/PR-$PR_NUMBER/opensearch-min-$OPENSEARCH_VERSION-linux-x64.tar.gz" >> $GITHUB_ENV + - name: Checkout opensearch-build repo + uses: actions/checkout@v4 + with: + repository: opensearch-project/opensearch-build + ref: main + path: opensearch-build + - name: Trigger jenkins workflow to run gradle check + run: | + cat $GITHUB_ENV + bash opensearch-build/scripts/benchmark/benchmark-pull-request.sh ${{ secrets.JENKINS_PR_BENCHMARK_GENERIC_WEBHOOK_TOKEN }} + - name: Update PR with Job Url + uses: actions/github-script@v6 + with: + github-token: ${{ secrets.GITHUB_TOKEN }} + script: | + const workflowUrl = process.env.WORKFLOW_URL; + github.rest.issues.createComment({ + issue_number: context.issue.number, + owner: context.repo.owner, + repo: context.repo.repo, + body: `The Jenkins job url is ${workflowUrl} . 
Final results will be published once the job is completed.`
+          })
diff --git a/PERFORMANCE_BENCHMARKS.md b/PERFORMANCE_BENCHMARKS.md
new file mode 100644
index 0000000000000..252c4ae312136
--- /dev/null
+++ b/PERFORMANCE_BENCHMARKS.md
@@ -0,0 +1,112 @@
+# README: Running Performance Benchmarks on Pull Requests
+
+## Overview
+
+The `benchmark-pull-request` GitHub Actions workflow is designed to automatically run performance benchmarks on a pull request when a specific comment is made on the pull request. This ensures that performance benchmarks are consistently and accurately applied to code changes, helping maintain the performance standards of the repository.
+
+## Workflow Trigger
+
+The workflow is triggered when a new comment is created on a pull request. Specifically, it checks for the presence of the `"run-benchmark-test"` keyword in the comment body. If this keyword is detected, the workflow proceeds to run the performance benchmarks.
+
+## Key Steps in the Workflow
+
+1. **Check Comment Format and Configuration:**
+   - Validates the format of the comment to ensure it contains the required `"run-benchmark-test"` keyword and is in JSON format.
+   - Extracts the benchmark configuration ID from the comment and verifies that it exists in the `benchmark-configs.json` file.
+   - Checks if the extracted configuration ID is supported for the current OpenSearch major version.
+
+2. **Post Invalid Format Comment:**
+   - If the comment format is invalid or the configuration ID is not supported, a comment is posted on the pull request indicating the problem, and the workflow fails.
+
+3. **Manual Approval (if necessary):**
+   - Fetches the list of approvers from the `.github/CODEOWNERS` file.
+   - If the commenter is not one of the maintainers, a manual approval request is created. The workflow pauses until an approver approves or denies the benchmark run by commenting the appropriate word on the issue.
+   - The approval-request issue is auto-closed once the approver has added the appropriate comment.
+
+4. **Build and Assemble OpenSearch:**
+   - Builds and assembles (x64-linux tar) the OpenSearch distribution from the pull request code changes.
+
+5. **Upload to S3:**
+   - Configures AWS credentials and uploads the assembled OpenSearch distribution to an S3 bucket for further use in benchmarking.
+   - The S3 bucket is fronted by CloudFront to only allow downloads.
+   - The lifecycle policy on the S3 bucket will delete the uploaded artifacts after 30 days.
+
+6. **Trigger Jenkins Workflow:**
+   - Triggers a Jenkins workflow to run the benchmark tests using a webhook token.
+
+7. **Update Pull Request with Job URL:**
+   - Posts a comment on the pull request with the URL of the Jenkins job. The final benchmark results will be posted once the job completes.
+   - To learn how the benchmark job works, see https://github.com/opensearch-project/opensearch-build/tree/main/src/test_workflow#benchmarking-tests
+
+## How to Use This Workflow
+
+1. **Ensure `benchmark-configs.json` is Up-to-Date:**
+   - The `benchmark-configs.json` file should contain valid benchmark configurations with supported major versions and cluster-benchmark configurations.
+
+2. **Add the Workflow to Your Repository:**
+   - Save the workflow YAML file (named `benchmark-pull-request.yml` in this repository) in the `.github/workflows` directory of your repository.
+
+3. **Make a Comment to Trigger the Workflow:**
+   - On any pull request issue, make a comment containing the keyword `"run-benchmark-test"` along with the configuration ID. For example:
+   ```json
+   {"run-benchmark-test": "id_1"}
+   ```
+
+4. **Monitor Workflow Progress:**
+   - The workflow will validate the comment, check for approval (if necessary), build the OpenSearch distribution, and trigger the Jenkins job.
+   - A comment will be posted on the pull request with the URL of the Jenkins job. You can monitor the progress and final results there as well.
+
+## Example Comment Format
+
+To run the benchmark with configuration ID `id_1`, post the following comment on the pull request issue:
+```json
+{"run-benchmark-test": "id_1"}
+```
+
+## How to add a new benchmark configuration
+
+The `benchmark-configs.json` file accepts the following schema.
+```json
+{
+  "id_<number>": {
+    "description": "Short description of the configuration",
+    "supported_major_versions": ["2", "3"],
+    "cluster-benchmark-configs": {
+      "SINGLE_NODE_CLUSTER": "Use single node cluster for benchmarking, accepted values are \"true\" or \"false\"",
+      "MIN_DISTRIBUTION": "Use OpenSearch min distribution, should always be \"true\"",
+      "MANAGER_NODE_COUNT": "For multi-node cluster tests, number of cluster manager nodes, empty value defaults to 3.",
+      "DATA_NODE_COUNT": "For multi-node cluster tests, number of data nodes, empty value defaults to 2.",
+      "DATA_INSTANCE_TYPE": "EC2 instance type for data node, empty defaults to r5.xlarge.",
+      "DATA_NODE_STORAGE": "Data node ebs block storage size, empty value defaults to 100Gb",
+      "JVM_SYS_PROPS": "A comma-separated list of key=value pairs that will be added to jvm.options as JVM system properties",
+      "ADDITIONAL_CONFIG": "Additional space delimited opensearch.yml config parameters. e.g., `search.concurrent_segment_search.enabled:true`",
+      "TEST_WORKLOAD": "The workload name from OpenSearch Benchmark Workloads. https://github.com/opensearch-project/opensearch-benchmark-workloads. Default is nyc_taxis",
+      "WORKLOAD_PARAMS": "With this parameter you can inject variables into workloads, e.g. {\"number_of_replicas\":\"0\",\"number_of_shards\":\"3\"}. See https://opensearch.org/docs/latest/benchmark/reference/commands/command-flags/#workload-params",
+      "EXCLUDE_TASKS": "Defines a comma-separated list of test procedure tasks not to run, e.g. type:search. See https://opensearch.org/docs/latest/benchmark/reference/commands/command-flags/#exclude-tasks",
+      "INCLUDE_TASKS": "Defines a comma-separated list of test procedure tasks to run. By default, all tasks listed in a test procedure array are run. See https://opensearch.org/docs/latest/benchmark/reference/commands/command-flags/#include-tasks",
+      "TEST_PROCEDURE": "Defines a test procedure to use, e.g., `append-no-conflicts,significant-text`. Uses default if none provided. See https://opensearch.org/docs/latest/benchmark/reference/commands/command-flags/#test-procedure",
+      "CAPTURE_NODE_STAT": "Enable opensearch-benchmark node-stats telemetry to capture system level metrics like cpu, jvm etc. See https://opensearch.org/docs/latest/benchmark/reference/telemetry/#node-stats"
+    },
+    "cluster_configuration": {
+      "size": "Single-Node/Multi-Node",
+      "data_instance_config": "data-instance-config, e.g., 4vCPU, 32G Mem, 16G Heap"
+    }
+  }
+}
+```
+To add a new test configuration that is suitable to your changes, please create a new PR adding the desired cluster and benchmark configurations.
+
+## How to compare my results against baseline?
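+
+The comparison builds on opensearch-benchmark's [compare](https://opensearch.org/docs/latest/benchmark/reference/commands/compare/) command. As a rough sketch of what that comparison looks like when run by hand (the test-execution IDs below are placeholders, not values produced by this workflow):
+
+```
+opensearch-benchmark compare --baseline=<baseline-test-execution-id> --contender=<pr-test-execution-id>
+```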
+ +Apart from just running benchmarks the user will also be interested in how their change is performing against current OpenSearch distribution with the exactly same cluster and benchmark configurations. +The user can refer to https://s12d.com/basline-dashboards (WIP) to access baseline data for their workload, this data is generated by our nightly benchmark runs on latest build distribution artifacts for 3.0 and 2.x. +In the future, we will add the [compare](https://opensearch.org/docs/latest/benchmark/reference/commands/compare/) feature of opensearch-benchmark to run comparison and publish data on the PR as well. + +## Notes + +- Ensure all required secrets (e.g., `GITHUB_TOKEN`, `UPLOAD_ARCHIVE_ARTIFACT_ROLE`, `ARCHIVE_ARTIFACT_BUCKET_NAME`, `JENKINS_PR_BENCHMARK_GENERIC_WEBHOOK_TOKEN`) are properly set in the repository secrets. +- The `CODEOWNERS` file should list the GitHub usernames of approvers for the benchmark process. + +By following these instructions, repository maintainers can ensure consistent and automated performance benchmarking for all code changes introduced via pull requests. + + From cb743710b2ee5c8ed7de7123891463fc3a7a7fde Mon Sep 17 00:00:00 2001 From: kkewwei Date: Fri, 19 Jul 2024 03:43:30 +0800 Subject: [PATCH 081/167] fix constant_keyword field type (#14807) Signed-off-by: kkewwei test Signed-off-by: Daniel (dB.) Doubrovkine Co-authored-by: Daniel (dB.) Doubrovkine --- CHANGELOG.md | 1 + .../test/index/110_constant_keyword.yml | 70 +++++++++++++++++++ .../mapper/ConstantKeywordFieldMapper.java | 4 +- .../ConstantKeywordFieldMapperTests.java | 8 +++ 4 files changed, 81 insertions(+), 2 deletions(-) create mode 100644 rest-api-spec/src/main/resources/rest-api-spec/test/index/110_constant_keyword.yml diff --git a/CHANGELOG.md b/CHANGELOG.md index ac459b52383b1..98b4f520a5bfb 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -76,6 +76,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), - Update help output for _cat ([#14722](https://github.com/opensearch-project/OpenSearch/pull/14722)) - Fix bulk upsert ignores the default_pipeline and final_pipeline when auto-created index matches the index template ([#12891](https://github.com/opensearch-project/OpenSearch/pull/12891)) - Fix NPE in ReplicaShardAllocator ([#14385](https://github.com/opensearch-project/OpenSearch/pull/14385)) +- Fix constant_keyword field type used when creating index ([#14807](https://github.com/opensearch-project/OpenSearch/pull/14807)) ### Security diff --git a/rest-api-spec/src/main/resources/rest-api-spec/test/index/110_constant_keyword.yml b/rest-api-spec/src/main/resources/rest-api-spec/test/index/110_constant_keyword.yml new file mode 100644 index 0000000000000..9864bfbbb26e9 --- /dev/null +++ b/rest-api-spec/src/main/resources/rest-api-spec/test/index/110_constant_keyword.yml @@ -0,0 +1,70 @@ +--- +# The test setup includes: +# - Create index with constant_keyword field type +# - Check mapping +# - Index two example documents +# - Search +# - Delete Index when connection is teardown + +"Mappings and Supported queries": + - skip: + version: " - 2.99.99" + reason: "fixed in 3.0.0" + + # Create index with constant_keyword field type + - do: + indices.create: + index: test + body: + mappings: + properties: + genre: + type: "constant_keyword" + value: "1" + + # Index document + - do: + index: + index: test + id: 1 + body: { + "genre": "1" + } + + - do: + index: + index: test + id: 2 + body: { + "genre": 1 + } + + - do: + 
indices.refresh: + index: test + + # Check mapping + - do: + indices.get_mapping: + index: test + - is_true: test.mappings + - match: { test.mappings.properties.genre.type: constant_keyword } + - length: { test.mappings.properties.genre: 2 } + + # Verify Document Count + - do: + search: + body: { + query: { + match_all: {} + } + } + + - length: { hits.hits: 2 } + - match: { hits.hits.0._source.genre: "1" } + - match: { hits.hits.1._source.genre: 1 } + + # Delete Index when connection is teardown + - do: + indices.delete: + index: test diff --git a/server/src/main/java/org/opensearch/index/mapper/ConstantKeywordFieldMapper.java b/server/src/main/java/org/opensearch/index/mapper/ConstantKeywordFieldMapper.java index f4730c70362d1..2edd817f61f61 100644 --- a/server/src/main/java/org/opensearch/index/mapper/ConstantKeywordFieldMapper.java +++ b/server/src/main/java/org/opensearch/index/mapper/ConstantKeywordFieldMapper.java @@ -68,11 +68,11 @@ private static ConstantKeywordFieldMapper toType(FieldMapper in) { */ public static class Builder extends ParametrizedFieldMapper.Builder { - private final Parameter value; + private final Parameter value = Parameter.stringParam(valuePropertyName, false, m -> toType(m).value, null); public Builder(String name, String value) { super(name); - this.value = Parameter.stringParam(valuePropertyName, false, m -> toType(m).value, value); + this.value.setValue(value); } @Override diff --git a/server/src/test/java/org/opensearch/index/mapper/ConstantKeywordFieldMapperTests.java b/server/src/test/java/org/opensearch/index/mapper/ConstantKeywordFieldMapperTests.java index 65dd3b6447663..e9d0b6d826ade 100644 --- a/server/src/test/java/org/opensearch/index/mapper/ConstantKeywordFieldMapperTests.java +++ b/server/src/test/java/org/opensearch/index/mapper/ConstantKeywordFieldMapperTests.java @@ -105,6 +105,14 @@ public void testMissingDefaultIndexMapper() throws Exception { assertThat(e.getMessage(), containsString("Field [field] is missing required parameter [value]")); } + public void testBuilderToXContent() throws IOException { + ConstantKeywordFieldMapper.Builder builder = new ConstantKeywordFieldMapper.Builder("name", "value1"); + XContentBuilder xContentBuilder = JsonXContent.contentBuilder().startObject(); + builder.toXContent(xContentBuilder, false); + xContentBuilder.endObject(); + assertEquals("{\"value\":\"value1\"}", xContentBuilder.toString()); + } + private final SourceToParse source(CheckedConsumer build) throws IOException { XContentBuilder builder = JsonXContent.contentBuilder().startObject(); build.accept(builder); From 18da095b5afee38a8e9ee4b6dc9a646130b6668d Mon Sep 17 00:00:00 2001 From: Shourya Dutta Biswas <114977491+shourya035@users.noreply.github.com> Date: Fri, 19 Jul 2024 11:56:07 +0530 Subject: [PATCH 082/167] [Remote Store Migration] Reconcile remote store based index settings during STRICT mode switch (#14792) Signed-off-by: Shourya Dutta Biswas <114977491+shourya035@users.noreply.github.com> --- .../MigrationBaseTestCase.java | 27 ++++ .../RemoteMigrationIndexMetadataUpdateIT.java | 67 +++++++++ .../TransportClusterUpdateSettingsAction.java | 51 +++---- .../index/remote/RemoteStoreUtils.java | 123 ++++++++++++++++ ...ransportClusterManagerNodeActionTests.java | 84 ----------- .../index/remote/RemoteStoreUtilsTests.java | 139 ++++++++++++++++++ 6 files changed, 375 insertions(+), 116 deletions(-) diff --git a/server/src/internalClusterTest/java/org/opensearch/remotemigration/MigrationBaseTestCase.java 
b/server/src/internalClusterTest/java/org/opensearch/remotemigration/MigrationBaseTestCase.java index 2bea36ed80c9f..e4e681a5433b5 100644 --- a/server/src/internalClusterTest/java/org/opensearch/remotemigration/MigrationBaseTestCase.java +++ b/server/src/internalClusterTest/java/org/opensearch/remotemigration/MigrationBaseTestCase.java @@ -46,6 +46,7 @@ import java.util.concurrent.atomic.AtomicBoolean; import java.util.concurrent.atomic.AtomicLong; +import static org.opensearch.cluster.metadata.IndexMetadata.SETTING_NUMBER_OF_REPLICAS; import static org.opensearch.cluster.routing.allocation.decider.EnableAllocationDecider.CLUSTER_ROUTING_REBALANCE_ENABLE_SETTING; import static org.opensearch.gateway.remote.RemoteClusterStateService.REMOTE_CLUSTER_STATE_ENABLED_SETTING; import static org.opensearch.node.remotestore.RemoteStoreNodeService.MIGRATION_DIRECTION_SETTING; @@ -277,4 +278,30 @@ protected IndexShard getIndexShard(String dataNode, String indexName) throws Exe IndexService indexService = indicesService.indexService(new Index(indexName, uuid)); return indexService.getShard(0); } + + public void changeReplicaCountAndEnsureGreen(int replicaCount, String indexName) { + assertAcked( + client().admin() + .indices() + .prepareUpdateSettings(indexName) + .setSettings(Settings.builder().put(SETTING_NUMBER_OF_REPLICAS, replicaCount)) + ); + ensureGreen(indexName); + } + + public void completeDocRepToRemoteMigration() { + assertTrue( + internalCluster().client() + .admin() + .cluster() + .prepareUpdateSettings() + .setPersistentSettings( + Settings.builder() + .putNull(REMOTE_STORE_COMPATIBILITY_MODE_SETTING.getKey()) + .putNull(MIGRATION_DIRECTION_SETTING.getKey()) + ) + .get() + .isAcknowledged() + ); + } } diff --git a/server/src/internalClusterTest/java/org/opensearch/remotemigration/RemoteMigrationIndexMetadataUpdateIT.java b/server/src/internalClusterTest/java/org/opensearch/remotemigration/RemoteMigrationIndexMetadataUpdateIT.java index 216c104dfecc1..b55219e1cb37f 100644 --- a/server/src/internalClusterTest/java/org/opensearch/remotemigration/RemoteMigrationIndexMetadataUpdateIT.java +++ b/server/src/internalClusterTest/java/org/opensearch/remotemigration/RemoteMigrationIndexMetadataUpdateIT.java @@ -546,6 +546,73 @@ public void testRemoteIndexPathFileExistsAfterMigration() throws Exception { assertTrue(Arrays.stream(files).anyMatch(file -> file.toString().contains(fileNamePrefix))); } + /** + * Scenario: + * Creates an index with 1 pri 1 rep setup with 3 docrep nodes (1 cluster manager + 2 data nodes), + * initiate migration and create 3 remote nodes (1 cluster manager + 2 data nodes) and moves over + * only primary shard copy of the index + * After the primary shard copy is relocated, decrease replica count to 0, stop all docrep nodes + * and conclude migration. Remote store index settings should be applied to the index at this point. 
+ */ + public void testIndexSettingsUpdateDuringReplicaCountDecrement() throws Exception { + String indexName = "migration-index-replica-decrement"; + String docrepClusterManager = internalCluster().startClusterManagerOnlyNode(); + + logger.info("---> Starting 2 docrep nodes"); + List docrepNodeNames = internalCluster().startDataOnlyNodes(2); + internalCluster().validateClusterFormed(); + + logger.info("---> Creating index with 1 primary and 1 replica"); + Settings oneReplica = Settings.builder() + .put(IndexMetadata.SETTING_NUMBER_OF_REPLICAS, 1) + .put(IndexMetadata.SETTING_NUMBER_OF_SHARDS, 1) + .build(); + createIndexAndAssertDocrepProperties(indexName, oneReplica); + + int docsToIndex = randomIntBetween(10, 100); + logger.info("---> Indexing {} on both indices", docsToIndex); + indexBulk(indexName, docsToIndex); + + logger.info( + "---> Stopping shard rebalancing to ensure shards do not automatically move over to newer nodes after they are launched" + ); + stopShardRebalancing(); + + logger.info("---> Starting 3 remote store enabled nodes"); + initDocRepToRemoteMigration(); + setAddRemote(true); + internalCluster().startClusterManagerOnlyNode(); + List remoteNodeNames = internalCluster().startDataOnlyNodes(2); + internalCluster().validateClusterFormed(); + + String primaryNode = primaryNodeName(indexName); + + logger.info("---> Moving over primary to remote store enabled nodes"); + assertAcked( + client().admin() + .cluster() + .prepareReroute() + .add(new MoveAllocationCommand(indexName, 0, primaryNode, remoteNodeNames.get(0))) + .execute() + .actionGet() + ); + waitForRelocation(); + waitNoPendingTasksOnAll(); + + logger.info("---> Reducing replica count to 0 for the index"); + changeReplicaCountAndEnsureGreen(0, indexName); + + logger.info("---> Stopping all docrep nodes"); + internalCluster().stopRandomNode(InternalTestCluster.nameFilter(docrepClusterManager)); + for (String node : docrepNodeNames) { + internalCluster().stopRandomNode(InternalTestCluster.nameFilter(node)); + } + internalCluster().validateClusterFormed(); + completeDocRepToRemoteMigration(); + waitNoPendingTasksOnAll(); + assertRemoteProperties(indexName); + } + private void createIndexAndAssertDocrepProperties(String index, Settings settings) { createIndexAssertHealthAndDocrepProperties(index, settings, this::ensureGreen); } diff --git a/server/src/main/java/org/opensearch/action/admin/cluster/settings/TransportClusterUpdateSettingsAction.java b/server/src/main/java/org/opensearch/action/admin/cluster/settings/TransportClusterUpdateSettingsAction.java index 216e1fb2ed1cc..3988d50b2ce1e 100644 --- a/server/src/main/java/org/opensearch/action/admin/cluster/settings/TransportClusterUpdateSettingsAction.java +++ b/server/src/main/java/org/opensearch/action/admin/cluster/settings/TransportClusterUpdateSettingsAction.java @@ -42,7 +42,6 @@ import org.opensearch.cluster.ClusterState; import org.opensearch.cluster.block.ClusterBlockException; import org.opensearch.cluster.block.ClusterBlockLevel; -import org.opensearch.cluster.metadata.IndexMetadata; import org.opensearch.cluster.metadata.IndexNameExpressionResolver; import org.opensearch.cluster.metadata.Metadata; import org.opensearch.cluster.node.DiscoveryNode; @@ -59,17 +58,13 @@ import org.opensearch.common.settings.SettingsException; import org.opensearch.core.action.ActionListener; import org.opensearch.core.common.io.stream.StreamInput; -import org.opensearch.index.remote.RemoteMigrationIndexMetadataUpdater; import 
org.opensearch.node.remotestore.RemoteStoreNodeService; import org.opensearch.threadpool.ThreadPool; import org.opensearch.transport.TransportService; import java.io.IOException; -import java.util.Collection; -import java.util.Set; -import java.util.stream.Collectors; -import static org.opensearch.index.remote.RemoteMigrationIndexMetadataUpdater.indexHasAllRemoteStoreRelatedMetadata; +import static org.opensearch.index.remote.RemoteStoreUtils.checkAndFinalizeRemoteStoreMigration; /** * Transport action for updating cluster settings @@ -262,13 +257,14 @@ public void onFailure(String source, Exception e) { @Override public ClusterState execute(final ClusterState currentState) { - validateCompatibilityModeSettingRequest(request, state); - final ClusterState clusterState = updater.updateSettings( + boolean isCompatibilityModeChanging = validateCompatibilityModeSettingRequest(request, state); + ClusterState clusterState = updater.updateSettings( currentState, clusterSettings.upgradeSettings(request.transientSettings()), clusterSettings.upgradeSettings(request.persistentSettings()), logger ); + clusterState = checkAndFinalizeRemoteStoreMigration(isCompatibilityModeChanging, request, clusterState, logger); changed = clusterState != currentState; return clusterState; } @@ -278,19 +274,23 @@ public ClusterState execute(final ClusterState currentState) { /** * Runs various checks associated with changing cluster compatibility mode + * * @param request cluster settings update request, for settings to be updated and new values * @param clusterState current state of cluster, for information on nodes + * @return true if the incoming cluster settings update request is switching compatibility modes */ - public void validateCompatibilityModeSettingRequest(ClusterUpdateSettingsRequest request, ClusterState clusterState) { + public boolean validateCompatibilityModeSettingRequest(ClusterUpdateSettingsRequest request, ClusterState clusterState) { Settings settings = Settings.builder().put(request.persistentSettings()).put(request.transientSettings()).build(); if (RemoteStoreNodeService.REMOTE_STORE_COMPATIBILITY_MODE_SETTING.exists(settings)) { - String value = RemoteStoreNodeService.REMOTE_STORE_COMPATIBILITY_MODE_SETTING.get(settings).mode; validateAllNodesOfSameVersion(clusterState.nodes()); - if (RemoteStoreNodeService.CompatibilityMode.STRICT.mode.equals(value)) { + if (RemoteStoreNodeService.REMOTE_STORE_COMPATIBILITY_MODE_SETTING.get( + settings + ) == RemoteStoreNodeService.CompatibilityMode.STRICT) { validateAllNodesOfSameType(clusterState.nodes()); - validateIndexSettings(clusterState); } + return true; } + return false; } /** @@ -310,31 +310,18 @@ private void validateAllNodesOfSameVersion(DiscoveryNodes discoveryNodes) { * @param discoveryNodes current discovery nodes in the cluster */ private void validateAllNodesOfSameType(DiscoveryNodes discoveryNodes) { - Set nodeTypes = discoveryNodes.getNodes() + boolean allNodesDocrepEnabled = discoveryNodes.getNodes() .values() .stream() - .map(DiscoveryNode::isRemoteStoreNode) - .collect(Collectors.toSet()); - if (nodeTypes.size() != 1) { + .allMatch(discoveryNode -> discoveryNode.isRemoteStoreNode() == false); + boolean allNodesRemoteStoreEnabled = discoveryNodes.getNodes() + .values() + .stream() + .allMatch(discoveryNode -> discoveryNode.isRemoteStoreNode()); + if (allNodesDocrepEnabled == false && allNodesRemoteStoreEnabled == false) { throw new SettingsException( "can not switch to STRICT compatibility mode when the cluster contains both remote and 
non-remote nodes"
             );
         }
     }
-
-    /**
-     * Verifies that while trying to switch to STRICT compatibility mode,
-     * all indices in the cluster have {@link RemoteMigrationIndexMetadataUpdater#indexHasAllRemoteStoreRelatedMetadata(IndexMetadata)} as true.
-     * If not, throws {@link SettingsException}
-     * @param clusterState current cluster state
-     */
-    private void validateIndexSettings(ClusterState clusterState) {
-        Collection<IndexMetadata> allIndicesMetadata = clusterState.metadata().indices().values();
-        if (allIndicesMetadata.isEmpty() == false
-            && allIndicesMetadata.stream().anyMatch(indexMetadata -> indexHasAllRemoteStoreRelatedMetadata(indexMetadata) == false)) {
-            throw new SettingsException(
-                "can not switch to STRICT compatibility mode since all indices in the cluster does not have remote store based index settings"
-            );
-        }
-    }
 }
diff --git a/server/src/main/java/org/opensearch/index/remote/RemoteStoreUtils.java b/server/src/main/java/org/opensearch/index/remote/RemoteStoreUtils.java
index 654e554c96bf0..a5e0c10f72301 100644
--- a/server/src/main/java/org/opensearch/index/remote/RemoteStoreUtils.java
+++ b/server/src/main/java/org/opensearch/index/remote/RemoteStoreUtils.java
@@ -11,17 +11,23 @@
 import org.apache.logging.log4j.LogManager;
 import org.apache.logging.log4j.Logger;
 import org.opensearch.Version;
+import org.opensearch.action.admin.cluster.settings.ClusterUpdateSettingsRequest;
 import org.opensearch.cluster.ClusterState;
 import org.opensearch.cluster.metadata.IndexMetadata;
+import org.opensearch.cluster.metadata.Metadata;
 import org.opensearch.cluster.node.DiscoveryNode;
 import org.opensearch.cluster.node.DiscoveryNodes;
+import org.opensearch.cluster.routing.RoutingTable;
 import org.opensearch.common.collect.Tuple;
 import org.opensearch.common.settings.Settings;
 import org.opensearch.node.remotestore.RemoteStoreNodeAttribute;
+import org.opensearch.node.remotestore.RemoteStoreNodeService;

 import java.nio.ByteBuffer;
 import java.util.Arrays;
 import java.util.Base64;
+import java.util.Collection;
+import java.util.Collections;
 import java.util.HashMap;
 import java.util.List;
 import java.util.Locale;
@@ -29,7 +35,9 @@
 import java.util.Objects;
 import java.util.Optional;
 import java.util.function.Function;
+import java.util.stream.Collectors;

+import static org.opensearch.index.remote.RemoteMigrationIndexMetadataUpdater.indexHasRemoteStoreSettings;
 import static org.opensearch.indices.RemoteStoreSettings.CLUSTER_REMOTE_STORE_PATH_HASH_ALGORITHM_SETTING;
 import static org.opensearch.indices.RemoteStoreSettings.CLUSTER_REMOTE_STORE_PATH_TYPE_SETTING;
 import static org.opensearch.indices.RemoteStoreSettings.CLUSTER_REMOTE_STORE_TRANSLOG_METADATA;
@@ -250,4 +258,119 @@ public static Map getRemoteStoreRepoName(DiscoveryNodes discover
             .findFirst();
         return remoteNode.map(RemoteStoreNodeAttribute::getDataRepoNames).orElseGet(HashMap::new);
     }
+
+    /**
+     * Invoked after a cluster settings update.
+     * Checks if the applied cluster settings have switched the cluster to STRICT mode.
+     * If so, checks and applies appropriate index settings depending on the current set
+     * of node types in the cluster.
+     * This has been intentionally done after the cluster settings update
+     * flow. That way we are not interfering with the usual settings update
+     * and the cluster state mutation that comes along with it.
+     *
+     * @param isCompatibilityModeChanging flag passed from cluster settings update call to denote if a compatibility mode change has been done
+     * @param request request payload passed from cluster settings update
+     * @param currentState cluster state generated after changing cluster settings were applied
+     * @param logger Logger reference
+     * @return Mutated cluster state with remote store index settings applied, no-op if the cluster is not switching to `STRICT` compatibility mode
+     */
+    public static ClusterState checkAndFinalizeRemoteStoreMigration(
+        boolean isCompatibilityModeChanging,
+        ClusterUpdateSettingsRequest request,
+        ClusterState currentState,
+        Logger logger
+    ) {
+        if (isCompatibilityModeChanging && isSwitchToStrictCompatibilityMode(request)) {
+            return finalizeMigration(currentState, logger);
+        }
+        return currentState;
+    }
+
+    /**
+     * Finalizes the docrep to remote-store migration process by applying remote store based index settings
+     * on indices that are missing them. No-op if all indices already have the settings applied through
+     * IndexMetadataUpdater
+     *
+     * @param incomingState mutated cluster state after cluster settings were applied
+     * @return new cluster state with index settings updated
+     */
+    public static ClusterState finalizeMigration(ClusterState incomingState, Logger logger) {
+        Map<String, DiscoveryNode> discoveryNodeMap = incomingState.nodes().getNodes();
+        if (discoveryNodeMap.isEmpty() == false) {
+            // At this point, we have already validated that all nodes in the cluster are of uniform type.
+            // Either all of them are remote store enabled, or all of them are docrep enabled
+            boolean remoteStoreEnabledNodePresent = discoveryNodeMap.values().stream().findFirst().get().isRemoteStoreNode();
+            if (remoteStoreEnabledNodePresent == true) {
+                List<IndexMetadata> indicesWithoutRemoteStoreSettings = getIndicesWithoutRemoteStoreSettings(incomingState, logger);
+                if (indicesWithoutRemoteStoreSettings.isEmpty() == true) {
+                    logger.info("All indices in the cluster have remote store based index settings");
+                } else {
+                    Metadata mutatedMetadata = applyRemoteStoreSettings(incomingState, indicesWithoutRemoteStoreSettings, logger);
+                    return ClusterState.builder(incomingState).metadata(mutatedMetadata).build();
+                }
+            } else {
+                logger.debug("All nodes in the cluster are not remote nodes. Skipping.");
+            }
+        }
+        return incomingState;
+    }
+
+    /**
+     * Filters out indices which do not have remote store based
+     * index settings applied even after all shard copies have
+     * migrated to remote store enabled nodes
+     */
+    private static List<IndexMetadata> getIndicesWithoutRemoteStoreSettings(ClusterState clusterState, Logger logger) {
+        Collection<IndexMetadata> allIndicesMetadata = clusterState.metadata().indices().values();
+        if (allIndicesMetadata.isEmpty() == false) {
+            List<IndexMetadata> indicesWithoutRemoteSettings = allIndicesMetadata.stream()
+                .filter(idxMd -> indexHasRemoteStoreSettings(idxMd.getSettings()) == false)
+                .collect(Collectors.toList());
+            logger.debug(
+                "Attempting to switch to strict mode.
Count of indices without remote store settings {}", + indicesWithoutRemoteSettings.size() + ); + return indicesWithoutRemoteSettings; + } + return Collections.emptyList(); + } + + /** + * Applies remote store index settings through {@link RemoteMigrationIndexMetadataUpdater} + */ + private static Metadata applyRemoteStoreSettings( + ClusterState clusterState, + List indicesWithoutRemoteStoreSettings, + Logger logger + ) { + Metadata.Builder metadataBuilder = Metadata.builder(clusterState.getMetadata()); + RoutingTable currentRoutingTable = clusterState.getRoutingTable(); + DiscoveryNodes currentDiscoveryNodes = clusterState.getNodes(); + Settings currentClusterSettings = clusterState.metadata().settings(); + for (IndexMetadata indexMetadata : indicesWithoutRemoteStoreSettings) { + IndexMetadata.Builder indexMetadataBuilder = IndexMetadata.builder(indexMetadata); + RemoteMigrationIndexMetadataUpdater indexMetadataUpdater = new RemoteMigrationIndexMetadataUpdater( + currentDiscoveryNodes, + currentRoutingTable, + indexMetadata, + currentClusterSettings, + logger + ); + indexMetadataUpdater.maybeAddRemoteIndexSettings(indexMetadataBuilder, indexMetadata.getIndex().getName()); + metadataBuilder.put(indexMetadataBuilder); + } + return metadataBuilder.build(); + } + + /** + * Checks if the incoming cluster settings payload is attempting to switch + * the cluster to `STRICT` compatibility mode + * Visible only for tests + */ + public static boolean isSwitchToStrictCompatibilityMode(ClusterUpdateSettingsRequest request) { + Settings incomingSettings = Settings.builder().put(request.persistentSettings()).put(request.transientSettings()).build(); + return RemoteStoreNodeService.REMOTE_STORE_COMPATIBILITY_MODE_SETTING.get( + incomingSettings + ) == RemoteStoreNodeService.CompatibilityMode.STRICT; + } } diff --git a/server/src/test/java/org/opensearch/action/support/clustermanager/TransportClusterManagerNodeActionTests.java b/server/src/test/java/org/opensearch/action/support/clustermanager/TransportClusterManagerNodeActionTests.java index 35c5c5e605b4d..37e884502b613 100644 --- a/server/src/test/java/org/opensearch/action/support/clustermanager/TransportClusterManagerNodeActionTests.java +++ b/server/src/test/java/org/opensearch/action/support/clustermanager/TransportClusterManagerNodeActionTests.java @@ -47,7 +47,6 @@ import org.opensearch.common.settings.Settings; import org.opensearch.common.settings.SettingsException; import org.opensearch.common.unit.TimeValue; -import org.opensearch.common.util.FeatureFlags; import org.opensearch.common.util.concurrent.ThreadContext; import org.opensearch.core.action.ActionListener; import org.opensearch.core.action.ActionResponse; @@ -85,8 +84,6 @@ import java.util.concurrent.TimeUnit; import java.util.concurrent.atomic.AtomicBoolean; -import static org.opensearch.common.util.FeatureFlags.REMOTE_STORE_MIGRATION_EXPERIMENTAL; -import static org.opensearch.index.remote.RemoteMigrationIndexMetadataUpdaterTests.createIndexMetadataWithDocrepSettings; import static org.opensearch.index.remote.RemoteMigrationIndexMetadataUpdaterTests.createIndexMetadataWithRemoteStoreSettings; import static org.opensearch.node.remotestore.RemoteStoreNodeAttribute.REMOTE_STORE_SEGMENT_REPOSITORY_NAME_ATTRIBUTE_KEY; import static org.opensearch.node.remotestore.RemoteStoreNodeAttribute.REMOTE_STORE_TRANSLOG_REPOSITORY_NAME_ATTRIBUTE_KEY; @@ -718,9 +715,6 @@ protected void masterOperation(Task task, Request request, ClusterState state, A } public void 
testDontAllowSwitchingToStrictCompatibilityModeForMixedCluster() { - Settings nodeSettings = Settings.builder().put(REMOTE_STORE_MIGRATION_EXPERIMENTAL, "true").build(); - FeatureFlags.initializeFeatureFlags(nodeSettings); - // request to change cluster compatibility mode to STRICT Settings currentCompatibilityModeSettings = Settings.builder() .put(REMOTE_STORE_COMPATIBILITY_MODE_SETTING.getKey(), RemoteStoreNodeService.CompatibilityMode.MIXED) @@ -809,84 +803,7 @@ public void testDontAllowSwitchingToStrictCompatibilityModeForMixedCluster() { transportClusterUpdateSettingsAction.validateCompatibilityModeSettingRequest(request, sameTypeClusterState); } - public void testDontAllowSwitchingToStrictCompatibilityModeWithoutRemoteIndexSettings() { - Settings nodeSettings = Settings.builder().put(REMOTE_STORE_MIGRATION_EXPERIMENTAL, "true").build(); - FeatureFlags.initializeFeatureFlags(nodeSettings); - Settings currentCompatibilityModeSettings = Settings.builder() - .put(REMOTE_STORE_COMPATIBILITY_MODE_SETTING.getKey(), RemoteStoreNodeService.CompatibilityMode.MIXED) - .build(); - Settings intendedCompatibilityModeSettings = Settings.builder() - .put(REMOTE_STORE_COMPATIBILITY_MODE_SETTING.getKey(), RemoteStoreNodeService.CompatibilityMode.STRICT) - .build(); - ClusterUpdateSettingsRequest request = new ClusterUpdateSettingsRequest(); - request.persistentSettings(intendedCompatibilityModeSettings); - DiscoveryNode remoteNode1 = new DiscoveryNode( - UUIDs.base64UUID(), - buildNewFakeTransportAddress(), - getRemoteStoreNodeAttributes(), - DiscoveryNodeRole.BUILT_IN_ROLES, - Version.CURRENT - ); - DiscoveryNode remoteNode2 = new DiscoveryNode( - UUIDs.base64UUID(), - buildNewFakeTransportAddress(), - getRemoteStoreNodeAttributes(), - DiscoveryNodeRole.BUILT_IN_ROLES, - Version.CURRENT - ); - DiscoveryNodes discoveryNodes = DiscoveryNodes.builder() - .add(remoteNode1) - .localNodeId(remoteNode1.getId()) - .add(remoteNode2) - .localNodeId(remoteNode2.getId()) - .build(); - AllocationService allocationService = new AllocationService( - new AllocationDeciders(Collections.singleton(new MaxRetryAllocationDecider())), - new TestGatewayAllocator(), - new BalancedShardsAllocator(Settings.EMPTY), - EmptyClusterInfoService.INSTANCE, - EmptySnapshotsInfoService.INSTANCE - ); - TransportClusterUpdateSettingsAction transportClusterUpdateSettingsAction = new TransportClusterUpdateSettingsAction( - transportService, - clusterService, - threadPool, - allocationService, - new ActionFilters(Collections.emptySet()), - new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY)), - clusterService.getClusterSettings() - ); - - Metadata nonRemoteIndexMd = Metadata.builder(createIndexMetadataWithDocrepSettings("test")) - .persistentSettings(currentCompatibilityModeSettings) - .build(); - final ClusterState clusterState = ClusterState.builder(ClusterName.DEFAULT) - .metadata(nonRemoteIndexMd) - .nodes(discoveryNodes) - .build(); - final SettingsException exception = expectThrows( - SettingsException.class, - () -> transportClusterUpdateSettingsAction.validateCompatibilityModeSettingRequest(request, clusterState) - ); - assertEquals( - "can not switch to STRICT compatibility mode since all indices in the cluster does not have remote store based index settings", - exception.getMessage() - ); - - Metadata remoteIndexMd = Metadata.builder(createIndexMetadataWithRemoteStoreSettings("test")) - .persistentSettings(currentCompatibilityModeSettings) - .build(); - ClusterState clusterStateWithRemoteIndices = 
ClusterState.builder(ClusterName.DEFAULT) - .metadata(remoteIndexMd) - .nodes(discoveryNodes) - .build(); - transportClusterUpdateSettingsAction.validateCompatibilityModeSettingRequest(request, clusterStateWithRemoteIndices); - } - public void testDontAllowSwitchingCompatibilityModeForClusterWithMultipleVersions() { - Settings nodeSettings = Settings.builder().put(REMOTE_STORE_MIGRATION_EXPERIMENTAL, "true").build(); - FeatureFlags.initializeFeatureFlags(nodeSettings); - // request to change cluster compatibility mode boolean toStrictMode = randomBoolean(); Settings currentCompatibilityModeSettings = Settings.builder() @@ -988,5 +905,4 @@ private Map getRemoteStoreNodeAttributes() { remoteStoreNodeAttributes.put(REMOTE_STORE_TRANSLOG_REPOSITORY_NAME_ATTRIBUTE_KEY, "my-translog-repo-1"); return remoteStoreNodeAttributes; } - } diff --git a/server/src/test/java/org/opensearch/index/remote/RemoteStoreUtilsTests.java b/server/src/test/java/org/opensearch/index/remote/RemoteStoreUtilsTests.java index 15915ee431972..ec48032df4a15 100644 --- a/server/src/test/java/org/opensearch/index/remote/RemoteStoreUtilsTests.java +++ b/server/src/test/java/org/opensearch/index/remote/RemoteStoreUtilsTests.java @@ -9,15 +9,29 @@ package org.opensearch.index.remote; import org.opensearch.Version; +import org.opensearch.action.admin.cluster.settings.ClusterUpdateSettingsRequest; +import org.opensearch.cluster.ClusterName; +import org.opensearch.cluster.ClusterState; import org.opensearch.cluster.metadata.IndexMetadata; import org.opensearch.cluster.metadata.Metadata; +import org.opensearch.cluster.node.DiscoveryNode; +import org.opensearch.cluster.node.DiscoveryNodeRole; import org.opensearch.cluster.node.DiscoveryNodes; +import org.opensearch.cluster.routing.IndexRoutingTable; +import org.opensearch.cluster.routing.IndexShardRoutingTable; +import org.opensearch.cluster.routing.RoutingTable; +import org.opensearch.cluster.routing.ShardRoutingState; +import org.opensearch.cluster.routing.TestShardRouting; +import org.opensearch.common.UUIDs; import org.opensearch.common.blobstore.BlobMetadata; import org.opensearch.common.blobstore.support.PlainBlobMetadata; import org.opensearch.common.settings.Settings; +import org.opensearch.core.index.Index; +import org.opensearch.core.index.shard.ShardId; import org.opensearch.index.shard.IndexShardTestUtils; import org.opensearch.index.store.RemoteSegmentStoreDirectory; import org.opensearch.index.translog.transfer.TranslogTransferMetadata; +import org.opensearch.indices.replication.common.ReplicationType; import org.opensearch.node.remotestore.RemoteStoreNodeAttribute; import org.opensearch.test.OpenSearchTestCase; @@ -28,11 +42,15 @@ import java.util.LinkedList; import java.util.List; import java.util.Map; +import java.util.UUID; import java.util.stream.Collectors; import static org.opensearch.cluster.metadata.IndexMetadata.REMOTE_STORE_CUSTOM_KEY; +import static org.opensearch.index.remote.RemoteMigrationIndexMetadataUpdaterTests.createIndexMetadataWithDocrepSettings; import static org.opensearch.index.remote.RemoteStoreUtils.URL_BASE64_CHARSET; import static org.opensearch.index.remote.RemoteStoreUtils.determineTranslogMetadataEnabled; +import static org.opensearch.index.remote.RemoteStoreUtils.finalizeMigration; +import static org.opensearch.index.remote.RemoteStoreUtils.isSwitchToStrictCompatibilityMode; import static org.opensearch.index.remote.RemoteStoreUtils.longToCompositeBase64AndBinaryEncoding; import static 
org.opensearch.index.remote.RemoteStoreUtils.longToUrlBase64; import static org.opensearch.index.remote.RemoteStoreUtils.urlBase64ToLong; @@ -42,6 +60,9 @@ import static org.opensearch.index.store.RemoteSegmentStoreDirectory.MetadataFilenameUtils.METADATA_PREFIX; import static org.opensearch.index.store.RemoteSegmentStoreDirectory.MetadataFilenameUtils.SEPARATOR; import static org.opensearch.index.translog.transfer.TranslogTransferMetadata.METADATA_SEPARATOR; +import static org.opensearch.node.remotestore.RemoteStoreNodeAttribute.REMOTE_STORE_SEGMENT_REPOSITORY_NAME_ATTRIBUTE_KEY; +import static org.opensearch.node.remotestore.RemoteStoreNodeAttribute.REMOTE_STORE_TRANSLOG_REPOSITORY_NAME_ATTRIBUTE_KEY; +import static org.opensearch.node.remotestore.RemoteStoreNodeService.REMOTE_STORE_COMPATIBILITY_MODE_SETTING; public class RemoteStoreUtilsTests extends OpenSearchTestCase { @@ -398,4 +419,122 @@ private static Map getCustomDataMap(int option) { ); } + public void testFinalizeMigrationWithAllRemoteNodes() { + String migratedIndex = "migrated-index"; + Settings mockSettings = Settings.builder().put(REMOTE_STORE_COMPATIBILITY_MODE_SETTING.getKey(), "strict").build(); + DiscoveryNode remoteNode1 = new DiscoveryNode( + UUIDs.base64UUID(), + buildNewFakeTransportAddress(), + getRemoteStoreNodeAttributes(), + DiscoveryNodeRole.BUILT_IN_ROLES, + Version.CURRENT + ); + DiscoveryNode remoteNode2 = new DiscoveryNode( + UUIDs.base64UUID(), + buildNewFakeTransportAddress(), + getRemoteStoreNodeAttributes(), + DiscoveryNodeRole.BUILT_IN_ROLES, + Version.CURRENT + ); + DiscoveryNodes discoveryNodes = DiscoveryNodes.builder() + .add(remoteNode1) + .localNodeId(remoteNode1.getId()) + .add(remoteNode2) + .localNodeId(remoteNode2.getId()) + .build(); + Metadata docrepIdxMetadata = createIndexMetadataWithDocrepSettings(migratedIndex); + assertDocrepSettingsApplied(docrepIdxMetadata.index(migratedIndex)); + Metadata remoteIndexMd = Metadata.builder(docrepIdxMetadata).persistentSettings(mockSettings).build(); + ClusterState clusterStateWithDocrepIndexSettings = ClusterState.builder(ClusterName.DEFAULT) + .metadata(remoteIndexMd) + .nodes(discoveryNodes) + .routingTable(createRoutingTableAllShardsStarted(migratedIndex, 1, 1, remoteNode1, remoteNode2)) + .build(); + Metadata mutatedMetadata = finalizeMigration(clusterStateWithDocrepIndexSettings, logger).metadata(); + assertTrue(mutatedMetadata.index(migratedIndex).getVersion() > docrepIdxMetadata.index(migratedIndex).getVersion()); + assertRemoteSettingsApplied(mutatedMetadata.index(migratedIndex)); + } + + public void testFinalizeMigrationWithAllDocrepNodes() { + String docrepIndex = "docrep-index"; + Settings mockSettings = Settings.builder().put(REMOTE_STORE_COMPATIBILITY_MODE_SETTING.getKey(), "strict").build(); + DiscoveryNode docrepNode1 = new DiscoveryNode(UUIDs.base64UUID(), buildNewFakeTransportAddress(), Version.CURRENT); + DiscoveryNode docrepNode2 = new DiscoveryNode(UUIDs.base64UUID(), buildNewFakeTransportAddress(), Version.CURRENT); + DiscoveryNodes discoveryNodes = DiscoveryNodes.builder() + .add(docrepNode1) + .localNodeId(docrepNode1.getId()) + .add(docrepNode2) + .localNodeId(docrepNode2.getId()) + .build(); + Metadata docrepIdxMetadata = createIndexMetadataWithDocrepSettings(docrepIndex); + assertDocrepSettingsApplied(docrepIdxMetadata.index(docrepIndex)); + Metadata remoteIndexMd = Metadata.builder(docrepIdxMetadata).persistentSettings(mockSettings).build(); + ClusterState clusterStateWithDocrepIndexSettings = 
ClusterState.builder(ClusterName.DEFAULT) + .metadata(remoteIndexMd) + .nodes(discoveryNodes) + .routingTable(createRoutingTableAllShardsStarted(docrepIndex, 1, 1, docrepNode1, docrepNode2)) + .build(); + Metadata mutatedMetadata = finalizeMigration(clusterStateWithDocrepIndexSettings, logger).metadata(); + assertEquals(docrepIdxMetadata.index(docrepIndex).getVersion(), mutatedMetadata.index(docrepIndex).getVersion()); + assertDocrepSettingsApplied(mutatedMetadata.index(docrepIndex)); + } + + public void testIsSwitchToStrictCompatibilityMode() { + Settings mockSettings = Settings.builder().put(REMOTE_STORE_COMPATIBILITY_MODE_SETTING.getKey(), "strict").build(); + ClusterUpdateSettingsRequest request = new ClusterUpdateSettingsRequest(); + request.persistentSettings(mockSettings); + assertTrue(isSwitchToStrictCompatibilityMode(request)); + + mockSettings = Settings.builder().put(REMOTE_STORE_COMPATIBILITY_MODE_SETTING.getKey(), "mixed").build(); + request.persistentSettings(mockSettings); + assertFalse(isSwitchToStrictCompatibilityMode(request)); + } + + private void assertRemoteSettingsApplied(IndexMetadata indexMetadata) { + assertTrue(IndexMetadata.INDEX_REMOTE_STORE_ENABLED_SETTING.get(indexMetadata.getSettings())); + assertTrue(IndexMetadata.INDEX_REMOTE_TRANSLOG_REPOSITORY_SETTING.exists(indexMetadata.getSettings())); + assertTrue(IndexMetadata.INDEX_REMOTE_SEGMENT_STORE_REPOSITORY_SETTING.exists(indexMetadata.getSettings())); + assertEquals(ReplicationType.SEGMENT, IndexMetadata.INDEX_REPLICATION_TYPE_SETTING.get(indexMetadata.getSettings())); + } + + private void assertDocrepSettingsApplied(IndexMetadata indexMetadata) { + assertFalse(IndexMetadata.INDEX_REMOTE_STORE_ENABLED_SETTING.get(indexMetadata.getSettings())); + assertFalse(IndexMetadata.INDEX_REMOTE_TRANSLOG_REPOSITORY_SETTING.exists(indexMetadata.getSettings())); + assertFalse(IndexMetadata.INDEX_REMOTE_SEGMENT_STORE_REPOSITORY_SETTING.exists(indexMetadata.getSettings())); + assertEquals(ReplicationType.DOCUMENT, IndexMetadata.INDEX_REPLICATION_TYPE_SETTING.get(indexMetadata.getSettings())); + } + + private RoutingTable createRoutingTableAllShardsStarted( + String indexName, + int numberOfShards, + int numberOfReplicas, + DiscoveryNode primaryHostingNode, + DiscoveryNode replicaHostingNode + ) { + RoutingTable.Builder builder = RoutingTable.builder(); + Index index = new Index(indexName, UUID.randomUUID().toString()); + + IndexRoutingTable.Builder indexRoutingTableBuilder = IndexRoutingTable.builder(index); + for (int i = 0; i < numberOfShards; i++) { + ShardId shardId = new ShardId(index, i); + IndexShardRoutingTable.Builder indexShardRoutingTable = new IndexShardRoutingTable.Builder(shardId); + indexShardRoutingTable.addShard( + TestShardRouting.newShardRouting(shardId, primaryHostingNode.getId(), true, ShardRoutingState.STARTED) + ); + for (int j = 0; j < numberOfReplicas; j++) { + indexShardRoutingTable.addShard( + TestShardRouting.newShardRouting(shardId, replicaHostingNode.getId(), false, ShardRoutingState.STARTED) + ); + } + indexRoutingTableBuilder.addIndexShard(indexShardRoutingTable.build()); + } + return builder.add(indexRoutingTableBuilder.build()).build(); + } + + private Map getRemoteStoreNodeAttributes() { + Map remoteStoreNodeAttributes = new HashMap<>(); + remoteStoreNodeAttributes.put(REMOTE_STORE_SEGMENT_REPOSITORY_NAME_ATTRIBUTE_KEY, "my-segment-repo-1"); + remoteStoreNodeAttributes.put(REMOTE_STORE_TRANSLOG_REPOSITORY_NAME_ATTRIBUTE_KEY, "my-translog-repo-1"); + return remoteStoreNodeAttributes; + } } 
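
Taken together, the changes in this patch mean the STRICT-mode switch is recognized purely from the shape of the incoming settings update. The sketch below shows the request that makes `isSwitchToStrictCompatibilityMode` return `true`; the wrapper class name is invented for illustration, while the request API and setting key are the ones exercised in the tests above:

```java
import org.opensearch.action.admin.cluster.settings.ClusterUpdateSettingsRequest;
import org.opensearch.common.settings.Settings;
import org.opensearch.node.remotestore.RemoteStoreNodeService;

// Illustrative helper, not part of the patch.
public class StrictModeSwitchSketch {

    // Builds the settings update that flips a cluster to STRICT compatibility mode.
    // When the cluster manager applies a request shaped like this,
    // validateCompatibilityModeSettingRequest(...) reports a mode change and
    // RemoteStoreUtils.checkAndFinalizeRemoteStoreMigration(...) backfills remote
    // store index settings on any index that is still missing them.
    public static ClusterUpdateSettingsRequest strictModeRequest() {
        ClusterUpdateSettingsRequest request = new ClusterUpdateSettingsRequest();
        request.persistentSettings(
            Settings.builder()
                .put(RemoteStoreNodeService.REMOTE_STORE_COMPATIBILITY_MODE_SETTING.getKey(), "strict")
                .build()
        );
        return request;
    }
}
```

Submitting the same request with `"mixed"` instead leaves the index metadata untouched, which is what `testIsSwitchToStrictCompatibilityMode` above asserts.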
From e288962d2870efa28b3dc67f5bcb7ad1a2f9d57f Mon Sep 17 00:00:00 2001 From: Ashish Singh Date: Fri, 19 Jul 2024 20:35:08 +0530 Subject: [PATCH 083/167] Add prefix mode verification setting for repository verification (#14790) * Add prefix mode verification setting for repository verification Signed-off-by: Ashish Singh * Add UTs and randomise prefix mode repository verification Signed-off-by: Ashish Singh * Incorporate PR review feedback Signed-off-by: Ashish Singh --------- Signed-off-by: Ashish Singh --- CHANGELOG.md | 1 + .../blobstore/BlobStoreRepository.java | 43 +++++++++++++++++-- .../blobstore/BlobStoreRepositoryTests.java | 22 ++++++++++ .../test/OpenSearchIntegTestCase.java | 11 ++++- 4 files changed, 72 insertions(+), 5 deletions(-) diff --git a/CHANGELOG.md b/CHANGELOG.md index 98b4f520a5bfb..a173a8a2d5ed9 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -20,6 +20,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), - Add matchesPluginSystemIndexPattern to SystemIndexRegistry ([#14750](https://github.com/opensearch-project/OpenSearch/pull/14750)) - Add Plugin interface for loading application based configuration templates (([#14659](https://github.com/opensearch-project/OpenSearch/issues/14659))) - Refactor remote-routing-table service inline with remote state interfaces([#14668](https://github.com/opensearch-project/OpenSearch/pull/14668)) +- Add prefix mode verification setting for repository verification (([#14790](https://github.com/opensearch-project/OpenSearch/pull/14790))) ### Dependencies - Bump `org.gradle.test-retry` from 1.5.8 to 1.5.9 ([#13442](https://github.com/opensearch-project/OpenSearch/pull/13442)) diff --git a/server/src/main/java/org/opensearch/repositories/blobstore/BlobStoreRepository.java b/server/src/main/java/org/opensearch/repositories/blobstore/BlobStoreRepository.java index 02290b6a5e566..c4908f8c5fc4b 100644 --- a/server/src/main/java/org/opensearch/repositories/blobstore/BlobStoreRepository.java +++ b/server/src/main/java/org/opensearch/repositories/blobstore/BlobStoreRepository.java @@ -109,6 +109,7 @@ import org.opensearch.core.xcontent.XContentParser; import org.opensearch.index.mapper.MapperService; import org.opensearch.index.remote.RemoteStorePathStrategy; +import org.opensearch.index.remote.RemoteStorePathStrategy.PathInput; import org.opensearch.index.snapshots.IndexShardRestoreFailedException; import org.opensearch.index.snapshots.IndexShardSnapshotStatus; import org.opensearch.index.snapshots.blobstore.BlobStoreIndexShardSnapshot; @@ -157,6 +158,7 @@ import java.util.List; import java.util.Locale; import java.util.Map; +import java.util.Objects; import java.util.Optional; import java.util.Set; import java.util.concurrent.BlockingQueue; @@ -174,6 +176,8 @@ import java.util.stream.LongStream; import java.util.stream.Stream; +import static org.opensearch.index.remote.RemoteStoreEnums.PathHashAlgorithm.FNV_1A_COMPOSITE_1; +import static org.opensearch.index.remote.RemoteStoreEnums.PathType.HASHED_PREFIX; import static org.opensearch.index.snapshots.blobstore.BlobStoreIndexShardSnapshot.FileInfo.canonicalName; import static org.opensearch.repositories.blobstore.ChecksumBlobStoreFormat.SNAPSHOT_ONLY_FORMAT_PARAMS; @@ -302,6 +306,16 @@ public abstract class BlobStoreRepository extends AbstractLifecycleComponent imp Setting.Property.NodeScope ); + /** + * Setting to enable prefix mode verification. 
In this mode, a hashed string is prepended at the prefix of the base + * path during repository verification. + */ + public static final Setting PREFIX_MODE_VERIFICATION_SETTING = Setting.boolSetting( + "prefix_mode_verification", + false, + Setting.Property.NodeScope + ); + protected volatile boolean supportURLRepo; private volatile int maxShardBlobDeleteBatch; @@ -369,6 +383,8 @@ public abstract class BlobStoreRepository extends AbstractLifecycleComponent imp private final boolean isSystemRepository; + private final boolean prefixModeVerification; + private final Object lock = new Object(); private final SetOnce blobContainer = new SetOnce<>(); @@ -426,6 +442,7 @@ protected BlobStoreRepository( readRepositoryMetadata(repositoryMetadata); isSystemRepository = SYSTEM_REPOSITORY_SETTING.get(metadata.settings()); + prefixModeVerification = PREFIX_MODE_VERIFICATION_SETTING.get(metadata.settings()); this.namedXContentRegistry = namedXContentRegistry; this.threadPool = clusterService.getClusterApplierService().threadPool(); this.clusterService = clusterService; @@ -767,6 +784,10 @@ protected BlobStore getBlobStore() { return blobStore.get(); } + boolean getPrefixModeVerification() { + return prefixModeVerification; + } + /** * maintains single lazy instance of {@link BlobContainer} */ @@ -1918,7 +1939,7 @@ public String startVerification() { } else { String seed = UUIDs.randomBase64UUID(); byte[] testBytes = Strings.toUTF8Bytes(seed); - BlobContainer testContainer = blobStore().blobContainer(basePath().add(testBlobPrefix(seed))); + BlobContainer testContainer = testContainer(seed); BytesArray bytes = new BytesArray(testBytes); if (isSystemRepository == false) { try (InputStream stream = bytes.streamInput()) { @@ -1936,12 +1957,26 @@ public String startVerification() { } } + /** + * Returns the blobContainer depending on the seed and {@code prefixModeVerification}. 
+ */ + private BlobContainer testContainer(String seed) { + BlobPath testBlobPath; + if (prefixModeVerification == true) { + PathInput pathInput = PathInput.builder().basePath(basePath()).indexUUID(seed).build(); + testBlobPath = HASHED_PREFIX.path(pathInput, FNV_1A_COMPOSITE_1); + } else { + testBlobPath = basePath(); + } + assert Objects.nonNull(testBlobPath); + return blobStore().blobContainer(testBlobPath.add(testBlobPrefix(seed))); + } + @Override public void endVerification(String seed) { if (isReadOnly() == false) { try { - final String testPrefix = testBlobPrefix(seed); - blobStore().blobContainer(basePath().add(testPrefix)).delete(); + testContainer(seed).delete(); } catch (Exception exp) { throw new RepositoryVerificationException(metadata.name(), "cannot delete test data at " + basePath(), exp); } @@ -3266,7 +3301,7 @@ public void verify(String seed, DiscoveryNode localNode) { ); } } else { - BlobContainer testBlobContainer = blobStore().blobContainer(basePath().add(testBlobPrefix(seed))); + BlobContainer testBlobContainer = testContainer(seed); try { BytesArray bytes = new BytesArray(seed); try (InputStream stream = bytes.streamInput()) { diff --git a/server/src/test/java/org/opensearch/repositories/blobstore/BlobStoreRepositoryTests.java b/server/src/test/java/org/opensearch/repositories/blobstore/BlobStoreRepositoryTests.java index 2445cad01574c..bd47507da4863 100644 --- a/server/src/test/java/org/opensearch/repositories/blobstore/BlobStoreRepositoryTests.java +++ b/server/src/test/java/org/opensearch/repositories/blobstore/BlobStoreRepositoryTests.java @@ -255,6 +255,28 @@ public void testBadChunksize() throws Exception { ); } + public void testPrefixModeVerification() throws Exception { + final Client client = client(); + final Path location = OpenSearchIntegTestCase.randomRepoPath(node().settings()); + final String repositoryName = "test-repo"; + AcknowledgedResponse putRepositoryResponse = client.admin() + .cluster() + .preparePutRepository(repositoryName) + .setType(REPO_TYPE) + .setSettings( + Settings.builder() + .put(node().settings()) + .put("location", location) + .put(BlobStoreRepository.PREFIX_MODE_VERIFICATION_SETTING.getKey(), true) + ) + .get(); + assertTrue(putRepositoryResponse.isAcknowledged()); + + final RepositoriesService repositoriesService = getInstanceFromNode(RepositoriesService.class); + final BlobStoreRepository repository = (BlobStoreRepository) repositoriesService.repository(repositoryName); + assertTrue(repository.getPrefixModeVerification()); + } + public void testFsRepositoryCompressDeprecatedIgnored() { final Path location = OpenSearchIntegTestCase.randomRepoPath(node().settings()); final Settings settings = Settings.builder().put(node().settings()).put("location", location).build(); diff --git a/test/framework/src/main/java/org/opensearch/test/OpenSearchIntegTestCase.java b/test/framework/src/main/java/org/opensearch/test/OpenSearchIntegTestCase.java index 7a50502e418e2..9853cef482254 100644 --- a/test/framework/src/main/java/org/opensearch/test/OpenSearchIntegTestCase.java +++ b/test/framework/src/main/java/org/opensearch/test/OpenSearchIntegTestCase.java @@ -152,6 +152,7 @@ import org.opensearch.node.remotestore.RemoteStoreNodeService; import org.opensearch.plugins.NetworkPlugin; import org.opensearch.plugins.Plugin; +import org.opensearch.repositories.blobstore.BlobStoreRepository; import org.opensearch.repositories.fs.ReloadableFsRepository; import org.opensearch.script.MockScriptService; import org.opensearch.search.MockSearchService; @@ 
-386,6 +387,8 @@ public abstract class OpenSearchIntegTestCase extends OpenSearchTestCase { protected static final String REMOTE_BACKED_STORAGE_REPOSITORY_NAME = "test-remote-store-repo"; + private static Boolean prefixModeVerificationEnable; + private Path remoteStoreRepositoryPath; private ReplicationType randomReplicationType; @@ -394,6 +397,7 @@ public abstract class OpenSearchIntegTestCase extends OpenSearchTestCase { @BeforeClass public static void beforeClass() throws Exception { + prefixModeVerificationEnable = randomBoolean(); testClusterRule.beforeClass(); } @@ -2645,16 +2649,21 @@ private static Settings buildRemoteStoreNodeAttributes( segmentRepoName ); + String prefixModeVerificationSuffix = BlobStoreRepository.PREFIX_MODE_VERIFICATION_SETTING.getKey(); + Settings.Builder settings = Settings.builder() .put("node.attr." + REMOTE_STORE_SEGMENT_REPOSITORY_NAME_ATTRIBUTE_KEY, segmentRepoName) .put(segmentRepoTypeAttributeKey, segmentRepoType) .put(segmentRepoSettingsAttributeKeyPrefix + "location", segmentRepoPath) + .put(segmentRepoSettingsAttributeKeyPrefix + prefixModeVerificationSuffix, prefixModeVerificationEnable) .put("node.attr." + REMOTE_STORE_TRANSLOG_REPOSITORY_NAME_ATTRIBUTE_KEY, translogRepoName) .put(translogRepoTypeAttributeKey, translogRepoType) .put(translogRepoSettingsAttributeKeyPrefix + "location", translogRepoPath) + .put(translogRepoSettingsAttributeKeyPrefix + prefixModeVerificationSuffix, prefixModeVerificationEnable) .put("node.attr." + REMOTE_STORE_CLUSTER_STATE_REPOSITORY_NAME_ATTRIBUTE_KEY, segmentRepoName) .put(stateRepoTypeAttributeKey, segmentRepoType) - .put(stateRepoSettingsAttributeKeyPrefix + "location", segmentRepoPath); + .put(stateRepoSettingsAttributeKeyPrefix + "location", segmentRepoPath) + .put(stateRepoSettingsAttributeKeyPrefix + prefixModeVerificationSuffix, prefixModeVerificationEnable); if (withRateLimiterAttributes) { settings.put(segmentRepoSettingsAttributeKeyPrefix + "compress", randomBoolean()) From 77a74e2fcf2c6b56b0959a563fbc6a0d7aff220a Mon Sep 17 00:00:00 2001 From: Rishabh Singh Date: Fri, 19 Jul 2024 10:08:35 -0700 Subject: [PATCH 084/167] add length check on comment body for benchmark workflow (#14834) Signed-off-by: Rishabh Singh --- .github/workflows/add-performance-comment.yml | 2 +- .github/workflows/benchmark-pull-request.yml | 49 +++++++++++++------ 2 files changed, 36 insertions(+), 15 deletions(-) diff --git a/.github/workflows/add-performance-comment.yml b/.github/workflows/add-performance-comment.yml index 3939de25e4cbe..b522d348c84b2 100644 --- a/.github/workflows/add-performance-comment.yml +++ b/.github/workflows/add-performance-comment.yml @@ -1,7 +1,7 @@ name: Performance Label Action on: - pull_request: + pull_request_target: types: [labeled] jobs: diff --git a/.github/workflows/benchmark-pull-request.yml b/.github/workflows/benchmark-pull-request.yml index 0de50981fa3d7..1aa2b6271719b 100644 --- a/.github/workflows/benchmark-pull-request.yml +++ b/.github/workflows/benchmark-pull-request.yml @@ -25,20 +25,41 @@ jobs: echo "USER_TAGS=pull_request_number:${{ github.event.issue.number }},repository:OpenSearch" >> $GITHUB_ENV - name: Check comment format id: check_comment - run: | - comment='${{ github.event.comment.body }}' - if echo "$comment" | jq -e 'has("run-benchmark-test")'; then - echo "Valid comment format detected, check if valid config id is provided" - config_id=$(echo $comment | jq -r '.["run-benchmark-test"]') - benchmark_configs=$(cat .github/benchmark-configs.json) - if echo $benchmark_configs 
| jq -e --arg id "$config_id" 'has($id)' && echo "$benchmark_configs" | jq -e --arg version "$OPENSEARCH_MAJOR_VERSION" --arg id "$config_id" '.[$id].supported_major_versions | index($version) != null' > /dev/null; then - echo $benchmark_configs | jq -r --arg id "$config_id" '.[$id]."cluster-benchmark-configs" | to_entries[] | "\(.key)=\(.value)"' >> $GITHUB_ENV - else - echo "invalid=true" >> $GITHUB_OUTPUT - fi - else - echo "invalid=true" >> $GITHUB_OUTPUT - fi + uses: actions/github-script@v6 + with: + script: | + const fs = require('fs'); + const comment = context.payload.comment.body; + let commentJson; + try { + commentJson = JSON.parse(comment); + } catch (error) { + core.setOutput('invalid', 'true'); + return; + } + if (!commentJson.hasOwnProperty('run-benchmark-test')) { + core.setOutput('invalid', 'true'); + return; + } + const configId = commentJson['run-benchmark-test']; + let benchmarkConfigs; + try { + benchmarkConfigs = JSON.parse(fs.readFileSync('.github/benchmark-configs.json', 'utf8')); + } catch (error) { + core.setFailed('Failed to read benchmark-configs.json'); + return; + } + const openSearchMajorVersion = process.env.OPENSEARCH_MAJOR_VERSION; + console.log('MAJOR_VERSION', openSearchMajorVersion) + if (!benchmarkConfigs.hasOwnProperty(configId) || + !benchmarkConfigs[configId].supported_major_versions.includes(openSearchMajorVersion)) { + core.setOutput('invalid', 'true'); + return; + } + const clusterBenchmarkConfigs = benchmarkConfigs[configId]['cluster-benchmark-configs']; + for (const [key, value] of Object.entries(clusterBenchmarkConfigs)) { + core.exportVariable(key, value); + } - name: Post invalid format comment if: steps.check_comment.outputs.invalid == 'true' uses: actions/github-script@v6 From 9c6e6187864c976656889cdec45064920cf856ee Mon Sep 17 00:00:00 2001 From: Rishabh Singh Date: Fri, 19 Jul 2024 12:28:02 -0700 Subject: [PATCH 085/167] Add restore-from-snapshot test procedure for snapshot run benchmark config (#14842) Signed-off-by: Rishabh Singh --- .github/benchmark-configs.json | 18 ++++++++++++------ 1 file changed, 12 insertions(+), 6 deletions(-) diff --git a/.github/benchmark-configs.json b/.github/benchmark-configs.json index a5b1951d2240c..5b44198cd3b8e 100644 --- a/.github/benchmark-configs.json +++ b/.github/benchmark-configs.json @@ -40,7 +40,8 @@ "MIN_DISTRIBUTION": "true", "TEST_WORKLOAD": "nyc_taxis", "WORKLOAD_PARAMS": "{\"snapshot_repo_name\":\"benchmark-workloads-repo-300\",\"snapshot_bucket_name\":\"benchmark-workload-snapshots\",\"snapshot_region\":\"us-east-1\",\"snapshot_base_path\":\"workload-snapshots-300\",\"snapshot_name\":\"nyc_taxis_1_shard\"}", - "CAPTURE_NODE_STAT": "true" + "CAPTURE_NODE_STAT": "true", + "TEST_PROCEDURE": "restore-from-snapshot" }, "cluster_configuration": { "size": "Single-Node", @@ -55,7 +56,8 @@ "MIN_DISTRIBUTION": "true", "TEST_WORKLOAD": "http_logs", "WORKLOAD_PARAMS": "{\"snapshot_repo_name\":\"benchmark-workloads-repo-300\",\"snapshot_bucket_name\":\"benchmark-workload-snapshots\",\"snapshot_region\":\"us-east-1\",\"snapshot_base_path\":\"workload-snapshots-300\",\"snapshot_name\":\"http_logs_1_shard\"}", - "CAPTURE_NODE_STAT": "true" + "CAPTURE_NODE_STAT": "true", + "TEST_PROCEDURE": "restore-from-snapshot" }, "cluster_configuration": { "size": "Single-Node", @@ -70,7 +72,8 @@ "MIN_DISTRIBUTION": "true", "TEST_WORKLOAD": "big5", "WORKLOAD_PARAMS": 
"{\"snapshot_repo_name\":\"benchmark-workloads-repo-300\",\"snapshot_bucket_name\":\"benchmark-workload-snapshots\",\"snapshot_region\":\"us-east-1\",\"snapshot_base_path\":\"workload-snapshots-300\",\"snapshot_name\":\"big5_1_shard\"}", - "CAPTURE_NODE_STAT": "true" + "CAPTURE_NODE_STAT": "true", + "TEST_PROCEDURE": "restore-from-snapshot" }, "cluster_configuration": { "size": "Single-Node", @@ -85,7 +88,8 @@ "MIN_DISTRIBUTION": "true", "TEST_WORKLOAD": "nyc_taxis", "WORKLOAD_PARAMS": "{\"snapshot_repo_name\":\"benchmark-workloads-repo\",\"snapshot_bucket_name\":\"benchmark-workload-snapshots\",\"snapshot_region\":\"us-east-1\",\"snapshot_base_path\":\"workload-snapshots\",\"snapshot_name\":\"nyc_taxis_1_shard\"}", - "CAPTURE_NODE_STAT": "true" + "CAPTURE_NODE_STAT": "true", + "TEST_PROCEDURE": "restore-from-snapshot" }, "cluster_configuration": { "size": "Single-Node", @@ -100,7 +104,8 @@ "MIN_DISTRIBUTION": "true", "TEST_WORKLOAD": "http_logs", "WORKLOAD_PARAMS": "{\"snapshot_repo_name\":\"benchmark-workloads-repo\",\"snapshot_bucket_name\":\"benchmark-workload-snapshots\",\"snapshot_region\":\"us-east-1\",\"snapshot_base_path\":\"workload-snapshots\",\"snapshot_name\":\"http_logs_1_shard\"}", - "CAPTURE_NODE_STAT": "true" + "CAPTURE_NODE_STAT": "true", + "TEST_PROCEDURE": "restore-from-snapshot" }, "cluster_configuration": { "size": "Single-Node", @@ -115,7 +120,8 @@ "MIN_DISTRIBUTION": "true", "TEST_WORKLOAD": "big5", "WORKLOAD_PARAMS": "{\"snapshot_repo_name\":\"benchmark-workloads-repo\",\"snapshot_bucket_name\":\"benchmark-workload-snapshots\",\"snapshot_region\":\"us-east-1\",\"snapshot_base_path\":\"workload-snapshots\",\"snapshot_name\":\"big5_1_shard\"}", - "CAPTURE_NODE_STAT": "true" + "CAPTURE_NODE_STAT": "true", + "TEST_PROCEDURE": "restore-from-snapshot" }, "cluster_configuration": { "size": "Single-Node", From 0bcbafdbbd72e636ce77213e82729256c58c8d46 Mon Sep 17 00:00:00 2001 From: Rishabh Singh Date: Fri, 19 Jul 2024 16:14:27 -0700 Subject: [PATCH 086/167] Fix env variable name typo (#14843) Signed-off-by: Rishabh Singh --- .github/workflows/benchmark-pull-request.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.github/workflows/benchmark-pull-request.yml b/.github/workflows/benchmark-pull-request.yml index 1aa2b6271719b..2e2e83eb132de 100644 --- a/.github/workflows/benchmark-pull-request.yml +++ b/.github/workflows/benchmark-pull-request.yml @@ -115,7 +115,7 @@ jobs: headRepo=$(echo '${{ steps.get_pr.outputs.result }}' | jq -r '.headRepoFullName') headRef=$(echo '${{ steps.get_pr.outputs.result }}' | jq -r '.headRef') echo "prHeadRepo=$headRepo" >> $GITHUB_ENV - echo "prheadRef=$headRef" >> $GITHUB_ENV + echo "prHeadRef=$headRef" >> $GITHUB_ENV - name: Checkout PR Repo uses: actions/checkout@v2 with: From b980b12e7fa14620aab38363f316ce72af70df1e Mon Sep 17 00:00:00 2001 From: bowenlan-amzn Date: Fri, 19 Jul 2024 16:36:29 -0700 Subject: [PATCH 087/167] Use circuit breaker in InternalHistogram when adding empty buckets (#14754) * introduce circuit breaker in InternalHistogram Signed-off-by: bowenlan-amzn * use circuit breaker from reduce context Signed-off-by: bowenlan-amzn * add test Signed-off-by: bowenlan-amzn * revert use_real_memory change in OpenSearchNode Signed-off-by: bowenlan-amzn * add change log Signed-off-by: bowenlan-amzn --------- Signed-off-by: bowenlan-amzn --- CHANGELOG.md | 1 + .../bucket/histogram/InternalHistogram.java | 6 ++- .../histogram/InternalHistogramTests.java | 43 +++++++++++++++++++ 3 files changed, 49 insertions(+), 1 
deletion(-) diff --git a/CHANGELOG.md b/CHANGELOG.md index a173a8a2d5ed9..29e70c5026bb8 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -78,6 +78,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), - Fix bulk upsert ignores the default_pipeline and final_pipeline when auto-created index matches the index template ([#12891](https://github.com/opensearch-project/OpenSearch/pull/12891)) - Fix NPE in ReplicaShardAllocator ([#14385](https://github.com/opensearch-project/OpenSearch/pull/14385)) - Fix constant_keyword field type used when creating index ([#14807](https://github.com/opensearch-project/OpenSearch/pull/14807)) +- Use circuit breaker in InternalHistogram when adding empty buckets ([#14754](https://github.com/opensearch-project/OpenSearch/pull/14754)) ### Security diff --git a/server/src/main/java/org/opensearch/search/aggregations/bucket/histogram/InternalHistogram.java b/server/src/main/java/org/opensearch/search/aggregations/bucket/histogram/InternalHistogram.java index a27c689127ac9..a988b911de5a3 100644 --- a/server/src/main/java/org/opensearch/search/aggregations/bucket/histogram/InternalHistogram.java +++ b/server/src/main/java/org/opensearch/search/aggregations/bucket/histogram/InternalHistogram.java @@ -395,6 +395,7 @@ private void addEmptyBuckets(List list, ReduceContext reduceContext) { // fill with empty buckets for (double key = round(emptyBucketInfo.minBound); key <= emptyBucketInfo.maxBound; key = nextKey(key)) { iter.add(new Bucket(key, 0, keyed, format, reducedEmptySubAggs)); + reduceContext.consumeBucketsAndMaybeBreak(0); } } else { Bucket first = list.get(iter.nextIndex()); @@ -402,11 +403,12 @@ private void addEmptyBuckets(List list, ReduceContext reduceContext) { // fill with empty buckets until the first key for (double key = round(emptyBucketInfo.minBound); key < first.key; key = nextKey(key)) { iter.add(new Bucket(key, 0, keyed, format, reducedEmptySubAggs)); + reduceContext.consumeBucketsAndMaybeBreak(0); } } // now adding the empty buckets within the actual data, - // e.g. if the data series is [1,2,3,7] there're 3 empty buckets that will be created for 4,5,6 + // e.g. 
if the data series is [1,2,3,7] there are 3 empty buckets that will be created for 4,5,6 Bucket lastBucket = null; do { Bucket nextBucket = list.get(iter.nextIndex()); @@ -414,6 +416,7 @@ private void addEmptyBuckets(List list, ReduceContext reduceContext) { double key = nextKey(lastBucket.key); while (key < nextBucket.key) { iter.add(new Bucket(key, 0, keyed, format, reducedEmptySubAggs)); + reduceContext.consumeBucketsAndMaybeBreak(0); key = nextKey(key); } assert key == nextBucket.key || Double.isNaN(nextBucket.key) : "key: " + key + ", nextBucket.key: " + nextBucket.key; @@ -424,6 +427,7 @@ private void addEmptyBuckets(List list, ReduceContext reduceContext) { // finally, adding the empty buckets *after* the actual data (based on the extended_bounds.max requested by the user) for (double key = nextKey(lastBucket.key); key <= emptyBucketInfo.maxBound; key = nextKey(key)) { iter.add(new Bucket(key, 0, keyed, format, reducedEmptySubAggs)); + reduceContext.consumeBucketsAndMaybeBreak(0); } } } diff --git a/server/src/test/java/org/opensearch/search/aggregations/bucket/histogram/InternalHistogramTests.java b/server/src/test/java/org/opensearch/search/aggregations/bucket/histogram/InternalHistogramTests.java index 288b22ccfcc92..98c6ac2b3de45 100644 --- a/server/src/test/java/org/opensearch/search/aggregations/bucket/histogram/InternalHistogramTests.java +++ b/server/src/test/java/org/opensearch/search/aggregations/bucket/histogram/InternalHistogramTests.java @@ -33,10 +33,15 @@ package org.opensearch.search.aggregations.bucket.histogram; import org.apache.lucene.tests.util.TestUtil; +import org.opensearch.core.common.breaker.CircuitBreaker; +import org.opensearch.core.common.breaker.CircuitBreakingException; import org.opensearch.search.DocValueFormat; import org.opensearch.search.aggregations.BucketOrder; +import org.opensearch.search.aggregations.InternalAggregation; import org.opensearch.search.aggregations.InternalAggregations; +import org.opensearch.search.aggregations.MultiBucketConsumerService; import org.opensearch.search.aggregations.ParsedMultiBucketAggregation; +import org.opensearch.search.aggregations.pipeline.PipelineAggregator; import org.opensearch.test.InternalAggregationTestCase; import org.opensearch.test.InternalMultiBucketAggregationTestCase; @@ -47,6 +52,8 @@ import java.util.Map; import java.util.TreeMap; +import org.mockito.Mockito; + public class InternalHistogramTests extends InternalMultiBucketAggregationTestCase { private boolean keyed; @@ -123,6 +130,42 @@ public void testHandlesNaN() { ); } + public void testCircuitBreakerWhenAddEmptyBuckets() { + String name = randomAlphaOfLength(5); + double interval = 1; + double lowerBound = 1; + double upperBound = 1026; + List bucket1 = List.of( + new InternalHistogram.Bucket(lowerBound, 1, false, format, InternalAggregations.EMPTY) + ); + List bucket2 = List.of( + new InternalHistogram.Bucket(upperBound, 1, false, format, InternalAggregations.EMPTY) + ); + BucketOrder order = BucketOrder.key(true); + InternalHistogram.EmptyBucketInfo emptyBucketInfo = new InternalHistogram.EmptyBucketInfo( + interval, + 0, + lowerBound, + upperBound, + InternalAggregations.EMPTY + ); + InternalHistogram histogram1 = new InternalHistogram(name, bucket1, order, 0, emptyBucketInfo, format, false, null); + InternalHistogram histogram2 = new InternalHistogram(name, bucket2, order, 0, emptyBucketInfo, format, false, null); + + CircuitBreaker breaker = Mockito.mock(CircuitBreaker.class); + Mockito.when(breaker.addEstimateBytesAndMaybeBreak(0, 
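    // A minimal sketch (hypothetical helper name, simplified key stepping) of the
    // guard the production change above introduces: every synthetic empty bucket is
    // reported to the reduce context, so the multi-bucket consumer can run its
    // periodic "allocated_buckets" breaker check (which fires every so many
    // invocations even for a zero increment, as the 1026-bucket test above relies on)
    // before an oversized extended_bounds range allocates buckets without limit.
    static void fillEmptyBuckets(double minBound, double maxBound, double interval, InternalAggregation.ReduceContext ctx) {
        for (double key = minBound; key <= maxBound; key += interval) {
            // addEmptyBucket(key); // hypothetical helper creating a zero-doc bucket at this key
            ctx.consumeBucketsAndMaybeBreak(0); // may throw CircuitBreakingException
        }
    }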
"allocated_buckets")).thenThrow(CircuitBreakingException.class); + + MultiBucketConsumerService.MultiBucketConsumer bucketConsumer = new MultiBucketConsumerService.MultiBucketConsumer(0, breaker); + InternalAggregation.ReduceContext reduceContext = InternalAggregation.ReduceContext.forFinalReduction( + null, + null, + bucketConsumer, + PipelineAggregator.PipelineTree.EMPTY + ); + expectThrows(CircuitBreakingException.class, () -> histogram1.reduce(List.of(histogram1, histogram2), reduceContext)); + Mockito.verify(breaker, Mockito.times(1)).addEstimateBytesAndMaybeBreak(0, "allocated_buckets"); + } + @Override protected void assertReduced(InternalHistogram reduced, List inputs) { TreeMap expectedCounts = new TreeMap<>(); From b58546914c1191d6364d9afa006a48bba00ef596 Mon Sep 17 00:00:00 2001 From: Shivansh Arora Date: Mon, 22 Jul 2024 15:03:13 +0530 Subject: [PATCH 088/167] [Remote State] Create interface RemoteEntitiesManager (#14671) * Create interface RemoteEntitiesManager Signed-off-by: Shivansh Arora --- .../InternalRemoteRoutingTableService.java | 13 +- .../remote/NoopRemoteRoutingTableService.java | 7 +- .../remote/RemoteRoutingTableService.java | 5 +- .../AbstractRemoteWritableEntityManager.java | 84 ++++ .../remote/RemoteWritableEntityManager.java | 47 ++ .../RemoteClusterStateAttributesManager.java | 55 +-- .../remote/RemoteClusterStateService.java | 459 ++++++++---------- .../remote/RemoteGlobalMetadataManager.java | 57 +-- .../remote/RemoteIndexMetadataManager.java | 87 ++-- .../RemoteRoutingTableServiceTests.java | 4 +- ...tractRemoteWritableEntityManagerTests.java | 64 +++ ...oteClusterStateAttributesManagerTests.java | 59 +-- .../RemoteClusterStateServiceTests.java | 346 ++++++------- .../RemoteGlobalMetadataManagerTests.java | 114 +++-- .../RemoteIndexMetadataManagerTests.java | 42 +- 15 files changed, 769 insertions(+), 674 deletions(-) create mode 100644 server/src/main/java/org/opensearch/common/remote/AbstractRemoteWritableEntityManager.java create mode 100644 server/src/main/java/org/opensearch/common/remote/RemoteWritableEntityManager.java create mode 100644 server/src/test/java/org/opensearch/common/remote/AbstractRemoteWritableEntityManagerTests.java diff --git a/server/src/main/java/org/opensearch/cluster/routing/remote/InternalRemoteRoutingTableService.java b/server/src/main/java/org/opensearch/cluster/routing/remote/InternalRemoteRoutingTableService.java index f3f245ee9f8f0..d7ebc54598b37 100644 --- a/server/src/main/java/org/opensearch/cluster/routing/remote/InternalRemoteRoutingTableService.java +++ b/server/src/main/java/org/opensearch/cluster/routing/remote/InternalRemoteRoutingTableService.java @@ -15,7 +15,6 @@ import org.opensearch.cluster.DiffableUtils; import org.opensearch.cluster.routing.IndexRoutingTable; import org.opensearch.cluster.routing.RoutingTable; -import org.opensearch.common.CheckedRunnable; import org.opensearch.common.blobstore.BlobPath; import org.opensearch.common.lifecycle.AbstractLifecycleComponent; import org.opensearch.common.remote.RemoteWritableEntityStore; @@ -102,16 +101,16 @@ public DiffableUtils.MapDiff getAsyncIndexRoutingWriteAction( + public void getAsyncIndexRoutingWriteAction( String clusterUUID, long term, long version, @@ -128,7 +127,7 @@ public CheckedRunnable getAsyncIndexRoutingWriteAction( ) ); - return () -> remoteIndexRoutingTableStore.writeAsync(remoteIndexRoutingTable, completionListener); + remoteIndexRoutingTableStore.writeAsync(remoteIndexRoutingTable, completionListener); } /** @@ -156,7 +155,7 @@ public List 
getAllUploadedIndices } @Override - public CheckedRunnable getAsyncIndexRoutingReadAction( + public void getAsyncIndexRoutingReadAction( String clusterUUID, String uploadedFilename, LatchedActionListener latchedActionListener @@ -169,7 +168,7 @@ public CheckedRunnable getAsyncIndexRoutingReadAction( RemoteIndexRoutingTable remoteIndexRoutingTable = new RemoteIndexRoutingTable(uploadedFilename, clusterUUID, compressor); - return () -> remoteIndexRoutingTableStore.readAsync(remoteIndexRoutingTable, actionListener); + remoteIndexRoutingTableStore.readAsync(remoteIndexRoutingTable, actionListener); } @Override diff --git a/server/src/main/java/org/opensearch/cluster/routing/remote/NoopRemoteRoutingTableService.java b/server/src/main/java/org/opensearch/cluster/routing/remote/NoopRemoteRoutingTableService.java index 4636e492df28f..e6e68e01e761f 100644 --- a/server/src/main/java/org/opensearch/cluster/routing/remote/NoopRemoteRoutingTableService.java +++ b/server/src/main/java/org/opensearch/cluster/routing/remote/NoopRemoteRoutingTableService.java @@ -12,7 +12,6 @@ import org.opensearch.cluster.DiffableUtils; import org.opensearch.cluster.routing.IndexRoutingTable; import org.opensearch.cluster.routing.RoutingTable; -import org.opensearch.common.CheckedRunnable; import org.opensearch.common.lifecycle.AbstractLifecycleComponent; import org.opensearch.gateway.remote.ClusterMetadataManifest; @@ -39,7 +38,7 @@ public DiffableUtils.MapDiff getAsyncIndexRoutingWriteAction( + public void getAsyncIndexRoutingWriteAction( String clusterUUID, long term, long version, @@ -47,7 +46,6 @@ public CheckedRunnable getAsyncIndexRoutingWriteAction( LatchedActionListener latchedActionListener ) { // noop - return () -> {}; } @Override @@ -61,13 +59,12 @@ public List getAllUploadedIndices } @Override - public CheckedRunnable getAsyncIndexRoutingReadAction( + public void getAsyncIndexRoutingReadAction( String clusterUUID, String uploadedFilename, LatchedActionListener latchedActionListener ) { // noop - return () -> {}; } @Override diff --git a/server/src/main/java/org/opensearch/cluster/routing/remote/RemoteRoutingTableService.java b/server/src/main/java/org/opensearch/cluster/routing/remote/RemoteRoutingTableService.java index d319123bc2cee..0b0b4bb7dbc84 100644 --- a/server/src/main/java/org/opensearch/cluster/routing/remote/RemoteRoutingTableService.java +++ b/server/src/main/java/org/opensearch/cluster/routing/remote/RemoteRoutingTableService.java @@ -12,7 +12,6 @@ import org.opensearch.cluster.DiffableUtils; import org.opensearch.cluster.routing.IndexRoutingTable; import org.opensearch.cluster.routing.RoutingTable; -import org.opensearch.common.CheckedRunnable; import org.opensearch.common.lifecycle.LifecycleComponent; import org.opensearch.core.common.io.stream.StreamInput; import org.opensearch.core.common.io.stream.StreamOutput; @@ -43,7 +42,7 @@ public IndexRoutingTable read(StreamInput in, String key) throws IOException { List getIndicesRouting(RoutingTable routingTable); - CheckedRunnable getAsyncIndexRoutingReadAction( + void getAsyncIndexRoutingReadAction( String clusterUUID, String uploadedFilename, LatchedActionListener latchedActionListener @@ -59,7 +58,7 @@ DiffableUtils.MapDiff> RoutingTable after ); - CheckedRunnable getAsyncIndexRoutingWriteAction( + void getAsyncIndexRoutingWriteAction( String clusterUUID, long term, long version, diff --git a/server/src/main/java/org/opensearch/common/remote/AbstractRemoteWritableEntityManager.java 
b/server/src/main/java/org/opensearch/common/remote/AbstractRemoteWritableEntityManager.java new file mode 100644 index 0000000000000..dc301635c4a80 --- /dev/null +++ b/server/src/main/java/org/opensearch/common/remote/AbstractRemoteWritableEntityManager.java @@ -0,0 +1,84 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +package org.opensearch.common.remote; + +import org.opensearch.core.action.ActionListener; +import org.opensearch.gateway.remote.ClusterMetadataManifest; +import org.opensearch.gateway.remote.model.RemoteReadResult; + +import java.util.HashMap; +import java.util.Map; + +/** + * An abstract class that provides a base implementation for managing remote entities in the remote store. + */ +public abstract class AbstractRemoteWritableEntityManager implements RemoteWritableEntityManager { + /** + * A map that stores the remote writable entity stores, keyed by the entity type. + */ + protected final Map remoteWritableEntityStores = new HashMap<>(); + + /** + * Retrieves the remote writable entity store for the given entity. + * + * @param entity the entity for which the store is requested + * @return the remote writable entity store for the given entity + * @throws IllegalArgumentException if the entity type is unknown + */ + protected RemoteWritableEntityStore getStore(AbstractRemoteWritableBlobEntity entity) { + RemoteWritableEntityStore remoteStore = remoteWritableEntityStores.get(entity.getType()); + if (remoteStore == null) { + throw new IllegalArgumentException("Unknown entity type [" + entity.getType() + "]"); + } + return remoteStore; + } + + /** + * Returns an ActionListener for handling the write operation for the specified component, remote object, and latched action listener. + * + * @param component the component for which the write operation is performed + * @param remoteEntity the remote object to be written + * @param listener the listener to be notified when the write operation completes + * @return an ActionListener for handling the write operation + */ + protected abstract ActionListener getWrappedWriteListener( + String component, + AbstractRemoteWritableBlobEntity remoteEntity, + ActionListener listener + ); + + /** + * Returns an ActionListener for handling the read operation for the specified component, + * remote object, and latched action listener. 
+ * + * @param component the component for which the read operation is performed + * @param remoteEntity the remote object to be read + * @param listener the listener to be notified when the read operation completes + * @return an ActionListener for handling the read operation + */ + protected abstract ActionListener getWrappedReadListener( + String component, + AbstractRemoteWritableBlobEntity remoteEntity, + ActionListener listener + ); + + @Override + public void writeAsync( + String component, + AbstractRemoteWritableBlobEntity entity, + ActionListener listener + ) { + getStore(entity).writeAsync(entity, getWrappedWriteListener(component, entity, listener)); + } + + @Override + public void readAsync(String component, AbstractRemoteWritableBlobEntity entity, ActionListener listener) { + getStore(entity).readAsync(entity, getWrappedReadListener(component, entity, listener)); + } +} diff --git a/server/src/main/java/org/opensearch/common/remote/RemoteWritableEntityManager.java b/server/src/main/java/org/opensearch/common/remote/RemoteWritableEntityManager.java new file mode 100644 index 0000000000000..7693d1b5284bd --- /dev/null +++ b/server/src/main/java/org/opensearch/common/remote/RemoteWritableEntityManager.java @@ -0,0 +1,47 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +package org.opensearch.common.remote; + +import org.opensearch.core.action.ActionListener; +import org.opensearch.gateway.remote.ClusterMetadataManifest.UploadedMetadata; +import org.opensearch.gateway.remote.model.RemoteReadResult; + +/** + * The RemoteWritableEntityManager interface provides async read and write methods for managing remote entities in the remote store + */ +public interface RemoteWritableEntityManager { + + /** + * Performs an asynchronous read operation for the specified component and entity. + * + * @param component the component for which the read operation is performed + * @param entity the entity to be read + * @param listener the listener to be notified when the read operation completes. + * The listener's {@link ActionListener#onResponse(Object)} method + * is called with a {@link RemoteReadResult} object containing the + * read data on successful read. The + * {@link ActionListener#onFailure(Exception)} method is called with + * an exception if the read operation fails. + */ + void readAsync(String component, AbstractRemoteWritableBlobEntity entity, ActionListener listener); + + /** + * Performs an asynchronous write operation for the specified component and entity. + * + * @param component the component for which the write operation is performed + * @param entity the entity to be written + * @param listener the listener to be notified when the write operation completes. + * The listener's {@link ActionListener#onResponse(Object)} method + * is called with a {@link UploadedMetadata} object containing the + * uploaded metadata on successful write. The + * {@link ActionListener#onFailure(Exception)} method is called with + * an exception if the write operation fails. 
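    // A sketch only, with "manager" and "entity" as placeholders for a concrete
    // RemoteWritableEntityManager and AbstractRemoteWritableBlobEntity, of how the
    // callers below drive this contract: fan out writeAsync/readAsync calls, then
    // block on a latch that the supplied LatchedActionListener counts down.
    CountDownLatch latch = new CountDownLatch(1);
    LatchedActionListener<ClusterMetadataManifest.UploadedMetadata> listener = new LatchedActionListener<>(
        ActionListener.wrap(
            uploaded -> { /* record uploaded.getUploadedFilename() in the manifest */ },
            ex -> { /* collect into a RemoteStateTransferException */ }
        ),
        latch
    );
    manager.writeAsync("example-component", entity, listener);
    if (latch.await(20, TimeUnit.SECONDS) == false) {
        throw new IllegalStateException("timed out waiting for remote upload");
    }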
+ */ + void writeAsync(String component, AbstractRemoteWritableBlobEntity entity, ActionListener listener); +} diff --git a/server/src/main/java/org/opensearch/gateway/remote/RemoteClusterStateAttributesManager.java b/server/src/main/java/org/opensearch/gateway/remote/RemoteClusterStateAttributesManager.java index 8f986423587d7..67ac8d2b9a810 100644 --- a/server/src/main/java/org/opensearch/gateway/remote/RemoteClusterStateAttributesManager.java +++ b/server/src/main/java/org/opensearch/gateway/remote/RemoteClusterStateAttributesManager.java @@ -8,13 +8,11 @@ package org.opensearch.gateway.remote; -import org.opensearch.action.LatchedActionListener; import org.opensearch.cluster.ClusterState; import org.opensearch.cluster.DiffableUtils; import org.opensearch.cluster.DiffableUtils.NonDiffableValueSerializer; -import org.opensearch.common.CheckedRunnable; import org.opensearch.common.remote.AbstractRemoteWritableBlobEntity; -import org.opensearch.common.remote.RemoteWritableEntityStore; +import org.opensearch.common.remote.AbstractRemoteWritableEntityManager; import org.opensearch.core.action.ActionListener; import org.opensearch.core.common.io.stream.NamedWriteableRegistry; import org.opensearch.gateway.remote.model.RemoteClusterBlocks; @@ -26,9 +24,7 @@ import org.opensearch.repositories.blobstore.BlobStoreRepository; import org.opensearch.threadpool.ThreadPool; -import java.io.IOException; import java.util.Collections; -import java.util.HashMap; import java.util.Map; /** @@ -36,13 +32,11 @@ * * @opensearch.internal */ -public class RemoteClusterStateAttributesManager { +public class RemoteClusterStateAttributesManager extends AbstractRemoteWritableEntityManager { public static final String CLUSTER_STATE_ATTRIBUTE = "cluster_state_attribute"; public static final String DISCOVERY_NODES = "nodes"; public static final String CLUSTER_BLOCKS = "blocks"; public static final int CLUSTER_STATE_ATTRIBUTES_CURRENT_CODEC_VERSION = 1; - private final Map remoteWritableEntityStores; - private final NamedWriteableRegistry namedWriteableRegistry; RemoteClusterStateAttributesManager( String clusterName, @@ -51,8 +45,6 @@ public class RemoteClusterStateAttributesManager { NamedWriteableRegistry namedWriteableRegistry, ThreadPool threadpool ) { - this.namedWriteableRegistry = namedWriteableRegistry; - this.remoteWritableEntityStores = new HashMap<>(); this.remoteWritableEntityStores.put( RemoteDiscoveryNodes.DISCOVERY_NODES, new RemoteClusterStateBlobStore<>( @@ -85,46 +77,28 @@ public class RemoteClusterStateAttributesManager { ); } - /** - * Allows async upload of Cluster State Attribute components to remote - */ - CheckedRunnable getAsyncMetadataWriteAction( + @Override + protected ActionListener getWrappedWriteListener( String component, - AbstractRemoteWritableBlobEntity blobEntity, - LatchedActionListener latchedActionListener - ) { - return () -> getStore(blobEntity).writeAsync(blobEntity, getActionListener(component, blobEntity, latchedActionListener)); - } - - private ActionListener getActionListener( - String component, - AbstractRemoteWritableBlobEntity remoteObject, - LatchedActionListener latchedActionListener + AbstractRemoteWritableBlobEntity remoteEntity, + ActionListener listener ) { return ActionListener.wrap( - resp -> latchedActionListener.onResponse(remoteObject.getUploadedMetadata()), - ex -> latchedActionListener.onFailure(new RemoteStateTransferException(component, remoteObject, ex)) + resp -> listener.onResponse(remoteEntity.getUploadedMetadata()), + ex -> listener.onFailure(new 
RemoteStateTransferException("Upload failed for " + component, remoteEntity, ex)) ); } - private RemoteWritableEntityStore getStore(AbstractRemoteWritableBlobEntity entity) { - RemoteWritableEntityStore remoteStore = remoteWritableEntityStores.get(entity.getType()); - if (remoteStore == null) { - throw new IllegalArgumentException("Unknown entity type [" + entity.getType() + "]"); - } - return remoteStore; - } - - public CheckedRunnable getAsyncMetadataReadAction( + @Override + protected ActionListener getWrappedReadListener( String component, - AbstractRemoteWritableBlobEntity blobEntity, - LatchedActionListener listener + AbstractRemoteWritableBlobEntity remoteEntity, + ActionListener listener ) { - final ActionListener actionListener = ActionListener.wrap( + return ActionListener.wrap( response -> listener.onResponse(new RemoteReadResult(response, CLUSTER_STATE_ATTRIBUTE, component)), - listener::onFailure + ex -> listener.onFailure(new RemoteStateTransferException("Download failed for " + component, remoteEntity, ex)) ); - return () -> getStore(blobEntity).readAsync(blobEntity, actionListener); } public DiffableUtils.MapDiff> getUpdatedCustoms( @@ -158,4 +132,5 @@ public DiffableUtils.MapDiff> uploadTasks = new ConcurrentHashMap<>(totalUploadTasks); + List uploadTasks = Collections.synchronizedList(new ArrayList<>(totalUploadTasks)); Map results = new ConcurrentHashMap<>(totalUploadTasks); List exceptionList = Collections.synchronizedList(new ArrayList<>(totalUploadTasks)); @@ -516,167 +515,155 @@ UploadedMetadataResults writeMetadataInParallel( ); if (uploadSettingsMetadata) { - uploadTasks.put( + uploadTasks.add(SETTING_METADATA); + remoteGlobalMetadataManager.writeAsync( SETTING_METADATA, - remoteGlobalMetadataManager.getAsyncMetadataWriteAction( - new RemotePersistentSettingsMetadata( - clusterState.metadata().persistentSettings(), - clusterState.metadata().version(), - clusterState.metadata().clusterUUID(), - blobStoreRepository.getCompressor(), - blobStoreRepository.getNamedXContentRegistry() - ), - listener - ) + new RemotePersistentSettingsMetadata( + clusterState.metadata().persistentSettings(), + clusterState.metadata().version(), + clusterState.metadata().clusterUUID(), + blobStoreRepository.getCompressor(), + blobStoreRepository.getNamedXContentRegistry() + ), + listener ); } if (uploadTransientSettingMetadata) { - uploadTasks.put( + uploadTasks.add(TRANSIENT_SETTING_METADATA); + remoteGlobalMetadataManager.writeAsync( TRANSIENT_SETTING_METADATA, - remoteGlobalMetadataManager.getAsyncMetadataWriteAction( - new RemoteTransientSettingsMetadata( - clusterState.metadata().transientSettings(), - clusterState.metadata().version(), - clusterState.metadata().clusterUUID(), - blobStoreRepository.getCompressor(), - blobStoreRepository.getNamedXContentRegistry() - ), - listener - ) + new RemoteTransientSettingsMetadata( + clusterState.metadata().transientSettings(), + clusterState.metadata().version(), + clusterState.metadata().clusterUUID(), + blobStoreRepository.getCompressor(), + blobStoreRepository.getNamedXContentRegistry() + ), + listener ); } if (uploadCoordinationMetadata) { - uploadTasks.put( + uploadTasks.add(COORDINATION_METADATA); + remoteGlobalMetadataManager.writeAsync( COORDINATION_METADATA, - remoteGlobalMetadataManager.getAsyncMetadataWriteAction( - new RemoteCoordinationMetadata( - clusterState.metadata().coordinationMetadata(), - clusterState.metadata().version(), - clusterState.metadata().clusterUUID(), - blobStoreRepository.getCompressor(), - 
blobStoreRepository.getNamedXContentRegistry() - ), - listener - ) + new RemoteCoordinationMetadata( + clusterState.metadata().coordinationMetadata(), + clusterState.metadata().version(), + clusterState.metadata().clusterUUID(), + blobStoreRepository.getCompressor(), + blobStoreRepository.getNamedXContentRegistry() + ), + listener ); } if (uploadTemplateMetadata) { - uploadTasks.put( + uploadTasks.add(TEMPLATES_METADATA); + remoteGlobalMetadataManager.writeAsync( TEMPLATES_METADATA, - remoteGlobalMetadataManager.getAsyncMetadataWriteAction( - new RemoteTemplatesMetadata( - clusterState.metadata().templatesMetadata(), - clusterState.metadata().version(), - clusterState.metadata().clusterUUID(), - blobStoreRepository.getCompressor(), - blobStoreRepository.getNamedXContentRegistry() - ), - listener - ) + new RemoteTemplatesMetadata( + clusterState.metadata().templatesMetadata(), + clusterState.metadata().version(), + clusterState.metadata().clusterUUID(), + blobStoreRepository.getCompressor(), + blobStoreRepository.getNamedXContentRegistry() + ), + listener ); } if (uploadDiscoveryNodes) { - uploadTasks.put( - DISCOVERY_NODES, - remoteClusterStateAttributesManager.getAsyncMetadataWriteAction( - RemoteDiscoveryNodes.DISCOVERY_NODES, - new RemoteDiscoveryNodes( - clusterState.nodes(), - clusterState.version(), - clusterState.stateUUID(), - blobStoreRepository.getCompressor() - ), - listener - ) + uploadTasks.add(DISCOVERY_NODES); + remoteClusterStateAttributesManager.writeAsync( + RemoteDiscoveryNodes.DISCOVERY_NODES, + new RemoteDiscoveryNodes( + clusterState.nodes(), + clusterState.version(), + clusterState.stateUUID(), + blobStoreRepository.getCompressor() + ), + listener ); } if (uploadClusterBlock) { - uploadTasks.put( - CLUSTER_BLOCKS, - remoteClusterStateAttributesManager.getAsyncMetadataWriteAction( - RemoteClusterBlocks.CLUSTER_BLOCKS, - new RemoteClusterBlocks( - clusterState.blocks(), - clusterState.version(), - clusterState.metadata().clusterUUID(), - blobStoreRepository.getCompressor() - ), - listener - ) + uploadTasks.add(CLUSTER_BLOCKS); + remoteClusterStateAttributesManager.writeAsync( + RemoteClusterBlocks.CLUSTER_BLOCKS, + new RemoteClusterBlocks( + clusterState.blocks(), + clusterState.version(), + clusterState.metadata().clusterUUID(), + blobStoreRepository.getCompressor() + ), + listener ); } if (uploadHashesOfConsistentSettings) { - uploadTasks.put( + uploadTasks.add(HASHES_OF_CONSISTENT_SETTINGS); + remoteGlobalMetadataManager.writeAsync( HASHES_OF_CONSISTENT_SETTINGS, - remoteGlobalMetadataManager.getAsyncMetadataWriteAction( - new RemoteHashesOfConsistentSettings( - (DiffableStringMap) clusterState.metadata().hashesOfConsistentSettings(), - clusterState.metadata().version(), - clusterState.metadata().clusterUUID(), - blobStoreRepository.getCompressor() - ), - listener - ) + new RemoteHashesOfConsistentSettings( + (DiffableStringMap) clusterState.metadata().hashesOfConsistentSettings(), + clusterState.metadata().version(), + clusterState.metadata().clusterUUID(), + blobStoreRepository.getCompressor() + ), + listener ); } customToUpload.forEach((key, value) -> { String customComponent = String.join(CUSTOM_DELIMITER, CUSTOM_METADATA, key); - uploadTasks.put( + uploadTasks.add(customComponent); + remoteGlobalMetadataManager.writeAsync( customComponent, - remoteGlobalMetadataManager.getAsyncMetadataWriteAction( - new RemoteCustomMetadata( - value, - key, - clusterState.metadata().version(), - clusterState.metadata().clusterUUID(), - blobStoreRepository.getCompressor(), - 
namedWriteableRegistry - ), - listener - ) + new RemoteCustomMetadata( + value, + key, + clusterState.metadata().version(), + clusterState.metadata().clusterUUID(), + blobStoreRepository.getCompressor(), + namedWriteableRegistry + ), + listener ); }); indexToUpload.forEach(indexMetadata -> { - uploadTasks.put( + uploadTasks.add(indexMetadata.getIndex().getName()); + remoteIndexMetadataManager.writeAsync( indexMetadata.getIndex().getName(), - remoteIndexMetadataManager.getAsyncIndexMetadataWriteAction(indexMetadata, clusterState.metadata().clusterUUID(), listener) + new RemoteIndexMetadata( + indexMetadata, + clusterState.metadata().clusterUUID(), + blobStoreRepository.getCompressor(), + blobStoreRepository.getNamedXContentRegistry() + ), + listener ); }); clusterStateCustomToUpload.forEach((key, value) -> { - uploadTasks.put( - key, - remoteClusterStateAttributesManager.getAsyncMetadataWriteAction( - CLUSTER_STATE_CUSTOM, - new RemoteClusterStateCustoms( - value, - key, - clusterState.version(), - clusterState.metadata().clusterUUID(), - blobStoreRepository.getCompressor(), - namedWriteableRegistry - ), - listener - ) + uploadTasks.add(key); + remoteClusterStateAttributesManager.writeAsync( + CLUSTER_STATE_CUSTOM, + new RemoteClusterStateCustoms( + value, + key, + clusterState.version(), + clusterState.metadata().clusterUUID(), + blobStoreRepository.getCompressor(), + namedWriteableRegistry + ), + listener ); }); indicesRoutingToUpload.forEach(indexRoutingTable -> { - uploadTasks.put( - INDEX_ROUTING_METADATA_PREFIX + indexRoutingTable.getIndex().getName(), - remoteRoutingTableService.getAsyncIndexRoutingWriteAction( - clusterState.metadata().clusterUUID(), - clusterState.term(), - clusterState.version(), - indexRoutingTable, - listener - ) + uploadTasks.add(INDEX_ROUTING_METADATA_PREFIX + indexRoutingTable.getIndex().getName()); + remoteRoutingTableService.getAsyncIndexRoutingWriteAction( + clusterState.metadata().clusterUUID(), + clusterState.term(), + clusterState.version(), + indexRoutingTable, + listener ); }); - - // start async upload of all required metadata files - for (CheckedRunnable uploadTask : uploadTasks.values()) { - uploadTask.run(); - } invokeIndexMetadataUploadListeners(indexToUpload, prevIndexMetadataByName, latch, exceptionList); try { @@ -686,7 +673,7 @@ UploadedMetadataResults writeMetadataInParallel( String.format( Locale.ROOT, "Timed out waiting for transfer of following metadata to complete - %s", - String.join(", ", uploadTasks.keySet()) + String.join(", ", uploadTasks) ) ); exceptionList.forEach(ex::addSuppressed); @@ -695,11 +682,7 @@ UploadedMetadataResults writeMetadataInParallel( } catch (InterruptedException ex) { exceptionList.forEach(ex::addSuppressed); RemoteStateTransferException exception = new RemoteStateTransferException( - String.format( - Locale.ROOT, - "Timed out waiting for transfer of metadata to complete - %s", - String.join(", ", uploadTasks.keySet()) - ), + String.format(Locale.ROOT, "Timed out waiting for transfer of metadata to complete - %s", String.join(", ", uploadTasks)), ex ); Thread.currentThread().interrupt(); @@ -707,14 +690,20 @@ UploadedMetadataResults writeMetadataInParallel( } if (!exceptionList.isEmpty()) { RemoteStateTransferException exception = new RemoteStateTransferException( + String.format(Locale.ROOT, "Exception during transfer of following metadata to Remote - %s", String.join(", ", uploadTasks)) + ); + exceptionList.forEach(exception::addSuppressed); + throw exception; + } + if (results.size() != uploadTasks.size()) { 
+ throw new RemoteStateTransferException( String.format( Locale.ROOT, - "Exception during transfer of following metadata to Remote - %s", - String.join(", ", uploadTasks.keySet()) + "Some metadata components were not uploaded successfully. Objects to be uploaded: %s, uploaded objects: %s", + String.join(", ", uploadTasks), + String.join(", ", results.keySet()) ) ); - exceptionList.forEach(exception::addSuppressed); - throw exception; } UploadedMetadataResults response = new UploadedMetadataResults(); results.forEach((name, uploadedMetadata) -> { @@ -998,7 +987,6 @@ ClusterState readClusterStateInParallel( + (readTransientSettingsMetadata ? 1 : 0) + (readHashesOfConsistentSettings ? 1 : 0) + clusterStateCustomToRead.size() + indicesRoutingToRead.size(); CountDownLatch latch = new CountDownLatch(totalReadTasks); - List> asyncMetadataReadActions = new ArrayList<>(); List readResults = Collections.synchronizedList(new ArrayList<>()); List readIndexRoutingTableResults = Collections.synchronizedList(new ArrayList<>()); List exceptionList = Collections.synchronizedList(new ArrayList<>(totalReadTasks)); @@ -1012,8 +1000,15 @@ ClusterState readClusterStateInParallel( }), latch); for (UploadedIndexMetadata indexMetadata : indicesToRead) { - asyncMetadataReadActions.add( - remoteIndexMetadataManager.getAsyncIndexMetadataReadAction(clusterUUID, indexMetadata.getUploadedFilename(), listener) + remoteIndexMetadataManager.readAsync( + indexMetadata.getIndexName(), + new RemoteIndexMetadata( + RemoteClusterStateUtils.getFormattedIndexFileName(indexMetadata.getUploadedFilename()), + clusterUUID, + blobStoreRepository.getCompressor(), + blobStoreRepository.getNamedXContentRegistry() + ), + listener ); } @@ -1029,154 +1024,130 @@ ClusterState readClusterStateInParallel( ); for (UploadedIndexMetadata indexRouting : indicesRoutingToRead) { - asyncMetadataReadActions.add( - remoteRoutingTableService.getAsyncIndexRoutingReadAction( - clusterUUID, - indexRouting.getUploadedFilename(), - routingTableLatchedActionListener - ) + remoteRoutingTableService.getAsyncIndexRoutingReadAction( + clusterUUID, + indexRouting.getUploadedFilename(), + routingTableLatchedActionListener ); } for (Map.Entry entry : customToRead.entrySet()) { - asyncMetadataReadActions.add( - remoteGlobalMetadataManager.getAsyncMetadataReadAction( - new RemoteCustomMetadata( - entry.getValue().getUploadedFilename(), - entry.getKey(), - clusterUUID, - blobStoreRepository.getCompressor(), - namedWriteableRegistry - ), - entry.getValue().getAttributeName(), - listener - ) + remoteGlobalMetadataManager.readAsync( + entry.getValue().getAttributeName(), + new RemoteCustomMetadata( + entry.getValue().getUploadedFilename(), + entry.getKey(), + clusterUUID, + blobStoreRepository.getCompressor(), + namedWriteableRegistry + ), + listener ); } if (readCoordinationMetadata) { - asyncMetadataReadActions.add( - remoteGlobalMetadataManager.getAsyncMetadataReadAction( - new RemoteCoordinationMetadata( - manifest.getCoordinationMetadata().getUploadedFilename(), - clusterUUID, - blobStoreRepository.getCompressor(), - blobStoreRepository.getNamedXContentRegistry() - ), - COORDINATION_METADATA, - listener - ) + remoteGlobalMetadataManager.readAsync( + COORDINATION_METADATA, + new RemoteCoordinationMetadata( + manifest.getCoordinationMetadata().getUploadedFilename(), + clusterUUID, + blobStoreRepository.getCompressor(), + blobStoreRepository.getNamedXContentRegistry() + ), + listener ); } if (readSettingsMetadata) { - asyncMetadataReadActions.add( - 
remoteGlobalMetadataManager.getAsyncMetadataReadAction( - new RemotePersistentSettingsMetadata( - manifest.getSettingsMetadata().getUploadedFilename(), - clusterUUID, - blobStoreRepository.getCompressor(), - blobStoreRepository.getNamedXContentRegistry() - ), - SETTING_METADATA, - listener - ) + remoteGlobalMetadataManager.readAsync( + SETTING_METADATA, + new RemotePersistentSettingsMetadata( + manifest.getSettingsMetadata().getUploadedFilename(), + clusterUUID, + blobStoreRepository.getCompressor(), + blobStoreRepository.getNamedXContentRegistry() + ), + listener ); } if (readTransientSettingsMetadata) { - asyncMetadataReadActions.add( - remoteGlobalMetadataManager.getAsyncMetadataReadAction( - new RemoteTransientSettingsMetadata( - manifest.getTransientSettingsMetadata().getUploadedFilename(), - clusterUUID, - blobStoreRepository.getCompressor(), - blobStoreRepository.getNamedXContentRegistry() - ), - TRANSIENT_SETTING_METADATA, - listener - ) + remoteGlobalMetadataManager.readAsync( + TRANSIENT_SETTING_METADATA, + new RemoteTransientSettingsMetadata( + manifest.getTransientSettingsMetadata().getUploadedFilename(), + clusterUUID, + blobStoreRepository.getCompressor(), + blobStoreRepository.getNamedXContentRegistry() + ), + listener ); } if (readTemplatesMetadata) { - asyncMetadataReadActions.add( - remoteGlobalMetadataManager.getAsyncMetadataReadAction( - new RemoteTemplatesMetadata( - manifest.getTemplatesMetadata().getUploadedFilename(), - clusterUUID, - blobStoreRepository.getCompressor(), - blobStoreRepository.getNamedXContentRegistry() - ), - TEMPLATES_METADATA, - listener - ) + remoteGlobalMetadataManager.readAsync( + TEMPLATES_METADATA, + new RemoteTemplatesMetadata( + manifest.getTemplatesMetadata().getUploadedFilename(), + clusterUUID, + blobStoreRepository.getCompressor(), + blobStoreRepository.getNamedXContentRegistry() + ), + listener ); } if (readDiscoveryNodes) { - asyncMetadataReadActions.add( - remoteClusterStateAttributesManager.getAsyncMetadataReadAction( - DISCOVERY_NODES, - new RemoteDiscoveryNodes( - manifest.getDiscoveryNodesMetadata().getUploadedFilename(), - clusterUUID, - blobStoreRepository.getCompressor() - ), - listener - ) + remoteClusterStateAttributesManager.readAsync( + DISCOVERY_NODES, + new RemoteDiscoveryNodes( + manifest.getDiscoveryNodesMetadata().getUploadedFilename(), + clusterUUID, + blobStoreRepository.getCompressor() + ), + listener ); } if (readClusterBlocks) { - asyncMetadataReadActions.add( - remoteClusterStateAttributesManager.getAsyncMetadataReadAction( - CLUSTER_BLOCKS, - new RemoteClusterBlocks( - manifest.getClusterBlocksMetadata().getUploadedFilename(), - clusterUUID, - blobStoreRepository.getCompressor() - ), - listener - ) + remoteClusterStateAttributesManager.readAsync( + CLUSTER_BLOCKS, + new RemoteClusterBlocks( + manifest.getClusterBlocksMetadata().getUploadedFilename(), + clusterUUID, + blobStoreRepository.getCompressor() + ), + listener ); } if (readHashesOfConsistentSettings) { - asyncMetadataReadActions.add( - remoteGlobalMetadataManager.getAsyncMetadataReadAction( - new RemoteHashesOfConsistentSettings( - manifest.getHashesOfConsistentSettings().getUploadedFilename(), - clusterUUID, - blobStoreRepository.getCompressor() - ), - HASHES_OF_CONSISTENT_SETTINGS, - listener - ) + remoteGlobalMetadataManager.readAsync( + HASHES_OF_CONSISTENT_SETTINGS, + new RemoteHashesOfConsistentSettings( + manifest.getHashesOfConsistentSettings().getUploadedFilename(), + clusterUUID, + blobStoreRepository.getCompressor() + ), + listener ); } for 
(Map.Entry entry : clusterStateCustomToRead.entrySet()) { - asyncMetadataReadActions.add( - remoteClusterStateAttributesManager.getAsyncMetadataReadAction( - // pass component name as cluster-state-custom--, so that we can interpret it later - String.join(CUSTOM_DELIMITER, CLUSTER_STATE_CUSTOM, entry.getKey()), - new RemoteClusterStateCustoms( - entry.getValue().getUploadedFilename(), - entry.getValue().getAttributeName(), - clusterUUID, - blobStoreRepository.getCompressor(), - namedWriteableRegistry - ), - listener - ) + remoteClusterStateAttributesManager.readAsync( + // pass component name as cluster-state-custom--, so that we can interpret it later + String.join(CUSTOM_DELIMITER, CLUSTER_STATE_CUSTOM, entry.getKey()), + new RemoteClusterStateCustoms( + entry.getValue().getUploadedFilename(), + entry.getValue().getAttributeName(), + clusterUUID, + blobStoreRepository.getCompressor(), + namedWriteableRegistry + ), + listener ); } - for (CheckedRunnable asyncMetadataReadAction : asyncMetadataReadActions) { - asyncMetadataReadAction.run(); - } - try { if (latch.await(this.remoteStateReadTimeout.getMillis(), TimeUnit.MILLISECONDS) == false) { RemoteStateTransferException exception = new RemoteStateTransferException( diff --git a/server/src/main/java/org/opensearch/gateway/remote/RemoteGlobalMetadataManager.java b/server/src/main/java/org/opensearch/gateway/remote/RemoteGlobalMetadataManager.java index 2c5aad99adc0c..5a6f4b7e9f1f1 100644 --- a/server/src/main/java/org/opensearch/gateway/remote/RemoteGlobalMetadataManager.java +++ b/server/src/main/java/org/opensearch/gateway/remote/RemoteGlobalMetadataManager.java @@ -8,7 +8,6 @@ package org.opensearch.gateway.remote; -import org.opensearch.action.LatchedActionListener; import org.opensearch.cluster.ClusterState; import org.opensearch.cluster.DiffableUtils; import org.opensearch.cluster.DiffableUtils.NonDiffableValueSerializer; @@ -17,9 +16,8 @@ import org.opensearch.cluster.metadata.Metadata.Custom; import org.opensearch.cluster.metadata.Metadata.XContentContext; import org.opensearch.cluster.metadata.TemplatesMetadata; -import org.opensearch.common.CheckedRunnable; import org.opensearch.common.remote.AbstractRemoteWritableBlobEntity; -import org.opensearch.common.remote.RemoteWritableEntityStore; +import org.opensearch.common.remote.AbstractRemoteWritableEntityManager; import org.opensearch.common.settings.ClusterSettings; import org.opensearch.common.settings.Setting; import org.opensearch.common.settings.Settings; @@ -43,7 +41,6 @@ import java.io.IOException; import java.util.Collections; -import java.util.HashMap; import java.util.Locale; import java.util.Map; import java.util.Map.Entry; @@ -56,7 +53,7 @@ * * @opensearch.internal */ -public class RemoteGlobalMetadataManager { +public class RemoteGlobalMetadataManager extends AbstractRemoteWritableEntityManager { public static final TimeValue GLOBAL_METADATA_UPLOAD_TIMEOUT_DEFAULT = TimeValue.timeValueMillis(20000); @@ -70,7 +67,6 @@ public class RemoteGlobalMetadataManager { public static final int GLOBAL_METADATA_CURRENT_CODEC_VERSION = 1; private volatile TimeValue globalMetadataUploadTimeout; - private Map remoteWritableEntityStores; private final Compressor compressor; private final NamedXContentRegistry namedXContentRegistry; private final NamedWriteableRegistry namedWriteableRegistry; @@ -87,7 +83,6 @@ public class RemoteGlobalMetadataManager { this.compressor = blobStoreRepository.getCompressor(); this.namedXContentRegistry = blobStoreRepository.getNamedXContentRegistry(); 
this.namedWriteableRegistry = namedWriteableRegistry; - this.remoteWritableEntityStores = new HashMap<>(); this.remoteWritableEntityStores.put( RemoteGlobalMetadata.GLOBAL_METADATA, new RemoteClusterStateBlobStore<>( @@ -161,46 +156,28 @@ public class RemoteGlobalMetadataManager { clusterSettings.addSettingsUpdateConsumer(GLOBAL_METADATA_UPLOAD_TIMEOUT_SETTING, this::setGlobalMetadataUploadTimeout); } - /** - * Allows async upload of Metadata components to remote - */ - CheckedRunnable getAsyncMetadataWriteAction( - AbstractRemoteWritableBlobEntity writeEntity, - LatchedActionListener latchedActionListener - ) { - return (() -> getStore(writeEntity).writeAsync(writeEntity, getActionListener(writeEntity, latchedActionListener))); - } - - private RemoteWritableEntityStore getStore(AbstractRemoteWritableBlobEntity entity) { - RemoteWritableEntityStore remoteStore = remoteWritableEntityStores.get(entity.getType()); - if (remoteStore == null) { - throw new IllegalArgumentException("Unknown entity type [" + entity.getType() + "]"); - } - return remoteStore; - } - - private ActionListener getActionListener( - AbstractRemoteWritableBlobEntity remoteBlobStoreObject, - LatchedActionListener latchedActionListener + @Override + protected ActionListener getWrappedWriteListener( + String component, + AbstractRemoteWritableBlobEntity remoteEntity, + ActionListener listener ) { return ActionListener.wrap( - resp -> latchedActionListener.onResponse(remoteBlobStoreObject.getUploadedMetadata()), - ex -> latchedActionListener.onFailure( - new RemoteStateTransferException("Upload failed for " + remoteBlobStoreObject.getType(), ex) - ) + resp -> listener.onResponse(remoteEntity.getUploadedMetadata()), + ex -> listener.onFailure(new RemoteStateTransferException("Upload failed for " + component, remoteEntity, ex)) ); } - CheckedRunnable getAsyncMetadataReadAction( - AbstractRemoteWritableBlobEntity readEntity, - String componentName, - LatchedActionListener listener + @Override + protected ActionListener getWrappedReadListener( + String component, + AbstractRemoteWritableBlobEntity remoteEntity, + ActionListener listener ) { - ActionListener actionListener = ActionListener.wrap( - response -> listener.onResponse(new RemoteReadResult(response, readEntity.getType(), componentName)), - listener::onFailure + return ActionListener.wrap( + response -> listener.onResponse(new RemoteReadResult(response, remoteEntity.getType(), component)), + ex -> listener.onFailure(new RemoteStateTransferException("Download failed for " + component, remoteEntity, ex)) ); - return () -> getStore(readEntity).readAsync(readEntity, actionListener); } Metadata getGlobalMetadata(String clusterUUID, ClusterMetadataManifest clusterMetadataManifest) { diff --git a/server/src/main/java/org/opensearch/gateway/remote/RemoteIndexMetadataManager.java b/server/src/main/java/org/opensearch/gateway/remote/RemoteIndexMetadataManager.java index c595f19279354..c30721c8f625c 100644 --- a/server/src/main/java/org/opensearch/gateway/remote/RemoteIndexMetadataManager.java +++ b/server/src/main/java/org/opensearch/gateway/remote/RemoteIndexMetadataManager.java @@ -8,10 +8,9 @@ package org.opensearch.gateway.remote; -import org.opensearch.action.LatchedActionListener; import org.opensearch.cluster.metadata.IndexMetadata; -import org.opensearch.common.CheckedRunnable; -import org.opensearch.common.remote.RemoteWritableEntityStore; +import org.opensearch.common.remote.AbstractRemoteWritableBlobEntity; +import 
org.opensearch.common.remote.AbstractRemoteWritableEntityManager; import org.opensearch.common.settings.ClusterSettings; import org.opensearch.common.settings.Setting; import org.opensearch.common.unit.TimeValue; @@ -33,7 +32,7 @@ * * @opensearch.internal */ -public class RemoteIndexMetadataManager { +public class RemoteIndexMetadataManager extends AbstractRemoteWritableEntityManager { public static final TimeValue INDEX_METADATA_UPLOAD_TIMEOUT_DEFAULT = TimeValue.timeValueMillis(20000); @@ -45,7 +44,6 @@ public class RemoteIndexMetadataManager { Setting.Property.Deprecated ); - private final RemoteWritableEntityStore indexMetadataBlobStore; private final Compressor compressor; private final NamedXContentRegistry namedXContentRegistry; @@ -58,12 +56,15 @@ public RemoteIndexMetadataManager( BlobStoreTransferService blobStoreTransferService, ThreadPool threadpool ) { - this.indexMetadataBlobStore = new RemoteClusterStateBlobStore<>( - blobStoreTransferService, - blobStoreRepository, - clusterName, - threadpool, - ThreadPool.Names.REMOTE_STATE_READ + this.remoteWritableEntityStores.put( + RemoteIndexMetadata.INDEX, + new RemoteClusterStateBlobStore<>( + blobStoreTransferService, + blobStoreRepository, + clusterName, + threadpool, + ThreadPool.Names.REMOTE_STATE_READ + ) ); this.namedXContentRegistry = blobStoreRepository.getNamedXContentRegistry(); this.compressor = blobStoreRepository.getCompressor(); @@ -71,45 +72,6 @@ public RemoteIndexMetadataManager( clusterSettings.addSettingsUpdateConsumer(INDEX_METADATA_UPLOAD_TIMEOUT_SETTING, this::setIndexMetadataUploadTimeout); } - /** - * Allows async Upload of IndexMetadata to remote - * - * @param indexMetadata {@link IndexMetadata} to upload - * @param latchedActionListener listener to respond back on after upload finishes - */ - CheckedRunnable getAsyncIndexMetadataWriteAction( - IndexMetadata indexMetadata, - String clusterUUID, - LatchedActionListener latchedActionListener - ) { - RemoteIndexMetadata remoteIndexMetadata = new RemoteIndexMetadata(indexMetadata, clusterUUID, compressor, namedXContentRegistry); - ActionListener completionListener = ActionListener.wrap( - resp -> latchedActionListener.onResponse(remoteIndexMetadata.getUploadedMetadata()), - ex -> latchedActionListener.onFailure(new RemoteStateTransferException(indexMetadata.getIndex().getName(), ex)) - ); - return () -> indexMetadataBlobStore.writeAsync(remoteIndexMetadata, completionListener); - } - - CheckedRunnable getAsyncIndexMetadataReadAction( - String clusterUUID, - String uploadedFilename, - LatchedActionListener latchedActionListener - ) { - RemoteIndexMetadata remoteIndexMetadata = new RemoteIndexMetadata( - RemoteClusterStateUtils.getFormattedIndexFileName(uploadedFilename), - clusterUUID, - compressor, - namedXContentRegistry - ); - ActionListener actionListener = ActionListener.wrap( - response -> latchedActionListener.onResponse( - new RemoteReadResult(response, RemoteIndexMetadata.INDEX, response.getIndex().getName()) - ), - latchedActionListener::onFailure - ); - return () -> indexMetadataBlobStore.readAsync(remoteIndexMetadata, actionListener); - } - /** * Fetch index metadata from remote cluster state * @@ -124,7 +86,7 @@ IndexMetadata getIndexMetadata(ClusterMetadataManifest.UploadedIndexMetadata upl namedXContentRegistry ); try { - return indexMetadataBlobStore.read(remoteIndexMetadata); + return (IndexMetadata) getStore(remoteIndexMetadata).read(remoteIndexMetadata); } catch (IOException e) { throw new IllegalStateException( String.format(Locale.ROOT, 
"Error while downloading IndexMetadata - %s", uploadedIndexMetadata.getUploadedFilename()), @@ -141,4 +103,27 @@ private void setIndexMetadataUploadTimeout(TimeValue newIndexMetadataUploadTimeo this.indexMetadataUploadTimeout = newIndexMetadataUploadTimeout; } + @Override + protected ActionListener getWrappedWriteListener( + String component, + AbstractRemoteWritableBlobEntity remoteEntity, + ActionListener listener + ) { + return ActionListener.wrap( + resp -> listener.onResponse(remoteEntity.getUploadedMetadata()), + ex -> listener.onFailure(new RemoteStateTransferException("Upload failed for " + component, remoteEntity, ex)) + ); + } + + @Override + protected ActionListener getWrappedReadListener( + String component, + AbstractRemoteWritableBlobEntity remoteEntity, + ActionListener listener + ) { + return ActionListener.wrap( + response -> listener.onResponse(new RemoteReadResult(response, RemoteIndexMetadata.INDEX, component)), + ex -> listener.onFailure(new RemoteStateTransferException("Download failed for " + component, remoteEntity, ex)) + ); + } } diff --git a/server/src/test/java/org/opensearch/cluster/routing/remote/RemoteRoutingTableServiceTests.java b/server/src/test/java/org/opensearch/cluster/routing/remote/RemoteRoutingTableServiceTests.java index 564c7f7aed304..f66e096e9b548 100644 --- a/server/src/test/java/org/opensearch/cluster/routing/remote/RemoteRoutingTableServiceTests.java +++ b/server/src/test/java/org/opensearch/cluster/routing/remote/RemoteRoutingTableServiceTests.java @@ -543,7 +543,7 @@ public void testGetAsyncIndexRoutingReadAction() throws Exception { "cluster-uuid", uploadedFileName, new LatchedActionListener<>(listener, latch) - ).run(); + ); latch.await(); assertNull(listener.getFailure()); @@ -584,7 +584,7 @@ public void testGetAsyncIndexRoutingWriteAction() throws Exception { clusterState.version(), clusterState.getRoutingTable().indicesRouting().get(indexName), new LatchedActionListener<>(listener, latch) - ).run(); + ); latch.await(); assertNull(listener.getFailure()); assertNotNull(listener.getResult()); diff --git a/server/src/test/java/org/opensearch/common/remote/AbstractRemoteWritableEntityManagerTests.java b/server/src/test/java/org/opensearch/common/remote/AbstractRemoteWritableEntityManagerTests.java new file mode 100644 index 0000000000000..3d10bbf59f6ad --- /dev/null +++ b/server/src/test/java/org/opensearch/common/remote/AbstractRemoteWritableEntityManagerTests.java @@ -0,0 +1,64 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. 
+ */ + +package org.opensearch.common.remote; + +import org.opensearch.core.action.ActionListener; +import org.opensearch.gateway.remote.ClusterMetadataManifest; +import org.opensearch.gateway.remote.model.RemoteReadResult; +import org.opensearch.test.OpenSearchTestCase; + +import static org.mockito.Mockito.mock; +import static org.mockito.Mockito.times; +import static org.mockito.Mockito.verify; +import static org.mockito.Mockito.when; + +public class AbstractRemoteWritableEntityManagerTests extends OpenSearchTestCase { + public void testGetStoreWithKnownEntityType() { + AbstractRemoteWritableEntityManager manager = new ConcreteRemoteWritableEntityManager(); + String knownEntityType = "knownType"; + RemoteWritableEntityStore mockStore = mock(RemoteWritableEntityStore.class); + manager.remoteWritableEntityStores.put(knownEntityType, mockStore); + AbstractRemoteWritableBlobEntity mockEntity = mock(AbstractRemoteWritableBlobEntity.class); + when(mockEntity.getType()).thenReturn(knownEntityType); + + RemoteWritableEntityStore store = manager.getStore(mockEntity); + verify(mockEntity).getType(); + assertEquals(mockStore, store); + } + + public void testGetStoreWithUnknownEntityType() { + AbstractRemoteWritableEntityManager manager = new ConcreteRemoteWritableEntityManager(); + String unknownEntityType = "unknownType"; + AbstractRemoteWritableBlobEntity mockEntity = mock(AbstractRemoteWritableBlobEntity.class); + when(mockEntity.getType()).thenReturn(unknownEntityType); + + assertThrows(IllegalArgumentException.class, () -> manager.getStore(mockEntity)); + verify(mockEntity, times(2)).getType(); + } + + private static class ConcreteRemoteWritableEntityManager extends AbstractRemoteWritableEntityManager { + @Override + protected ActionListener getWrappedWriteListener( + String component, + AbstractRemoteWritableBlobEntity remoteEntity, + ActionListener listener + ) { + return null; + } + + @Override + protected ActionListener getWrappedReadListener( + String component, + AbstractRemoteWritableBlobEntity remoteEntity, + ActionListener listener + ) { + return null; + } + } +} diff --git a/server/src/test/java/org/opensearch/gateway/remote/RemoteClusterStateAttributesManagerTests.java b/server/src/test/java/org/opensearch/gateway/remote/RemoteClusterStateAttributesManagerTests.java index 3f2edd1a6c5a5..4ef459e6657a1 100644 --- a/server/src/test/java/org/opensearch/gateway/remote/RemoteClusterStateAttributesManagerTests.java +++ b/server/src/test/java/org/opensearch/gateway/remote/RemoteClusterStateAttributesManagerTests.java @@ -107,7 +107,7 @@ public void tearDown() throws Exception { threadPool.shutdown(); } - public void testGetAsyncMetadataWriteAction_DiscoveryNodes() throws IOException, InterruptedException { + public void testGetAsyncWriteRunnable_DiscoveryNodes() throws IOException, InterruptedException { DiscoveryNodes discoveryNodes = getDiscoveryNodes(); RemoteDiscoveryNodes remoteDiscoveryNodes = new RemoteDiscoveryNodes(discoveryNodes, VERSION, CLUSTER_UUID, compressor); doAnswer(invocationOnMock -> { @@ -117,11 +117,7 @@ public void testGetAsyncMetadataWriteAction_DiscoveryNodes() throws IOException, .uploadBlob(any(InputStream.class), anyIterable(), anyString(), eq(URGENT), any(ActionListener.class)); final CountDownLatch latch = new CountDownLatch(1); final TestCapturingListener listener = new TestCapturingListener<>(); - remoteClusterStateAttributesManager.getAsyncMetadataWriteAction( - DISCOVERY_NODES, - remoteDiscoveryNodes, - new LatchedActionListener<>(listener, latch) - 
).run(); + remoteClusterStateAttributesManager.writeAsync(DISCOVERY_NODES, remoteDiscoveryNodes, new LatchedActionListener<>(listener, latch)); latch.await(); assertNull(listener.getFailure()); assertNotNull(listener.getResult()); @@ -140,7 +136,7 @@ public void testGetAsyncMetadataWriteAction_DiscoveryNodes() throws IOException, assertEquals(CLUSTER_STATE_ATTRIBUTES_CURRENT_CODEC_VERSION, Integer.parseInt(splitFileName[3])); } - public void testGetAsyncMetadataReadAction_DiscoveryNodes() throws IOException, InterruptedException { + public void testGetAsyncReadRunnable_DiscoveryNodes() throws IOException, InterruptedException { DiscoveryNodes discoveryNodes = getDiscoveryNodes(); String fileName = randomAlphaOfLength(10); when(blobStoreTransferService.downloadBlob(anyIterable(), anyString())).thenReturn( @@ -149,11 +145,7 @@ public void testGetAsyncMetadataReadAction_DiscoveryNodes() throws IOException, RemoteDiscoveryNodes remoteObjForDownload = new RemoteDiscoveryNodes(fileName, "cluster-uuid", compressor); CountDownLatch latch = new CountDownLatch(1); TestCapturingListener listener = new TestCapturingListener<>(); - remoteClusterStateAttributesManager.getAsyncMetadataReadAction( - DISCOVERY_NODES, - remoteObjForDownload, - new LatchedActionListener<>(listener, latch) - ).run(); + remoteClusterStateAttributesManager.readAsync(DISCOVERY_NODES, remoteObjForDownload, new LatchedActionListener<>(listener, latch)); latch.await(); assertNull(listener.getFailure()); assertNotNull(listener.getResult()); @@ -165,7 +157,7 @@ public void testGetAsyncMetadataReadAction_DiscoveryNodes() throws IOException, assertEquals(discoveryNodes.getClusterManagerNodeId(), readDiscoveryNodes.getClusterManagerNodeId()); } - public void testGetAsyncMetadataWriteAction_ClusterBlocks() throws IOException, InterruptedException { + public void testGetAsyncWriteRunnable_ClusterBlocks() throws IOException, InterruptedException { ClusterBlocks clusterBlocks = randomClusterBlocks(); RemoteClusterBlocks remoteClusterBlocks = new RemoteClusterBlocks(clusterBlocks, VERSION, CLUSTER_UUID, compressor); doAnswer(invocationOnMock -> { @@ -175,11 +167,7 @@ public void testGetAsyncMetadataWriteAction_ClusterBlocks() throws IOException, .uploadBlob(any(InputStream.class), anyIterable(), anyString(), eq(URGENT), any(ActionListener.class)); final CountDownLatch latch = new CountDownLatch(1); final TestCapturingListener listener = new TestCapturingListener<>(); - remoteClusterStateAttributesManager.getAsyncMetadataWriteAction( - CLUSTER_BLOCKS, - remoteClusterBlocks, - new LatchedActionListener<>(listener, latch) - ).run(); + remoteClusterStateAttributesManager.writeAsync(CLUSTER_BLOCKS, remoteClusterBlocks, new LatchedActionListener<>(listener, latch)); latch.await(); assertNull(listener.getFailure()); assertNotNull(listener.getResult()); @@ -198,7 +186,7 @@ public void testGetAsyncMetadataWriteAction_ClusterBlocks() throws IOException, assertEquals(CLUSTER_STATE_ATTRIBUTES_CURRENT_CODEC_VERSION, Integer.parseInt(splitFileName[3])); } - public void testGetAsyncMetadataReadAction_ClusterBlocks() throws IOException, InterruptedException { + public void testGetAsyncReadRunnable_ClusterBlocks() throws IOException, InterruptedException { ClusterBlocks clusterBlocks = randomClusterBlocks(); String fileName = randomAlphaOfLength(10); when(blobStoreTransferService.downloadBlob(anyIterable(), anyString())).thenReturn( @@ -208,11 +196,7 @@ public void testGetAsyncMetadataReadAction_ClusterBlocks() throws IOException, I CountDownLatch latch = 
new CountDownLatch(1); TestCapturingListener listener = new TestCapturingListener<>(); - remoteClusterStateAttributesManager.getAsyncMetadataReadAction( - CLUSTER_BLOCKS, - remoteClusterBlocks, - new LatchedActionListener<>(listener, latch) - ).run(); + remoteClusterStateAttributesManager.readAsync(CLUSTER_BLOCKS, remoteClusterBlocks, new LatchedActionListener<>(listener, latch)); latch.await(); assertNull(listener.getFailure()); assertNotNull(listener.getResult()); @@ -226,7 +210,7 @@ public void testGetAsyncMetadataReadAction_ClusterBlocks() throws IOException, I } } - public void testGetAsyncMetadataWriteAction_Custom() throws IOException, InterruptedException { + public void testGetAsyncWriteRunnable_Custom() throws IOException, InterruptedException { Custom custom = getClusterStateCustom(); RemoteClusterStateCustoms remoteClusterStateCustoms = new RemoteClusterStateCustoms( custom, @@ -243,11 +227,11 @@ public void testGetAsyncMetadataWriteAction_Custom() throws IOException, Interru .uploadBlob(any(InputStream.class), anyIterable(), anyString(), eq(URGENT), any(ActionListener.class)); final TestCapturingListener listener = new TestCapturingListener<>(); final CountDownLatch latch = new CountDownLatch(1); - remoteClusterStateAttributesManager.getAsyncMetadataWriteAction( + remoteClusterStateAttributesManager.writeAsync( CLUSTER_STATE_CUSTOM, remoteClusterStateCustoms, new LatchedActionListener<>(listener, latch) - ).run(); + ); latch.await(); assertNull(listener.getFailure()); assertNotNull(listener.getResult()); @@ -266,7 +250,7 @@ public void testGetAsyncMetadataWriteAction_Custom() throws IOException, Interru assertEquals(CLUSTER_STATE_ATTRIBUTES_CURRENT_CODEC_VERSION, Integer.parseInt(splitFileName[3])); } - public void testGetAsyncMetadataReadAction_Custom() throws IOException, InterruptedException { + public void testGetAsyncReadRunnable_Custom() throws IOException, InterruptedException { Custom custom = getClusterStateCustom(); String fileName = randomAlphaOfLength(10); RemoteClusterStateCustoms remoteClusterStateCustoms = new RemoteClusterStateCustoms( @@ -281,11 +265,11 @@ public void testGetAsyncMetadataReadAction_Custom() throws IOException, Interrup ); TestCapturingListener capturingListener = new TestCapturingListener<>(); final CountDownLatch latch = new CountDownLatch(1); - remoteClusterStateAttributesManager.getAsyncMetadataReadAction( + remoteClusterStateAttributesManager.readAsync( CLUSTER_STATE_CUSTOM, remoteClusterStateCustoms, new LatchedActionListener<>(capturingListener, latch) - ).run(); + ); latch.await(); assertNull(capturingListener.getFailure()); assertNotNull(capturingListener.getResult()); @@ -294,7 +278,7 @@ public void testGetAsyncMetadataReadAction_Custom() throws IOException, Interrup assertEquals(CLUSTER_STATE_CUSTOM, capturingListener.getResult().getComponentName()); } - public void testGetAsyncMetadataWriteAction_Exception() throws IOException, InterruptedException { + public void testGetAsyncWriteRunnable_Exception() throws IOException, InterruptedException { DiscoveryNodes discoveryNodes = getDiscoveryNodes(); RemoteDiscoveryNodes remoteDiscoveryNodes = new RemoteDiscoveryNodes(discoveryNodes, VERSION, CLUSTER_UUID, compressor); @@ -307,32 +291,33 @@ public void testGetAsyncMetadataWriteAction_Exception() throws IOException, Inte TestCapturingListener capturingListener = new TestCapturingListener<>(); final CountDownLatch latch = new CountDownLatch(1); - remoteClusterStateAttributesManager.getAsyncMetadataWriteAction( + 
remoteClusterStateAttributesManager.writeAsync( DISCOVERY_NODES, remoteDiscoveryNodes, new LatchedActionListener<>(capturingListener, latch) - ).run(); + ); latch.await(); assertNull(capturingListener.getResult()); assertTrue(capturingListener.getFailure() instanceof RemoteStateTransferException); assertEquals(ioException, capturingListener.getFailure().getCause()); } - public void testGetAsyncMetadataReadAction_Exception() throws IOException, InterruptedException { + public void testGetAsyncReadRunnable_Exception() throws IOException, InterruptedException { String fileName = randomAlphaOfLength(10); RemoteDiscoveryNodes remoteDiscoveryNodes = new RemoteDiscoveryNodes(fileName, CLUSTER_UUID, compressor); Exception ioException = new IOException("mock test exception"); when(blobStoreTransferService.downloadBlob(anyIterable(), anyString())).thenThrow(ioException); CountDownLatch latch = new CountDownLatch(1); TestCapturingListener capturingListener = new TestCapturingListener<>(); - remoteClusterStateAttributesManager.getAsyncMetadataReadAction( + remoteClusterStateAttributesManager.readAsync( DISCOVERY_NODES, remoteDiscoveryNodes, new LatchedActionListener<>(capturingListener, latch) - ).run(); + ); latch.await(); assertNull(capturingListener.getResult()); - assertEquals(ioException, capturingListener.getFailure()); + assertEquals(ioException, capturingListener.getFailure().getCause()); + assertTrue(capturingListener.getFailure() instanceof RemoteStateTransferException); } public void testGetUpdatedCustoms() { diff --git a/server/src/test/java/org/opensearch/gateway/remote/RemoteClusterStateServiceTests.java b/server/src/test/java/org/opensearch/gateway/remote/RemoteClusterStateServiceTests.java index ebd3488d06007..6c764585c48e7 100644 --- a/server/src/test/java/org/opensearch/gateway/remote/RemoteClusterStateServiceTests.java +++ b/server/src/test/java/org/opensearch/gateway/remote/RemoteClusterStateServiceTests.java @@ -121,6 +121,7 @@ import static org.opensearch.gateway.remote.RemoteClusterStateUtils.DELIMITER; import static org.opensearch.gateway.remote.RemoteClusterStateUtils.FORMAT_PARAMS; import static org.opensearch.gateway.remote.RemoteClusterStateUtils.getFormattedIndexFileName; +import static org.opensearch.gateway.remote.RemoteGlobalMetadataManager.GLOBAL_METADATA_UPLOAD_TIMEOUT_DEFAULT; import static org.opensearch.gateway.remote.model.RemoteClusterBlocks.CLUSTER_BLOCKS_FORMAT; import static org.opensearch.gateway.remote.model.RemoteClusterBlocksTests.randomClusterBlocks; import static org.opensearch.gateway.remote.model.RemoteClusterMetadataManifest.MANIFEST_CURRENT_CODEC_VERSION; @@ -590,6 +591,55 @@ public void testFailWriteIncrementalMetadataWhenTermChanged() { ); } + public void testWriteMetadataInParallelIncompleteUpload() throws IOException { + final ClusterState clusterState = generateClusterStateWithOneIndex().nodes(nodesWithLocalNodeClusterManager()).build(); + final RemoteClusterStateService rcssSpy = Mockito.spy(remoteClusterStateService); + rcssSpy.start(); + RemoteIndexMetadataManager mockedIndexManager = mock(RemoteIndexMetadataManager.class); + RemoteGlobalMetadataManager mockedGlobalMetadataManager = mock(RemoteGlobalMetadataManager.class); + RemoteClusterStateAttributesManager mockedClusterStateAttributeManager = mock(RemoteClusterStateAttributesManager.class); + ClusterMetadataManifest.UploadedMetadata mockedUploadedMetadata = mock(ClusterMetadataManifest.UploadedMetadata.class); + rcssSpy.setRemoteIndexMetadataManager(mockedIndexManager); + 
rcssSpy.setRemoteGlobalMetadataManager(mockedGlobalMetadataManager); + rcssSpy.setRemoteClusterStateAttributesManager(mockedClusterStateAttributeManager); + ArgumentCaptor listenerArgumentCaptor = ArgumentCaptor.forClass(LatchedActionListener.class); + + when(mockedGlobalMetadataManager.getGlobalMetadataUploadTimeout()).thenReturn(GLOBAL_METADATA_UPLOAD_TIMEOUT_DEFAULT); + when(mockedUploadedMetadata.getComponent()).thenReturn("test-component"); + doAnswer(invocation -> { + listenerArgumentCaptor.getValue().onResponse(mockedUploadedMetadata); + return null; + }).when(mockedIndexManager).writeAsync(any(), any(), listenerArgumentCaptor.capture()); + doAnswer(invocation -> { + listenerArgumentCaptor.getValue().onResponse(mockedUploadedMetadata); + return null; + }).when(mockedGlobalMetadataManager).writeAsync(anyString(), any(), listenerArgumentCaptor.capture()); + doAnswer(invocation -> { + listenerArgumentCaptor.getValue().onResponse(mockedUploadedMetadata); + return null; + }).when(mockedClusterStateAttributeManager).writeAsync(any(), any(), listenerArgumentCaptor.capture()); + + RemoteStateTransferException exception = expectThrows( + RemoteStateTransferException.class, + () -> rcssSpy.writeMetadataInParallel( + clusterState, + new ArrayList<>(clusterState.getMetadata().indices().values()), + emptyMap(), + clusterState.getMetadata().customs(), + true, + true, + true, + true, + true, + true, + clusterState.getCustoms(), + true, + emptyList() + ) + ); + assertTrue(exception.getMessage().startsWith("Some metadata components were not uploaded successfully")); + } + public void testWriteIncrementalMetadataSuccess() throws IOException { final ClusterState clusterState = generateClusterStateWithOneIndex().nodes(nodesWithLocalNodeClusterManager()).build(); mockBlobStoreObjects(); @@ -781,14 +831,18 @@ public void testGetClusterStateForManifest_IncludeEphemeral() throws IOException ArgumentCaptor> listenerArgumentCaptor = ArgumentCaptor.forClass( LatchedActionListener.class ); - when(mockedIndexManager.getAsyncIndexMetadataReadAction(any(), anyString(), listenerArgumentCaptor.capture())).thenReturn( - () -> listenerArgumentCaptor.getValue().onResponse(mockedResult) - ); - when(mockedGlobalMetadataManager.getAsyncMetadataReadAction(any(), anyString(), listenerArgumentCaptor.capture())).thenReturn( - () -> listenerArgumentCaptor.getValue().onResponse(mockedResult) - ); - when(mockedClusterStateAttributeManager.getAsyncMetadataReadAction(anyString(), any(), listenerArgumentCaptor.capture())) - .thenReturn(() -> listenerArgumentCaptor.getValue().onResponse(mockedResult)); + doAnswer(invocation -> { + listenerArgumentCaptor.getValue().onResponse(mockedResult); + return null; + }).when(mockedIndexManager).readAsync(any(), any(), listenerArgumentCaptor.capture()); + doAnswer(invocation -> { + listenerArgumentCaptor.getValue().onResponse(mockedResult); + return null; + }).when(mockedGlobalMetadataManager).readAsync(any(), any(), listenerArgumentCaptor.capture()); + doAnswer(invocation -> { + listenerArgumentCaptor.getValue().onResponse(mockedResult); + return null; + }).when(mockedClusterStateAttributeManager).readAsync(anyString(), any(), listenerArgumentCaptor.capture()); when(mockedResult.getComponent()).thenReturn(COORDINATION_METADATA); RemoteClusterStateService mockService = spy(remoteClusterStateService); mockService.getClusterStateForManifest(ClusterName.DEFAULT.value(), manifest, NODE_ID, true); @@ -823,14 +877,18 @@ public void testGetClusterStateForManifest_ExcludeEphemeral() throws IOException 
ArgumentCaptor> listenerArgumentCaptor = ArgumentCaptor.forClass( LatchedActionListener.class ); - when(mockedIndexManager.getAsyncIndexMetadataReadAction(any(), anyString(), listenerArgumentCaptor.capture())).thenReturn( - () -> listenerArgumentCaptor.getValue().onResponse(mockedResult) - ); - when(mockedGlobalMetadataManager.getAsyncMetadataReadAction(any(), anyString(), listenerArgumentCaptor.capture())).thenReturn( - () -> listenerArgumentCaptor.getValue().onResponse(mockedResult) - ); - when(mockedClusterStateAttributeManager.getAsyncMetadataReadAction(anyString(), any(), listenerArgumentCaptor.capture())) - .thenReturn(() -> listenerArgumentCaptor.getValue().onResponse(mockedResult)); + doAnswer(invocation -> { + listenerArgumentCaptor.getValue().onResponse(mockedResult); + return null; + }).when(mockedIndexManager).readAsync(anyString(), any(), listenerArgumentCaptor.capture()); + doAnswer(invocation -> { + listenerArgumentCaptor.getValue().onResponse(mockedResult); + return null; + }).when(mockedGlobalMetadataManager).readAsync(anyString(), any(), listenerArgumentCaptor.capture()); + doAnswer(invocation -> { + listenerArgumentCaptor.getValue().onResponse(mockedResult); + return null; + }).when(mockedClusterStateAttributeManager).readAsync(anyString(), any(), listenerArgumentCaptor.capture()); when(mockedResult.getComponent()).thenReturn(COORDINATION_METADATA); remoteClusterStateService.setRemoteIndexMetadataManager(mockedIndexManager); remoteClusterStateService.setRemoteGlobalMetadataManager(mockedGlobalMetadataManager); @@ -877,9 +935,10 @@ public void testGetClusterStateFromManifest_CodecV1() throws IOException { ArgumentCaptor> listenerArgumentCaptor = ArgumentCaptor.forClass( LatchedActionListener.class ); - when(mockedIndexManager.getAsyncIndexMetadataReadAction(any(), anyString(), listenerArgumentCaptor.capture())).thenReturn( - () -> listenerArgumentCaptor.getValue().onResponse(new RemoteReadResult(indexMetadata, INDEX, INDEX)) - ); + doAnswer(invocation -> { + listenerArgumentCaptor.getValue().onResponse(new RemoteReadResult(indexMetadata, INDEX, INDEX)); + return null; + }).when(mockedIndexManager).readAsync(anyString(), any(), listenerArgumentCaptor.capture()); when(mockedGlobalMetadataManager.getGlobalMetadata(anyString(), eq(manifest))).thenReturn(Metadata.EMPTY_METADATA); RemoteClusterStateService spiedService = spy(remoteClusterStateService); spiedService.getClusterStateForManifest(ClusterName.DEFAULT.value(), manifest, NODE_ID, true); @@ -1258,7 +1317,7 @@ public void testReadClusterStateInParallel_ExceptionDuringRead() throws IOExcept ); assertEquals("Exception during reading cluster state from remote", exception.getMessage()); assertTrue(exception.getSuppressed().length > 0); - assertEquals(mockException, exception.getSuppressed()[0]); + assertEquals(mockException, exception.getSuppressed()[0].getCause()); } public void testReadClusterStateInParallel_UnexpectedResult() throws IOException { @@ -1322,19 +1381,20 @@ public void testReadClusterStateInParallel_UnexpectedResult() throws IOException RemoteIndexMetadataManager mockIndexMetadataManager = mock(RemoteIndexMetadataManager.class); CheckedRunnable mockRunnable = mock(CheckedRunnable.class); ArgumentCaptor> latchCapture = ArgumentCaptor.forClass(LatchedActionListener.class); - when(mockIndexMetadataManager.getAsyncIndexMetadataReadAction(anyString(), anyString(), latchCapture.capture())).thenReturn( - mockRunnable - ); + doAnswer(invocation -> { + latchCapture.getValue().onResponse(mockResult); + return null; + 
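A note on the mocking change that repeats through these hunks: writeAsync and readAsync return void where the old getAsync*Action methods returned a runnable, so when(...).thenReturn(...) stubs no longer apply. The tests switch to doAnswer, which completes the listener argument inline. Below is a self-contained sketch of that pattern using simplified stand-in types, not the actual OpenSearch interfaces:

import static org.mockito.ArgumentMatchers.any;
import static org.mockito.ArgumentMatchers.anyString;
import static org.mockito.Mockito.doAnswer;
import static org.mockito.Mockito.mock;

public class DoAnswerSketch {
    public interface Listener<T> {
        void onResponse(T result);
        void onFailure(Exception e);
    }

    public interface AsyncReader {
        // void-returning async API: results arrive only through the listener
        void readAsync(String component, Object entity, Listener<String> listener);
    }

    public static void main(String[] args) {
        AsyncReader reader = mock(AsyncReader.class);

        // With a void method there is nothing to return, so the stub uses doAnswer
        // to pull the listener out of the invocation and complete it inline,
        // replacing the old "return a runnable, then call run()" style.
        doAnswer(invocation -> {
            Listener<String> listener = invocation.getArgument(2);
            listener.onResponse("mock-read-result");
            return null;
        }).when(reader).readAsync(anyString(), any(), any());

        reader.readAsync("coordination-metadata", new Object(), new Listener<String>() {
            @Override
            public void onResponse(String result) {
                System.out.println("completed with " + result);
            }

            @Override
            public void onFailure(Exception e) {
                throw new AssertionError(e);
            }
        });
    }
}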
}).when(mockIndexMetadataManager).readAsync(anyString(), any(), latchCapture.capture()); RemoteGlobalMetadataManager mockGlobalMetadataManager = mock(RemoteGlobalMetadataManager.class); - when(mockGlobalMetadataManager.getAsyncMetadataReadAction(any(), anyString(), latchCapture.capture())).thenReturn(mockRunnable); + doAnswer(invocation -> { + latchCapture.getValue().onResponse(mockResult); + return null; + }).when(mockGlobalMetadataManager).readAsync(any(), any(), latchCapture.capture()); RemoteClusterStateAttributesManager mockClusterStateAttributeManager = mock(RemoteClusterStateAttributesManager.class); - when(mockClusterStateAttributeManager.getAsyncMetadataReadAction(anyString(), any(), latchCapture.capture())).thenReturn( - mockRunnable - ); - doAnswer(invocationOnMock -> { + doAnswer(invocation -> { latchCapture.getValue().onResponse(mockResult); return null; - }).when(mockRunnable).run(); + }).when(mockClusterStateAttributeManager).readAsync(anyString(), any(), latchCapture.capture()); when(mockResult.getComponent()).thenReturn("mock-result"); remoteClusterStateService.start(); remoteClusterStateService.setRemoteIndexMetadataManager(mockIndexMetadataManager); @@ -1363,56 +1423,56 @@ public void testReadClusterStateInParallel_UnexpectedResult() throws IOException ); assertEquals("Unknown component: mock-result", exception.getMessage()); newIndicesToRead.forEach( - uploadedIndexMetadata -> verify(mockIndexMetadataManager, times(1)).getAsyncIndexMetadataReadAction( - eq(previousClusterState.getMetadata().clusterUUID()), - eq(uploadedIndexMetadata.getUploadedFilename()), + uploadedIndexMetadata -> verify(mockIndexMetadataManager, times(1)).readAsync( + eq("test-index-1"), + argThat(new BlobNameMatcher(uploadedIndexMetadata.getUploadedFilename())), any() ) ); - verify(mockGlobalMetadataManager, times(1)).getAsyncMetadataReadAction( - argThat(new BlobNameMatcher(COORDINATION_METADATA_FILENAME)), + verify(mockGlobalMetadataManager, times(1)).readAsync( eq(COORDINATION_METADATA), + argThat(new BlobNameMatcher(COORDINATION_METADATA_FILENAME)), any() ); - verify(mockGlobalMetadataManager, times(1)).getAsyncMetadataReadAction( - argThat(new BlobNameMatcher(PERSISTENT_SETTINGS_FILENAME)), + verify(mockGlobalMetadataManager, times(1)).readAsync( eq(SETTING_METADATA), + argThat(new BlobNameMatcher(PERSISTENT_SETTINGS_FILENAME)), any() ); - verify(mockGlobalMetadataManager, times(1)).getAsyncMetadataReadAction( - argThat(new BlobNameMatcher(TRANSIENT_SETTINGS_FILENAME)), + verify(mockGlobalMetadataManager, times(1)).readAsync( eq(TRANSIENT_SETTING_METADATA), + argThat(new BlobNameMatcher(TRANSIENT_SETTINGS_FILENAME)), any() ); - verify(mockGlobalMetadataManager, times(1)).getAsyncMetadataReadAction( - argThat(new BlobNameMatcher(TEMPLATES_METADATA_FILENAME)), + verify(mockGlobalMetadataManager, times(1)).readAsync( eq(TEMPLATES_METADATA), + argThat(new BlobNameMatcher(TEMPLATES_METADATA_FILENAME)), any() ); - verify(mockGlobalMetadataManager, times(1)).getAsyncMetadataReadAction( - argThat(new BlobNameMatcher(HASHES_OF_CONSISTENT_SETTINGS_FILENAME)), + verify(mockGlobalMetadataManager, times(1)).readAsync( eq(HASHES_OF_CONSISTENT_SETTINGS), + argThat(new BlobNameMatcher(HASHES_OF_CONSISTENT_SETTINGS_FILENAME)), any() ); newCustomMetadataMap.keySet().forEach(uploadedCustomMetadataKey -> { - verify(mockGlobalMetadataManager, times(1)).getAsyncMetadataReadAction( - argThat(new BlobNameMatcher(newCustomMetadataMap.get(uploadedCustomMetadataKey).getUploadedFilename())), + 
verify(mockGlobalMetadataManager, times(1)).readAsync( eq(uploadedCustomMetadataKey), + argThat(new BlobNameMatcher(newCustomMetadataMap.get(uploadedCustomMetadataKey).getUploadedFilename())), any() ); }); - verify(mockClusterStateAttributeManager, times(1)).getAsyncMetadataReadAction( + verify(mockClusterStateAttributeManager, times(1)).readAsync( eq(DISCOVERY_NODES), argThat(new BlobNameMatcher(DISCOVERY_NODES_FILENAME)), any() ); - verify(mockClusterStateAttributeManager, times(1)).getAsyncMetadataReadAction( + verify(mockClusterStateAttributeManager, times(1)).readAsync( eq(CLUSTER_BLOCKS), argThat(new BlobNameMatcher(CLUSTER_BLOCKS_FILENAME)), any() ); newClusterStateCustoms.keySet().forEach(uploadedClusterStateCustomMetadataKey -> { - verify(mockClusterStateAttributeManager, times(1)).getAsyncMetadataReadAction( + verify(mockClusterStateAttributeManager, times(1)).readAsync( eq(String.join(CUSTOM_DELIMITER, CLUSTER_STATE_CUSTOM, uploadedClusterStateCustomMetadataKey)), argThat(new BlobNameMatcher(newClusterStateCustoms.get(uploadedClusterStateCustomMetadataKey).getUploadedFilename())), any() @@ -1495,131 +1555,81 @@ public void testReadClusterStateInParallel_Success() throws IOException { RemoteGlobalMetadataManager mockedGlobalMetadataManager = mock(RemoteGlobalMetadataManager.class); RemoteClusterStateAttributesManager mockedClusterStateAttributeManager = mock(RemoteClusterStateAttributesManager.class); - when( - mockedIndexManager.getAsyncIndexMetadataReadAction( - eq(manifest.getClusterUUID()), - eq(indexFilename), - any(LatchedActionListener.class) - ) - ).thenAnswer(invocationOnMock -> { + doAnswer(invocationOnMock -> { LatchedActionListener latchedActionListener = invocationOnMock.getArgument(2, LatchedActionListener.class); - return (CheckedRunnable) () -> latchedActionListener.onResponse( - new RemoteReadResult(newIndexMetadata, INDEX, "test-index-1") - ); - }); - when( - mockedGlobalMetadataManager.getAsyncMetadataReadAction( - argThat(new BlobNameMatcher(customMetadataFilename)), - eq("custom_md_3"), - any() - ) - ).thenAnswer(invocationOnMock -> { + latchedActionListener.onResponse(new RemoteReadResult(newIndexMetadata, INDEX, "test-index-1")); + return null; + }).when(mockedIndexManager) + .readAsync(eq("test-index-1"), argThat(new BlobNameMatcher(indexFilename)), any(LatchedActionListener.class)); + doAnswer(invocationOnMock -> { LatchedActionListener latchedActionListener = invocationOnMock.getArgument(2, LatchedActionListener.class); - return (CheckedRunnable) () -> latchedActionListener.onResponse( - new RemoteReadResult(customMetadata3, CUSTOM_METADATA, "custom_md_3") - ); - }); - when( - mockedGlobalMetadataManager.getAsyncMetadataReadAction( - argThat(new BlobNameMatcher(COORDINATION_METADATA_FILENAME)), - eq(COORDINATION_METADATA), - any() - ) - ).thenAnswer(invocationOnMock -> { + latchedActionListener.onResponse(new RemoteReadResult(customMetadata3, CUSTOM_METADATA, "custom_md_3")); + return null; + }).when(mockedGlobalMetadataManager).readAsync(eq("custom_md_3"), argThat(new BlobNameMatcher(customMetadataFilename)), any()); + doAnswer(invocationOnMock -> { LatchedActionListener latchedActionListener = invocationOnMock.getArgument(2, LatchedActionListener.class); - return (CheckedRunnable) () -> latchedActionListener.onResponse( + latchedActionListener.onResponse( new RemoteReadResult(updatedCoordinationMetadata, COORDINATION_METADATA, COORDINATION_METADATA) ); - }); - when( - mockedGlobalMetadataManager.getAsyncMetadataReadAction( - argThat(new 
BlobNameMatcher(PERSISTENT_SETTINGS_FILENAME)), - eq(SETTING_METADATA), - any() - ) - ).thenAnswer(invocationOnMock -> { + return null; + }).when(mockedGlobalMetadataManager) + .readAsync(eq(COORDINATION_METADATA), argThat(new BlobNameMatcher(COORDINATION_METADATA_FILENAME)), any()); + doAnswer(invocationOnMock -> { LatchedActionListener latchedActionListener = invocationOnMock.getArgument(2, LatchedActionListener.class); - - return (CheckedRunnable) () -> latchedActionListener.onResponse( - new RemoteReadResult(updatedPersistentSettings, SETTING_METADATA, SETTING_METADATA) - ); - }); - when( - mockedGlobalMetadataManager.getAsyncMetadataReadAction( - argThat(new BlobNameMatcher(TRANSIENT_SETTINGS_FILENAME)), - eq(TRANSIENT_SETTING_METADATA), - any() - ) - ).thenAnswer(invocationOnMock -> { + latchedActionListener.onResponse(new RemoteReadResult(updatedPersistentSettings, SETTING_METADATA, SETTING_METADATA)); + return null; + }).when(mockedGlobalMetadataManager) + .readAsync(eq(SETTING_METADATA), argThat(new BlobNameMatcher(PERSISTENT_SETTINGS_FILENAME)), any()); + doAnswer(invocationOnMock -> { LatchedActionListener latchedActionListener = invocationOnMock.getArgument(2, LatchedActionListener.class); - return (CheckedRunnable) () -> latchedActionListener.onResponse( + latchedActionListener.onResponse( new RemoteReadResult(updatedTransientSettings, TRANSIENT_SETTING_METADATA, TRANSIENT_SETTING_METADATA) ); - }); - when( - mockedGlobalMetadataManager.getAsyncMetadataReadAction( - argThat(new BlobNameMatcher(TEMPLATES_METADATA_FILENAME)), - eq(TEMPLATES_METADATA), - any() - ) - ).thenAnswer(invocationOnMock -> { + return null; + }).when(mockedGlobalMetadataManager) + .readAsync(eq(TRANSIENT_SETTING_METADATA), argThat(new BlobNameMatcher(TRANSIENT_SETTINGS_FILENAME)), any()); + doAnswer(invocationOnMock -> { LatchedActionListener latchedActionListener = invocationOnMock.getArgument(2, LatchedActionListener.class); - return (CheckedRunnable) () -> latchedActionListener.onResponse( - new RemoteReadResult(updatedTemplateMetadata, TEMPLATES_METADATA, TEMPLATES_METADATA) - ); - }); - when( - mockedGlobalMetadataManager.getAsyncMetadataReadAction( - argThat(new BlobNameMatcher(HASHES_OF_CONSISTENT_SETTINGS_FILENAME)), - eq(HASHES_OF_CONSISTENT_SETTINGS), - any() - ) - ).thenAnswer(invocationOnMock -> { + latchedActionListener.onResponse(new RemoteReadResult(updatedTemplateMetadata, TEMPLATES_METADATA, TEMPLATES_METADATA)); + return null; + }).when(mockedGlobalMetadataManager) + .readAsync(eq(TEMPLATES_METADATA), argThat(new BlobNameMatcher(TEMPLATES_METADATA_FILENAME)), any()); + doAnswer(invocationOnMock -> { LatchedActionListener latchedActionListener = invocationOnMock.getArgument(2, LatchedActionListener.class); - return (CheckedRunnable) () -> latchedActionListener.onResponse( + latchedActionListener.onResponse( new RemoteReadResult(updatedHashesOfConsistentSettings, HASHES_OF_CONSISTENT_SETTINGS, HASHES_OF_CONSISTENT_SETTINGS) ); - }); - when( - mockedClusterStateAttributeManager.getAsyncMetadataReadAction( - eq(DISCOVERY_NODES), - argThat(new BlobNameMatcher(DISCOVERY_NODES_FILENAME)), - any() - ) - ).thenAnswer(invocationOnMock -> { + return null; + }).when(mockedGlobalMetadataManager) + .readAsync(eq(HASHES_OF_CONSISTENT_SETTINGS), argThat(new BlobNameMatcher(HASHES_OF_CONSISTENT_SETTINGS_FILENAME)), any()); + doAnswer(invocationOnMock -> { LatchedActionListener latchedActionListener = invocationOnMock.getArgument(2, LatchedActionListener.class); - return (CheckedRunnable) () -> 
latchedActionListener.onResponse( - new RemoteReadResult(updatedDiscoveryNodes, CLUSTER_STATE_ATTRIBUTE, DISCOVERY_NODES) - ); - }); - when( - mockedClusterStateAttributeManager.getAsyncMetadataReadAction( - eq(CLUSTER_BLOCKS), - argThat(new BlobNameMatcher(CLUSTER_BLOCKS_FILENAME)), - any() - ) - ).thenAnswer(invocationOnMock -> { + latchedActionListener.onResponse(new RemoteReadResult(updatedDiscoveryNodes, CLUSTER_STATE_ATTRIBUTE, DISCOVERY_NODES)); + return null; + }).when(mockedClusterStateAttributeManager) + .readAsync(eq(DISCOVERY_NODES), argThat(new BlobNameMatcher(DISCOVERY_NODES_FILENAME)), any()); + doAnswer(invocationOnMock -> { LatchedActionListener latchedActionListener = invocationOnMock.getArgument(2, LatchedActionListener.class); - return (CheckedRunnable) () -> latchedActionListener.onResponse( - new RemoteReadResult(updatedClusterBlocks, CLUSTER_STATE_ATTRIBUTE, CLUSTER_BLOCKS) - ); - }); - when( - mockedClusterStateAttributeManager.getAsyncMetadataReadAction( - eq(String.join(CUSTOM_DELIMITER, CLUSTER_STATE_CUSTOM, updatedClusterStateCustom3.getWriteableName())), - argThat(new BlobNameMatcher(clusterStateCustomFilename)), - any() - ) - ).thenAnswer(invocationOnMock -> { + latchedActionListener.onResponse(new RemoteReadResult(updatedClusterBlocks, CLUSTER_STATE_ATTRIBUTE, CLUSTER_BLOCKS)); + return null; + }).when(mockedClusterStateAttributeManager) + .readAsync(eq(CLUSTER_BLOCKS), argThat(new BlobNameMatcher(CLUSTER_BLOCKS_FILENAME)), any()); + doAnswer(invocationOnMock -> { LatchedActionListener latchedActionListener = invocationOnMock.getArgument(2, LatchedActionListener.class); - return (CheckedRunnable) () -> latchedActionListener.onResponse( + latchedActionListener.onResponse( new RemoteReadResult( updatedClusterStateCustom3, CLUSTER_STATE_ATTRIBUTE, String.join(CUSTOM_DELIMITER, CLUSTER_STATE_CUSTOM, updatedClusterStateCustom3.getWriteableName()) ) ); - }); + return null; + }).when(mockedClusterStateAttributeManager) + .readAsync( + eq(String.join(CUSTOM_DELIMITER, CLUSTER_STATE_CUSTOM, updatedClusterStateCustom3.getWriteableName())), + argThat(new BlobNameMatcher(clusterStateCustomFilename)), + any() + ); remoteClusterStateService.start(); remoteClusterStateService.setRemoteIndexMetadataManager(mockedIndexManager); @@ -1665,56 +1675,56 @@ public void testReadClusterStateInParallel_Success() throws IOException { uploadedClusterStateCustomMap.keySet().forEach(key -> assertTrue(updatedClusterState.customs().containsKey(key))); assertEquals(updatedClusterStateCustom3, updatedClusterState.custom("custom_3")); newIndicesToRead.forEach( - uploadedIndexMetadata -> verify(mockedIndexManager, times(1)).getAsyncIndexMetadataReadAction( - eq(previousClusterState.getMetadata().clusterUUID()), - eq(uploadedIndexMetadata.getUploadedFilename()), + uploadedIndexMetadata -> verify(mockedIndexManager, times(1)).readAsync( + eq("test-index-1"), + argThat(new BlobNameMatcher(uploadedIndexMetadata.getUploadedFilename())), any() ) ); - verify(mockedGlobalMetadataManager, times(1)).getAsyncMetadataReadAction( - argThat(new BlobNameMatcher(COORDINATION_METADATA_FILENAME)), + verify(mockedGlobalMetadataManager, times(1)).readAsync( eq(COORDINATION_METADATA), + argThat(new BlobNameMatcher(COORDINATION_METADATA_FILENAME)), any() ); - verify(mockedGlobalMetadataManager, times(1)).getAsyncMetadataReadAction( - argThat(new BlobNameMatcher(PERSISTENT_SETTINGS_FILENAME)), + verify(mockedGlobalMetadataManager, times(1)).readAsync( eq(SETTING_METADATA), + argThat(new 
BlobNameMatcher(PERSISTENT_SETTINGS_FILENAME)), any() ); - verify(mockedGlobalMetadataManager, times(1)).getAsyncMetadataReadAction( - argThat(new BlobNameMatcher(TRANSIENT_SETTINGS_FILENAME)), + verify(mockedGlobalMetadataManager, times(1)).readAsync( eq(TRANSIENT_SETTING_METADATA), + argThat(new BlobNameMatcher(TRANSIENT_SETTINGS_FILENAME)), any() ); - verify(mockedGlobalMetadataManager, times(1)).getAsyncMetadataReadAction( - argThat(new BlobNameMatcher(TEMPLATES_METADATA_FILENAME)), + verify(mockedGlobalMetadataManager, times(1)).readAsync( eq(TEMPLATES_METADATA), + argThat(new BlobNameMatcher(TEMPLATES_METADATA_FILENAME)), any() ); - verify(mockedGlobalMetadataManager, times(1)).getAsyncMetadataReadAction( - argThat(new BlobNameMatcher(HASHES_OF_CONSISTENT_SETTINGS_FILENAME)), + verify(mockedGlobalMetadataManager, times(1)).readAsync( eq(HASHES_OF_CONSISTENT_SETTINGS), + argThat(new BlobNameMatcher(HASHES_OF_CONSISTENT_SETTINGS_FILENAME)), any() ); newCustomMetadataMap.keySet().forEach(uploadedCustomMetadataKey -> { - verify(mockedGlobalMetadataManager, times(1)).getAsyncMetadataReadAction( - argThat(new BlobNameMatcher(newCustomMetadataMap.get(uploadedCustomMetadataKey).getUploadedFilename())), + verify(mockedGlobalMetadataManager, times(1)).readAsync( eq(uploadedCustomMetadataKey), + argThat(new BlobNameMatcher(newCustomMetadataMap.get(uploadedCustomMetadataKey).getUploadedFilename())), any() ); }); - verify(mockedClusterStateAttributeManager, times(1)).getAsyncMetadataReadAction( + verify(mockedClusterStateAttributeManager, times(1)).readAsync( eq(DISCOVERY_NODES), argThat(new BlobNameMatcher(DISCOVERY_NODES_FILENAME)), any() ); - verify(mockedClusterStateAttributeManager, times(1)).getAsyncMetadataReadAction( + verify(mockedClusterStateAttributeManager, times(1)).readAsync( eq(CLUSTER_BLOCKS), argThat(new BlobNameMatcher(CLUSTER_BLOCKS_FILENAME)), any() ); newClusterStateCustoms.keySet().forEach(uploadedClusterStateCustomMetadataKey -> { - verify(mockedClusterStateAttributeManager, times(1)).getAsyncMetadataReadAction( + verify(mockedClusterStateAttributeManager, times(1)).readAsync( eq(String.join(CUSTOM_DELIMITER, CLUSTER_STATE_CUSTOM, uploadedClusterStateCustomMetadataKey)), argThat(new BlobNameMatcher(newClusterStateCustoms.get(uploadedClusterStateCustomMetadataKey).getUploadedFilename())), any() diff --git a/server/src/test/java/org/opensearch/gateway/remote/RemoteGlobalMetadataManagerTests.java b/server/src/test/java/org/opensearch/gateway/remote/RemoteGlobalMetadataManagerTests.java index 917794ec03c3a..a2da1e8b0fdb2 100644 --- a/server/src/test/java/org/opensearch/gateway/remote/RemoteGlobalMetadataManagerTests.java +++ b/server/src/test/java/org/opensearch/gateway/remote/RemoteGlobalMetadataManagerTests.java @@ -158,7 +158,7 @@ public void testGlobalMetadataUploadWaitTimeSetting() { assertEquals(globalMetadataUploadTimeout, remoteGlobalMetadataManager.getGlobalMetadataUploadTimeout().seconds()); } - public void testGetReadMetadataAsyncAction_CoordinationMetadata() throws Exception { + public void testGetAsyncReadRunnable_CoordinationMetadata() throws Exception { CoordinationMetadata coordinationMetadata = getCoordinationMetadata(); String fileName = randomAlphaOfLength(10); RemoteCoordinationMetadata coordinationMetadataForDownload = new RemoteCoordinationMetadata( @@ -173,11 +173,11 @@ public void testGetReadMetadataAsyncAction_CoordinationMetadata() throws Excepti TestCapturingListener listener = new TestCapturingListener<>(); CountDownLatch latch = new 
CountDownLatch(1); - remoteGlobalMetadataManager.getAsyncMetadataReadAction( - coordinationMetadataForDownload, + remoteGlobalMetadataManager.readAsync( COORDINATION_METADATA, + coordinationMetadataForDownload, new LatchedActionListener<>(listener, latch) - ).run(); + ); latch.await(); assertNull(listener.getFailure()); assertNotNull(listener.getResult()); @@ -186,7 +186,7 @@ public void testGetReadMetadataAsyncAction_CoordinationMetadata() throws Excepti assertEquals(COORDINATION_METADATA, listener.getResult().getComponentName()); } - public void testGetAsyncMetadataWriteAction_CoordinationMetadata() throws Exception { + public void testGetAsyncWriteRunnable_CoordinationMetadata() throws Exception { CoordinationMetadata coordinationMetadata = getCoordinationMetadata(); RemoteCoordinationMetadata remoteCoordinationMetadata = new RemoteCoordinationMetadata( coordinationMetadata, @@ -203,8 +203,11 @@ public void testGetAsyncMetadataWriteAction_CoordinationMetadata() throws Except TestCapturingListener listener = new TestCapturingListener<>(); CountDownLatch latch = new CountDownLatch(1); - remoteGlobalMetadataManager.getAsyncMetadataWriteAction(remoteCoordinationMetadata, new LatchedActionListener<>(listener, latch)) - .run(); + remoteGlobalMetadataManager.writeAsync( + COORDINATION_METADATA, + remoteCoordinationMetadata, + new LatchedActionListener<>(listener, latch) + ); latch.await(); assertNull(listener.getFailure()); assertNotNull(listener.getResult()); @@ -224,7 +227,7 @@ public void testGetAsyncMetadataWriteAction_CoordinationMetadata() throws Except assertEquals(GLOBAL_METADATA_CURRENT_CODEC_VERSION, Integer.parseInt(splitFileName[3])); } - public void testGetReadMetadataAsyncAction_PersistentSettings() throws Exception { + public void testGetAsyncReadRunnable_PersistentSettings() throws Exception { Settings settingsMetadata = getSettings(); String fileName = randomAlphaOfLength(10); RemotePersistentSettingsMetadata persistentSettings = new RemotePersistentSettingsMetadata( @@ -240,11 +243,7 @@ public void testGetReadMetadataAsyncAction_PersistentSettings() throws Exception TestCapturingListener listener = new TestCapturingListener<>(); CountDownLatch latch = new CountDownLatch(1); - remoteGlobalMetadataManager.getAsyncMetadataReadAction( - persistentSettings, - SETTING_METADATA, - new LatchedActionListener<>(listener, latch) - ).run(); + remoteGlobalMetadataManager.readAsync(SETTING_METADATA, persistentSettings, new LatchedActionListener<>(listener, latch)); latch.await(); assertNull(listener.getFailure()); assertNotNull(listener.getResult()); @@ -253,7 +252,7 @@ public void testGetReadMetadataAsyncAction_PersistentSettings() throws Exception assertEquals(SETTING_METADATA, listener.getResult().getComponentName()); } - public void testGetAsyncMetadataWriteAction_PersistentSettings() throws Exception { + public void testGetAsyncWriteRunnable_PersistentSettings() throws Exception { Settings settingsMetadata = getSettings(); RemotePersistentSettingsMetadata persistentSettings = new RemotePersistentSettingsMetadata( settingsMetadata, @@ -269,7 +268,7 @@ public void testGetAsyncMetadataWriteAction_PersistentSettings() throws Exceptio .uploadBlob(any(InputStream.class), anyIterable(), anyString(), eq(URGENT), any(ActionListener.class)); TestCapturingListener listener = new TestCapturingListener<>(); CountDownLatch latch = new CountDownLatch(1); - remoteGlobalMetadataManager.getAsyncMetadataWriteAction(persistentSettings, new LatchedActionListener<>(listener, latch)).run(); + 
remoteGlobalMetadataManager.writeAsync(SETTING_METADATA, persistentSettings, new LatchedActionListener<>(listener, latch)); latch.await(); assertNull(listener.getFailure()); @@ -290,7 +289,7 @@ public void testGetAsyncMetadataWriteAction_PersistentSettings() throws Exceptio assertEquals(GLOBAL_METADATA_CURRENT_CODEC_VERSION, Integer.parseInt(splitFileName[3])); } - public void testGetReadMetadataAsyncAction_TransientSettings() throws Exception { + public void testGetAsyncReadRunnable_TransientSettings() throws Exception { Settings settingsMetadata = getSettings(); String fileName = randomAlphaOfLength(10); RemoteTransientSettingsMetadata transientSettings = new RemoteTransientSettingsMetadata( @@ -306,11 +305,7 @@ public void testGetReadMetadataAsyncAction_TransientSettings() throws Exception TestCapturingListener listener = new TestCapturingListener<>(); CountDownLatch latch = new CountDownLatch(1); - remoteGlobalMetadataManager.getAsyncMetadataReadAction( - transientSettings, - TRANSIENT_SETTING_METADATA, - new LatchedActionListener<>(listener, latch) - ).run(); + remoteGlobalMetadataManager.readAsync(TRANSIENT_SETTING_METADATA, transientSettings, new LatchedActionListener<>(listener, latch)); latch.await(); assertNull(listener.getFailure()); assertNotNull(listener.getResult()); @@ -319,7 +314,7 @@ public void testGetReadMetadataAsyncAction_TransientSettings() throws Exception assertEquals(TRANSIENT_SETTING_METADATA, listener.getResult().getComponentName()); } - public void testGetAsyncMetadataWriteAction_TransientSettings() throws Exception { + public void testGetAsyncWriteRunnable_TransientSettings() throws Exception { Settings settingsMetadata = getSettings(); RemoteTransientSettingsMetadata transientSettings = new RemoteTransientSettingsMetadata( settingsMetadata, @@ -335,7 +330,7 @@ public void testGetAsyncMetadataWriteAction_TransientSettings() throws Exception .uploadBlob(any(InputStream.class), anyIterable(), anyString(), eq(URGENT), any(ActionListener.class)); TestCapturingListener listener = new TestCapturingListener<>(); CountDownLatch latch = new CountDownLatch(1); - remoteGlobalMetadataManager.getAsyncMetadataWriteAction(transientSettings, new LatchedActionListener<>(listener, latch)).run(); + remoteGlobalMetadataManager.writeAsync(TRANSIENT_SETTING_METADATA, transientSettings, new LatchedActionListener<>(listener, latch)); latch.await(); assertNull(listener.getFailure()); assertNotNull(listener.getResult()); @@ -355,7 +350,7 @@ public void testGetAsyncMetadataWriteAction_TransientSettings() throws Exception assertEquals(GLOBAL_METADATA_CURRENT_CODEC_VERSION, Integer.parseInt(splitFileName[3])); } - public void testGetReadMetadataAsyncAction_HashesOfConsistentSettings() throws Exception { + public void testGetAsyncReadRunnable_HashesOfConsistentSettings() throws Exception { DiffableStringMap hashesOfConsistentSettings = getHashesOfConsistentSettings(); String fileName = randomAlphaOfLength(10); RemoteHashesOfConsistentSettings hashesOfConsistentSettingsForDownload = new RemoteHashesOfConsistentSettings( @@ -369,11 +364,11 @@ public void testGetReadMetadataAsyncAction_HashesOfConsistentSettings() throws E TestCapturingListener listener = new TestCapturingListener<>(); CountDownLatch latch = new CountDownLatch(1); - remoteGlobalMetadataManager.getAsyncMetadataReadAction( - hashesOfConsistentSettingsForDownload, + remoteGlobalMetadataManager.readAsync( HASHES_OF_CONSISTENT_SETTINGS, + hashesOfConsistentSettingsForDownload, new LatchedActionListener<>(listener, latch) - ).run(); 
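These write/read tests all synchronize on the same idiom: a capturing listener wrapped in a LatchedActionListener so the test thread can block until the async callback fires. TestCapturingListener itself is not shown in this patch; a minimal equivalent, with assumed names and structure, might look like:

import java.util.concurrent.CountDownLatch;

public class LatchedCaptureSketch {
    // Minimal stand-in for the TestCapturingListener these tests rely on; the
    // real utility lives in the OpenSearch test framework and implements
    // ActionListener, so the details here are assumptions.
    public static class CapturingListener<T> {
        private volatile T result;
        private volatile Exception failure;

        public void onResponse(T response) { this.result = response; }
        public void onFailure(Exception e) { this.failure = e; }
        public T getResult() { return result; }
        public Exception getFailure() { return failure; }
    }

    public static void main(String[] args) throws InterruptedException {
        CapturingListener<String> listener = new CapturingListener<>();
        CountDownLatch latch = new CountDownLatch(1);

        // A writeAsync-style callee completes the listener on another thread.
        // The latch plays the role LatchedActionListener plays in the tests: it
        // is counted down after the wrapped listener runs, so await() returns
        // only once a result or failure has been captured.
        new Thread(() -> {
            listener.onResponse("uploaded-file-name");
            latch.countDown();
        }).start();

        latch.await();
        if (listener.getFailure() != null || !"uploaded-file-name".equals(listener.getResult())) {
            throw new AssertionError("unexpected capture state");
        }
    }
}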
+ ); latch.await(); assertNull(listener.getFailure()); assertNotNull(listener.getResult()); @@ -382,7 +377,7 @@ public void testGetReadMetadataAsyncAction_HashesOfConsistentSettings() throws E assertEquals(HASHES_OF_CONSISTENT_SETTINGS, listener.getResult().getComponentName()); } - public void testGetAsyncMetadataWriteAction_HashesOfConsistentSettings() throws Exception { + public void testGetAsyncWriteRunnable_HashesOfConsistentSettings() throws Exception { DiffableStringMap hashesOfConsistentSettings = getHashesOfConsistentSettings(); RemoteHashesOfConsistentSettings hashesOfConsistentSettingsForUpload = new RemoteHashesOfConsistentSettings( hashesOfConsistentSettings, @@ -397,10 +392,11 @@ public void testGetAsyncMetadataWriteAction_HashesOfConsistentSettings() throws .uploadBlob(any(InputStream.class), anyIterable(), anyString(), eq(URGENT), any(ActionListener.class)); TestCapturingListener listener = new TestCapturingListener<>(); CountDownLatch latch = new CountDownLatch(1); - remoteGlobalMetadataManager.getAsyncMetadataWriteAction( + remoteGlobalMetadataManager.writeAsync( + HASHES_OF_CONSISTENT_SETTINGS, hashesOfConsistentSettingsForUpload, new LatchedActionListener<>(listener, latch) - ).run(); + ); latch.await(); assertNull(listener.getFailure()); assertNotNull(listener.getResult()); @@ -420,7 +416,7 @@ public void testGetAsyncMetadataWriteAction_HashesOfConsistentSettings() throws assertEquals(GLOBAL_METADATA_CURRENT_CODEC_VERSION, Integer.parseInt(splitFileName[3])); } - public void testGetReadMetadataAsyncAction_TemplatesMetadata() throws Exception { + public void testGetAsyncReadRunnable_TemplatesMetadata() throws Exception { TemplatesMetadata templatesMetadata = getTemplatesMetadata(); String fileName = randomAlphaOfLength(10); RemoteTemplatesMetadata templatesMetadataForDownload = new RemoteTemplatesMetadata( @@ -434,11 +430,11 @@ public void testGetReadMetadataAsyncAction_TemplatesMetadata() throws Exception ); TestCapturingListener listener = new TestCapturingListener<>(); CountDownLatch latch = new CountDownLatch(1); - remoteGlobalMetadataManager.getAsyncMetadataReadAction( - templatesMetadataForDownload, + remoteGlobalMetadataManager.readAsync( TEMPLATES_METADATA, + templatesMetadataForDownload, new LatchedActionListener<>(listener, latch) - ).run(); + ); latch.await(); assertNull(listener.getFailure()); assertNotNull(listener.getResult()); @@ -447,7 +443,7 @@ public void testGetReadMetadataAsyncAction_TemplatesMetadata() throws Exception assertEquals(TEMPLATES_METADATA, listener.getResult().getComponentName()); } - public void testGetAsyncMetadataWriteAction_TemplatesMetadata() throws Exception { + public void testGetAsyncWriteRunnable_TemplatesMetadata() throws Exception { TemplatesMetadata templatesMetadata = getTemplatesMetadata(); RemoteTemplatesMetadata templateMetadataForUpload = new RemoteTemplatesMetadata( templatesMetadata, @@ -463,8 +459,7 @@ public void testGetAsyncMetadataWriteAction_TemplatesMetadata() throws Exception .uploadBlob(any(InputStream.class), anyIterable(), anyString(), eq(URGENT), any(ActionListener.class)); TestCapturingListener listener = new TestCapturingListener<>(); CountDownLatch latch = new CountDownLatch(1); - remoteGlobalMetadataManager.getAsyncMetadataWriteAction(templateMetadataForUpload, new LatchedActionListener<>(listener, latch)) - .run(); + remoteGlobalMetadataManager.writeAsync(TEMPLATES_METADATA, templateMetadataForUpload, new LatchedActionListener<>(listener, latch)); latch.await(); assertNull(listener.getFailure()); 
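Several assertions in this patch move from expecting the raw IOException to expecting it as the cause of a RemoteStateTransferException, reflecting that the async paths now wrap transport failures before notifying the listener. A small sketch of that convention, using stand-in types rather than the production classes:

import java.io.IOException;

public class TransferWrappingSketch {
    // Stand-in for RemoteStateTransferException; the real class is an
    // OpenSearch runtime exception, so this shape is an assumption.
    public static class TransferException extends RuntimeException {
        public TransferException(String message, Throwable cause) {
            super(message, cause);
        }
    }

    public interface Listener<T> {
        void onResponse(T result);
        void onFailure(Exception e);
    }

    // A readAsync-style entry point: any I/O failure reaches the listener
    // wrapped, which is why the assertions now check getFailure().getCause().
    public static void readAsync(String component, Listener<String> listener) {
        try {
            throw new IOException("mock test exception");
        } catch (IOException e) {
            listener.onFailure(new TransferException("Exception during transfer of " + component, e));
        }
    }

    public static void main(String[] args) {
        readAsync("discovery-nodes", new Listener<String>() {
            @Override
            public void onResponse(String result) {}

            @Override
            public void onFailure(Exception e) {
                // Mirrors the updated test assertions: wrapper type on the
                // outside, original IOException preserved as the cause.
                if (!(e instanceof TransferException) || !(e.getCause() instanceof IOException)) {
                    throw new AssertionError("expected wrapped IOException");
                }
            }
        });
    }
}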
assertNotNull(listener.getResult()); @@ -484,7 +479,7 @@ public void testGetAsyncMetadataWriteAction_TemplatesMetadata() throws Exception assertEquals(GLOBAL_METADATA_CURRENT_CODEC_VERSION, Integer.parseInt(splitFileName[3])); } - public void testGetReadMetadataAsyncAction_CustomMetadata() throws Exception { + public void testGetAsyncReadRunnable_CustomMetadata() throws Exception { Metadata.Custom customMetadata = getCustomMetadata(); String fileName = randomAlphaOfLength(10); RemoteCustomMetadata customMetadataForDownload = new RemoteCustomMetadata( @@ -499,11 +494,7 @@ public void testGetReadMetadataAsyncAction_CustomMetadata() throws Exception { ); TestCapturingListener listener = new TestCapturingListener<>(); CountDownLatch latch = new CountDownLatch(1); - remoteGlobalMetadataManager.getAsyncMetadataReadAction( - customMetadataForDownload, - IndexGraveyard.TYPE, - new LatchedActionListener<>(listener, latch) - ).run(); + remoteGlobalMetadataManager.readAsync(IndexGraveyard.TYPE, customMetadataForDownload, new LatchedActionListener<>(listener, latch)); latch.await(); assertNull(listener.getFailure()); assertNotNull(listener.getResult()); @@ -512,7 +503,7 @@ public void testGetReadMetadataAsyncAction_CustomMetadata() throws Exception { assertEquals(IndexGraveyard.TYPE, listener.getResult().getComponentName()); } - public void testGetAsyncMetadataWriteAction_CustomMetadata() throws Exception { + public void testGetAsyncWriteRunnable_CustomMetadata() throws Exception { Metadata.Custom customMetadata = getCustomMetadata(); RemoteCustomMetadata customMetadataForUpload = new RemoteCustomMetadata( customMetadata, @@ -529,8 +520,11 @@ public void testGetAsyncMetadataWriteAction_CustomMetadata() throws Exception { .uploadBlob(any(InputStream.class), anyIterable(), anyString(), eq(URGENT), any(ActionListener.class)); TestCapturingListener listener = new TestCapturingListener<>(); CountDownLatch latch = new CountDownLatch(1); - remoteGlobalMetadataManager.getAsyncMetadataWriteAction(customMetadataForUpload, new LatchedActionListener<>(listener, latch)) - .run(); + remoteGlobalMetadataManager.writeAsync( + customMetadataForUpload.getType(), + customMetadataForUpload, + new LatchedActionListener<>(listener, latch) + ); latch.await(); assertNull(listener.getFailure()); assertNotNull(listener.getResult()); @@ -550,7 +544,7 @@ public void testGetAsyncMetadataWriteAction_CustomMetadata() throws Exception { assertEquals(GLOBAL_METADATA_CURRENT_CODEC_VERSION, Integer.parseInt(splitFileName[3])); } - public void testGetReadMetadataAsyncAction_GlobalMetadata() throws Exception { + public void testGetAsyncReadRunnable_GlobalMetadata() throws Exception { Metadata metadata = getGlobalMetadata(); String fileName = randomAlphaOfLength(10); RemoteGlobalMetadata globalMetadataForDownload = new RemoteGlobalMetadata(fileName, CLUSTER_UUID, compressor, xContentRegistry); @@ -559,11 +553,7 @@ public void testGetReadMetadataAsyncAction_GlobalMetadata() throws Exception { ); TestCapturingListener listener = new TestCapturingListener<>(); CountDownLatch latch = new CountDownLatch(1); - remoteGlobalMetadataManager.getAsyncMetadataReadAction( - globalMetadataForDownload, - GLOBAL_METADATA, - new LatchedActionListener<>(listener, latch) - ).run(); + remoteGlobalMetadataManager.readAsync(GLOBAL_METADATA, globalMetadataForDownload, new LatchedActionListener<>(listener, latch)); latch.await(); assertNull(listener.getFailure()); assertNotNull(listener.getResult()); @@ -572,7 +562,7 @@ public void 
testGetReadMetadataAsyncAction_GlobalMetadata() throws Exception { assertEquals(GLOBAL_METADATA, listener.getResult().getComponentName()); } - public void testGetReadMetadataAsyncAction_IOException() throws Exception { + public void testGetAsyncReadRunnable_IOException() throws Exception { String fileName = randomAlphaOfLength(10); RemoteCoordinationMetadata coordinationMetadataForDownload = new RemoteCoordinationMetadata( fileName, @@ -584,18 +574,19 @@ public void testGetReadMetadataAsyncAction_IOException() throws Exception { when(blobStoreTransferService.downloadBlob(anyIterable(), anyString())).thenThrow(ioException); TestCapturingListener listener = new TestCapturingListener<>(); CountDownLatch latch = new CountDownLatch(1); - remoteGlobalMetadataManager.getAsyncMetadataReadAction( - coordinationMetadataForDownload, + remoteGlobalMetadataManager.readAsync( COORDINATION_METADATA, + coordinationMetadataForDownload, new LatchedActionListener<>(listener, latch) - ).run(); + ); latch.await(); assertNull(listener.getResult()); assertNotNull(listener.getFailure()); - assertEquals(ioException, listener.getFailure()); + assertEquals(ioException, listener.getFailure().getCause()); + assertTrue(listener.getFailure() instanceof RemoteStateTransferException); } - public void testGetAsyncMetadataWriteAction_IOException() throws Exception { + public void testGetAsyncWriteRunnable_IOException() throws Exception { CoordinationMetadata coordinationMetadata = getCoordinationMetadata(); RemoteCoordinationMetadata remoteCoordinationMetadata = new RemoteCoordinationMetadata( coordinationMetadata, @@ -613,8 +604,11 @@ public void testGetAsyncMetadataWriteAction_IOException() throws Exception { TestCapturingListener listener = new TestCapturingListener<>(); CountDownLatch latch = new CountDownLatch(1); - remoteGlobalMetadataManager.getAsyncMetadataWriteAction(remoteCoordinationMetadata, new LatchedActionListener<>(listener, latch)) - .run(); + remoteGlobalMetadataManager.writeAsync( + COORDINATION_METADATA, + remoteCoordinationMetadata, + new LatchedActionListener<>(listener, latch) + ); assertNull(listener.getResult()); assertNotNull(listener.getFailure()); assertTrue(listener.getFailure() instanceof RemoteStateTransferException); diff --git a/server/src/test/java/org/opensearch/gateway/remote/RemoteIndexMetadataManagerTests.java b/server/src/test/java/org/opensearch/gateway/remote/RemoteIndexMetadataManagerTests.java index 817fc7b55d09a..76c5792677ea0 100644 --- a/server/src/test/java/org/opensearch/gateway/remote/RemoteIndexMetadataManagerTests.java +++ b/server/src/test/java/org/opensearch/gateway/remote/RemoteIndexMetadataManagerTests.java @@ -24,6 +24,7 @@ import org.opensearch.core.action.ActionListener; import org.opensearch.core.compress.Compressor; import org.opensearch.core.compress.NoneCompressor; +import org.opensearch.gateway.remote.model.RemoteIndexMetadata; import org.opensearch.gateway.remote.model.RemoteReadResult; import org.opensearch.index.remote.RemoteStoreUtils; import org.opensearch.index.translog.transfer.BlobStoreTransferService; @@ -83,7 +84,7 @@ public void tearDown() throws Exception { threadPool.shutdown(); } - public void testGetAsyncIndexMetadataWriteAction_Success() throws Exception { + public void testGetAsyncWriteRunnable_Success() throws Exception { IndexMetadata indexMetadata = getIndexMetadata(randomAlphaOfLength(10), randomBoolean(), randomAlphaOfLength(10)); BlobContainer blobContainer = mock(AsyncMultiStreamBlobContainer.class); BlobStore blobStore = 
mock(BlobStore.class); @@ -97,11 +98,11 @@ public void testGetAsyncIndexMetadataWriteAction_Success() throws Exception { return null; })).when(blobStoreTransferService).uploadBlob(any(), any(), any(), eq(WritePriority.URGENT), any(ActionListener.class)); - remoteIndexMetadataManager.getAsyncIndexMetadataWriteAction( - indexMetadata, - "cluster-uuid", + remoteIndexMetadataManager.writeAsync( + INDEX, + new RemoteIndexMetadata(indexMetadata, "cluster-uuid", compressor, null), new LatchedActionListener<>(listener, latch) - ).run(); + ); latch.await(); assertNull(listener.getFailure()); @@ -116,7 +117,7 @@ public void testGetAsyncIndexMetadataWriteAction_Success() throws Exception { assertTrue(pathTokens[6].startsWith(expectedFilePrefix)); } - public void testGetAsyncIndexMetadataWriteAction_IOFailure() throws Exception { + public void testGetAsyncWriteRunnable_IOFailure() throws Exception { IndexMetadata indexMetadata = getIndexMetadata(randomAlphaOfLength(10), randomBoolean(), randomAlphaOfLength(10)); BlobContainer blobContainer = mock(AsyncMultiStreamBlobContainer.class); BlobStore blobStore = mock(BlobStore.class); @@ -129,18 +130,18 @@ public void testGetAsyncIndexMetadataWriteAction_IOFailure() throws Exception { return null; })).when(blobStoreTransferService).uploadBlob(any(), any(), any(), eq(WritePriority.URGENT), any(ActionListener.class)); - remoteIndexMetadataManager.getAsyncIndexMetadataWriteAction( - indexMetadata, - "cluster-uuid", + remoteIndexMetadataManager.writeAsync( + INDEX, + new RemoteIndexMetadata(indexMetadata, "cluster-uuid", compressor, null), new LatchedActionListener<>(listener, latch) - ).run(); + ); latch.await(); assertNull(listener.getResult()); assertNotNull(listener.getFailure()); assertTrue(listener.getFailure() instanceof RemoteStateTransferException); } - public void testGetAsyncIndexMetadataReadAction_Success() throws Exception { + public void testGetAsyncReadRunnable_Success() throws Exception { IndexMetadata indexMetadata = getIndexMetadata(randomAlphaOfLength(10), randomBoolean(), randomAlphaOfLength(10)); String fileName = randomAlphaOfLength(10); fileName = fileName + DELIMITER + '2'; @@ -150,15 +151,18 @@ public void testGetAsyncIndexMetadataReadAction_Success() throws Exception { TestCapturingListener listener = new TestCapturingListener<>(); CountDownLatch latch = new CountDownLatch(1); - remoteIndexMetadataManager.getAsyncIndexMetadataReadAction("cluster-uuid", fileName, new LatchedActionListener<>(listener, latch)) - .run(); + remoteIndexMetadataManager.readAsync( + INDEX, + new RemoteIndexMetadata(fileName, "cluster-uuid", compressor, null), + new LatchedActionListener<>(listener, latch) + ); latch.await(); assertNull(listener.getFailure()); assertNotNull(listener.getResult()); assertEquals(indexMetadata, listener.getResult().getObj()); } - public void testGetAsyncIndexMetadataReadAction_IOFailure() throws Exception { + public void testGetAsyncReadRunnable_IOFailure() throws Exception { String fileName = randomAlphaOfLength(10); fileName = fileName + DELIMITER + '2'; Exception exception = new IOException("testing failure"); @@ -166,12 +170,16 @@ public void testGetAsyncIndexMetadataReadAction_IOFailure() throws Exception { TestCapturingListener listener = new TestCapturingListener<>(); CountDownLatch latch = new CountDownLatch(1); - remoteIndexMetadataManager.getAsyncIndexMetadataReadAction("cluster-uuid", fileName, new LatchedActionListener<>(listener, latch)) - .run(); + remoteIndexMetadataManager.readAsync( + INDEX, + new 
RemoteIndexMetadata(fileName, "cluster-uuid", compressor, null), + new LatchedActionListener<>(listener, latch) + ); latch.await(); assertNull(listener.getResult()); assertNotNull(listener.getFailure()); - assertEquals(exception, listener.getFailure()); + assertEquals(exception, listener.getFailure().getCause()); + assertTrue(listener.getFailure() instanceof RemoteStateTransferException); } private IndexMetadata getIndexMetadata(String name, @Nullable Boolean writeIndex, String... aliases) { From 0040f4b766459610fae3a0342f26d8f78735778e Mon Sep 17 00:00:00 2001 From: Pranshu Shukla <55992439+Pranshu-S@users.noreply.github.com> Date: Mon, 22 Jul 2024 17:09:56 +0530 Subject: [PATCH 089/167] =?UTF-8?q?Optimise=20TransportNodesAction=20to=20?= =?UTF-8?q?not=20send=20DiscoveryNodes=20for=20NodeStat=E2=80=A6=20(#14749?= =?UTF-8?q?)?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * Optimize TransportNodesAction to not send DiscoveryNodes for NodeStats, NodesInfo and ClusterStats call Signed-off-by: Pranshu Shukla --- CHANGELOG.md | 1 + .../node/info/TransportNodesInfoAction.java | 2 +- .../node/stats/TransportNodesStatsAction.java | 2 +- .../stats/TransportClusterStatsAction.java | 2 +- .../support/nodes/BaseNodesRequest.java | 16 ++ .../support/nodes/TransportNodesAction.java | 12 +- .../admin/cluster/RestClusterStatsAction.java | 1 + .../admin/cluster/RestNodesInfoAction.java | 2 +- .../admin/cluster/RestNodesStatsAction.java | 1 + .../rest/action/cat/RestNodesAction.java | 2 + .../action/RestStatsActionTests.java | 59 +++++++ .../TransportClusterStatsActionTests.java | 165 ++++++++++++++++++ .../nodes/TransportNodesActionTests.java | 13 +- .../nodes/TransportNodesInfoActionTests.java | 131 ++++++++++++++ .../nodes/TransportNodesStatsActionTests.java | 130 ++++++++++++++ 15 files changed, 528 insertions(+), 11 deletions(-) create mode 100644 server/src/test/java/org/opensearch/action/RestStatsActionTests.java create mode 100644 server/src/test/java/org/opensearch/action/support/nodes/TransportClusterStatsActionTests.java create mode 100644 server/src/test/java/org/opensearch/action/support/nodes/TransportNodesInfoActionTests.java create mode 100644 server/src/test/java/org/opensearch/action/support/nodes/TransportNodesStatsActionTests.java diff --git a/CHANGELOG.md b/CHANGELOG.md index 29e70c5026bb8..ab0c80e37e14c 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -21,6 +21,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), - Add Plugin interface for loading application based configuration templates (([#14659](https://github.com/opensearch-project/OpenSearch/issues/14659))) - Refactor remote-routing-table service inline with remote state interfaces([#14668](https://github.com/opensearch-project/OpenSearch/pull/14668)) - Add prefix mode verification setting for repository verification (([#14790](https://github.com/opensearch-project/OpenSearch/pull/14790))) +- Optimize TransportNodesAction to not send DiscoveryNodes for NodeStats, NodesInfo and ClusterStats call ([14749](https://github.com/opensearch-project/OpenSearch/pull/14749)) ### Dependencies - Bump `org.gradle.test-retry` from 1.5.8 to 1.5.9 ([#13442](https://github.com/opensearch-project/OpenSearch/pull/13442)) diff --git a/server/src/main/java/org/opensearch/action/admin/cluster/node/info/TransportNodesInfoAction.java b/server/src/main/java/org/opensearch/action/admin/cluster/node/info/TransportNodesInfoAction.java index 
2c4f8522a5a5c..dda54cce334ec 100644 --- a/server/src/main/java/org/opensearch/action/admin/cluster/node/info/TransportNodesInfoAction.java +++ b/server/src/main/java/org/opensearch/action/admin/cluster/node/info/TransportNodesInfoAction.java @@ -129,7 +129,7 @@ protected NodeInfo nodeOperation(NodeInfoRequest nodeRequest) { */ public static class NodeInfoRequest extends TransportRequest { - NodesInfoRequest request; + protected NodesInfoRequest request; public NodeInfoRequest(StreamInput in) throws IOException { super(in); diff --git a/server/src/main/java/org/opensearch/action/admin/cluster/node/stats/TransportNodesStatsAction.java b/server/src/main/java/org/opensearch/action/admin/cluster/node/stats/TransportNodesStatsAction.java index 2e93e5e7841cb..2c808adc97c7a 100644 --- a/server/src/main/java/org/opensearch/action/admin/cluster/node/stats/TransportNodesStatsAction.java +++ b/server/src/main/java/org/opensearch/action/admin/cluster/node/stats/TransportNodesStatsAction.java @@ -140,7 +140,7 @@ protected NodeStats nodeOperation(NodeStatsRequest nodeStatsRequest) { */ public static class NodeStatsRequest extends TransportRequest { - NodesStatsRequest request; + protected NodesStatsRequest request; public NodeStatsRequest(StreamInput in) throws IOException { super(in); diff --git a/server/src/main/java/org/opensearch/action/admin/cluster/stats/TransportClusterStatsAction.java b/server/src/main/java/org/opensearch/action/admin/cluster/stats/TransportClusterStatsAction.java index e4f483f796f44..c7d03596a2a36 100644 --- a/server/src/main/java/org/opensearch/action/admin/cluster/stats/TransportClusterStatsAction.java +++ b/server/src/main/java/org/opensearch/action/admin/cluster/stats/TransportClusterStatsAction.java @@ -223,7 +223,7 @@ protected ClusterStatsNodeResponse nodeOperation(ClusterStatsNodeRequest nodeReq */ public static class ClusterStatsNodeRequest extends TransportRequest { - ClusterStatsRequest request; + protected ClusterStatsRequest request; public ClusterStatsNodeRequest(StreamInput in) throws IOException { super(in); diff --git a/server/src/main/java/org/opensearch/action/support/nodes/BaseNodesRequest.java b/server/src/main/java/org/opensearch/action/support/nodes/BaseNodesRequest.java index 4d54ce51c923c..a4f6d8afeaf38 100644 --- a/server/src/main/java/org/opensearch/action/support/nodes/BaseNodesRequest.java +++ b/server/src/main/java/org/opensearch/action/support/nodes/BaseNodesRequest.java @@ -65,6 +65,14 @@ public abstract class BaseNodesRequest<Request extends BaseNodesRequest<Request>> * will be ignored and this will be used. * */ private DiscoveryNode[] concreteNodes; + + /** + * Since we do not use the discovery nodes coming from the request in all code paths following a request extended off from + * BaseNodesRequest, we do not require them to be sent around to all nodes. + * + * The default behavior is `true`, but it can be explicitly disabled in requests that do not require the discovery nodes. 
+ */ + private boolean includeDiscoveryNodes = true; private final TimeValue DEFAULT_TIMEOUT_SECS = TimeValue.timeValueSeconds(30); private TimeValue timeout; @@ -119,6 +127,14 @@ public void setConcreteNodes(DiscoveryNode[] concreteNodes) { this.concreteNodes = concreteNodes; } + public void setIncludeDiscoveryNodes(boolean value) { + includeDiscoveryNodes = value; + } + + public boolean getIncludeDiscoveryNodes() { + return includeDiscoveryNodes; + } + @Override public ActionRequestValidationException validate() { return null; diff --git a/server/src/main/java/org/opensearch/action/support/nodes/TransportNodesAction.java b/server/src/main/java/org/opensearch/action/support/nodes/TransportNodesAction.java index 9a1a28dd70636..3acd12f632e0f 100644 --- a/server/src/main/java/org/opensearch/action/support/nodes/TransportNodesAction.java +++ b/server/src/main/java/org/opensearch/action/support/nodes/TransportNodesAction.java @@ -226,6 +226,7 @@ class AsyncAction { private final NodesRequest request; private final ActionListener listener; private final AtomicReferenceArray responses; + private final DiscoveryNode[] concreteNodes; private final AtomicInteger counter = new AtomicInteger(); private final Task task; @@ -238,10 +239,18 @@ class AsyncAction { assert request.concreteNodes() != null; } this.responses = new AtomicReferenceArray<>(request.concreteNodes().length); + this.concreteNodes = request.concreteNodes(); + + if (request.getIncludeDiscoveryNodes() == false) { + // As we transfer ownership of the discovery nodes used to route the request into the AsyncAction class, we + // remove the list of DiscoveryNodes from the request itself. This reduces the request payload and avoids + // keeping redundant copies of the concrete nodes in memory. + request.setConcreteNodes(null); + } } void start() { - final DiscoveryNode[] nodes = request.concreteNodes(); + final DiscoveryNode[] nodes = this.concreteNodes; if (nodes.length == 0) { // nothing to notify threadPool.generic().execute(() -> listener.onResponse(newResponse(request, responses))); @@ -260,7 +269,6 @@ void start() { if (task != null) { nodeRequest.setParentTask(clusterService.localNode().getId(), task.getId()); } - transportService.sendRequest( node, getTransportNodeAction(node), diff --git a/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestClusterStatsAction.java b/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestClusterStatsAction.java index 0766e838210fa..913db3c81e951 100644 --- a/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestClusterStatsAction.java +++ b/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestClusterStatsAction.java @@ -66,6 +66,7 @@ public String getName() { public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { ClusterStatsRequest clusterStatsRequest = new ClusterStatsRequest().nodesIds(request.paramAsStringArray("nodeId", null)); clusterStatsRequest.timeout(request.param("timeout")); + clusterStatsRequest.setIncludeDiscoveryNodes(false); return channel -> client.admin().cluster().clusterStats(clusterStatsRequest, new NodesResponseRestListener<>(channel)); } diff --git a/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestNodesInfoAction.java b/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestNodesInfoAction.java index 3b83bf9d6f68c..4ac51933ea382 100644 --- a/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestNodesInfoAction.java +++ 
b/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestNodesInfoAction.java @@ -88,7 +88,7 @@ public RestChannelConsumer prepareRequest(final RestRequest request, final NodeC final NodesInfoRequest nodesInfoRequest = prepareRequest(request); nodesInfoRequest.timeout(request.param("timeout")); settingsFilter.addFilterSettingParams(request); - + nodesInfoRequest.setIncludeDiscoveryNodes(false); return channel -> client.admin().cluster().nodesInfo(nodesInfoRequest, new NodesResponseRestListener<>(channel)); } diff --git a/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestNodesStatsAction.java b/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestNodesStatsAction.java index 267bfde576dec..ed9c0b171aa56 100644 --- a/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestNodesStatsAction.java +++ b/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestNodesStatsAction.java @@ -232,6 +232,7 @@ public RestChannelConsumer prepareRequest(final RestRequest request, final NodeC // If no levels are passed in this results in an empty array. String[] levels = Strings.splitStringByCommaToArray(request.param("level")); nodesStatsRequest.indices().setLevels(levels); + nodesStatsRequest.setIncludeDiscoveryNodes(false); return channel -> client.admin().cluster().nodesStats(nodesStatsRequest, new NodesResponseRestListener<>(channel)); } diff --git a/server/src/main/java/org/opensearch/rest/action/cat/RestNodesAction.java b/server/src/main/java/org/opensearch/rest/action/cat/RestNodesAction.java index bffb50cc63401..0330fe627ccd0 100644 --- a/server/src/main/java/org/opensearch/rest/action/cat/RestNodesAction.java +++ b/server/src/main/java/org/opensearch/rest/action/cat/RestNodesAction.java @@ -125,6 +125,7 @@ public RestChannelConsumer doCatRequest(final RestRequest request, final NodeCli public void processResponse(final ClusterStateResponse clusterStateResponse) { NodesInfoRequest nodesInfoRequest = new NodesInfoRequest(); nodesInfoRequest.timeout(request.param("timeout")); + nodesInfoRequest.setIncludeDiscoveryNodes(false); nodesInfoRequest.clear() .addMetrics( NodesInfoRequest.Metric.JVM.metricName(), @@ -137,6 +138,7 @@ public void processResponse(final ClusterStateResponse clusterStateResponse) { public void processResponse(final NodesInfoResponse nodesInfoResponse) { NodesStatsRequest nodesStatsRequest = new NodesStatsRequest(); nodesStatsRequest.timeout(request.param("timeout")); + nodesStatsRequest.setIncludeDiscoveryNodes(false); nodesStatsRequest.clear() .indices(true) .addMetrics( diff --git a/server/src/test/java/org/opensearch/action/RestStatsActionTests.java b/server/src/test/java/org/opensearch/action/RestStatsActionTests.java new file mode 100644 index 0000000000000..9b8a0640ee343 --- /dev/null +++ b/server/src/test/java/org/opensearch/action/RestStatsActionTests.java @@ -0,0 +1,59 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. 
+ */ + +package org.opensearch.action; + +import org.opensearch.client.node.NodeClient; +import org.opensearch.common.settings.Settings; +import org.opensearch.common.settings.SettingsFilter; +import org.opensearch.rest.action.admin.cluster.RestClusterStatsAction; +import org.opensearch.rest.action.admin.cluster.RestNodesInfoAction; +import org.opensearch.rest.action.admin.cluster.RestNodesStatsAction; +import org.opensearch.test.OpenSearchTestCase; +import org.opensearch.test.rest.FakeRestRequest; +import org.opensearch.threadpool.TestThreadPool; +import org.junit.After; + +import java.util.Collections; + +public class RestStatsActionTests extends OpenSearchTestCase { + private final TestThreadPool threadPool = new TestThreadPool(RestStatsActionTests.class.getName()); + private final NodeClient client = new NodeClient(Settings.EMPTY, threadPool); + + @After + public void terminateThreadPool() { + terminate(threadPool); + } + + public void testClusterStatsActionPrepareRequestNoError() { + RestClusterStatsAction action = new RestClusterStatsAction(); + try { + action.prepareRequest(new FakeRestRequest(), client); + } catch (Throwable t) { + fail(t.getMessage()); + } + } + + public void testNodesStatsActionPrepareRequestNoError() { + RestNodesStatsAction action = new RestNodesStatsAction(); + try { + action.prepareRequest(new FakeRestRequest(), client); + } catch (Throwable t) { + fail(t.getMessage()); + } + } + + public void testNodesInfoActionPrepareRequestNoError() { + RestNodesInfoAction action = new RestNodesInfoAction(new SettingsFilter(Collections.singleton("foo.filtered"))); + try { + action.prepareRequest(new FakeRestRequest(), client); + } catch (Throwable t) { + fail(t.getMessage()); + } + } +} diff --git a/server/src/test/java/org/opensearch/action/support/nodes/TransportClusterStatsActionTests.java b/server/src/test/java/org/opensearch/action/support/nodes/TransportClusterStatsActionTests.java new file mode 100644 index 0000000000000..f8e14b477b8ef --- /dev/null +++ b/server/src/test/java/org/opensearch/action/support/nodes/TransportClusterStatsActionTests.java @@ -0,0 +1,165 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. 
+ */ + +package org.opensearch.action.support.nodes; + +import org.opensearch.action.admin.cluster.node.stats.NodesStatsRequest; +import org.opensearch.action.admin.cluster.stats.ClusterStatsRequest; +import org.opensearch.action.admin.cluster.stats.TransportClusterStatsAction; +import org.opensearch.action.support.ActionFilters; +import org.opensearch.action.support.PlainActionFuture; +import org.opensearch.cluster.node.DiscoveryNode; +import org.opensearch.cluster.service.ClusterService; +import org.opensearch.common.io.stream.BytesStreamOutput; +import org.opensearch.core.common.io.stream.StreamInput; +import org.opensearch.indices.IndicesService; +import org.opensearch.node.NodeService; +import org.opensearch.test.transport.CapturingTransport; +import org.opensearch.threadpool.ThreadPool; +import org.opensearch.transport.TransportService; + +import java.io.IOException; +import java.util.ArrayList; +import java.util.Collection; +import java.util.Collections; +import java.util.HashMap; +import java.util.List; +import java.util.Map; + +public class TransportClusterStatsActionTests extends TransportNodesActionTests { + + /** + * By default, we send discovery nodes list to each request that is sent across from the coordinator node. This + * behavior is asserted in this test. + */ + public void testClusterStatsActionWithRetentionOfDiscoveryNodesList() { + ClusterStatsRequest request = new ClusterStatsRequest(); + request.setIncludeDiscoveryNodes(true); + Map> combinedSentRequest = performNodesInfoAction(request); + + assertNotNull(combinedSentRequest); + combinedSentRequest.forEach((node, capturedRequestList) -> { + assertNotNull(capturedRequestList); + capturedRequestList.forEach(sentRequest -> { + assertNotNull(sentRequest.getDiscoveryNodes()); + assertEquals(sentRequest.getDiscoveryNodes().length, clusterService.state().nodes().getSize()); + }); + }); + } + + public void testClusterStatsActionWithPreFilledConcreteNodesAndWithRetentionOfDiscoveryNodesList() { + ClusterStatsRequest request = new ClusterStatsRequest(); + Collection discoveryNodes = clusterService.state().getNodes().getNodes().values(); + request.setConcreteNodes(discoveryNodes.toArray(DiscoveryNode[]::new)); + Map> combinedSentRequest = performNodesInfoAction(request); + + assertNotNull(combinedSentRequest); + combinedSentRequest.forEach((node, capturedRequestList) -> { + assertNotNull(capturedRequestList); + capturedRequestList.forEach(sentRequest -> { + assertNotNull(sentRequest.getDiscoveryNodes()); + assertEquals(sentRequest.getDiscoveryNodes().length, clusterService.state().nodes().getSize()); + }); + }); + } + + /** + * In the optimized ClusterStats Request, we do not send the DiscoveryNodes List to each node. This behavior is + * asserted in this test. 
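+ * + * Note that the coordinator still fans out to every resolved node: AsyncAction copies request.concreteNodes() into its own + * field before clearing it on the request, so only the serialized payload omits the DiscoveryNodes list.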
+ */ + public void testClusterStatsActionWithoutRetentionOfDiscoveryNodesList() { + ClusterStatsRequest request = new ClusterStatsRequest(); + request.setIncludeDiscoveryNodes(false); + Map> combinedSentRequest = performNodesInfoAction(request); + + assertNotNull(combinedSentRequest); + combinedSentRequest.forEach((node, capturedRequestList) -> { + assertNotNull(capturedRequestList); + capturedRequestList.forEach(sentRequest -> { assertNull(sentRequest.getDiscoveryNodes()); }); + }); + } + + public void testClusterStatsActionWithPreFilledConcreteNodesAndWithoutRetentionOfDiscoveryNodesList() { + ClusterStatsRequest request = new ClusterStatsRequest(); + Collection discoveryNodes = clusterService.state().getNodes().getNodes().values(); + request.setConcreteNodes(discoveryNodes.toArray(DiscoveryNode[]::new)); + request.setIncludeDiscoveryNodes(false); + Map> combinedSentRequest = performNodesInfoAction(request); + + assertNotNull(combinedSentRequest); + combinedSentRequest.forEach((node, capturedRequestList) -> { + assertNotNull(capturedRequestList); + capturedRequestList.forEach(sentRequest -> { assertNull(sentRequest.getDiscoveryNodes()); }); + }); + } + + private Map> performNodesInfoAction(ClusterStatsRequest request) { + TransportNodesAction action = getTestTransportClusterStatsAction(); + PlainActionFuture listener = new PlainActionFuture<>(); + action.new AsyncAction(null, request, listener).start(); + Map> capturedRequests = transport.getCapturedRequestsByTargetNodeAndClear(); + Map> combinedSentRequest = new HashMap<>(); + + capturedRequests.forEach((node, capturedRequestList) -> { + List sentRequestList = new ArrayList<>(); + + capturedRequestList.forEach(preSentRequest -> { + BytesStreamOutput out = new BytesStreamOutput(); + try { + TransportClusterStatsAction.ClusterStatsNodeRequest clusterStatsNodeRequestFromCoordinator = + (TransportClusterStatsAction.ClusterStatsNodeRequest) preSentRequest.request; + clusterStatsNodeRequestFromCoordinator.writeTo(out); + StreamInput in = out.bytes().streamInput(); + MockClusterStatsNodeRequest mockClusterStatsNodeRequest = new MockClusterStatsNodeRequest(in); + sentRequestList.add(mockClusterStatsNodeRequest); + } catch (IOException e) { + throw new RuntimeException(e); + } + }); + + combinedSentRequest.put(node, sentRequestList); + }); + + return combinedSentRequest; + } + + private TestTransportClusterStatsAction getTestTransportClusterStatsAction() { + return new TestTransportClusterStatsAction( + THREAD_POOL, + clusterService, + transportService, + nodeService, + indicesService, + new ActionFilters(Collections.emptySet()) + ); + } + + private static class TestTransportClusterStatsAction extends TransportClusterStatsAction { + public TestTransportClusterStatsAction( + ThreadPool threadPool, + ClusterService clusterService, + TransportService transportService, + NodeService nodeService, + IndicesService indicesService, + ActionFilters actionFilters + ) { + super(threadPool, clusterService, transportService, nodeService, indicesService, actionFilters); + } + } + + private static class MockClusterStatsNodeRequest extends TransportClusterStatsAction.ClusterStatsNodeRequest { + + public MockClusterStatsNodeRequest(StreamInput in) throws IOException { + super(in); + } + + public DiscoveryNode[] getDiscoveryNodes() { + return this.request.concreteNodes(); + } + } +} diff --git a/server/src/test/java/org/opensearch/action/support/nodes/TransportNodesActionTests.java 
b/server/src/test/java/org/opensearch/action/support/nodes/TransportNodesActionTests.java index 445934b0ccdfd..7e968aa8fb199 100644 --- a/server/src/test/java/org/opensearch/action/support/nodes/TransportNodesActionTests.java +++ b/server/src/test/java/org/opensearch/action/support/nodes/TransportNodesActionTests.java @@ -46,6 +46,8 @@ import org.opensearch.core.common.io.stream.StreamInput; import org.opensearch.core.common.io.stream.StreamOutput; import org.opensearch.core.common.io.stream.Writeable; +import org.opensearch.indices.IndicesService; +import org.opensearch.node.NodeService; import org.opensearch.telemetry.tracing.noop.NoopTracer; import org.opensearch.test.OpenSearchTestCase; import org.opensearch.test.transport.CapturingTransport; @@ -76,11 +78,12 @@ public class TransportNodesActionTests extends OpenSearchTestCase { - private static ThreadPool THREAD_POOL; - - private ClusterService clusterService; - private CapturingTransport transport; - private TransportService transportService; + protected static ThreadPool THREAD_POOL; + protected ClusterService clusterService; + protected CapturingTransport transport; + protected TransportService transportService; + protected NodeService nodeService; + protected IndicesService indicesService; public void testRequestIsSentToEachNode() throws Exception { TransportNodesAction action = getTestTransportNodesAction(); diff --git a/server/src/test/java/org/opensearch/action/support/nodes/TransportNodesInfoActionTests.java b/server/src/test/java/org/opensearch/action/support/nodes/TransportNodesInfoActionTests.java new file mode 100644 index 0000000000000..e9e09d0dbbbf9 --- /dev/null +++ b/server/src/test/java/org/opensearch/action/support/nodes/TransportNodesInfoActionTests.java @@ -0,0 +1,131 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +package org.opensearch.action.support.nodes; + +import org.opensearch.action.admin.cluster.node.info.NodesInfoRequest; +import org.opensearch.action.admin.cluster.node.info.TransportNodesInfoAction; +import org.opensearch.action.admin.cluster.node.stats.NodesStatsRequest; +import org.opensearch.action.support.ActionFilters; +import org.opensearch.action.support.PlainActionFuture; +import org.opensearch.cluster.node.DiscoveryNode; +import org.opensearch.cluster.service.ClusterService; +import org.opensearch.common.io.stream.BytesStreamOutput; +import org.opensearch.core.common.io.stream.StreamInput; +import org.opensearch.node.NodeService; +import org.opensearch.test.transport.CapturingTransport; +import org.opensearch.threadpool.ThreadPool; +import org.opensearch.transport.TransportService; + +import java.io.IOException; +import java.util.ArrayList; +import java.util.Collections; +import java.util.HashMap; +import java.util.List; +import java.util.Map; + +public class TransportNodesInfoActionTests extends TransportNodesActionTests { + + /** + * By default, we send discovery nodes list to each request that is sent across from the coordinator node. This + * behavior is asserted in this test. 
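+ * + * The performNodesInfoAction helper below round-trips every captured request through writeTo and a StreamInput before + * asserting, so these tests inspect what would actually cross the wire rather than the in-memory request object.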
+ */ + public void testNodesInfoActionWithRetentionOfDiscoveryNodesList() { + NodesInfoRequest request = new NodesInfoRequest(); + request.setIncludeDiscoveryNodes(true); + Map> combinedSentRequest = performNodesInfoAction(request); + + assertNotNull(combinedSentRequest); + combinedSentRequest.forEach((node, capturedRequestList) -> { + assertNotNull(capturedRequestList); + capturedRequestList.forEach(sentRequest -> { + assertNotNull(sentRequest.getDiscoveryNodes()); + assertEquals(sentRequest.getDiscoveryNodes().length, clusterService.state().nodes().getSize()); + }); + }); + } + + /** + * In the optimized NodesInfo request, we do not send the DiscoveryNodes list to each node. This behavior is + * asserted in this test. + */ + public void testNodesInfoActionWithoutRetentionOfDiscoveryNodesList() { + NodesInfoRequest request = new NodesInfoRequest(); + request.setIncludeDiscoveryNodes(false); + Map> combinedSentRequest = performNodesInfoAction(request); + + assertNotNull(combinedSentRequest); + combinedSentRequest.forEach((node, capturedRequestList) -> { + assertNotNull(capturedRequestList); + capturedRequestList.forEach(sentRequest -> { assertNull(sentRequest.getDiscoveryNodes()); }); + }); + } + + private Map> performNodesInfoAction(NodesInfoRequest request) { + TransportNodesAction action = getTestTransportNodesInfoAction(); + PlainActionFuture listener = new PlainActionFuture<>(); + action.new AsyncAction(null, request, listener).start(); + Map> capturedRequests = transport.getCapturedRequestsByTargetNodeAndClear(); + Map> combinedSentRequest = new HashMap<>(); + + capturedRequests.forEach((node, capturedRequestList) -> { + List sentRequestList = new ArrayList<>(); + + capturedRequestList.forEach(preSentRequest -> { + BytesStreamOutput out = new BytesStreamOutput(); + try { + TransportNodesInfoAction.NodeInfoRequest nodesInfoRequestFromCoordinator = + (TransportNodesInfoAction.NodeInfoRequest) preSentRequest.request; + nodesInfoRequestFromCoordinator.writeTo(out); + StreamInput in = out.bytes().streamInput(); + MockNodesInfoRequest mockNodesInfoRequest = new MockNodesInfoRequest(in); + sentRequestList.add(mockNodesInfoRequest); + } catch (IOException e) { + throw new RuntimeException(e); + } + }); + + combinedSentRequest.put(node, sentRequestList); + }); + + return combinedSentRequest; + } + + private TestTransportNodesInfoAction getTestTransportNodesInfoAction() { + return new TestTransportNodesInfoAction( + THREAD_POOL, + clusterService, + transportService, + nodeService, + new ActionFilters(Collections.emptySet()) + ); + } + + private static class TestTransportNodesInfoAction extends TransportNodesInfoAction { + public TestTransportNodesInfoAction( + ThreadPool threadPool, + ClusterService clusterService, + TransportService transportService, + NodeService nodeService, + ActionFilters actionFilters + ) { + super(threadPool, clusterService, transportService, nodeService, actionFilters); + } + } + + private static class MockNodesInfoRequest extends TransportNodesInfoAction.NodeInfoRequest { + + public MockNodesInfoRequest(StreamInput in) throws IOException { + super(in); + } + + public DiscoveryNode[] getDiscoveryNodes() { + return this.request.concreteNodes(); + } + } +} diff --git a/server/src/test/java/org/opensearch/action/support/nodes/TransportNodesStatsActionTests.java b/server/src/test/java/org/opensearch/action/support/nodes/TransportNodesStatsActionTests.java new file mode 100644 index 0000000000000..c7c420e353e1a --- /dev/null +++ 
b/server/src/test/java/org/opensearch/action/support/nodes/TransportNodesStatsActionTests.java @@ -0,0 +1,130 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +package org.opensearch.action.support.nodes; + +import org.opensearch.action.admin.cluster.node.stats.NodesStatsRequest; +import org.opensearch.action.admin.cluster.node.stats.TransportNodesStatsAction; +import org.opensearch.action.support.ActionFilters; +import org.opensearch.action.support.PlainActionFuture; +import org.opensearch.cluster.node.DiscoveryNode; +import org.opensearch.cluster.service.ClusterService; +import org.opensearch.common.io.stream.BytesStreamOutput; +import org.opensearch.core.common.io.stream.StreamInput; +import org.opensearch.node.NodeService; +import org.opensearch.test.transport.CapturingTransport; +import org.opensearch.threadpool.ThreadPool; +import org.opensearch.transport.TransportService; + +import java.io.IOException; +import java.util.ArrayList; +import java.util.Collections; +import java.util.HashMap; +import java.util.List; +import java.util.Map; + +public class TransportNodesStatsActionTests extends TransportNodesActionTests { + + /** + * By default, we send discovery nodes list to each request that is sent across from the coordinator node. This + * behavior is asserted in this test. + */ + public void testNodesStatsActionWithRetentionOfDiscoveryNodesList() { + NodesStatsRequest request = new NodesStatsRequest(); + request.setIncludeDiscoveryNodes(true); + Map> combinedSentRequest = performNodesStatsAction(request); + + assertNotNull(combinedSentRequest); + combinedSentRequest.forEach((node, capturedRequestList) -> { + assertNotNull(capturedRequestList); + capturedRequestList.forEach(sentRequest -> { + assertNotNull(sentRequest.getDiscoveryNodes()); + assertEquals(sentRequest.getDiscoveryNodes().length, clusterService.state().nodes().getSize()); + }); + }); + } + + /** + * In the optimized NodesStats request, we do not send the DiscoveryNodes list to each node. This + * behavior is asserted in this test. 
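+ * + * With includeDiscoveryNodes set to false, AsyncAction nulls out concreteNodes on the request before serialization, + * which is why MockNodeStatsRequest#getDiscoveryNodes() is expected to be null below.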
+ */ + public void testNodesStatsActionWithoutRetentionOfDiscoveryNodesList() { + NodesStatsRequest request = new NodesStatsRequest(); + request.setIncludeDiscoveryNodes(false); + Map> combinedSentRequest = performNodesStatsAction(request); + + assertNotNull(combinedSentRequest); + combinedSentRequest.forEach((node, capturedRequestList) -> { + assertNotNull(capturedRequestList); + capturedRequestList.forEach(sentRequest -> { assertNull(sentRequest.getDiscoveryNodes()); }); + }); + } + + private Map> performNodesStatsAction(NodesStatsRequest request) { + TransportNodesAction action = getTestTransportNodesStatsAction(); + PlainActionFuture listener = new PlainActionFuture<>(); + action.new AsyncAction(null, request, listener).start(); + Map> capturedRequests = transport.getCapturedRequestsByTargetNodeAndClear(); + Map> combinedSentRequest = new HashMap<>(); + + capturedRequests.forEach((node, capturedRequestList) -> { + List sentRequestList = new ArrayList<>(); + + capturedRequestList.forEach(preSentRequest -> { + BytesStreamOutput out = new BytesStreamOutput(); + try { + TransportNodesStatsAction.NodeStatsRequest nodesStatsRequestFromCoordinator = + (TransportNodesStatsAction.NodeStatsRequest) preSentRequest.request; + nodesStatsRequestFromCoordinator.writeTo(out); + StreamInput in = out.bytes().streamInput(); + MockNodeStatsRequest nodesStatsRequest = new MockNodeStatsRequest(in); + sentRequestList.add(nodesStatsRequest); + } catch (IOException e) { + throw new RuntimeException(e); + } + }); + + combinedSentRequest.put(node, sentRequestList); + }); + + return combinedSentRequest; + } + + private TestTransportNodesStatsAction getTestTransportNodesStatsAction() { + return new TestTransportNodesStatsAction( + THREAD_POOL, + clusterService, + transportService, + nodeService, + new ActionFilters(Collections.emptySet()) + ); + } + + private static class TestTransportNodesStatsAction extends TransportNodesStatsAction { + public TestTransportNodesStatsAction( + ThreadPool threadPool, + ClusterService clusterService, + TransportService transportService, + NodeService nodeService, + ActionFilters actionFilters + ) { + super(threadPool, clusterService, transportService, nodeService, actionFilters); + } + } + + private static class MockNodeStatsRequest extends TransportNodesStatsAction.NodeStatsRequest { + + public MockNodeStatsRequest(StreamInput in) throws IOException { + super(in); + } + + public DiscoveryNode[] getDiscoveryNodes() { + return this.request.concreteNodes(); + } + } +} From ceb60d0720c55433dd3135314618129fc595c57a Mon Sep 17 00:00:00 2001 From: rajiv-kv <157019998+rajiv-kv@users.noreply.github.com> Date: Mon, 22 Jul 2024 20:54:37 +0530 Subject: [PATCH 090/167] Enabling term version check on local state for all ClusterManager Read Transport Actions (#14273) * enabling term version check on local state for all admin read actions Signed-off-by: Rajiv Kumar Vaidyanathan --- CHANGELOG.md | 1 + .../opensearch/action/IndicesRequestIT.java | 29 ++- .../AdmissionForClusterManagerIT.java | 34 ++- .../TransportGetDecommissionStateAction.java | 3 +- .../health/TransportClusterHealthAction.java | 5 + .../get/TransportGetRepositoriesAction.java | 3 +- .../TransportClusterSearchShardsAction.java | 3 +- .../TransportGetWeightedRoutingAction.java | 3 +- .../state/TransportClusterStateAction.java | 6 +- .../TransportGetStoredScriptAction.java | 3 +- .../TransportPendingClusterTasksAction.java | 5 + .../alias/get/TransportGetAliasesAction.java | 3 +- .../indices/TransportIndicesExistsAction.java | 3 +- 
.../TransportIndicesShardStoresAction.java | 3 +- .../TransportGetComponentTemplateAction.java | 3 +- ...sportGetComposableIndexTemplateAction.java | 3 +- .../get/TransportGetIndexTemplatesAction.java | 3 +- .../ingest/GetPipelineTransportAction.java | 3 +- .../GetSearchPipelineTransportAction.java | 3 +- .../TransportClusterManagerNodeAction.java | 60 +++-- ...TransportClusterManagerNodeReadAction.java | 23 +- .../info/TransportClusterInfoAction.java | 1 + .../mapping/get/GetMappingsActionTests.java | 227 ++++++++++++++++++ 23 files changed, 382 insertions(+), 48 deletions(-) create mode 100644 server/src/test/java/org/opensearch/action/admin/indices/mapping/get/GetMappingsActionTests.java diff --git a/CHANGELOG.md b/CHANGELOG.md index ab0c80e37e14c..e77b183601674 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -22,6 +22,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), - Refactor remote-routing-table service inline with remote state interfaces([#14668](https://github.com/opensearch-project/OpenSearch/pull/14668)) - Add prefix mode verification setting for repository verification (([#14790](https://github.com/opensearch-project/OpenSearch/pull/14790))) - Optimize TransportNodesAction to not send DiscoveryNodes for NodeStats, NodesInfo and ClusterStats call ([14749](https://github.com/opensearch-project/OpenSearch/pull/14749)) +- Enabling term version check on local state for all ClusterManager Read Transport Actions ([#14273](https://github.com/opensearch-project/OpenSearch/pull/14273)) ### Dependencies - Bump `org.gradle.test-retry` from 1.5.8 to 1.5.9 ([#13442](https://github.com/opensearch-project/OpenSearch/pull/13442)) diff --git a/server/src/internalClusterTest/java/org/opensearch/action/IndicesRequestIT.java b/server/src/internalClusterTest/java/org/opensearch/action/IndicesRequestIT.java index 84d833569edcb..927a79d4884ef 100644 --- a/server/src/internalClusterTest/java/org/opensearch/action/IndicesRequestIT.java +++ b/server/src/internalClusterTest/java/org/opensearch/action/IndicesRequestIT.java @@ -84,6 +84,8 @@ import org.opensearch.action.search.SearchResponse; import org.opensearch.action.search.SearchTransportService; import org.opensearch.action.search.SearchType; +import org.opensearch.action.support.clustermanager.term.GetTermVersionAction; +import org.opensearch.action.support.clustermanager.term.GetTermVersionRequest; import org.opensearch.action.support.replication.TransportReplicationActionTests; import org.opensearch.action.termvectors.MultiTermVectorsAction; import org.opensearch.action.termvectors.MultiTermVectorsRequest; @@ -195,6 +197,7 @@ public void cleanUp() { } public void testGetFieldMappings() { + String getFieldMappingsShardAction = GetFieldMappingsAction.NAME + "[index][s]"; interceptTransportActions(getFieldMappingsShardAction); @@ -545,13 +548,14 @@ public void testDeleteIndex() { } public void testGetMappings() { - interceptTransportActions(GetMappingsAction.NAME); - + interceptTransportActions(GetTermVersionAction.NAME, GetMappingsAction.NAME); GetMappingsRequest getMappingsRequest = new GetMappingsRequest().indices(randomIndicesOrAliases()); internalCluster().coordOnlyNodeClient().admin().indices().getMappings(getMappingsRequest).actionGet(); clearInterceptedActions(); - assertSameIndices(getMappingsRequest, GetMappingsAction.NAME); + + assertActionInvocation(GetTermVersionAction.NAME, GetTermVersionRequest.class); + assertNoActionInvocation(GetMappingsAction.NAME); } public void 
testPutMapping() { @@ -565,8 +569,8 @@ public void testPutMapping() { } public void testGetSettings() { - interceptTransportActions(GetSettingsAction.NAME); + interceptTransportActions(GetSettingsAction.NAME); GetSettingsRequest getSettingsRequest = new GetSettingsRequest().indices(randomIndicesOrAliases()); internalCluster().coordOnlyNodeClient().admin().indices().getSettings(getSettingsRequest).actionGet(); @@ -662,6 +666,21 @@ private static void assertSameIndices(IndicesRequest originalRequest, boolean op } } + private static void assertActionInvocation(String action, Class requestClass) { + List requests = consumeTransportRequests(action); + assertFalse(requests.isEmpty()); + for (TransportRequest internalRequest : requests) { + assertTrue(internalRequest.getClass() == requestClass); + } + } + + private static void assertNoActionInvocation(String... actions) { + for (String action : actions) { + List requests = consumeTransportRequests(action); + assertTrue(requests.isEmpty()); + } + } + private static void assertIndicesSubset(List indices, String... actions) { // indices returned by each bulk shard request need to be a subset of the original indices for (String action : actions) { @@ -781,7 +800,6 @@ public List getTransportInterceptors( } private final Set actions = new HashSet<>(); - private final Map> requests = new HashMap<>(); @Override @@ -831,6 +849,7 @@ public void messageReceived(T request, TransportChannel channel, Task task) thro } } requestHandler.messageReceived(request, channel, task); + } } } diff --git a/server/src/internalClusterTest/java/org/opensearch/ratelimitting/admissioncontrol/AdmissionForClusterManagerIT.java b/server/src/internalClusterTest/java/org/opensearch/ratelimitting/admissioncontrol/AdmissionForClusterManagerIT.java index b9da5ffb86af0..e3a4216e772fb 100644 --- a/server/src/internalClusterTest/java/org/opensearch/ratelimitting/admissioncontrol/AdmissionForClusterManagerIT.java +++ b/server/src/internalClusterTest/java/org/opensearch/ratelimitting/admissioncontrol/AdmissionForClusterManagerIT.java @@ -12,7 +12,11 @@ import org.apache.logging.log4j.Logger; import org.opensearch.action.admin.indices.alias.get.GetAliasesRequest; import org.opensearch.action.admin.indices.alias.get.GetAliasesResponse; +import org.opensearch.action.support.clustermanager.term.GetTermVersionAction; +import org.opensearch.action.support.clustermanager.term.GetTermVersionResponse; import org.opensearch.client.node.NodeClient; +import org.opensearch.cluster.coordination.ClusterStateTermVersion; +import org.opensearch.cluster.service.ClusterService; import org.opensearch.common.settings.Settings; import org.opensearch.common.unit.TimeValue; import org.opensearch.core.concurrency.OpenSearchRejectedExecutionException; @@ -20,6 +24,7 @@ import org.opensearch.node.IoUsageStats; import org.opensearch.node.ResourceUsageCollectorService; import org.opensearch.node.resource.tracker.ResourceTrackerSettings; +import org.opensearch.plugins.Plugin; import org.opensearch.ratelimitting.admissioncontrol.controllers.CpuBasedAdmissionController; import org.opensearch.ratelimitting.admissioncontrol.enums.AdmissionControlActionType; import org.opensearch.ratelimitting.admissioncontrol.enums.AdmissionControlMode; @@ -29,9 +34,13 @@ import org.opensearch.rest.action.admin.indices.RestGetAliasesAction; import org.opensearch.test.OpenSearchIntegTestCase; import org.opensearch.test.rest.FakeRestRequest; +import org.opensearch.test.transport.MockTransportService; +import 
org.opensearch.transport.TransportService; import org.junit.Before; +import java.util.Collection; import java.util.HashMap; +import java.util.List; import java.util.Map; import java.util.concurrent.CountDownLatch; import java.util.concurrent.atomic.AtomicReference; @@ -62,6 +71,10 @@ public class AdmissionForClusterManagerIT extends OpenSearchIntegTestCase { .put(CLUSTER_ADMIN_CPU_USAGE_LIMIT.getKey(), 50) .build(); + protected Collection> nodePlugins() { + return List.of(MockTransportService.TestPlugin.class); + } + @Before public void init() { String clusterManagerNode = internalCluster().startClusterManagerOnlyNode( @@ -79,6 +92,25 @@ public void init() { // Enable admission control client().admin().cluster().prepareUpdateSettings().setTransientSettings(ENFORCE_ADMISSION_CONTROL).execute().actionGet(); + MockTransportService primaryService = (MockTransportService) internalCluster().getInstance( + TransportService.class, + clusterManagerNode + ); + + // Force always fetch from ClusterManager + ClusterService clusterService = internalCluster().clusterService(); + GetTermVersionResponse oosTerm = new GetTermVersionResponse( + new ClusterStateTermVersion( + clusterService.state().getClusterName(), + clusterService.state().metadata().clusterUUID(), + clusterService.state().term() - 1, + clusterService.state().version() - 1 + ) + ); + primaryService.addRequestHandlingBehavior( + GetTermVersionAction.NAME, + (handler, request, channel, task) -> channel.sendResponse(oosTerm) + ); } public void testAdmissionControlEnforced() throws Exception { @@ -86,8 +118,8 @@ public void testAdmissionControlEnforced() throws Exception { // Write API on ClusterManager assertAcked(prepareCreate("test").setMapping("field", "type=text").setAliases("{\"alias1\" : {}}")); - // Read API on ClusterManager + GetAliasesRequest aliasesRequest = new GetAliasesRequest(); aliasesRequest.aliases("alias1"); try { diff --git a/server/src/main/java/org/opensearch/action/admin/cluster/decommission/awareness/get/TransportGetDecommissionStateAction.java b/server/src/main/java/org/opensearch/action/admin/cluster/decommission/awareness/get/TransportGetDecommissionStateAction.java index 22feb4d99297a..c8a3be78a790e 100644 --- a/server/src/main/java/org/opensearch/action/admin/cluster/decommission/awareness/get/TransportGetDecommissionStateAction.java +++ b/server/src/main/java/org/opensearch/action/admin/cluster/decommission/awareness/get/TransportGetDecommissionStateAction.java @@ -48,7 +48,8 @@ public TransportGetDecommissionStateAction( threadPool, actionFilters, GetDecommissionStateRequest::new, - indexNameExpressionResolver + indexNameExpressionResolver, + true ); } diff --git a/server/src/main/java/org/opensearch/action/admin/cluster/health/TransportClusterHealthAction.java b/server/src/main/java/org/opensearch/action/admin/cluster/health/TransportClusterHealthAction.java index 1cc357a4c20f4..f69f462372888 100644 --- a/server/src/main/java/org/opensearch/action/admin/cluster/health/TransportClusterHealthAction.java +++ b/server/src/main/java/org/opensearch/action/admin/cluster/health/TransportClusterHealthAction.java @@ -534,4 +534,9 @@ private ClusterHealthResponse clusterHealth( pendingTaskTimeInQueue ); } + + @Override + protected boolean localExecuteSupportedByAction() { + return false; + } } diff --git a/server/src/main/java/org/opensearch/action/admin/cluster/repositories/get/TransportGetRepositoriesAction.java b/server/src/main/java/org/opensearch/action/admin/cluster/repositories/get/TransportGetRepositoriesAction.java 
index c7d784dbc96e7..c99b52dfe34f4 100644 --- a/server/src/main/java/org/opensearch/action/admin/cluster/repositories/get/TransportGetRepositoriesAction.java +++ b/server/src/main/java/org/opensearch/action/admin/cluster/repositories/get/TransportGetRepositoriesAction.java @@ -79,7 +79,8 @@ public TransportGetRepositoriesAction( threadPool, actionFilters, GetRepositoriesRequest::new, - indexNameExpressionResolver + indexNameExpressionResolver, + true ); } diff --git a/server/src/main/java/org/opensearch/action/admin/cluster/shards/TransportClusterSearchShardsAction.java b/server/src/main/java/org/opensearch/action/admin/cluster/shards/TransportClusterSearchShardsAction.java index a2a65b6400c97..83e104236f640 100644 --- a/server/src/main/java/org/opensearch/action/admin/cluster/shards/TransportClusterSearchShardsAction.java +++ b/server/src/main/java/org/opensearch/action/admin/cluster/shards/TransportClusterSearchShardsAction.java @@ -85,7 +85,8 @@ public TransportClusterSearchShardsAction( threadPool, actionFilters, ClusterSearchShardsRequest::new, - indexNameExpressionResolver + indexNameExpressionResolver, + true ); this.indicesService = indicesService; } diff --git a/server/src/main/java/org/opensearch/action/admin/cluster/shards/routing/weighted/get/TransportGetWeightedRoutingAction.java b/server/src/main/java/org/opensearch/action/admin/cluster/shards/routing/weighted/get/TransportGetWeightedRoutingAction.java index 50368d85e0011..6c110c0ea2a73 100644 --- a/server/src/main/java/org/opensearch/action/admin/cluster/shards/routing/weighted/get/TransportGetWeightedRoutingAction.java +++ b/server/src/main/java/org/opensearch/action/admin/cluster/shards/routing/weighted/get/TransportGetWeightedRoutingAction.java @@ -55,7 +55,8 @@ public TransportGetWeightedRoutingAction( threadPool, actionFilters, ClusterGetWeightedRoutingRequest::new, - indexNameExpressionResolver + indexNameExpressionResolver, + true ); this.weightedRoutingService = weightedRoutingService; } diff --git a/server/src/main/java/org/opensearch/action/admin/cluster/state/TransportClusterStateAction.java b/server/src/main/java/org/opensearch/action/admin/cluster/state/TransportClusterStateAction.java index cae465a90446e..13ea7eaa43bf8 100644 --- a/server/src/main/java/org/opensearch/action/admin/cluster/state/TransportClusterStateAction.java +++ b/server/src/main/java/org/opensearch/action/admin/cluster/state/TransportClusterStateAction.java @@ -92,6 +92,7 @@ public TransportClusterStateAction( ClusterStateRequest::new, indexNameExpressionResolver ); + this.localExecuteSupported = true; } @Override @@ -233,9 +234,4 @@ private ClusterStateResponse buildResponse(final ClusterStateRequest request, fi return new ClusterStateResponse(currentState.getClusterName(), builder.build(), false); } - - @Override - protected boolean localExecuteSupportedByAction() { - return true; - } } diff --git a/server/src/main/java/org/opensearch/action/admin/cluster/storedscripts/TransportGetStoredScriptAction.java b/server/src/main/java/org/opensearch/action/admin/cluster/storedscripts/TransportGetStoredScriptAction.java index db1f1edde2812..c34ec49406802 100644 --- a/server/src/main/java/org/opensearch/action/admin/cluster/storedscripts/TransportGetStoredScriptAction.java +++ b/server/src/main/java/org/opensearch/action/admin/cluster/storedscripts/TransportGetStoredScriptAction.java @@ -73,7 +73,8 @@ public TransportGetStoredScriptAction( threadPool, actionFilters, GetStoredScriptRequest::new, - indexNameExpressionResolver + 
indexNameExpressionResolver, + true ); this.scriptService = scriptService; } diff --git a/server/src/main/java/org/opensearch/action/admin/cluster/tasks/TransportPendingClusterTasksAction.java b/server/src/main/java/org/opensearch/action/admin/cluster/tasks/TransportPendingClusterTasksAction.java index 5d5053cc80738..01846ef46c1ed 100644 --- a/server/src/main/java/org/opensearch/action/admin/cluster/tasks/TransportPendingClusterTasksAction.java +++ b/server/src/main/java/org/opensearch/action/admin/cluster/tasks/TransportPendingClusterTasksAction.java @@ -110,4 +110,9 @@ protected void clusterManagerOperation( logger.trace("done fetching pending tasks from cluster service"); listener.onResponse(new PendingClusterTasksResponse(pendingTasks)); } + + @Override + protected boolean localExecuteSupportedByAction() { + return false; + } } diff --git a/server/src/main/java/org/opensearch/action/admin/indices/alias/get/TransportGetAliasesAction.java b/server/src/main/java/org/opensearch/action/admin/indices/alias/get/TransportGetAliasesAction.java index 3aca9c1976f16..4f4e3bd481ee7 100644 --- a/server/src/main/java/org/opensearch/action/admin/indices/alias/get/TransportGetAliasesAction.java +++ b/server/src/main/java/org/opensearch/action/admin/indices/alias/get/TransportGetAliasesAction.java @@ -86,7 +86,8 @@ public TransportGetAliasesAction( threadPool, actionFilters, GetAliasesRequest::new, - indexNameExpressionResolver + indexNameExpressionResolver, + true ); this.systemIndices = systemIndices; } diff --git a/server/src/main/java/org/opensearch/action/admin/indices/exists/indices/TransportIndicesExistsAction.java b/server/src/main/java/org/opensearch/action/admin/indices/exists/indices/TransportIndicesExistsAction.java index 428a0eb35513d..a298eae1aa865 100644 --- a/server/src/main/java/org/opensearch/action/admin/indices/exists/indices/TransportIndicesExistsAction.java +++ b/server/src/main/java/org/opensearch/action/admin/indices/exists/indices/TransportIndicesExistsAction.java @@ -71,7 +71,8 @@ public TransportIndicesExistsAction( threadPool, actionFilters, IndicesExistsRequest::new, - indexNameExpressionResolver + indexNameExpressionResolver, + true ); } diff --git a/server/src/main/java/org/opensearch/action/admin/indices/shards/TransportIndicesShardStoresAction.java b/server/src/main/java/org/opensearch/action/admin/indices/shards/TransportIndicesShardStoresAction.java index 3fbf9ac1bb570..a8b97d0f344ae 100644 --- a/server/src/main/java/org/opensearch/action/admin/indices/shards/TransportIndicesShardStoresAction.java +++ b/server/src/main/java/org/opensearch/action/admin/indices/shards/TransportIndicesShardStoresAction.java @@ -105,7 +105,8 @@ public TransportIndicesShardStoresAction( threadPool, actionFilters, IndicesShardStoresRequest::new, - indexNameExpressionResolver + indexNameExpressionResolver, + true ); this.listShardStoresInfo = listShardStoresInfo; } diff --git a/server/src/main/java/org/opensearch/action/admin/indices/template/get/TransportGetComponentTemplateAction.java b/server/src/main/java/org/opensearch/action/admin/indices/template/get/TransportGetComponentTemplateAction.java index e2594cd792cd3..c3217d109044d 100644 --- a/server/src/main/java/org/opensearch/action/admin/indices/template/get/TransportGetComponentTemplateAction.java +++ b/server/src/main/java/org/opensearch/action/admin/indices/template/get/TransportGetComponentTemplateAction.java @@ -76,7 +76,8 @@ public TransportGetComponentTemplateAction( threadPool, actionFilters, 
GetComponentTemplateAction.Request::new, - indexNameExpressionResolver + indexNameExpressionResolver, + true ); } diff --git a/server/src/main/java/org/opensearch/action/admin/indices/template/get/TransportGetComposableIndexTemplateAction.java b/server/src/main/java/org/opensearch/action/admin/indices/template/get/TransportGetComposableIndexTemplateAction.java index b1ef32db7274f..84fbb59481c10 100644 --- a/server/src/main/java/org/opensearch/action/admin/indices/template/get/TransportGetComposableIndexTemplateAction.java +++ b/server/src/main/java/org/opensearch/action/admin/indices/template/get/TransportGetComposableIndexTemplateAction.java @@ -76,7 +76,8 @@ public TransportGetComposableIndexTemplateAction( threadPool, actionFilters, GetComposableIndexTemplateAction.Request::new, - indexNameExpressionResolver + indexNameExpressionResolver, + true ); } diff --git a/server/src/main/java/org/opensearch/action/admin/indices/template/get/TransportGetIndexTemplatesAction.java b/server/src/main/java/org/opensearch/action/admin/indices/template/get/TransportGetIndexTemplatesAction.java index 10b4975f7b9d0..522234dda509f 100644 --- a/server/src/main/java/org/opensearch/action/admin/indices/template/get/TransportGetIndexTemplatesAction.java +++ b/server/src/main/java/org/opensearch/action/admin/indices/template/get/TransportGetIndexTemplatesAction.java @@ -76,7 +76,8 @@ public TransportGetIndexTemplatesAction( threadPool, actionFilters, GetIndexTemplatesRequest::new, - indexNameExpressionResolver + indexNameExpressionResolver, + true ); } diff --git a/server/src/main/java/org/opensearch/action/ingest/GetPipelineTransportAction.java b/server/src/main/java/org/opensearch/action/ingest/GetPipelineTransportAction.java index 80333c7346f92..7bc0380bccbc0 100644 --- a/server/src/main/java/org/opensearch/action/ingest/GetPipelineTransportAction.java +++ b/server/src/main/java/org/opensearch/action/ingest/GetPipelineTransportAction.java @@ -70,7 +70,8 @@ public GetPipelineTransportAction( threadPool, actionFilters, GetPipelineRequest::new, - indexNameExpressionResolver + indexNameExpressionResolver, + true ); } diff --git a/server/src/main/java/org/opensearch/action/search/GetSearchPipelineTransportAction.java b/server/src/main/java/org/opensearch/action/search/GetSearchPipelineTransportAction.java index a7fcb8f1cfbae..215b7ae1a610c 100644 --- a/server/src/main/java/org/opensearch/action/search/GetSearchPipelineTransportAction.java +++ b/server/src/main/java/org/opensearch/action/search/GetSearchPipelineTransportAction.java @@ -48,7 +48,8 @@ public GetSearchPipelineTransportAction( threadPool, actionFilters, GetSearchPipelineRequest::new, - indexNameExpressionResolver + indexNameExpressionResolver, + true ); } diff --git a/server/src/main/java/org/opensearch/action/support/clustermanager/TransportClusterManagerNodeAction.java b/server/src/main/java/org/opensearch/action/support/clustermanager/TransportClusterManagerNodeAction.java index 080b0d607e991..4e869f29878cd 100644 --- a/server/src/main/java/org/opensearch/action/support/clustermanager/TransportClusterManagerNodeAction.java +++ b/server/src/main/java/org/opensearch/action/support/clustermanager/TransportClusterManagerNodeAction.java @@ -267,24 +267,7 @@ protected void doStart(ClusterState clusterState) { final DiscoveryNodes nodes = clusterState.nodes(); if (nodes.isLocalNodeElectedClusterManager() || localExecute(request)) { // check for block, if blocked, retry, else, execute locally - final ClusterBlockException blockException = 
checkBlock(request, clusterState); - if (blockException != null) { - if (!blockException.retryable()) { - listener.onFailure(blockException); - } else { - logger.debug("can't execute due to a cluster block, retrying", blockException); - retry(clusterState, blockException, newState -> { - try { - ClusterBlockException newException = checkBlock(request, newState); - return (newException == null || !newException.retryable()); - } catch (Exception e) { - // accept state as block will be rechecked by doStart() and listener.onFailure() then called - logger.trace("exception occurred during cluster block checking, accepting state", e); - return true; - } - }); - } - } else { + if (!checkForBlock(request, clusterState)) { threadPool.executor(executor) .execute( ActionRunnable.wrap( @@ -422,12 +405,43 @@ public GetTermVersionResponse read(StreamInput in) throws IOException { }; } + private boolean checkForBlock(Request request, ClusterState localClusterState) { + final ClusterBlockException blockException = checkBlock(request, localClusterState); + if (blockException != null) { + if (!blockException.retryable()) { + listener.onFailure(blockException); + } else { + logger.debug("can't execute due to a cluster block, retrying", blockException); + retry(localClusterState, blockException, newState -> { + try { + ClusterBlockException newException = checkBlock(request, newState); + return (newException == null || !newException.retryable()); + } catch (Exception e) { + // accept state as block will be rechecked by doStart() and listener.onFailure() then called + logger.trace("exception occurred during cluster block checking, accepting state", e); + return true; + } + }); + } + return true; + } else { + return false; + } + } + private void executeOnLocalNode(ClusterState localClusterState) { - Runnable runTask = ActionRunnable.wrap( - getDelegateForLocalExecute(localClusterState), - l -> clusterManagerOperation(task, request, localClusterState, l) - ); - threadPool.executor(executor).execute(runTask); + try { + // check for block, if blocked, retry, else, execute locally + if (!checkForBlock(request, localClusterState)) { + Runnable runTask = ActionRunnable.wrap( + getDelegateForLocalExecute(localClusterState), + l -> clusterManagerOperation(task, request, localClusterState, l) + ); + threadPool.executor(executor).execute(runTask); + } + } catch (Exception e) { + listener.onFailure(e); + } } private void executeOnClusterManager(DiscoveryNode clusterManagerNode, ClusterState clusterState) { diff --git a/server/src/main/java/org/opensearch/action/support/clustermanager/TransportClusterManagerNodeReadAction.java b/server/src/main/java/org/opensearch/action/support/clustermanager/TransportClusterManagerNodeReadAction.java index d58487a475bcf..88cb2ed6a9bf0 100644 --- a/server/src/main/java/org/opensearch/action/support/clustermanager/TransportClusterManagerNodeReadAction.java +++ b/server/src/main/java/org/opensearch/action/support/clustermanager/TransportClusterManagerNodeReadAction.java @@ -51,6 +51,8 @@ public abstract class TransportClusterManagerNodeReadAction< Request extends ClusterManagerNodeReadRequest, Response extends ActionResponse> extends TransportClusterManagerNodeAction { + protected boolean localExecuteSupported = false; + protected TransportClusterManagerNodeReadAction( String actionName, TransportService transportService, @@ -58,7 +60,8 @@ protected TransportClusterManagerNodeReadAction( ThreadPool threadPool, ActionFilters actionFilters, Writeable.Reader request, - 
IndexNameExpressionResolver indexNameExpressionResolver + IndexNameExpressionResolver indexNameExpressionResolver, + boolean localExecuteSupported ) { this( actionName, @@ -71,6 +74,19 @@ protected TransportClusterManagerNodeReadAction( request, indexNameExpressionResolver ); + this.localExecuteSupported = localExecuteSupported; + } + + protected TransportClusterManagerNodeReadAction( + String actionName, + TransportService transportService, + ClusterService clusterService, + ThreadPool threadPool, + ActionFilters actionFilters, + Writeable.Reader request, + IndexNameExpressionResolver indexNameExpressionResolver + ) { + this(actionName, transportService, clusterService, threadPool, actionFilters, request, indexNameExpressionResolver, false); } protected TransportClusterManagerNodeReadAction( @@ -124,4 +140,9 @@ protected TransportClusterManagerNodeReadAction( protected final boolean localExecute(Request request) { return request.local(); } + + protected boolean localExecuteSupportedByAction() { + return localExecuteSupported; + } + } diff --git a/server/src/main/java/org/opensearch/action/support/clustermanager/info/TransportClusterInfoAction.java b/server/src/main/java/org/opensearch/action/support/clustermanager/info/TransportClusterInfoAction.java index 65f00a4731ab5..8a0082ad05f66 100644 --- a/server/src/main/java/org/opensearch/action/support/clustermanager/info/TransportClusterInfoAction.java +++ b/server/src/main/java/org/opensearch/action/support/clustermanager/info/TransportClusterInfoAction.java @@ -62,6 +62,7 @@ public TransportClusterInfoAction( IndexNameExpressionResolver indexNameExpressionResolver ) { super(actionName, transportService, clusterService, threadPool, actionFilters, request, indexNameExpressionResolver); + this.localExecuteSupported = true; } @Override diff --git a/server/src/test/java/org/opensearch/action/admin/indices/mapping/get/GetMappingsActionTests.java b/server/src/test/java/org/opensearch/action/admin/indices/mapping/get/GetMappingsActionTests.java new file mode 100644 index 0000000000000..87f218760038e --- /dev/null +++ b/server/src/test/java/org/opensearch/action/admin/indices/mapping/get/GetMappingsActionTests.java @@ -0,0 +1,227 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ +/* + * Modifications Copyright OpenSearch Contributors. See + * GitHub history for details. 
+ */ + +package org.opensearch.action.admin.indices.mapping.get; + +import org.opensearch.Version; +import org.opensearch.action.support.ActionFilters; +import org.opensearch.action.support.clustermanager.term.GetTermVersionResponse; +import org.opensearch.action.support.replication.ClusterStateCreationUtils; +import org.opensearch.cluster.ClusterState; +import org.opensearch.cluster.block.ClusterBlock; +import org.opensearch.cluster.block.ClusterBlockException; +import org.opensearch.cluster.block.ClusterBlockLevel; +import org.opensearch.cluster.block.ClusterBlocks; +import org.opensearch.cluster.coordination.ClusterStateTermVersion; +import org.opensearch.cluster.metadata.IndexNameExpressionResolver; +import org.opensearch.cluster.node.DiscoveryNode; +import org.opensearch.cluster.node.DiscoveryNodeRole; +import org.opensearch.cluster.service.ClusterService; +import org.opensearch.common.settings.Settings; +import org.opensearch.common.settings.SettingsFilter; +import org.opensearch.common.settings.SettingsModule; +import org.opensearch.common.util.concurrent.ThreadContext; +import org.opensearch.core.action.ActionListener; +import org.opensearch.core.rest.RestStatus; +import org.opensearch.indices.IndicesService; +import org.opensearch.telemetry.tracing.noop.NoopTracer; +import org.opensearch.test.OpenSearchTestCase; +import org.opensearch.test.transport.CapturingTransport; +import org.opensearch.threadpool.TestThreadPool; +import org.opensearch.threadpool.ThreadPool; +import org.opensearch.transport.TransportService; +import org.junit.After; +import org.junit.Assert; +import org.junit.Before; + +import java.util.Collections; +import java.util.EnumSet; +import java.util.HashMap; +import java.util.concurrent.TimeUnit; + +import static java.util.Collections.emptyList; +import static java.util.Collections.emptySet; +import static org.opensearch.test.ClusterServiceUtils.createClusterService; +import static org.opensearch.test.ClusterServiceUtils.setState; +import static org.hamcrest.Matchers.equalTo; +import static org.mockito.Mockito.mock; + +public class GetMappingsActionTests extends OpenSearchTestCase { + private TransportService transportService; + private ClusterService clusterService; + private ThreadPool threadPool; + private SettingsFilter settingsFilter; + private final String indexName = "test_index"; + CapturingTransport capturingTransport = new CapturingTransport(); + private DiscoveryNode localNode; + private DiscoveryNode remoteNode; + private DiscoveryNode[] allNodes; + private TransportGetMappingsAction transportAction = null; + + @Before + public void setUp() throws Exception { + super.setUp(); + + settingsFilter = new SettingsModule(Settings.EMPTY, emptyList(), emptyList(), emptySet()).getSettingsFilter(); + threadPool = new TestThreadPool("GetIndexActionTests"); + clusterService = createClusterService(threadPool); + + transportService = capturingTransport.createTransportService( + clusterService.getSettings(), + threadPool, + TransportService.NOOP_TRANSPORT_INTERCEPTOR, + boundAddress -> clusterService.localNode(), + null, + emptySet(), + NoopTracer.INSTANCE + ); + transportService.start(); + transportService.acceptIncomingRequests(); + + localNode = new DiscoveryNode( + "local_node", + buildNewFakeTransportAddress(), + Collections.emptyMap(), + Collections.singleton(DiscoveryNodeRole.DATA_ROLE), + Version.CURRENT + ); + remoteNode = new DiscoveryNode( + "remote_node", + buildNewFakeTransportAddress(), + Collections.emptyMap(), + 
Collections.singleton(DiscoveryNodeRole.CLUSTER_MANAGER_ROLE), + Version.CURRENT + ); + allNodes = new DiscoveryNode[] { localNode, remoteNode }; + setState(clusterService, ClusterStateCreationUtils.state(localNode, remoteNode, allNodes)); + transportAction = new TransportGetMappingsAction( + GetMappingsActionTests.this.transportService, + GetMappingsActionTests.this.clusterService, + GetMappingsActionTests.this.threadPool, + new ActionFilters(emptySet()), + new IndexNameExpressionResolver(new ThreadContext(Settings.EMPTY)), + mock(IndicesService.class) + ); + + } + + @After + public void tearDown() throws Exception { + super.tearDown(); + clusterService.close(); + transportService.close(); + ThreadPool.terminate(threadPool, 30, TimeUnit.SECONDS); + } + + public void testGetTransportWithoutMatchingTerm() { + transportAction.execute(null, new GetMappingsRequest(), ActionListener.wrap(Assert::assertNotNull, exception -> { + throw new AssertionError(exception); + })); + assertThat(capturingTransport.capturedRequests().length, equalTo(1)); + CapturingTransport.CapturedRequest capturedRequest = capturingTransport.capturedRequests()[0]; + // mismatch term and version + GetTermVersionResponse termResp = new GetTermVersionResponse( + new ClusterStateTermVersion( + clusterService.state().getClusterName(), + clusterService.state().metadata().clusterUUID(), + clusterService.state().term() - 1, + clusterService.state().version() - 1 + ) + ); + capturingTransport.handleResponse(capturedRequest.requestId, termResp); + + assertThat(capturingTransport.capturedRequests().length, equalTo(2)); + CapturingTransport.CapturedRequest capturedRequest1 = capturingTransport.capturedRequests()[1]; + + capturingTransport.handleResponse(capturedRequest1.requestId, new GetMappingsResponse(new HashMap<>())); + } + + public void testGetTransportWithMatchingTerm() { + transportAction.execute(null, new GetMappingsRequest(), ActionListener.wrap(Assert::assertNotNull, exception -> { + throw new AssertionError(exception); + })); + assertThat(capturingTransport.capturedRequests().length, equalTo(1)); + CapturingTransport.CapturedRequest capturedRequest = capturingTransport.capturedRequests()[0]; + GetTermVersionResponse termResp = new GetTermVersionResponse( + new ClusterStateTermVersion( + clusterService.state().getClusterName(), + clusterService.state().metadata().clusterUUID(), + clusterService.state().term(), + clusterService.state().version() + ) + ); + capturingTransport.handleResponse(capturedRequest.requestId, termResp); + + // no more transport calls + assertThat(capturingTransport.capturedRequests().length, equalTo(1)); + } + + public void testGetTransportClusterBlockWithMatchingTerm() { + ClusterBlock readClusterBlock = new ClusterBlock( + 1, + "uuid", + "", + false, + true, + true, + RestStatus.OK, + EnumSet.of(ClusterBlockLevel.METADATA_READ) + ); + ClusterBlocks.Builder builder = ClusterBlocks.builder(); + builder.addGlobalBlock(readClusterBlock); + ClusterState metadataReadBlockedState = ClusterState.builder(ClusterStateCreationUtils.state(localNode, remoteNode, allNodes)) + .blocks(builder) + .build(); + setState(clusterService, metadataReadBlockedState); + + transportAction.execute( + null, + new GetMappingsRequest(), + ActionListener.wrap(response -> { throw new AssertionError(response); }, exception -> { + Assert.assertTrue(exception instanceof ClusterBlockException); + }) + ); + assertThat(capturingTransport.capturedRequests().length, equalTo(1)); + CapturingTransport.CapturedRequest capturedRequest = 
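+        // The single captured request so far is the term-version call that the read action
+        // issues first; because the response below matches the local term and version, the
+        // metadata read block is enforced from local cluster state and no full-state fetch follows.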
capturingTransport.capturedRequests()[0];
+        GetTermVersionResponse termResp = new GetTermVersionResponse(
+            new ClusterStateTermVersion(
+                clusterService.state().getClusterName(),
+                clusterService.state().metadata().clusterUUID(),
+                clusterService.state().term(),
+                clusterService.state().version()
+            )
+        );
+        capturingTransport.handleResponse(capturedRequest.requestId, termResp);
+
+        // no more transport calls
+        assertThat(capturingTransport.capturedRequests().length, equalTo(1));
+    }
+}

From b35690c886f42d2ca01fa3081e80cb4ba4aa19d9 Mon Sep 17 00:00:00 2001
From: Sumit Bansal
Date: Mon, 22 Jul 2024 22:03:37 +0530
Subject: [PATCH 091/167] Reduce logging in DEBUG for MasterService:run
 (#14795)

* Reduce logging in DEBUG for MasterService:run by introducing short and long summary in TaskBatcher

Signed-off-by: Sumit Bansal
---
 CHANGELOG.md                                  |   1 +
 .../cluster/service/MasterService.java        |  55 ++--
 .../cluster/service/TaskBatcher.java          |  28 +-
 .../cluster/service/MasterServiceTests.java   | 285 ++++++++++++++++--
 .../cluster/service/TaskBatcherTests.java     |   3 +-
 5 files changed, 309 insertions(+), 63 deletions(-)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index e77b183601674..adbb69ff72a0e 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -22,6 +22,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
 - Refactor remote-routing-table service inline with remote state interfaces([#14668](https://github.com/opensearch-project/OpenSearch/pull/14668))
 - Add prefix mode verification setting for repository verification (([#14790](https://github.com/opensearch-project/OpenSearch/pull/14790)))
 - Optimize TransportNodesAction to not send DiscoveryNodes for NodeStats, NodesInfo and ClusterStats call ([14749](https://github.com/opensearch-project/OpenSearch/pull/14749))
+- Reduce logging in DEBUG for MasterService:run ([#14795](https://github.com/opensearch-project/OpenSearch/pull/14795))
 - Enabling term version check on local state for all ClusterManager Read Transport Actions ([#14273](https://github.com/opensearch-project/OpenSearch/pull/14273))

 ### Dependencies

diff --git a/server/src/main/java/org/opensearch/cluster/service/MasterService.java b/server/src/main/java/org/opensearch/cluster/service/MasterService.java
index 686e9793a8fd3..4ab8255df7658 100644
--- a/server/src/main/java/org/opensearch/cluster/service/MasterService.java
+++ b/server/src/main/java/org/opensearch/cluster/service/MasterService.java
@@ -84,6 +84,7 @@
 import java.util.Objects;
 import java.util.Optional;
 import java.util.concurrent.TimeUnit;
+import java.util.function.Function;
 import java.util.function.Supplier;
 import java.util.stream.Collectors;

@@ -221,10 +222,10 @@ protected void onTimeout(List tasks, TimeValue timeout) {
         }

         @Override
-        protected void run(Object batchingKey, List tasks, String tasksSummary) {
+        protected void run(Object batchingKey, List tasks, Function taskSummaryGenerator) {
             ClusterStateTaskExecutor taskExecutor = (ClusterStateTaskExecutor) batchingKey;
             List updateTasks = (List) tasks;
-            runTasks(new TaskInputs(taskExecutor, updateTasks, tasksSummary));
+            runTasks(new TaskInputs(taskExecutor, updateTasks, taskSummaryGenerator));
         }

         class UpdateTask extends BatchedTask {
@@ -297,26 +298,33 @@ public static boolean assertNotMasterUpdateThread(String reason) {
     }

     private void runTasks(TaskInputs taskInputs) {
-        final String summary = taskInputs.summary;
+        final String longSummary = logger.isTraceEnabled() ?
taskInputs.taskSummaryGenerator.apply(true) : ""; + final String shortSummary = taskInputs.taskSummaryGenerator.apply(false); + if (!lifecycle.started()) { - logger.debug("processing [{}]: ignoring, cluster-manager service not started", summary); + logger.debug("processing [{}]: ignoring, cluster-manager service not started", shortSummary); return; } - logger.debug("executing cluster state update for [{}]", summary); + if (logger.isTraceEnabled()) { + logger.trace("executing cluster state update for [{}]", longSummary); + } else { + logger.debug("executing cluster state update for [{}]", shortSummary); + } + final ClusterState previousClusterState = state(); if (!previousClusterState.nodes().isLocalNodeElectedClusterManager() && taskInputs.runOnlyWhenClusterManager()) { - logger.debug("failing [{}]: local node is no longer cluster-manager", summary); + logger.debug("failing [{}]: local node is no longer cluster-manager", shortSummary); taskInputs.onNoLongerClusterManager(); return; } final long computationStartTime = threadPool.preciseRelativeTimeInNanos(); - final TaskOutputs taskOutputs = calculateTaskOutputs(taskInputs, previousClusterState); + final TaskOutputs taskOutputs = calculateTaskOutputs(taskInputs, previousClusterState, shortSummary); taskOutputs.notifyFailedTasks(); final TimeValue computationTime = getTimeSince(computationStartTime); - logExecutionTime(computationTime, "compute cluster state update", summary); + logExecutionTime(computationTime, "compute cluster state update", shortSummary); clusterManagerMetrics.recordLatency( clusterManagerMetrics.clusterStateComputeHistogram, @@ -328,17 +336,17 @@ private void runTasks(TaskInputs taskInputs) { final long notificationStartTime = threadPool.preciseRelativeTimeInNanos(); taskOutputs.notifySuccessfulTasksOnUnchangedClusterState(); final TimeValue executionTime = getTimeSince(notificationStartTime); - logExecutionTime(executionTime, "notify listeners on unchanged cluster state", summary); + logExecutionTime(executionTime, "notify listeners on unchanged cluster state", shortSummary); } else { final ClusterState newClusterState = taskOutputs.newClusterState; if (logger.isTraceEnabled()) { - logger.trace("cluster state updated, source [{}]\n{}", summary, newClusterState); + logger.trace("cluster state updated, source [{}]\n{}", longSummary, newClusterState); } else { - logger.debug("cluster state updated, version [{}], source [{}]", newClusterState.version(), summary); + logger.debug("cluster state updated, version [{}], source [{}]", newClusterState.version(), shortSummary); } final long publicationStartTime = threadPool.preciseRelativeTimeInNanos(); try { - ClusterChangedEvent clusterChangedEvent = new ClusterChangedEvent(summary, newClusterState, previousClusterState); + ClusterChangedEvent clusterChangedEvent = new ClusterChangedEvent(shortSummary, newClusterState, previousClusterState); // new cluster state, notify all listeners final DiscoveryNodes.Delta nodesDelta = clusterChangedEvent.nodesDelta(); if (nodesDelta.hasChanges() && logger.isInfoEnabled()) { @@ -346,7 +354,7 @@ private void runTasks(TaskInputs taskInputs) { if (nodesDeltaSummary.length() > 0) { logger.info( "{}, term: {}, version: {}, delta: {}", - summary, + shortSummary, newClusterState.term(), newClusterState.version(), nodesDeltaSummary @@ -357,7 +365,7 @@ private void runTasks(TaskInputs taskInputs) { logger.debug("publishing cluster state version [{}]", newClusterState.version()); publish(clusterChangedEvent, taskOutputs, publicationStartTime); } catch 
(Exception e) { - handleException(summary, publicationStartTime, newClusterState, e); + handleException(shortSummary, publicationStartTime, newClusterState, e); } } } @@ -452,8 +460,8 @@ private void handleException(String summary, long startTimeMillis, ClusterState // TODO: do we want to call updateTask.onFailure here? } - private TaskOutputs calculateTaskOutputs(TaskInputs taskInputs, ClusterState previousClusterState) { - ClusterTasksResult clusterTasksResult = executeTasks(taskInputs, previousClusterState); + private TaskOutputs calculateTaskOutputs(TaskInputs taskInputs, ClusterState previousClusterState, String taskSummary) { + ClusterTasksResult clusterTasksResult = executeTasks(taskInputs, previousClusterState, taskSummary); ClusterState newClusterState = patchVersions(previousClusterState, clusterTasksResult); return new TaskOutputs( taskInputs, @@ -897,7 +905,7 @@ public void onTimeout() { } } - private ClusterTasksResult executeTasks(TaskInputs taskInputs, ClusterState previousClusterState) { + private ClusterTasksResult executeTasks(TaskInputs taskInputs, ClusterState previousClusterState, String taskSummary) { ClusterTasksResult clusterTasksResult; try { List inputs = taskInputs.updateTasks.stream().map(tUpdateTask -> tUpdateTask.task).collect(Collectors.toList()); @@ -913,7 +921,7 @@ private ClusterTasksResult executeTasks(TaskInputs taskInputs, ClusterSt "failed to execute cluster state update (on version: [{}], uuid: [{}]) for [{}]\n{}{}{}", previousClusterState.version(), previousClusterState.stateUUID(), - taskInputs.summary, + taskSummary, previousClusterState.nodes(), previousClusterState.routingTable(), previousClusterState.getRoutingNodes() @@ -955,14 +963,19 @@ private List getNonFailedTasks(TaskInputs taskInputs, Cluste * Represents a set of tasks to be processed together with their executor */ private class TaskInputs { - final String summary; + final List updateTasks; final ClusterStateTaskExecutor executor; + final Function taskSummaryGenerator; - TaskInputs(ClusterStateTaskExecutor executor, List updateTasks, String summary) { - this.summary = summary; + TaskInputs( + ClusterStateTaskExecutor executor, + List updateTasks, + final Function taskSummaryGenerator + ) { this.executor = executor; this.updateTasks = updateTasks; + this.taskSummaryGenerator = taskSummaryGenerator; } boolean runOnlyWhenClusterManager() { diff --git a/server/src/main/java/org/opensearch/cluster/service/TaskBatcher.java b/server/src/main/java/org/opensearch/cluster/service/TaskBatcher.java index 5e58f495a16fb..3513bfffb7157 100644 --- a/server/src/main/java/org/opensearch/cluster/service/TaskBatcher.java +++ b/server/src/main/java/org/opensearch/cluster/service/TaskBatcher.java @@ -177,7 +177,6 @@ void runIfNotProcessed(BatchedTask updateTask) { // to give other tasks with different batching key a chance to execute. if (updateTask.processed.get() == false) { final List toExecute = new ArrayList<>(); - final Map> processTasksBySource = new HashMap<>(); // While removing task, need to remove task first from taskMap and then remove identity from identityMap. // Changing this order might lead to duplicate task during submission. 
LinkedHashSet pending = tasksPerBatchingKey.remove(updateTask.batchingKey); @@ -187,7 +186,6 @@ void runIfNotProcessed(BatchedTask updateTask) { if (task.processed.getAndSet(true) == false) { logger.trace("will process {}", task); toExecute.add(task); - processTasksBySource.computeIfAbsent(task.source, s -> new ArrayList<>()).add(task); } else { logger.trace("skipping {}, already processed", task); } @@ -195,22 +193,34 @@ void runIfNotProcessed(BatchedTask updateTask) { } if (toExecute.isEmpty() == false) { - final String tasksSummary = processTasksBySource.entrySet().stream().map(entry -> { - String tasks = updateTask.describeTasks(entry.getValue()); - return tasks.isEmpty() ? entry.getKey() : entry.getKey() + "[" + tasks + "]"; - }).reduce((s1, s2) -> s1 + ", " + s2).orElse(""); - + Function taskSummaryGenerator = (longSummaryRequired) -> { + if (longSummaryRequired == null || !longSummaryRequired) { + return buildShortSummary(updateTask.batchingKey, toExecute.size()); + } + final Map> processTasksBySource = new HashMap<>(); + for (final BatchedTask task : toExecute) { + processTasksBySource.computeIfAbsent(task.source, s -> new ArrayList<>()).add(task); + } + return processTasksBySource.entrySet().stream().map(entry -> { + String tasks = updateTask.describeTasks(entry.getValue()); + return tasks.isEmpty() ? entry.getKey() : entry.getKey() + "[" + tasks + "]"; + }).reduce((s1, s2) -> s1 + ", " + s2).orElse(""); + }; taskBatcherListener.onBeginProcessing(toExecute); - run(updateTask.batchingKey, toExecute, tasksSummary); + run(updateTask.batchingKey, toExecute, taskSummaryGenerator); } } } + private String buildShortSummary(final Object batchingKey, final int taskCount) { + return "Tasks batched with key: " + batchingKey.toString().split("\\$")[0] + " and count: " + taskCount; + } + /** * Action to be implemented by the specific batching implementation * All tasks have the given batching key. */ - protected abstract void run(Object batchingKey, List tasks, String tasksSummary); + protected abstract void run(Object batchingKey, List tasks, Function taskSummaryGenerator); /** * Represents a runnable task that supports batching. 
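
The hunk above carries the heart of the change: run() now receives a Function<Boolean, String> instead of a pre-built summary string, and buildShortSummary() produces a cheap count-based description while the joined per-task summary is computed only on demand. A minimal standalone sketch of that pattern (simplified names; not the actual OpenSearch classes):

import java.util.List;
import java.util.function.Function;

class LazySummarySketch {
    static void runBatch(List<String> taskSources, boolean traceEnabled) {
        // The generator defers the expensive join until a long summary is actually requested.
        Function<Boolean, String> summaryGenerator = longRequired -> {
            if (longRequired == null || !longRequired) {
                return "Tasks batched with count: " + taskSources.size(); // cheap short form
            }
            return String.join(", ", taskSources); // expensive long form, built for TRACE only
        };
        String shortSummary = summaryGenerator.apply(false);
        String longSummary = traceEnabled ? summaryGenerator.apply(true) : "";
        System.out.println(traceEnabled ? longSummary : shortSummary);
    }

    public static void main(String[] args) {
        runBatch(List.of("create-index [test]", "put-mapping [test]"), false);
    }
}

With this shape, DEBUG-level runs pay only for the short summary regardless of how many tasks were batched.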
diff --git a/server/src/test/java/org/opensearch/cluster/service/MasterServiceTests.java b/server/src/test/java/org/opensearch/cluster/service/MasterServiceTests.java index 8c84ac365dfd1..7562dfc2e9d33 100644 --- a/server/src/test/java/org/opensearch/cluster/service/MasterServiceTests.java +++ b/server/src/test/java/org/opensearch/cluster/service/MasterServiceTests.java @@ -376,13 +376,13 @@ public void onFailure(String source, Exception e) {} } @TestLogging(value = "org.opensearch.cluster.service:TRACE", reason = "to ensure that we log cluster state events on TRACE level") - public void testClusterStateUpdateLogging() throws Exception { + public void testClusterStateUpdateLoggingWithTraceEnabled() throws Exception { try (MockLogAppender mockAppender = MockLogAppender.createForLoggers(LogManager.getLogger(MasterService.class))) { mockAppender.addExpectation( new MockLogAppender.SeenEventExpectation( "test1 start", MasterService.class.getCanonicalName(), - Level.DEBUG, + Level.TRACE, "executing cluster state update for [test1]" ) ); @@ -391,7 +391,7 @@ public void testClusterStateUpdateLogging() throws Exception { "test1 computation", MasterService.class.getCanonicalName(), Level.DEBUG, - "took [1s] to compute cluster state update for [test1]" + "took [1s] to compute cluster state update for [Tasks batched with key: org.opensearch.cluster.service.MasterServiceTests and count: 1]" ) ); mockAppender.addExpectation( @@ -399,7 +399,7 @@ public void testClusterStateUpdateLogging() throws Exception { "test1 notification", MasterService.class.getCanonicalName(), Level.DEBUG, - "took [0s] to notify listeners on unchanged cluster state for [test1]" + "took [0s] to notify listeners on unchanged cluster state for [Tasks batched with key: org.opensearch.cluster.service.MasterServiceTests and count: 1]" ) ); @@ -407,7 +407,7 @@ public void testClusterStateUpdateLogging() throws Exception { new MockLogAppender.SeenEventExpectation( "test2 start", MasterService.class.getCanonicalName(), - Level.DEBUG, + Level.TRACE, "executing cluster state update for [test2]" ) ); @@ -416,7 +416,7 @@ public void testClusterStateUpdateLogging() throws Exception { "test2 failure", MasterService.class.getCanonicalName(), Level.TRACE, - "failed to execute cluster state update (on version: [*], uuid: [*]) for [test2]*" + "failed to execute cluster state update (on version: [*], uuid: [*]) for [Tasks batched with key: org.opensearch.cluster.service.MasterServiceTests and count: 1]*" ) ); mockAppender.addExpectation( @@ -424,7 +424,7 @@ public void testClusterStateUpdateLogging() throws Exception { "test2 computation", MasterService.class.getCanonicalName(), Level.DEBUG, - "took [2s] to compute cluster state update for [test2]" + "took [2s] to compute cluster state update for [Tasks batched with key: org.opensearch.cluster.service.MasterServiceTests and count: 1]" ) ); mockAppender.addExpectation( @@ -432,7 +432,7 @@ public void testClusterStateUpdateLogging() throws Exception { "test2 notification", MasterService.class.getCanonicalName(), Level.DEBUG, - "took [0s] to notify listeners on unchanged cluster state for [test2]" + "took [0s] to notify listeners on unchanged cluster state for [Tasks batched with key: org.opensearch.cluster.service.MasterServiceTests and count: 1]" ) ); @@ -440,7 +440,7 @@ public void testClusterStateUpdateLogging() throws Exception { new MockLogAppender.SeenEventExpectation( "test3 start", MasterService.class.getCanonicalName(), - Level.DEBUG, + Level.TRACE, "executing cluster state update for 
[test3]" ) ); @@ -449,7 +449,7 @@ public void testClusterStateUpdateLogging() throws Exception { "test3 computation", MasterService.class.getCanonicalName(), Level.DEBUG, - "took [3s] to compute cluster state update for [test3]" + "took [3s] to compute cluster state update for [Tasks batched with key: org.opensearch.cluster.service.MasterServiceTests and count: 1]" ) ); mockAppender.addExpectation( @@ -457,7 +457,7 @@ public void testClusterStateUpdateLogging() throws Exception { "test3 notification", MasterService.class.getCanonicalName(), Level.DEBUG, - "took [4s] to notify listeners on successful publication of cluster state (version: *, uuid: *) for [test3]" + "took [4s] to notify listeners on successful publication of cluster state (version: *, uuid: *) for [Tasks batched with key: org.opensearch.cluster.service.MasterServiceTests and count: 1]" ) ); @@ -465,7 +465,7 @@ public void testClusterStateUpdateLogging() throws Exception { new MockLogAppender.SeenEventExpectation( "test4", MasterService.class.getCanonicalName(), - Level.DEBUG, + Level.TRACE, "executing cluster state update for [test4]" ) ); @@ -540,6 +540,171 @@ public void onFailure(String source, Exception e) { } } + @TestLogging(value = "org.opensearch.cluster.service:DEBUG", reason = "to ensure that we log cluster state events on DEBUG level") + public void testClusterStateUpdateLoggingWithDebugEnabled() throws Exception { + try (MockLogAppender mockAppender = MockLogAppender.createForLoggers(LogManager.getLogger(MasterService.class))) { + mockAppender.addExpectation( + new MockLogAppender.SeenEventExpectation( + "test1 start", + MasterService.class.getCanonicalName(), + Level.DEBUG, + "executing cluster state update for [Tasks batched with key: org.opensearch.cluster.service.MasterServiceTests and count: 1]" + ) + ); + mockAppender.addExpectation( + new MockLogAppender.SeenEventExpectation( + "test1 computation", + MasterService.class.getCanonicalName(), + Level.DEBUG, + "took [1s] to compute cluster state update for [Tasks batched with key: org.opensearch.cluster.service.MasterServiceTests and count: 1]" + ) + ); + mockAppender.addExpectation( + new MockLogAppender.SeenEventExpectation( + "test1 notification", + MasterService.class.getCanonicalName(), + Level.DEBUG, + "took [0s] to notify listeners on unchanged cluster state for [Tasks batched with key: org.opensearch.cluster.service.MasterServiceTests and count: 1]" + ) + ); + + mockAppender.addExpectation( + new MockLogAppender.SeenEventExpectation( + "test2 start", + MasterService.class.getCanonicalName(), + Level.DEBUG, + "executing cluster state update for [Tasks batched with key: org.opensearch.cluster.service.MasterServiceTests and count: 1]" + ) + ); + mockAppender.addExpectation( + new MockLogAppender.UnseenEventExpectation( + "test2 failure", + MasterService.class.getCanonicalName(), + Level.DEBUG, + "failed to execute cluster state update (on version: [*], uuid: [*]) for [Tasks batched with key: org.opensearch.cluster.service.MasterServiceTests and count: 1]*" + ) + ); + mockAppender.addExpectation( + new MockLogAppender.SeenEventExpectation( + "test2 computation", + MasterService.class.getCanonicalName(), + Level.DEBUG, + "took [2s] to compute cluster state update for [Tasks batched with key: org.opensearch.cluster.service.MasterServiceTests and count: 1]" + ) + ); + mockAppender.addExpectation( + new MockLogAppender.SeenEventExpectation( + "test2 notification", + MasterService.class.getCanonicalName(), + Level.DEBUG, + "took [0s] to notify listeners on 
unchanged cluster state for [Tasks batched with key: org.opensearch.cluster.service.MasterServiceTests and count: 1]" + ) + ); + + mockAppender.addExpectation( + new MockLogAppender.SeenEventExpectation( + "test3 start", + MasterService.class.getCanonicalName(), + Level.DEBUG, + "executing cluster state update for [Tasks batched with key: org.opensearch.cluster.service.MasterServiceTests and count: 1]" + ) + ); + mockAppender.addExpectation( + new MockLogAppender.SeenEventExpectation( + "test3 computation", + MasterService.class.getCanonicalName(), + Level.DEBUG, + "took [3s] to compute cluster state update for [Tasks batched with key: org.opensearch.cluster.service.MasterServiceTests and count: 1]" + ) + ); + mockAppender.addExpectation( + new MockLogAppender.SeenEventExpectation( + "test3 notification", + MasterService.class.getCanonicalName(), + Level.DEBUG, + "took [4s] to notify listeners on successful publication of cluster state (version: *, uuid: *) for [Tasks batched with key: org.opensearch.cluster.service.MasterServiceTests and count: 1]" + ) + ); + + mockAppender.addExpectation( + new MockLogAppender.SeenEventExpectation( + "test4", + MasterService.class.getCanonicalName(), + Level.DEBUG, + "executing cluster state update for [Tasks batched with key: org.opensearch.cluster.service.MasterServiceTests and count: 1]" + ) + ); + + try (ClusterManagerService clusterManagerService = createClusterManagerService(true)) { + clusterManagerService.submitStateUpdateTask("test1", new ClusterStateUpdateTask() { + @Override + public ClusterState execute(ClusterState currentState) { + timeDiffInMillis += TimeValue.timeValueSeconds(1).millis(); + return currentState; + } + + @Override + public void clusterStateProcessed(String source, ClusterState oldState, ClusterState newState) {} + + @Override + public void onFailure(String source, Exception e) { + fail(); + } + }); + clusterManagerService.submitStateUpdateTask("test2", new ClusterStateUpdateTask() { + @Override + public ClusterState execute(ClusterState currentState) { + timeDiffInMillis += TimeValue.timeValueSeconds(2).millis(); + throw new IllegalArgumentException("Testing handling of exceptions in the cluster state task"); + } + + @Override + public void clusterStateProcessed(String source, ClusterState oldState, ClusterState newState) { + fail(); + } + + @Override + public void onFailure(String source, Exception e) {} + }); + clusterManagerService.submitStateUpdateTask("test3", new ClusterStateUpdateTask() { + @Override + public ClusterState execute(ClusterState currentState) { + timeDiffInMillis += TimeValue.timeValueSeconds(3).millis(); + return ClusterState.builder(currentState).incrementVersion().build(); + } + + @Override + public void clusterStateProcessed(String source, ClusterState oldState, ClusterState newState) { + timeDiffInMillis += TimeValue.timeValueSeconds(4).millis(); + } + + @Override + public void onFailure(String source, Exception e) { + fail(); + } + }); + clusterManagerService.submitStateUpdateTask("test4", new ClusterStateUpdateTask() { + @Override + public ClusterState execute(ClusterState currentState) { + return currentState; + } + + @Override + public void clusterStateProcessed(String source, ClusterState oldState, ClusterState newState) {} + + @Override + public void onFailure(String source, Exception e) { + fail(); + } + }); + assertBusy(mockAppender::assertAllExpectationsMatched); + // verify stats values after state is published + assertEquals(1, 
clusterManagerService.getClusterStateStats().getUpdateSuccess()); + assertEquals(0, clusterManagerService.getClusterStateStats().getUpdateFailed()); + } + } + } + public void testClusterStateBatchedUpdates() throws BrokenBarrierException, InterruptedException { AtomicInteger counter = new AtomicInteger(); class Task { @@ -1073,7 +1238,7 @@ public void testLongClusterStateUpdateLogging() throws Exception { "test2", MasterService.class.getCanonicalName(), Level.WARN, - "*took [*], which is over [10s], to compute cluster state update for [test2]" + "*took [*], which is over [10s], to compute cluster state update for [Tasks batched with key: org.opensearch.cluster.service.MasterServiceTests and count: 1]" ) ); mockAppender.addExpectation( @@ -1081,7 +1246,7 @@ public void testLongClusterStateUpdateLogging() throws Exception { "test3", MasterService.class.getCanonicalName(), Level.WARN, - "*took [*], which is over [10s], to compute cluster state update for [test3]" + "*took [*], which is over [10s], to compute cluster state update for [Tasks batched with key: org.opensearch.cluster.service.MasterServiceTests and count: 1]" ) ); mockAppender.addExpectation( @@ -1089,7 +1254,7 @@ public void testLongClusterStateUpdateLogging() throws Exception { "test4", MasterService.class.getCanonicalName(), Level.WARN, - "*took [*], which is over [10s], to compute cluster state update for [test4]" + "*took [*], which is over [10s], to compute cluster state update for [Tasks batched with key: org.opensearch.cluster.service.MasterServiceTests and count: 1]" ) ); mockAppender.addExpectation( @@ -1100,14 +1265,6 @@ public void testLongClusterStateUpdateLogging() throws Exception { "*took*test5*" ) ); - mockAppender.addExpectation( - new MockLogAppender.SeenEventExpectation( - "test6 should log due to slow and failing publication", - MasterService.class.getCanonicalName(), - Level.WARN, - "took [*] and then failed to publish updated cluster state (version: *, uuid: *) for [test6]:*" - ) - ); try ( ClusterManagerService clusterManagerService = new ClusterManagerService( @@ -1139,19 +1296,13 @@ public void testLongClusterStateUpdateLogging() throws Exception { Settings.EMPTY ).millis() + randomLongBetween(1, 1000000); } - if (event.source().contains("test6")) { - timeDiffInMillis += ClusterManagerService.CLUSTER_MANAGER_SERVICE_SLOW_TASK_LOGGING_THRESHOLD_SETTING.get( - Settings.EMPTY - ).millis() + randomLongBetween(1, 1000000); - throw new OpenSearchException("simulated error during slow publication which should trigger logging"); - } clusterStateRef.set(event.state()); publishListener.onResponse(null); }); clusterManagerService.setClusterStateSupplier(clusterStateRef::get); clusterManagerService.start(); - final CountDownLatch latch = new CountDownLatch(6); + final CountDownLatch latch = new CountDownLatch(5); final CountDownLatch processedFirstTask = new CountDownLatch(1); clusterManagerService.submitStateUpdateTask("test1", new ClusterStateUpdateTask() { @Override @@ -1249,7 +1400,77 @@ public void onFailure(String source, Exception e) { fail(); } }); + // Additional update task to make sure all previous logging made it to the loggerName + // We don't check logging for this on since there is no guarantee that it will occur before our check clusterManagerService.submitStateUpdateTask("test6", new ClusterStateUpdateTask() { + @Override + public ClusterState execute(ClusterState currentState) { + return currentState; + } + + @Override + public void clusterStateProcessed(String source, ClusterState oldState, 
ClusterState newState) { + latch.countDown(); + } + + @Override + public void onFailure(String source, Exception e) { + fail(); + } + }); + latch.await(); + } + mockAppender.assertAllExpectationsMatched(); + } + } + + @TestLogging(value = "org.opensearch.cluster.service:WARN", reason = "to ensure that we log failed cluster state events on WARN level") + public void testLongClusterStateUpdateLoggingForFailedPublication() throws Exception { + try (MockLogAppender mockAppender = MockLogAppender.createForLoggers(LogManager.getLogger(MasterService.class))) { + mockAppender.addExpectation( + new MockLogAppender.SeenEventExpectation( + "test1 should log due to slow and failing publication", + MasterService.class.getCanonicalName(), + Level.WARN, + "took [*] and then failed to publish updated cluster state (version: *, uuid: *) for [Tasks batched with key: org.opensearch.cluster.service.MasterServiceTests and count: 1]:*" + ) + ); + + try ( + ClusterManagerService clusterManagerService = new ClusterManagerService( + Settings.builder() + .put(ClusterName.CLUSTER_NAME_SETTING.getKey(), MasterServiceTests.class.getSimpleName()) + .put(Node.NODE_NAME_SETTING.getKey(), "test_node") + .build(), + new ClusterSettings(Settings.EMPTY, ClusterSettings.BUILT_IN_CLUSTER_SETTINGS), + threadPool, + new ClusterManagerMetrics(NoopMetricsRegistry.INSTANCE) + ) + ) { + + final DiscoveryNode localNode = new DiscoveryNode( + "node1", + buildNewFakeTransportAddress(), + emptyMap(), + emptySet(), + Version.CURRENT + ); + final ClusterState initialClusterState = ClusterState.builder(new ClusterName(MasterServiceTests.class.getSimpleName())) + .nodes(DiscoveryNodes.builder().add(localNode).localNodeId(localNode.getId()).masterNodeId(localNode.getId())) + .blocks(ClusterBlocks.EMPTY_CLUSTER_BLOCK) + .build(); + final AtomicReference clusterStateRef = new AtomicReference<>(initialClusterState); + clusterManagerService.setClusterStatePublisher((event, publishListener, ackListener) -> { + timeDiffInMillis += ClusterManagerService.CLUSTER_MANAGER_SERVICE_SLOW_TASK_LOGGING_THRESHOLD_SETTING.get( + Settings.EMPTY + ).millis() + randomLongBetween(1, 1000000); + throw new OpenSearchException("simulated error during slow publication which should trigger logging"); + }); + clusterManagerService.setClusterStateSupplier(clusterStateRef::get); + clusterManagerService.start(); + + final CountDownLatch latch = new CountDownLatch(1); + clusterManagerService.submitStateUpdateTask("test1", new ClusterStateUpdateTask() { @Override public ClusterState execute(ClusterState currentState) { return ClusterState.builder(currentState).incrementVersion().build(); @@ -1262,12 +1483,12 @@ public void clusterStateProcessed(String source, ClusterState oldState, ClusterS @Override public void onFailure(String source, Exception e) { - fail(); // maybe we should notify here? 
+ fail(); } }); // Additional update task to make sure all previous logging made it to the loggerName // We don't check logging for this on since there is no guarantee that it will occur before our check - clusterManagerService.submitStateUpdateTask("test7", new ClusterStateUpdateTask() { + clusterManagerService.submitStateUpdateTask("test2", new ClusterStateUpdateTask() { @Override public ClusterState execute(ClusterState currentState) { return currentState; diff --git a/server/src/test/java/org/opensearch/cluster/service/TaskBatcherTests.java b/server/src/test/java/org/opensearch/cluster/service/TaskBatcherTests.java index b0916ce9236f7..0ebcb51b557ae 100644 --- a/server/src/test/java/org/opensearch/cluster/service/TaskBatcherTests.java +++ b/server/src/test/java/org/opensearch/cluster/service/TaskBatcherTests.java @@ -55,6 +55,7 @@ import java.util.concurrent.CountDownLatch; import java.util.concurrent.CyclicBarrier; import java.util.concurrent.Semaphore; +import java.util.function.Function; import java.util.stream.Collectors; import static org.hamcrest.Matchers.containsString; @@ -78,7 +79,7 @@ static class TestTaskBatcher extends TaskBatcher { } @Override - protected void run(Object batchingKey, List tasks, String tasksSummary) { + protected void run(Object batchingKey, List tasks, Function taskSummaryGenerator) { List updateTasks = (List) tasks; ((TestExecutor) batchingKey).execute(updateTasks.stream().map(t -> t.task).collect(Collectors.toList())); updateTasks.forEach(updateTask -> updateTask.listener.processed(updateTask.source)); From 45c5f8d3154db84e28a6dc201d9b31ff91288fde Mon Sep 17 00:00:00 2001 From: Daniel Widdis Date: Mon, 22 Jul 2024 09:57:00 -0700 Subject: [PATCH 092/167] Add SplitResponseProcessor to Search Pipelines (#14800) * Add SplitResponseProcessor for search pipelines Signed-off-by: Daniel Widdis * Register the split processor factory Signed-off-by: Daniel Widdis * Address code review comments Signed-off-by: Daniel Widdis * Avoid list copy by casting array Signed-off-by: Daniel Widdis --------- Signed-off-by: Daniel Widdis --- CHANGELOG.md | 1 + .../SearchPipelineCommonModulePlugin.java | 4 +- .../common/SplitResponseProcessor.java | 162 +++++++++++++ ...SearchPipelineCommonModulePluginTests.java | 2 +- .../common/SplitResponseProcessorTests.java | 213 ++++++++++++++++++ 5 files changed, 380 insertions(+), 2 deletions(-) create mode 100644 modules/search-pipeline-common/src/main/java/org/opensearch/search/pipeline/common/SplitResponseProcessor.java create mode 100644 modules/search-pipeline-common/src/test/java/org/opensearch/search/pipeline/common/SplitResponseProcessorTests.java diff --git a/CHANGELOG.md b/CHANGELOG.md index adbb69ff72a0e..e32b6de84a195 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -21,6 +21,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), - Add Plugin interface for loading application based configuration templates (([#14659](https://github.com/opensearch-project/OpenSearch/issues/14659))) - Refactor remote-routing-table service inline with remote state interfaces([#14668](https://github.com/opensearch-project/OpenSearch/pull/14668)) - Add prefix mode verification setting for repository verification (([#14790](https://github.com/opensearch-project/OpenSearch/pull/14790))) +- Add SplitResponseProcessor to Search Pipelines (([#14800](https://github.com/opensearch-project/OpenSearch/issues/14800))) - Optimize TransportNodesAction to not send DiscoveryNodes for NodeStats, NodesInfo 
and ClusterStats call ([14749](https://github.com/opensearch-project/OpenSearch/pull/14749))
 - Reduce logging in DEBUG for MasterService:run ([#14795](https://github.com/opensearch-project/OpenSearch/pull/14795))
 - Enabling term version check on local state for all ClusterManager Read Transport Actions ([#14273](https://github.com/opensearch-project/OpenSearch/pull/14273))

 ### Dependencies

diff --git a/modules/search-pipeline-common/src/main/java/org/opensearch/search/pipeline/common/SearchPipelineCommonModulePlugin.java b/modules/search-pipeline-common/src/main/java/org/opensearch/search/pipeline/common/SearchPipelineCommonModulePlugin.java
index 1574621a8200e..d05101da2817c 100644
--- a/modules/search-pipeline-common/src/main/java/org/opensearch/search/pipeline/common/SearchPipelineCommonModulePlugin.java
+++ b/modules/search-pipeline-common/src/main/java/org/opensearch/search/pipeline/common/SearchPipelineCommonModulePlugin.java
@@ -96,7 +96,9 @@ public Map> getResponseProces
                 TruncateHitsResponseProcessor.TYPE,
                 new TruncateHitsResponseProcessor.Factory(),
                 CollapseResponseProcessor.TYPE,
-                new CollapseResponseProcessor.Factory()
+                new CollapseResponseProcessor.Factory(),
+                SplitResponseProcessor.TYPE,
+                new SplitResponseProcessor.Factory()
             )
         );
     }

diff --git a/modules/search-pipeline-common/src/main/java/org/opensearch/search/pipeline/common/SplitResponseProcessor.java b/modules/search-pipeline-common/src/main/java/org/opensearch/search/pipeline/common/SplitResponseProcessor.java
new file mode 100644
index 0000000000000..0762f8f59b76e
--- /dev/null
+++ b/modules/search-pipeline-common/src/main/java/org/opensearch/search/pipeline/common/SplitResponseProcessor.java
@@ -0,0 +1,162 @@
+/*
+ * SPDX-License-Identifier: Apache-2.0
+ *
+ * The OpenSearch Contributors require contributions made to
+ * this file be licensed under the Apache-2.0 license or a
+ * compatible open source license.
+ */
+
+package org.opensearch.search.pipeline.common;
+
+import org.opensearch.action.search.SearchRequest;
+import org.opensearch.action.search.SearchResponse;
+import org.opensearch.common.collect.Tuple;
+import org.opensearch.common.document.DocumentField;
+import org.opensearch.common.xcontent.XContentHelper;
+import org.opensearch.core.common.bytes.BytesReference;
+import org.opensearch.core.xcontent.MediaType;
+import org.opensearch.core.xcontent.XContentBuilder;
+import org.opensearch.ingest.ConfigurationUtils;
+import org.opensearch.search.SearchHit;
+import org.opensearch.search.pipeline.AbstractProcessor;
+import org.opensearch.search.pipeline.Processor;
+import org.opensearch.search.pipeline.SearchResponseProcessor;
+
+import java.util.Arrays;
+import java.util.Map;
+import java.util.Objects;
+
+/**
+ * Processor that splits a string field into an array of substrings.
+ * Throws an exception if the specified field is not a string.
+ */
+public class SplitResponseProcessor extends AbstractProcessor implements SearchResponseProcessor {
+    /** Key to reference this processor type from a search pipeline. */
+    public static final String TYPE = "split";
+    /** Key defining the string field to be split. */
+    public static final String SPLIT_FIELD = "field";
+    /** Key defining the delimiter used to split the string. This can be a regular expression pattern. */
+    public static final String SEPARATOR = "separator";
+    /** Optional key for handling empty trailing fields. */
+    public static final String PRESERVE_TRAILING = "preserve_trailing";
+    /** Optional key to put the split values in a different field.
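+     * If no target field is supplied, the split values overwrite the original field.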
*/ + public static final String TARGET_FIELD = "target_field"; + + private final String splitField; + private final String separator; + private final boolean preserveTrailing; + private final String targetField; + + SplitResponseProcessor( + String tag, + String description, + boolean ignoreFailure, + String splitField, + String separator, + boolean preserveTrailing, + String targetField + ) { + super(tag, description, ignoreFailure); + this.splitField = Objects.requireNonNull(splitField); + this.separator = Objects.requireNonNull(separator); + this.preserveTrailing = preserveTrailing; + this.targetField = targetField == null ? splitField : targetField; + } + + /** + * Getter function for splitField + * @return sortField + */ + public String getSplitField() { + return splitField; + } + + /** + * Getter function for separator + * @return separator + */ + public String getSeparator() { + return separator; + } + + /** + * Getter function for preserveTrailing + * @return preserveTrailing; + */ + public boolean isPreserveTrailing() { + return preserveTrailing; + } + + /** + * Getter function for targetField + * @return targetField + */ + public String getTargetField() { + return targetField; + } + + @Override + public String getType() { + return TYPE; + } + + @Override + public SearchResponse processResponse(SearchRequest request, SearchResponse response) throws Exception { + SearchHit[] hits = response.getHits().getHits(); + for (SearchHit hit : hits) { + Map fields = hit.getFields(); + if (fields.containsKey(splitField)) { + DocumentField docField = hit.getFields().get(splitField); + if (docField == null) { + throw new IllegalArgumentException("field [" + splitField + "] is null, cannot split."); + } + Object val = docField.getValue(); + if (val == null || !String.class.isAssignableFrom(val.getClass())) { + throw new IllegalArgumentException("field [" + splitField + "] is not a string, cannot split"); + } + Object[] strings = ((String) val).split(separator, preserveTrailing ? -1 : 0); + hit.setDocumentField(targetField, new DocumentField(targetField, Arrays.asList(strings))); + } + if (hit.hasSource()) { + BytesReference sourceRef = hit.getSourceRef(); + Tuple> typeAndSourceMap = XContentHelper.convertToMap( + sourceRef, + false, + (MediaType) null + ); + + Map sourceAsMap = typeAndSourceMap.v2(); + if (sourceAsMap.containsKey(splitField)) { + Object val = sourceAsMap.get(splitField); + if (val instanceof String) { + Object[] strings = ((String) val).split(separator, preserveTrailing ? 
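+                    // String#split with limit -1 keeps trailing empty strings, while limit 0
+                    // drops them; this is the switch behind preserve_trailing here and in the
+                    // doc-field branch above.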
-1 : 0);
+                        sourceAsMap.put(targetField, Arrays.asList(strings));
+                    }
+                    XContentBuilder builder = XContentBuilder.builder(typeAndSourceMap.v1().xContent());
+                    builder.map(sourceAsMap);
+                    hit.sourceRef(BytesReference.bytes(builder));
+                }
+            }
+        }
+        return response;
+    }
+
+    static class Factory implements Processor.Factory {
+
+        @Override
+        public SplitResponseProcessor create(
+            Map> processorFactories,
+            String tag,
+            String description,
+            boolean ignoreFailure,
+            Map config,
+            PipelineContext pipelineContext
+        ) {
+            String splitField = ConfigurationUtils.readStringProperty(TYPE, tag, config, SPLIT_FIELD);
+            String separator = ConfigurationUtils.readStringProperty(TYPE, tag, config, SEPARATOR);
+            boolean preserveTrailing = ConfigurationUtils.readBooleanProperty(TYPE, tag, config, PRESERVE_TRAILING, false);
+            String targetField = ConfigurationUtils.readStringProperty(TYPE, tag, config, TARGET_FIELD, splitField);
+            return new SplitResponseProcessor(tag, description, ignoreFailure, splitField, separator, preserveTrailing, targetField);
+        }
+    }
+}

diff --git a/modules/search-pipeline-common/src/test/java/org/opensearch/search/pipeline/common/SearchPipelineCommonModulePluginTests.java b/modules/search-pipeline-common/src/test/java/org/opensearch/search/pipeline/common/SearchPipelineCommonModulePluginTests.java
index 519468ebe17ff..d4f9ae2490a10 100644
--- a/modules/search-pipeline-common/src/test/java/org/opensearch/search/pipeline/common/SearchPipelineCommonModulePluginTests.java
+++ b/modules/search-pipeline-common/src/test/java/org/opensearch/search/pipeline/common/SearchPipelineCommonModulePluginTests.java
@@ -82,7 +82,7 @@ public void testAllowlistNotSpecified() throws IOException {
         try (SearchPipelineCommonModulePlugin plugin = new SearchPipelineCommonModulePlugin()) {
             assertEquals(Set.of("oversample", "filter_query", "script"), plugin.getRequestProcessors(createParameters(settings)).keySet());
             assertEquals(
-                Set.of("rename_field", "truncate_hits", "collapse"),
+                Set.of("rename_field", "truncate_hits", "collapse", "split"),
                 plugin.getResponseProcessors(createParameters(settings)).keySet()
             );
             assertEquals(Set.of(), plugin.getSearchPhaseResultsProcessors(createParameters(settings)).keySet());

diff --git a/modules/search-pipeline-common/src/test/java/org/opensearch/search/pipeline/common/SplitResponseProcessorTests.java b/modules/search-pipeline-common/src/test/java/org/opensearch/search/pipeline/common/SplitResponseProcessorTests.java
new file mode 100644
index 0000000000000..fcbc8ccf43cff
--- /dev/null
+++ b/modules/search-pipeline-common/src/test/java/org/opensearch/search/pipeline/common/SplitResponseProcessorTests.java
@@ -0,0 +1,213 @@
+/*
+ * SPDX-License-Identifier: Apache-2.0
+ *
+ * The OpenSearch Contributors require contributions made to
+ * this file be licensed under the Apache-2.0 license or a
+ * compatible open source license.
+ */ + +package org.opensearch.search.pipeline.common; + +import org.apache.lucene.search.TotalHits; +import org.opensearch.OpenSearchParseException; +import org.opensearch.action.search.SearchRequest; +import org.opensearch.action.search.SearchResponse; +import org.opensearch.action.search.SearchResponseSections; +import org.opensearch.common.document.DocumentField; +import org.opensearch.core.common.bytes.BytesArray; +import org.opensearch.index.query.QueryBuilder; +import org.opensearch.index.query.TermQueryBuilder; +import org.opensearch.ingest.RandomDocumentPicks; +import org.opensearch.search.SearchHit; +import org.opensearch.search.SearchHits; +import org.opensearch.search.builder.SearchSourceBuilder; +import org.opensearch.test.OpenSearchTestCase; + +import java.util.Collections; +import java.util.HashMap; +import java.util.List; +import java.util.Map; + +public class SplitResponseProcessorTests extends OpenSearchTestCase { + + private static final String NO_TRAILING = "one,two,three"; + private static final String TRAILING = "alpha,beta,gamma,"; + private static final String REGEX_DELIM = "one1two2three"; + + private SearchRequest createDummyRequest() { + QueryBuilder query = new TermQueryBuilder("field", "value"); + SearchSourceBuilder source = new SearchSourceBuilder().query(query); + return new SearchRequest().source(source); + } + + private SearchResponse createTestResponse() { + SearchHit[] hits = new SearchHit[2]; + + // one response with source + Map csvMap = new HashMap<>(); + csvMap.put("csv", new DocumentField("csv", List.of(NO_TRAILING))); + hits[0] = new SearchHit(0, "doc 1", csvMap, Collections.emptyMap()); + hits[0].sourceRef(new BytesArray("{ \"csv\" : \"" + NO_TRAILING + "\" }")); + hits[0].score(1f); + + // one without source + csvMap = new HashMap<>(); + csvMap.put("csv", new DocumentField("csv", List.of(TRAILING))); + hits[1] = new SearchHit(1, "doc 2", csvMap, Collections.emptyMap()); + hits[1].score(2f); + + SearchHits searchHits = new SearchHits(hits, new TotalHits(2, TotalHits.Relation.EQUAL_TO), 2); + SearchResponseSections searchResponseSections = new SearchResponseSections(searchHits, null, null, false, false, null, 0); + return new SearchResponse(searchResponseSections, null, 1, 1, 0, 10, null, null); + } + + private SearchResponse createTestResponseRegex() { + SearchHit[] hits = new SearchHit[1]; + + Map dsvMap = new HashMap<>(); + dsvMap.put("dsv", new DocumentField("dsv", List.of(REGEX_DELIM))); + hits[0] = new SearchHit(0, "doc 1", dsvMap, Collections.emptyMap()); + hits[0].sourceRef(new BytesArray("{ \"dsv\" : \"" + REGEX_DELIM + "\" }")); + hits[0].score(1f); + + SearchHits searchHits = new SearchHits(hits, new TotalHits(1, TotalHits.Relation.EQUAL_TO), 1); + SearchResponseSections searchResponseSections = new SearchResponseSections(searchHits, null, null, false, false, null, 0); + return new SearchResponse(searchResponseSections, null, 1, 1, 0, 10, null, null); + } + + private SearchResponse createTestResponseNullField() { + SearchHit[] hits = new SearchHit[1]; + + Map map = new HashMap<>(); + map.put("csv", null); + hits[0] = new SearchHit(0, "doc 1", map, Collections.emptyMap()); + hits[0].sourceRef(new BytesArray("{ \"csv\" : null }")); + hits[0].score(1f); + + SearchHits searchHits = new SearchHits(hits, new TotalHits(1, TotalHits.Relation.EQUAL_TO), 1); + SearchResponseSections searchResponseSections = new SearchResponseSections(searchHits, null, null, false, false, null, 0); + return new SearchResponse(searchResponseSections, null, 1, 1, 0, 
10, null, null); + } + + private SearchResponse createTestResponseEmptyList() { + SearchHit[] hits = new SearchHit[1]; + + Map map = new HashMap<>(); + map.put("empty", new DocumentField("empty", List.of())); + hits[0] = new SearchHit(0, "doc 1", map, Collections.emptyMap()); + hits[0].sourceRef(new BytesArray("{ \"empty\" : [] }")); + hits[0].score(1f); + + SearchHits searchHits = new SearchHits(hits, new TotalHits(1, TotalHits.Relation.EQUAL_TO), 1); + SearchResponseSections searchResponseSections = new SearchResponseSections(searchHits, null, null, false, false, null, 0); + return new SearchResponse(searchResponseSections, null, 1, 1, 0, 10, null, null); + } + + private SearchResponse createTestResponseNotString() { + SearchHit[] hits = new SearchHit[1]; + + Map piMap = new HashMap<>(); + piMap.put("maps", new DocumentField("maps", List.of(Map.of("foo", "I'm the Map!")))); + hits[0] = new SearchHit(0, "doc 1", piMap, Collections.emptyMap()); + hits[0].sourceRef(new BytesArray("{ \"maps\" : [{ \"foo\" : \"I'm the Map!\"}]] }")); + hits[0].score(1f); + + SearchHits searchHits = new SearchHits(hits, new TotalHits(1, TotalHits.Relation.EQUAL_TO), 1); + SearchResponseSections searchResponseSections = new SearchResponseSections(searchHits, null, null, false, false, null, 0); + return new SearchResponse(searchResponseSections, null, 1, 1, 0, 10, null, null); + } + + public void testSplitResponse() throws Exception { + SearchRequest request = createDummyRequest(); + + SplitResponseProcessor splitResponseProcessor = new SplitResponseProcessor(null, null, false, "csv", ",", false, "split"); + SearchResponse response = createTestResponse(); + SearchResponse splitResponse = splitResponseProcessor.processResponse(request, response); + + assertEquals(response.getHits(), splitResponse.getHits()); + + assertEquals(NO_TRAILING, splitResponse.getHits().getHits()[0].field("csv").getValue()); + assertEquals(List.of("one", "two", "three"), splitResponse.getHits().getHits()[0].field("split").getValues()); + Map map = splitResponse.getHits().getHits()[0].getSourceAsMap(); + assertNotNull(map); + assertEquals(List.of("one", "two", "three"), map.get("split")); + + assertEquals(TRAILING, splitResponse.getHits().getHits()[1].field("csv").getValue()); + assertEquals(List.of("alpha", "beta", "gamma"), splitResponse.getHits().getHits()[1].field("split").getValues()); + assertNull(splitResponse.getHits().getHits()[1].getSourceAsMap()); + } + + public void testSplitResponseRegex() throws Exception { + SearchRequest request = createDummyRequest(); + + SplitResponseProcessor splitResponseProcessor = new SplitResponseProcessor(null, null, false, "dsv", "\\d", false, "split"); + SearchResponse response = createTestResponseRegex(); + SearchResponse splitResponse = splitResponseProcessor.processResponse(request, response); + + assertEquals(response.getHits(), splitResponse.getHits()); + + assertEquals(REGEX_DELIM, splitResponse.getHits().getHits()[0].field("dsv").getValue()); + assertEquals(List.of("one", "two", "three"), splitResponse.getHits().getHits()[0].field("split").getValues()); + Map map = splitResponse.getHits().getHits()[0].getSourceAsMap(); + assertNotNull(map); + assertEquals(List.of("one", "two", "three"), map.get("split")); + } + + public void testSplitResponseSameField() throws Exception { + SearchRequest request = createDummyRequest(); + + SplitResponseProcessor splitResponseProcessor = new SplitResponseProcessor(null, null, false, "csv", ",", true, null); + SearchResponse response = createTestResponse(); + 
SearchResponse splitResponse = splitResponseProcessor.processResponse(request, response); + + assertEquals(response.getHits(), splitResponse.getHits()); + assertEquals(List.of("one", "two", "three"), splitResponse.getHits().getHits()[0].field("csv").getValues()); + assertEquals(List.of("alpha", "beta", "gamma", ""), splitResponse.getHits().getHits()[1].field("csv").getValues()); + } + + public void testSplitResponseEmptyList() { + SearchRequest request = createDummyRequest(); + + SplitResponseProcessor splitResponseProcessor = new SplitResponseProcessor(null, null, false, "empty", ",", false, null); + assertThrows(IllegalArgumentException.class, () -> splitResponseProcessor.processResponse(request, createTestResponseEmptyList())); + } + + public void testNullField() { + SearchRequest request = createDummyRequest(); + + SplitResponseProcessor splitResponseProcessor = new SplitResponseProcessor(null, null, false, "csv", ",", false, null); + + assertThrows(IllegalArgumentException.class, () -> splitResponseProcessor.processResponse(request, createTestResponseNullField())); + } + + public void testNotStringField() { + SearchRequest request = createDummyRequest(); + + SplitResponseProcessor splitResponseProcessor = new SplitResponseProcessor(null, null, false, "maps", ",", false, null); + + assertThrows(IllegalArgumentException.class, () -> splitResponseProcessor.processResponse(request, createTestResponseNotString())); + } + + public void testFactory() { + String splitField = RandomDocumentPicks.randomFieldName(random()); + String targetField = RandomDocumentPicks.randomFieldName(random()); + Map config = new HashMap<>(); + config.put("field", splitField); + config.put("separator", ","); + config.put("preserve_trailing", true); + config.put("target_field", targetField); + + SplitResponseProcessor.Factory factory = new SplitResponseProcessor.Factory(); + SplitResponseProcessor processor = factory.create(Collections.emptyMap(), null, null, false, config, null); + assertEquals("split", processor.getType()); + assertEquals(splitField, processor.getSplitField()); + assertEquals(",", processor.getSeparator()); + assertTrue(processor.isPreserveTrailing()); + assertEquals(targetField, processor.getTargetField()); + + expectThrows( + OpenSearchParseException.class, + () -> factory.create(Collections.emptyMap(), null, null, false, Collections.emptyMap(), null) + ); + } +} From b9d5804a7868feb1214caf8a7c02bcd7117a3b3a Mon Sep 17 00:00:00 2001 From: shailendra0811 <167273922+shailendra0811@users.noreply.github.com> Date: Mon, 22 Jul 2024 23:56:53 +0530 Subject: [PATCH 093/167] Add integration tests for RemoteRoutingTable Service. 
(#14631) Signed-off-by: Shailendra Singh --- .../RemoteClusterStateCleanupManagerIT.java | 135 ++++++++ .../remote/RemoteRoutingTableServiceIT.java | 297 ++++++++++++++++++ .../RemoteStoreBaseIntegTestCase.java | 17 + .../coordination/PersistedStateStats.java | 4 + .../PersistedStateStatsTests.java | 62 ++++ .../test/OpenSearchIntegTestCase.java | 121 +++++++ 6 files changed, 636 insertions(+) create mode 100644 server/src/internalClusterTest/java/org/opensearch/gateway/remote/RemoteRoutingTableServiceIT.java create mode 100644 server/src/test/java/org/opensearch/cluster/coordination/PersistedStateStatsTests.java diff --git a/server/src/internalClusterTest/java/org/opensearch/gateway/remote/RemoteClusterStateCleanupManagerIT.java b/server/src/internalClusterTest/java/org/opensearch/gateway/remote/RemoteClusterStateCleanupManagerIT.java index 5074971ab1a1f..7d2e24c777da3 100644 --- a/server/src/internalClusterTest/java/org/opensearch/gateway/remote/RemoteClusterStateCleanupManagerIT.java +++ b/server/src/internalClusterTest/java/org/opensearch/gateway/remote/RemoteClusterStateCleanupManagerIT.java @@ -8,9 +8,17 @@ package org.opensearch.gateway.remote; +import org.opensearch.action.admin.cluster.node.stats.NodesStatsRequest; +import org.opensearch.action.admin.cluster.node.stats.NodesStatsResponse; import org.opensearch.action.admin.cluster.settings.ClusterUpdateSettingsResponse; +import org.opensearch.cluster.coordination.PersistedStateStats; +import org.opensearch.cluster.routing.IndexRoutingTable; import org.opensearch.common.blobstore.BlobPath; import org.opensearch.common.settings.Settings; +import org.opensearch.discovery.DiscoveryStats; +import org.opensearch.gateway.remote.model.RemoteRoutingTableBlobStore; +import org.opensearch.index.remote.RemoteStoreEnums; +import org.opensearch.index.remote.RemoteStorePathStrategy; import org.opensearch.remotestore.RemoteStoreBaseIntegTestCase; import org.opensearch.repositories.RepositoriesService; import org.opensearch.repositories.blobstore.BlobStoreRepository; @@ -18,21 +26,29 @@ import org.junit.Before; import java.nio.charset.StandardCharsets; +import java.nio.file.Path; +import java.util.ArrayList; import java.util.Base64; +import java.util.List; import java.util.Map; import java.util.concurrent.TimeUnit; +import java.util.concurrent.atomic.AtomicLong; +import static org.opensearch.common.util.FeatureFlags.REMOTE_PUBLICATION_EXPERIMENTAL; import static org.opensearch.gateway.remote.RemoteClusterStateCleanupManager.CLUSTER_STATE_CLEANUP_INTERVAL_DEFAULT; import static org.opensearch.gateway.remote.RemoteClusterStateCleanupManager.REMOTE_CLUSTER_STATE_CLEANUP_INTERVAL_SETTING; import static org.opensearch.gateway.remote.RemoteClusterStateCleanupManager.RETAINED_MANIFESTS; import static org.opensearch.gateway.remote.RemoteClusterStateCleanupManager.SKIP_CLEANUP_STATE_CHANGES; import static org.opensearch.gateway.remote.RemoteClusterStateService.REMOTE_CLUSTER_STATE_ENABLED_SETTING; +import static org.opensearch.gateway.remote.routingtable.RemoteIndexRoutingTable.INDEX_ROUTING_TABLE; import static org.opensearch.indices.IndicesService.CLUSTER_DEFAULT_INDEX_REFRESH_INTERVAL_SETTING; +import static org.opensearch.node.remotestore.RemoteStoreNodeAttribute.REMOTE_STORE_ROUTING_TABLE_REPOSITORY_NAME_ATTRIBUTE_KEY; @OpenSearchIntegTestCase.ClusterScope(scope = OpenSearchIntegTestCase.Scope.TEST, numDataNodes = 0) public class RemoteClusterStateCleanupManagerIT extends RemoteStoreBaseIntegTestCase { private static final String INDEX_NAME = 
"test-index"; + private final RemoteStoreEnums.PathType pathType = RemoteStoreEnums.PathType.HASHED_PREFIX; @Before public void setup() { @@ -52,6 +68,11 @@ private Map initialTestSetup(int shardCount, int replicaCount, int return indexStats; } + private void initialTestSetup(int shardCount, int replicaCount, int dataNodeCount, int clusterManagerNodeCount, Settings settings) { + prepareCluster(clusterManagerNodeCount, dataNodeCount, INDEX_NAME, replicaCount, shardCount, settings); + ensureGreen(INDEX_NAME); + } + public void testRemoteCleanupTaskUpdated() { int shardCount = randomIntBetween(1, 2); int replicaCount = 1; @@ -144,6 +165,102 @@ public void testRemoteCleanupDeleteStale() throws Exception { assertTrue(response.isAcknowledged()); } + public void testRemoteCleanupDeleteStaleIndexRoutingFiles() throws Exception { + clusterSettingsSuppliedByTest = true; + Path segmentRepoPath = randomRepoPath(); + Path translogRepoPath = randomRepoPath(); + Path remoteRoutingTableRepoPath = randomRepoPath(); + Settings.Builder settingsBuilder = Settings.builder(); + settingsBuilder.put( + buildRemoteStoreNodeAttributes( + REPOSITORY_NAME, + segmentRepoPath, + REPOSITORY_2_NAME, + translogRepoPath, + REMOTE_ROUTING_TABLE_REPO, + remoteRoutingTableRepoPath, + false + ) + ); + settingsBuilder.put( + RemoteRoutingTableBlobStore.REMOTE_ROUTING_TABLE_PATH_TYPE_SETTING.getKey(), + RemoteStoreEnums.PathType.HASHED_PREFIX.toString() + ) + .put("node.attr." + REMOTE_STORE_ROUTING_TABLE_REPOSITORY_NAME_ATTRIBUTE_KEY, REMOTE_ROUTING_TABLE_REPO) + .put(REMOTE_PUBLICATION_EXPERIMENTAL, true); + + int shardCount = randomIntBetween(1, 2); + int replicaCount = 1; + int dataNodeCount = shardCount * (replicaCount + 1); + int clusterManagerNodeCount = 1; + initialTestSetup(shardCount, replicaCount, dataNodeCount, clusterManagerNodeCount, settingsBuilder.build()); + + // update cluster state 21 times to ensure that clean up has run after this will upload 42 manifest files + // to repository, if manifest files are less than that it means clean up has run + updateClusterStateNTimes(RETAINED_MANIFESTS + SKIP_CLEANUP_STATE_CHANGES + 1); + + RepositoriesService repositoriesService = internalCluster().getClusterManagerNodeInstance(RepositoriesService.class); + BlobStoreRepository repository = (BlobStoreRepository) repositoriesService.repository(REPOSITORY_NAME); + BlobPath baseMetadataPath = getBaseMetadataPath(repository); + + BlobStoreRepository routingTableRepository = (BlobStoreRepository) repositoriesService.repository(REMOTE_ROUTING_TABLE_REPO); + List indexRoutingTables = new ArrayList<>(getClusterState().routingTable().indicesRouting().values()); + BlobPath indexRoutingPath = getIndexRoutingPath(baseMetadataPath, indexRoutingTables.get(0).getIndex().getUUID()); + assertBusy(() -> { + // There would be >=3 files as shards will transition from UNASSIGNED -> INIT -> STARTED state + assertTrue(routingTableRepository.blobStore().blobContainer(indexRoutingPath).listBlobs().size() >= 3); + }); + + RemoteClusterStateCleanupManager remoteClusterStateCleanupManager = internalCluster().getClusterManagerNodeInstance( + RemoteClusterStateCleanupManager.class + ); + + // set cleanup interval to 100 ms to make the test faster + ClusterUpdateSettingsResponse response = client().admin() + .cluster() + .prepareUpdateSettings() + .setPersistentSettings(Settings.builder().put(REMOTE_CLUSTER_STATE_CLEANUP_INTERVAL_SETTING.getKey(), "100ms")) + .get(); + + assertTrue(response.isAcknowledged()); + assertBusy(() -> assertEquals(100, 
remoteClusterStateCleanupManager.getStaleFileDeletionTask().getInterval().getMillis())); + + String clusterManagerNode = internalCluster().getClusterManagerName(); + NodesStatsResponse nodesStatsResponse = client().admin() + .cluster() + .prepareNodesStats(clusterManagerNode) + .addMetric(NodesStatsRequest.Metric.DISCOVERY.metricName()) + .get(); + verifyIndexRoutingFilesDeletion(routingTableRepository, indexRoutingPath, nodesStatsResponse); + + // disable the clean up to avoid race condition during shutdown + response = client().admin() + .cluster() + .prepareUpdateSettings() + .setPersistentSettings(Settings.builder().put(REMOTE_CLUSTER_STATE_CLEANUP_INTERVAL_SETTING.getKey(), "-1")) + .get(); + assertTrue(response.isAcknowledged()); + } + + private void verifyIndexRoutingFilesDeletion( + BlobStoreRepository routingTableRepository, + BlobPath indexRoutingPath, + NodesStatsResponse nodesStatsResponse + ) throws Exception { + assertBusy(() -> { assertEquals(1, routingTableRepository.blobStore().blobContainer(indexRoutingPath).listBlobs().size()); }); + + // Verify index routing files delete stats + DiscoveryStats discoveryStats = nodesStatsResponse.getNodes().get(0).getDiscoveryStats(); + assertNotNull(discoveryStats.getClusterStateStats()); + for (PersistedStateStats persistedStateStats : discoveryStats.getClusterStateStats().getPersistenceStats()) { + Map extendedFields = persistedStateStats.getExtendedFields(); + assertTrue(extendedFields.containsKey(RemotePersistenceStats.INDEX_ROUTING_FILES_CLEANUP_ATTEMPT_FAILED_COUNT)); + long cleanupAttemptFailedCount = extendedFields.get(RemotePersistenceStats.INDEX_ROUTING_FILES_CLEANUP_ATTEMPT_FAILED_COUNT) + .get(); + assertEquals(0, cleanupAttemptFailedCount); + } + } + private void updateClusterStateNTimes(int n) { int newReplicaCount = randomIntBetween(0, 3); for (int i = n; i > 0; i--) { @@ -155,4 +272,22 @@ private void updateClusterStateNTimes(int n) { assertTrue(response.isAcknowledged()); } } + + private BlobPath getBaseMetadataPath(BlobStoreRepository repository) { + return repository.basePath() + .add( + Base64.getUrlEncoder() + .withoutPadding() + .encodeToString(getClusterState().getClusterName().value().getBytes(StandardCharsets.UTF_8)) + ) + .add("cluster-state") + .add(getClusterState().metadata().clusterUUID()); + } + + private BlobPath getIndexRoutingPath(BlobPath baseMetadataPath, String indexUUID) { + return pathType.path( + RemoteStorePathStrategy.PathInput.builder().basePath(baseMetadataPath.add(INDEX_ROUTING_TABLE)).indexUUID(indexUUID).build(), + RemoteStoreEnums.PathHashAlgorithm.FNV_1A_BASE64 + ); + } } diff --git a/server/src/internalClusterTest/java/org/opensearch/gateway/remote/RemoteRoutingTableServiceIT.java b/server/src/internalClusterTest/java/org/opensearch/gateway/remote/RemoteRoutingTableServiceIT.java new file mode 100644 index 0000000000000..53764c0b4d0e8 --- /dev/null +++ b/server/src/internalClusterTest/java/org/opensearch/gateway/remote/RemoteRoutingTableServiceIT.java @@ -0,0 +1,297 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. 
+ */ + +package org.opensearch.gateway.remote; + +import org.opensearch.action.admin.cluster.state.ClusterStateRequest; +import org.opensearch.cluster.metadata.IndexMetadata; +import org.opensearch.cluster.routing.IndexRoutingTable; +import org.opensearch.cluster.routing.RoutingTable; +import org.opensearch.common.blobstore.BlobPath; +import org.opensearch.common.settings.Settings; +import org.opensearch.gateway.remote.model.RemoteRoutingTableBlobStore; +import org.opensearch.index.remote.RemoteStoreEnums; +import org.opensearch.index.remote.RemoteStorePathStrategy; +import org.opensearch.remotestore.RemoteStoreBaseIntegTestCase; +import org.opensearch.repositories.RepositoriesService; +import org.opensearch.repositories.blobstore.BlobStoreRepository; +import org.opensearch.test.OpenSearchIntegTestCase; +import org.junit.Before; + +import java.nio.charset.StandardCharsets; +import java.nio.file.Path; +import java.util.ArrayList; +import java.util.Base64; +import java.util.List; +import java.util.Map; +import java.util.Optional; +import java.util.Set; +import java.util.concurrent.ExecutionException; +import java.util.concurrent.atomic.AtomicInteger; + +import static org.opensearch.common.util.FeatureFlags.REMOTE_PUBLICATION_EXPERIMENTAL; +import static org.opensearch.gateway.remote.RemoteClusterStateService.REMOTE_CLUSTER_STATE_ENABLED_SETTING; +import static org.opensearch.gateway.remote.routingtable.RemoteIndexRoutingTable.INDEX_ROUTING_TABLE; +import static org.opensearch.node.remotestore.RemoteStoreNodeAttribute.REMOTE_STORE_ROUTING_TABLE_REPOSITORY_NAME_ATTRIBUTE_KEY; + +@OpenSearchIntegTestCase.ClusterScope(scope = OpenSearchIntegTestCase.Scope.TEST, numDataNodes = 0) +public class RemoteRoutingTableServiceIT extends RemoteStoreBaseIntegTestCase { + private static final String INDEX_NAME = "test-index"; + BlobPath indexRoutingPath; + AtomicInteger indexRoutingFiles = new AtomicInteger(); + private final RemoteStoreEnums.PathType pathType = RemoteStoreEnums.PathType.HASHED_PREFIX; + + @Before + public void setup() { + asyncUploadMockFsRepo = false; + } + + @Override + protected Settings nodeSettings(int nodeOrdinal) { + return Settings.builder() + .put(super.nodeSettings(nodeOrdinal)) + .put(REMOTE_CLUSTER_STATE_ENABLED_SETTING.getKey(), true) + .put( + RemoteRoutingTableBlobStore.REMOTE_ROUTING_TABLE_PATH_TYPE_SETTING.getKey(), + RemoteStoreEnums.PathType.HASHED_PREFIX.toString() + ) + .put("node.attr." 
+ REMOTE_STORE_ROUTING_TABLE_REPOSITORY_NAME_ATTRIBUTE_KEY, REMOTE_ROUTING_TABLE_REPO) + .put(REMOTE_PUBLICATION_EXPERIMENTAL, true) + .build(); + } + + public void testRemoteRoutingTableIndexLifecycle() throws Exception { + BlobStoreRepository repository = prepareClusterAndVerifyRepository(); + + RemoteClusterStateService remoteClusterStateService = internalCluster().getClusterManagerNodeInstance( + RemoteClusterStateService.class + ); + RemoteManifestManager remoteManifestManager = remoteClusterStateService.getRemoteManifestManager(); + verifyUpdatesInManifestFile(remoteManifestManager); + + List routingTableVersions = getRoutingTableFromAllNodes(); + assertTrue(areRoutingTablesSame(routingTableVersions)); + + // Update index settings + updateIndexSettings(INDEX_NAME, IndexMetadata.SETTING_NUMBER_OF_REPLICAS, 2); + ensureGreen(INDEX_NAME); + assertBusy(() -> { + int indexRoutingFilesAfterUpdate = repository.blobStore().blobContainer(indexRoutingPath).listBlobs().size(); + // At-least 3 new index routing files will be created as shards will transition from INIT -> UNASSIGNED -> STARTED state + assertTrue(indexRoutingFilesAfterUpdate >= indexRoutingFiles.get() + 3); + }); + + verifyUpdatesInManifestFile(remoteManifestManager); + + routingTableVersions = getRoutingTableFromAllNodes(); + assertTrue(areRoutingTablesSame(routingTableVersions)); + + // Delete the index and assert its deletion + deleteIndexAndVerify(remoteManifestManager); + + routingTableVersions = getRoutingTableFromAllNodes(); + assertTrue(areRoutingTablesSame(routingTableVersions)); + } + + public void testRemoteRoutingTableIndexNodeRestart() throws Exception { + BlobStoreRepository repository = prepareClusterAndVerifyRepository(); + + List routingTableVersions = getRoutingTableFromAllNodes(); + assertTrue(areRoutingTablesSame(routingTableVersions)); + + // Ensure node comes healthy after restart + Set dataNodes = internalCluster().getDataNodeNames(); + internalCluster().restartNode(randomFrom(dataNodes)); + ensureGreen(); + ensureGreen(INDEX_NAME); + + // ensure restarted node joins and the cluster is stable + assertEquals(3, internalCluster().clusterService().state().nodes().getDataNodes().size()); + ensureStableCluster(4); + assertRemoteStoreRepositoryOnAllNodes(REMOTE_ROUTING_TABLE_REPO); + + assertBusy(() -> { + int indexRoutingFilesAfterNodeDrop = repository.blobStore().blobContainer(indexRoutingPath).listBlobs().size(); + assertTrue(indexRoutingFilesAfterNodeDrop > indexRoutingFiles.get()); + }); + + RemoteClusterStateService remoteClusterStateService = internalCluster().getClusterManagerNodeInstance( + RemoteClusterStateService.class + ); + RemoteManifestManager remoteManifestManager = remoteClusterStateService.getRemoteManifestManager(); + verifyUpdatesInManifestFile(remoteManifestManager); + } + + public void testRemoteRoutingTableIndexMasterRestart1() throws Exception { + BlobStoreRepository repository = prepareClusterAndVerifyRepository(); + + List routingTableVersions = getRoutingTableFromAllNodes(); + assertTrue(areRoutingTablesSame(routingTableVersions)); + + // Ensure node comes healthy after restart + String clusterManagerName = internalCluster().getClusterManagerName(); + internalCluster().restartNode(clusterManagerName); + ensureGreen(); + ensureGreen(INDEX_NAME); + + // ensure master is elected and the cluster is stable + assertNotNull(internalCluster().clusterService().state().nodes().getClusterManagerNode()); + ensureStableCluster(4); + assertRemoteStoreRepositoryOnAllNodes(REMOTE_ROUTING_TABLE_REPO); + 
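+        // the re-elected cluster-manager should resume routing table uploads, so the blob count is expected to grow past the pre-restart value captured in indexRoutingFiles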
+ assertBusy(() -> { + int indexRoutingFilesAfterNodeDrop = repository.blobStore().blobContainer(indexRoutingPath).listBlobs().size(); + assertTrue(indexRoutingFilesAfterNodeDrop > indexRoutingFiles.get()); + }); + + RemoteClusterStateService remoteClusterStateService = internalCluster().getClusterManagerNodeInstance( + RemoteClusterStateService.class + ); + RemoteManifestManager remoteManifestManager = remoteClusterStateService.getRemoteManifestManager(); + verifyUpdatesInManifestFile(remoteManifestManager); + } + + private BlobStoreRepository prepareClusterAndVerifyRepository() throws Exception { + clusterSettingsSuppliedByTest = true; + Path segmentRepoPath = randomRepoPath(); + Path translogRepoPath = randomRepoPath(); + Path remoteRoutingTableRepoPath = randomRepoPath(); + Settings settings = buildRemoteStoreNodeAttributes( + REPOSITORY_NAME, + segmentRepoPath, + REPOSITORY_2_NAME, + translogRepoPath, + REMOTE_ROUTING_TABLE_REPO, + remoteRoutingTableRepoPath, + false + ); + prepareCluster(1, 3, INDEX_NAME, 1, 5, settings); + ensureGreen(INDEX_NAME); + + RepositoriesService repositoriesService = internalCluster().getClusterManagerNodeInstance(RepositoriesService.class); + BlobStoreRepository repository = (BlobStoreRepository) repositoriesService.repository(REMOTE_ROUTING_TABLE_REPO); + + BlobPath baseMetadataPath = getBaseMetadataPath(repository); + List indexRoutingTables = new ArrayList<>(getClusterState().routingTable().indicesRouting().values()); + indexRoutingPath = getIndexRoutingPath(baseMetadataPath.add(INDEX_ROUTING_TABLE), indexRoutingTables.get(0).getIndex().getUUID()); + + assertBusy(() -> { + indexRoutingFiles.set(repository.blobStore().blobContainer(indexRoutingPath).listBlobs().size()); + // There would be >=3 files as shards will transition from UNASSIGNED -> INIT -> STARTED state + assertTrue(indexRoutingFiles.get() >= 3); + }); + assertRemoteStoreRepositoryOnAllNodes(REMOTE_ROUTING_TABLE_REPO); + return repository; + } + + private BlobPath getBaseMetadataPath(BlobStoreRepository repository) { + return repository.basePath() + .add( + Base64.getUrlEncoder() + .withoutPadding() + .encodeToString(getClusterState().getClusterName().value().getBytes(StandardCharsets.UTF_8)) + ) + .add("cluster-state") + .add(getClusterState().metadata().clusterUUID()); + } + + private BlobPath getIndexRoutingPath(BlobPath indexRoutingPath, String indexUUID) { + RemoteStoreEnums.PathHashAlgorithm pathHashAlgo = RemoteStoreEnums.PathHashAlgorithm.FNV_1A_BASE64; + return pathType.path( + RemoteStorePathStrategy.PathInput.builder().basePath(indexRoutingPath).indexUUID(indexUUID).build(), + pathHashAlgo + ); + } + + private void verifyUpdatesInManifestFile(RemoteManifestManager remoteManifestManager) { + Optional latestManifest = remoteManifestManager.getLatestClusterMetadataManifest( + getClusterState().getClusterName().value(), + getClusterState().getMetadata().clusterUUID() + ); + assertTrue(latestManifest.isPresent()); + ClusterMetadataManifest manifest = latestManifest.get(); + assertTrue(manifest.getDiffManifest().getIndicesRoutingUpdated().contains(INDEX_NAME)); + assertTrue(manifest.getDiffManifest().getIndicesDeleted().isEmpty()); + assertFalse(manifest.getIndicesRouting().isEmpty()); + assertEquals(1, manifest.getIndicesRouting().size()); + assertTrue(manifest.getIndicesRouting().get(0).getUploadedFilename().contains(indexRoutingPath.buildAsString())); + } + + private List getRoutingTableFromAllNodes() throws ExecutionException, InterruptedException { + String[] allNodes = 
internalCluster().getNodeNames();
+        List<RoutingTable> routingTables = new ArrayList<>();
+        for (String node : allNodes) {
+            RoutingTable routingTable = internalCluster().client(node)
+                .admin()
+                .cluster()
+                .state(new ClusterStateRequest().local(true))
+                .get()
+                .getState()
+                .routingTable();
+            routingTables.add(routingTable);
+        }
+        return routingTables;
+    }
+
+    private boolean areRoutingTablesSame(List<RoutingTable> routingTables) {
+        if (routingTables == null || routingTables.isEmpty()) {
+            return false;
+        }
+
+        RoutingTable firstRoutingTable = routingTables.get(0);
+        for (RoutingTable routingTable : routingTables) {
+            if (!compareRoutingTables(firstRoutingTable, routingTable)) {
+                logger.info("Responses are not the same: {} {}", firstRoutingTable, routingTable);
+                return false;
+            }
+        }
+        return true;
+    }
+
+    private boolean compareRoutingTables(RoutingTable a, RoutingTable b) {
+        if (a == b) return true;
+        if (b == null || a.getClass() != b.getClass()) return false;
+        if (a.version() != b.version()) return false;
+        if (a.indicesRouting().size() != b.indicesRouting().size()) return false;
+
+        for (Map.Entry<String, IndexRoutingTable> entry : a.indicesRouting().entrySet()) {
+            IndexRoutingTable thisIndexRoutingTable = entry.getValue();
+            IndexRoutingTable thatIndexRoutingTable = b.indicesRouting().get(entry.getKey());
+            if (!thisIndexRoutingTable.equals(thatIndexRoutingTable)) {
+                return false;
+            }
+        }
+        return true;
+    }
+
+    private void updateIndexSettings(String indexName, String settingKey, int settingValue) {
+        client().admin()
+            .indices()
+            .prepareUpdateSettings(indexName)
+            .setSettings(Settings.builder().put(settingKey, settingValue))
+            .execute()
+            .actionGet();
+    }
+
+    private void deleteIndexAndVerify(RemoteManifestManager remoteManifestManager) {
+        client().admin().indices().prepareDelete(INDEX_NAME).execute().actionGet();
+        assertFalse(client().admin().indices().prepareExists(INDEX_NAME).get().isExists());
+
+        // Verify index is marked deleted in manifest
+        Optional<ClusterMetadataManifest> latestManifest = remoteManifestManager.getLatestClusterMetadataManifest(
+            getClusterState().getClusterName().value(),
+            getClusterState().getMetadata().clusterUUID()
+        );
+        assertTrue(latestManifest.isPresent());
+        ClusterMetadataManifest manifest = latestManifest.get();
+        assertTrue(manifest.getDiffManifest().getIndicesRoutingUpdated().isEmpty());
+        assertTrue(manifest.getDiffManifest().getIndicesDeleted().contains(INDEX_NAME));
+        assertTrue(manifest.getIndicesRouting().isEmpty());
+    }
+
+}
diff --git a/server/src/internalClusterTest/java/org/opensearch/remotestore/RemoteStoreBaseIntegTestCase.java b/server/src/internalClusterTest/java/org/opensearch/remotestore/RemoteStoreBaseIntegTestCase.java
index 64efcee6ef1b5..63a9451a27a12 100644
--- a/server/src/internalClusterTest/java/org/opensearch/remotestore/RemoteStoreBaseIntegTestCase.java
+++ b/server/src/internalClusterTest/java/org/opensearch/remotestore/RemoteStoreBaseIntegTestCase.java
@@ -69,6 +69,7 @@ public class RemoteStoreBaseIntegTestCase extends OpenSearchIntegTestCase {
     protected static final String REPOSITORY_NAME = "test-remote-store-repo";
     protected static final String REPOSITORY_2_NAME = "test-remote-store-repo-2";
+    protected static final String REMOTE_ROUTING_TABLE_REPO = "remote-routing-table-repo";
     protected static final int SHARD_COUNT = 1;
     protected static int REPLICA_COUNT = 1;
     protected static final String TOTAL_OPERATIONS = "total-operations";
@@ -360,4 +361,20 @@ protected void prepareCluster(int numClusterManagerNodes, int numDataOnlyNodes,
             ensureGreen(index);
         }
     }
+
+    protected void 
prepareCluster( + int numClusterManagerNodes, + int numDataOnlyNodes, + String indices, + int replicaCount, + int shardCount, + Settings settings + ) { + internalCluster().startClusterManagerOnlyNodes(numClusterManagerNodes, settings); + internalCluster().startDataOnlyNodes(numDataOnlyNodes, settings); + for (String index : indices.split(",")) { + createIndex(index, remoteStoreIndexSettings(replicaCount, shardCount)); + ensureGreen(index); + } + } } diff --git a/server/src/main/java/org/opensearch/cluster/coordination/PersistedStateStats.java b/server/src/main/java/org/opensearch/cluster/coordination/PersistedStateStats.java index 0b7ed4fee5775..023c2db1a574a 100644 --- a/server/src/main/java/org/opensearch/cluster/coordination/PersistedStateStats.java +++ b/server/src/main/java/org/opensearch/cluster/coordination/PersistedStateStats.java @@ -117,6 +117,10 @@ protected void addToExtendedFields(String extendedField, AtomicLong extendedFiel this.extendedFields.put(extendedField, extendedFieldValue); } + public Map getExtendedFields() { + return extendedFields; + } + public String getStatsName() { return statsName; } diff --git a/server/src/test/java/org/opensearch/cluster/coordination/PersistedStateStatsTests.java b/server/src/test/java/org/opensearch/cluster/coordination/PersistedStateStatsTests.java new file mode 100644 index 0000000000000..15c7d3ea206ef --- /dev/null +++ b/server/src/test/java/org/opensearch/cluster/coordination/PersistedStateStatsTests.java @@ -0,0 +1,62 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +package org.opensearch.cluster.coordination; + +import org.opensearch.test.OpenSearchTestCase; +import org.junit.Before; + +import java.util.concurrent.atomic.AtomicLong; + +public class PersistedStateStatsTests extends OpenSearchTestCase { + private PersistedStateStats persistedStateStats; + + @Before + public void setup() { + persistedStateStats = new PersistedStateStats("testStats"); + } + + public void testAddToExtendedFieldsNewField() { + String fieldName = "testField"; + AtomicLong fieldValue = new AtomicLong(42); + + persistedStateStats.addToExtendedFields(fieldName, fieldValue); + + assertTrue(persistedStateStats.getExtendedFields().containsKey(fieldName)); + assertEquals(42, persistedStateStats.getExtendedFields().get(fieldName).get()); + } + + public void testAddToExtendedFieldsExistingField() { + String fieldName = "testField"; + AtomicLong initialValue = new AtomicLong(42); + persistedStateStats.addToExtendedFields(fieldName, initialValue); + + AtomicLong newValue = new AtomicLong(84); + persistedStateStats.addToExtendedFields(fieldName, newValue); + + assertTrue(persistedStateStats.getExtendedFields().containsKey(fieldName)); + assertEquals(84, persistedStateStats.getExtendedFields().get(fieldName).get()); + } + + public void testAddMultipleFields() { + String fieldName1 = "testField1"; + AtomicLong fieldValue1 = new AtomicLong(42); + + String fieldName2 = "testField2"; + AtomicLong fieldValue2 = new AtomicLong(84); + + persistedStateStats.addToExtendedFields(fieldName1, fieldValue1); + persistedStateStats.addToExtendedFields(fieldName2, fieldValue2); + + assertTrue(persistedStateStats.getExtendedFields().containsKey(fieldName1)); + assertTrue(persistedStateStats.getExtendedFields().containsKey(fieldName2)); + + assertEquals(42, persistedStateStats.getExtendedFields().get(fieldName1).get()); + 
assertEquals(84, persistedStateStats.getExtendedFields().get(fieldName2).get()); + } +} diff --git a/test/framework/src/main/java/org/opensearch/test/OpenSearchIntegTestCase.java b/test/framework/src/main/java/org/opensearch/test/OpenSearchIntegTestCase.java index 9853cef482254..b86cce682c68e 100644 --- a/test/framework/src/main/java/org/opensearch/test/OpenSearchIntegTestCase.java +++ b/test/framework/src/main/java/org/opensearch/test/OpenSearchIntegTestCase.java @@ -153,6 +153,7 @@ import org.opensearch.plugins.NetworkPlugin; import org.opensearch.plugins.Plugin; import org.opensearch.repositories.blobstore.BlobStoreRepository; +import org.opensearch.repositories.fs.FsRepository; import org.opensearch.repositories.fs.ReloadableFsRepository; import org.opensearch.script.MockScriptService; import org.opensearch.search.MockSearchService; @@ -220,6 +221,7 @@ import static org.opensearch.node.remotestore.RemoteStoreNodeAttribute.REMOTE_STORE_CLUSTER_STATE_REPOSITORY_NAME_ATTRIBUTE_KEY; import static org.opensearch.node.remotestore.RemoteStoreNodeAttribute.REMOTE_STORE_REPOSITORY_SETTINGS_ATTRIBUTE_KEY_PREFIX; import static org.opensearch.node.remotestore.RemoteStoreNodeAttribute.REMOTE_STORE_REPOSITORY_TYPE_ATTRIBUTE_KEY_FORMAT; +import static org.opensearch.node.remotestore.RemoteStoreNodeAttribute.REMOTE_STORE_ROUTING_TABLE_REPOSITORY_NAME_ATTRIBUTE_KEY; import static org.opensearch.node.remotestore.RemoteStoreNodeAttribute.REMOTE_STORE_SEGMENT_REPOSITORY_NAME_ATTRIBUTE_KEY; import static org.opensearch.node.remotestore.RemoteStoreNodeAttribute.REMOTE_STORE_TRANSLOG_REPOSITORY_NAME_ATTRIBUTE_KEY; import static org.opensearch.node.remotestore.RemoteStoreNodeService.MIGRATION_DIRECTION_SETTING; @@ -2580,6 +2582,35 @@ public static Settings remoteStoreClusterSettings( return settingsBuilder.build(); } + public static Settings remoteStoreClusterSettings( + String segmentRepoName, + Path segmentRepoPath, + String segmentRepoType, + String translogRepoName, + Path translogRepoPath, + String translogRepoType, + String routingTableRepoName, + Path routingTableRepoPath, + String routingTableRepoType + ) { + Settings.Builder settingsBuilder = Settings.builder(); + settingsBuilder.put( + buildRemoteStoreNodeAttributes( + segmentRepoName, + segmentRepoPath, + segmentRepoType, + translogRepoName, + translogRepoPath, + translogRepoType, + routingTableRepoName, + routingTableRepoPath, + routingTableRepoType, + false + ) + ); + return settingsBuilder.build(); + } + public static Settings remoteStoreClusterSettings( String segmentRepoName, Path segmentRepoPath, @@ -2591,6 +2622,29 @@ public static Settings remoteStoreClusterSettings( return settingsBuilder.build(); } + public static Settings remoteStoreClusterSettings( + String segmentRepoName, + Path segmentRepoPath, + String translogRepoName, + Path translogRepoPath, + String remoteRoutingTableRepoName, + Path remoteRoutingTableRepoPath + ) { + Settings.Builder settingsBuilder = Settings.builder(); + settingsBuilder.put( + buildRemoteStoreNodeAttributes( + segmentRepoName, + segmentRepoPath, + translogRepoName, + translogRepoPath, + remoteRoutingTableRepoName, + remoteRoutingTableRepoPath, + false + ) + ); + return settingsBuilder.build(); + } + public static Settings buildRemoteStoreNodeAttributes( String segmentRepoName, Path segmentRepoPath, @@ -2609,6 +2663,29 @@ public static Settings buildRemoteStoreNodeAttributes( ); } + public static Settings buildRemoteStoreNodeAttributes( + String segmentRepoName, + Path segmentRepoPath, + String 
translogRepoName, + Path translogRepoPath, + String remoteRoutingTableRepoName, + Path remoteRoutingTableRepoPath, + boolean withRateLimiterAttributes + ) { + return buildRemoteStoreNodeAttributes( + segmentRepoName, + segmentRepoPath, + ReloadableFsRepository.TYPE, + translogRepoName, + translogRepoPath, + ReloadableFsRepository.TYPE, + remoteRoutingTableRepoName, + remoteRoutingTableRepoPath, + FsRepository.TYPE, + withRateLimiterAttributes + ); + } + private static Settings buildRemoteStoreNodeAttributes( String segmentRepoName, Path segmentRepoPath, @@ -2617,6 +2694,32 @@ private static Settings buildRemoteStoreNodeAttributes( Path translogRepoPath, String translogRepoType, boolean withRateLimiterAttributes + ) { + return buildRemoteStoreNodeAttributes( + segmentRepoName, + segmentRepoPath, + segmentRepoType, + translogRepoName, + translogRepoPath, + translogRepoType, + null, + null, + null, + withRateLimiterAttributes + ); + } + + private static Settings buildRemoteStoreNodeAttributes( + String segmentRepoName, + Path segmentRepoPath, + String segmentRepoType, + String translogRepoName, + Path translogRepoPath, + String translogRepoType, + String routingTableRepoName, + Path routingTableRepoPath, + String routingTableRepoType, + boolean withRateLimiterAttributes ) { String segmentRepoTypeAttributeKey = String.format( Locale.getDefault(), @@ -2648,6 +2751,19 @@ private static Settings buildRemoteStoreNodeAttributes( "node.attr." + REMOTE_STORE_REPOSITORY_SETTINGS_ATTRIBUTE_KEY_PREFIX, segmentRepoName ); + String routingTableRepoAttributeKey = null, routingTableRepoSettingsAttributeKeyPrefix = null; + if (routingTableRepoName != null) { + routingTableRepoAttributeKey = String.format( + Locale.getDefault(), + "node.attr." + REMOTE_STORE_REPOSITORY_TYPE_ATTRIBUTE_KEY_FORMAT, + routingTableRepoName + ); + routingTableRepoSettingsAttributeKeyPrefix = String.format( + Locale.getDefault(), + "node.attr." + REMOTE_STORE_REPOSITORY_SETTINGS_ATTRIBUTE_KEY_PREFIX, + routingTableRepoName + ); + } String prefixModeVerificationSuffix = BlobStoreRepository.PREFIX_MODE_VERIFICATION_SETTING.getKey(); @@ -2664,6 +2780,11 @@ private static Settings buildRemoteStoreNodeAttributes( .put(stateRepoTypeAttributeKey, segmentRepoType) .put(stateRepoSettingsAttributeKeyPrefix + "location", segmentRepoPath) .put(stateRepoSettingsAttributeKeyPrefix + prefixModeVerificationSuffix, prefixModeVerificationEnable); + if (routingTableRepoName != null) { + settings.put("node.attr." 
+ REMOTE_STORE_ROUTING_TABLE_REPOSITORY_NAME_ATTRIBUTE_KEY, routingTableRepoName) + .put(routingTableRepoAttributeKey, routingTableRepoType) + .put(routingTableRepoSettingsAttributeKeyPrefix + "location", routingTableRepoPath); + } if (withRateLimiterAttributes) { settings.put(segmentRepoSettingsAttributeKeyPrefix + "compress", randomBoolean()) From 5de0c8a7a3a63455758a8bfe24199f4955f29dca Mon Sep 17 00:00:00 2001 From: Daniel Widdis Date: Mon, 22 Jul 2024 11:48:53 -0700 Subject: [PATCH 094/167] Add SortResponseProcessor to Search Pipelines (#14785) * Add SortResponseProcessor for search pipelines Signed-off-by: Daniel Widdis * Add stupid and unnecessary javadocs to satisfy overly strict CI Signed-off-by: Daniel Widdis * Split casting and sorting methods for readability Signed-off-by: Daniel Widdis * Register the sort processor factory Signed-off-by: Daniel Widdis * Address code review comments Signed-off-by: Daniel Widdis * Cast individual list elements to avoid creating two lists Signed-off-by: Daniel Widdis * Add yamlRestTests Signed-off-by: Daniel Widdis * Clarify why there's unusual sorting Signed-off-by: Daniel Widdis * Use instanceof instead of isAssignableFrom Signed-off-by: Daniel Widdis --------- Signed-off-by: Daniel Widdis --- CHANGELOG.md | 1 + .../SearchPipelineCommonModulePlugin.java | 4 +- .../common/SortResponseProcessor.java | 209 ++++++++++++++++ .../common/SplitResponseProcessor.java | 2 +- ...SearchPipelineCommonModulePluginTests.java | 2 +- .../common/SortResponseProcessorTests.java | 230 ++++++++++++++++++ .../test/search_pipeline/80_sort_response.yml | 152 ++++++++++++ 7 files changed, 596 insertions(+), 4 deletions(-) create mode 100644 modules/search-pipeline-common/src/main/java/org/opensearch/search/pipeline/common/SortResponseProcessor.java create mode 100644 modules/search-pipeline-common/src/test/java/org/opensearch/search/pipeline/common/SortResponseProcessorTests.java create mode 100644 modules/search-pipeline-common/src/yamlRestTest/resources/rest-api-spec/test/search_pipeline/80_sort_response.yml diff --git a/CHANGELOG.md b/CHANGELOG.md index e32b6de84a195..80dd5a27ffdaa 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -20,6 +20,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), - Add matchesPluginSystemIndexPattern to SystemIndexRegistry ([#14750](https://github.com/opensearch-project/OpenSearch/pull/14750)) - Add Plugin interface for loading application based configuration templates (([#14659](https://github.com/opensearch-project/OpenSearch/issues/14659))) - Refactor remote-routing-table service inline with remote state interfaces([#14668](https://github.com/opensearch-project/OpenSearch/pull/14668)) +- Add SortResponseProcessor to Search Pipelines (([#14785](https://github.com/opensearch-project/OpenSearch/issues/14785))) - Add prefix mode verification setting for repository verification (([#14790](https://github.com/opensearch-project/OpenSearch/pull/14790))) - Add SplitResponseProcessor to Search Pipelines (([#14800](https://github.com/opensearch-project/OpenSearch/issues/14800))) - Optimize TransportNodesAction to not send DiscoveryNodes for NodeStats, NodesInfo and ClusterStats call ([14749](https://github.com/opensearch-project/OpenSearch/pull/14749)) diff --git a/modules/search-pipeline-common/src/main/java/org/opensearch/search/pipeline/common/SearchPipelineCommonModulePlugin.java 
b/modules/search-pipeline-common/src/main/java/org/opensearch/search/pipeline/common/SearchPipelineCommonModulePlugin.java
index d05101da2817c..2a2de9debb9d9 100644
--- a/modules/search-pipeline-common/src/main/java/org/opensearch/search/pipeline/common/SearchPipelineCommonModulePlugin.java
+++ b/modules/search-pipeline-common/src/main/java/org/opensearch/search/pipeline/common/SearchPipelineCommonModulePlugin.java
@@ -97,8 +97,8 @@ public Map<String, Processor.Factory<SearchResponseProcessor>> getResponseProces
                 new TruncateHitsResponseProcessor.Factory(),
                 CollapseResponseProcessor.TYPE,
                 new CollapseResponseProcessor.Factory(),
-                SplitResponseProcessor.TYPE,
-                new SplitResponseProcessor.Factory()
+                SortResponseProcessor.TYPE,
+                new SortResponseProcessor.Factory()
             )
         );
     }
diff --git a/modules/search-pipeline-common/src/main/java/org/opensearch/search/pipeline/common/SortResponseProcessor.java b/modules/search-pipeline-common/src/main/java/org/opensearch/search/pipeline/common/SortResponseProcessor.java
new file mode 100644
index 0000000000000..e0bfd38b26376
--- /dev/null
+++ b/modules/search-pipeline-common/src/main/java/org/opensearch/search/pipeline/common/SortResponseProcessor.java
@@ -0,0 +1,209 @@
+/*
+ * SPDX-License-Identifier: Apache-2.0
+ *
+ * The OpenSearch Contributors require contributions made to
+ * this file be licensed under the Apache-2.0 license or a
+ * compatible open source license.
+ */
+
+package org.opensearch.search.pipeline.common;
+
+import org.opensearch.action.search.SearchRequest;
+import org.opensearch.action.search.SearchResponse;
+import org.opensearch.common.collect.Tuple;
+import org.opensearch.common.document.DocumentField;
+import org.opensearch.common.xcontent.XContentHelper;
+import org.opensearch.core.common.bytes.BytesReference;
+import org.opensearch.core.xcontent.MediaType;
+import org.opensearch.core.xcontent.XContentBuilder;
+import org.opensearch.ingest.ConfigurationUtils;
+import org.opensearch.search.SearchHit;
+import org.opensearch.search.pipeline.AbstractProcessor;
+import org.opensearch.search.pipeline.Processor;
+import org.opensearch.search.pipeline.SearchResponseProcessor;
+
+import java.util.Comparator;
+import java.util.List;
+import java.util.Map;
+import java.util.Objects;
+import java.util.stream.Collectors;
+
+/**
+ * Processor that sorts an array of items.
+ * Throws exception if the specified field is not an array.
+ */
+public class SortResponseProcessor extends AbstractProcessor implements SearchResponseProcessor {
+    /** Key to reference this processor type from a search pipeline. */
+    public static final String TYPE = "sort";
+    /** Key defining the array field to be sorted. */
+    public static final String SORT_FIELD = "field";
+    /** Optional key defining the sort order. */
+    public static final String SORT_ORDER = "order";
+    /** Optional key to put the sorted values in a different field. */
+    public static final String TARGET_FIELD = "target_field";
+    /** Default sort order if not specified */
+    public static final String DEFAULT_ORDER = "asc";
+
+    /** Enum defining how elements will be sorted */
+    public enum SortOrder {
+        /** Sort in ascending (natural) order */
+        ASCENDING("asc"),
+        /** Sort in descending (reverse) order */
+        DESCENDING("desc");
+
+        private final String direction;
+
+        SortOrder(String direction) {
+            this.direction = direction;
+        }
+
+        @Override
+        public String toString() {
+            return this.direction;
+        }
+
+        /**
+         * Converts the string representation of the enum value to the enum. 
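+         * For example, {@code fromString("desc")} returns {@link #DESCENDING}.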
+ * @param value A string ("asc" or "desc") + * @return the corresponding enum value + */ + public static SortOrder fromString(String value) { + if (value == null) { + throw new IllegalArgumentException("Sort direction cannot be null"); + } + + if (value.equals(ASCENDING.toString())) { + return ASCENDING; + } else if (value.equals(DESCENDING.toString())) { + return DESCENDING; + } + throw new IllegalArgumentException("Sort direction [" + value + "] not recognized." + " Valid values are: [asc, desc]"); + } + } + + private final String sortField; + private final SortOrder sortOrder; + private final String targetField; + + SortResponseProcessor( + String tag, + String description, + boolean ignoreFailure, + String sortField, + SortOrder sortOrder, + String targetField + ) { + super(tag, description, ignoreFailure); + this.sortField = Objects.requireNonNull(sortField); + this.sortOrder = Objects.requireNonNull(sortOrder); + this.targetField = targetField == null ? sortField : targetField; + } + + /** + * Getter function for sortField + * @return sortField + */ + public String getSortField() { + return sortField; + } + + /** + * Getter function for targetField + * @return targetField + */ + public String getTargetField() { + return targetField; + } + + /** + * Getter function for sortOrder + * @return sortOrder + */ + public SortOrder getSortOrder() { + return sortOrder; + } + + @Override + public String getType() { + return TYPE; + } + + @Override + public SearchResponse processResponse(SearchRequest request, SearchResponse response) throws Exception { + SearchHit[] hits = response.getHits().getHits(); + for (SearchHit hit : hits) { + Map fields = hit.getFields(); + if (fields.containsKey(sortField)) { + DocumentField docField = hit.getFields().get(sortField); + if (docField == null) { + throw new IllegalArgumentException("field [" + sortField + "] is null, cannot sort."); + } + hit.setDocumentField(targetField, new DocumentField(targetField, getSortedValues(docField.getValues()))); + } + if (hit.hasSource()) { + BytesReference sourceRef = hit.getSourceRef(); + Tuple> typeAndSourceMap = XContentHelper.convertToMap( + sourceRef, + false, + (MediaType) null + ); + + Map sourceAsMap = typeAndSourceMap.v2(); + if (sourceAsMap.containsKey(sortField)) { + Object val = sourceAsMap.get(sortField); + if (val instanceof List) { + @SuppressWarnings("unchecked") + List listVal = (List) val; + sourceAsMap.put(targetField, getSortedValues(listVal)); + } + XContentBuilder builder = XContentBuilder.builder(typeAndSourceMap.v1().xContent()); + builder.map(sourceAsMap); + hit.sourceRef(BytesReference.bytes(builder)); + } + } + } + return response; + } + + private List getSortedValues(List values) { + return values.stream() + .map(this::downcastToComparable) + .sorted(sortOrder.equals(SortOrder.ASCENDING) ? 
Comparator.naturalOrder() : Comparator.reverseOrder()) + .collect(Collectors.toList()); + } + + @SuppressWarnings("unchecked") + private Comparable downcastToComparable(Object obj) { + if (obj instanceof Comparable) { + return (Comparable) obj; + } else if (obj == null) { + throw new IllegalArgumentException("field [" + sortField + "] contains a null value.]"); + } else { + throw new IllegalArgumentException("field [" + sortField + "] of type [" + obj.getClass().getName() + "] is not comparable.]"); + } + } + + static class Factory implements Processor.Factory { + + @Override + public SortResponseProcessor create( + Map> processorFactories, + String tag, + String description, + boolean ignoreFailure, + Map config, + PipelineContext pipelineContext + ) { + String sortField = ConfigurationUtils.readStringProperty(TYPE, tag, config, SORT_FIELD); + String targetField = ConfigurationUtils.readStringProperty(TYPE, tag, config, TARGET_FIELD, sortField); + try { + SortOrder sortOrder = SortOrder.fromString( + ConfigurationUtils.readStringProperty(TYPE, tag, config, SORT_ORDER, DEFAULT_ORDER) + ); + return new SortResponseProcessor(tag, description, ignoreFailure, sortField, sortOrder, targetField); + } catch (IllegalArgumentException e) { + throw ConfigurationUtils.newConfigurationException(TYPE, tag, SORT_ORDER, e.getMessage()); + } + } + } +} diff --git a/modules/search-pipeline-common/src/main/java/org/opensearch/search/pipeline/common/SplitResponseProcessor.java b/modules/search-pipeline-common/src/main/java/org/opensearch/search/pipeline/common/SplitResponseProcessor.java index 0762f8f59b76e..bb3db4d9bc2c1 100644 --- a/modules/search-pipeline-common/src/main/java/org/opensearch/search/pipeline/common/SplitResponseProcessor.java +++ b/modules/search-pipeline-common/src/main/java/org/opensearch/search/pipeline/common/SplitResponseProcessor.java @@ -111,7 +111,7 @@ public SearchResponse processResponse(SearchRequest request, SearchResponse resp throw new IllegalArgumentException("field [" + splitField + "] is null, cannot split."); } Object val = docField.getValue(); - if (val == null || !String.class.isAssignableFrom(val.getClass())) { + if (!(val instanceof String)) { throw new IllegalArgumentException("field [" + splitField + "] is not a string, cannot split"); } Object[] strings = ((String) val).split(separator, preserveTrailing ? 
-1 : 0);
diff --git a/modules/search-pipeline-common/src/test/java/org/opensearch/search/pipeline/common/SearchPipelineCommonModulePluginTests.java b/modules/search-pipeline-common/src/test/java/org/opensearch/search/pipeline/common/SearchPipelineCommonModulePluginTests.java
index d4f9ae2490a10..404842742629c 100644
--- a/modules/search-pipeline-common/src/test/java/org/opensearch/search/pipeline/common/SearchPipelineCommonModulePluginTests.java
+++ b/modules/search-pipeline-common/src/test/java/org/opensearch/search/pipeline/common/SearchPipelineCommonModulePluginTests.java
@@ -82,7 +82,7 @@ public void testAllowlistNotSpecified() throws IOException {
         try (SearchPipelineCommonModulePlugin plugin = new SearchPipelineCommonModulePlugin()) {
             assertEquals(Set.of("oversample", "filter_query", "script"), plugin.getRequestProcessors(createParameters(settings)).keySet());
             assertEquals(
-                Set.of("rename_field", "truncate_hits", "collapse", "split"),
+                Set.of("rename_field", "truncate_hits", "collapse", "sort"),
                 plugin.getResponseProcessors(createParameters(settings)).keySet()
             );
             assertEquals(Set.of(), plugin.getSearchPhaseResultsProcessors(createParameters(settings)).keySet());
diff --git a/modules/search-pipeline-common/src/test/java/org/opensearch/search/pipeline/common/SortResponseProcessorTests.java b/modules/search-pipeline-common/src/test/java/org/opensearch/search/pipeline/common/SortResponseProcessorTests.java
new file mode 100644
index 0000000000000..c18c6b34b05d1
--- /dev/null
+++ b/modules/search-pipeline-common/src/test/java/org/opensearch/search/pipeline/common/SortResponseProcessorTests.java
@@ -0,0 +1,230 @@
+/*
+ * SPDX-License-Identifier: Apache-2.0
+ *
+ * The OpenSearch Contributors require contributions made to
+ * this file be licensed under the Apache-2.0 license or a
+ * compatible open source license. 
+ */ + +package org.opensearch.search.pipeline.common; + +import org.apache.lucene.search.TotalHits; +import org.opensearch.OpenSearchParseException; +import org.opensearch.action.search.SearchRequest; +import org.opensearch.action.search.SearchResponse; +import org.opensearch.action.search.SearchResponseSections; +import org.opensearch.common.document.DocumentField; +import org.opensearch.core.common.bytes.BytesArray; +import org.opensearch.index.query.QueryBuilder; +import org.opensearch.index.query.TermQueryBuilder; +import org.opensearch.ingest.RandomDocumentPicks; +import org.opensearch.search.SearchHit; +import org.opensearch.search.SearchHits; +import org.opensearch.search.builder.SearchSourceBuilder; +import org.opensearch.test.OpenSearchTestCase; + +import java.util.ArrayList; +import java.util.Collections; +import java.util.HashMap; +import java.util.List; +import java.util.Map; + +public class SortResponseProcessorTests extends OpenSearchTestCase { + + private static final List PI = List.of(3, 1, 4, 1, 5, 9, 2, 6); + private static final List E = List.of(2, 7, 1, 8, 2, 8, 1, 8); + private static final List X; + static { + List x = new ArrayList<>(); + x.add(1); + x.add(null); + x.add(3); + X = x; + } + + private SearchRequest createDummyRequest() { + QueryBuilder query = new TermQueryBuilder("field", "value"); + SearchSourceBuilder source = new SearchSourceBuilder().query(query); + return new SearchRequest().source(source); + } + + private SearchResponse createTestResponse() { + SearchHit[] hits = new SearchHit[2]; + + // one response with source + Map piMap = new HashMap<>(); + piMap.put("digits", new DocumentField("digits", PI)); + hits[0] = new SearchHit(0, "doc 1", piMap, Collections.emptyMap()); + hits[0].sourceRef(new BytesArray("{ \"digits\" : " + PI + " }")); + hits[0].score((float) Math.PI); + + // one without source + Map eMap = new HashMap<>(); + eMap.put("digits", new DocumentField("digits", E)); + hits[1] = new SearchHit(1, "doc 2", eMap, Collections.emptyMap()); + hits[1].score((float) Math.E); + + SearchHits searchHits = new SearchHits(hits, new TotalHits(2, TotalHits.Relation.EQUAL_TO), 2); + SearchResponseSections searchResponseSections = new SearchResponseSections(searchHits, null, null, false, false, null, 0); + return new SearchResponse(searchResponseSections, null, 1, 1, 0, 10, null, null); + } + + private SearchResponse createTestResponseNullField() { + SearchHit[] hits = new SearchHit[1]; + + Map map = new HashMap<>(); + map.put("digits", null); + hits[0] = new SearchHit(0, "doc 1", map, Collections.emptyMap()); + hits[0].sourceRef(new BytesArray("{ \"digits\" : null }")); + hits[0].score((float) Math.PI); + + SearchHits searchHits = new SearchHits(hits, new TotalHits(1, TotalHits.Relation.EQUAL_TO), 1); + SearchResponseSections searchResponseSections = new SearchResponseSections(searchHits, null, null, false, false, null, 0); + return new SearchResponse(searchResponseSections, null, 1, 1, 0, 10, null, null); + } + + private SearchResponse createTestResponseNullListEntry() { + SearchHit[] hits = new SearchHit[1]; + + Map xMap = new HashMap<>(); + xMap.put("digits", new DocumentField("digits", X)); + hits[0] = new SearchHit(0, "doc 1", xMap, Collections.emptyMap()); + hits[0].sourceRef(new BytesArray("{ \"digits\" : " + X + " }")); + hits[0].score((float) Math.PI); + + SearchHits searchHits = new SearchHits(hits, new TotalHits(1, TotalHits.Relation.EQUAL_TO), 1); + SearchResponseSections searchResponseSections = new SearchResponseSections(searchHits, null, 
null, false, false, null, 0); + return new SearchResponse(searchResponseSections, null, 1, 1, 0, 10, null, null); + } + + private SearchResponse createTestResponseNotComparable() { + SearchHit[] hits = new SearchHit[1]; + + Map piMap = new HashMap<>(); + piMap.put("maps", new DocumentField("maps", List.of(Map.of("foo", "I'm incomparable!")))); + hits[0] = new SearchHit(0, "doc 1", piMap, Collections.emptyMap()); + hits[0].sourceRef(new BytesArray("{ \"maps\" : [{ \"foo\" : \"I'm incomparable!\"}]] }")); + hits[0].score((float) Math.PI); + + SearchHits searchHits = new SearchHits(hits, new TotalHits(1, TotalHits.Relation.EQUAL_TO), 1); + SearchResponseSections searchResponseSections = new SearchResponseSections(searchHits, null, null, false, false, null, 0); + return new SearchResponse(searchResponseSections, null, 1, 1, 0, 10, null, null); + } + + public void testSortResponse() throws Exception { + SearchRequest request = createDummyRequest(); + + SortResponseProcessor sortResponseProcessor = new SortResponseProcessor( + null, + null, + false, + "digits", + SortResponseProcessor.SortOrder.ASCENDING, + "sorted" + ); + SearchResponse response = createTestResponse(); + SearchResponse sortResponse = sortResponseProcessor.processResponse(request, response); + + assertEquals(response.getHits(), sortResponse.getHits()); + + assertEquals(PI, sortResponse.getHits().getHits()[0].field("digits").getValues()); + assertEquals(List.of(1, 1, 2, 3, 4, 5, 6, 9), sortResponse.getHits().getHits()[0].field("sorted").getValues()); + Map map = sortResponse.getHits().getHits()[0].getSourceAsMap(); + assertNotNull(map); + assertEquals(List.of(1, 1, 2, 3, 4, 5, 6, 9), map.get("sorted")); + + assertEquals(E, sortResponse.getHits().getHits()[1].field("digits").getValues()); + assertEquals(List.of(1, 1, 2, 2, 7, 8, 8, 8), sortResponse.getHits().getHits()[1].field("sorted").getValues()); + assertNull(sortResponse.getHits().getHits()[1].getSourceAsMap()); + } + + public void testSortResponseSameField() throws Exception { + SearchRequest request = createDummyRequest(); + + SortResponseProcessor sortResponseProcessor = new SortResponseProcessor( + null, + null, + false, + "digits", + SortResponseProcessor.SortOrder.DESCENDING, + null + ); + SearchResponse response = createTestResponse(); + SearchResponse sortResponse = sortResponseProcessor.processResponse(request, response); + + assertEquals(response.getHits(), sortResponse.getHits()); + assertEquals(List.of(9, 6, 5, 4, 3, 2, 1, 1), sortResponse.getHits().getHits()[0].field("digits").getValues()); + assertEquals(List.of(8, 8, 8, 7, 2, 2, 1, 1), sortResponse.getHits().getHits()[1].field("digits").getValues()); + } + + public void testSortResponseNullListEntry() { + SearchRequest request = createDummyRequest(); + + SortResponseProcessor sortResponseProcessor = new SortResponseProcessor( + null, + null, + false, + "digits", + SortResponseProcessor.SortOrder.ASCENDING, + null + ); + assertThrows( + IllegalArgumentException.class, + () -> sortResponseProcessor.processResponse(request, createTestResponseNullListEntry()) + ); + } + + public void testNullField() { + SearchRequest request = createDummyRequest(); + + SortResponseProcessor sortResponseProcessor = new SortResponseProcessor( + null, + null, + false, + "digits", + SortResponseProcessor.SortOrder.DESCENDING, + null + ); + + assertThrows(IllegalArgumentException.class, () -> sortResponseProcessor.processResponse(request, createTestResponseNullField())); + } + + public void testNotComparableField() { + SearchRequest 
request = createDummyRequest(); + + SortResponseProcessor sortResponseProcessor = new SortResponseProcessor( + null, + null, + false, + "maps", + SortResponseProcessor.SortOrder.ASCENDING, + null + ); + + assertThrows( + IllegalArgumentException.class, + () -> sortResponseProcessor.processResponse(request, createTestResponseNotComparable()) + ); + } + + public void testFactory() { + String sortField = RandomDocumentPicks.randomFieldName(random()); + String targetField = RandomDocumentPicks.randomFieldName(random()); + Map config = new HashMap<>(); + config.put("field", sortField); + config.put("order", "desc"); + config.put("target_field", targetField); + + SortResponseProcessor.Factory factory = new SortResponseProcessor.Factory(); + SortResponseProcessor processor = factory.create(Collections.emptyMap(), null, null, false, config, null); + assertEquals("sort", processor.getType()); + assertEquals(sortField, processor.getSortField()); + assertEquals(targetField, processor.getTargetField()); + assertEquals(SortResponseProcessor.SortOrder.DESCENDING, processor.getSortOrder()); + + expectThrows( + OpenSearchParseException.class, + () -> factory.create(Collections.emptyMap(), null, null, false, Collections.emptyMap(), null) + ); + } +} diff --git a/modules/search-pipeline-common/src/yamlRestTest/resources/rest-api-spec/test/search_pipeline/80_sort_response.yml b/modules/search-pipeline-common/src/yamlRestTest/resources/rest-api-spec/test/search_pipeline/80_sort_response.yml new file mode 100644 index 0000000000000..c160b550b2a6e --- /dev/null +++ b/modules/search-pipeline-common/src/yamlRestTest/resources/rest-api-spec/test/search_pipeline/80_sort_response.yml @@ -0,0 +1,152 @@ +--- +teardown: + - do: + search_pipeline.delete: + id: "my_pipeline" + ignore: 404 + +--- +"Test sort processor": + - do: + search_pipeline.put: + id: "my_pipeline" + body: > + { + "description": "test pipeline", + "response_processors": [ + { + "sort": + { + "field": "a", + "target_field": "b" + } + } + ] + } + - match: { acknowledged: true } + + - do: + search_pipeline.put: + id: "my_pipeline_2" + body: > + { + "description": "test pipeline with ignore failure true", + "response_processors": [ + { + "sort": + { + "field": "aa", + "ignore_failure": true + } + } + ] + } + - match: { acknowledged: true } + + - do: + search_pipeline.put: + id: "my_pipeline_3" + body: > + { + "description": "test pipeline", + "response_processors": [ + { + "sort": + { + "field": "a", + "order": "desc", + "target_field": "b" + } + } + ] + } + - match: { acknowledged: true } + + - do: + indices.create: + index: test + + - do: + indices.put_mapping: + index: test + body: + properties: + a: + type: integer + store: true + doc_values: true + + - do: + index: + index: test + id: 1 + body: { + "a": [ 3, 1, 4 ] + } + + - do: + indices.refresh: + index: test + + - do: + search: + body: { } + - match: { hits.total.value: 1 } + + - do: + search: + index: test + search_pipeline: "my_pipeline" + body: { } + - match: { hits.total.value: 1 } + - match: { hits.hits.0._source: { "a": [3, 1, 4], "b": [1, 3, 4] } } + + # Should also work with no search body specified + - do: + search: + index: test + search_pipeline: "my_pipeline" + - match: { hits.total.value: 1 } + - match: { hits.hits.0._source: { "a": [3, 1, 4], "b": [1, 3, 4] } } + + # Pipeline with ignore_failure set to true + # Should return while catching error + - do: + search: + index: test + search_pipeline: "my_pipeline_2" + - match: { hits.total.value: 1 } + - match: { hits.hits.0._source: { 
"a": [3, 1, 4] } } + + # Pipeline with desc sort order + - do: + search: + index: test + search_pipeline: "my_pipeline_3" + body: { } + - match: { hits.total.value: 1 } + - match: { hits.hits.0._source: { "a": [3, 1, 4], "b": [4, 3, 1] } } + + # No source, using stored_fields + - do: + search: + index: test + search_pipeline: "my_pipeline" + body: { + "_source": false, + "stored_fields": [ "a" ] + } + - match: { hits.hits.0.fields: { "a": [3, 1, 4], "b": [1, 3, 4] } } + + # No source, using docvalue_fields + - do: + search: + index: test + search_pipeline: "my_pipeline_3" + body: { + "_source": false, + "docvalue_fields": [ "a" ] + } + # a is stored sorted because docvalue_fields is pre-sorted to optimize aggregations + # this is poorly documented which makes it really hard to write "expected" values on tests + - match: { hits.hits.0.fields: { "a": [1, 3, 4], "b": [4, 3, 1] } } From 6227dc6ae70d82b7826f8f08bcc57b277c254056 Mon Sep 17 00:00:00 2001 From: "Park, Yeongwu" Date: Tue, 23 Jul 2024 05:24:51 +0900 Subject: [PATCH 095/167] Fix allowUnmappedFields, mapUnmappedFieldAsString settings to be applied when parsing query string query (#13957) * Modify to invoke QueryShardContext.fieldMapper() method to apply allowUnmappedFields and mapUnmappedFieldAsString settings Signed-off-by: imyp92 * Add test cases to verify returning 400 responses if unmapped fields are included for some types of query Signed-off-by: imyp92 * Add changelog Signed-off-by: imyp92 --------- Signed-off-by: imyp92 Signed-off-by: gaobinlong Co-authored-by: gaobinlong --- CHANGELOG.md | 1 + .../resources/rest-api-spec/test/10_basic.yml | 45 +++++++++++++++++++ .../index/query/ExistsQueryBuilder.java | 10 ++--- .../index/search/QueryParserHelper.java | 2 +- 4 files changed, 50 insertions(+), 8 deletions(-) diff --git a/CHANGELOG.md b/CHANGELOG.md index 80dd5a27ffdaa..ad655f3849b7e 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -63,6 +63,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), - Remove query categorization changes ([#14759](https://github.com/opensearch-project/OpenSearch/pull/14759)) ### Fixed +- Fix allowUnmappedFields, mapUnmappedFieldAsString settings are not applied when parsing certain types of query string query ([#13957](https://github.com/opensearch-project/OpenSearch/pull/13957)) - Fix bug in SBP cancellation logic ([#13259](https://github.com/opensearch-project/OpenSearch/pull/13474)) - Fix handling of Short and Byte data types in ScriptProcessor ingest pipeline ([#14379](https://github.com/opensearch-project/OpenSearch/issues/14379)) - Switch to iterative version of WKT format parser ([#14086](https://github.com/opensearch-project/OpenSearch/pull/14086)) diff --git a/modules/percolator/src/yamlRestTest/resources/rest-api-spec/test/10_basic.yml b/modules/percolator/src/yamlRestTest/resources/rest-api-spec/test/10_basic.yml index 35ebb2b099139..61f79326dab06 100644 --- a/modules/percolator/src/yamlRestTest/resources/rest-api-spec/test/10_basic.yml +++ b/modules/percolator/src/yamlRestTest/resources/rest-api-spec/test/10_basic.yml @@ -83,3 +83,48 @@ index: documents_index id: some_id - match: { responses.0.hits.total: 1 } + + - do: + catch: bad_request + index: + index: queries_index + body: + query: + query_string: + query: "unmapped: *" + + - do: + catch: bad_request + index: + index: queries_index + body: + query: + query_string: + query: "_exists_: unmappedField" + + - do: + catch: bad_request + index: + index: queries_index + body: 
+ query: + query_string: + query: "unmappedField: <100" + + - do: + catch: bad_request + index: + index: queries_index + body: + query: + query_string: + query: "unmappedField: test~" + + - do: + catch: bad_request + index: + index: queries_index + body: + query: + query_string: + query: "unmappedField: test*" diff --git a/server/src/main/java/org/opensearch/index/query/ExistsQueryBuilder.java b/server/src/main/java/org/opensearch/index/query/ExistsQueryBuilder.java index 3011a48fbb296..6ae40fe1b1e64 100644 --- a/server/src/main/java/org/opensearch/index/query/ExistsQueryBuilder.java +++ b/server/src/main/java/org/opensearch/index/query/ExistsQueryBuilder.java @@ -230,20 +230,16 @@ private static Collection getMappedField(QueryShardContext context, Stri if (context.getObjectMapper(fieldPattern) != null) { // the _field_names field also indexes objects, so we don't have to // do any more work to support exists queries on whole objects - fields = Collections.singleton(fieldPattern); + return Collections.singleton(fieldPattern); } else { fields = context.simpleMatchToIndexNames(fieldPattern); } if (fields.size() == 1) { String field = fields.iterator().next(); - MappedFieldType fieldType = context.getMapperService().fieldType(field); + MappedFieldType fieldType = context.fieldMapper(field); if (fieldType == null) { - // The field does not exist as a leaf but could be an object so - // check for an object mapper - if (context.getObjectMapper(field) == null) { - return Collections.emptySet(); - } + return Collections.emptySet(); } } diff --git a/server/src/main/java/org/opensearch/index/search/QueryParserHelper.java b/server/src/main/java/org/opensearch/index/search/QueryParserHelper.java index 06f450f090e63..603e81f6bf113 100644 --- a/server/src/main/java/org/opensearch/index/search/QueryParserHelper.java +++ b/server/src/main/java/org/opensearch/index/search/QueryParserHelper.java @@ -143,7 +143,7 @@ static Map resolveMappingField( fieldName = fieldName + fieldSuffix; } - MappedFieldType fieldType = context.getMapperService().fieldType(fieldName); + MappedFieldType fieldType = context.fieldMapper(fieldName); if (fieldType == null) { fieldType = context.resolveDerivedFieldType(fieldName); } From 250feb29cdb87e8dec3bde32e27f9202e90c532b Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Mon, 22 Jul 2024 16:39:33 -0400 Subject: [PATCH 096/167] Bump com.microsoft.azure:msal4j from 1.16.0 to 1.16.1 in /plugins/repository-azure (#14857) * Bump com.microsoft.azure:msal4j in /plugins/repository-azure Bumps [com.microsoft.azure:msal4j](https://github.com/AzureAD/microsoft-authentication-library-for-java) from 1.16.0 to 1.16.1. - [Release notes](https://github.com/AzureAD/microsoft-authentication-library-for-java/releases) - [Changelog](https://github.com/AzureAD/microsoft-authentication-library-for-java/blob/dev/changelog.txt) - [Commits](https://github.com/AzureAD/microsoft-authentication-library-for-java/compare/v1.16.0...v1.16.1) --- updated-dependencies: - dependency-name: com.microsoft.azure:msal4j dependency-type: direct:production update-type: version-update:semver-patch ... 
Signed-off-by: dependabot[bot] * Updating SHAs Signed-off-by: dependabot[bot] * Update changelog Signed-off-by: dependabot[bot] --------- Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: dependabot[bot] --- CHANGELOG.md | 2 +- plugins/repository-azure/build.gradle | 2 +- plugins/repository-azure/licenses/msal4j-1.16.0.jar.sha1 | 1 - plugins/repository-azure/licenses/msal4j-1.16.1.jar.sha1 | 1 + 4 files changed, 3 insertions(+), 3 deletions(-) delete mode 100644 plugins/repository-azure/licenses/msal4j-1.16.0.jar.sha1 create mode 100644 plugins/repository-azure/licenses/msal4j-1.16.1.jar.sha1 diff --git a/CHANGELOG.md b/CHANGELOG.md index ad655f3849b7e..2f16af3c50c61 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -42,7 +42,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), - Bump `opentelemetry-semconv` from 1.25.0-alpha to 1.26.0-alpha ([#14674](https://github.com/opensearch-project/OpenSearch/pull/14674)) - Bump `azure-identity` from 1.11.4 to 1.13.0, Bump `msal4j` from 1.14.3 to 1.15.1, Bump `msal4j-persistence-extension` from 1.2.0 to 1.3.0 ([#14506](https://github.com/opensearch-project/OpenSearch/pull/14673)) - Bump `com.azure:azure-storage-common` from 12.21.2 to 12.25.1 ([#14517](https://github.com/opensearch-project/OpenSearch/pull/14517)) -- Bump `com.microsoft.azure:msal4j` from 1.15.1 to 1.16.0 ([#14610](https://github.com/opensearch-project/OpenSearch/pull/14610)) +- Bump `com.microsoft.azure:msal4j` from 1.15.1 to 1.16.1 ([#14610](https://github.com/opensearch-project/OpenSearch/pull/14610), [#14857](https://github.com/opensearch-project/OpenSearch/pull/14857)) - Bump `com.github.spullara.mustache.java:compiler` from 0.9.13 to 0.9.14 ([#14672](https://github.com/opensearch-project/OpenSearch/pull/14672)) - Bump `net.minidev:accessors-smart` from 2.5.0 to 2.5.1 ([#14673](https://github.com/opensearch-project/OpenSearch/pull/14673)) - Bump `jackson` from 2.17.1 to 2.17.2 ([#14687](https://github.com/opensearch-project/OpenSearch/pull/14687)) diff --git a/plugins/repository-azure/build.gradle b/plugins/repository-azure/build.gradle index 980940e35b0b0..7bd7be1481a2f 100644 --- a/plugins/repository-azure/build.gradle +++ b/plugins/repository-azure/build.gradle @@ -61,7 +61,7 @@ dependencies { // Start of transitive dependencies for azure-identity api 'com.microsoft.azure:msal4j-persistence-extension:1.3.0' api "net.java.dev.jna:jna-platform:${versions.jna}" - api 'com.microsoft.azure:msal4j:1.16.0' + api 'com.microsoft.azure:msal4j:1.16.1' api 'com.nimbusds:oauth2-oidc-sdk:11.9.1' api 'com.nimbusds:nimbus-jose-jwt:9.40' api 'com.nimbusds:content-type:2.3' diff --git a/plugins/repository-azure/licenses/msal4j-1.16.0.jar.sha1 b/plugins/repository-azure/licenses/msal4j-1.16.0.jar.sha1 deleted file mode 100644 index 29fe5022a1570..0000000000000 --- a/plugins/repository-azure/licenses/msal4j-1.16.0.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -708a0a986ed091054f1c08866712e5b41aec6700 \ No newline at end of file diff --git a/plugins/repository-azure/licenses/msal4j-1.16.1.jar.sha1 b/plugins/repository-azure/licenses/msal4j-1.16.1.jar.sha1 new file mode 100644 index 0000000000000..7d24922196be4 --- /dev/null +++ b/plugins/repository-azure/licenses/msal4j-1.16.1.jar.sha1 @@ -0,0 +1 @@ +4ad89b4632ef9abab883114e77c079843a206862 \ No newline at end of file From c7cebc5cc9ccc61b9798b30aa975901de1e343c3 Mon Sep 17 00:00:00 
2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Mon, 22 Jul 2024 16:40:57 -0400 Subject: [PATCH 097/167] Bump com.gradle.develocity from 3.17.5 to 3.17.6 (#14856) * Bump com.gradle.develocity from 3.17.5 to 3.17.6 Bumps com.gradle.develocity from 3.17.5 to 3.17.6. --- updated-dependencies: - dependency-name: com.gradle.develocity dependency-type: direct:production update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] * Update changelog Signed-off-by: dependabot[bot] --------- Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: dependabot[bot] --- CHANGELOG.md | 2 +- settings.gradle | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/CHANGELOG.md b/CHANGELOG.md index 2f16af3c50c61..fdce3a5e24342 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -37,7 +37,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), - Bump `commons-net:commons-net` from 3.10.0 to 3.11.1 ([#14396](https://github.com/opensearch-project/OpenSearch/pull/14396)) - Bump `com.nimbusds:nimbus-jose-jwt` from 9.37.3 to 9.40 ([#14398](https://github.com/opensearch-project/OpenSearch/pull/14398)) - Bump `org.apache.commons:commons-configuration2` from 2.10.1 to 2.11.0 ([#14399](https://github.com/opensearch-project/OpenSearch/pull/14399)) -- Bump `com.gradle.develocity` from 3.17.4 to 3.17.5 ([#14397](https://github.com/opensearch-project/OpenSearch/pull/14397)) +- Bump `com.gradle.develocity` from 3.17.4 to 3.17.6 ([#14397](https://github.com/opensearch-project/OpenSearch/pull/14397), [#14856](https://github.com/opensearch-project/OpenSearch/pull/14856)) - Bump `opentelemetry` from 1.36.0 to 1.40.0 ([#14457](https://github.com/opensearch-project/OpenSearch/pull/14457), [#14674](https://github.com/opensearch-project/OpenSearch/pull/14674)) - Bump `opentelemetry-semconv` from 1.25.0-alpha to 1.26.0-alpha ([#14674](https://github.com/opensearch-project/OpenSearch/pull/14674)) - Bump `azure-identity` from 1.11.4 to 1.13.0, Bump `msal4j` from 1.14.3 to 1.15.1, Bump `msal4j-persistence-extension` from 1.2.0 to 1.3.0 ([#14506](https://github.com/opensearch-project/OpenSearch/pull/14673)) diff --git a/settings.gradle b/settings.gradle index a96d00a4ab863..ae9f5384be592 100644 --- a/settings.gradle +++ b/settings.gradle @@ -10,7 +10,7 @@ */ plugins { - id "com.gradle.develocity" version "3.17.5" + id "com.gradle.develocity" version "3.17.6" } ext.disableBuildCache = hasProperty('DISABLE_BUILD_CACHE') || System.getenv().containsKey('DISABLE_BUILD_CACHE') From 11a9730196ba9f789c5114033aa1596d86013880 Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Mon, 22 Jul 2024 16:47:01 -0400 Subject: [PATCH 098/167] Bump org.jline:jline in /test/fixtures/hdfs-fixture (#14859) Bumps [org.jline:jline](https://github.com/jline/jline3) from 3.26.2 to 3.26.3. - [Release notes](https://github.com/jline/jline3/releases) - [Changelog](https://github.com/jline/jline3/blob/master/changelog.md) - [Commits](https://github.com/jline/jline3/compare/jline-parent-3.26.2...jline-parent-3.26.3) --- updated-dependencies: - dependency-name: org.jline:jline dependency-type: direct:production update-type: version-update:semver-patch ... 
Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> --- test/fixtures/hdfs-fixture/build.gradle | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/test/fixtures/hdfs-fixture/build.gradle b/test/fixtures/hdfs-fixture/build.gradle index a3c2932be64c4..9b8f62b8c55b8 100644 --- a/test/fixtures/hdfs-fixture/build.gradle +++ b/test/fixtures/hdfs-fixture/build.gradle @@ -76,7 +76,7 @@ dependencies { api "ch.qos.logback:logback-core:1.5.6" api "ch.qos.logback:logback-classic:1.2.13" api "org.jboss.xnio:xnio-nio:3.8.16.Final" - api 'org.jline:jline:3.26.2' + api 'org.jline:jline:3.26.3' api 'org.apache.commons:commons-configuration2:2.11.0' api 'com.nimbusds:nimbus-jose-jwt:9.40' api ('org.apache.kerby:kerb-admin:2.0.3') { From 4e45c9ed68d7a4ba77c8c3406453c05bede170e2 Mon Sep 17 00:00:00 2001 From: ebraminio Date: Tue, 23 Jul 2024 00:55:43 +0330 Subject: [PATCH 099/167] Use Lucene provided Persian stem (#14847) The Lucene-provided Persian stemmer apparently isn't hooked up yet; this change wires it in, based on what is done for Arabic stem support. Signed-off-by: Ebrahim Byagowi Signed-off-by: Daniel (dB.) Doubrovkine Co-authored-by: Daniel (dB.) Doubrovkine --- CHANGELOG.md | 1 + .../common/CommonAnalysisModulePlugin.java | 3 ++ .../common/PersianStemTokenFilterFactory.java | 52 +++++++++++++++++++ .../common/StemmerTokenFilterFactory.java | 3 ++ .../common/CommonAnalysisFactoryTests.java | 2 + .../test/analysis-common/40_token_filters.yml | 31 +++++++++++ .../analysis/AnalysisFactoryTestCase.java | 2 +- 7 files changed, 93 insertions(+), 1 deletion(-) create mode 100644 modules/analysis-common/src/main/java/org/opensearch/analysis/common/PersianStemTokenFilterFactory.java diff --git a/CHANGELOG.md b/CHANGELOG.md index fdce3a5e24342..66322087a73c4 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -26,6 +26,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), - Optimize TransportNodesAction to not send DiscoveryNodes for NodeStats, NodesInfo and ClusterStats call ([14749](https://github.com/opensearch-project/OpenSearch/pull/14749)) - Reduce logging in DEBUG for MasterService:run ([#14795](https://github.com/opensearch-project/OpenSearch/pull/14795)) - Enabling term version check on local state for all ClusterManager Read Transport Actions ([#14273](https://github.com/opensearch-project/OpenSearch/pull/14273)) +- Add persian_stem filter (([#14847](https://github.com/opensearch-project/OpenSearch/pull/14847))) ### Dependencies - Bump `org.gradle.test-retry` from 1.5.8 to 1.5.9 ([#13442](https://github.com/opensearch-project/OpenSearch/pull/13442)) diff --git a/modules/analysis-common/src/main/java/org/opensearch/analysis/common/CommonAnalysisModulePlugin.java b/modules/analysis-common/src/main/java/org/opensearch/analysis/common/CommonAnalysisModulePlugin.java index cf2736a8583d2..f14e499081ce9 100644 --- a/modules/analysis-common/src/main/java/org/opensearch/analysis/common/CommonAnalysisModulePlugin.java +++ b/modules/analysis-common/src/main/java/org/opensearch/analysis/common/CommonAnalysisModulePlugin.java @@ -75,6 +75,7 @@ import org.apache.lucene.analysis.eu.BasqueAnalyzer; import org.apache.lucene.analysis.fa.PersianAnalyzer; import org.apache.lucene.analysis.fa.PersianNormalizationFilter; +import org.apache.lucene.analysis.fa.PersianStemFilter; import org.apache.lucene.analysis.fi.FinnishAnalyzer; import org.apache.lucene.analysis.fr.FrenchAnalyzer;
import org.apache.lucene.analysis.ga.IrishAnalyzer; @@ -315,6 +316,7 @@ public Map> getTokenFilters() { filters.put("pattern_capture", requiresAnalysisSettings(PatternCaptureGroupTokenFilterFactory::new)); filters.put("pattern_replace", requiresAnalysisSettings(PatternReplaceTokenFilterFactory::new)); filters.put("persian_normalization", PersianNormalizationFilterFactory::new); + filters.put("persian_stem", PersianStemTokenFilterFactory::new); filters.put("porter_stem", PorterStemTokenFilterFactory::new); filters.put( "predicate_token_filter", @@ -558,6 +560,7 @@ public List getPreConfiguredTokenFilters() { ); })); filters.add(PreConfiguredTokenFilter.singleton("persian_normalization", true, PersianNormalizationFilter::new)); + filters.add(PreConfiguredTokenFilter.singleton("persian_stem", true, PersianStemFilter::new)); filters.add(PreConfiguredTokenFilter.singleton("porter_stem", false, PorterStemFilter::new)); filters.add(PreConfiguredTokenFilter.singleton("reverse", false, ReverseStringFilter::new)); filters.add(PreConfiguredTokenFilter.singleton("russian_stem", false, input -> new SnowballFilter(input, "Russian"))); diff --git a/modules/analysis-common/src/main/java/org/opensearch/analysis/common/PersianStemTokenFilterFactory.java b/modules/analysis-common/src/main/java/org/opensearch/analysis/common/PersianStemTokenFilterFactory.java new file mode 100644 index 0000000000000..afe8058343e17 --- /dev/null +++ b/modules/analysis-common/src/main/java/org/opensearch/analysis/common/PersianStemTokenFilterFactory.java @@ -0,0 +1,52 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +/* + * Modifications Copyright OpenSearch Contributors. See + * GitHub history for details. 
+ */ + +package org.opensearch.analysis.common; + +import org.apache.lucene.analysis.TokenStream; +import org.apache.lucene.analysis.fa.PersianStemFilter; +import org.opensearch.common.settings.Settings; +import org.opensearch.env.Environment; +import org.opensearch.index.IndexSettings; +import org.opensearch.index.analysis.AbstractTokenFilterFactory; + +public class PersianStemTokenFilterFactory extends AbstractTokenFilterFactory { + + PersianStemTokenFilterFactory(IndexSettings indexSettings, Environment environment, String name, Settings settings) { + super(indexSettings, name, settings); + } + + @Override + public TokenStream create(TokenStream tokenStream) { + return new PersianStemFilter(tokenStream); + } +} diff --git a/modules/analysis-common/src/main/java/org/opensearch/analysis/common/StemmerTokenFilterFactory.java b/modules/analysis-common/src/main/java/org/opensearch/analysis/common/StemmerTokenFilterFactory.java index 5506626e40da0..e81f3c6cc09cc 100644 --- a/modules/analysis-common/src/main/java/org/opensearch/analysis/common/StemmerTokenFilterFactory.java +++ b/modules/analysis-common/src/main/java/org/opensearch/analysis/common/StemmerTokenFilterFactory.java @@ -47,6 +47,7 @@ import org.apache.lucene.analysis.en.KStemFilter; import org.apache.lucene.analysis.en.PorterStemFilter; import org.apache.lucene.analysis.es.SpanishLightStemFilter; +import org.apache.lucene.analysis.fa.PersianStemFilter; import org.apache.lucene.analysis.fi.FinnishLightStemFilter; import org.apache.lucene.analysis.fr.FrenchLightStemFilter; import org.apache.lucene.analysis.fr.FrenchMinimalStemFilter; @@ -239,6 +240,8 @@ public TokenStream create(TokenStream tokenStream) { return new NorwegianLightStemFilter(tokenStream, NorwegianLightStemmer.NYNORSK); } else if ("minimal_nynorsk".equalsIgnoreCase(language) || "minimalNynorsk".equalsIgnoreCase(language)) { return new NorwegianMinimalStemFilter(tokenStream, NorwegianLightStemmer.NYNORSK); + } else if ("persian".equalsIgnoreCase(language)) { + return new PersianStemFilter(tokenStream); // Portuguese stemmers } else if ("portuguese".equalsIgnoreCase(language)) { diff --git a/modules/analysis-common/src/test/java/org/opensearch/analysis/common/CommonAnalysisFactoryTests.java b/modules/analysis-common/src/test/java/org/opensearch/analysis/common/CommonAnalysisFactoryTests.java index 11713f52f5b18..7e3140f8bcba3 100644 --- a/modules/analysis-common/src/test/java/org/opensearch/analysis/common/CommonAnalysisFactoryTests.java +++ b/modules/analysis-common/src/test/java/org/opensearch/analysis/common/CommonAnalysisFactoryTests.java @@ -158,6 +158,7 @@ protected Map> getTokenFilters() { filters.put("brazilianstem", BrazilianStemTokenFilterFactory.class); filters.put("czechstem", CzechStemTokenFilterFactory.class); filters.put("germanstem", GermanStemTokenFilterFactory.class); + filters.put("persianstem", PersianStemTokenFilterFactory.class); filters.put("telugunormalization", TeluguNormalizationFilterFactory.class); filters.put("telugustem", TeluguStemFilterFactory.class); // this filter is not exposed and should only be used internally @@ -220,6 +221,7 @@ protected Map> getPreConfiguredTokenFilters() { filters.put("ngram", null); filters.put("nGram", null); filters.put("persian_normalization", null); + filters.put("persian_stem", null); filters.put("porter_stem", null); filters.put("reverse", ReverseStringFilterFactory.class); filters.put("russian_stem", SnowballPorterFilterFactory.class); diff --git 
a/modules/analysis-common/src/yamlRestTest/resources/rest-api-spec/test/analysis-common/40_token_filters.yml b/modules/analysis-common/src/yamlRestTest/resources/rest-api-spec/test/analysis-common/40_token_filters.yml index 802c79c780689..c6b075571f221 100644 --- a/modules/analysis-common/src/yamlRestTest/resources/rest-api-spec/test/analysis-common/40_token_filters.yml +++ b/modules/analysis-common/src/yamlRestTest/resources/rest-api-spec/test/analysis-common/40_token_filters.yml @@ -1781,6 +1781,37 @@ - length: { tokens: 1 } - match: { tokens.0.token: abschliess } +--- +"persian_stem": + - do: + indices.create: + index: test + body: + settings: + analysis: + filter: + my_persian_stem: + type: persian_stem + - do: + indices.analyze: + index: test + body: + text: جامدات + tokenizer: keyword + filter: [my_persian_stem] + - length: { tokens: 1 } + - match: { tokens.0.token: جامد } + + # Test pre-configured token filter too: + - do: + indices.analyze: + body: + text: جامدات + tokenizer: keyword + filter: [persian_stem] + - length: { tokens: 1 } + - match: { tokens.0.token: جامد } + --- "russian_stem": - do: diff --git a/test/framework/src/main/java/org/opensearch/indices/analysis/AnalysisFactoryTestCase.java b/test/framework/src/main/java/org/opensearch/indices/analysis/AnalysisFactoryTestCase.java index 5231fe095f0f0..23cf4d47a49d9 100644 --- a/test/framework/src/main/java/org/opensearch/indices/analysis/AnalysisFactoryTestCase.java +++ b/test/framework/src/main/java/org/opensearch/indices/analysis/AnalysisFactoryTestCase.java @@ -139,6 +139,7 @@ public abstract class AnalysisFactoryTestCase extends OpenSearchTestCase { .put("patterncapturegroup", MovedToAnalysisCommon.class) .put("patternreplace", MovedToAnalysisCommon.class) .put("persiannormalization", MovedToAnalysisCommon.class) + .put("persianstem", MovedToAnalysisCommon.class) .put("porterstem", MovedToAnalysisCommon.class) .put("portuguesestem", MovedToAnalysisCommon.class) .put("portugueselightstem", MovedToAnalysisCommon.class) @@ -219,7 +220,6 @@ public abstract class AnalysisFactoryTestCase extends OpenSearchTestCase { .put("spanishpluralstem", Void.class) // LUCENE-10352 .put("daitchmokotoffsoundex", Void.class) - .put("persianstem", Void.class) // https://github.com/apache/lucene/pull/12169 .put("word2vecsynonym", Void.class) // https://github.com/apache/lucene/pull/12915 From 58451061e59c0d811a70367b75d7af6671ee9911 Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Mon, 22 Jul 2024 17:27:43 -0400 Subject: [PATCH 100/167] Bump actions/checkout from 2 to 4 (#14858) * Bump actions/checkout from 2 to 4 Bumps [actions/checkout](https://github.com/actions/checkout) from 2 to 4. - [Release notes](https://github.com/actions/checkout/releases) - [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md) - [Commits](https://github.com/actions/checkout/compare/v2...v4) --- updated-dependencies: - dependency-name: actions/checkout dependency-type: direct:production update-type: version-update:semver-major ... 
Signed-off-by: dependabot[bot] * Update changelog Signed-off-by: dependabot[bot] --------- Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: dependabot[bot] --- .github/workflows/benchmark-pull-request.yml | 4 ++-- CHANGELOG.md | 1 + 2 files changed, 3 insertions(+), 2 deletions(-) diff --git a/.github/workflows/benchmark-pull-request.yml b/.github/workflows/benchmark-pull-request.yml index 2e2e83eb132de..9d83331e81d5a 100644 --- a/.github/workflows/benchmark-pull-request.yml +++ b/.github/workflows/benchmark-pull-request.yml @@ -13,7 +13,7 @@ jobs: pull-requests: write steps: - name: Checkout Repository - uses: actions/checkout@v3 + uses: actions/checkout@v4 - name: Set up required env vars run: | echo "PR_NUMBER=${{ github.event.issue.number }}" >> $GITHUB_ENV @@ -117,7 +117,7 @@ jobs: echo "prHeadRepo=$headRepo" >> $GITHUB_ENV echo "prHeadRef=$headRef" >> $GITHUB_ENV - name: Checkout PR Repo - uses: actions/checkout@v2 + uses: actions/checkout@v4 with: repository: ${{ env.prHeadRepo }} ref: ${{ env.prHeadRef }} diff --git a/CHANGELOG.md b/CHANGELOG.md index 66322087a73c4..f90424ab07870 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -48,6 +48,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), - Bump `net.minidev:accessors-smart` from 2.5.0 to 2.5.1 ([#14673](https://github.com/opensearch-project/OpenSearch/pull/14673)) - Bump `jackson` from 2.17.1 to 2.17.2 ([#14687](https://github.com/opensearch-project/OpenSearch/pull/14687)) - Bump `net.minidev:json-smart` from 2.5.0 to 2.5.1 ([#14748](https://github.com/opensearch-project/OpenSearch/pull/14748)) +- Bump `actions/checkout` from 2 to 4 ([#14858](https://github.com/opensearch-project/OpenSearch/pull/14858)) ### Changed - [Tiered Caching] Move query recomputation logic outside write lock ([#14187](https://github.com/opensearch-project/OpenSearch/pull/14187)) From 97f26ccfd56bc52f91cad74368662f5cfd5811df Mon Sep 17 00:00:00 2001 From: Liyun Xiu Date: Tue, 23 Jul 2024 05:35:07 +0800 Subject: [PATCH 101/167] Deprecate batch_size parameter on bulk API (#14725) By default, the full _bulk payload will be passed to ingest processors as a single batch, with any sub-batching logic to be implemented by each processor if necessary.
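For illustration, a minimal self-contained sketch of the sublist arithmetic that IngestService.prepareBatches uses after this change (the class and method names below are illustrative, not OpenSearch source). Stepping the loop by Math.min(size, batchSize) rather than by batchSize alone avoids integer overflow now that the default batch size is Integer.MAX_VALUE:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of how bulk documents are split into batches. With the new default
// batch size of Integer.MAX_VALUE, the whole payload for a given
// (index, pipelines) key lands in a single batch.
public class BatchSplitSketch {
    static <T> List<List<T>> split(List<T> requests, int batchSize) {
        List<List<T>> batches = new ArrayList<>();
        // Stepping by Math.min(...) instead of batchSize alone avoids int
        // overflow when batchSize is Integer.MAX_VALUE.
        for (int i = 0; i < requests.size(); i += Math.min(requests.size(), batchSize)) {
            batches.add(new ArrayList<>(requests.subList(i, i + Math.min(batchSize, requests.size() - i))));
        }
        return batches;
    }

    public static void main(String[] args) {
        List<String> docs = List.of("doc1", "doc2", "doc3");
        System.out.println(split(docs, 2));                 // [[doc1, doc2], [doc3]] with an explicit batch_size
        System.out.println(split(docs, Integer.MAX_VALUE)); // [[doc1, doc2, doc3]] with the new default
    }
}
```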
Signed-off-by: Liyun Xiu --- CHANGELOG.md | 1 + .../rest-api-spec/test/ingest/70_bulk.yml | 33 +------- .../org/opensearch/ingest/IngestClientIT.java | 81 +++++++++++++++++++ .../opensearch/action/bulk/BulkRequest.java | 2 +- .../org/opensearch/ingest/IngestService.java | 64 +-------------- .../rest/action/document/RestBulkAction.java | 8 +- .../opensearch/ingest/IngestServiceTests.java | 53 ++++++++++-- 7 files changed, 141 insertions(+), 101 deletions(-) diff --git a/CHANGELOG.md b/CHANGELOG.md index f90424ab07870..0931ff63c145b 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -60,6 +60,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), - Allow system index warning in OpenSearchRestTestCase.refreshAllIndices ([#14635](https://github.com/opensearch-project/OpenSearch/pull/14635)) ### Deprecated +- Deprecate batch_size parameter on bulk API ([#14725](https://github.com/opensearch-project/OpenSearch/pull/14725)) ### Removed - Remove query categorization changes ([#14759](https://github.com/opensearch-project/OpenSearch/pull/14759)) diff --git a/modules/ingest-common/src/yamlRestTest/resources/rest-api-spec/test/ingest/70_bulk.yml b/modules/ingest-common/src/yamlRestTest/resources/rest-api-spec/test/ingest/70_bulk.yml index 36b2b5351dcad..47cc80d6df310 100644 --- a/modules/ingest-common/src/yamlRestTest/resources/rest-api-spec/test/ingest/70_bulk.yml +++ b/modules/ingest-common/src/yamlRestTest/resources/rest-api-spec/test/ingest/70_bulk.yml @@ -207,7 +207,7 @@ teardown: - match: { _source: {"f1": "v2", "f2": 47, "field1": "value1", "field2": "value2"}} --- -"Test bulk API with batch enabled happy case": +"Test bulk API with default batch size": - skip: version: " - 2.13.99" reason: "Added in 2.14.0" @@ -215,7 +215,6 @@ teardown: - do: bulk: refresh: true - batch_size: 2 pipeline: "pipeline1" body: - '{"index": {"_index": "test_index", "_id": "test_id1"}}' @@ -245,36 +244,6 @@ teardown: id: test_id3 - match: { _source: { "text": "text3", "field1": "value1" } } ---- -"Test bulk API with batch_size missing": - - skip: - version: " - 2.13.99" - reason: "Added in 2.14.0" - - - do: - bulk: - refresh: true - pipeline: "pipeline1" - body: - - '{"index": {"_index": "test_index", "_id": "test_id1"}}' - - '{"text": "text1"}' - - '{"index": {"_index": "test_index", "_id": "test_id2"}}' - - '{"text": "text2"}' - - - match: { errors: false } - - - do: - get: - index: test_index - id: test_id1 - - match: { _source: { "text": "text1", "field1": "value1" } } - - - do: - get: - index: test_index - id: test_id2 - - match: { _source: { "text": "text2", "field1": "value1" } } - --- "Test bulk API with invalid batch_size": - skip: diff --git a/server/src/internalClusterTest/java/org/opensearch/ingest/IngestClientIT.java b/server/src/internalClusterTest/java/org/opensearch/ingest/IngestClientIT.java index 657d0f178e096..0eb37a7b25618 100644 --- a/server/src/internalClusterTest/java/org/opensearch/ingest/IngestClientIT.java +++ b/server/src/internalClusterTest/java/org/opensearch/ingest/IngestClientIT.java @@ -315,6 +315,87 @@ public void testBulkWithUpsert() throws Exception { assertThat(upserted.get("processed"), equalTo(true)); } + public void testSingleDocIngestFailure() throws Exception { + createIndex("test"); + BytesReference source = BytesReference.bytes( + jsonBuilder().startObject() + .field("description", "my_pipeline") + .startArray("processors") + .startObject() + .startObject("test") + .endObject() + .endObject() + .endArray() + .endObject() + ); 
+ PutPipelineRequest putPipelineRequest = new PutPipelineRequest("_id", source, MediaTypeRegistry.JSON); + client().admin().cluster().putPipeline(putPipelineRequest).get(); + + GetPipelineRequest getPipelineRequest = new GetPipelineRequest("_id"); + GetPipelineResponse getResponse = client().admin().cluster().getPipeline(getPipelineRequest).get(); + assertThat(getResponse.isFound(), is(true)); + assertThat(getResponse.pipelines().size(), equalTo(1)); + assertThat(getResponse.pipelines().get(0).getId(), equalTo("_id")); + + assertThrows( + IllegalArgumentException.class, + () -> client().prepareIndex("test") + .setId("1") + .setPipeline("_id") + .setSource(Requests.INDEX_CONTENT_TYPE, "field", "value", "fail", true) + .get() + ); + + DeletePipelineRequest deletePipelineRequest = new DeletePipelineRequest("_id"); + AcknowledgedResponse response = client().admin().cluster().deletePipeline(deletePipelineRequest).get(); + assertThat(response.isAcknowledged(), is(true)); + + getResponse = client().admin().cluster().prepareGetPipeline("_id").get(); + assertThat(getResponse.isFound(), is(false)); + assertThat(getResponse.pipelines().size(), equalTo(0)); + } + + public void testSingleDocIngestDrop() throws Exception { + createIndex("test"); + BytesReference source = BytesReference.bytes( + jsonBuilder().startObject() + .field("description", "my_pipeline") + .startArray("processors") + .startObject() + .startObject("test") + .endObject() + .endObject() + .endArray() + .endObject() + ); + PutPipelineRequest putPipelineRequest = new PutPipelineRequest("_id", source, MediaTypeRegistry.JSON); + client().admin().cluster().putPipeline(putPipelineRequest).get(); + + GetPipelineRequest getPipelineRequest = new GetPipelineRequest("_id"); + GetPipelineResponse getResponse = client().admin().cluster().getPipeline(getPipelineRequest).get(); + assertThat(getResponse.isFound(), is(true)); + assertThat(getResponse.pipelines().size(), equalTo(1)); + assertThat(getResponse.pipelines().get(0).getId(), equalTo("_id")); + + DocWriteResponse indexResponse = client().prepareIndex("test") + .setId("1") + .setPipeline("_id") + .setSource(Requests.INDEX_CONTENT_TYPE, "field", "value", "drop", true) + .get(); + assertEquals(DocWriteResponse.Result.NOOP, indexResponse.getResult()); + + Map doc = client().prepareGet("test", "1").get().getSourceAsMap(); + assertNull(doc); + + DeletePipelineRequest deletePipelineRequest = new DeletePipelineRequest("_id"); + AcknowledgedResponse response = client().admin().cluster().deletePipeline(deletePipelineRequest).get(); + assertThat(response.isAcknowledged(), is(true)); + + getResponse = client().admin().cluster().prepareGetPipeline("_id").get(); + assertThat(getResponse.isFound(), is(false)); + assertThat(getResponse.pipelines().size(), equalTo(0)); + } + public void test() throws Exception { BytesReference source = BytesReference.bytes( jsonBuilder().startObject() diff --git a/server/src/main/java/org/opensearch/action/bulk/BulkRequest.java b/server/src/main/java/org/opensearch/action/bulk/BulkRequest.java index 7614206cd226f..e686585095962 100644 --- a/server/src/main/java/org/opensearch/action/bulk/BulkRequest.java +++ b/server/src/main/java/org/opensearch/action/bulk/BulkRequest.java @@ -96,7 +96,7 @@ public class BulkRequest extends ActionRequest implements CompositeIndicesReques private String globalRouting; private String globalIndex; private Boolean globalRequireAlias; - private int batchSize = 1; + private int batchSize = Integer.MAX_VALUE; private long sizeInBytes = 0; diff 
--git a/server/src/main/java/org/opensearch/ingest/IngestService.java b/server/src/main/java/org/opensearch/ingest/IngestService.java index 2281ccd4c0382..17eb23422e68b 100644 --- a/server/src/main/java/org/opensearch/ingest/IngestService.java +++ b/server/src/main/java/org/opensearch/ingest/IngestService.java @@ -525,61 +525,7 @@ public void onFailure(Exception e) { @Override protected void doRun() { - int batchSize = originalBulkRequest.batchSize(); - if (shouldExecuteBulkRequestInBatch(originalBulkRequest.requests().size(), batchSize)) { - runBulkRequestInBatch(numberOfActionRequests, actionRequests, onFailure, onCompletion, onDropped, originalBulkRequest); - return; - } - - final Thread originalThread = Thread.currentThread(); - final AtomicInteger counter = new AtomicInteger(numberOfActionRequests); - int i = 0; - for (DocWriteRequest actionRequest : actionRequests) { - IndexRequest indexRequest = TransportBulkAction.getIndexWriteRequest(actionRequest); - if (indexRequest == null) { - if (counter.decrementAndGet() == 0) { - onCompletion.accept(originalThread, null); - } - assert counter.get() >= 0; - i++; - continue; - } - final String pipelineId = indexRequest.getPipeline(); - indexRequest.setPipeline(NOOP_PIPELINE_NAME); - final String finalPipelineId = indexRequest.getFinalPipeline(); - indexRequest.setFinalPipeline(NOOP_PIPELINE_NAME); - boolean hasFinalPipeline = true; - final List pipelines; - if (IngestService.NOOP_PIPELINE_NAME.equals(pipelineId) == false - && IngestService.NOOP_PIPELINE_NAME.equals(finalPipelineId) == false) { - pipelines = Arrays.asList(pipelineId, finalPipelineId); - } else if (IngestService.NOOP_PIPELINE_NAME.equals(pipelineId) == false) { - pipelines = Collections.singletonList(pipelineId); - hasFinalPipeline = false; - } else if (IngestService.NOOP_PIPELINE_NAME.equals(finalPipelineId) == false) { - pipelines = Collections.singletonList(finalPipelineId); - } else { - if (counter.decrementAndGet() == 0) { - onCompletion.accept(originalThread, null); - } - assert counter.get() >= 0; - i++; - continue; - } - - executePipelines( - i, - pipelines.iterator(), - hasFinalPipeline, - indexRequest, - onDropped, - onFailure, - counter, - onCompletion, - originalThread - ); - i++; - } + runBulkRequestInBatch(numberOfActionRequests, actionRequests, onFailure, onCompletion, onDropped, originalBulkRequest); } }); } @@ -635,7 +581,7 @@ private void runBulkRequestInBatch( i++; } - int batchSize = originalBulkRequest.batchSize(); + int batchSize = Math.min(numberOfActionRequests, originalBulkRequest.batchSize()); List> batches = prepareBatches(batchSize, indexRequestWrappers); logger.debug("batchSize: {}, batches: {}", batchSize, batches.size()); @@ -654,10 +600,6 @@ private void runBulkRequestInBatch( } } - private boolean shouldExecuteBulkRequestInBatch(int documentSize, int batchSize) { - return documentSize > 1 && batchSize > 1; - } - /** * IndexRequests are grouped by unique (index + pipeline_ids) before batching. * Only IndexRequests in the same group could be batched. 
It's to ensure batched documents always @@ -685,7 +627,7 @@ static List> prepareBatches(int batchSize, List> batchedIndexRequests = new ArrayList<>(); for (Map.Entry> indexRequestsPerKey : indexRequestsPerIndexAndPipelines.entrySet()) { - for (int i = 0; i < indexRequestsPerKey.getValue().size(); i += batchSize) { + for (int i = 0; i < indexRequestsPerKey.getValue().size(); i += Math.min(indexRequestsPerKey.getValue().size(), batchSize)) { batchedIndexRequests.add( new ArrayList<>( indexRequestsPerKey.getValue().subList(i, i + Math.min(batchSize, indexRequestsPerKey.getValue().size() - i)) diff --git a/server/src/main/java/org/opensearch/rest/action/document/RestBulkAction.java b/server/src/main/java/org/opensearch/rest/action/document/RestBulkAction.java index 0bc4234c9b8b8..ce52c5620b968 100644 --- a/server/src/main/java/org/opensearch/rest/action/document/RestBulkAction.java +++ b/server/src/main/java/org/opensearch/rest/action/document/RestBulkAction.java @@ -38,6 +38,7 @@ import org.opensearch.action.support.ActiveShardCount; import org.opensearch.client.Requests; import org.opensearch.client.node.NodeClient; +import org.opensearch.common.logging.DeprecationLogger; import org.opensearch.common.settings.Settings; import org.opensearch.rest.BaseRestHandler; import org.opensearch.rest.RestRequest; @@ -66,6 +67,8 @@ public class RestBulkAction extends BaseRestHandler { private final boolean allowExplicitIndex; + private static final DeprecationLogger deprecationLogger = DeprecationLogger.getLogger(RestBulkAction.class); + static final String BATCH_SIZE_DEPRECATED_MESSAGE = "The batch size option in bulk API is deprecated and will be removed in 3.0."; public RestBulkAction(Settings settings) { this.allowExplicitIndex = MULTI_ALLOW_EXPLICIT_INDEX.get(settings); @@ -97,7 +100,10 @@ public RestChannelConsumer prepareRequest(final RestRequest request, final NodeC Boolean defaultRequireAlias = request.paramAsBoolean(DocWriteRequest.REQUIRE_ALIAS, null); bulkRequest.timeout(request.paramAsTime("timeout", BulkShardRequest.DEFAULT_TIMEOUT)); bulkRequest.setRefreshPolicy(request.param("refresh")); - bulkRequest.batchSize(request.paramAsInt("batch_size", 1)); + if (request.hasParam("batch_size")) { + deprecationLogger.deprecate("batch_size_deprecation", BATCH_SIZE_DEPRECATED_MESSAGE); + } + bulkRequest.batchSize(request.paramAsInt("batch_size", Integer.MAX_VALUE)); bulkRequest.add( request.requiredContent(), defaultIndex, diff --git a/server/src/test/java/org/opensearch/ingest/IngestServiceTests.java b/server/src/test/java/org/opensearch/ingest/IngestServiceTests.java index e61fbb6e1dbff..9d03127692975 100644 --- a/server/src/test/java/org/opensearch/ingest/IngestServiceTests.java +++ b/server/src/test/java/org/opensearch/ingest/IngestServiceTests.java @@ -1134,10 +1134,14 @@ public void testBulkRequestExecutionWithFailures() throws Exception { Exception error = new RuntimeException(); doAnswer(args -> { @SuppressWarnings("unchecked") - BiConsumer handler = (BiConsumer) args.getArguments()[1]; - handler.accept(null, error); + List ingestDocumentWrappers = (List) args.getArguments()[0]; + Consumer> handler = (Consumer) args.getArguments()[1]; + for (IngestDocumentWrapper wrapper : ingestDocumentWrappers) { + wrapper.update(wrapper.getIngestDocument(), error); + } + handler.accept(ingestDocumentWrappers); return null; - }).when(processor).execute(any(), any()); + }).when(processor).batchExecute(any(), any()); IngestService ingestService = createWithProcessors( Collections.singletonMap("mock", 
(factories, tag, description, config) -> processor) ); @@ -1192,10 +1196,11 @@ public void testBulkRequestExecution() throws Exception { when(processor.getTag()).thenReturn("mockTag"); doAnswer(args -> { @SuppressWarnings("unchecked") - BiConsumer handler = (BiConsumer) args.getArguments()[1]; - handler.accept(RandomDocumentPicks.randomIngestDocument(random()), null); + List ingestDocumentWrappers = (List) args.getArguments()[0]; + Consumer> handler = (Consumer) args.getArguments()[1]; + handler.accept(ingestDocumentWrappers); return null; - }).when(processor).execute(any(), any()); + }).when(processor).batchExecute(any(), any()); Map map = new HashMap<>(2); map.put("mock", (factories, tag, description, config) -> processor); @@ -1957,6 +1962,42 @@ public void testExecuteBulkRequestInBatchWithExceptionAndDropInCallback() { verify(mockCompoundProcessor, never()).execute(any(), any()); } + public void testExecuteBulkRequestInBatchWithDefaultBatchSize() { + CompoundProcessor mockCompoundProcessor = mockCompoundProcessor(); + IngestService ingestService = createWithProcessors( + Collections.singletonMap("mock", (factories, tag, description, config) -> mockCompoundProcessor) + ); + createPipeline("_id", ingestService); + BulkRequest bulkRequest = new BulkRequest(); + IndexRequest indexRequest1 = new IndexRequest("_index").id("_id1").source(emptyMap()).setPipeline("_id").setFinalPipeline("_none"); + bulkRequest.add(indexRequest1); + IndexRequest indexRequest2 = new IndexRequest("_index").id("_id2").source(emptyMap()).setPipeline("_id").setFinalPipeline("_none"); + bulkRequest.add(indexRequest2); + IndexRequest indexRequest3 = new IndexRequest("_index").id("_id3").source(emptyMap()).setPipeline("_none").setFinalPipeline("_id"); + bulkRequest.add(indexRequest3); + IndexRequest indexRequest4 = new IndexRequest("_index").id("_id4").source(emptyMap()).setPipeline("_id").setFinalPipeline("_none"); + bulkRequest.add(indexRequest4); + @SuppressWarnings("unchecked") + final Map failureHandler = new HashMap<>(); + final Map completionHandler = new HashMap<>(); + final List dropHandler = new ArrayList<>(); + ingestService.executeBulkRequest( + 4, + bulkRequest.requests(), + failureHandler::put, + completionHandler::put, + dropHandler::add, + Names.WRITE, + bulkRequest + ); + assertTrue(failureHandler.isEmpty()); + assertTrue(dropHandler.isEmpty()); + assertEquals(1, completionHandler.size()); + assertNull(completionHandler.get(Thread.currentThread())); + verify(mockCompoundProcessor, times(1)).batchExecute(any(), any()); + verify(mockCompoundProcessor, never()).execute(any(), any()); + } + public void testPrepareBatches_same_index_pipeline() { IngestService.IndexRequestWrapper wrapper1 = createIndexRequestWrapper("index1", Collections.singletonList("p1")); IngestService.IndexRequestWrapper wrapper2 = createIndexRequestWrapper("index1", Collections.singletonList("p1")); From 90d5500ecbf13b08d2f6a9fa6ad67119acd37a17 Mon Sep 17 00:00:00 2001 From: Finn Date: Mon, 22 Jul 2024 16:59:47 -0700 Subject: [PATCH 102/167] Add perms for remote snapshot cache eviction on scripted query (#14411) Signed-off-by: Finn Carroll --- CHANGELOG.md | 1 + .../store/remote/utils/TransferManager.java | 74 +++++++++---------- 2 files changed, 38 insertions(+), 37 deletions(-) diff --git a/CHANGELOG.md b/CHANGELOG.md index 0931ff63c145b..ec5b838a542c4 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -88,6 +88,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), - Fix NPE in ReplicaShardAllocator 
([#14385](https://github.com/opensearch-project/OpenSearch/pull/14385)) - Fix constant_keyword field type used when creating index ([#14807](https://github.com/opensearch-project/OpenSearch/pull/14807)) - Use circuit breaker in InternalHistogram when adding empty buckets ([#14754](https://github.com/opensearch-project/OpenSearch/pull/14754)) +- Fix searchable snapshot failure with scripted fields ([#14411](https://github.com/opensearch-project/OpenSearch/pull/14411)) ### Security diff --git a/server/src/main/java/org/opensearch/index/store/remote/utils/TransferManager.java b/server/src/main/java/org/opensearch/index/store/remote/utils/TransferManager.java index df26f2f0925f6..f07c4832d982c 100644 --- a/server/src/main/java/org/opensearch/index/store/remote/utils/TransferManager.java +++ b/server/src/main/java/org/opensearch/index/store/remote/utils/TransferManager.java @@ -64,16 +64,22 @@ public IndexInput fetchBlob(BlobFetchRequest blobFetchRequest) throws IOExceptio final Path key = blobFetchRequest.getFilePath(); logger.trace("fetchBlob called for {}", key.toString()); - final CachedIndexInput cacheEntry = fileCache.compute(key, (path, cachedIndexInput) -> { - if (cachedIndexInput == null || cachedIndexInput.isClosed()) { - logger.trace("Transfer Manager - IndexInput closed or not in cache"); - // Doesn't exist or is closed, either way create a new one - return new DelayedCreationCachedIndexInput(fileCache, streamReader, blobFetchRequest); - } else { - logger.trace("Transfer Manager - Already in cache"); - // already in the cache and ready to be used (open) - return cachedIndexInput; - } + // We need to do a privileged action here in order to fetch from remote + // and write/evict from local file cache in case this is invoked as a side + // effect of a plugin (such as a scripted search) that doesn't have the + // necessary permissions. + final CachedIndexInput cacheEntry = AccessController.doPrivileged((PrivilegedAction) () -> { + return fileCache.compute(key, (path, cachedIndexInput) -> { + if (cachedIndexInput == null || cachedIndexInput.isClosed()) { + logger.trace("Transfer Manager - IndexInput closed or not in cache"); + // Doesn't exist or is closed, either way create a new one + return new DelayedCreationCachedIndexInput(fileCache, streamReader, blobFetchRequest); + } else { + logger.trace("Transfer Manager - Already in cache"); + // already in the cache and ready to be used (open) + return cachedIndexInput; + } + }); }); // Cache entry was either retrieved from the cache or newly added, either @@ -88,37 +94,31 @@ public IndexInput fetchBlob(BlobFetchRequest blobFetchRequest) throws IOExceptio @SuppressWarnings("removal") private static FileCachedIndexInput createIndexInput(FileCache fileCache, StreamReader streamReader, BlobFetchRequest request) { - // We need to do a privileged action here in order to fetch from remote - // and write to the local file cache in case this is invoked as a side - // effect of a plugin (such as a scripted search) that doesn't have the - // necessary permissions. 
- return AccessController.doPrivileged((PrivilegedAction) () -> { - try { - if (Files.exists(request.getFilePath()) == false) { - logger.trace("Fetching from Remote in createIndexInput of Transfer Manager"); - try ( - OutputStream fileOutputStream = Files.newOutputStream(request.getFilePath()); - OutputStream localFileOutputStream = new BufferedOutputStream(fileOutputStream) - ) { - for (BlobFetchRequest.BlobPart blobPart : request.blobParts()) { - try ( - InputStream snapshotFileInputStream = streamReader.read( - blobPart.getBlobName(), - blobPart.getPosition(), - blobPart.getLength() - ); - ) { - snapshotFileInputStream.transferTo(localFileOutputStream); - } + try { + if (Files.exists(request.getFilePath()) == false) { + logger.trace("Fetching from Remote in createIndexInput of Transfer Manager"); + try ( + OutputStream fileOutputStream = Files.newOutputStream(request.getFilePath()); + OutputStream localFileOutputStream = new BufferedOutputStream(fileOutputStream) + ) { + for (BlobFetchRequest.BlobPart blobPart : request.blobParts()) { + try ( + InputStream snapshotFileInputStream = streamReader.read( + blobPart.getBlobName(), + blobPart.getPosition(), + blobPart.getLength() + ); + ) { + snapshotFileInputStream.transferTo(localFileOutputStream); } } } - final IndexInput luceneIndexInput = request.getDirectory().openInput(request.getFileName(), IOContext.READ); - return new FileCachedIndexInput(fileCache, request.getFilePath(), luceneIndexInput); - } catch (IOException e) { - throw new UncheckedIOException(e); } - }); + final IndexInput luceneIndexInput = request.getDirectory().openInput(request.getFileName(), IOContext.READ); + return new FileCachedIndexInput(fileCache, request.getFilePath(), luceneIndexInput); + } catch (IOException e) { + throw new UncheckedIOException(e); + } } /** From c82a282351b4e913d57b50d2fef94d4f046b155c Mon Sep 17 00:00:00 2001 From: Neetika Singhal Date: Mon, 22 Jul 2024 20:39:14 -0700 Subject: [PATCH 103/167] Add rest, transport layer changes for Hot to warm tiering - dedicated setup (#13980) Signed-off-by: Neetika Singhal --- CHANGELOG.md | 1 + .../org/opensearch/action/ActionModule.java | 9 + .../tiering/HotToWarmTieringAction.java | 28 ++ .../tiering/HotToWarmTieringResponse.java | 157 +++++++++ .../tiering/RestWarmTieringAction.java | 61 ++++ .../indices/tiering/TieringIndexRequest.java | 195 +++++++++++ .../tiering/TieringValidationResult.java | 83 +++++ .../TransportHotToWarmTieringAction.java | 110 ++++++ .../admin/indices/tiering/package-info.java | 36 ++ .../common/settings/IndexScopedSettings.java | 2 +- .../org/opensearch/index/IndexModule.java | 20 ++ .../tiering/TieringRequestValidator.java | 277 +++++++++++++++ .../indices/tiering/package-info.java | 36 ++ .../HotToWarmTieringResponseTests.java | 101 ++++++ .../tiering/TieringIndexRequestTests.java | 79 +++++ .../TransportHotToWarmTieringActionTests.java | 118 +++++++ .../tiering/TieringRequestValidatorTests.java | 318 ++++++++++++++++++ 17 files changed, 1630 insertions(+), 1 deletion(-) create mode 100644 server/src/main/java/org/opensearch/action/admin/indices/tiering/HotToWarmTieringAction.java create mode 100644 server/src/main/java/org/opensearch/action/admin/indices/tiering/HotToWarmTieringResponse.java create mode 100644 server/src/main/java/org/opensearch/action/admin/indices/tiering/RestWarmTieringAction.java create mode 100644 server/src/main/java/org/opensearch/action/admin/indices/tiering/TieringIndexRequest.java create mode 100644 
server/src/main/java/org/opensearch/action/admin/indices/tiering/TieringValidationResult.java create mode 100644 server/src/main/java/org/opensearch/action/admin/indices/tiering/TransportHotToWarmTieringAction.java create mode 100644 server/src/main/java/org/opensearch/action/admin/indices/tiering/package-info.java create mode 100644 server/src/main/java/org/opensearch/indices/tiering/TieringRequestValidator.java create mode 100644 server/src/main/java/org/opensearch/indices/tiering/package-info.java create mode 100644 server/src/test/java/org/opensearch/action/admin/indices/tiering/HotToWarmTieringResponseTests.java create mode 100644 server/src/test/java/org/opensearch/action/admin/indices/tiering/TieringIndexRequestTests.java create mode 100644 server/src/test/java/org/opensearch/action/admin/indices/tiering/TransportHotToWarmTieringActionTests.java create mode 100644 server/src/test/java/org/opensearch/indices/tiering/TieringRequestValidatorTests.java diff --git a/CHANGELOG.md b/CHANGELOG.md index ec5b838a542c4..e5534577a67a6 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -27,6 +27,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), - Reduce logging in DEBUG for MasterService:run ([#14795](https://github.com/opensearch-project/OpenSearch/pull/14795)) - Enabling term version check on local state for all ClusterManager Read Transport Actions ([#14273](https://github.com/opensearch-project/OpenSearch/pull/14273)) - Add persian_stem filter (([#14847](https://github.com/opensearch-project/OpenSearch/pull/14847))) +- Add rest, transport layer changes for hot to warm tiering - dedicated setup (([#13980](https://github.com/opensearch-project/OpenSearch/pull/13980)) ### Dependencies - Bump `org.gradle.test-retry` from 1.5.8 to 1.5.9 ([#13442](https://github.com/opensearch-project/OpenSearch/pull/13442)) diff --git a/server/src/main/java/org/opensearch/action/ActionModule.java b/server/src/main/java/org/opensearch/action/ActionModule.java index 16c15f553951c..574b7029a6501 100644 --- a/server/src/main/java/org/opensearch/action/ActionModule.java +++ b/server/src/main/java/org/opensearch/action/ActionModule.java @@ -216,6 +216,9 @@ import org.opensearch.action.admin.indices.template.put.TransportPutComponentTemplateAction; import org.opensearch.action.admin.indices.template.put.TransportPutComposableIndexTemplateAction; import org.opensearch.action.admin.indices.template.put.TransportPutIndexTemplateAction; +import org.opensearch.action.admin.indices.tiering.HotToWarmTieringAction; +import org.opensearch.action.admin.indices.tiering.RestWarmTieringAction; +import org.opensearch.action.admin.indices.tiering.TransportHotToWarmTieringAction; import org.opensearch.action.admin.indices.upgrade.get.TransportUpgradeStatusAction; import org.opensearch.action.admin.indices.upgrade.get.UpgradeStatusAction; import org.opensearch.action.admin.indices.upgrade.post.TransportUpgradeAction; @@ -634,6 +637,9 @@ public void reg actions.register(CreateSnapshotAction.INSTANCE, TransportCreateSnapshotAction.class); actions.register(CloneSnapshotAction.INSTANCE, TransportCloneSnapshotAction.class); actions.register(RestoreSnapshotAction.INSTANCE, TransportRestoreSnapshotAction.class); + if (FeatureFlags.isEnabled(FeatureFlags.TIERED_REMOTE_INDEX)) { + actions.register(HotToWarmTieringAction.INSTANCE, TransportHotToWarmTieringAction.class); + } actions.register(SnapshotsStatusAction.INSTANCE, TransportSnapshotsStatusAction.class); 
actions.register(ClusterAddWeightedRoutingAction.INSTANCE, TransportAddWeightedRoutingAction.class); @@ -966,6 +972,9 @@ public void initRestHandlers(Supplier nodesInCluster) { registerHandler.accept(new RestNodeAttrsAction()); registerHandler.accept(new RestRepositoriesAction()); registerHandler.accept(new RestSnapshotAction()); + if (FeatureFlags.isEnabled(FeatureFlags.TIERED_REMOTE_INDEX)) { + registerHandler.accept(new RestWarmTieringAction()); + } registerHandler.accept(new RestTemplatesAction()); // Point in time API diff --git a/server/src/main/java/org/opensearch/action/admin/indices/tiering/HotToWarmTieringAction.java b/server/src/main/java/org/opensearch/action/admin/indices/tiering/HotToWarmTieringAction.java new file mode 100644 index 0000000000000..ae34a9a734221 --- /dev/null +++ b/server/src/main/java/org/opensearch/action/admin/indices/tiering/HotToWarmTieringAction.java @@ -0,0 +1,28 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +package org.opensearch.action.admin.indices.tiering; + +import org.opensearch.action.ActionType; +import org.opensearch.common.annotation.ExperimentalApi; + +/** + * Tiering action to move indices from hot to warm + * + * @opensearch.experimental + */ +@ExperimentalApi +public class HotToWarmTieringAction extends ActionType { + + public static final HotToWarmTieringAction INSTANCE = new HotToWarmTieringAction(); + public static final String NAME = "indices:admin/tier/hot_to_warm"; + + private HotToWarmTieringAction() { + super(NAME, HotToWarmTieringResponse::new); + } +} diff --git a/server/src/main/java/org/opensearch/action/admin/indices/tiering/HotToWarmTieringResponse.java b/server/src/main/java/org/opensearch/action/admin/indices/tiering/HotToWarmTieringResponse.java new file mode 100644 index 0000000000000..275decf7a8ea5 --- /dev/null +++ b/server/src/main/java/org/opensearch/action/admin/indices/tiering/HotToWarmTieringResponse.java @@ -0,0 +1,157 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +package org.opensearch.action.admin.indices.tiering; + +import org.opensearch.action.support.master.AcknowledgedResponse; +import org.opensearch.common.annotation.ExperimentalApi; +import org.opensearch.core.common.Strings; +import org.opensearch.core.common.io.stream.StreamInput; +import org.opensearch.core.common.io.stream.StreamOutput; +import org.opensearch.core.common.io.stream.Writeable; +import org.opensearch.core.xcontent.MediaTypeRegistry; +import org.opensearch.core.xcontent.ToXContentFragment; +import org.opensearch.core.xcontent.XContentBuilder; + +import java.io.IOException; +import java.util.Collections; +import java.util.Comparator; +import java.util.List; +import java.util.Objects; +import java.util.stream.Collectors; + +/** + * Response object for an {@link TieringIndexRequest} which is sent to client after the initial verification of the request + * by the backend service. 
The format of the response object will be as below:
+ *
+ * {
+ *     "acknowledged": true/false,
+ *     "failed_indices": [
+ *         {
+ *             "index": "index1",
+ *             "error": "Low disk threshold watermark breached"
+ *         },
+ *         {
+ *             "index": "index2",
+ *             "error": "Index is not a remote store backed index"
+ *         }
+ *     ]
+ * }
+ *
+ * @opensearch.experimental
+ */
+@ExperimentalApi
+public class HotToWarmTieringResponse extends AcknowledgedResponse {
+
+    private final List<IndexResult> failedIndices;
+
+    public HotToWarmTieringResponse(boolean acknowledged) {
+        super(acknowledged);
+        this.failedIndices = Collections.emptyList();
+    }
+
+    public HotToWarmTieringResponse(boolean acknowledged, List<IndexResult> indicesResults) {
+        super(acknowledged);
+        this.failedIndices = (indicesResults == null)
+            ? Collections.emptyList()
+            : indicesResults.stream().sorted(Comparator.comparing(IndexResult::getIndex)).collect(Collectors.toList());
+    }
+
+    public HotToWarmTieringResponse(StreamInput in) throws IOException {
+        super(in);
+        failedIndices = Collections.unmodifiableList(in.readList(IndexResult::new));
+    }
+
+    public List<IndexResult> getFailedIndices() {
+        return this.failedIndices;
+    }
+
+    @Override
+    public void writeTo(StreamOutput out) throws IOException {
+        super.writeTo(out);
+        out.writeList(this.failedIndices);
+    }
+
+    @Override
+    protected void addCustomFields(XContentBuilder builder, Params params) throws IOException {
+        super.addCustomFields(builder, params);
+        builder.startArray("failed_indices");
+
+        for (IndexResult failedIndex : failedIndices) {
+            failedIndex.toXContent(builder, params);
+        }
+        builder.endArray();
+    }
+
+    @Override
+    public String toString() {
+        return Strings.toString(MediaTypeRegistry.JSON, this);
+    }
+
+    /**
+     * Inner class to represent the result of a failed index for tiering.
+     * @opensearch.experimental
+     */
+    @ExperimentalApi
+    public static class IndexResult implements Writeable, ToXContentFragment {
+        private final String index;
+        private final String failureReason;
+
+        public IndexResult(String index, String failureReason) {
+            this.index = index;
+            this.failureReason = failureReason;
+        }
+
+        IndexResult(StreamInput in) throws IOException {
+            this.index = in.readString();
+            this.failureReason = in.readString();
+        }
+
+        public String getIndex() {
+            return index;
+        }
+
+        public String getFailureReason() {
+            return failureReason;
+        }
+
+        @Override
+        public void writeTo(StreamOutput out) throws IOException {
+            out.writeString(index);
+            out.writeString(failureReason);
+        }
+
+        @Override
+        public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {
+            builder.startObject();
+            builder.field("index", index);
+            builder.field("error", failureReason);
+            return builder.endObject();
+        }
+
+        @Override
+        public boolean equals(Object o) {
+            if (this == o) return true;
+            if (o == null || getClass() != o.getClass()) return false;
+            IndexResult that = (IndexResult) o;
+            return Objects.equals(index, that.index) && Objects.equals(failureReason, that.failureReason);
+        }
+
+        @Override
+        public int hashCode() {
+            int result = Objects.hashCode(index);
+            result = 31 * result + Objects.hashCode(failureReason);
+            return result;
+        }
+
+        @Override
+        public String toString() {
+            return Strings.toString(MediaTypeRegistry.JSON, this);
+        }
+    }
+}
diff --git a/server/src/main/java/org/opensearch/action/admin/indices/tiering/RestWarmTieringAction.java b/server/src/main/java/org/opensearch/action/admin/indices/tiering/RestWarmTieringAction.java
new file mode 100644
index 0000000000000..6f2eceafa9e77
--- /dev/null
+++ b/server/src/main/java/org/opensearch/action/admin/indices/tiering/RestWarmTieringAction.java
@@ -0,0 +1,61 @@
+/*
+ * SPDX-License-Identifier: Apache-2.0
+ *
+ * The OpenSearch Contributors require contributions made to
+ * this file be licensed under the Apache-2.0 license or a
+ * compatible open source license.
+ */
+
+package org.opensearch.action.admin.indices.tiering;
+
+import org.opensearch.action.support.IndicesOptions;
+import org.opensearch.client.node.NodeClient;
+import org.opensearch.common.annotation.ExperimentalApi;
+import org.opensearch.rest.BaseRestHandler;
+import org.opensearch.rest.RestHandler;
+import org.opensearch.rest.RestRequest;
+import org.opensearch.rest.action.RestToXContentListener;
+
+import java.util.List;
+
+import static java.util.Collections.singletonList;
+import static org.opensearch.core.common.Strings.splitStringByCommaToArray;
+import static org.opensearch.rest.RestRequest.Method.POST;
+
+/**
+ * Rest Tiering API action to move indices to warm tier
+ *
+ * @opensearch.experimental
+ */
+@ExperimentalApi
+public class RestWarmTieringAction extends BaseRestHandler {
+
+    private static final String TARGET_TIER = "warm";
+
+    @Override
+    public List<RestHandler.Route> routes() {
+        return singletonList(new RestHandler.Route(POST, "/{index}/_tier/" + TARGET_TIER));
+    }
+
+    @Override
+    public String getName() {
+        return "warm_tiering_action";
+    }
+
+    @Override
+    protected BaseRestHandler.RestChannelConsumer prepareRequest(RestRequest request, NodeClient client) {
+        final TieringIndexRequest tieringIndexRequest = new TieringIndexRequest(
+            TARGET_TIER,
+            splitStringByCommaToArray(request.param("index"))
+        );
+        tieringIndexRequest.timeout(request.paramAsTime("timeout", tieringIndexRequest.timeout()));
+        tieringIndexRequest.clusterManagerNodeTimeout(
+            request.paramAsTime("cluster_manager_timeout", tieringIndexRequest.clusterManagerNodeTimeout())
+        );
+        tieringIndexRequest.indicesOptions(IndicesOptions.fromRequest(request, tieringIndexRequest.indicesOptions()));
+        tieringIndexRequest.waitForCompletion(request.paramAsBoolean("wait_for_completion", tieringIndexRequest.waitForCompletion()));
+        return channel -> client.admin()
+            .cluster()
+            .execute(HotToWarmTieringAction.INSTANCE, tieringIndexRequest, new RestToXContentListener<>(channel));
+    }
+}
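For orientation, a caller could drive the transport action registered above roughly as follows. This is a minimal sketch, assuming a Client handle and a log4j Logger are in scope; the index names are hypothetical, but the request, action, and response types are the ones introduced in this patch:

    // Ask the cluster manager to move two indices to the warm tier.
    TieringIndexRequest request = new TieringIndexRequest("warm", "logs-2024-06", "logs-2024-07");
    request.waitForCompletion(false); // accept the request without blocking on data movement

    HotToWarmTieringResponse response = client.execute(HotToWarmTieringAction.INSTANCE, request).actionGet();
    // acknowledged == false means every resolved index was rejected by validation
    for (HotToWarmTieringResponse.IndexResult failed : response.getFailedIndices()) {
        logger.warn("tiering rejected for [{}]: {}", failed.getIndex(), failed.getFailureReason());
    }

Over REST, the same operation is exposed as POST /{index}/_tier/warm by the handler above.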
diff --git a/server/src/main/java/org/opensearch/action/admin/indices/tiering/TieringIndexRequest.java b/server/src/main/java/org/opensearch/action/admin/indices/tiering/TieringIndexRequest.java
new file mode 100644
index 0000000000000..ed458a47ddb7d
--- /dev/null
+++ b/server/src/main/java/org/opensearch/action/admin/indices/tiering/TieringIndexRequest.java
@@ -0,0 +1,195 @@
+/*
+ * SPDX-License-Identifier: Apache-2.0
+ *
+ * The OpenSearch Contributors require contributions made to
+ * this file be licensed under the Apache-2.0 license or a
+ * compatible open source license.
+ */
+
+package org.opensearch.action.admin.indices.tiering;
+
+import org.opensearch.action.ActionRequestValidationException;
+import org.opensearch.action.IndicesRequest;
+import org.opensearch.action.support.IndicesOptions;
+import org.opensearch.action.support.master.AcknowledgedRequest;
+import org.opensearch.common.annotation.ExperimentalApi;
+import org.opensearch.core.common.io.stream.StreamInput;
+import org.opensearch.core.common.io.stream.StreamOutput;
+
+import java.io.IOException;
+import java.util.Arrays;
+import java.util.Locale;
+import java.util.Objects;
+
+import static org.opensearch.action.ValidateActions.addValidationError;
+
+/**
+ * Represents the tiering request for indices to move to a different tier
+ *
+ * @opensearch.experimental
+ */
+@ExperimentalApi
+public class TieringIndexRequest extends AcknowledgedRequest<TieringIndexRequest> implements IndicesRequest.Replaceable {
+
+    private String[] indices;
+    private final Tier targetTier;
+    private IndicesOptions indicesOptions;
+    private boolean waitForCompletion;
+
+    public TieringIndexRequest(String targetTier, String... indices) {
+        this.targetTier = Tier.fromString(targetTier);
+        this.indices = indices;
+        this.indicesOptions = IndicesOptions.fromOptions(false, false, true, false);
+        this.waitForCompletion = false;
+    }
+
+    public TieringIndexRequest(StreamInput in) throws IOException {
+        super(in);
+        indices = in.readStringArray();
+        targetTier = Tier.fromString(in.readString());
+        indicesOptions = IndicesOptions.readIndicesOptions(in);
+        waitForCompletion = in.readBoolean();
+    }
+
+    // pkg private for testing
+    TieringIndexRequest(Tier targetTier, IndicesOptions indicesOptions, boolean waitForCompletion, String... indices) {
+        this.indices = indices;
+        this.targetTier = targetTier;
+        this.indicesOptions = indicesOptions;
+        this.waitForCompletion = waitForCompletion;
+    }
+
+    @Override
+    public ActionRequestValidationException validate() {
+        ActionRequestValidationException validationException = null;
+        if (indices == null) {
+            validationException = addValidationError("Mandatory parameter - indices is missing from the request", validationException);
+        } else {
+            for (String index : indices) {
+                if (index == null || index.length() == 0) {
+                    validationException = addValidationError(
+                        String.format(Locale.ROOT, "Specified index in the request [%s] is null or empty", index),
+                        validationException
+                    );
+                }
+            }
+        }
+        if (!Tier.WARM.equals(targetTier)) {
+            validationException = addValidationError("The specified tier is not supported", validationException);
+        }
+        return validationException;
+    }
+
+    @Override
+    public void writeTo(StreamOutput out) throws IOException {
+        super.writeTo(out);
+        out.writeStringArray(indices);
+        out.writeString(targetTier.value());
+        indicesOptions.writeIndicesOptions(out);
+        out.writeBoolean(waitForCompletion);
+    }
+
+    @Override
+    public String[] indices() {
+        return indices;
+    }
+
+    @Override
+    public IndicesOptions indicesOptions() {
+        return indicesOptions;
+    }
+
+    @Override
+    public boolean includeDataStreams() {
+        return true;
+    }
+
+    @Override
+    public TieringIndexRequest indices(String... indices) {
+        this.indices = indices;
+        return this;
+    }
+
+    public TieringIndexRequest indicesOptions(IndicesOptions indicesOptions) {
+        this.indicesOptions = indicesOptions;
+        return this;
+    }
+
+    /**
+     * If this parameter is set to true the operation will wait for completion of tiering process before returning.
+     *
+     * @param waitForCompletion if true the operation will wait for completion
+     * @return this request
+     */
+    public TieringIndexRequest waitForCompletion(boolean waitForCompletion) {
+        this.waitForCompletion = waitForCompletion;
+        return this;
+    }
+
+    /**
+     * Returns wait for completion setting
+     *
+     * @return true if the operation will wait for completion
+     */
+    public boolean waitForCompletion() {
+        return waitForCompletion;
+    }
+
+    public Tier tier() {
+        return targetTier;
+    }
+
+    @Override
+    public boolean equals(Object o) {
+        if (this == o) {
+            return true;
+        }
+        if (o == null || getClass() != o.getClass()) {
+            return false;
+        }
+        TieringIndexRequest that = (TieringIndexRequest) o;
+        return clusterManagerNodeTimeout.equals(that.clusterManagerNodeTimeout)
+            && timeout.equals(that.timeout)
+            && Objects.equals(indicesOptions, that.indicesOptions)
+            && Arrays.equals(indices, that.indices)
+            && targetTier.equals(that.targetTier)
+            && waitForCompletion == that.waitForCompletion;
+    }
+
+    @Override
+    public int hashCode() {
+        return Objects.hash(clusterManagerNodeTimeout, timeout, indicesOptions, waitForCompletion, Arrays.hashCode(indices));
+    }
+
+    /**
+     * Represents the supported tiers for an index
+     *
+     * @opensearch.experimental
+     */
+    @ExperimentalApi
+    public enum Tier {
+        HOT,
+        WARM;
+
+        public static Tier fromString(String name) {
+            if (name == null) {
+                throw new IllegalArgumentException("Tiering type cannot be null");
+            }
+            String upperCase = name.trim().toUpperCase(Locale.ROOT);
+            switch (upperCase) {
+                case "HOT":
+                    return HOT;
+                case "WARM":
+                    return WARM;
+                default:
+                    throw new IllegalArgumentException(
+                        "Tiering type [" + name + "] is not supported. Supported types are " + HOT + " and " + WARM
+                    );
+            }
+        }
+
+        public String value() {
+            return name().toLowerCase(Locale.ROOT);
+        }
+    }
+}
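A quick sketch of the request-level guard rails above (hypothetical index names; per the validate() logic in this patch, only WARM is accepted as a target tier for now):

    assert TieringIndexRequest.Tier.fromString("Warm") == TieringIndexRequest.Tier.WARM; // parsing is case-insensitive
    // Tier.fromString("cold") throws IllegalArgumentException: only HOT and WARM exist

    TieringIndexRequest toHot = new TieringIndexRequest("hot", "my-index");
    assert toHot.validate() != null;         // "The specified tier is not supported"

    TieringIndexRequest withEmptyName = new TieringIndexRequest("warm", "my-index", "");
    assert withEmptyName.validate() != null; // empty or null index names are rejected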
diff --git a/server/src/main/java/org/opensearch/action/admin/indices/tiering/TieringValidationResult.java b/server/src/main/java/org/opensearch/action/admin/indices/tiering/TieringValidationResult.java
new file mode 100644
index 0000000000000..ccd60daf027ce
--- /dev/null
+++ b/server/src/main/java/org/opensearch/action/admin/indices/tiering/TieringValidationResult.java
@@ -0,0 +1,83 @@
+/*
+ * SPDX-License-Identifier: Apache-2.0
+ *
+ * The OpenSearch Contributors require contributions made to
+ * this file be licensed under the Apache-2.0 license or a
+ * compatible open source license.
+ */
+
+package org.opensearch.action.admin.indices.tiering;
+
+import org.opensearch.common.annotation.ExperimentalApi;
+import org.opensearch.core.index.Index;
+
+import java.util.HashMap;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.Objects;
+import java.util.Set;
+import java.util.concurrent.ConcurrentHashMap;
+
+/**
+ * Validation result for tiering
+ *
+ * @opensearch.experimental
+ */
+
+@ExperimentalApi
+public class TieringValidationResult {
+    private final Set<Index> acceptedIndices;
+    private final Map<Index, String> rejectedIndices;
+
+    public TieringValidationResult(Set<Index> concreteIndices) {
+        // by default all the indices are added to the accepted set
+        this.acceptedIndices = ConcurrentHashMap.newKeySet();
+        acceptedIndices.addAll(concreteIndices);
+        this.rejectedIndices = new HashMap<>();
+    }
+
+    public Set<Index> getAcceptedIndices() {
+        return acceptedIndices;
+    }
+
+    public Map<Index, String> getRejectedIndices() {
+        return rejectedIndices;
+    }
+
+    public void addToRejected(Index index, String reason) {
+        acceptedIndices.remove(index);
+        rejectedIndices.put(index, reason);
+    }
+
+    public HotToWarmTieringResponse constructResponse() {
+        final List<HotToWarmTieringResponse.IndexResult> indicesResult = new LinkedList<>();
+        for (Map.Entry<Index, String> rejectedIndex : rejectedIndices.entrySet()) {
+            indicesResult.add(new HotToWarmTieringResponse.IndexResult(rejectedIndex.getKey().getName(), rejectedIndex.getValue()));
+        }
+        return new HotToWarmTieringResponse(acceptedIndices.size() > 0, indicesResult);
+    }
+
+    @Override
+    public boolean equals(Object o) {
+        if (this == o) return true;
+        if (o == null || getClass() != o.getClass()) return false;
+
+        TieringValidationResult that = (TieringValidationResult) o;
+
+        if (!Objects.equals(acceptedIndices, that.acceptedIndices)) return false;
+        return Objects.equals(rejectedIndices, that.rejectedIndices);
+    }
+
+    @Override
+    public int hashCode() {
+        int result = acceptedIndices != null ? acceptedIndices.hashCode() : 0;
+        result = 31 * result + (rejectedIndices != null ? rejectedIndices.hashCode() : 0);
+        return result;
+    }
+
+    @Override
+    public String toString() {
+        return "TieringValidationResult{" + "acceptedIndices=" + acceptedIndices + ", rejectedIndices=" + rejectedIndices + '}';
+    }
+}
diff --git a/server/src/main/java/org/opensearch/action/admin/indices/tiering/TransportHotToWarmTieringAction.java b/server/src/main/java/org/opensearch/action/admin/indices/tiering/TransportHotToWarmTieringAction.java
new file mode 100644
index 0000000000000..8d1ab0bb37cdd
--- /dev/null
+++ b/server/src/main/java/org/opensearch/action/admin/indices/tiering/TransportHotToWarmTieringAction.java
@@ -0,0 +1,110 @@
+/*
+ * SPDX-License-Identifier: Apache-2.0
+ *
+ * The OpenSearch Contributors require contributions made to
+ * this file be licensed under the Apache-2.0 license or a
+ * compatible open source license.
+ */
+
+package org.opensearch.action.admin.indices.tiering;
+
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+import org.opensearch.action.support.ActionFilters;
+import org.opensearch.action.support.clustermanager.TransportClusterManagerNodeAction;
+import org.opensearch.cluster.ClusterInfoService;
+import org.opensearch.cluster.ClusterState;
+import org.opensearch.cluster.block.ClusterBlockException;
+import org.opensearch.cluster.block.ClusterBlockLevel;
+import org.opensearch.cluster.metadata.IndexNameExpressionResolver;
+import org.opensearch.cluster.routing.allocation.DiskThresholdSettings;
+import org.opensearch.cluster.service.ClusterService;
+import org.opensearch.common.annotation.ExperimentalApi;
+import org.opensearch.common.inject.Inject;
+import org.opensearch.common.settings.Settings;
+import org.opensearch.core.action.ActionListener;
+import org.opensearch.core.common.io.stream.StreamInput;
+import org.opensearch.core.index.Index;
+import org.opensearch.threadpool.ThreadPool;
+import org.opensearch.transport.TransportService;
+
+import java.io.IOException;
+import java.util.Set;
+
+import static org.opensearch.indices.tiering.TieringRequestValidator.validateHotToWarm;
+
+/**
+ * Transport Tiering action to move indices from hot to warm
+ *
+ * @opensearch.experimental
+ */
+@ExperimentalApi
+public class TransportHotToWarmTieringAction extends TransportClusterManagerNodeAction<TieringIndexRequest, HotToWarmTieringResponse> {
+
+    private static final Logger logger = LogManager.getLogger(TransportHotToWarmTieringAction.class);
+    private final ClusterInfoService clusterInfoService;
+    private final DiskThresholdSettings diskThresholdSettings;
+
+    @Inject
+    public TransportHotToWarmTieringAction(
+        TransportService transportService,
+        ClusterService clusterService,
+        ThreadPool threadPool,
+        ActionFilters actionFilters,
+        IndexNameExpressionResolver indexNameExpressionResolver,
+        ClusterInfoService clusterInfoService,
+        Settings settings
+    ) {
+        super(
+            HotToWarmTieringAction.NAME,
+            transportService,
+            clusterService,
+            threadPool,
+            actionFilters,
+            TieringIndexRequest::new,
+            indexNameExpressionResolver
+        );
+        this.clusterInfoService = clusterInfoService;
+        this.diskThresholdSettings = new DiskThresholdSettings(settings, clusterService.getClusterSettings());
+    }
+
+    @Override
+    protected String executor() {
+        return ThreadPool.Names.SAME;
+    }
+
+    @Override
+    protected HotToWarmTieringResponse read(StreamInput in) throws IOException {
+        return new HotToWarmTieringResponse(in);
+    }
+
+    @Override
+    protected ClusterBlockException checkBlock(TieringIndexRequest request, ClusterState state) {
+        return state.blocks()
+            .indicesBlockedException(ClusterBlockLevel.METADATA_WRITE, indexNameExpressionResolver.concreteIndexNames(state, request));
+    }
+
+    @Override
+    protected void clusterManagerOperation(
+        TieringIndexRequest request,
+        ClusterState state,
+        ActionListener<HotToWarmTieringResponse> listener
+    ) throws Exception {
+        Index[] concreteIndices = indexNameExpressionResolver.concreteIndices(state, request);
+        if (concreteIndices == null || concreteIndices.length == 0) {
+            listener.onResponse(new HotToWarmTieringResponse(true));
+            return;
+        }
+        final TieringValidationResult tieringValidationResult = validateHotToWarm(
+            state,
+            Set.of(concreteIndices),
+            clusterInfoService.getClusterInfo(),
+            diskThresholdSettings
+        );
+
+        if (tieringValidationResult.getAcceptedIndices().isEmpty()) {
+            listener.onResponse(tieringValidationResult.constructResponse());
+            return;
+        }
+    }
+}
diff --git a/server/src/main/java/org/opensearch/action/admin/indices/tiering/package-info.java b/server/src/main/java/org/opensearch/action/admin/indices/tiering/package-info.java
new file mode 100644
index 0000000000000..878e3575a3934
--- /dev/null
+++ b/server/src/main/java/org/opensearch/action/admin/indices/tiering/package-info.java
@@ -0,0 +1,36 @@
+/*
+ * SPDX-License-Identifier: Apache-2.0
+ *
+ * The OpenSearch Contributors require contributions made to
+ * this file be licensed under the Apache-2.0 license or a
+ * compatible open source license.
+ */
+
+/*
+ * Licensed to Elasticsearch under one or more contributor
+ * license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright
+ * ownership. Elasticsearch licenses this file to you under
+ * the Apache License, Version 2.0 (the "License"); you may
+ * not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/**
+ * Actions that OpenSearch can take to tier the indices
+ */
+/*
+ * Modifications Copyright OpenSearch Contributors. See
+ * GitHub history for details.
+ */
+
+package org.opensearch.action.admin.indices.tiering;
diff --git a/server/src/main/java/org/opensearch/common/settings/IndexScopedSettings.java b/server/src/main/java/org/opensearch/common/settings/IndexScopedSettings.java
index ca2c4dab6102b..6e7d77d0c00d4 100644
--- a/server/src/main/java/org/opensearch/common/settings/IndexScopedSettings.java
+++ b/server/src/main/java/org/opensearch/common/settings/IndexScopedSettings.java
@@ -273,7 +273,7 @@ public final class IndexScopedSettings extends AbstractScopedSettings {
      */
     public static final Map<String, List<Setting>> FEATURE_FLAGGED_INDEX_SETTINGS = Map.of(
         FeatureFlags.TIERED_REMOTE_INDEX,
-        List.of(IndexModule.INDEX_STORE_LOCALITY_SETTING)
+        List.of(IndexModule.INDEX_STORE_LOCALITY_SETTING, IndexModule.INDEX_TIERING_STATE)
     );
 
     public static final IndexScopedSettings DEFAULT_SCOPED_SETTINGS = new IndexScopedSettings(Settings.EMPTY, BUILT_IN_INDEX_SETTINGS);
diff --git a/server/src/main/java/org/opensearch/index/IndexModule.java b/server/src/main/java/org/opensearch/index/IndexModule.java
index 09b904394ee09..93ff1b78b1ac5 100644
--- a/server/src/main/java/org/opensearch/index/IndexModule.java
+++ b/server/src/main/java/org/opensearch/index/IndexModule.java
@@ -48,6 +48,7 @@
 import org.opensearch.common.CheckedFunction;
 import org.opensearch.common.SetOnce;
 import org.opensearch.common.TriFunction;
+import org.opensearch.common.annotation.ExperimentalApi;
 import org.opensearch.common.annotation.PublicApi;
 import org.opensearch.common.logging.DeprecationLogger;
 import org.opensearch.common.settings.Setting;
@@ -174,6 +175,14 @@ public final class IndexModule {
         Property.NodeScope
     );
 
+    public static final Setting<String> INDEX_TIERING_STATE = new Setting<>(
+        "index.tiering.state",
+        TieringState.HOT.name(),
+        Function.identity(),
+        Property.IndexScope,
+        Property.PrivateIndex
+    );
+
     /** Which lucene file extensions to load with the mmap directory when using hybridfs store. This settings is ignored if {@link #INDEX_STORE_HYBRID_NIO_EXTENSIONS} is set.
* This is an expert setting. * @see Lucene File Extensions. @@ -663,6 +672,17 @@ public static Type defaultStoreType(final boolean allowMmap) { } } + /** + * Represents the tiering state of the index. + */ + @ExperimentalApi + public enum TieringState { + HOT, + HOT_TO_WARM, + WARM, + WARM_TO_HOT; + } + public IndexService newIndexService( IndexService.IndexCreationContext indexCreationContext, NodeEnvironment environment, diff --git a/server/src/main/java/org/opensearch/indices/tiering/TieringRequestValidator.java b/server/src/main/java/org/opensearch/indices/tiering/TieringRequestValidator.java new file mode 100644 index 0000000000000..2de50f4d4295d --- /dev/null +++ b/server/src/main/java/org/opensearch/indices/tiering/TieringRequestValidator.java @@ -0,0 +1,277 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +package org.opensearch.indices.tiering; + +import org.apache.logging.log4j.LogManager; +import org.apache.logging.log4j.Logger; +import org.opensearch.action.admin.indices.tiering.TieringValidationResult; +import org.opensearch.cluster.ClusterInfo; +import org.opensearch.cluster.ClusterState; +import org.opensearch.cluster.DiskUsage; +import org.opensearch.cluster.health.ClusterHealthStatus; +import org.opensearch.cluster.health.ClusterIndexHealth; +import org.opensearch.cluster.metadata.IndexMetadata; +import org.opensearch.cluster.node.DiscoveryNode; +import org.opensearch.cluster.routing.IndexRoutingTable; +import org.opensearch.cluster.routing.ShardRouting; +import org.opensearch.cluster.routing.allocation.DiskThresholdSettings; +import org.opensearch.core.index.Index; +import org.opensearch.index.IndexModule; + +import java.util.HashMap; +import java.util.List; +import java.util.Map; +import java.util.Set; +import java.util.stream.Collectors; + +import static org.opensearch.index.IndexModule.INDEX_TIERING_STATE; + +/** + * Validator class to validate the tiering requests of the index + * @opensearch.experimental + */ +public class TieringRequestValidator { + + private static final Logger logger = LogManager.getLogger(TieringRequestValidator.class); + + /** + * Validates the tiering request for indices going from hot to warm tier + * + * @param currentState current cluster state + * @param concreteIndices set of indices to be validated + * @param clusterInfo the current nodes usage info for the cluster + * @param diskThresholdSettings the disk threshold settings of the cluster + * @return result of the validation + */ + public static TieringValidationResult validateHotToWarm( + final ClusterState currentState, + final Set concreteIndices, + final ClusterInfo clusterInfo, + final DiskThresholdSettings diskThresholdSettings + ) { + final String indexNames = concreteIndices.stream().map(Index::getName).collect(Collectors.joining(", ")); + validateSearchNodes(currentState, indexNames); + validateDiskThresholdWaterMarkNotBreached(currentState, clusterInfo, diskThresholdSettings, indexNames); + + final TieringValidationResult tieringValidationResult = new TieringValidationResult(concreteIndices); + + for (Index index : concreteIndices) { + if (!validateHotIndex(currentState, index)) { + tieringValidationResult.addToRejected(index, "index is not in the HOT tier"); + continue; + } + if (!validateRemoteStoreIndex(currentState, index)) { + tieringValidationResult.addToRejected(index, "index is not backed up by the remote 
store"); + continue; + } + if (!validateOpenIndex(currentState, index)) { + tieringValidationResult.addToRejected(index, "index is closed"); + continue; + } + if (!validateIndexHealth(currentState, index)) { + tieringValidationResult.addToRejected(index, "index is red"); + continue; + } + } + + validateEligibleNodesCapacity(clusterInfo, currentState, tieringValidationResult); + logger.info( + "Successfully accepted indices for tiering are [{}], rejected indices are [{}]", + tieringValidationResult.getAcceptedIndices(), + tieringValidationResult.getRejectedIndices() + ); + + return tieringValidationResult; + } + + /** + * Validates that there are eligible nodes with the search role in the current cluster state. + * (only for the dedicated case - to be removed later) + * + * @param currentState the current cluster state + * @param indexNames the names of the indices being validated + * @throws IllegalArgumentException if there are no eligible search nodes in the cluster + */ + static void validateSearchNodes(final ClusterState currentState, final String indexNames) { + if (getEligibleNodes(currentState).isEmpty()) { + final String errorMsg = "Rejecting tiering request for indices [" + + indexNames + + "] because there are no nodes found with the search role"; + logger.warn(errorMsg); + throw new IllegalArgumentException(errorMsg); + } + } + + /** + * Validates that the specified index has the remote store setting enabled. + * + * @param state the current cluster state + * @param index the index to be validated + * @return true if the remote store setting is enabled for the index, false otherwise + */ + static boolean validateRemoteStoreIndex(final ClusterState state, final Index index) { + return IndexMetadata.INDEX_REMOTE_STORE_ENABLED_SETTING.get(state.metadata().getIndexSafe(index).getSettings()); + } + + /** + * Validates that the specified index is in the "hot" tiering state. + * + * @param state the current cluster state + * @param index the index to be validated + * @return true if the index is in the "hot" tiering state, false otherwise + */ + static boolean validateHotIndex(final ClusterState state, final Index index) { + return IndexModule.TieringState.HOT.name().equals(INDEX_TIERING_STATE.get(state.metadata().getIndexSafe(index).getSettings())); + } + + /** + * Validates the health of the specified index in the current cluster state. + * + * @param currentState the current cluster state + * @param index the index to be validated + * @return true if the index health is not in the "red" state, false otherwise + */ + static boolean validateIndexHealth(final ClusterState currentState, final Index index) { + final IndexRoutingTable indexRoutingTable = currentState.routingTable().index(index); + final IndexMetadata indexMetadata = currentState.metadata().index(index); + final ClusterIndexHealth indexHealth = new ClusterIndexHealth(indexMetadata, indexRoutingTable); + return !ClusterHealthStatus.RED.equals(indexHealth.getStatus()); + } + + /** + * Validates that the specified index is in the open state in the current cluster state. 
+ * + * @param currentState the current cluster state + * @param index the index to be validated + * @return true if the index is in the open state, false otherwise + */ + static boolean validateOpenIndex(final ClusterState currentState, final Index index) { + return currentState.metadata().index(index).getState() == IndexMetadata.State.OPEN; + } + + /** + * Validates that the disk threshold low watermark is not breached on all the eligible nodes in the cluster. + * + * @param currentState the current cluster state + * @param clusterInfo the current nodes usage info for the cluster + * @param diskThresholdSettings the disk threshold settings of the cluster + * @param indexNames the names of the indices being validated + * @throws IllegalArgumentException if the disk threshold low watermark is breached on all eligible nodes + */ + static void validateDiskThresholdWaterMarkNotBreached( + final ClusterState currentState, + final ClusterInfo clusterInfo, + final DiskThresholdSettings diskThresholdSettings, + final String indexNames + ) { + final Map usages = clusterInfo.getNodeLeastAvailableDiskUsages(); + if (usages == null) { + logger.trace("skipping monitor as no disk usage information is available"); + return; + } + final Set nodeIds = getEligibleNodes(currentState).stream().map(DiscoveryNode::getId).collect(Collectors.toSet()); + for (String node : nodeIds) { + final DiskUsage nodeUsage = usages.get(node); + if (nodeUsage != null && nodeUsage.getFreeBytes() > diskThresholdSettings.getFreeBytesThresholdLow().getBytes()) { + return; + } + } + throw new IllegalArgumentException( + "Disk threshold low watermark is breached on all the search nodes, rejecting tiering request for indices: " + indexNames + ); + } + + /** + * Validates the capacity of eligible nodes in the cluster to accommodate the specified indices + * and adds the rejected indices to tieringValidationResult + * + * @param clusterInfo the current nodes usage info for the cluster + * @param currentState the current cluster state + * @param tieringValidationResult contains the indices to validate + */ + static void validateEligibleNodesCapacity( + final ClusterInfo clusterInfo, + final ClusterState currentState, + final TieringValidationResult tieringValidationResult + ) { + + final Set eligibleNodeIds = getEligibleNodes(currentState).stream().map(DiscoveryNode::getId).collect(Collectors.toSet()); + long totalAvailableBytesInWarmTier = getTotalAvailableBytesInWarmTier( + clusterInfo.getNodeLeastAvailableDiskUsages(), + eligibleNodeIds + ); + + Map indexSizes = new HashMap<>(); + for (Index index : tieringValidationResult.getAcceptedIndices()) { + indexSizes.put(index, getIndexPrimaryStoreSize(currentState, clusterInfo, index.getName())); + } + + if (indexSizes.values().stream().mapToLong(Long::longValue).sum() < totalAvailableBytesInWarmTier) { + return; + } + HashMap sortedIndexSizes = indexSizes.entrySet() + .stream() + .sorted(Map.Entry.comparingByValue()) + .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue, (e1, e2) -> e1, HashMap::new)); + + long requestIndexBytes = 0L; + for (Index index : sortedIndexSizes.keySet()) { + requestIndexBytes += sortedIndexSizes.get(index); + if (requestIndexBytes >= totalAvailableBytesInWarmTier) { + tieringValidationResult.addToRejected(index, "insufficient node capacity"); + } + } + } + + /** + * Calculates the total size of the specified index in the cluster. + * Note: This function only accounts for the primary shard size. 
+ * + * @param clusterState the current state of the cluster + * @param clusterInfo the current nodes usage info for the cluster + * @param index the name of the index for which the total size is to be calculated + * @return the total size of the specified index in the cluster + */ + static long getIndexPrimaryStoreSize(ClusterState clusterState, ClusterInfo clusterInfo, String index) { + long totalIndexSize = 0; + List shardRoutings = clusterState.routingTable().allShards(index); + for (ShardRouting shardRouting : shardRoutings) { + if (shardRouting.primary()) { + totalIndexSize += clusterInfo.getShardSize(shardRouting, 0); + } + } + return totalIndexSize; + } + + /** + * Calculates the total available bytes in the warm tier of the cluster. + * + * @param usages the current disk usage of the cluster + * @param nodeIds the set of warm nodes ids in the cluster + * @return the total available bytes in the warm tier + */ + static long getTotalAvailableBytesInWarmTier(final Map usages, final Set nodeIds) { + long totalAvailableBytes = 0; + for (String node : nodeIds) { + totalAvailableBytes += usages.get(node).getFreeBytes(); + } + return totalAvailableBytes; + } + + /** + * Retrieves the set of eligible(search) nodes from the current cluster state. + * + * @param currentState the current cluster state + * @return the set of eligible nodes + */ + static Set getEligibleNodes(final ClusterState currentState) { + final Map nodes = currentState.getNodes().getDataNodes(); + return nodes.values().stream().filter(DiscoveryNode::isSearchNode).collect(Collectors.toSet()); + } +} diff --git a/server/src/main/java/org/opensearch/indices/tiering/package-info.java b/server/src/main/java/org/opensearch/indices/tiering/package-info.java new file mode 100644 index 0000000000000..552f87382ea15 --- /dev/null +++ b/server/src/main/java/org/opensearch/indices/tiering/package-info.java @@ -0,0 +1,36 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +/** + * Validator layer checks that OpenSearch can perform to tier the indices + */ +/* + * Modifications Copyright OpenSearch Contributors. See + * GitHub history for details. 
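To make the capacity check above concrete, here is a small worked sketch with hypothetical numbers. Note that getTotalAvailableBytesInWarmTier is package-private, so this only compiles from the same package, as the unit tests further below do:

    // Two search nodes, each with 500 total bytes and 100 free bytes.
    Map<String, DiskUsage> usages = Map.of(
        "node-s0", new DiskUsage("node-s0", "node-s0", "/foo/bar", 500, 100),
        "node-s1", new DiskUsage("node-s1", "node-s1", "/foo/bar", 500, 100)
    );
    long available = TieringRequestValidator.getTotalAvailableBytesInWarmTier(usages, Set.of("node-s0", "node-s1"));
    // available == 200; validateEligibleNodesCapacity then walks the accepted indices
    // smallest primary store first, and the index whose size pushes the running total
    // to 200 bytes or beyond is rejected with "insufficient node capacity", along with
    // every larger index after it.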
+ */ + +package org.opensearch.indices.tiering; diff --git a/server/src/test/java/org/opensearch/action/admin/indices/tiering/HotToWarmTieringResponseTests.java b/server/src/test/java/org/opensearch/action/admin/indices/tiering/HotToWarmTieringResponseTests.java new file mode 100644 index 0000000000000..85cabe0fa1491 --- /dev/null +++ b/server/src/test/java/org/opensearch/action/admin/indices/tiering/HotToWarmTieringResponseTests.java @@ -0,0 +1,101 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +package org.opensearch.action.admin.indices.tiering; + +import org.opensearch.common.xcontent.XContentFactory; +import org.opensearch.common.xcontent.XContentType; +import org.opensearch.core.common.io.stream.Writeable; +import org.opensearch.core.xcontent.ToXContent; +import org.opensearch.core.xcontent.XContentBuilder; +import org.opensearch.test.AbstractWireSerializingTestCase; + +import java.util.LinkedList; +import java.util.List; + +public class HotToWarmTieringResponseTests extends AbstractWireSerializingTestCase { + + @Override + protected Writeable.Reader instanceReader() { + return HotToWarmTieringResponse::new; + } + + @Override + protected HotToWarmTieringResponse createTestInstance() { + return randomHotToWarmTieringResponse(); + } + + @Override + protected void assertEqualInstances(HotToWarmTieringResponse expected, HotToWarmTieringResponse actual) { + assertNotSame(expected, actual); + assertEquals(actual.isAcknowledged(), expected.isAcknowledged()); + + for (int i = 0; i < expected.getFailedIndices().size(); i++) { + HotToWarmTieringResponse.IndexResult expectedIndexResult = expected.getFailedIndices().get(i); + HotToWarmTieringResponse.IndexResult actualIndexResult = actual.getFailedIndices().get(i); + assertNotSame(expectedIndexResult, actualIndexResult); + assertEquals(actualIndexResult.getIndex(), expectedIndexResult.getIndex()); + assertEquals(actualIndexResult.getFailureReason(), expectedIndexResult.getFailureReason()); + } + } + + /** + * Verifies that ToXContent works with any random {@link HotToWarmTieringResponse} object + * @throws Exception - in case of error + */ + public void testToXContentWorksForRandomResponse() throws Exception { + HotToWarmTieringResponse testResponse = randomHotToWarmTieringResponse(); + XContentType xContentType = randomFrom(XContentType.values()); + try (XContentBuilder builder = XContentBuilder.builder(xContentType.xContent())) { + testResponse.toXContent(builder, ToXContent.EMPTY_PARAMS); + } + } + + /** + * Verify the XContent output of the response object + * @throws Exception - in case of error + */ + public void testToXContentOutput() throws Exception { + String[] indices = new String[] { "index2", "index1" }; + String[] errorReasons = new String[] { "reason2", "reason1" }; + List results = new LinkedList<>(); + for (int i = 0; i < indices.length; ++i) { + results.add(new HotToWarmTieringResponse.IndexResult(indices[i], errorReasons[i])); + } + HotToWarmTieringResponse testResponse = new HotToWarmTieringResponse(true, results); + + // generate a corresponding expected xcontent + XContentBuilder content = XContentFactory.jsonBuilder().startObject().field("acknowledged", true).startArray("failed_indices"); + // expected result should be in the sorted order + content.startObject().field("index", "index1").field("error", "reason1").endObject(); + 
content.startObject().field("index", "index2").field("error", "reason2").endObject(); + content.endArray().endObject(); + assertEquals(content.toString(), testResponse.toString()); + } + + /** + * @return - randomly generated object of type {@link HotToWarmTieringResponse.IndexResult} + */ + private HotToWarmTieringResponse.IndexResult randomIndexResult() { + String indexName = randomAlphaOfLengthBetween(1, 50); + String failureReason = randomAlphaOfLengthBetween(1, 200); + return new HotToWarmTieringResponse.IndexResult(indexName, failureReason); + } + + /** + * @return - randomly generated object of type {@link HotToWarmTieringResponse} + */ + private HotToWarmTieringResponse randomHotToWarmTieringResponse() { + int numIndexResult = randomIntBetween(0, 10); + List indexResults = new LinkedList<>(); + for (int i = 0; i < numIndexResult; ++i) { + indexResults.add(randomIndexResult()); + } + return new HotToWarmTieringResponse(randomBoolean(), indexResults); + } +} diff --git a/server/src/test/java/org/opensearch/action/admin/indices/tiering/TieringIndexRequestTests.java b/server/src/test/java/org/opensearch/action/admin/indices/tiering/TieringIndexRequestTests.java new file mode 100644 index 0000000000000..e33d10268a617 --- /dev/null +++ b/server/src/test/java/org/opensearch/action/admin/indices/tiering/TieringIndexRequestTests.java @@ -0,0 +1,79 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +package org.opensearch.action.admin.indices.tiering; + +import org.opensearch.action.ActionRequestValidationException; +import org.opensearch.action.support.IndicesOptions; +import org.opensearch.common.io.stream.BytesStreamOutput; +import org.opensearch.core.common.io.stream.StreamInput; +import org.opensearch.test.OpenSearchTestCase; + +import java.io.IOException; + +import static org.hamcrest.CoreMatchers.equalTo; + +public class TieringIndexRequestTests extends OpenSearchTestCase { + + public void testTieringRequestWithListOfIndices() { + TieringIndexRequest request = new TieringIndexRequest( + TieringIndexRequest.Tier.WARM, + IndicesOptions.fromOptions(randomBoolean(), randomBoolean(), randomBoolean(), randomBoolean()), + false, + "foo", + "bar", + "baz" + ); + ActionRequestValidationException validationException = request.validate(); + assertNull(validationException); + } + + public void testTieringRequestWithIndexPattern() { + TieringIndexRequest request = new TieringIndexRequest(TieringIndexRequest.Tier.WARM.name(), "foo-*"); + ActionRequestValidationException validationException = request.validate(); + assertNull(validationException); + } + + public void testTieringRequestWithNullOrEmptyIndices() { + TieringIndexRequest request = new TieringIndexRequest(TieringIndexRequest.Tier.WARM.name(), null, ""); + ActionRequestValidationException validationException = request.validate(); + assertNotNull(validationException); + } + + public void testTieringRequestWithNotSupportedTier() { + TieringIndexRequest request = new TieringIndexRequest(TieringIndexRequest.Tier.HOT.name(), "test"); + ActionRequestValidationException validationException = request.validate(); + assertNotNull(validationException); + } + + public void testTieringTypeFromString() { + expectThrows(IllegalArgumentException.class, () -> TieringIndexRequest.Tier.fromString("tier")); + expectThrows(IllegalArgumentException.class, () -> 
TieringIndexRequest.Tier.fromString(null)); + } + + public void testSerDeOfTieringRequest() throws IOException { + TieringIndexRequest request = new TieringIndexRequest(TieringIndexRequest.Tier.WARM.name(), "test"); + try (BytesStreamOutput out = new BytesStreamOutput()) { + request.writeTo(out); + try (StreamInput in = out.bytes().streamInput()) { + final TieringIndexRequest deserializedRequest = new TieringIndexRequest(in); + assertEquals(request, deserializedRequest); + } + } + } + + public void testTieringRequestEquals() { + final TieringIndexRequest original = new TieringIndexRequest(TieringIndexRequest.Tier.WARM.name(), "test"); + original.indicesOptions(IndicesOptions.fromOptions(randomBoolean(), randomBoolean(), randomBoolean(), randomBoolean())); + final TieringIndexRequest expected = new TieringIndexRequest(TieringIndexRequest.Tier.WARM.name(), original.indices()); + expected.indicesOptions(original.indicesOptions()); + assertThat(expected, equalTo(original)); + assertThat(expected.indices(), equalTo(original.indices())); + assertThat(expected.indicesOptions(), equalTo(original.indicesOptions())); + } +} diff --git a/server/src/test/java/org/opensearch/action/admin/indices/tiering/TransportHotToWarmTieringActionTests.java b/server/src/test/java/org/opensearch/action/admin/indices/tiering/TransportHotToWarmTieringActionTests.java new file mode 100644 index 0000000000000..10273366af804 --- /dev/null +++ b/server/src/test/java/org/opensearch/action/admin/indices/tiering/TransportHotToWarmTieringActionTests.java @@ -0,0 +1,118 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +package org.opensearch.action.admin.indices.tiering; + +import com.carrotsearch.randomizedtesting.annotations.ThreadLeakFilters; + +import org.opensearch.action.support.IndicesOptions; +import org.opensearch.cluster.ClusterInfoService; +import org.opensearch.cluster.MockInternalClusterInfoService; +import org.opensearch.cluster.block.ClusterBlockException; +import org.opensearch.cluster.metadata.IndexMetadata; +import org.opensearch.common.settings.Settings; +import org.opensearch.common.util.FeatureFlags; +import org.opensearch.core.common.unit.ByteSizeUnit; +import org.opensearch.core.common.unit.ByteSizeValue; +import org.opensearch.index.IndexNotFoundException; +import org.opensearch.index.store.remote.file.CleanerDaemonThreadLeakFilter; +import org.opensearch.monitor.fs.FsInfo; +import org.opensearch.plugins.Plugin; +import org.opensearch.test.OpenSearchIntegTestCase; +import org.junit.After; +import org.junit.Before; + +import java.util.Collection; +import java.util.Collections; + +import static org.opensearch.cluster.metadata.IndexMetadata.SETTING_READ_ONLY_ALLOW_DELETE; +import static org.opensearch.test.hamcrest.OpenSearchAssertions.assertAcked; + +@ThreadLeakFilters(filters = CleanerDaemonThreadLeakFilter.class) +@OpenSearchIntegTestCase.ClusterScope(scope = OpenSearchIntegTestCase.Scope.TEST, numDataNodes = 0, supportsDedicatedMasters = false) +public class TransportHotToWarmTieringActionTests extends OpenSearchIntegTestCase { + protected static final String TEST_IDX_1 = "test-idx-1"; + protected static final String TEST_IDX_2 = "idx-2"; + protected static final String TARGET_TIER = "warm"; + private String[] indices; + + @Override + protected Settings featureFlagSettings() { + Settings.Builder featureSettings = Settings.builder(); + 
featureSettings.put(FeatureFlags.TIERED_REMOTE_INDEX, true); + return featureSettings.build(); + } + + @Override + protected Collection> nodePlugins() { + return Collections.singletonList(MockInternalClusterInfoService.TestPlugin.class); + } + + @Before + public void setup() { + internalCluster().startClusterManagerOnlyNode(); + internalCluster().ensureAtLeastNumSearchAndDataNodes(1); + long bytes = new ByteSizeValue(1000, ByteSizeUnit.KB).getBytes(); + final MockInternalClusterInfoService clusterInfoService = getMockInternalClusterInfoService(); + clusterInfoService.setDiskUsageFunctionAndRefresh((discoveryNode, fsInfoPath) -> setDiskUsage(fsInfoPath, bytes, bytes - 1)); + + final int numReplicasIndex = 0; + final Settings settings = Settings.builder() + .put(IndexMetadata.SETTING_NUMBER_OF_SHARDS, 1) + .put(IndexMetadata.SETTING_NUMBER_OF_REPLICAS, numReplicasIndex) + .build(); + + indices = new String[] { TEST_IDX_1, TEST_IDX_2 }; + for (String index : indices) { + assertAcked(client().admin().indices().prepareCreate(index).setSettings(settings).get()); + ensureGreen(index); + } + } + + @After + public void cleanup() { + client().admin().indices().prepareDelete(indices).get(); + } + + MockInternalClusterInfoService getMockInternalClusterInfoService() { + return (MockInternalClusterInfoService) internalCluster().getCurrentClusterManagerNodeInstance(ClusterInfoService.class); + } + + static FsInfo.Path setDiskUsage(FsInfo.Path original, long totalBytes, long freeBytes) { + return new FsInfo.Path(original.getPath(), original.getMount(), totalBytes, freeBytes, freeBytes); + } + + public void testIndexLevelBlocks() { + enableIndexBlock(TEST_IDX_1, SETTING_READ_ONLY_ALLOW_DELETE); + TieringIndexRequest request = new TieringIndexRequest(TARGET_TIER, TEST_IDX_1); + expectThrows(ClusterBlockException.class, () -> client().execute(HotToWarmTieringAction.INSTANCE, request).actionGet()); + } + + public void testIndexNotFound() { + TieringIndexRequest request = new TieringIndexRequest(TARGET_TIER, "foo"); + expectThrows(IndexNotFoundException.class, () -> client().execute(HotToWarmTieringAction.INSTANCE, request).actionGet()); + } + + public void testNoConcreteIndices() { + TieringIndexRequest request = new TieringIndexRequest(TARGET_TIER, "foo"); + request.indicesOptions(IndicesOptions.fromOptions(true, true, true, false)); + HotToWarmTieringResponse response = client().admin().indices().execute(HotToWarmTieringAction.INSTANCE, request).actionGet(); + assertTrue(response.isAcknowledged()); + assertTrue(response.getFailedIndices().isEmpty()); + } + + public void testNoAcceptedIndices() { + TieringIndexRequest request = new TieringIndexRequest(TARGET_TIER, "test-idx-*", "idx-*"); + HotToWarmTieringResponse response = client().admin().indices().execute(HotToWarmTieringAction.INSTANCE, request).actionGet(); + assertFalse(response.isAcknowledged()); + assertEquals(2, response.getFailedIndices().size()); + for (HotToWarmTieringResponse.IndexResult result : response.getFailedIndices()) { + assertEquals("index is not backed up by the remote store", result.getFailureReason()); + } + } +} diff --git a/server/src/test/java/org/opensearch/indices/tiering/TieringRequestValidatorTests.java b/server/src/test/java/org/opensearch/indices/tiering/TieringRequestValidatorTests.java new file mode 100644 index 0000000000000..6b6f74353812b --- /dev/null +++ b/server/src/test/java/org/opensearch/indices/tiering/TieringRequestValidatorTests.java @@ -0,0 +1,318 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The 
OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +package org.opensearch.indices.tiering; + +import org.opensearch.Version; +import org.opensearch.action.admin.indices.tiering.TieringValidationResult; +import org.opensearch.cluster.ClusterInfo; +import org.opensearch.cluster.ClusterName; +import org.opensearch.cluster.ClusterState; +import org.opensearch.cluster.DiskUsage; +import org.opensearch.cluster.metadata.IndexMetadata; +import org.opensearch.cluster.metadata.Metadata; +import org.opensearch.cluster.node.DiscoveryNode; +import org.opensearch.cluster.node.DiscoveryNodeRole; +import org.opensearch.cluster.node.DiscoveryNodes; +import org.opensearch.cluster.routing.RoutingTable; +import org.opensearch.cluster.routing.allocation.DiskThresholdSettings; +import org.opensearch.common.settings.ClusterSettings; +import org.opensearch.common.settings.Settings; +import org.opensearch.core.index.Index; +import org.opensearch.index.IndexModule; +import org.opensearch.indices.replication.common.ReplicationType; +import org.opensearch.test.OpenSearchTestCase; + +import java.util.Collections; +import java.util.HashMap; +import java.util.Map; +import java.util.Set; +import java.util.UUID; + +import static org.opensearch.cluster.routing.allocation.DiskThresholdSettings.CLUSTER_ROUTING_ALLOCATION_DISK_FLOOD_STAGE_WATERMARK_SETTING; +import static org.opensearch.cluster.routing.allocation.DiskThresholdSettings.CLUSTER_ROUTING_ALLOCATION_HIGH_DISK_WATERMARK_SETTING; +import static org.opensearch.cluster.routing.allocation.DiskThresholdSettings.CLUSTER_ROUTING_ALLOCATION_LOW_DISK_WATERMARK_SETTING; +import static org.opensearch.indices.tiering.TieringRequestValidator.getEligibleNodes; +import static org.opensearch.indices.tiering.TieringRequestValidator.getIndexPrimaryStoreSize; +import static org.opensearch.indices.tiering.TieringRequestValidator.getTotalAvailableBytesInWarmTier; +import static org.opensearch.indices.tiering.TieringRequestValidator.validateDiskThresholdWaterMarkNotBreached; +import static org.opensearch.indices.tiering.TieringRequestValidator.validateEligibleNodesCapacity; +import static org.opensearch.indices.tiering.TieringRequestValidator.validateHotIndex; +import static org.opensearch.indices.tiering.TieringRequestValidator.validateIndexHealth; +import static org.opensearch.indices.tiering.TieringRequestValidator.validateOpenIndex; +import static org.opensearch.indices.tiering.TieringRequestValidator.validateRemoteStoreIndex; +import static org.opensearch.indices.tiering.TieringRequestValidator.validateSearchNodes; + +public class TieringRequestValidatorTests extends OpenSearchTestCase { + + public void testValidateSearchNodes() { + ClusterState clusterStateWithSearchNodes = ClusterState.builder(ClusterName.CLUSTER_NAME_SETTING.getDefault(Settings.EMPTY)) + .nodes(createNodes(2, 0, 0)) + .build(); + + // throws no errors + validateSearchNodes(clusterStateWithSearchNodes, "test_index"); + } + + public void testWithNoSearchNodesInCluster() { + ClusterState clusterStateWithNoSearchNodes = ClusterState.builder(ClusterName.CLUSTER_NAME_SETTING.getDefault(Settings.EMPTY)) + .nodes(createNodes(0, 1, 1)) + .build(); + // throws error + IllegalArgumentException e = expectThrows( + IllegalArgumentException.class, + () -> validateSearchNodes(clusterStateWithNoSearchNodes, "test") + ); + } + + public void testValidRemoteStoreIndex() { + String indexUuid = UUID.randomUUID().toString(); + 
String indexName = "test_index"; + + ClusterState clusterState1 = buildClusterState( + indexName, + indexUuid, + Settings.builder() + .put(IndexMetadata.INDEX_REMOTE_STORE_ENABLED_SETTING.getKey(), true) + .put(IndexMetadata.INDEX_REPLICATION_TYPE_SETTING.getKey(), ReplicationType.SEGMENT) + .build() + ); + + assertTrue(validateRemoteStoreIndex(clusterState1, new Index(indexName, indexUuid))); + } + + public void testDocRepIndex() { + String indexUuid = UUID.randomUUID().toString(); + String indexName = "test_index"; + assertFalse(validateRemoteStoreIndex(buildClusterState(indexName, indexUuid, Settings.EMPTY), new Index(indexName, indexUuid))); + } + + public void testValidHotIndex() { + String indexUuid = UUID.randomUUID().toString(); + String indexName = "test_index"; + assertTrue(validateHotIndex(buildClusterState(indexName, indexUuid, Settings.EMPTY), new Index(indexName, indexUuid))); + } + + public void testIndexWithOngoingOrCompletedTiering() { + String indexUuid = UUID.randomUUID().toString(); + String indexName = "test_index"; + + IndexModule.TieringState tieringState = randomBoolean() ? IndexModule.TieringState.HOT_TO_WARM : IndexModule.TieringState.WARM; + + ClusterState clusterState = buildClusterState( + indexName, + indexUuid, + Settings.builder().put(IndexModule.INDEX_TIERING_STATE.getKey(), tieringState).build() + ); + assertFalse(validateHotIndex(clusterState, new Index(indexName, indexUuid))); + } + + public void testValidateIndexHealth() { + String indexUuid = UUID.randomUUID().toString(); + String indexName = "test_index"; + ClusterState clusterState = buildClusterState(indexName, indexUuid, Settings.EMPTY); + assertTrue(validateIndexHealth(clusterState, new Index(indexName, indexUuid))); + } + + public void testValidOpenIndex() { + String indexUuid = UUID.randomUUID().toString(); + String indexName = "test_index"; + assertTrue(validateOpenIndex(buildClusterState(indexName, indexUuid, Settings.EMPTY), new Index(indexName, indexUuid))); + } + + public void testCloseIndex() { + String indexUuid = UUID.randomUUID().toString(); + String indexName = "test_index"; + assertFalse( + validateOpenIndex( + buildClusterState(indexName, indexUuid, Settings.EMPTY, IndexMetadata.State.CLOSE), + new Index(indexName, indexUuid) + ) + ); + } + + public void testValidateDiskThresholdWaterMarkNotBreached() { + int noOfNodes = 2; + ClusterState clusterState = ClusterState.builder(ClusterName.CLUSTER_NAME_SETTING.getDefault(Settings.EMPTY)) + .nodes(createNodes(noOfNodes, 0, 0)) + .build(); + + ClusterInfo clusterInfo = clusterInfo(noOfNodes, 100, 20); + DiskThresholdSettings diskThresholdSettings = diskThresholdSettings("10b", "10b", "5b"); + // throws no error + validateDiskThresholdWaterMarkNotBreached(clusterState, clusterInfo, diskThresholdSettings, "test"); + } + + public void testValidateDiskThresholdWaterMarkNotBreachedThrowsError() { + int noOfNodes = 2; + ClusterState clusterState = ClusterState.builder(ClusterName.CLUSTER_NAME_SETTING.getDefault(Settings.EMPTY)) + .nodes(createNodes(noOfNodes, 0, 0)) + .build(); + ClusterInfo clusterInfo = clusterInfo(noOfNodes, 100, 5); + DiskThresholdSettings diskThresholdSettings = diskThresholdSettings("10b", "10b", "5b"); + // throws error + expectThrows( + IllegalArgumentException.class, + () -> validateDiskThresholdWaterMarkNotBreached(clusterState, clusterInfo, diskThresholdSettings, "test") + ); + } + + public void testGetTotalIndexSize() { + String indexUuid = UUID.randomUUID().toString(); + String indexName = "test_index"; + 
ClusterState clusterState = ClusterState.builder(buildClusterState(indexName, indexUuid, Settings.EMPTY)) + .nodes(createNodes(1, 0, 0)) + .build(); + Map diskUsages = diskUsages(1, 100, 50); + final Map shardSizes = new HashMap<>(); + shardSizes.put("[test_index][0][p]", 10L); // 10 bytes + ClusterInfo clusterInfo = new ClusterInfo(diskUsages, null, shardSizes, null, Map.of(), Map.of()); + assertEquals(10, getIndexPrimaryStoreSize(clusterState, clusterInfo, indexName)); + } + + public void testValidateEligibleNodesCapacityWithAllAccepted() { + String indexUuid = UUID.randomUUID().toString(); + String indexName = "test_index"; + Set indices = Set.of(new Index(indexName, indexUuid)); + ClusterState clusterState = ClusterState.builder(buildClusterState(indexName, indexUuid, Settings.EMPTY)) + .nodes(createNodes(1, 0, 0)) + .build(); + Map diskUsages = diskUsages(1, 100, 50); + final Map shardSizes = new HashMap<>(); + shardSizes.put("[test_index][0][p]", 10L); // 10 bytes + ClusterInfo clusterInfo = new ClusterInfo(diskUsages, null, shardSizes, null, Map.of(), Map.of()); + TieringValidationResult tieringValidationResult = new TieringValidationResult(indices); + validateEligibleNodesCapacity(clusterInfo, clusterState, tieringValidationResult); + assertEquals(indices, tieringValidationResult.getAcceptedIndices()); + assertTrue(tieringValidationResult.getRejectedIndices().isEmpty()); + } + + public void testValidateEligibleNodesCapacityWithAllRejected() { + String indexUuid = UUID.randomUUID().toString(); + String indexName = "test_index"; + Set indices = Set.of(new Index(indexName, indexUuid)); + ClusterState clusterState = ClusterState.builder(buildClusterState(indexName, indexUuid, Settings.EMPTY)) + .nodes(createNodes(1, 0, 0)) + .build(); + Map diskUsages = diskUsages(1, 100, 10); + final Map shardSizes = new HashMap<>(); + shardSizes.put("[test_index][0][p]", 20L); // 20 bytes + ClusterInfo clusterInfo = new ClusterInfo(diskUsages, null, shardSizes, null, Map.of(), Map.of()); + TieringValidationResult tieringValidationResult = new TieringValidationResult(indices); + validateEligibleNodesCapacity(clusterInfo, clusterState, tieringValidationResult); + assertEquals(indices.size(), tieringValidationResult.getRejectedIndices().size()); + assertEquals(indices, tieringValidationResult.getRejectedIndices().keySet()); + assertTrue(tieringValidationResult.getAcceptedIndices().isEmpty()); + } + + public void testGetTotalAvailableBytesInWarmTier() { + Map diskUsages = diskUsages(2, 500, 100); + assertEquals(200, getTotalAvailableBytesInWarmTier(diskUsages, Set.of("node-s0", "node-s1"))); + } + + public void testEligibleNodes() { + ClusterState clusterState = ClusterState.builder(ClusterName.CLUSTER_NAME_SETTING.getDefault(Settings.EMPTY)) + .nodes(createNodes(2, 0, 0)) + .build(); + + assertEquals(2, getEligibleNodes(clusterState).size()); + + clusterState = ClusterState.builder(ClusterName.CLUSTER_NAME_SETTING.getDefault(Settings.EMPTY)) + .nodes(createNodes(0, 1, 1)) + .build(); + assertEquals(0, getEligibleNodes(clusterState).size()); + } + + private static ClusterState buildClusterState(String indexName, String indexUuid, Settings settings) { + return buildClusterState(indexName, indexUuid, settings, IndexMetadata.State.OPEN); + } + + private static ClusterState buildClusterState(String indexName, String indexUuid, Settings settings, IndexMetadata.State state) { + Settings combinedSettings = Settings.builder().put(settings).put(createDefaultIndexSettings(indexUuid)).build(); + + Metadata metadata 
= Metadata.builder().put(IndexMetadata.builder(indexName).settings(combinedSettings).state(state)).build(); + + RoutingTable routingTable = RoutingTable.builder().addAsNew(metadata.index(indexName)).build(); + + return ClusterState.builder(ClusterName.CLUSTER_NAME_SETTING.getDefault(Settings.EMPTY)) + .metadata(metadata) + .routingTable(routingTable) + .build(); + } + + private static Settings createDefaultIndexSettings(String indexUuid) { + return Settings.builder() + .put("index.version.created", Version.CURRENT) + .put(IndexMetadata.SETTING_INDEX_UUID, indexUuid) + .put(IndexMetadata.INDEX_NUMBER_OF_SHARDS_SETTING.getKey(), 2) + .put(IndexMetadata.INDEX_NUMBER_OF_REPLICAS_SETTING.getKey(), 1) + .build(); + } + + private DiscoveryNodes createNodes(int numOfSearchNodes, int numOfDataNodes, int numOfIngestNodes) { + DiscoveryNodes.Builder discoveryNodesBuilder = DiscoveryNodes.builder(); + for (int i = 0; i < numOfSearchNodes; i++) { + discoveryNodesBuilder.add( + new DiscoveryNode( + "node-s" + i, + buildNewFakeTransportAddress(), + Collections.emptyMap(), + Collections.singleton(DiscoveryNodeRole.SEARCH_ROLE), + Version.CURRENT + ) + ); + } + for (int i = 0; i < numOfDataNodes; i++) { + discoveryNodesBuilder.add( + new DiscoveryNode( + "node-d" + i, + buildNewFakeTransportAddress(), + Collections.emptyMap(), + Collections.singleton(DiscoveryNodeRole.DATA_ROLE), + Version.CURRENT + ) + ); + } + for (int i = 0; i < numOfIngestNodes; i++) { + discoveryNodesBuilder.add( + new DiscoveryNode( + "node-i" + i, + buildNewFakeTransportAddress(), + Collections.emptyMap(), + Collections.singleton(DiscoveryNodeRole.INGEST_ROLE), + Version.CURRENT + ) + ); + } + return discoveryNodesBuilder.build(); + } + + private static DiskThresholdSettings diskThresholdSettings(String low, String high, String flood) { + return new DiskThresholdSettings( + Settings.builder() + .put(CLUSTER_ROUTING_ALLOCATION_LOW_DISK_WATERMARK_SETTING.getKey(), low) + .put(CLUSTER_ROUTING_ALLOCATION_HIGH_DISK_WATERMARK_SETTING.getKey(), high) + .put(CLUSTER_ROUTING_ALLOCATION_DISK_FLOOD_STAGE_WATERMARK_SETTING.getKey(), flood) + .build(), + new ClusterSettings(Settings.EMPTY, ClusterSettings.BUILT_IN_CLUSTER_SETTINGS) + ); + } + + private static ClusterInfo clusterInfo(int noOfNodes, long totalBytes, long freeBytes) { + final Map diskUsages = diskUsages(noOfNodes, totalBytes, freeBytes); + return new ClusterInfo(diskUsages, null, null, null, Map.of(), Map.of()); + } + + private static Map diskUsages(int noOfSearchNodes, long totalBytes, long freeBytes) { + final Map diskUsages = new HashMap<>(); + for (int i = 0; i < noOfSearchNodes; i++) { + diskUsages.put("node-s" + i, new DiskUsage("node-s" + i, "node-s" + i, "/foo/bar", totalBytes, freeBytes)); + } + return diskUsages; + } +} From 8ff3bcc4632287d8a784a1cba662957d6f921851 Mon Sep 17 00:00:00 2001 From: Siddhant Deshmukh Date: Mon, 22 Jul 2024 21:45:40 -0700 Subject: [PATCH 104/167] Create listener to refresh search thread resource usage (#14832) * [bug fix] fix incorrect coordinator node search resource usages Signed-off-by: Chenyang Ji * fix bug on serialization when passing task resource usage to coordinator Signed-off-by: Chenyang Ji * add more unit tests Signed-off-by: Chenyang Ji * remove query insights plugin related code Signed-off-by: Chenyang Ji * create per request listener to refresh task resource usage Signed-off-by: Chenyang Ji * Make new listener API public Signed-off-by: Siddhant Deshmukh * Add changelog Signed-off-by: Siddhant Deshmukh * Remove wrong files added 
Signed-off-by: Siddhant Deshmukh

* Address review comments

Signed-off-by: Siddhant Deshmukh

* Build fix

Signed-off-by: Siddhant Deshmukh

* Make singleton

Signed-off-by: Siddhant Deshmukh

* Address review comments

Signed-off-by: Siddhant Deshmukh

* Make sure listener runs before plugin listeners

Signed-off-by: Siddhant Deshmukh

* Spotless

Signed-off-by: Siddhant Deshmukh

* Minor fix

Signed-off-by: Siddhant Deshmukh

---------

Signed-off-by: Chenyang Ji
Signed-off-by: Siddhant Deshmukh
Signed-off-by: Jay Deng
Co-authored-by: Chenyang Ji
Co-authored-by: Jay Deng
---
 CHANGELOG.md                                  |  1 +
 .../SearchTaskRequestOperationsListener.java  | 30 +++++++++++++++++++
 .../main/java/org/opensearch/node/Node.java   | 18 ++++++-----
 3 files changed, 42 insertions(+), 7 deletions(-)
 create mode 100644 server/src/main/java/org/opensearch/action/search/SearchTaskRequestOperationsListener.java

diff --git a/CHANGELOG.md b/CHANGELOG.md
index e5534577a67a6..29c78ea7e3e4f 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -27,6 +27,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
 - Reduce logging in DEBUG for MasterService:run ([#14795](https://github.com/opensearch-project/OpenSearch/pull/14795))
 - Enabling term version check on local state for all ClusterManager Read Transport Actions ([#14273](https://github.com/opensearch-project/OpenSearch/pull/14273))
 - Add persian_stem filter (([#14847](https://github.com/opensearch-project/OpenSearch/pull/14847)))
+- Create listener to refresh search thread resource usage ([#14832](https://github.com/opensearch-project/OpenSearch/pull/14832))
 - Add rest, transport layer changes for hot to warm tiering - dedicated setup (([#13980](https://github.com/opensearch-project/OpenSearch/pull/13980))

 ### Dependencies

diff --git a/server/src/main/java/org/opensearch/action/search/SearchTaskRequestOperationsListener.java b/server/src/main/java/org/opensearch/action/search/SearchTaskRequestOperationsListener.java
new file mode 100644
index 0000000000000..4434d71793b23
--- /dev/null
+++ b/server/src/main/java/org/opensearch/action/search/SearchTaskRequestOperationsListener.java
@@ -0,0 +1,30 @@
+/*
+ * SPDX-License-Identifier: Apache-2.0
+ *
+ * The OpenSearch Contributors require contributions made to
+ * this file be licensed under the Apache-2.0 license or a
+ * compatible open source license.
+ */
+
+package org.opensearch.action.search;
+
+import org.opensearch.tasks.TaskResourceTrackingService;
+
+/**
+ * A {@link SearchRequestOperationsListener} that refreshes the search task's resource usage stats
+ * when the request ends, so that the coordinator node captures the task's resource consumption
+ * upon request completion.
+ * + */ +public final class SearchTaskRequestOperationsListener extends SearchRequestOperationsListener { + private final TaskResourceTrackingService taskResourceTrackingService; + + public SearchTaskRequestOperationsListener(TaskResourceTrackingService taskResourceTrackingService) { + this.taskResourceTrackingService = taskResourceTrackingService; + } + + @Override + public void onRequestEnd(SearchPhaseContext context, SearchRequestContext searchRequestContext) { + taskResourceTrackingService.refreshResourceStats(context.getTask()); + } +} diff --git a/server/src/main/java/org/opensearch/node/Node.java b/server/src/main/java/org/opensearch/node/Node.java index d91b2a45a48c6..448cb3627651c 100644 --- a/server/src/main/java/org/opensearch/node/Node.java +++ b/server/src/main/java/org/opensearch/node/Node.java @@ -52,6 +52,7 @@ import org.opensearch.action.search.SearchRequestOperationsListener; import org.opensearch.action.search.SearchRequestSlowLog; import org.opensearch.action.search.SearchRequestStats; +import org.opensearch.action.search.SearchTaskRequestOperationsListener; import org.opensearch.action.search.SearchTransportService; import org.opensearch.action.support.TransportAction; import org.opensearch.action.update.UpdateHelper; @@ -855,8 +856,17 @@ protected Node( threadPool ); + final TaskResourceTrackingService taskResourceTrackingService = new TaskResourceTrackingService( + settings, + clusterService.getClusterSettings(), + threadPool + ); + final SearchRequestStats searchRequestStats = new SearchRequestStats(clusterService.getClusterSettings()); final SearchRequestSlowLog searchRequestSlowLog = new SearchRequestSlowLog(clusterService); + final SearchTaskRequestOperationsListener searchTaskRequestOperationsListener = new SearchTaskRequestOperationsListener( + taskResourceTrackingService + ); remoteStoreStatsTrackerFactory = new RemoteStoreStatsTrackerFactory(clusterService, settings); CacheModule cacheModule = new CacheModule(pluginsService.filterPlugins(CachePlugin.class), settings); @@ -988,7 +998,7 @@ protected Node( final SearchRequestOperationsCompositeListenerFactory searchRequestOperationsCompositeListenerFactory = new SearchRequestOperationsCompositeListenerFactory( Stream.concat( - Stream.of(searchRequestStats, searchRequestSlowLog), + Stream.of(searchRequestStats, searchRequestSlowLog, searchTaskRequestOperationsListener), pluginComponents.stream() .filter(p -> p instanceof SearchRequestOperationsListener) .map(p -> (SearchRequestOperationsListener) p) @@ -1117,12 +1127,6 @@ protected Node( // development. Then we can deprecate Getter and Setter for IndexingPressureService in ClusterService (#478). 
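
The Stream.concat wiring above is what makes the new core listener run ahead of any plugin-supplied SearchRequestOperationsListener. A minimal plain-Java sketch of that ordering idea, with a hypothetical RequestListener interface rather than the actual OpenSearch types:

    import java.util.List;
    import java.util.stream.Collectors;
    import java.util.stream.Stream;

    // Sketch: core listeners are concatenated ahead of plugin-contributed
    // listeners, so the task-resource refresh always fires first.
    public class ListenerOrderingSketch {
        interface RequestListener {
            void onRequestEnd(String requestId); // hypothetical callback
        }

        public static void main(String[] args) {
            RequestListener coreStatsRefresh = id -> System.out.println("refresh resource stats for " + id);
            RequestListener pluginListener = id -> System.out.println("plugin observes " + id);

            List<RequestListener> ordered = Stream.concat(
                Stream.of(coreStatsRefresh),   // core listeners first
                Stream.of(pluginListener)      // then plugin listeners
            ).collect(Collectors.toList());

            ordered.forEach(l -> l.onRequestEnd("req-1")); // fires in declared order
        }
    }
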
clusterService.setIndexingPressureService(indexingPressureService);
 
-        final TaskResourceTrackingService taskResourceTrackingService = new TaskResourceTrackingService(
-            settings,
-            clusterService.getClusterSettings(),
-            threadPool
-        );
-
         final SearchBackpressureSettings searchBackpressureSettings = new SearchBackpressureSettings(
             settings,
             clusterService.getClusterSettings()

From 130500218a794f15df522c3ba5a31acbc77209e4 Mon Sep 17 00:00:00 2001
From: rishavz_sagar
Date: Tue, 23 Jul 2024 11:08:10 +0530
Subject: [PATCH 105/167] Caching avg total bytes and avg free bytes inside ClusterInfo (#14851)

Signed-off-by: RS146BIJAY
---
 .../org/opensearch/cluster/ClusterInfo.java   | 37 +++++++++++++++
 .../decider/DiskThresholdDecider.java         | 45 +++++++++----------
 .../decider/DiskThresholdDeciderTests.java    | 13 ------
 3 files changed, 57 insertions(+), 38 deletions(-)

diff --git a/server/src/main/java/org/opensearch/cluster/ClusterInfo.java b/server/src/main/java/org/opensearch/cluster/ClusterInfo.java
index 4c38d6fd99f5d..7216c447acc3e 100644
--- a/server/src/main/java/org/opensearch/cluster/ClusterInfo.java
+++ b/server/src/main/java/org/opensearch/cluster/ClusterInfo.java
@@ -33,6 +33,7 @@
 package org.opensearch.cluster;
 
 import org.opensearch.Version;
+import org.opensearch.cluster.routing.RoutingNode;
 import org.opensearch.cluster.routing.ShardRouting;
 import org.opensearch.common.annotation.PublicApi;
 import org.opensearch.core.common.io.stream.StreamInput;
@@ -68,6 +69,8 @@ public class ClusterInfo implements ToXContentFragment, Writeable {
     final Map<ShardRouting, String> routingToDataPath;
     final Map<NodeAndPath, ReservedSpace> reservedSpace;
     final Map<String, FileCacheStats> nodeFileCacheStats;
+    private long avgTotalBytes;
+    private long avgFreeByte;
 
     protected ClusterInfo() {
         this(Map.of(), Map.of(), Map.of(), Map.of(), Map.of(), Map.of());
@@ -97,6 +100,7 @@ public ClusterInfo(
         this.routingToDataPath = routingToDataPath;
         this.reservedSpace = reservedSpace;
         this.nodeFileCacheStats = nodeFileCacheStats;
+        calculateAvgFreeAndTotalBytes(mostAvailableSpaceUsage);
     }
 
     public ClusterInfo(StreamInput in) throws IOException {
@@ -117,6 +121,39 @@ public ClusterInfo(StreamInput in) throws IOException {
         } else {
             this.nodeFileCacheStats = Map.of();
         }
+
+        calculateAvgFreeAndTotalBytes(mostAvailableSpaceUsage);
+    }
+
+    /**
+     * Computes and caches the average total and free bytes across all nodes in the
+     * disk usage map, used later to derive a {@link DiskUsage} for a {@link RoutingNode}
+     * that has no usage entry of its own.
+ * @param usages Map of nodeId to DiskUsage for all known nodes + */ + private void calculateAvgFreeAndTotalBytes(final Map usages) { + if (usages == null || usages.isEmpty()) { + this.avgTotalBytes = 0; + this.avgFreeByte = 0; + return; + } + + long totalBytes = 0; + long freeBytes = 0; + for (DiskUsage du : usages.values()) { + totalBytes += du.getTotalBytes(); + freeBytes += du.getFreeBytes(); + } + + this.avgTotalBytes = totalBytes / usages.size(); + this.avgFreeByte = freeBytes / usages.size(); + } + + public long getAvgFreeByte() { + return avgFreeByte; + } + + public long getAvgTotalBytes() { + return avgTotalBytes; } @Override diff --git a/server/src/main/java/org/opensearch/cluster/routing/allocation/decider/DiskThresholdDecider.java b/server/src/main/java/org/opensearch/cluster/routing/allocation/decider/DiskThresholdDecider.java index efa5115939d3c..5fc3f282f33f7 100644 --- a/server/src/main/java/org/opensearch/cluster/routing/allocation/decider/DiskThresholdDecider.java +++ b/server/src/main/java/org/opensearch/cluster/routing/allocation/decider/DiskThresholdDecider.java @@ -140,9 +140,8 @@ public static long sizeOfRelocatingShards( // Where reserved space is unavailable (e.g. stats are out-of-sync) compute a conservative estimate for initialising shards final List initializingShards = node.shardsWithState(ShardRoutingState.INITIALIZING); - initializingShards.removeIf(shardRouting -> reservedSpace.containsShardId(shardRouting.shardId())); for (ShardRouting routing : initializingShards) { - if (routing.relocatingNodeId() == null) { + if (routing.relocatingNodeId() == null || reservedSpace.containsShardId(routing.shardId())) { // in practice the only initializing-but-not-relocating shards with a nonzero expected shard size will be ones created // by a resize (shrink/split/clone) operation which we expect to happen using hard links, so they shouldn't be taking // any additional space and can be ignored here @@ -230,7 +229,14 @@ public Decision canAllocate(ShardRouting shardRouting, RoutingNode node, Routing // subtractLeavingShards is passed as false here, because they still use disk space, and therefore we should be extra careful // and take the size into account - final DiskUsageWithRelocations usage = getDiskUsage(node, allocation, usages, false); + final DiskUsageWithRelocations usage = getDiskUsage( + node, + allocation, + usages, + clusterInfo.getAvgFreeByte(), + clusterInfo.getAvgTotalBytes(), + false + ); // First, check that the node currently over the low watermark double freeDiskPercentage = usage.getFreeDiskAsPercentage(); // Cache the used disk percentage for displaying disk percentages consistent with documentation @@ -492,7 +498,14 @@ public Decision canRemain(ShardRouting shardRouting, RoutingNode node, RoutingAl // subtractLeavingShards is passed as true here, since this is only for shards remaining, we will *eventually* have enough disk // since shards are moving away. No new shards will be incoming since in canAllocate we pass false for this check. - final DiskUsageWithRelocations usage = getDiskUsage(node, allocation, usages, true); + final DiskUsageWithRelocations usage = getDiskUsage( + node, + allocation, + usages, + clusterInfo.getAvgFreeByte(), + clusterInfo.getAvgTotalBytes(), + true + ); final String dataPath = clusterInfo.getDataPath(shardRouting); // If this node is already above the high threshold, the shard cannot remain (get it off!) 
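
The averages consumed above are computed once when the ClusterInfo is built, rather than re-averaging every node's usage on each allocation decision as the averageUsage helper (removed later in this patch) did. A self-contained sketch of the caching idea, with a hypothetical NodeUsage record standing in for the real DiskUsage class:

    import java.util.List;

    // Sketch: compute the average total/free bytes once, then reuse them for
    // any node that has no usage entry, instead of re-averaging per decision.
    public class AvgDiskUsageSketch {
        record NodeUsage(String nodeId, long totalBytes, long freeBytes) {} // hypothetical

        private final long avgTotalBytes;
        private final long avgFreeBytes;

        AvgDiskUsageSketch(List<NodeUsage> usages) {
            long total = 0, free = 0;
            for (NodeUsage u : usages) {
                total += u.totalBytes();
                free += u.freeBytes();
            }
            // Guard against division by zero when no usages are known.
            avgTotalBytes = usages.isEmpty() ? 0 : total / usages.size();
            avgFreeBytes = usages.isEmpty() ? 0 : free / usages.size();
        }

        NodeUsage usageFor(String nodeId) {
            // Fallback used when stats for this node are missing or out of sync.
            return new NodeUsage(nodeId, avgTotalBytes, avgFreeBytes);
        }

        public static void main(String[] args) {
            AvgDiskUsageSketch info = new AvgDiskUsageSketch(
                List.of(new NodeUsage("n1", 100, 50), new NodeUsage("n2", 100, 0))
            );
            System.out.println(info.usageFor("n3")); // NodeUsage[nodeId=n3, totalBytes=100, freeBytes=25]
        }
    }
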
final double freeDiskPercentage = usage.getFreeDiskAsPercentage(); @@ -581,13 +594,15 @@ private DiskUsageWithRelocations getDiskUsage( RoutingNode node, RoutingAllocation allocation, final Map usages, + final long avgFreeBytes, + final long avgTotalBytes, boolean subtractLeavingShards ) { DiskUsage usage = usages.get(node.nodeId()); if (usage == null) { // If there is no usage, and we have other nodes in the cluster, // use the average usage for all nodes as the usage for this node - usage = averageUsage(node, usages); + usage = new DiskUsage(node.nodeId(), node.node().getName(), "_na_", avgTotalBytes, avgFreeBytes); if (logger.isDebugEnabled()) { logger.debug( "unable to determine disk usage for {}, defaulting to average across nodes [{} total] [{} free] [{}% free]", @@ -619,26 +634,6 @@ private DiskUsageWithRelocations getDiskUsage( return diskUsageWithRelocations; } - /** - * Returns a {@link DiskUsage} for the {@link RoutingNode} using the - * average usage of other nodes in the disk usage map. - * @param node Node to return an averaged DiskUsage object for - * @param usages Map of nodeId to DiskUsage for all known nodes - * @return DiskUsage representing given node using the average disk usage - */ - DiskUsage averageUsage(RoutingNode node, final Map usages) { - if (usages.size() == 0) { - return new DiskUsage(node.nodeId(), node.node().getName(), "_na_", 0, 0); - } - long totalBytes = 0; - long freeBytes = 0; - for (DiskUsage du : usages.values()) { - totalBytes += du.getTotalBytes(); - freeBytes += du.getFreeBytes(); - } - return new DiskUsage(node.nodeId(), node.node().getName(), "_na_", totalBytes / usages.size(), freeBytes / usages.size()); - } - /** * Given the DiskUsage for a node and the size of the shard, return the * percentage of free disk if the shard were to be allocated to the node. 
diff --git a/server/src/test/java/org/opensearch/cluster/routing/allocation/decider/DiskThresholdDeciderTests.java b/server/src/test/java/org/opensearch/cluster/routing/allocation/decider/DiskThresholdDeciderTests.java index 652633e689b93..2e24640fe858d 100644 --- a/server/src/test/java/org/opensearch/cluster/routing/allocation/decider/DiskThresholdDeciderTests.java +++ b/server/src/test/java/org/opensearch/cluster/routing/allocation/decider/DiskThresholdDeciderTests.java @@ -863,19 +863,6 @@ public void testUnknownDiskUsage() { assertThat(clusterState.getRoutingNodes().node("node1").size(), equalTo(1)); } - public void testAverageUsage() { - RoutingNode rn = new RoutingNode("node1", newNode("node1")); - DiskThresholdDecider decider = makeDecider(Settings.EMPTY); - - final Map usages = new HashMap<>(); - usages.put("node2", new DiskUsage("node2", "n2", "/dev/null", 100, 50)); // 50% used - usages.put("node3", new DiskUsage("node3", "n3", "/dev/null", 100, 0)); // 100% used - - DiskUsage node1Usage = decider.averageUsage(rn, usages); - assertThat(node1Usage.getTotalBytes(), equalTo(100L)); - assertThat(node1Usage.getFreeBytes(), equalTo(25L)); - } - public void testFreeDiskPercentageAfterShardAssigned() { DiskThresholdDecider decider = makeDecider(Settings.EMPTY); From e485856e2794de2b019be34a50df389dac136b89 Mon Sep 17 00:00:00 2001 From: Liyun Xiu Date: Tue, 23 Jul 2024 20:14:26 +0800 Subject: [PATCH 106/167] Use default value when index.number_of_replicas is null (#14812) * Use default value when index.number_of_replicas is null Signed-off-by: Liyun Xiu * Add integration test Signed-off-by: Liyun Xiu * Add changelog Signed-off-by: Liyun Xiu --------- Signed-off-by: Liyun Xiu --- CHANGELOG.md | 1 + .../admin/indices/create/CreateIndexIT.java | 24 +++++++++++++++++ .../metadata/MetadataCreateIndexService.java | 3 ++- .../MetadataCreateIndexServiceTests.java | 27 +++++++++++++++++++ 4 files changed, 54 insertions(+), 1 deletion(-) diff --git a/CHANGELOG.md b/CHANGELOG.md index 29c78ea7e3e4f..5a54c5150da76 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -85,6 +85,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), - Fix create or update alias API doesn't throw exception for unsupported parameters ([#14719](https://github.com/opensearch-project/OpenSearch/pull/14719)) - Refactoring FilterPath.parse by using an iterative approach ([#14200](https://github.com/opensearch-project/OpenSearch/pull/14200)) - Refactoring Grok.validatePatternBank by using an iterative approach ([#14206](https://github.com/opensearch-project/OpenSearch/pull/14206)) +- Fix NPE when creating index with index.number_of_replicas set to null ([#14812](https://github.com/opensearch-project/OpenSearch/pull/14812)) - Update help output for _cat ([#14722](https://github.com/opensearch-project/OpenSearch/pull/14722)) - Fix bulk upsert ignores the default_pipeline and final_pipeline when auto-created index matches the index template ([#12891](https://github.com/opensearch-project/OpenSearch/pull/12891)) - Fix NPE in ReplicaShardAllocator ([#14385](https://github.com/opensearch-project/OpenSearch/pull/14385)) diff --git a/server/src/internalClusterTest/java/org/opensearch/action/admin/indices/create/CreateIndexIT.java b/server/src/internalClusterTest/java/org/opensearch/action/admin/indices/create/CreateIndexIT.java index 1c182b05fa4a8..fbe713d9e22c4 100644 --- 
a/server/src/internalClusterTest/java/org/opensearch/action/admin/indices/create/CreateIndexIT.java +++ b/server/src/internalClusterTest/java/org/opensearch/action/admin/indices/create/CreateIndexIT.java @@ -406,4 +406,28 @@ public void testIndexNameInResponse() { assertEquals("Should have index name in response", "foo", response.index()); } + public void testCreateIndexWithNullReplicaCountPickUpClusterReplica() { + int numReplicas = 3; + String indexName = "test-idx-1"; + assertAcked( + client().admin() + .cluster() + .prepareUpdateSettings() + .setPersistentSettings(Settings.builder().put("cluster.default_number_of_replicas", numReplicas).build()) + .get() + ); + Settings settings = Settings.builder() + .put(IndexMetadata.INDEX_NUMBER_OF_SHARDS_SETTING.getKey(), 1) + .put(IndexMetadata.INDEX_NUMBER_OF_REPLICAS_SETTING.getKey(), (String) null) + .build(); + assertAcked(client().admin().indices().prepareCreate(indexName).setSettings(settings).get()); + IndicesService indicesService = internalCluster().getInstance(IndicesService.class, internalCluster().getClusterManagerName()); + for (IndexService indexService : indicesService) { + assertEquals(indexName, indexService.index().getName()); + assertEquals( + numReplicas, + (int) indexService.getIndexSettings().getSettings().getAsInt(IndexMetadata.SETTING_NUMBER_OF_REPLICAS, null) + ); + } + } } diff --git a/server/src/main/java/org/opensearch/cluster/metadata/MetadataCreateIndexService.java b/server/src/main/java/org/opensearch/cluster/metadata/MetadataCreateIndexService.java index 7973745ce84b3..50d25b11ef810 100644 --- a/server/src/main/java/org/opensearch/cluster/metadata/MetadataCreateIndexService.java +++ b/server/src/main/java/org/opensearch/cluster/metadata/MetadataCreateIndexService.java @@ -946,7 +946,8 @@ static Settings aggregateIndexSettings( if (INDEX_NUMBER_OF_SHARDS_SETTING.exists(indexSettingsBuilder) == false) { indexSettingsBuilder.put(SETTING_NUMBER_OF_SHARDS, INDEX_NUMBER_OF_SHARDS_SETTING.get(settings)); } - if (INDEX_NUMBER_OF_REPLICAS_SETTING.exists(indexSettingsBuilder) == false) { + if (INDEX_NUMBER_OF_REPLICAS_SETTING.exists(indexSettingsBuilder) == false + || indexSettingsBuilder.get(SETTING_NUMBER_OF_REPLICAS) == null) { indexSettingsBuilder.put(SETTING_NUMBER_OF_REPLICAS, DEFAULT_REPLICA_COUNT_SETTING.get(currentState.metadata().settings())); } if (settings.get(SETTING_AUTO_EXPAND_REPLICAS) != null && indexSettingsBuilder.get(SETTING_AUTO_EXPAND_REPLICAS) == null) { diff --git a/server/src/test/java/org/opensearch/cluster/metadata/MetadataCreateIndexServiceTests.java b/server/src/test/java/org/opensearch/cluster/metadata/MetadataCreateIndexServiceTests.java index 0d86cfcca389c..86ca8b3ad6319 100644 --- a/server/src/test/java/org/opensearch/cluster/metadata/MetadataCreateIndexServiceTests.java +++ b/server/src/test/java/org/opensearch/cluster/metadata/MetadataCreateIndexServiceTests.java @@ -2151,6 +2151,33 @@ public void testAsyncDurabilityThrowsExceptionWhenRestrictSettingTrue() { ); } + public void testAggregateIndexSettingsIndexReplicaIsSetToNull() { + // This checks that aggregateIndexSettings works for the case when the index setting `index.number_of_replicas` is set to null + request = new CreateIndexClusterStateUpdateRequest("create index", "test", "test"); + request.settings(Settings.builder().putNull(SETTING_NUMBER_OF_REPLICAS).build()); + Integer clusterDefaultReplicaNumber = 5; + Metadata metadata = new Metadata.Builder().persistentSettings( + Settings.builder().put("cluster.default_number_of_replicas", 
clusterDefaultReplicaNumber).build() + ).build(); + ClusterState clusterState = ClusterState.builder(org.opensearch.cluster.ClusterName.CLUSTER_NAME_SETTING.getDefault(Settings.EMPTY)) + .metadata(metadata) + .build(); + Settings settings = Settings.builder().put(CLUSTER_REMOTE_INDEX_RESTRICT_ASYNC_DURABILITY_SETTING.getKey(), true).build(); + clusterSettings = new ClusterSettings(settings, ClusterSettings.BUILT_IN_CLUSTER_SETTINGS); + Settings aggregatedSettings = aggregateIndexSettings( + clusterState, + request, + Settings.EMPTY, + null, + Settings.EMPTY, + IndexScopedSettings.DEFAULT_SCOPED_SETTINGS, + randomShardLimitService(), + Collections.emptySet(), + clusterSettings + ); + assertEquals(clusterDefaultReplicaNumber.toString(), aggregatedSettings.get(SETTING_NUMBER_OF_REPLICAS)); + } + public void testRequestDurabilityWhenRestrictSettingTrue() { // This checks that aggregateIndexSettings works for the case when the cluster setting // cluster.remote_store.index.restrict.async-durability is false or not set, it allows all types of durability modes From f85a58f64e5aaba76eb519e309881f288aff8fa6 Mon Sep 17 00:00:00 2001 From: shailendra0811 <167273922+shailendra0811@users.noreply.github.com> Date: Tue, 23 Jul 2024 18:10:32 +0530 Subject: [PATCH 107/167] [Remote Routing Table] Implement write and read flow for shard diff file. (#14684) * Implement write and read flow to upload/download shard diff file. Signed-off-by: Shailendra Singh --- CHANGELOG.md | 1 + .../remote/RemoteRoutingTableServiceIT.java | 97 +++++- .../routing/RoutingTableIncrementalDiff.java | 168 ++++++++++ .../InternalRemoteRoutingTableService.java | 73 +++- .../remote/NoopRemoteRoutingTableService.java | 33 +- .../remote/RemoteRoutingTableService.java | 48 ++- .../remote/ClusterMetadataManifest.java | 15 +- .../remote/ClusterStateDiffManifest.java | 60 +++- .../RemoteClusterStateCleanupManager.java | 26 ++ .../remote/RemoteClusterStateService.java | 94 +++++- .../remote/RemoteClusterStateUtils.java | 1 + .../remote/RemotePersistenceStats.java | 11 + .../model/RemoteClusterMetadataManifest.java | 7 +- .../routingtable/RemoteRoutingTableDiff.java | 150 +++++++++ .../RemoteRoutingTableServiceTests.java | 165 ++++++++- .../remote/ClusterMetadataManifestTests.java | 81 ++++- ...RemoteClusterStateCleanupManagerTests.java | 146 ++++++++ .../RemoteClusterStateServiceTests.java | 177 +++++++++- .../model/ClusterStateDiffManifestTests.java | 69 +++- .../RemoteIndexRoutingTableDiffTests.java | 317 ++++++++++++++++++ 20 files changed, 1663 insertions(+), 76 deletions(-) create mode 100644 server/src/main/java/org/opensearch/cluster/routing/RoutingTableIncrementalDiff.java create mode 100644 server/src/main/java/org/opensearch/gateway/remote/routingtable/RemoteRoutingTableDiff.java create mode 100644 server/src/test/java/org/opensearch/gateway/remote/routingtable/RemoteIndexRoutingTableDiffTests.java diff --git a/CHANGELOG.md b/CHANGELOG.md index 5a54c5150da76..c8f185ca2bb3d 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -20,6 +20,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), - Add matchesPluginSystemIndexPattern to SystemIndexRegistry ([#14750](https://github.com/opensearch-project/OpenSearch/pull/14750)) - Add Plugin interface for loading application based configuration templates (([#14659](https://github.com/opensearch-project/OpenSearch/issues/14659))) - Refactor remote-routing-table service inline with remote state 
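
The core of the number_of_replicas fix above is that with Settings.builder().putNull(...) the key exists but its value is null, so the old exists()-only check skipped the cluster default and let the null leak through (causing the NPE). A plain-map sketch of those semantics, with illustrative names rather than the real Settings API:

    import java.util.HashMap;
    import java.util.Map;

    // Sketch: a key can be *present* with a null value, so checking presence
    // alone is not enough; the value must be null-checked before use.
    public class NullSettingFallbackSketch {
        public static void main(String[] args) {
            Map<String, String> indexSettings = new HashMap<>();
            indexSettings.put("index.number_of_replicas", null); // explicit null, like putNull()

            int clusterDefaultReplicas = 5; // stand-in for cluster.default_number_of_replicas

            String raw = indexSettings.get("index.number_of_replicas");
            // Old check: containsKey() == true, so the default was skipped and a
            // null replica count leaked through. New check also treats an
            // explicit null as "unset" and falls back to the cluster default.
            int replicas = (!indexSettings.containsKey("index.number_of_replicas") || raw == null)
                ? clusterDefaultReplicas
                : Integer.parseInt(raw);

            System.out.println(replicas); // 5
        }
    }
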
interfaces([#14668](https://github.com/opensearch-project/OpenSearch/pull/14668)) +- Add shard-diff path to diff manifest to reduce number of read calls remote store (([#14684](https://github.com/opensearch-project/OpenSearch/pull/14684))) - Add SortResponseProcessor to Search Pipelines (([#14785](https://github.com/opensearch-project/OpenSearch/issues/14785))) - Add prefix mode verification setting for repository verification (([#14790](https://github.com/opensearch-project/OpenSearch/pull/14790))) - Add SplitResponseProcessor to Search Pipelines (([#14800](https://github.com/opensearch-project/OpenSearch/issues/14800))) diff --git a/server/src/internalClusterTest/java/org/opensearch/gateway/remote/RemoteRoutingTableServiceIT.java b/server/src/internalClusterTest/java/org/opensearch/gateway/remote/RemoteRoutingTableServiceIT.java index 53764c0b4d0e8..b0d046cbdf3db 100644 --- a/server/src/internalClusterTest/java/org/opensearch/gateway/remote/RemoteRoutingTableServiceIT.java +++ b/server/src/internalClusterTest/java/org/opensearch/gateway/remote/RemoteRoutingTableServiceIT.java @@ -8,6 +8,7 @@ package org.opensearch.gateway.remote; +import org.opensearch.action.admin.cluster.settings.ClusterUpdateSettingsResponse; import org.opensearch.action.admin.cluster.state.ClusterStateRequest; import org.opensearch.cluster.metadata.IndexMetadata; import org.opensearch.cluster.routing.IndexRoutingTable; @@ -32,16 +33,19 @@ import java.util.Optional; import java.util.Set; import java.util.concurrent.ExecutionException; +import java.util.concurrent.TimeUnit; import java.util.concurrent.atomic.AtomicInteger; import static org.opensearch.common.util.FeatureFlags.REMOTE_PUBLICATION_EXPERIMENTAL; import static org.opensearch.gateway.remote.RemoteClusterStateService.REMOTE_CLUSTER_STATE_ENABLED_SETTING; import static org.opensearch.gateway.remote.routingtable.RemoteIndexRoutingTable.INDEX_ROUTING_TABLE; +import static org.opensearch.indices.IndicesService.CLUSTER_DEFAULT_INDEX_REFRESH_INTERVAL_SETTING; import static org.opensearch.node.remotestore.RemoteStoreNodeAttribute.REMOTE_STORE_ROUTING_TABLE_REPOSITORY_NAME_ATTRIBUTE_KEY; @OpenSearchIntegTestCase.ClusterScope(scope = OpenSearchIntegTestCase.Scope.TEST, numDataNodes = 0) public class RemoteRoutingTableServiceIT extends RemoteStoreBaseIntegTestCase { private static final String INDEX_NAME = "test-index"; + private static final String INDEX_NAME_1 = "test-index-1"; BlobPath indexRoutingPath; AtomicInteger indexRoutingFiles = new AtomicInteger(); private final RemoteStoreEnums.PathType pathType = RemoteStoreEnums.PathType.HASHED_PREFIX; @@ -72,7 +76,13 @@ public void testRemoteRoutingTableIndexLifecycle() throws Exception { RemoteClusterStateService.class ); RemoteManifestManager remoteManifestManager = remoteClusterStateService.getRemoteManifestManager(); - verifyUpdatesInManifestFile(remoteManifestManager); + Optional latestManifest = remoteManifestManager.getLatestClusterMetadataManifest( + getClusterState().getClusterName().value(), + getClusterState().getMetadata().clusterUUID() + ); + List expectedIndexNames = new ArrayList<>(); + List deletedIndexNames = new ArrayList<>(); + verifyUpdatesInManifestFile(latestManifest, expectedIndexNames, 1, deletedIndexNames, true); List routingTableVersions = getRoutingTableFromAllNodes(); assertTrue(areRoutingTablesSame(routingTableVersions)); @@ -86,7 +96,11 @@ public void testRemoteRoutingTableIndexLifecycle() throws Exception { assertTrue(indexRoutingFilesAfterUpdate >= 
indexRoutingFiles.get() + 3); }); - verifyUpdatesInManifestFile(remoteManifestManager); + latestManifest = remoteManifestManager.getLatestClusterMetadataManifest( + getClusterState().getClusterName().value(), + getClusterState().getMetadata().clusterUUID() + ); + verifyUpdatesInManifestFile(latestManifest, expectedIndexNames, 1, deletedIndexNames, true); routingTableVersions = getRoutingTableFromAllNodes(); assertTrue(areRoutingTablesSame(routingTableVersions)); @@ -98,6 +112,42 @@ public void testRemoteRoutingTableIndexLifecycle() throws Exception { assertTrue(areRoutingTablesSame(routingTableVersions)); } + public void testRemoteRoutingTableEmptyRoutingTableDiff() throws Exception { + prepareClusterAndVerifyRepository(); + + RemoteClusterStateService remoteClusterStateService = internalCluster().getClusterManagerNodeInstance( + RemoteClusterStateService.class + ); + RemoteManifestManager remoteManifestManager = remoteClusterStateService.getRemoteManifestManager(); + Optional latestManifest = remoteManifestManager.getLatestClusterMetadataManifest( + getClusterState().getClusterName().value(), + getClusterState().getMetadata().clusterUUID() + ); + List expectedIndexNames = new ArrayList<>(); + List deletedIndexNames = new ArrayList<>(); + verifyUpdatesInManifestFile(latestManifest, expectedIndexNames, 1, deletedIndexNames, true); + + List routingTableVersions = getRoutingTableFromAllNodes(); + assertTrue(areRoutingTablesSame(routingTableVersions)); + + // Update cluster settings + ClusterUpdateSettingsResponse response = client().admin() + .cluster() + .prepareUpdateSettings() + .setPersistentSettings(Settings.builder().put(CLUSTER_DEFAULT_INDEX_REFRESH_INTERVAL_SETTING.getKey(), 0, TimeUnit.SECONDS)) + .get(); + assertTrue(response.isAcknowledged()); + + latestManifest = remoteManifestManager.getLatestClusterMetadataManifest( + getClusterState().getClusterName().value(), + getClusterState().getMetadata().clusterUUID() + ); + verifyUpdatesInManifestFile(latestManifest, expectedIndexNames, 1, deletedIndexNames, false); + + routingTableVersions = getRoutingTableFromAllNodes(); + assertTrue(areRoutingTablesSame(routingTableVersions)); + } + public void testRemoteRoutingTableIndexNodeRestart() throws Exception { BlobStoreRepository repository = prepareClusterAndVerifyRepository(); @@ -124,10 +174,16 @@ public void testRemoteRoutingTableIndexNodeRestart() throws Exception { RemoteClusterStateService.class ); RemoteManifestManager remoteManifestManager = remoteClusterStateService.getRemoteManifestManager(); - verifyUpdatesInManifestFile(remoteManifestManager); + Optional latestManifest = remoteManifestManager.getLatestClusterMetadataManifest( + getClusterState().getClusterName().value(), + getClusterState().getMetadata().clusterUUID() + ); + List expectedIndexNames = new ArrayList<>(); + List deletedIndexNames = new ArrayList<>(); + verifyUpdatesInManifestFile(latestManifest, expectedIndexNames, 1, deletedIndexNames, true); } - public void testRemoteRoutingTableIndexMasterRestart1() throws Exception { + public void testRemoteRoutingTableIndexMasterRestart() throws Exception { BlobStoreRepository repository = prepareClusterAndVerifyRepository(); List routingTableVersions = getRoutingTableFromAllNodes(); @@ -153,7 +209,13 @@ public void testRemoteRoutingTableIndexMasterRestart1() throws Exception { RemoteClusterStateService.class ); RemoteManifestManager remoteManifestManager = remoteClusterStateService.getRemoteManifestManager(); - verifyUpdatesInManifestFile(remoteManifestManager); + Optional 
latestManifest = remoteManifestManager.getLatestClusterMetadataManifest( + getClusterState().getClusterName().value(), + getClusterState().getMetadata().clusterUUID() + ); + List expectedIndexNames = new ArrayList<>(); + List deletedIndexNames = new ArrayList<>(); + verifyUpdatesInManifestFile(latestManifest, expectedIndexNames, 1, deletedIndexNames, true); } private BlobStoreRepository prepareClusterAndVerifyRepository() throws Exception { @@ -208,18 +270,23 @@ private BlobPath getIndexRoutingPath(BlobPath indexRoutingPath, String indexUUID ); } - private void verifyUpdatesInManifestFile(RemoteManifestManager remoteManifestManager) { - Optional latestManifest = remoteManifestManager.getLatestClusterMetadataManifest( - getClusterState().getClusterName().value(), - getClusterState().getMetadata().clusterUUID() - ); + private void verifyUpdatesInManifestFile( + Optional latestManifest, + List expectedIndexNames, + int expectedIndicesRoutingFilesInManifest, + List expectedDeletedIndex, + boolean isRoutingTableDiffFileExpected + ) { assertTrue(latestManifest.isPresent()); ClusterMetadataManifest manifest = latestManifest.get(); - assertTrue(manifest.getDiffManifest().getIndicesRoutingUpdated().contains(INDEX_NAME)); - assertTrue(manifest.getDiffManifest().getIndicesDeleted().isEmpty()); - assertFalse(manifest.getIndicesRouting().isEmpty()); - assertEquals(1, manifest.getIndicesRouting().size()); - assertTrue(manifest.getIndicesRouting().get(0).getUploadedFilename().contains(indexRoutingPath.buildAsString())); + + assertEquals(expectedIndexNames, manifest.getDiffManifest().getIndicesRoutingUpdated()); + assertEquals(expectedDeletedIndex, manifest.getDiffManifest().getIndicesDeleted()); + assertEquals(expectedIndicesRoutingFilesInManifest, manifest.getIndicesRouting().size()); + for (ClusterMetadataManifest.UploadedIndexMetadata uploadedFilename : manifest.getIndicesRouting()) { + assertTrue(uploadedFilename.getUploadedFilename().contains(indexRoutingPath.buildAsString())); + } + assertEquals(isRoutingTableDiffFileExpected, manifest.getDiffManifest().getIndicesRoutingDiffPath() != null); } private List getRoutingTableFromAllNodes() throws ExecutionException, InterruptedException { diff --git a/server/src/main/java/org/opensearch/cluster/routing/RoutingTableIncrementalDiff.java b/server/src/main/java/org/opensearch/cluster/routing/RoutingTableIncrementalDiff.java new file mode 100644 index 0000000000000..3d75b22a8ed7f --- /dev/null +++ b/server/src/main/java/org/opensearch/cluster/routing/RoutingTableIncrementalDiff.java @@ -0,0 +1,168 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +package org.opensearch.cluster.routing; + +import org.opensearch.cluster.Diff; +import org.opensearch.core.common.io.stream.StreamInput; +import org.opensearch.core.common.io.stream.StreamOutput; + +import java.io.IOException; +import java.util.ArrayList; +import java.util.HashMap; +import java.util.List; +import java.util.Map; + +/** + * Represents a difference between {@link RoutingTable} objects that can be serialized and deserialized. + */ +public class RoutingTableIncrementalDiff implements Diff { + + private final Map> diffs; + + /** + * Constructs a new RoutingTableIncrementalDiff with the given differences. + * + * @param diffs a map containing the differences of {@link IndexRoutingTable}. 
+     */
+    public RoutingTableIncrementalDiff(Map<String, Diff<IndexRoutingTable>> diffs) {
+        this.diffs = diffs;
+    }
+
+    /**
+     * Gets the map of differences of {@link IndexRoutingTable}.
+     *
+     * @return a map containing the differences.
+     */
+    public Map<String, Diff<IndexRoutingTable>> getDiffs() {
+        return diffs;
+    }
+
+    /**
+     * Reads a {@link RoutingTableIncrementalDiff} from the given {@link StreamInput}.
+     *
+     * @param in the input stream to read from.
+     * @return the deserialized RoutingTableIncrementalDiff.
+     * @throws IOException if an I/O exception occurs while reading from the stream.
+     */
+    public static RoutingTableIncrementalDiff readFrom(StreamInput in) throws IOException {
+        int size = in.readVInt();
+        Map<String, Diff<IndexRoutingTable>> diffs = new HashMap<>();
+
+        for (int i = 0; i < size; i++) {
+            String key = in.readString();
+            Diff<IndexRoutingTable> diff = IndexRoutingTableIncrementalDiff.readFrom(in);
+            diffs.put(key, diff);
+        }
+        return new RoutingTableIncrementalDiff(diffs);
+    }
+
+    /**
+     * Applies the differences to the provided {@link RoutingTable}.
+     *
+     * @param part the original RoutingTable to which the differences will be applied.
+     * @return the updated RoutingTable with the applied differences.
+     */
+    @Override
+    public RoutingTable apply(RoutingTable part) {
+        RoutingTable.Builder builder = new RoutingTable.Builder();
+        for (IndexRoutingTable indexRoutingTable : part) {
+            builder.add(indexRoutingTable); // Add existing index routing tables to builder
+        }
+
+        // Apply the diffs
+        for (Map.Entry<String, Diff<IndexRoutingTable>> entry : diffs.entrySet()) {
+            builder.add(entry.getValue().apply(part.index(entry.getKey())));
+        }
+
+        return builder.build();
+    }
+
+    /**
+     * Writes the differences to the given {@link StreamOutput}.
+     *
+     * @param out the output stream to write to.
+     * @throws IOException if an I/O exception occurs while writing to the stream.
+     */
+    @Override
+    public void writeTo(StreamOutput out) throws IOException {
+        out.writeVInt(diffs.size());
+        for (Map.Entry<String, Diff<IndexRoutingTable>> entry : diffs.entrySet()) {
+            out.writeString(entry.getKey());
+            entry.getValue().writeTo(out);
+        }
+    }
+
+    /**
+     * Represents a difference between {@link IndexShardRoutingTable} objects that can be serialized and deserialized.
+     */
+    public static class IndexRoutingTableIncrementalDiff implements Diff<IndexRoutingTable> {
+
+        private final List<IndexShardRoutingTable> indexShardRoutingTables;
+
+        /**
+         * Constructs a new IndexRoutingTableIncrementalDiff with the given shard routing tables.
+         *
+         * @param indexShardRoutingTables a list of IndexShardRoutingTable representing the differences.
+         */
+        public IndexRoutingTableIncrementalDiff(List<IndexShardRoutingTable> indexShardRoutingTables) {
+            this.indexShardRoutingTables = indexShardRoutingTables;
+        }
+
+        /**
+         * Applies the differences to the provided {@link IndexRoutingTable}.
+         *
+         * @param part the original IndexRoutingTable to which the differences will be applied.
+         * @return the updated IndexRoutingTable with the applied differences.
+         */
+        @Override
+        public IndexRoutingTable apply(IndexRoutingTable part) {
+            IndexRoutingTable.Builder builder = new IndexRoutingTable.Builder(part.getIndex());
+            for (IndexShardRoutingTable shardRoutingTable : part) {
+                builder.addIndexShard(shardRoutingTable); // Add existing shards to builder
+            }
+
+            // Apply the diff: update or add the new shard routing tables
+            for (IndexShardRoutingTable diffShard : indexShardRoutingTables) {
+                builder.addIndexShard(diffShard);
+            }
+            return builder.build();
+        }
+
+        /**
+         * Writes the differences to the given {@link StreamOutput}.
+         *
+         * @param out the output stream to write to.
+         * @throws IOException if an I/O exception occurs while writing to the stream.
+ */ + @Override + public void writeTo(StreamOutput out) throws IOException { + out.writeVInt(indexShardRoutingTables.size()); + for (IndexShardRoutingTable shardRoutingTable : indexShardRoutingTables) { + IndexShardRoutingTable.Builder.writeTo(shardRoutingTable, out); + } + } + + /** + * Reads a {@link IndexRoutingTableIncrementalDiff} from the given {@link StreamInput}. + * + * @param in the input stream to read from. + * @return the deserialized IndexShardRoutingTableDiff. + * @throws IOException if an I/O exception occurs while reading from the stream. + */ + public static IndexRoutingTableIncrementalDiff readFrom(StreamInput in) throws IOException { + int size = in.readVInt(); + List indexShardRoutingTables = new ArrayList<>(size); + for (int i = 0; i < size; i++) { + IndexShardRoutingTable shardRoutingTable = IndexShardRoutingTable.Builder.readFrom(in); + indexShardRoutingTables.add(shardRoutingTable); + } + return new IndexRoutingTableIncrementalDiff(indexShardRoutingTables); + } + } +} diff --git a/server/src/main/java/org/opensearch/cluster/routing/remote/InternalRemoteRoutingTableService.java b/server/src/main/java/org/opensearch/cluster/routing/remote/InternalRemoteRoutingTableService.java index d7ebc54598b37..3c578a8c5c01f 100644 --- a/server/src/main/java/org/opensearch/cluster/routing/remote/InternalRemoteRoutingTableService.java +++ b/server/src/main/java/org/opensearch/cluster/routing/remote/InternalRemoteRoutingTableService.java @@ -12,9 +12,11 @@ import org.apache.logging.log4j.Logger; import org.apache.logging.log4j.message.ParameterizedMessage; import org.opensearch.action.LatchedActionListener; +import org.opensearch.cluster.Diff; import org.opensearch.cluster.DiffableUtils; import org.opensearch.cluster.routing.IndexRoutingTable; import org.opensearch.cluster.routing.RoutingTable; +import org.opensearch.cluster.routing.RoutingTableIncrementalDiff; import org.opensearch.common.blobstore.BlobPath; import org.opensearch.common.lifecycle.AbstractLifecycleComponent; import org.opensearch.common.remote.RemoteWritableEntityStore; @@ -25,8 +27,10 @@ import org.opensearch.core.compress.Compressor; import org.opensearch.gateway.remote.ClusterMetadataManifest; import org.opensearch.gateway.remote.RemoteStateTransferException; +import org.opensearch.gateway.remote.model.RemoteClusterStateBlobStore; import org.opensearch.gateway.remote.model.RemoteRoutingTableBlobStore; import org.opensearch.gateway.remote.routingtable.RemoteIndexRoutingTable; +import org.opensearch.gateway.remote.routingtable.RemoteRoutingTableDiff; import org.opensearch.index.translog.transfer.BlobStoreTransferService; import org.opensearch.node.Node; import org.opensearch.node.remotestore.RemoteStoreNodeAttribute; @@ -58,6 +62,7 @@ public class InternalRemoteRoutingTableService extends AbstractLifecycleComponen private final Supplier repositoriesService; private Compressor compressor; private RemoteWritableEntityStore remoteIndexRoutingTableStore; + private RemoteWritableEntityStore remoteRoutingTableDiffStore; private final ClusterSettings clusterSettings; private BlobStoreRepository blobStoreRepository; private final ThreadPool threadPool; @@ -84,9 +89,10 @@ public List getIndicesRouting(RoutingTable routingTable) { /** * Returns diff between the two routing tables, which includes upserts and deletes. 
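
Conceptually, applying such an upserts-plus-deletes diff amounts to previous state minus deletes plus upserts. A minimal self-contained sketch of that apply step over plain maps; the Diff record here is illustrative, not the DiffableUtils API:

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Sketch of the map-diff idea behind the routing-table diff:
    // apply(previous) = previous - deletes + upserts.
    public class MapDiffSketch {
        record Diff(Map<String, String> upserts, List<String> deletes) {
            Map<String, String> apply(Map<String, String> previous) {
                Map<String, String> result = new HashMap<>(previous);
                deletes.forEach(result::remove); // drop routing for deleted indices
                result.putAll(upserts);          // add or replace updated indices
                return result;
            }
        }

        public static void main(String[] args) {
            Map<String, String> before = Map.of("idx-a", "v1", "idx-b", "v1");
            Diff diff = new Diff(Map.of("idx-a", "v2", "idx-c", "v1"), List.of("idx-b"));
            System.out.println(diff.apply(before)); // e.g. {idx-a=v2, idx-c=v1} (order unspecified)
        }
    }
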
+ * * @param before previous routing table - * @param after current routing table - * @return diff of the previous and current routing table + * @param after current routing table + * @return incremental diff of the previous and current routing table */ public DiffableUtils.MapDiff> getIndicesRoutingMapDiff( RoutingTable before, @@ -96,7 +102,7 @@ public DiffableUtils.MapDiff> indexRoutingTableDiff, + LatchedActionListener latchedActionListener + ) { + RoutingTableIncrementalDiff routingTableIncrementalDiff = new RoutingTableIncrementalDiff(indexRoutingTableDiff); + RemoteRoutingTableDiff remoteRoutingTableDiff = new RemoteRoutingTableDiff( + routingTableIncrementalDiff, + clusterUUID, + compressor, + term, + version + ); + + ActionListener completionListener = ActionListener.wrap( + resp -> latchedActionListener.onResponse(remoteRoutingTableDiff.getUploadedMetadata()), + ex -> latchedActionListener.onFailure( + new RemoteStateTransferException("Exception in writing index routing diff to remote store", ex) + ) + ); + + remoteRoutingTableDiffStore.writeAsync(remoteRoutingTableDiff, completionListener); + } + /** * Combines IndicesRoutingMetadata from previous manifest and current uploaded indices, removes deleted indices. * @param previousManifest previous manifest, used to get all existing indices routing paths @@ -171,6 +204,22 @@ public void getAsyncIndexRoutingReadAction( remoteIndexRoutingTableStore.readAsync(remoteIndexRoutingTable, actionListener); } + @Override + public void getAsyncIndexRoutingTableDiffReadAction( + String clusterUUID, + String uploadedFilename, + LatchedActionListener latchedActionListener + ) { + ActionListener actionListener = ActionListener.wrap( + latchedActionListener::onResponse, + latchedActionListener::onFailure + ); + + RemoteRoutingTableDiff remoteRoutingTableDiff = new RemoteRoutingTableDiff(uploadedFilename, clusterUUID, compressor); + + remoteRoutingTableDiffStore.readAsync(remoteRoutingTableDiff, actionListener); + } + @Override public List getUpdatedIndexRoutingTableMetadata( List updatedIndicesRouting, @@ -212,6 +261,14 @@ protected void doStart() { ThreadPool.Names.REMOTE_STATE_READ, clusterSettings ); + + this.remoteRoutingTableDiffStore = new RemoteClusterStateBlobStore<>( + new BlobStoreTransferService(blobStoreRepository.blobStore(), threadPool), + blobStoreRepository, + clusterName, + threadPool, + ThreadPool.Names.REMOTE_STATE_READ + ); } @Override @@ -227,4 +284,14 @@ public void deleteStaleIndexRoutingPaths(List stalePaths) throws IOExcep throw e; } } + + public void deleteStaleIndexRoutingDiffPaths(List stalePaths) throws IOException { + try { + logger.debug(() -> "Deleting stale index routing diff files from remote - " + stalePaths); + blobStoreRepository.blobStore().blobContainer(BlobPath.cleanPath()).deleteBlobsIgnoringIfNotExists(stalePaths); + } catch (IOException e) { + logger.error(() -> new ParameterizedMessage("Failed to delete some stale index routing diff paths from {}", stalePaths), e); + throw e; + } + } } diff --git a/server/src/main/java/org/opensearch/cluster/routing/remote/NoopRemoteRoutingTableService.java b/server/src/main/java/org/opensearch/cluster/routing/remote/NoopRemoteRoutingTableService.java index e6e68e01e761f..1ebf3206212a1 100644 --- a/server/src/main/java/org/opensearch/cluster/routing/remote/NoopRemoteRoutingTableService.java +++ b/server/src/main/java/org/opensearch/cluster/routing/remote/NoopRemoteRoutingTableService.java @@ -9,9 +9,11 @@ package org.opensearch.cluster.routing.remote; import 
org.opensearch.action.LatchedActionListener; +import org.opensearch.cluster.Diff; import org.opensearch.cluster.DiffableUtils; import org.opensearch.cluster.routing.IndexRoutingTable; import org.opensearch.cluster.routing.RoutingTable; +import org.opensearch.cluster.routing.RoutingTableIncrementalDiff; import org.opensearch.common.lifecycle.AbstractLifecycleComponent; import org.opensearch.gateway.remote.ClusterMetadataManifest; @@ -34,7 +36,12 @@ public DiffableUtils.MapDiff> indexRoutingTableDiff, + LatchedActionListener latchedActionListener + ) { + // noop + } + @Override public List getAllUploadedIndicesRouting( ClusterMetadataManifest previousManifest, @@ -67,6 +85,15 @@ public void getAsyncIndexRoutingReadAction( // noop } + @Override + public void getAsyncIndexRoutingTableDiffReadAction( + String clusterUUID, + String uploadedFilename, + LatchedActionListener latchedActionListener + ) { + // noop + } + @Override public List getUpdatedIndexRoutingTableMetadata( List updatedIndicesRouting, @@ -95,4 +122,8 @@ protected void doClose() throws IOException { public void deleteStaleIndexRoutingPaths(List stalePaths) throws IOException { // noop } + + public void deleteStaleIndexRoutingDiffPaths(List stalePaths) throws IOException { + // noop + } } diff --git a/server/src/main/java/org/opensearch/cluster/routing/remote/RemoteRoutingTableService.java b/server/src/main/java/org/opensearch/cluster/routing/remote/RemoteRoutingTableService.java index 0b0b4bb7dbc84..0811a5f3010f4 100644 --- a/server/src/main/java/org/opensearch/cluster/routing/remote/RemoteRoutingTableService.java +++ b/server/src/main/java/org/opensearch/cluster/routing/remote/RemoteRoutingTableService.java @@ -9,15 +9,19 @@ package org.opensearch.cluster.routing.remote; import org.opensearch.action.LatchedActionListener; +import org.opensearch.cluster.Diff; import org.opensearch.cluster.DiffableUtils; import org.opensearch.cluster.routing.IndexRoutingTable; +import org.opensearch.cluster.routing.IndexShardRoutingTable; import org.opensearch.cluster.routing.RoutingTable; +import org.opensearch.cluster.routing.RoutingTableIncrementalDiff; import org.opensearch.common.lifecycle.LifecycleComponent; import org.opensearch.core.common.io.stream.StreamInput; import org.opensearch.core.common.io.stream.StreamOutput; import org.opensearch.gateway.remote.ClusterMetadataManifest; import java.io.IOException; +import java.util.ArrayList; import java.util.List; import java.util.Map; @@ -27,16 +31,36 @@ * @opensearch.internal */ public interface RemoteRoutingTableService extends LifecycleComponent { - public static final DiffableUtils.NonDiffableValueSerializer CUSTOM_ROUTING_TABLE_VALUE_SERIALIZER = - new DiffableUtils.NonDiffableValueSerializer() { + + public static final DiffableUtils.DiffableValueSerializer CUSTOM_ROUTING_TABLE_DIFFABLE_VALUE_SERIALIZER = + new DiffableUtils.DiffableValueSerializer() { + @Override + public IndexRoutingTable read(StreamInput in, String key) throws IOException { + return IndexRoutingTable.readFrom(in); + } + @Override public void write(IndexRoutingTable value, StreamOutput out) throws IOException { value.writeTo(out); } @Override - public IndexRoutingTable read(StreamInput in, String key) throws IOException { - return IndexRoutingTable.readFrom(in); + public Diff readDiff(StreamInput in, String key) throws IOException { + return IndexRoutingTable.readDiffFrom(in); + } + + @Override + public Diff diff(IndexRoutingTable currentState, IndexRoutingTable previousState) { + List diffs = new ArrayList<>(); + for 
(Map.Entry entry : currentState.getShards().entrySet()) { + Integer index = entry.getKey(); + IndexShardRoutingTable currentShardRoutingTable = entry.getValue(); + IndexShardRoutingTable previousShardRoutingTable = previousState.shard(index); + if (previousShardRoutingTable == null || !previousShardRoutingTable.equals(currentShardRoutingTable)) { + diffs.add(currentShardRoutingTable); + } + } + return new RoutingTableIncrementalDiff.IndexRoutingTableIncrementalDiff(diffs); } }; @@ -48,6 +72,12 @@ void getAsyncIndexRoutingReadAction( LatchedActionListener latchedActionListener ); + void getAsyncIndexRoutingTableDiffReadAction( + String clusterUUID, + String uploadedFilename, + LatchedActionListener latchedActionListener + ); + List getUpdatedIndexRoutingTableMetadata( List updatedIndicesRouting, List allIndicesRouting @@ -66,6 +96,14 @@ void getAsyncIndexRoutingWriteAction( LatchedActionListener latchedActionListener ); + void getAsyncIndexRoutingDiffWriteAction( + String clusterUUID, + long term, + long version, + Map> indexRoutingTableDiff, + LatchedActionListener latchedActionListener + ); + List getAllUploadedIndicesRouting( ClusterMetadataManifest previousManifest, List indicesRoutingUploaded, @@ -74,4 +112,6 @@ List getAllUploadedIndicesRouting public void deleteStaleIndexRoutingPaths(List stalePaths) throws IOException; + public void deleteStaleIndexRoutingDiffPaths(List stalePaths) throws IOException; + } diff --git a/server/src/main/java/org/opensearch/gateway/remote/ClusterMetadataManifest.java b/server/src/main/java/org/opensearch/gateway/remote/ClusterMetadataManifest.java index 3a66419b1dc20..71815b6ee324c 100644 --- a/server/src/main/java/org/opensearch/gateway/remote/ClusterMetadataManifest.java +++ b/server/src/main/java/org/opensearch/gateway/remote/ClusterMetadataManifest.java @@ -44,6 +44,7 @@ public class ClusterMetadataManifest implements Writeable, ToXContentFragment { public static final int CODEC_V2 = 2; // In Codec V2, there are separate metadata files rather than a single global metadata file, // also we introduce index routing-metadata, diff and other attributes as part of manifest // required for state publication + public static final int CODEC_V3 = 3; // In Codec V3, we have introduced new diff field in diff-manifest's routing_table_diff private static final ParseField CLUSTER_TERM_FIELD = new ParseField("cluster_term"); private static final ParseField STATE_VERSION_FIELD = new ParseField("state_version"); @@ -109,6 +110,10 @@ private static ClusterMetadataManifest.Builder manifestV2Builder(Object[] fields .clusterStateCustomMetadataMap(clusterStateCustomMetadata(fields)); } + private static ClusterMetadataManifest.Builder manifestV3Builder(Object[] fields) { + return manifestV2Builder(fields); + } + private static long term(Object[] fields) { return (long) fields[0]; } @@ -226,12 +231,18 @@ private static ClusterStateDiffManifest diffManifest(Object[] fields) { fields -> manifestV2Builder(fields).build() ); - private static final ConstructingObjectParser CURRENT_PARSER = PARSER_V2; + private static final ConstructingObjectParser PARSER_V3 = new ConstructingObjectParser<>( + "cluster_metadata_manifest", + fields -> manifestV3Builder(fields).build() + ); + + private static final ConstructingObjectParser CURRENT_PARSER = PARSER_V3; static { declareParser(PARSER_V0, CODEC_V0); declareParser(PARSER_V1, CODEC_V1); declareParser(PARSER_V2, CODEC_V2); + declareParser(PARSER_V3, CODEC_V3); } private static void declareParser(ConstructingObjectParser parser, long 
codec_version) { @@ -309,7 +320,7 @@ private static void declareParser(ConstructingObjectParser ClusterStateDiffManifest.fromXContent(p), + (p, c) -> ClusterStateDiffManifest.fromXContent(p, codec_version), DIFF_MANIFEST ); } diff --git a/server/src/main/java/org/opensearch/gateway/remote/ClusterStateDiffManifest.java b/server/src/main/java/org/opensearch/gateway/remote/ClusterStateDiffManifest.java index aca53c92781e4..ab7fa1fddf4bf 100644 --- a/server/src/main/java/org/opensearch/gateway/remote/ClusterStateDiffManifest.java +++ b/server/src/main/java/org/opensearch/gateway/remote/ClusterStateDiffManifest.java @@ -32,8 +32,8 @@ import static org.opensearch.cluster.DiffableUtils.NonDiffableValueSerializer.getAbstractInstance; import static org.opensearch.cluster.DiffableUtils.getStringKeySerializer; -import static org.opensearch.cluster.routing.remote.RemoteRoutingTableService.CUSTOM_ROUTING_TABLE_VALUE_SERIALIZER; import static org.opensearch.core.xcontent.XContentParserUtils.ensureExpectedToken; +import static org.opensearch.gateway.remote.ClusterMetadataManifest.CODEC_V3; /** * Manifest of diff between two cluster states @@ -53,6 +53,7 @@ public class ClusterStateDiffManifest implements ToXContentFragment, Writeable { private static final String METADATA_CUSTOM_DIFF_FIELD = "metadata_custom_diff"; private static final String UPSERTS_FIELD = "upserts"; private static final String DELETES_FIELD = "deletes"; + private static final String DIFF_FIELD = "diff"; private static final String CLUSTER_BLOCKS_UPDATED_FIELD = "cluster_blocks_diff"; private static final String DISCOVERY_NODES_UPDATED_FIELD = "discovery_nodes_diff"; private static final String ROUTING_TABLE_DIFF = "routing_table_diff"; @@ -72,11 +73,17 @@ public class ClusterStateDiffManifest implements ToXContentFragment, Writeable { private final boolean discoveryNodesUpdated; private final List indicesRoutingUpdated; private final List indicesRoutingDeleted; + private String indicesRoutingDiffPath; private final boolean hashesOfConsistentSettingsUpdated; private final List clusterStateCustomUpdated; private final List clusterStateCustomDeleted; - public ClusterStateDiffManifest(ClusterState state, ClusterState previousState) { + public ClusterStateDiffManifest( + ClusterState state, + ClusterState previousState, + DiffableUtils.MapDiff> routingTableIncrementalDiff, + String indicesRoutingDiffPath + ) { fromStateUUID = previousState.stateUUID(); toStateUUID = state.stateUUID(); coordinationMetadataUpdated = !Metadata.isCoordinationMetadataEqual(state.metadata(), previousState.metadata()); @@ -103,17 +110,13 @@ public ClusterStateDiffManifest(ClusterState state, ClusterState previousState) customMetadataUpdated.addAll(customDiff.getUpserts().keySet()); customMetadataDeleted = customDiff.getDeletes(); - DiffableUtils.MapDiff> routingTableDiff = DiffableUtils.diff( - previousState.getRoutingTable().getIndicesRouting(), - state.getRoutingTable().getIndicesRouting(), - DiffableUtils.getStringKeySerializer(), - CUSTOM_ROUTING_TABLE_VALUE_SERIALIZER - ); - indicesRoutingUpdated = new ArrayList<>(); - routingTableDiff.getUpserts().forEach((k, v) -> indicesRoutingUpdated.add(k)); - - indicesRoutingDeleted = routingTableDiff.getDeletes(); + indicesRoutingDeleted = new ArrayList<>(); + this.indicesRoutingDiffPath = indicesRoutingDiffPath; + if (routingTableIncrementalDiff != null) { + routingTableIncrementalDiff.getUpserts().forEach((k, v) -> indicesRoutingUpdated.add(k)); + 
indicesRoutingDeleted.addAll(routingTableIncrementalDiff.getDeletes());
+        }
         hashesOfConsistentSettingsUpdated = !state.metadata()
             .hashesOfConsistentSettings()
             .equals(previousState.metadata().hashesOfConsistentSettings());
@@ -143,6 +147,7 @@ public ClusterStateDiffManifest(
         boolean discoveryNodesUpdated,
         List<String> indicesRoutingUpdated,
         List<String> indicesRoutingDeleted,
+        String indicesRoutingDiffPath,
         boolean hashesOfConsistentSettingsUpdated,
         List<String> clusterStateCustomUpdated,
         List<String> clusterStateCustomDeleted
@@ -164,6 +169,7 @@ public ClusterStateDiffManifest(
         this.hashesOfConsistentSettingsUpdated = hashesOfConsistentSettingsUpdated;
         this.clusterStateCustomUpdated = Collections.unmodifiableList(clusterStateCustomUpdated);
         this.clusterStateCustomDeleted = Collections.unmodifiableList(clusterStateCustomDeleted);
+        this.indicesRoutingDiffPath = indicesRoutingDiffPath;
     }
 
     public ClusterStateDiffManifest(StreamInput in) throws IOException {
@@ -184,6 +190,7 @@ public ClusterStateDiffManifest(StreamInput in) throws IOException {
         this.hashesOfConsistentSettingsUpdated = in.readBoolean();
         this.clusterStateCustomUpdated = in.readStringList();
         this.clusterStateCustomDeleted = in.readStringList();
+        this.indicesRoutingDiffPath = in.readString();
     }
 
     @Override
@@ -237,6 +244,9 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws
             builder.value(index);
         }
         builder.endArray();
+        if (indicesRoutingDiffPath != null) {
+            builder.field(DIFF_FIELD, indicesRoutingDiffPath);
+        }
         builder.endObject();
         builder.startObject(CLUSTER_STATE_CUSTOM_DIFF_FIELD);
         builder.startArray(UPSERTS_FIELD);
@@ -253,7 +263,7 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws
         return builder;
     }
 
-    public static ClusterStateDiffManifest fromXContent(XContentParser parser) throws IOException {
+    public static ClusterStateDiffManifest fromXContent(XContentParser parser, long codec_version) throws IOException {
         Builder builder = new Builder();
         if (parser.currentToken() == null) { // fresh parser?
move to next token parser.nextToken(); @@ -341,6 +351,11 @@ public static ClusterStateDiffManifest fromXContent(XContentParser parser) throw case DELETES_FIELD: builder.indicesRoutingDeleted(convertListToString(parser.listOrderedMap())); break; + case DIFF_FIELD: + if (codec_version >= CODEC_V3) { + builder.indicesRoutingDiffPath(parser.textOrNull()); + } + break; default: throw new XContentParseException("Unexpected field [" + currentFieldName + "]"); } @@ -456,6 +471,10 @@ public List getIndicesRoutingUpdated() { return indicesRoutingUpdated; } + public String getIndicesRoutingDiffPath() { + return indicesRoutingDiffPath; + } + public List getIndicesRoutingDeleted() { return indicesRoutingDeleted; } @@ -468,6 +487,10 @@ public List getClusterStateCustomDeleted() { return clusterStateCustomDeleted; } + public void setIndicesRoutingDiffPath(String indicesRoutingDiffPath) { + this.indicesRoutingDiffPath = indicesRoutingDiffPath; + } + @Override public boolean equals(Object o) { if (this == o) return true; @@ -489,7 +512,8 @@ public boolean equals(Object o) { && Objects.equals(indicesRoutingUpdated, that.indicesRoutingUpdated) && Objects.equals(indicesRoutingDeleted, that.indicesRoutingDeleted) && Objects.equals(clusterStateCustomUpdated, that.clusterStateCustomUpdated) - && Objects.equals(clusterStateCustomDeleted, that.clusterStateCustomDeleted); + && Objects.equals(clusterStateCustomDeleted, that.clusterStateCustomDeleted) + && Objects.equals(indicesRoutingDiffPath, that.indicesRoutingDiffPath); } @Override @@ -538,6 +562,7 @@ public void writeTo(StreamOutput out) throws IOException { out.writeBoolean(hashesOfConsistentSettingsUpdated); out.writeStringCollection(clusterStateCustomUpdated); out.writeStringCollection(clusterStateCustomDeleted); + out.writeString(indicesRoutingDiffPath); } /** @@ -560,6 +585,7 @@ public static class Builder { private boolean discoveryNodesUpdated; private List indicesRoutingUpdated; private List indicesRoutingDeleted; + private String indicesRoutingDiff; private boolean hashesOfConsistentSettingsUpdated; private List clusterStateCustomUpdated; private List clusterStateCustomDeleted; @@ -650,6 +676,11 @@ public Builder indicesRoutingDeleted(List indicesRoutingDeleted) { return this; } + public Builder indicesRoutingDiffPath(String indicesRoutingDiffPath) { + this.indicesRoutingDiff = indicesRoutingDiffPath; + return this; + } + public Builder clusterStateCustomUpdated(List clusterStateCustomUpdated) { this.clusterStateCustomUpdated = clusterStateCustomUpdated; return this; @@ -676,6 +707,7 @@ public ClusterStateDiffManifest build() { discoveryNodesUpdated, indicesRoutingUpdated, indicesRoutingDeleted, + indicesRoutingDiff, hashesOfConsistentSettingsUpdated, clusterStateCustomUpdated, clusterStateCustomDeleted diff --git a/server/src/main/java/org/opensearch/gateway/remote/RemoteClusterStateCleanupManager.java b/server/src/main/java/org/opensearch/gateway/remote/RemoteClusterStateCleanupManager.java index 99235bc96bfe3..8691187c7fbfa 100644 --- a/server/src/main/java/org/opensearch/gateway/remote/RemoteClusterStateCleanupManager.java +++ b/server/src/main/java/org/opensearch/gateway/remote/RemoteClusterStateCleanupManager.java @@ -179,6 +179,7 @@ void deleteClusterMetadata( Set staleGlobalMetadataPaths = new HashSet<>(); Set staleEphemeralAttributePaths = new HashSet<>(); Set staleIndexRoutingPaths = new HashSet<>(); + Set staleIndexRoutingDiffPaths = new HashSet<>(); activeManifestBlobMetadata.forEach(blobMetadata -> { ClusterMetadataManifest 
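// Sketch of the wire-format rule the StreamInput/StreamOutput hunks above rely
// on: fields must be read back in exactly the order they were written, and a
// value passed to a plain string write may not be null. Since writeFullMetadata
// constructs the diff manifest with a null indicesRoutingDiffPath, an optional
// encoding with a presence flag, along these lines, would be the defensive
// choice. Plain DataOutput/DataInput stand in for StreamOutput/StreamInput.
import java.io.*;

class OptionalStringWireSketch {
    static void writeOptionalString(DataOutput out, String s) throws IOException {
        out.writeBoolean(s != null);   // presence flag first
        if (s != null) {
            out.writeUTF(s);           // then the payload
        }
    }

    static String readOptionalString(DataInput in) throws IOException {
        return in.readBoolean() ? in.readUTF() : null; // same order on read
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bytes);
        writeOptionalString(out, null);                       // full-metadata case: no diff uploaded
        writeOptionalString(out, "routing-table-diff/blob1"); // incremental case: diff path present

        DataInput in = new DataInputStream(new ByteArrayInputStream(bytes.toByteArray()));
        assert readOptionalString(in) == null;
        assert "routing-table-diff/blob1".equals(readOptionalString(in));
    }
}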
clusterMetadataManifest = remoteManifestManager.fetchRemoteClusterMetadataManifest( clusterName, @@ -222,6 +223,10 @@ void deleteClusterMetadata( clusterMetadataManifest.getIndicesRouting() .forEach(uploadedIndicesRouting -> filesToKeep.add(uploadedIndicesRouting.getUploadedFilename())); } + if (clusterMetadataManifest.getDiffManifest() != null + && clusterMetadataManifest.getDiffManifest().getIndicesRoutingDiffPath() != null) { + filesToKeep.add(clusterMetadataManifest.getDiffManifest().getIndicesRoutingDiffPath()); + } }); staleManifestBlobMetadata.forEach(blobMetadata -> { ClusterMetadataManifest clusterMetadataManifest = remoteManifestManager.fetchRemoteClusterMetadataManifest( @@ -264,6 +269,18 @@ void deleteClusterMetadata( } }); } + if (clusterMetadataManifest.getDiffManifest() != null + && clusterMetadataManifest.getDiffManifest().getIndicesRoutingDiffPath() != null) { + if (!filesToKeep.contains(clusterMetadataManifest.getDiffManifest().getIndicesRoutingDiffPath())) { + staleIndexRoutingDiffPaths.add(clusterMetadataManifest.getDiffManifest().getIndicesRoutingDiffPath()); + logger.debug( + () -> new ParameterizedMessage( + "Indices routing diff paths in stale manifest: {}", + clusterMetadataManifest.getDiffManifest().getIndicesRoutingDiffPath() + ) + ); + } + } clusterMetadataManifest.getIndices().forEach(uploadedIndexMetadata -> { String fileName = RemoteClusterStateUtils.getFormattedIndexFileName(uploadedIndexMetadata.getUploadedFilename()); @@ -316,6 +333,15 @@ void deleteClusterMetadata( ); remoteStateStats.indexRoutingFilesCleanupAttemptFailed(); } + try { + remoteRoutingTableService.deleteStaleIndexRoutingDiffPaths(new ArrayList<>(staleIndexRoutingDiffPaths)); + } catch (IOException e) { + logger.error( + () -> new ParameterizedMessage("Error while deleting stale index routing diff files {}", staleIndexRoutingDiffPaths), + e + ); + remoteStateStats.indicesRoutingDiffFileCleanupAttemptFailed(); + } } catch (IllegalStateException e) { logger.error("Error while fetching Remote Cluster Metadata manifests", e); } catch (IOException e) { diff --git a/server/src/main/java/org/opensearch/gateway/remote/RemoteClusterStateService.java b/server/src/main/java/org/opensearch/gateway/remote/RemoteClusterStateService.java index b34641f77f607..674279f2251bd 100644 --- a/server/src/main/java/org/opensearch/gateway/remote/RemoteClusterStateService.java +++ b/server/src/main/java/org/opensearch/gateway/remote/RemoteClusterStateService.java @@ -14,6 +14,7 @@ import org.opensearch.action.LatchedActionListener; import org.opensearch.cluster.ClusterName; import org.opensearch.cluster.ClusterState; +import org.opensearch.cluster.Diff; import org.opensearch.cluster.DiffableUtils; import org.opensearch.cluster.block.ClusterBlocks; import org.opensearch.cluster.coordination.CoordinationMetadata; @@ -26,6 +27,7 @@ import org.opensearch.cluster.node.DiscoveryNodes.Builder; import org.opensearch.cluster.routing.IndexRoutingTable; import org.opensearch.cluster.routing.RoutingTable; +import org.opensearch.cluster.routing.RoutingTableIncrementalDiff; import org.opensearch.cluster.routing.remote.RemoteRoutingTableService; import org.opensearch.cluster.routing.remote.RemoteRoutingTableServiceFactory; import org.opensearch.cluster.service.ClusterService; @@ -56,6 +58,7 @@ import org.opensearch.gateway.remote.model.RemoteReadResult; import org.opensearch.gateway.remote.model.RemoteTemplatesMetadata; import org.opensearch.gateway.remote.model.RemoteTransientSettingsMetadata; +import 
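// A compact sketch of the mark-and-sweep pattern deleteClusterMetadata applies
// to the new diff blobs above: paths referenced by any active manifest are
// first marked as keepers, and a stale manifest's diff path is queued for
// deletion only when no active manifest still points at it. Names here are
// illustrative, not the service's API.
import java.util.*;

class StaleDiffCleanupSketch {
    static Set<String> staleDiffPaths(List<String> activeDiffPaths, List<String> staleManifestDiffPaths) {
        Set<String> filesToKeep = new HashSet<>(activeDiffPaths);   // mark phase
        Set<String> stale = new HashSet<>();
        for (String path : staleManifestDiffPaths) {                // sweep phase
            if (path != null && !filesToKeep.contains(path)) {
                stale.add(path);
            }
        }
        return stale;
    }

    public static void main(String[] args) {
        // "diff-2" is still referenced by an active manifest, so only "diff-1" is stale.
        Set<String> stale = staleDiffPaths(List.of("diff-2"), Arrays.asList("diff-1", "diff-2", null));
        assert stale.equals(Set.of("diff-1"));
    }
}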
org.opensearch.gateway.remote.routingtable.RemoteRoutingTableDiff; import org.opensearch.index.translog.transfer.BlobStoreTransferService; import org.opensearch.node.Node; import org.opensearch.node.remotestore.RemoteStoreNodeAttribute; @@ -234,13 +237,21 @@ public RemoteClusterStateManifestInfo writeFullMetadata(ClusterState clusterStat isPublicationEnabled, isPublicationEnabled ? clusterState.customs() : Collections.emptyMap(), isPublicationEnabled, - remoteRoutingTableService.getIndicesRouting(clusterState.getRoutingTable()) + remoteRoutingTableService.getIndicesRouting(clusterState.getRoutingTable()), + null + ); + + ClusterStateDiffManifest clusterStateDiffManifest = new ClusterStateDiffManifest( + clusterState, + ClusterState.EMPTY_STATE, + null, + null ); final RemoteClusterStateManifestInfo manifestDetails = remoteManifestManager.uploadManifest( clusterState, uploadedMetadataResults, previousClusterUUID, - new ClusterStateDiffManifest(clusterState, ClusterState.EMPTY_STATE), + clusterStateDiffManifest, false ); @@ -330,10 +341,13 @@ public RemoteClusterStateManifestInfo writeIncrementalMetadata( indicesToBeDeletedFromRemote.remove(indexMetadata.getIndex().getName()); } - final DiffableUtils.MapDiff> routingTableDiff = remoteRoutingTableService - .getIndicesRoutingMapDiff(previousClusterState.getRoutingTable(), clusterState.getRoutingTable()); final List indicesRoutingToUpload = new ArrayList<>(); - routingTableDiff.getUpserts().forEach((k, v) -> indicesRoutingToUpload.add(v)); + final DiffableUtils.MapDiff> routingTableIncrementalDiff = + remoteRoutingTableService.getIndicesRoutingMapDiff(previousClusterState.getRoutingTable(), clusterState.getRoutingTable()); + + Map> indexRoutingTableDiffs = routingTableIncrementalDiff.getDiffs(); + routingTableIncrementalDiff.getDiffs().forEach((k, v) -> indicesRoutingToUpload.add(clusterState.getRoutingTable().index(k))); + routingTableIncrementalDiff.getUpserts().forEach((k, v) -> indicesRoutingToUpload.add(v)); UploadedMetadataResults uploadedMetadataResults; // For migration case from codec V0 or V1 to V2, we have added null check on metadata attribute files, @@ -369,7 +383,8 @@ public RemoteClusterStateManifestInfo writeIncrementalMetadata( updateTransientSettingsMetadata, clusterStateCustomsDiff.getUpserts(), updateHashesOfConsistentSettings, - indicesRoutingToUpload + indicesRoutingToUpload, + indexRoutingTableDiffs ); // update the map if the metadata was uploaded @@ -411,14 +426,23 @@ public RemoteClusterStateManifestInfo writeIncrementalMetadata( uploadedMetadataResults.uploadedIndicesRoutingMetadata = remoteRoutingTableService.getAllUploadedIndicesRouting( previousManifest, uploadedMetadataResults.uploadedIndicesRoutingMetadata, - routingTableDiff.getDeletes() + routingTableIncrementalDiff.getDeletes() + ); + + ClusterStateDiffManifest clusterStateDiffManifest = new ClusterStateDiffManifest( + clusterState, + previousClusterState, + routingTableIncrementalDiff, + uploadedMetadataResults.uploadedIndicesRoutingDiffMetadata != null + ? 
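// Sketch of how writeIncrementalMetadata now assembles the routing tables to
// upload: an index is uploaded in full if it changed in place (a diff) or is
// entirely new (an upsert); deletes are handled separately, and the diff map
// itself is additionally uploaded as one consolidated blob. Strings stand in
// for IndexRoutingTable, and the maps mirror MapDiff's accessors.
import java.util.*;

class UploadSelectionSketch {
    public static void main(String[] args) {
        Map<String, String> currentTables = Map.of("idx-a", "table-a'", "idx-b", "table-b");
        Map<String, String> diffs = Map.of("idx-a", "shard-level-diff"); // changed in place
        Map<String, String> upserts = Map.of("idx-b", "table-b");        // newly added index

        List<String> indicesRoutingToUpload = new ArrayList<>();
        // Diffed indices: upload the *current* full table, looked up by index name.
        diffs.forEach((name, diff) -> indicesRoutingToUpload.add(currentTables.get(name)));
        // Upserted indices: the diff already carries the full new table.
        upserts.forEach((name, table) -> indicesRoutingToUpload.add(table));

        assert indicesRoutingToUpload.containsAll(List.of("table-a'", "table-b"));
    }
}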
uploadedMetadataResults.uploadedIndicesRoutingDiffMetadata.getUploadedFilename() + : null ); final RemoteClusterStateManifestInfo manifestDetails = remoteManifestManager.uploadManifest( clusterState, uploadedMetadataResults, previousManifest.getPreviousClusterUUID(), - new ClusterStateDiffManifest(clusterState, previousClusterState), + clusterStateDiffManifest, false ); @@ -488,13 +512,15 @@ UploadedMetadataResults writeMetadataInParallel( boolean uploadTransientSettingMetadata, Map clusterStateCustomToUpload, boolean uploadHashesOfConsistentSettings, - List indicesRoutingToUpload + List indicesRoutingToUpload, + Map> indexRoutingTableDiff ) throws IOException { assert Objects.nonNull(indexMetadataUploadListeners) : "indexMetadataUploadListeners can not be null"; int totalUploadTasks = indexToUpload.size() + indexMetadataUploadListeners.size() + customToUpload.size() + (uploadCoordinationMetadata ? 1 : 0) + (uploadSettingsMetadata ? 1 : 0) + (uploadTemplateMetadata ? 1 : 0) + (uploadDiscoveryNodes ? 1 : 0) + (uploadClusterBlock ? 1 : 0) + (uploadTransientSettingMetadata ? 1 : 0) - + clusterStateCustomToUpload.size() + (uploadHashesOfConsistentSettings ? 1 : 0) + indicesRoutingToUpload.size(); + + clusterStateCustomToUpload.size() + (uploadHashesOfConsistentSettings ? 1 : 0) + indicesRoutingToUpload.size() + + (indexRoutingTableDiff != null && !indexRoutingTableDiff.isEmpty() ? 1 : 0); CountDownLatch latch = new CountDownLatch(totalUploadTasks); List uploadTasks = Collections.synchronizedList(new ArrayList<>(totalUploadTasks)); Map results = new ConcurrentHashMap<>(totalUploadTasks); @@ -664,6 +690,16 @@ UploadedMetadataResults writeMetadataInParallel( listener ); }); + if (indexRoutingTableDiff != null && !indexRoutingTableDiff.isEmpty()) { + uploadTasks.add(RemoteRoutingTableDiff.ROUTING_TABLE_DIFF_FILE); + remoteRoutingTableService.getAsyncIndexRoutingDiffWriteAction( + clusterState.metadata().clusterUUID(), + clusterState.term(), + clusterState.version(), + indexRoutingTableDiff, + listener + ); + } invokeIndexMetadataUploadListeners(indexToUpload, prevIndexMetadataByName, latch, exceptionList); try { @@ -710,6 +746,8 @@ UploadedMetadataResults writeMetadataInParallel( if (uploadedMetadata.getClass().equals(UploadedIndexMetadata.class) && uploadedMetadata.getComponent().contains(INDEX_ROUTING_METADATA_PREFIX)) { response.uploadedIndicesRoutingMetadata.add((UploadedIndexMetadata) uploadedMetadata); + } else if (RemoteRoutingTableDiff.ROUTING_TABLE_DIFF_FILE.equals(name)) { + response.uploadedIndicesRoutingDiffMetadata = (UploadedMetadataAttribute) uploadedMetadata; } else if (name.startsWith(CUSTOM_METADATA)) { // component name for custom metadata will look like custom-- String custom = name.split(DELIMITER)[0].split(CUSTOM_DELIMITER)[1]; @@ -979,16 +1017,18 @@ ClusterState readClusterStateInParallel( List indicesRoutingToRead, boolean readHashesOfConsistentSettings, Map clusterStateCustomToRead, + boolean readIndexRoutingTableDiff, boolean includeEphemeral ) throws IOException { int totalReadTasks = indicesToRead.size() + customToRead.size() + (readCoordinationMetadata ? 1 : 0) + (readSettingsMetadata ? 1 : 0) + (readTemplatesMetadata ? 1 : 0) + (readDiscoveryNodes ? 1 : 0) + (readClusterBlocks ? 1 : 0) + (readTransientSettingsMetadata ? 1 : 0) + (readHashesOfConsistentSettings ? 1 : 0) + clusterStateCustomToRead.size() - + indicesRoutingToRead.size(); + + indicesRoutingToRead.size() + (readIndexRoutingTableDiff ? 
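// Sketch of the latch accounting used by writeMetadataInParallel above: every
// async task, including the single consolidated routing-table-diff upload that
// is scheduled only when the diff map is non-empty, must be counted before the
// latch is created, or await() will either hang or return before all uploads
// finish. Threads stand in for the async upload callbacks.
import java.util.Map;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

class LatchAccountingSketch {
    static boolean runUploads(int fixedTasks, Map<String, String> indexRoutingTableDiff) throws InterruptedException {
        int totalUploadTasks = fixedTasks
            + (indexRoutingTableDiff != null && !indexRoutingTableDiff.isEmpty() ? 1 : 0);
        CountDownLatch latch = new CountDownLatch(totalUploadTasks);
        for (int i = 0; i < totalUploadTasks; i++) {
            new Thread(latch::countDown).start(); // stand-in for an async upload completion
        }
        return latch.await(1, TimeUnit.SECONDS);  // bounded wait, as the service does
    }

    public static void main(String[] args) throws InterruptedException {
        assert runUploads(3, Map.of("idx", "diff")); // 4 tasks counted, 4 countDowns
        assert runUploads(3, Map.of());              // diff map empty: no extra task
    }
}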
1 : 0); CountDownLatch latch = new CountDownLatch(totalReadTasks); List readResults = Collections.synchronizedList(new ArrayList<>()); List readIndexRoutingTableResults = Collections.synchronizedList(new ArrayList<>()); + AtomicReference readIndexRoutingTableDiffResults = new AtomicReference<>(); List exceptionList = Collections.synchronizedList(new ArrayList<>(totalReadTasks)); LatchedActionListener listener = new LatchedActionListener<>(ActionListener.wrap(response -> { @@ -1031,6 +1071,25 @@ ClusterState readClusterStateInParallel( ); } + LatchedActionListener routingTableDiffLatchedActionListener = new LatchedActionListener<>( + ActionListener.wrap(response -> { + logger.debug("Successfully read routing table diff component from remote"); + readIndexRoutingTableDiffResults.set(response); + }, ex -> { + logger.error("Failed to read routing table diff from remote", ex); + exceptionList.add(ex); + }), + latch + ); + + if (readIndexRoutingTableDiff) { + remoteRoutingTableService.getAsyncIndexRoutingTableDiffReadAction( + clusterUUID, + manifest.getDiffManifest().getIndicesRoutingDiffPath(), + routingTableDiffLatchedActionListener + ); + } + for (Map.Entry entry : customToRead.entrySet()) { remoteGlobalMetadataManager.readAsync( entry.getValue().getAttributeName(), @@ -1233,6 +1292,14 @@ ClusterState readClusterStateInParallel( readIndexRoutingTableResults.forEach( indexRoutingTable -> indicesRouting.put(indexRoutingTable.getIndex().getName(), indexRoutingTable) ); + RoutingTableIncrementalDiff routingTableDiff = readIndexRoutingTableDiffResults.get(); + if (routingTableDiff != null) { + routingTableDiff.getDiffs().forEach((key, diff) -> { + IndexRoutingTable previousIndexRoutingTable = indicesRouting.get(key); + IndexRoutingTable updatedTable = diff.apply(previousIndexRoutingTable); + indicesRouting.put(key, updatedTable); + }); + } clusterStateBuilder.routingTable(new RoutingTable(manifest.getRoutingTableVersion(), indicesRouting)); return clusterStateBuilder.build(); @@ -1261,6 +1328,7 @@ public ClusterState getClusterStateForManifest( includeEphemeral ? manifest.getIndicesRouting() : emptyList(), includeEphemeral && manifest.getHashesOfConsistentSettings() != null, includeEphemeral ? 
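// Sketch of the read path above: the full per-index routing tables are read
// first, then the consolidated diff (when the manifest names one) is applied
// on top, replacing each changed index's table with diff.apply(previousTable).
// A toy Diff<T> models org.opensearch.cluster.Diff.
import java.util.*;

class ApplyRoutingDiffSketch {
    interface Diff<T> { T apply(T previous); }

    public static void main(String[] args) {
        // Tables reconstructed from full uploads (index name -> routing table).
        Map<String, String> indicesRouting = new HashMap<>(Map.of("idx-a", "table-a@v1", "idx-b", "table-b@v1"));

        // Incremental diff read from the remote store: only idx-a changed.
        Map<String, Diff<String>> routingTableDiff = Map.of("idx-a", prev -> prev.replace("@v1", "@v2"));

        routingTableDiff.forEach((key, diff) -> {
            String previousIndexRoutingTable = indicesRouting.get(key);
            indicesRouting.put(key, diff.apply(previousIndexRoutingTable));
        });

        assert indicesRouting.get("idx-a").equals("table-a@v2"); // diff applied
        assert indicesRouting.get("idx-b").equals("table-b@v1"); // untouched
    }
}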
manifest.getClusterStateCustomMap() : emptyMap(),
+                false,
                 includeEphemeral
             );
         } else {
@@ -1281,6 +1349,7 @@ public ClusterState getClusterStateForManifest(
                 emptyList(),
                 false,
                 emptyMap(),
+                false,
                 false
             );
             Metadata.Builder mb = Metadata.builder(remoteGlobalMetadataManager.getGlobalMetadata(manifest.getClusterUUID(), manifest));
@@ -1337,6 +1406,9 @@ public ClusterState getClusterStateUsingDiff(ClusterMetadataManifest manifest, C
             updatedIndexRouting,
             diff.isHashesOfConsistentSettingsUpdated(),
             updatedClusterStateCustom,
+            manifest.getDiffManifest() != null
+                && manifest.getDiffManifest().getIndicesRoutingDiffPath() != null
+                && !manifest.getDiffManifest().getIndicesRoutingDiffPath().isEmpty(),
             true
         );
         ClusterState.Builder clusterStateBuilder = ClusterState.builder(updatedClusterState);
diff --git a/server/src/main/java/org/opensearch/gateway/remote/RemoteClusterStateUtils.java b/server/src/main/java/org/opensearch/gateway/remote/RemoteClusterStateUtils.java
index f2b93c3784407..74cb838286961 100644
--- a/server/src/main/java/org/opensearch/gateway/remote/RemoteClusterStateUtils.java
+++ b/server/src/main/java/org/opensearch/gateway/remote/RemoteClusterStateUtils.java
@@ -88,6 +88,7 @@ public static class UploadedMetadataResults {
         ClusterMetadataManifest.UploadedMetadataAttribute uploadedClusterBlocks;
         List<ClusterMetadataManifest.UploadedIndexMetadata> uploadedIndicesRoutingMetadata;
         ClusterMetadataManifest.UploadedMetadataAttribute uploadedHashesOfConsistentSettings;
+        ClusterMetadataManifest.UploadedMetadataAttribute uploadedIndicesRoutingDiffMetadata;

         public UploadedMetadataResults(
             List<ClusterMetadataManifest.UploadedIndexMetadata> uploadedIndexMetadata,
diff --git a/server/src/main/java/org/opensearch/gateway/remote/RemotePersistenceStats.java b/server/src/main/java/org/opensearch/gateway/remote/RemotePersistenceStats.java
index 36d107a99d258..efd73e11e46b5 100644
--- a/server/src/main/java/org/opensearch/gateway/remote/RemotePersistenceStats.java
+++ b/server/src/main/java/org/opensearch/gateway/remote/RemotePersistenceStats.java
@@ -20,15 +20,18 @@ public class RemotePersistenceStats extends PersistedStateStats {

     static final String CLEANUP_ATTEMPT_FAILED_COUNT = "cleanup_attempt_failed_count";
     static final String INDEX_ROUTING_FILES_CLEANUP_ATTEMPT_FAILED_COUNT = "index_routing_files_cleanup_attempt_failed_count";
+    static final String INDICES_ROUTING_DIFF_FILES_CLEANUP_ATTEMPT_FAILED_COUNT = "indices_routing_diff_files_cleanup_attempt_failed_count";
     static final String REMOTE_UPLOAD = "remote_upload";
     private AtomicLong cleanupAttemptFailedCount = new AtomicLong(0);

     private AtomicLong indexRoutingFilesCleanupAttemptFailedCount = new AtomicLong(0);
+    private AtomicLong indicesRoutingDiffFilesCleanupAttemptFailedCount = new AtomicLong(0);

     public RemotePersistenceStats() {
         super(REMOTE_UPLOAD);
         addToExtendedFields(CLEANUP_ATTEMPT_FAILED_COUNT, cleanupAttemptFailedCount);
         addToExtendedFields(INDEX_ROUTING_FILES_CLEANUP_ATTEMPT_FAILED_COUNT, indexRoutingFilesCleanupAttemptFailedCount);
+        addToExtendedFields(INDICES_ROUTING_DIFF_FILES_CLEANUP_ATTEMPT_FAILED_COUNT, indicesRoutingDiffFilesCleanupAttemptFailedCount);
     }

     public void cleanUpAttemptFailed() {
@@ -46,4 +49,12 @@ public void indexRoutingFilesCleanupAttemptFailed() {
         return indexRoutingFilesCleanupAttemptFailedCount.get();
     }
+
+    public void indicesRoutingDiffFileCleanupAttemptFailed() {
+        indicesRoutingDiffFilesCleanupAttemptFailedCount.incrementAndGet();
+    }
+
+    public long getIndicesRoutingDiffFileCleanupAttemptFailedCount() {
+        return
indicesRoutingDiffFilesCleanupAttemptFailedCount.get();
+    }
 }
diff --git a/server/src/main/java/org/opensearch/gateway/remote/model/RemoteClusterMetadataManifest.java b/server/src/main/java/org/opensearch/gateway/remote/model/RemoteClusterMetadataManifest.java
index 1dc56712d4ab5..acaae3173315a 100644
--- a/server/src/main/java/org/opensearch/gateway/remote/model/RemoteClusterMetadataManifest.java
+++ b/server/src/main/java/org/opensearch/gateway/remote/model/RemoteClusterMetadataManifest.java
@@ -35,7 +35,7 @@ public class RemoteClusterMetadataManifest extends AbstractRemoteWritableBlobEnt
     public static final int SPLITTED_MANIFEST_FILE_LENGTH = 6;

     public static final String METADATA_MANIFEST_NAME_FORMAT = "%s";
-    public static final int MANIFEST_CURRENT_CODEC_VERSION = ClusterMetadataManifest.CODEC_V2;
+    public static final int MANIFEST_CURRENT_CODEC_VERSION = ClusterMetadataManifest.CODEC_V3;
     public static final String COMMITTED = "C";
     public static final String PUBLISHED = "P";
@@ -50,6 +50,9 @@ public class RemoteClusterMetadataManifest extends AbstractRemoteWritableBlobEnt
     public static final ChecksumBlobStoreFormat<ClusterMetadataManifest> CLUSTER_METADATA_MANIFEST_FORMAT_V1 =
         new ChecksumBlobStoreFormat<>("cluster-metadata-manifest", METADATA_MANIFEST_NAME_FORMAT, ClusterMetadataManifest::fromXContentV1);

+    public static final ChecksumBlobStoreFormat<ClusterMetadataManifest> CLUSTER_METADATA_MANIFEST_FORMAT_V2 =
+        new ChecksumBlobStoreFormat<>("cluster-metadata-manifest", METADATA_MANIFEST_NAME_FORMAT, ClusterMetadataManifest::fromXContentV2);
+
     /**
      * Manifest format compatible with codec v2, where we introduced codec versions/global metadata.
      */
@@ -149,6 +152,8 @@ private ChecksumBlobStoreFormat getClusterMetadataManif
         long codecVersion = getManifestCodecVersion();
         if (codecVersion == MANIFEST_CURRENT_CODEC_VERSION) {
             return CLUSTER_METADATA_MANIFEST_FORMAT;
+        } else if (codecVersion == ClusterMetadataManifest.CODEC_V2) {
+            return CLUSTER_METADATA_MANIFEST_FORMAT_V2;
         } else if (codecVersion == ClusterMetadataManifest.CODEC_V1) {
             return CLUSTER_METADATA_MANIFEST_FORMAT_V1;
         } else if (codecVersion == ClusterMetadataManifest.CODEC_V0) {
diff --git a/server/src/main/java/org/opensearch/gateway/remote/routingtable/RemoteRoutingTableDiff.java b/server/src/main/java/org/opensearch/gateway/remote/routingtable/RemoteRoutingTableDiff.java
new file mode 100644
index 0000000000000..e876d939490d0
--- /dev/null
+++ b/server/src/main/java/org/opensearch/gateway/remote/routingtable/RemoteRoutingTableDiff.java
@@ -0,0 +1,150 @@
+/*
+ * SPDX-License-Identifier: Apache-2.0
+ *
+ * The OpenSearch Contributors require contributions made to
+ * this file be licensed under the Apache-2.0 license or a
+ * compatible open source license.
+ */ + +package org.opensearch.gateway.remote.routingtable; + +import org.opensearch.cluster.Diff; +import org.opensearch.cluster.routing.IndexRoutingTable; +import org.opensearch.cluster.routing.RoutingTableIncrementalDiff; +import org.opensearch.common.io.Streams; +import org.opensearch.common.remote.AbstractRemoteWritableBlobEntity; +import org.opensearch.common.remote.BlobPathParameters; +import org.opensearch.core.compress.Compressor; +import org.opensearch.gateway.remote.ClusterMetadataManifest; +import org.opensearch.index.remote.RemoteStoreUtils; +import org.opensearch.repositories.blobstore.ChecksumWritableBlobStoreFormat; + +import java.io.IOException; +import java.io.InputStream; +import java.util.List; +import java.util.Map; + +import static org.opensearch.gateway.remote.RemoteClusterStateUtils.DELIMITER; + +/** + * Represents a incremental difference between {@link org.opensearch.cluster.routing.RoutingTable} objects that can be serialized and deserialized. + * This class is responsible for writing and reading the differences between RoutingTables to and from an input/output stream. + */ +public class RemoteRoutingTableDiff extends AbstractRemoteWritableBlobEntity { + private final RoutingTableIncrementalDiff routingTableIncrementalDiff; + + private long term; + private long version; + + public static final String ROUTING_TABLE_DIFF = "routing-table-diff"; + + public static final String ROUTING_TABLE_DIFF_METADATA_PREFIX = "routingTableDiff--"; + + public static final String ROUTING_TABLE_DIFF_FILE = "routing_table_diff"; + private static final String codec = "RemoteRoutingTableDiff"; + public static final String ROUTING_TABLE_DIFF_PATH_TOKEN = "routing-table-diff"; + + public static final int VERSION = 1; + + public static final ChecksumWritableBlobStoreFormat REMOTE_ROUTING_TABLE_DIFF_FORMAT = + new ChecksumWritableBlobStoreFormat<>(codec, RoutingTableIncrementalDiff::readFrom); + + /** + * Constructs a new RemoteRoutingTableDiff with the given differences. + * + * @param routingTableIncrementalDiff a RoutingTableIncrementalDiff object containing the differences of {@link IndexRoutingTable}. + * @param clusterUUID the cluster UUID. + * @param compressor the compressor to be used. + * @param term the term of the routing table. + * @param version the version of the routing table. + */ + public RemoteRoutingTableDiff( + RoutingTableIncrementalDiff routingTableIncrementalDiff, + String clusterUUID, + Compressor compressor, + long term, + long version + ) { + super(clusterUUID, compressor); + this.routingTableIncrementalDiff = routingTableIncrementalDiff; + this.term = term; + this.version = version; + } + + /** + * Constructs a new RemoteRoutingTableDiff with the given differences. + * + * @param routingTableIncrementalDiff a RoutingTableIncrementalDiff object containing the differences of {@link IndexRoutingTable}. + * @param clusterUUID the cluster UUID. + * @param compressor the compressor to be used. + */ + public RemoteRoutingTableDiff(RoutingTableIncrementalDiff routingTableIncrementalDiff, String clusterUUID, Compressor compressor) { + super(clusterUUID, compressor); + this.routingTableIncrementalDiff = routingTableIncrementalDiff; + } + + /** + * Constructs a new RemoteIndexRoutingTableDiff with the given blob name, cluster UUID, and compressor. + * + * @param blobName the name of the blob. + * @param clusterUUID the cluster UUID. + * @param compressor the compressor to be used. 
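// Sketch of the file-name scheme generateBlobFileName (below) uses: term,
// version and timestamp are "inverted" (Long.MAX_VALUE minus the value,
// zero-padded to a fixed width) so that a lexicographic blob listing returns
// the newest diff first. The 19-digit padding and the "__" delimiter are
// assumptions mirroring RemoteStoreUtils and RemoteClusterStateUtils.DELIMITER.
class InvertedLongNameSketch {
    static String invertLong(long value) {
        if (value < 0) throw new IllegalArgumentException("negative value");
        return String.format("%019d", Long.MAX_VALUE - value); // fixed width keeps ordering
    }

    public static void main(String[] args) {
        String older = String.join("__", "routingTableDiff", invertLong(1), invertLong(10), invertLong(1000));
        String newer = String.join("__", "routingTableDiff", invertLong(1), invertLong(11), invertLong(2000));
        // Higher version/timestamp sorts *earlier*, giving newest-first listings.
        assert newer.compareTo(older) < 0;
    }
}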
+ */ + public RemoteRoutingTableDiff(String blobName, String clusterUUID, Compressor compressor) { + super(clusterUUID, compressor); + this.routingTableIncrementalDiff = null; + this.blobName = blobName; + } + + /** + * Gets the map of differences of {@link IndexRoutingTable}. + * + * @return a map containing the differences. + */ + public Map> getDiffs() { + assert routingTableIncrementalDiff != null; + return routingTableIncrementalDiff.getDiffs(); + } + + @Override + public BlobPathParameters getBlobPathParameters() { + return new BlobPathParameters(List.of(ROUTING_TABLE_DIFF_PATH_TOKEN), ROUTING_TABLE_DIFF_METADATA_PREFIX); + } + + @Override + public String getType() { + return ROUTING_TABLE_DIFF; + } + + @Override + public String generateBlobFileName() { + if (blobFileName == null) { + blobFileName = String.join( + DELIMITER, + getBlobPathParameters().getFilePrefix(), + RemoteStoreUtils.invertLong(term), + RemoteStoreUtils.invertLong(version), + RemoteStoreUtils.invertLong(System.currentTimeMillis()) + ); + } + return blobFileName; + } + + @Override + public ClusterMetadataManifest.UploadedMetadata getUploadedMetadata() { + assert blobName != null; + return new ClusterMetadataManifest.UploadedMetadataAttribute(ROUTING_TABLE_DIFF_FILE, blobName); + } + + @Override + public InputStream serialize() throws IOException { + assert routingTableIncrementalDiff != null; + return REMOTE_ROUTING_TABLE_DIFF_FORMAT.serialize(routingTableIncrementalDiff, generateBlobFileName(), getCompressor()) + .streamInput(); + } + + @Override + public RoutingTableIncrementalDiff deserialize(InputStream in) throws IOException { + return REMOTE_ROUTING_TABLE_DIFF_FORMAT.deserialize(blobName, Streams.readFully(in)); + } +} diff --git a/server/src/test/java/org/opensearch/cluster/routing/remote/RemoteRoutingTableServiceTests.java b/server/src/test/java/org/opensearch/cluster/routing/remote/RemoteRoutingTableServiceTests.java index f66e096e9b548..74254f1a1987f 100644 --- a/server/src/test/java/org/opensearch/cluster/routing/remote/RemoteRoutingTableServiceTests.java +++ b/server/src/test/java/org/opensearch/cluster/routing/remote/RemoteRoutingTableServiceTests.java @@ -12,12 +12,15 @@ import org.opensearch.action.LatchedActionListener; import org.opensearch.cluster.ClusterName; import org.opensearch.cluster.ClusterState; +import org.opensearch.cluster.Diff; import org.opensearch.cluster.DiffableUtils; import org.opensearch.cluster.coordination.CoordinationMetadata; import org.opensearch.cluster.metadata.IndexMetadata; import org.opensearch.cluster.metadata.Metadata; import org.opensearch.cluster.routing.IndexRoutingTable; +import org.opensearch.cluster.routing.IndexShardRoutingTable; import org.opensearch.cluster.routing.RoutingTable; +import org.opensearch.cluster.routing.RoutingTableIncrementalDiff; import org.opensearch.cluster.service.ClusterService; import org.opensearch.common.blobstore.BlobContainer; import org.opensearch.common.blobstore.BlobPath; @@ -50,8 +53,11 @@ import java.io.IOException; import java.io.InputStream; +import java.nio.charset.StandardCharsets; import java.util.ArrayList; import java.util.Arrays; +import java.util.Base64; +import java.util.HashMap; import java.util.List; import java.util.Locale; import java.util.Map; @@ -69,6 +75,10 @@ import static org.opensearch.gateway.remote.routingtable.RemoteIndexRoutingTable.INDEX_ROUTING_METADATA_PREFIX; import static org.opensearch.gateway.remote.routingtable.RemoteIndexRoutingTable.INDEX_ROUTING_TABLE; import static 
org.opensearch.gateway.remote.routingtable.RemoteIndexRoutingTable.INDEX_ROUTING_TABLE_FORMAT; +import static org.opensearch.gateway.remote.routingtable.RemoteRoutingTableDiff.REMOTE_ROUTING_TABLE_DIFF_FORMAT; +import static org.opensearch.gateway.remote.routingtable.RemoteRoutingTableDiff.ROUTING_TABLE_DIFF_FILE; +import static org.opensearch.gateway.remote.routingtable.RemoteRoutingTableDiff.ROUTING_TABLE_DIFF_METADATA_PREFIX; +import static org.opensearch.gateway.remote.routingtable.RemoteRoutingTableDiff.ROUTING_TABLE_DIFF_PATH_TOKEN; import static org.opensearch.index.remote.RemoteStoreEnums.PathHashAlgorithm.FNV_1A_BASE64; import static org.opensearch.index.remote.RemoteStoreEnums.PathType.HASHED_PREFIX; import static org.opensearch.node.remotestore.RemoteStoreNodeAttribute.REMOTE_STORE_ROUTING_TABLE_REPOSITORY_NAME_ATTRIBUTE_KEY; @@ -281,10 +291,14 @@ public void testGetIndicesRoutingMapDiffShardChanged() { DiffableUtils.MapDiff> diff = remoteRoutingTableService .getIndicesRoutingMapDiff(routingTable, routingTable2); - assertEquals(1, diff.getUpserts().size()); - assertNotNull(diff.getUpserts().get(indexName)); - assertEquals(noOfShards + 1, diff.getUpserts().get(indexName).getShards().size()); - assertEquals(noOfReplicas + 1, diff.getUpserts().get(indexName).getShards().get(0).getSize()); + assertEquals(0, diff.getUpserts().size()); + assertEquals(1, diff.getDiffs().size()); + assertNotNull(diff.getDiffs().get(indexName)); + assertEquals(noOfShards + 1, diff.getDiffs().get(indexName).apply(routingTable.indicesRouting().get(indexName)).shards().size()); + assertEquals( + noOfReplicas + 1, + diff.getDiffs().get(indexName).apply(routingTable.indicesRouting().get(indexName)).getShards().get(0).getSize() + ); assertEquals(0, diff.getDeletes().size()); final IndexMetadata indexMetadata3 = new IndexMetadata.Builder(indexName).settings( @@ -296,11 +310,14 @@ public void testGetIndicesRoutingMapDiffShardChanged() { RoutingTable routingTable3 = RoutingTable.builder().addAsNew(indexMetadata3).build(); diff = remoteRoutingTableService.getIndicesRoutingMapDiff(routingTable2, routingTable3); - assertEquals(1, diff.getUpserts().size()); - assertNotNull(diff.getUpserts().get(indexName)); - assertEquals(noOfShards + 1, diff.getUpserts().get(indexName).getShards().size()); - assertEquals(noOfReplicas + 2, diff.getUpserts().get(indexName).getShards().get(0).getSize()); - + assertEquals(0, diff.getUpserts().size()); + assertEquals(1, diff.getDiffs().size()); + assertNotNull(diff.getDiffs().get(indexName)); + assertEquals(noOfShards + 1, diff.getDiffs().get(indexName).apply(routingTable.indicesRouting().get(indexName)).shards().size()); + assertEquals( + noOfReplicas + 2, + diff.getDiffs().get(indexName).apply(routingTable.indicesRouting().get(indexName)).getShards().get(0).getSize() + ); assertEquals(0, diff.getDeletes().size()); } @@ -320,10 +337,10 @@ public void testGetIndicesRoutingMapDiffShardDetailChanged() { DiffableUtils.MapDiff> diff = remoteRoutingTableService .getIndicesRoutingMapDiff(routingTable, routingTable2); - assertEquals(1, diff.getUpserts().size()); - assertNotNull(diff.getUpserts().get(indexName)); - assertEquals(noOfShards, diff.getUpserts().get(indexName).getShards().size()); - assertEquals(noOfReplicas + 1, diff.getUpserts().get(indexName).getShards().get(0).getSize()); + assertEquals(1, diff.getDiffs().size()); + assertNotNull(diff.getDiffs().get(indexName)); + assertEquals(noOfShards, 
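// Sketch of the semantic change the updated tests above assert: with an
// incremental value serializer, a key present in both maps but with changed
// content yields an entry in getDiffs(), while getUpserts() is reserved for
// brand-new keys and getDeletes() for removed ones. Toy types throughout.
import java.util.*;

class MapDiffSemanticsSketch {
    record MapDiff(Map<String, String> diffs, Map<String, String> upserts, List<String> deletes) {}

    static MapDiff diff(Map<String, String> before, Map<String, String> after) {
        Map<String, String> diffs = new HashMap<>(), upserts = new HashMap<>();
        List<String> deletes = new ArrayList<>();
        after.forEach((k, v) -> {
            if (!before.containsKey(k)) upserts.put(k, v);               // new key
            else if (!Objects.equals(before.get(k), v)) diffs.put(k, v); // changed in place
        });
        before.keySet().forEach(k -> { if (!after.containsKey(k)) deletes.add(k); });
        return new MapDiff(diffs, upserts, deletes);
    }

    public static void main(String[] args) {
        MapDiff d = diff(Map.of("idx", "5 shards"), Map.of("idx", "6 shards"));
        assert d.diffs().containsKey("idx"); // a shard-count change is a diff...
        assert d.upserts().isEmpty();        // ...no longer an upsert
        assert d.deletes().isEmpty();
    }
}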
diff.getDiffs().get(indexName).apply(routingTable.indicesRouting().get(indexName)).shards().size()); + assertEquals(0, diff.getUpserts().size()); assertEquals(0, diff.getDeletes().size()); } @@ -552,6 +569,44 @@ public void testGetAsyncIndexRoutingReadAction() throws Exception { assertEquals(clusterState.getRoutingTable().getIndicesRouting().get(indexName), indexRoutingTable); } + public void testGetAsyncIndexRoutingTableDiffReadAction() throws Exception { + String indexName = randomAlphaOfLength(randomIntBetween(1, 50)); + ClusterState currentState = createClusterState(indexName); + + // Get the IndexRoutingTable from the current state + IndexRoutingTable indexRoutingTable = currentState.routingTable().index(indexName); + Map shardRoutingTables = indexRoutingTable.getShards(); + + RoutingTableIncrementalDiff.IndexRoutingTableIncrementalDiff indexRoutingTableDiff = + new RoutingTableIncrementalDiff.IndexRoutingTableIncrementalDiff(new ArrayList<>(shardRoutingTables.values())); + + // Create the map for RoutingTableIncrementalDiff + Map> diffs = new HashMap<>(); + diffs.put(indexName, indexRoutingTableDiff); + + RoutingTableIncrementalDiff diff = new RoutingTableIncrementalDiff(diffs); + + String uploadedFileName = String.format(Locale.ROOT, "routing-table-diff/" + indexName); + when(blobContainer.readBlob(indexName)).thenReturn( + REMOTE_ROUTING_TABLE_DIFF_FORMAT.serialize(diff, uploadedFileName, compressor).streamInput() + ); + + TestCapturingListener listener = new TestCapturingListener<>(); + CountDownLatch latch = new CountDownLatch(1); + + remoteRoutingTableService.getAsyncIndexRoutingTableDiffReadAction( + "cluster-uuid", + uploadedFileName, + new LatchedActionListener<>(listener, latch) + ); + latch.await(); + + assertNull(listener.getFailure()); + assertNotNull(listener.getResult()); + RoutingTableIncrementalDiff resultDiff = listener.getResult(); + assertEquals(diff.getDiffs().size(), resultDiff.getDiffs().size()); + } + public void testGetAsyncIndexRoutingWriteAction() throws Exception { String indexName = randomAlphaOfLength(randomIntBetween(1, 50)); ClusterState clusterState = createClusterState(indexName); @@ -604,6 +659,68 @@ public void testGetAsyncIndexRoutingWriteAction() throws Exception { assertThat(RemoteStoreUtils.invertLong(fileNameTokens[3]), lessThanOrEqualTo(System.currentTimeMillis())); } + public void testGetAsyncIndexRoutingDiffWriteAction() throws Exception { + String indexName = randomAlphaOfLength(randomIntBetween(1, 50)); + ClusterState currentState = createClusterState(indexName); + + // Get the IndexRoutingTable from the current state + IndexRoutingTable indexRoutingTable = currentState.routingTable().index(indexName); + Map shardRoutingTables = indexRoutingTable.getShards(); + + RoutingTableIncrementalDiff.IndexRoutingTableIncrementalDiff indexRoutingTableDiff = + new RoutingTableIncrementalDiff.IndexRoutingTableIncrementalDiff(new ArrayList<>(shardRoutingTables.values())); + + // Create the map for RoutingTableIncrementalDiff + Map> diffs = new HashMap<>(); + diffs.put(indexName, indexRoutingTableDiff); + + // RoutingTableIncrementalDiff diff = new RoutingTableIncrementalDiff(diffs); + + Iterable remotePath = new BlobPath().add("base-path") + .add( + Base64.getUrlEncoder() + .withoutPadding() + .encodeToString(currentState.getClusterName().value().getBytes(StandardCharsets.UTF_8)) + ) + .add("cluster-state") + .add(currentState.metadata().clusterUUID()) + .add(ROUTING_TABLE_DIFF_PATH_TOKEN); + + doAnswer(invocationOnMock -> { + 
invocationOnMock.getArgument(4, ActionListener.class).onResponse(null); + return null; + }).when(blobStoreTransferService) + .uploadBlob(any(InputStream.class), eq(remotePath), anyString(), eq(WritePriority.URGENT), any(ActionListener.class)); + + TestCapturingListener listener = new TestCapturingListener<>(); + CountDownLatch latch = new CountDownLatch(1); + + remoteRoutingTableService.getAsyncIndexRoutingDiffWriteAction( + currentState.metadata().clusterUUID(), + currentState.term(), + currentState.version(), + diffs, + new LatchedActionListener<>(listener, latch) + ); + latch.await(); + assertNull(listener.getFailure()); + assertNotNull(listener.getResult()); + ClusterMetadataManifest.UploadedMetadata uploadedMetadata = listener.getResult(); + + assertEquals(ROUTING_TABLE_DIFF_FILE, uploadedMetadata.getComponent()); + String uploadedFileName = uploadedMetadata.getUploadedFilename(); + String[] pathTokens = uploadedFileName.split(PATH_DELIMITER); + assertEquals(6, pathTokens.length); + assertEquals(pathTokens[0], "base-path"); + String[] fileNameTokens = pathTokens[5].split(DELIMITER); + + assertEquals(4, fileNameTokens.length); + assertEquals(ROUTING_TABLE_DIFF_METADATA_PREFIX, fileNameTokens[0]); + assertEquals(RemoteStoreUtils.invertLong(1L), fileNameTokens[1]); + assertEquals(RemoteStoreUtils.invertLong(2L), fileNameTokens[2]); + assertThat(RemoteStoreUtils.invertLong(fileNameTokens[3]), lessThanOrEqualTo(System.currentTimeMillis())); + } + public void testGetUpdatedIndexRoutingTableMetadataWhenNoChange() { List updatedIndicesRouting = new ArrayList<>(); List indicesRouting = randomUploadedIndexMetadataList(); @@ -687,4 +804,26 @@ public void testDeleteStaleIndexRoutingPathsThrowsIOException() throws IOExcepti verify(blobContainer).deleteBlobsIgnoringIfNotExists(stalePaths); } + public void testDeleteStaleIndexRoutingDiffPaths() throws IOException { + doNothing().when(blobContainer).deleteBlobsIgnoringIfNotExists(any()); + when(blobStore.blobContainer(any())).thenReturn(blobContainer); + List stalePaths = Arrays.asList("path1", "path2"); + remoteRoutingTableService.doStart(); + remoteRoutingTableService.deleteStaleIndexRoutingDiffPaths(stalePaths); + verify(blobContainer).deleteBlobsIgnoringIfNotExists(stalePaths); + } + + public void testDeleteStaleIndexRoutingDiffPathsThrowsIOException() throws IOException { + when(blobStore.blobContainer(any())).thenReturn(blobContainer); + List stalePaths = Arrays.asList("path1", "path2"); + // Simulate an IOException + doThrow(new IOException("test exception")).when(blobContainer).deleteBlobsIgnoringIfNotExists(Mockito.anyList()); + + remoteRoutingTableService.doStart(); + IOException thrown = assertThrows(IOException.class, () -> { + remoteRoutingTableService.deleteStaleIndexRoutingDiffPaths(stalePaths); + }); + assertEquals("test exception", thrown.getMessage()); + verify(blobContainer).deleteBlobsIgnoringIfNotExists(stalePaths); + } } diff --git a/server/src/test/java/org/opensearch/gateway/remote/ClusterMetadataManifestTests.java b/server/src/test/java/org/opensearch/gateway/remote/ClusterMetadataManifestTests.java index 256161af1a3e2..8a6dd6bc96e72 100644 --- a/server/src/test/java/org/opensearch/gateway/remote/ClusterMetadataManifestTests.java +++ b/server/src/test/java/org/opensearch/gateway/remote/ClusterMetadataManifestTests.java @@ -10,9 +10,11 @@ import org.opensearch.Version; import org.opensearch.cluster.ClusterState; +import org.opensearch.cluster.DiffableUtils; import org.opensearch.cluster.metadata.IndexGraveyard; import 
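// Sketch of the remote path the write-action test above expects: a fixed base
// path, then the URL-safe, unpadded Base64 of the cluster name, then
// cluster-state/<cluster-uuid>/routing-table-diff. The segment names follow
// the constants used in the test; the container layout itself is an assumption.
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import java.util.List;

class DiffBlobPathSketch {
    static List<String> diffBlobPath(String basePath, String clusterName, String clusterUUID) {
        String encodedClusterName = Base64.getUrlEncoder()
            .withoutPadding()
            .encodeToString(clusterName.getBytes(StandardCharsets.UTF_8));
        return List.of(basePath, encodedClusterName, "cluster-state", clusterUUID, "routing-table-diff");
    }

    public static void main(String[] args) {
        List<String> path = diffBlobPath("base-path", "test-cluster", "uuid-1");
        assert path.size() == 5;           // 6 tokens once the file name is appended
        assert !path.get(1).contains("="); // unpadded URL-safe encoding
    }
}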
org.opensearch.cluster.metadata.RepositoriesMetadata; import org.opensearch.cluster.metadata.WeightedRoutingMetadata; +import org.opensearch.cluster.routing.IndexRoutingTable; import org.opensearch.common.xcontent.json.JsonXContent; import org.opensearch.core.common.bytes.BytesReference; import org.opensearch.core.common.io.stream.NamedWriteableRegistry; @@ -29,9 +31,12 @@ import java.util.Arrays; import java.util.Collections; import java.util.List; +import java.util.Map; import java.util.function.Function; import java.util.stream.Collectors; +import org.mockito.Mockito; + import static org.opensearch.gateway.remote.ClusterMetadataManifest.CODEC_V0; import static org.opensearch.gateway.remote.ClusterMetadataManifest.CODEC_V1; import static org.opensearch.gateway.remote.RemoteClusterStateAttributesManager.CLUSTER_BLOCKS; @@ -157,7 +162,7 @@ public void testClusterMetadataManifestSerializationEqualsHashCode() { .opensearchVersion(Version.CURRENT) .nodeId("B10RX1f5RJenMQvYccCgSQ") .committed(true) - .codecVersion(ClusterMetadataManifest.CODEC_V2) + .codecVersion(ClusterMetadataManifest.CODEC_V3) .indices(randomUploadedIndexMetadataList()) .previousClusterUUID("yfObdx8KSMKKrXf8UyHhM") .clusterUUIDCommitted(true) @@ -191,7 +196,9 @@ public void testClusterMetadataManifestSerializationEqualsHashCode() { .diffManifest( new ClusterStateDiffManifest( RemoteClusterStateServiceTests.generateClusterStateWithOneIndex().build(), - ClusterState.EMPTY_STATE + ClusterState.EMPTY_STATE, + null, + "indicesRoutingDiffPath" ) ) .build(); @@ -523,7 +530,75 @@ public void testClusterMetadataManifestXContentV2() throws IOException { .diffManifest( new ClusterStateDiffManifest( RemoteClusterStateServiceTests.generateClusterStateWithOneIndex().build(), - ClusterState.EMPTY_STATE + ClusterState.EMPTY_STATE, + null, + null + ) + ) + .build(); + final XContentBuilder builder = JsonXContent.contentBuilder(); + builder.startObject(); + originalManifest.toXContent(builder, ToXContent.EMPTY_PARAMS); + builder.endObject(); + + try (XContentParser parser = createParser(JsonXContent.jsonXContent, BytesReference.bytes(builder))) { + final ClusterMetadataManifest fromXContentManifest = ClusterMetadataManifest.fromXContent(parser); + assertEquals(originalManifest, fromXContentManifest); + } + } + + public void testClusterMetadataManifestXContentV3() throws IOException { + UploadedIndexMetadata uploadedIndexMetadata = new UploadedIndexMetadata("test-index", "test-uuid", "/test/upload/path"); + UploadedMetadataAttribute uploadedMetadataAttribute = new UploadedMetadataAttribute("attribute_name", "testing_attribute"); + final DiffableUtils.MapDiff> routingTableIncrementalDiff = Mockito.mock( + DiffableUtils.MapDiff.class + ); + ClusterMetadataManifest originalManifest = ClusterMetadataManifest.builder() + .clusterTerm(1L) + .stateVersion(1L) + .clusterUUID("test-cluster-uuid") + .stateUUID("test-state-uuid") + .opensearchVersion(Version.CURRENT) + .nodeId("test-node-id") + .committed(false) + .codecVersion(ClusterMetadataManifest.CODEC_V3) + .indices(Collections.singletonList(uploadedIndexMetadata)) + .previousClusterUUID("prev-cluster-uuid") + .clusterUUIDCommitted(true) + .coordinationMetadata(uploadedMetadataAttribute) + .settingMetadata(uploadedMetadataAttribute) + .templatesMetadata(uploadedMetadataAttribute) + .customMetadataMap( + Collections.unmodifiableList( + Arrays.asList( + new UploadedMetadataAttribute( + CUSTOM_METADATA + CUSTOM_DELIMITER + RepositoriesMetadata.TYPE, + "custom--repositories-file" + ), + new 
UploadedMetadataAttribute( + CUSTOM_METADATA + CUSTOM_DELIMITER + IndexGraveyard.TYPE, + "custom--index_graveyard-file" + ), + new UploadedMetadataAttribute( + CUSTOM_METADATA + CUSTOM_DELIMITER + WeightedRoutingMetadata.TYPE, + "custom--weighted_routing_netadata-file" + ) + ) + ).stream().collect(Collectors.toMap(UploadedMetadataAttribute::getAttributeName, Function.identity())) + ) + .routingTableVersion(1L) + .indicesRouting(Collections.singletonList(uploadedIndexMetadata)) + .discoveryNodesMetadata(uploadedMetadataAttribute) + .clusterBlocksMetadata(uploadedMetadataAttribute) + .transientSettingsMetadata(uploadedMetadataAttribute) + .hashesOfConsistentSettings(uploadedMetadataAttribute) + .clusterStateCustomMetadataMap(Collections.emptyMap()) + .diffManifest( + new ClusterStateDiffManifest( + RemoteClusterStateServiceTests.generateClusterStateWithOneIndex().build(), + ClusterState.EMPTY_STATE, + routingTableIncrementalDiff, + uploadedMetadataAttribute.getUploadedFilename() ) ) .build(); diff --git a/server/src/test/java/org/opensearch/gateway/remote/RemoteClusterStateCleanupManagerTests.java b/server/src/test/java/org/opensearch/gateway/remote/RemoteClusterStateCleanupManagerTests.java index ec7e3c1ce81d3..b86f23f3d37aa 100644 --- a/server/src/test/java/org/opensearch/gateway/remote/RemoteClusterStateCleanupManagerTests.java +++ b/server/src/test/java/org/opensearch/gateway/remote/RemoteClusterStateCleanupManagerTests.java @@ -50,6 +50,7 @@ import static org.opensearch.gateway.remote.ClusterMetadataManifest.CODEC_V1; import static org.opensearch.gateway.remote.ClusterMetadataManifest.CODEC_V2; +import static org.opensearch.gateway.remote.ClusterMetadataManifest.CODEC_V3; import static org.opensearch.gateway.remote.ClusterMetadataManifest.UploadedIndexMetadata; import static org.opensearch.gateway.remote.ClusterMetadataManifest.UploadedMetadataAttribute; import static org.opensearch.gateway.remote.RemoteClusterStateCleanupManager.AsyncStaleFileDeletion; @@ -296,6 +297,74 @@ public void testDeleteClusterMetadata() throws IOException { verify(remoteRoutingTableService).deleteStaleIndexRoutingPaths(List.of(index3Metadata.getUploadedFilename())); } + public void testDeleteStaleIndicesRoutingDiffFile() throws IOException { + String clusterUUID = "clusterUUID"; + String clusterName = "test-cluster"; + List inactiveBlobs = Arrays.asList(new PlainBlobMetadata("manifest1.dat", 1L)); + List activeBlobs = Arrays.asList(new PlainBlobMetadata("manifest2.dat", 1L)); + + UploadedMetadataAttribute coordinationMetadata = new UploadedMetadataAttribute(COORDINATION_METADATA, "coordination_metadata"); + UploadedMetadataAttribute templateMetadata = new UploadedMetadataAttribute(TEMPLATES_METADATA, "template_metadata"); + UploadedMetadataAttribute settingMetadata = new UploadedMetadataAttribute(SETTING_METADATA, "settings_metadata"); + UploadedMetadataAttribute coordinationMetadataUpdated = new UploadedMetadataAttribute( + COORDINATION_METADATA, + "coordination_metadata_updated" + ); + + UploadedIndexMetadata index1Metadata = new UploadedIndexMetadata("index1", "indexUUID1", "index_metadata1__2"); + UploadedIndexMetadata index2Metadata = new UploadedIndexMetadata("index2", "indexUUID2", "index_metadata2__2"); + List indicesRouting1 = List.of(index1Metadata); + List indicesRouting2 = List.of(index2Metadata); + ClusterStateDiffManifest diffManifest1 = ClusterStateDiffManifest.builder().indicesRoutingDiffPath("index1RoutingDiffPath").build(); + ClusterStateDiffManifest diffManifest2 = 
ClusterStateDiffManifest.builder().indicesRoutingDiffPath("index2RoutingDiffPath").build(); + + ClusterMetadataManifest manifest1 = ClusterMetadataManifest.builder() + .indices(List.of(index1Metadata)) + .coordinationMetadata(coordinationMetadataUpdated) + .templatesMetadata(templateMetadata) + .settingMetadata(settingMetadata) + .clusterTerm(1L) + .stateVersion(1L) + .codecVersion(CODEC_V3) + .stateUUID(randomAlphaOfLength(10)) + .clusterUUID(clusterUUID) + .nodeId("nodeA") + .opensearchVersion(VersionUtils.randomOpenSearchVersion(random())) + .previousClusterUUID(ClusterState.UNKNOWN_UUID) + .committed(true) + .routingTableVersion(0L) + .indicesRouting(indicesRouting1) + .diffManifest(diffManifest1) + .build(); + ClusterMetadataManifest manifest2 = ClusterMetadataManifest.builder(manifest1) + .indices(List.of(index2Metadata)) + .indicesRouting(indicesRouting2) + .diffManifest(diffManifest2) + .build(); + + BlobContainer blobContainer = mock(BlobContainer.class); + doThrow(IOException.class).when(blobContainer).delete(); + when(blobStore.blobContainer(any())).thenReturn(blobContainer); + BlobPath blobPath = new BlobPath().add("random-path"); + when((blobStoreRepository.basePath())).thenReturn(blobPath); + remoteClusterStateCleanupManager.start(); + when(remoteManifestManager.getManifestFolderPath(eq(clusterName), eq(clusterUUID))).thenReturn( + new BlobPath().add(encodeString(clusterName)).add(CLUSTER_STATE_PATH_TOKEN).add(clusterUUID).add(MANIFEST) + ); + when(remoteManifestManager.fetchRemoteClusterMetadataManifest(eq(clusterName), eq(clusterUUID), any())).thenReturn( + manifest2, + manifest1 + ); + remoteClusterStateCleanupManager = new RemoteClusterStateCleanupManager( + remoteClusterStateService, + clusterService, + remoteRoutingTableService + ); + remoteClusterStateCleanupManager.start(); + remoteClusterStateCleanupManager.deleteClusterMetadata(clusterName, clusterUUID, activeBlobs, inactiveBlobs); + verify(remoteRoutingTableService).deleteStaleIndexRoutingDiffPaths(List.of("index1RoutingDiffPath")); + } + public void testDeleteClusterMetadataNoOpsRoutingTableService() throws IOException { String clusterUUID = "clusterUUID"; String clusterName = "test-cluster"; @@ -515,6 +584,83 @@ public void testIndexRoutingFilesCleanupFailureStats() throws Exception { }); } + public void testIndicesRoutingDiffFilesCleanupFailureStats() throws Exception { + String clusterUUID = "clusterUUID"; + String clusterName = "test-cluster"; + List inactiveBlobs = Arrays.asList(new PlainBlobMetadata("manifest1.dat", 1L)); + List activeBlobs = Arrays.asList(new PlainBlobMetadata("manifest2.dat", 1L)); + + UploadedMetadataAttribute coordinationMetadata = new UploadedMetadataAttribute(COORDINATION_METADATA, "coordination_metadata"); + UploadedMetadataAttribute templateMetadata = new UploadedMetadataAttribute(TEMPLATES_METADATA, "template_metadata"); + UploadedMetadataAttribute settingMetadata = new UploadedMetadataAttribute(SETTING_METADATA, "settings_metadata"); + UploadedMetadataAttribute coordinationMetadataUpdated = new UploadedMetadataAttribute( + COORDINATION_METADATA, + "coordination_metadata_updated" + ); + + UploadedIndexMetadata index1Metadata = new UploadedIndexMetadata("index1", "indexUUID1", "index_metadata1__2"); + UploadedIndexMetadata index2Metadata = new UploadedIndexMetadata("index2", "indexUUID2", "index_metadata2__2"); + List indicesRouting1 = List.of(index1Metadata); + List indicesRouting2 = List.of(index2Metadata); + ClusterStateDiffManifest diffManifest1 = 
ClusterStateDiffManifest.builder().indicesRoutingDiffPath("index1RoutingDiffPath").build(); + ClusterStateDiffManifest diffManifest2 = ClusterStateDiffManifest.builder().indicesRoutingDiffPath("index2RoutingDiffPath").build(); + + ClusterMetadataManifest manifest1 = ClusterMetadataManifest.builder() + .indices(List.of(index1Metadata)) + .coordinationMetadata(coordinationMetadataUpdated) + .templatesMetadata(templateMetadata) + .settingMetadata(settingMetadata) + .clusterTerm(1L) + .stateVersion(1L) + .codecVersion(CODEC_V3) + .stateUUID(randomAlphaOfLength(10)) + .clusterUUID(clusterUUID) + .nodeId("nodeA") + .opensearchVersion(VersionUtils.randomOpenSearchVersion(random())) + .previousClusterUUID(ClusterState.UNKNOWN_UUID) + .committed(true) + .routingTableVersion(0L) + .indicesRouting(indicesRouting1) + .diffManifest(diffManifest1) + .build(); + ClusterMetadataManifest manifest2 = ClusterMetadataManifest.builder(manifest1) + .indices(List.of(index2Metadata)) + .indicesRouting(indicesRouting2) + .diffManifest(diffManifest2) + .build(); + + BlobContainer blobContainer = mock(BlobContainer.class); + doThrow(IOException.class).when(blobContainer).delete(); + when(blobStore.blobContainer(any())).thenReturn(blobContainer); + + BlobPath blobPath = new BlobPath().add("random-path"); + when((blobStoreRepository.basePath())).thenReturn(blobPath); + remoteClusterStateCleanupManager.start(); + when(remoteManifestManager.getManifestFolderPath(eq(clusterName), eq(clusterUUID))).thenReturn( + new BlobPath().add(encodeString(clusterName)).add(CLUSTER_STATE_PATH_TOKEN).add(clusterUUID).add(MANIFEST) + ); + when(remoteManifestManager.fetchRemoteClusterMetadataManifest(eq(clusterName), eq(clusterUUID), any())).thenReturn( + manifest1, + manifest2 + ); + doNothing().when(remoteRoutingTableService).deleteStaleIndexRoutingDiffPaths(any()); + + remoteClusterStateCleanupManager.deleteClusterMetadata(clusterName, clusterUUID, activeBlobs, inactiveBlobs); + assertBusy(() -> { + // wait for stats to get updated + assertNotNull(remoteClusterStateCleanupManager.getStats()); + assertEquals(0, remoteClusterStateCleanupManager.getStats().getIndicesRoutingDiffFileCleanupAttemptFailedCount()); + }); + + doThrow(IOException.class).when(remoteRoutingTableService).deleteStaleIndexRoutingPaths(any()); + remoteClusterStateCleanupManager.deleteClusterMetadata(clusterName, clusterUUID, activeBlobs, inactiveBlobs); + assertBusy(() -> { + // wait for stats to get updated + assertNotNull(remoteClusterStateCleanupManager.getStats()); + assertEquals(1, remoteClusterStateCleanupManager.getStats().getIndicesRoutingDiffFileCleanupAttemptFailedCount()); + }); + } + public void testSingleConcurrentExecutionOfStaleManifestCleanup() throws Exception { BlobContainer blobContainer = mock(BlobContainer.class); when(blobStore.blobContainer(any())).thenReturn(blobContainer); diff --git a/server/src/test/java/org/opensearch/gateway/remote/RemoteClusterStateServiceTests.java b/server/src/test/java/org/opensearch/gateway/remote/RemoteClusterStateServiceTests.java index 6c764585c48e7..59ca62dff2aa7 100644 --- a/server/src/test/java/org/opensearch/gateway/remote/RemoteClusterStateServiceTests.java +++ b/server/src/test/java/org/opensearch/gateway/remote/RemoteClusterStateServiceTests.java @@ -535,14 +535,15 @@ public void testTimeoutWhileWritingManifestFile() throws IOException { anyBoolean(), anyMap(), anyBoolean(), - anyList() + anyList(), + anyMap() ) ).thenReturn(new RemoteClusterStateUtils.UploadedMetadataResults()); 
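// Sketch of the failure-accounting contract the cleanup tests above exercise:
// each category of stale file gets its own try/catch and its own counter, so
// an IOException while deleting routing-table *diff* files must increment the
// diff counter only; for that counter to move in a test, the stubbed failure
// has to come from deleteStaleIndexRoutingDiffPaths rather than its
// index-routing sibling. Hypothetical names, plain Java.
import java.io.IOException;
import java.util.List;
import java.util.concurrent.atomic.AtomicLong;

class CleanupFailureStatsSketch {
    interface Deleter { void delete(List<String> paths) throws IOException; }

    final AtomicLong indexRoutingCleanupFailed = new AtomicLong();
    final AtomicLong indicesRoutingDiffCleanupFailed = new AtomicLong();

    void cleanup(Deleter indexRoutingDeleter, Deleter diffDeleter, List<String> stale, List<String> staleDiffs) {
        try {
            indexRoutingDeleter.delete(stale);
        } catch (IOException e) {
            indexRoutingCleanupFailed.incrementAndGet();       // one counter per category
        }
        try {
            diffDeleter.delete(staleDiffs);
        } catch (IOException e) {
            indicesRoutingDiffCleanupFailed.incrementAndGet(); // independent of the above
        }
    }

    public static void main(String[] args) {
        CleanupFailureStatsSketch stats = new CleanupFailureStatsSketch();
        stats.cleanup(paths -> {}, paths -> { throw new IOException("test exception"); },
            List.of("r1"), List.of("d1"));
        assert stats.indexRoutingCleanupFailed.get() == 0;
        assert stats.indicesRoutingDiffCleanupFailed.get() == 1;
    }
}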
RemoteStateTransferException ex = expectThrows( RemoteStateTransferException.class, () -> spiedService.writeFullMetadata(clusterState, randomAlphaOfLength(10)) ); - assertTrue(ex.getMessage().contains("Timed out waiting for transfer of manifest file to complete")); + assertTrue(ex.getMessage().contains("Timed out waiting for transfer of following metadata to complete")); } public void testWriteFullMetadataInParallelFailureForIndexMetadata() throws IOException { @@ -634,7 +635,8 @@ public void testWriteMetadataInParallelIncompleteUpload() throws IOException { true, clusterState.getCustoms(), true, - emptyList() + emptyList(), + null ) ); assertTrue(exception.getMessage().startsWith("Some metadata components were not uploaded successfully")); @@ -684,7 +686,8 @@ public void testWriteIncrementalMetadataSuccess() throws IOException { eq(false), eq(Collections.emptyMap()), eq(false), - eq(Collections.emptyList()) + eq(Collections.emptyList()), + eq(Collections.emptyMap()) ); assertThat(manifestInfo.getManifestFileName(), notNullValue()); @@ -764,7 +767,8 @@ public void testWriteIncrementalMetadataSuccessWhenPublicationEnabled() throws I eq(false), eq(Collections.emptyMap()), eq(true), - Mockito.anyList() + anyList(), + eq(Collections.emptyMap()) ); assertThat(manifestInfo.getManifestFileName(), notNullValue()); @@ -811,7 +815,8 @@ public void testTimeoutWhileWritingMetadata() throws IOException { true, emptyMap(), true, - emptyList() + emptyList(), + null ) ); assertTrue(exception.getMessage().startsWith("Timed out waiting for transfer of following metadata to complete")); @@ -862,6 +867,7 @@ public void testGetClusterStateForManifest_IncludeEphemeral() throws IOException eq(manifest.getIndicesRouting()), eq(true), eq(manifest.getClusterStateCustomMap()), + eq(false), eq(true) ); } @@ -911,7 +917,9 @@ public void testGetClusterStateForManifest_ExcludeEphemeral() throws IOException eq(emptyList()), eq(false), eq(emptyMap()), + eq(false), eq(false) + ); } @@ -958,6 +966,7 @@ public void testGetClusterStateFromManifest_CodecV1() throws IOException { eq(emptyList()), eq(false), eq(emptyMap()), + eq(false), eq(false) ); verify(mockedGlobalMetadataManager, times(1)).getGlobalMetadata(eq(manifest.getClusterUUID()), eq(manifest)); @@ -1281,6 +1290,7 @@ public void testReadClusterStateInParallel_TimedOut() throws IOException { emptyList(), true, emptyMap(), + false, true ) ); @@ -1312,6 +1322,7 @@ public void testReadClusterStateInParallel_ExceptionDuringRead() throws IOExcept emptyList(), true, emptyMap(), + false, true ) ); @@ -1418,6 +1429,7 @@ public void testReadClusterStateInParallel_UnexpectedResult() throws IOException emptyList(), true, newClusterStateCustoms, + false, true ) ); @@ -1652,6 +1664,7 @@ public void testReadClusterStateInParallel_Success() throws IOException { emptyList(), true, newClusterStateCustoms, + false, true ); @@ -2745,6 +2758,108 @@ public void testWriteIncrementalMetadataSuccessWithRoutingTable() throws IOExcep assertThat(manifest.getIndicesRouting().get(0).getUploadedFilename(), notNullValue()); } + public void testWriteIncrementalMetadataSuccessWithRoutingTableDiff() throws IOException { + initializeRoutingTable(); + final ClusterState clusterState = generateClusterStateWithOneIndex("test-index", 5, 1, false).nodes( + nodesWithLocalNodeClusterManager() + ).build(); + mockBlobStoreObjects(); + List indices = new ArrayList<>(); + final UploadedIndexMetadata uploadedIndiceRoutingMetadata = new UploadedIndexMetadata( + "test-index", + "index-uuid", + "routing-filename", + 
INDEX_ROUTING_METADATA_PREFIX + ); + indices.add(uploadedIndiceRoutingMetadata); + final ClusterState previousClusterState = generateClusterStateWithOneIndex("test-index", 5, 1, true).nodes( + nodesWithLocalNodeClusterManager() + ).build(); + + final ClusterMetadataManifest previousManifest = ClusterMetadataManifest.builder().indices(indices).build(); + when((blobStoreRepository.basePath())).thenReturn(BlobPath.cleanPath().add("base-path")); + + remoteClusterStateService.start(); + final ClusterMetadataManifest manifest = remoteClusterStateService.writeIncrementalMetadata( + previousClusterState, + clusterState, + previousManifest + ).getClusterMetadataManifest(); + final UploadedIndexMetadata uploadedIndexMetadata = new UploadedIndexMetadata("test-index", "index-uuid", "metadata-filename"); + final ClusterMetadataManifest expectedManifest = ClusterMetadataManifest.builder() + .indices(List.of(uploadedIndexMetadata)) + .clusterTerm(clusterState.term()) + .stateVersion(1L) + .stateUUID("state-uuid") + .clusterUUID("cluster-uuid") + .previousClusterUUID("prev-cluster-uuid") + .routingTableVersion(1) + .indicesRouting(List.of(uploadedIndiceRoutingMetadata)) + .build(); + + assertThat(manifest.getIndices().size(), is(1)); + assertThat(manifest.getClusterTerm(), is(expectedManifest.getClusterTerm())); + assertThat(manifest.getStateVersion(), is(expectedManifest.getStateVersion())); + assertThat(manifest.getClusterUUID(), is(expectedManifest.getClusterUUID())); + assertThat(manifest.getStateUUID(), is(expectedManifest.getStateUUID())); + assertThat(manifest.getRoutingTableVersion(), is(expectedManifest.getRoutingTableVersion())); + assertThat(manifest.getIndicesRouting().get(0).getIndexName(), is(uploadedIndiceRoutingMetadata.getIndexName())); + assertThat(manifest.getIndicesRouting().get(0).getIndexUUID(), is(uploadedIndiceRoutingMetadata.getIndexUUID())); + assertThat(manifest.getIndicesRouting().get(0).getUploadedFilename(), notNullValue()); + assertThat(manifest.getDiffManifest().getIndicesRoutingDiffPath(), notNullValue()); + } + + public void testWriteIncrementalMetadataSuccessWithRoutingTableDiffNull() throws IOException { + initializeRoutingTable(); + final ClusterState clusterState = generateClusterStateWithOneIndex("test-index", 5, 1, false).nodes( + nodesWithLocalNodeClusterManager() + ).build(); + mockBlobStoreObjects(); + List indices = new ArrayList<>(); + final UploadedIndexMetadata uploadedIndiceRoutingMetadata = new UploadedIndexMetadata( + "test-index", + "index-uuid", + "routing-filename", + INDEX_ROUTING_METADATA_PREFIX + ); + indices.add(uploadedIndiceRoutingMetadata); + final ClusterState previousClusterState = generateClusterStateWithOneIndex("test-index2", 5, 1, false).nodes( + nodesWithLocalNodeClusterManager() + ).build(); + + final ClusterMetadataManifest previousManifest = ClusterMetadataManifest.builder().indices(indices).build(); + when((blobStoreRepository.basePath())).thenReturn(BlobPath.cleanPath().add("base-path")); + + remoteClusterStateService.start(); + final ClusterMetadataManifest manifest = remoteClusterStateService.writeIncrementalMetadata( + previousClusterState, + clusterState, + previousManifest + ).getClusterMetadataManifest(); + final UploadedIndexMetadata uploadedIndexMetadata = new UploadedIndexMetadata("test-index", "index-uuid", "metadata-filename"); + final ClusterMetadataManifest expectedManifest = ClusterMetadataManifest.builder() + .indices(List.of(uploadedIndexMetadata)) + .clusterTerm(clusterState.term()) + .stateVersion(1L) + 
.stateUUID("state-uuid") + .clusterUUID("cluster-uuid") + .previousClusterUUID("prev-cluster-uuid") + .routingTableVersion(1) + .indicesRouting(List.of(uploadedIndiceRoutingMetadata)) + .build(); + + assertThat(manifest.getIndices().size(), is(1)); + assertThat(manifest.getClusterTerm(), is(expectedManifest.getClusterTerm())); + assertThat(manifest.getStateVersion(), is(expectedManifest.getStateVersion())); + assertThat(manifest.getClusterUUID(), is(expectedManifest.getClusterUUID())); + assertThat(manifest.getStateUUID(), is(expectedManifest.getStateUUID())); + assertThat(manifest.getRoutingTableVersion(), is(expectedManifest.getRoutingTableVersion())); + assertThat(manifest.getIndicesRouting().get(0).getIndexName(), is(uploadedIndiceRoutingMetadata.getIndexName())); + assertThat(manifest.getIndicesRouting().get(0).getIndexUUID(), is(uploadedIndiceRoutingMetadata.getIndexUUID())); + assertThat(manifest.getIndicesRouting().get(0).getUploadedFilename(), notNullValue()); + assertThat(manifest.getDiffManifest().getIndicesRoutingDiffPath(), nullValue()); + } + private void initializeRoutingTable() { Settings newSettings = Settings.builder() .put("node.attr." + REMOTE_STORE_ROUTING_TABLE_REPOSITORY_NAME_ATTRIBUTE_KEY, "routing_repository") @@ -3217,6 +3332,54 @@ static ClusterState.Builder generateClusterStateWithOneIndex() { .routingTable(RoutingTable.builder().addAsNew(indexMetadata).version(1L).build()); } + public static ClusterState.Builder generateClusterStateWithOneIndex( + String indexName, + int primaryShards, + int replicaShards, + boolean addAsNew + ) { + + final Index index = new Index(indexName, "index-uuid"); + final Settings idxSettings = Settings.builder() + .put(IndexMetadata.SETTING_VERSION_CREATED, Version.CURRENT) + .put(IndexMetadata.SETTING_INDEX_UUID, index.getUUID()) + .build(); + final IndexMetadata indexMetadata = new IndexMetadata.Builder(index.getName()).settings(idxSettings) + .numberOfShards(primaryShards) + .numberOfReplicas(replicaShards) + .build(); + final CoordinationMetadata coordinationMetadata = CoordinationMetadata.builder().term(1L).build(); + final Settings settings = Settings.builder().put("mock-settings", true).build(); + final TemplatesMetadata templatesMetadata = TemplatesMetadata.builder() + .put(IndexTemplateMetadata.builder("template1").settings(idxSettings).patterns(List.of("test*")).build()) + .build(); + final CustomMetadata1 customMetadata1 = new CustomMetadata1("custom-metadata-1"); + + RoutingTable.Builder routingTableBuilder = RoutingTable.builder(); + if (addAsNew) { + routingTableBuilder.addAsNew(indexMetadata); + } else { + routingTableBuilder.addAsRecovery(indexMetadata); + } + + return ClusterState.builder(ClusterName.DEFAULT) + .version(1L) + .stateUUID("state-uuid") + .metadata( + Metadata.builder() + .version(randomNonNegativeLong()) + .put(indexMetadata, true) + .clusterUUID("cluster-uuid") + .coordinationMetadata(coordinationMetadata) + .persistentSettings(settings) + .templates(templatesMetadata) + .hashesOfConsistentSettings(Map.of("key1", "value1", "key2", "value2")) + .putCustom(customMetadata1.getWriteableName(), customMetadata1) + .build() + ) + .routingTable(routingTableBuilder.version(1L).build()); + } + static ClusterState.Builder generateClusterStateWithAllAttributes() { final Index index = new Index("test-index", "index-uuid"); final Settings idxSettings = Settings.builder() @@ -3296,7 +3459,7 @@ static ClusterMetadataManifest.Builder generateClusterMetadataManifestWithAllAtt ); } - static DiscoveryNodes 
nodesWithLocalNodeClusterManager() { + public static DiscoveryNodes nodesWithLocalNodeClusterManager() { final DiscoveryNode localNode = new DiscoveryNode("cluster-manager-id", buildNewFakeTransportAddress(), Version.CURRENT); return DiscoveryNodes.builder().clusterManagerNodeId("cluster-manager-id").localNodeId("cluster-manager-id").add(localNode).build(); } diff --git a/server/src/test/java/org/opensearch/gateway/remote/model/ClusterStateDiffManifestTests.java b/server/src/test/java/org/opensearch/gateway/remote/model/ClusterStateDiffManifestTests.java index 897b2f5eeb25d..f89619a09cd52 100644 --- a/server/src/test/java/org/opensearch/gateway/remote/model/ClusterStateDiffManifestTests.java +++ b/server/src/test/java/org/opensearch/gateway/remote/model/ClusterStateDiffManifestTests.java @@ -10,6 +10,7 @@ import org.opensearch.Version; import org.opensearch.cluster.ClusterState; +import org.opensearch.cluster.DiffableUtils; import org.opensearch.cluster.coordination.CoordinationMetadata; import org.opensearch.cluster.metadata.IndexMetadata; import org.opensearch.cluster.metadata.IndexTemplateMetadata; @@ -17,6 +18,7 @@ import org.opensearch.cluster.metadata.TemplatesMetadata; import org.opensearch.cluster.node.DiscoveryNode; import org.opensearch.cluster.node.DiscoveryNodes; +import org.opensearch.cluster.routing.IndexRoutingTable; import org.opensearch.common.settings.Settings; import org.opensearch.common.xcontent.json.JsonXContent; import org.opensearch.core.common.bytes.BytesReference; @@ -40,7 +42,11 @@ import static java.util.stream.Collectors.toList; import static org.opensearch.Version.CURRENT; import static org.opensearch.cluster.ClusterState.EMPTY_STATE; +import static org.opensearch.cluster.routing.remote.RemoteRoutingTableService.CUSTOM_ROUTING_TABLE_DIFFABLE_VALUE_SERIALIZER; import static org.opensearch.core.common.transport.TransportAddress.META_ADDRESS; +import static org.opensearch.gateway.remote.ClusterMetadataManifest.CODEC_V3; +import static org.opensearch.gateway.remote.RemoteClusterStateServiceTests.generateClusterStateWithOneIndex; +import static org.opensearch.gateway.remote.RemoteClusterStateServiceTests.nodesWithLocalNodeClusterManager; import static org.opensearch.gateway.remote.model.RemoteClusterBlocksTests.randomClusterBlocks; public class ClusterStateDiffManifestTests extends OpenSearchTestCase { @@ -114,11 +120,70 @@ public void testClusterStateDiffManifestXContent() throws IOException { diffManifest.toXContent(builder, ToXContent.EMPTY_PARAMS); builder.endObject(); try (XContentParser parser = createParser(JsonXContent.jsonXContent, BytesReference.bytes(builder))) { - final ClusterStateDiffManifest parsedManifest = ClusterStateDiffManifest.fromXContent(parser); + final ClusterStateDiffManifest parsedManifest = ClusterStateDiffManifest.fromXContent(parser, CODEC_V3); assertEquals(diffManifest, parsedManifest); } } + public void testClusterStateWithRoutingTableDiffInDiffManifestXContent() throws IOException { + ClusterState initialState = generateClusterStateWithOneIndex("test-index", 5, 1, true).nodes(nodesWithLocalNodeClusterManager()) + .build(); + + ClusterState updatedState = generateClusterStateWithOneIndex("test-index", 5, 2, false).nodes(nodesWithLocalNodeClusterManager()) + .build(); + + ClusterStateDiffManifest diffManifest = verifyRoutingTableDiffManifest(initialState, updatedState); + final XContentBuilder builder = JsonXContent.contentBuilder(); + builder.startObject(); + diffManifest.toXContent(builder, ToXContent.EMPTY_PARAMS); + 
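+        // Editorial sketch: fromXContent now takes the manifest codec version, presumably so the
+        // parser only expects the routing-table diff fields for codec versions that carry them
+        // (CODEC_V3 and later here). A hedged illustration of the round trip this test performs,
+        // using only names already present in this file:
+        //
+        //     XContentBuilder b = JsonXContent.contentBuilder().startObject();
+        //     diffManifest.toXContent(b, ToXContent.EMPTY_PARAMS);
+        //     b.endObject();
+        //     try (XContentParser p = createParser(JsonXContent.jsonXContent, BytesReference.bytes(b))) {
+        //         assertEquals(diffManifest, ClusterStateDiffManifest.fromXContent(p, CODEC_V3));
+        //     }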
builder.endObject(); + try (XContentParser parser = createParser(JsonXContent.jsonXContent, BytesReference.bytes(builder))) { + final ClusterStateDiffManifest parsedManifest = ClusterStateDiffManifest.fromXContent(parser, CODEC_V3); + assertEquals(diffManifest, parsedManifest); + } + } + + public void testClusterStateWithRoutingTableDiffInDiffManifestXContent1() throws IOException { + ClusterState initialState = generateClusterStateWithOneIndex("test-index", 5, 1, true).nodes(nodesWithLocalNodeClusterManager()) + .build(); + + ClusterState updatedState = generateClusterStateWithOneIndex("test-index-1", 5, 2, false).nodes(nodesWithLocalNodeClusterManager()) + .build(); + + ClusterStateDiffManifest diffManifest = verifyRoutingTableDiffManifest(initialState, updatedState); + final XContentBuilder builder = JsonXContent.contentBuilder(); + builder.startObject(); + diffManifest.toXContent(builder, ToXContent.EMPTY_PARAMS); + builder.endObject(); + try (XContentParser parser = createParser(JsonXContent.jsonXContent, BytesReference.bytes(builder))) { + final ClusterStateDiffManifest parsedManifest = ClusterStateDiffManifest.fromXContent(parser, CODEC_V3); + assertEquals(diffManifest, parsedManifest); + } + } + + private ClusterStateDiffManifest verifyRoutingTableDiffManifest(ClusterState previousState, ClusterState currentState) { + // Create initial and updated IndexRoutingTable maps + Map initialRoutingTableMap = previousState.getRoutingTable().indicesRouting(); + Map updatedRoutingTableMap = currentState.getRoutingTable().indicesRouting(); + + DiffableUtils.MapDiff> routingTableIncrementalDiff = DiffableUtils.diff( + initialRoutingTableMap, + updatedRoutingTableMap, + DiffableUtils.getStringKeySerializer(), + CUSTOM_ROUTING_TABLE_DIFFABLE_VALUE_SERIALIZER + ); + ClusterStateDiffManifest manifest = new ClusterStateDiffManifest( + currentState, + previousState, + routingTableIncrementalDiff, + "indicesRoutingDiffPath" + ); + assertEquals("indicesRoutingDiffPath", manifest.getIndicesRoutingDiffPath()); + assertEquals(routingTableIncrementalDiff.getUpserts().size(), manifest.getIndicesRoutingUpdated().size()); + assertEquals(routingTableIncrementalDiff.getDeletes().size(), manifest.getIndicesRoutingDeleted().size()); + return manifest; + } + private ClusterStateDiffManifest updateAndVerifyState( ClusterState initialState, List indicesToAdd, @@ -191,7 +256,7 @@ private ClusterStateDiffManifest updateAndVerifyState( } ClusterState updatedClusterState = clusterStateBuilder.metadata(metadataBuilder.build()).build(); - ClusterStateDiffManifest manifest = new ClusterStateDiffManifest(updatedClusterState, initialState); + ClusterStateDiffManifest manifest = new ClusterStateDiffManifest(updatedClusterState, initialState, null, null); assertEquals(indicesToAdd.stream().map(im -> im.getIndex().getName()).collect(toList()), manifest.getIndicesUpdated()); assertEquals(indicesToRemove, manifest.getIndicesDeleted()); assertEquals(new ArrayList<>(customsToAdd.keySet()), manifest.getCustomMetadataUpdated()); diff --git a/server/src/test/java/org/opensearch/gateway/remote/routingtable/RemoteIndexRoutingTableDiffTests.java b/server/src/test/java/org/opensearch/gateway/remote/routingtable/RemoteIndexRoutingTableDiffTests.java new file mode 100644 index 0000000000000..6ffa7fc5cded8 --- /dev/null +++ b/server/src/test/java/org/opensearch/gateway/remote/routingtable/RemoteIndexRoutingTableDiffTests.java @@ -0,0 +1,317 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require 
contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +package org.opensearch.gateway.remote.routingtable; + +import org.opensearch.Version; +import org.opensearch.cluster.Diff; +import org.opensearch.cluster.metadata.IndexMetadata; +import org.opensearch.cluster.metadata.Metadata; +import org.opensearch.cluster.routing.IndexRoutingTable; +import org.opensearch.cluster.routing.RoutingTable; +import org.opensearch.cluster.routing.RoutingTableIncrementalDiff; +import org.opensearch.common.blobstore.BlobPath; +import org.opensearch.common.compress.DeflateCompressor; +import org.opensearch.common.remote.BlobPathParameters; +import org.opensearch.common.settings.ClusterSettings; +import org.opensearch.common.settings.Settings; +import org.opensearch.core.common.io.stream.NamedWriteableRegistry; +import org.opensearch.core.compress.Compressor; +import org.opensearch.core.compress.NoneCompressor; +import org.opensearch.gateway.remote.ClusterMetadataManifest; +import org.opensearch.index.remote.RemoteStoreUtils; +import org.opensearch.index.translog.transfer.BlobStoreTransferService; +import org.opensearch.repositories.blobstore.BlobStoreRepository; +import org.opensearch.test.OpenSearchTestCase; +import org.opensearch.threadpool.TestThreadPool; +import org.opensearch.threadpool.ThreadPool; +import org.junit.After; +import org.junit.Before; + +import java.io.IOException; +import java.io.InputStream; +import java.util.HashMap; +import java.util.List; +import java.util.Map; + +import static org.opensearch.gateway.remote.routingtable.RemoteRoutingTableDiff.ROUTING_TABLE_DIFF_FILE; +import static org.opensearch.gateway.remote.routingtable.RemoteRoutingTableDiff.ROUTING_TABLE_DIFF_METADATA_PREFIX; +import static org.opensearch.gateway.remote.routingtable.RemoteRoutingTableDiff.ROUTING_TABLE_DIFF_PATH_TOKEN; +import static org.hamcrest.Matchers.greaterThan; +import static org.hamcrest.Matchers.is; +import static org.hamcrest.Matchers.lessThanOrEqualTo; +import static org.hamcrest.Matchers.nullValue; +import static org.mockito.Mockito.mock; +import static org.mockito.Mockito.when; + +public class RemoteIndexRoutingTableDiffTests extends OpenSearchTestCase { + + private static final String TEST_BLOB_NAME = "/test-path/test-blob-name"; + private static final String TEST_BLOB_PATH = "test-path"; + private static final String TEST_BLOB_FILE_NAME = "test-blob-name"; + private static final long STATE_VERSION = 3L; + private static final long STATE_TERM = 2L; + private String clusterUUID; + private BlobStoreRepository blobStoreRepository; + private BlobStoreTransferService blobStoreTransferService; + private ClusterSettings clusterSettings; + private Compressor compressor; + + private String clusterName; + private NamedWriteableRegistry namedWriteableRegistry; + private final ThreadPool threadPool = new TestThreadPool(getClass().getName()); + + @Before + public void setup() { + clusterSettings = new ClusterSettings(Settings.EMPTY, ClusterSettings.BUILT_IN_CLUSTER_SETTINGS); + this.clusterUUID = "test-cluster-uuid"; + this.blobStoreTransferService = mock(BlobStoreTransferService.class); + this.blobStoreRepository = mock(BlobStoreRepository.class); + BlobPath blobPath = new BlobPath().add("/path"); + when(blobStoreRepository.basePath()).thenReturn(blobPath); + when(blobStoreRepository.getCompressor()).thenReturn(new DeflateCompressor()); + compressor = new NoneCompressor(); + namedWriteableRegistry = writableRegistry(); + this.clusterName = 
"test-cluster-name"; + } + + @After + public void tearDown() throws Exception { + super.tearDown(); + threadPool.shutdown(); + } + + public void testClusterUUID() { + Map> diffs = new HashMap<>(); + String indexName = randomAlphaOfLength(randomIntBetween(1, 50)); + int numberOfShards = randomIntBetween(1, 10); + int numberOfReplicas = randomIntBetween(1, 10); + + IndexMetadata indexMetadata = IndexMetadata.builder(indexName) + .settings(settings(Version.CURRENT)) + .numberOfShards(numberOfShards) + .numberOfReplicas(numberOfReplicas) + .build(); + + IndexRoutingTable indexRoutingTable = IndexRoutingTable.builder(indexMetadata.getIndex()).initializeAsNew(indexMetadata).build(); + + diffs.put(indexName, indexRoutingTable.diff(indexRoutingTable)); + + RoutingTableIncrementalDiff routingTableIncrementalDiff = new RoutingTableIncrementalDiff(diffs); + + RemoteRoutingTableDiff remoteDiffForUpload = new RemoteRoutingTableDiff( + routingTableIncrementalDiff, + clusterUUID, + compressor, + STATE_TERM, + STATE_VERSION + ); + assertEquals(remoteDiffForUpload.clusterUUID(), clusterUUID); + + RemoteRoutingTableDiff remoteDiffForDownload = new RemoteRoutingTableDiff(TEST_BLOB_NAME, clusterUUID, compressor); + assertEquals(remoteDiffForDownload.clusterUUID(), clusterUUID); + } + + public void testFullBlobName() { + Map> diffs = new HashMap<>(); + String indexName = randomAlphaOfLength(randomIntBetween(1, 50)); + int numberOfShards = randomIntBetween(1, 10); + int numberOfReplicas = randomIntBetween(1, 10); + + IndexMetadata indexMetadata = IndexMetadata.builder(indexName) + .settings(settings(Version.CURRENT)) + .numberOfShards(numberOfShards) + .numberOfReplicas(numberOfReplicas) + .build(); + + IndexRoutingTable indexRoutingTable = IndexRoutingTable.builder(indexMetadata.getIndex()).initializeAsNew(indexMetadata).build(); + + diffs.put(indexName, indexRoutingTable.diff(indexRoutingTable)); + RoutingTableIncrementalDiff routingTableIncrementalDiff = new RoutingTableIncrementalDiff(diffs); + + RemoteRoutingTableDiff remoteDiffForUpload = new RemoteRoutingTableDiff( + routingTableIncrementalDiff, + clusterUUID, + compressor, + STATE_TERM, + STATE_VERSION + ); + assertThat(remoteDiffForUpload.getFullBlobName(), nullValue()); + + RemoteRoutingTableDiff remoteDiffForDownload = new RemoteRoutingTableDiff(TEST_BLOB_NAME, clusterUUID, compressor); + assertThat(remoteDiffForDownload.getFullBlobName(), is(TEST_BLOB_NAME)); + } + + public void testBlobFileName() { + Map> diffs = new HashMap<>(); + String indexName = randomAlphaOfLength(randomIntBetween(1, 50)); + int numberOfShards = randomIntBetween(1, 10); + int numberOfReplicas = randomIntBetween(1, 10); + + IndexMetadata indexMetadata = IndexMetadata.builder(indexName) + .settings(settings(Version.CURRENT)) + .numberOfShards(numberOfShards) + .numberOfReplicas(numberOfReplicas) + .build(); + + IndexRoutingTable indexRoutingTable = IndexRoutingTable.builder(indexMetadata.getIndex()).initializeAsNew(indexMetadata).build(); + + diffs.put(indexName, indexRoutingTable.diff(indexRoutingTable)); + RoutingTableIncrementalDiff routingTableIncrementalDiff = new RoutingTableIncrementalDiff(diffs); + + RemoteRoutingTableDiff remoteDiffForUpload = new RemoteRoutingTableDiff( + routingTableIncrementalDiff, + clusterUUID, + compressor, + STATE_TERM, + STATE_VERSION + ); + assertThat(remoteDiffForUpload.getBlobFileName(), nullValue()); + + RemoteRoutingTableDiff remoteDiffForDownload = new RemoteRoutingTableDiff(TEST_BLOB_NAME, clusterUUID, compressor); + 
assertThat(remoteDiffForDownload.getBlobFileName(), is(TEST_BLOB_FILE_NAME)); + } + + public void testBlobPathParameters() { + Map> diffs = new HashMap<>(); + String indexName = randomAlphaOfLength(randomIntBetween(1, 50)); + int numberOfShards = randomIntBetween(1, 10); + int numberOfReplicas = randomIntBetween(1, 10); + + IndexMetadata indexMetadata = IndexMetadata.builder(indexName) + .settings(settings(Version.CURRENT)) + .numberOfShards(numberOfShards) + .numberOfReplicas(numberOfReplicas) + .build(); + + IndexRoutingTable indexRoutingTable = IndexRoutingTable.builder(indexMetadata.getIndex()).initializeAsNew(indexMetadata).build(); + + diffs.put(indexName, indexRoutingTable.diff(indexRoutingTable)); + RoutingTableIncrementalDiff routingTableIncrementalDiff = new RoutingTableIncrementalDiff(diffs); + + RemoteRoutingTableDiff remoteDiffForUpload = new RemoteRoutingTableDiff( + routingTableIncrementalDiff, + clusterUUID, + compressor, + STATE_TERM, + STATE_VERSION + ); + assertThat(remoteDiffForUpload.getBlobFileName(), nullValue()); + + BlobPathParameters params = remoteDiffForUpload.getBlobPathParameters(); + assertThat(params.getPathTokens(), is(List.of(ROUTING_TABLE_DIFF_PATH_TOKEN))); + String expectedPrefix = ROUTING_TABLE_DIFF_METADATA_PREFIX; + assertThat(params.getFilePrefix(), is(expectedPrefix)); + } + + public void testGenerateBlobFileName() { + Map> diffs = new HashMap<>(); + String indexName = randomAlphaOfLength(randomIntBetween(1, 50)); + int numberOfShards = randomIntBetween(1, 10); + int numberOfReplicas = randomIntBetween(1, 10); + + IndexMetadata indexMetadata = IndexMetadata.builder(indexName) + .settings(settings(Version.CURRENT)) + .numberOfShards(numberOfShards) + .numberOfReplicas(numberOfReplicas) + .build(); + + IndexRoutingTable indexRoutingTable = IndexRoutingTable.builder(indexMetadata.getIndex()).initializeAsNew(indexMetadata).build(); + + diffs.put(indexName, indexRoutingTable.diff(indexRoutingTable)); + RoutingTableIncrementalDiff routingTableIncrementalDiff = new RoutingTableIncrementalDiff(diffs); + + RemoteRoutingTableDiff remoteDiffForUpload = new RemoteRoutingTableDiff( + routingTableIncrementalDiff, + clusterUUID, + compressor, + STATE_TERM, + STATE_VERSION + ); + + String blobFileName = remoteDiffForUpload.generateBlobFileName(); + String[] nameTokens = blobFileName.split("__"); + assertEquals(ROUTING_TABLE_DIFF_METADATA_PREFIX, nameTokens[0]); + assertEquals(RemoteStoreUtils.invertLong(STATE_TERM), nameTokens[1]); + assertEquals(RemoteStoreUtils.invertLong(STATE_VERSION), nameTokens[2]); + assertThat(RemoteStoreUtils.invertLong(nameTokens[3]), lessThanOrEqualTo(System.currentTimeMillis())); + } + + public void testGetUploadedMetadata() throws IOException { + Map> diffs = new HashMap<>(); + String indexName = randomAlphaOfLength(randomIntBetween(1, 50)); + int numberOfShards = randomIntBetween(1, 10); + int numberOfReplicas = randomIntBetween(1, 10); + + IndexMetadata indexMetadata = IndexMetadata.builder(indexName) + .settings(settings(Version.CURRENT)) + .numberOfShards(numberOfShards) + .numberOfReplicas(numberOfReplicas) + .build(); + + IndexRoutingTable indexRoutingTable = IndexRoutingTable.builder(indexMetadata.getIndex()).initializeAsNew(indexMetadata).build(); + + diffs.put(indexName, indexRoutingTable.diff(indexRoutingTable)); + RoutingTableIncrementalDiff routingTableIncrementalDiff = new RoutingTableIncrementalDiff(diffs); + + RemoteRoutingTableDiff remoteDiffForUpload = new RemoteRoutingTableDiff( + routingTableIncrementalDiff, + 
clusterUUID, + compressor, + STATE_TERM, + STATE_VERSION + ); + + remoteDiffForUpload.setFullBlobName(new BlobPath().add(TEST_BLOB_PATH)); + ClusterMetadataManifest.UploadedMetadata uploadedMetadataAttribute = remoteDiffForUpload.getUploadedMetadata(); + assertEquals(ROUTING_TABLE_DIFF_FILE, uploadedMetadataAttribute.getComponent()); + } + + public void testStreamOperations() throws IOException { + String indexName = randomAlphaOfLength(randomIntBetween(1, 50)); + int numberOfShards = randomIntBetween(1, 10); + int numberOfReplicas = randomIntBetween(1, 10); + + Metadata metadata = Metadata.builder() + .put( + IndexMetadata.builder(indexName) + .settings(settings(Version.CURRENT)) + .numberOfShards(numberOfShards) + .numberOfReplicas(numberOfReplicas) + ) + .build(); + + RoutingTable initialRoutingTable = RoutingTable.builder().addAsNew(metadata.index(indexName)).build(); + Map<String, Diff<IndexRoutingTable>> diffs = new HashMap<>(); + + initialRoutingTable.getIndicesRouting().values().forEach(indexRoutingTable -> { + diffs.put(indexName, indexRoutingTable.diff(indexRoutingTable)); + RoutingTableIncrementalDiff routingTableIncrementalDiff = new RoutingTableIncrementalDiff(diffs); + + RemoteRoutingTableDiff remoteDiffForUpload = new RemoteRoutingTableDiff( + routingTableIncrementalDiff, + clusterUUID, + compressor, + STATE_TERM, + STATE_VERSION + ); + + assertThrows(AssertionError.class, remoteDiffForUpload::getUploadedMetadata); + + try (InputStream inputStream = remoteDiffForUpload.serialize()) { + remoteDiffForUpload.setFullBlobName(BlobPath.cleanPath()); + assertThat(inputStream.available(), greaterThan(0)); + + routingTableIncrementalDiff = remoteDiffForUpload.deserialize(inputStream); + assertEquals(remoteDiffForUpload.getDiffs().size(), routingTableIncrementalDiff.getDiffs().size()); + } catch (IOException e) { + throw new RuntimeException(e); + } + }); + } +}

From 5026af61b2b8cd5695c3945508a5fae2f4267de8 Mon Sep 17 00:00:00 2001 From: Pranshu Shukla <55992439+Pranshu-S@users.noreply.github.com> Date: Tue, 23 Jul 2024 20:24:00 +0530 Subject: [PATCH 108/167] Optimized ClusterStatsIndices to precompute shard stats (#14426) * Optimize Cluster Stats Indices to precompute node level stats Signed-off-by: Pranshu Shukla --- CHANGELOG.md | 1 + .../admin/cluster/stats/ClusterStatsIT.java | 119 ++++++-- .../cluster/stats/ClusterStatsIndices.java | 67 +++-- .../stats/ClusterStatsNodeResponse.java | 133 ++++++++- .../cluster/stats/ClusterStatsRequest.java | 17 ++ .../stats/ClusterStatsRequestBuilder.java | 5 + .../stats/TransportClusterStatsAction.java | 10 +- .../admin/cluster/RestClusterStatsAction.java | 1 + .../cluster/stats/ClusterStatsNodesTests.java | 269 ++++++++++++++++++ 9 files changed, 584 insertions(+), 38 deletions(-) diff --git a/CHANGELOG.md b/CHANGELOG.md index c8f185ca2bb3d..6aa3d7a58dda4 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -30,6 +30,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), - Add persian_stem filter (([#14847](https://github.com/opensearch-project/OpenSearch/pull/14847))) - Create listener to refresh search thread resource usage ([#14832](https://github.com/opensearch-project/OpenSearch/pull/14832)) - Add rest, transport layer changes for hot to warm tiering - dedicated setup (([#13980](https://github.com/opensearch-project/OpenSearch/pull/13980)) +- Optimize Cluster Stats Indices to precompute node level stats ([#14426](https://github.com/opensearch-project/OpenSearch/pull/14426)) ### Dependencies - Bump 
`org.gradle.test-retry` from 1.5.8 to 1.5.9 ([#13442](https://github.com/opensearch-project/OpenSearch/pull/13442)) diff --git a/server/src/internalClusterTest/java/org/opensearch/action/admin/cluster/stats/ClusterStatsIT.java b/server/src/internalClusterTest/java/org/opensearch/action/admin/cluster/stats/ClusterStatsIT.java index 085a32593063a..f23cdbb50b37a 100644 --- a/server/src/internalClusterTest/java/org/opensearch/action/admin/cluster/stats/ClusterStatsIT.java +++ b/server/src/internalClusterTest/java/org/opensearch/action/admin/cluster/stats/ClusterStatsIT.java @@ -88,7 +88,11 @@ public void testNodeCounts() { Map expectedCounts = getExpectedCounts(1, 1, 1, 1, 1, 0, 0); int numNodes = randomIntBetween(1, 5); - ClusterStatsResponse response = client().admin().cluster().prepareClusterStats().get(); + ClusterStatsResponse response = client().admin() + .cluster() + .prepareClusterStats() + .useAggregatedNodeLevelResponses(randomBoolean()) + .get(); assertCounts(response.getNodesStats().getCounts(), total, expectedCounts); for (int i = 0; i < numNodes; i++) { @@ -153,7 +157,11 @@ public void testNodeCountsWithDeprecatedMasterRole() throws ExecutionException, Map expectedCounts = getExpectedCounts(0, 1, 1, 0, 0, 0, 0); Client client = client(); - ClusterStatsResponse response = client.admin().cluster().prepareClusterStats().get(); + ClusterStatsResponse response = client.admin() + .cluster() + .prepareClusterStats() + .useAggregatedNodeLevelResponses(randomBoolean()) + .get(); assertCounts(response.getNodesStats().getCounts(), total, expectedCounts); Set expectedRoles = Set.of(DiscoveryNodeRole.MASTER_ROLE.roleName()); @@ -176,15 +184,60 @@ private void assertShardStats(ClusterStatsIndices.ShardStats stats, int indices, assertThat(stats.getReplication(), Matchers.equalTo(replicationFactor)); } - public void testIndicesShardStats() throws ExecutionException, InterruptedException { + public void testIndicesShardStatsWithoutNodeLevelAggregations() { + internalCluster().startNode(); + ensureGreen(); + ClusterStatsResponse response = client().admin().cluster().prepareClusterStats().useAggregatedNodeLevelResponses(false).get(); + assertThat(response.getStatus(), Matchers.equalTo(ClusterHealthStatus.GREEN)); + + prepareCreate("test1").setSettings(Settings.builder().put("number_of_shards", 2).put("number_of_replicas", 1)).get(); + + response = client().admin().cluster().prepareClusterStats().useAggregatedNodeLevelResponses(false).get(); + assertThat(response.getStatus(), Matchers.equalTo(ClusterHealthStatus.YELLOW)); + assertThat(response.indicesStats.getDocs().getCount(), Matchers.equalTo(0L)); + assertThat(response.indicesStats.getIndexCount(), Matchers.equalTo(1)); + assertShardStats(response.getIndicesStats().getShards(), 1, 2, 2, 0.0); + + // add another node, replicas should get assigned + internalCluster().startNode(); + ensureGreen(); + index("test1", "type", "1", "f", "f"); + refresh(); // make the doc visible + response = client().admin().cluster().prepareClusterStats().useAggregatedNodeLevelResponses(false).get(); + assertThat(response.getStatus(), Matchers.equalTo(ClusterHealthStatus.GREEN)); + assertThat(response.indicesStats.getDocs().getCount(), Matchers.equalTo(1L)); + assertShardStats(response.getIndicesStats().getShards(), 1, 4, 2, 1.0); + + prepareCreate("test2").setSettings(Settings.builder().put("number_of_shards", 3).put("number_of_replicas", 0)).get(); + ensureGreen(); + response = 
client().admin().cluster().prepareClusterStats().useAggregatedNodeLevelResponses(false).get(); + assertThat(response.getStatus(), Matchers.equalTo(ClusterHealthStatus.GREEN)); + assertThat(response.indicesStats.getIndexCount(), Matchers.equalTo(2)); + assertShardStats(response.getIndicesStats().getShards(), 2, 7, 5, 2.0 / 5); + + assertThat(response.getIndicesStats().getShards().getAvgIndexPrimaryShards(), Matchers.equalTo(2.5)); + assertThat(response.getIndicesStats().getShards().getMinIndexPrimaryShards(), Matchers.equalTo(2)); + assertThat(response.getIndicesStats().getShards().getMaxIndexPrimaryShards(), Matchers.equalTo(3)); + + assertThat(response.getIndicesStats().getShards().getAvgIndexShards(), Matchers.equalTo(3.5)); + assertThat(response.getIndicesStats().getShards().getMinIndexShards(), Matchers.equalTo(3)); + assertThat(response.getIndicesStats().getShards().getMaxIndexShards(), Matchers.equalTo(4)); + + assertThat(response.getIndicesStats().getShards().getAvgIndexReplication(), Matchers.equalTo(0.5)); + assertThat(response.getIndicesStats().getShards().getMinIndexReplication(), Matchers.equalTo(0.0)); + assertThat(response.getIndicesStats().getShards().getMaxIndexReplication(), Matchers.equalTo(1.0)); + + } + + public void testIndicesShardStatsWithNodeLevelAggregations() { internalCluster().startNode(); ensureGreen(); - ClusterStatsResponse response = client().admin().cluster().prepareClusterStats().get(); + ClusterStatsResponse response = client().admin().cluster().prepareClusterStats().useAggregatedNodeLevelResponses(true).get(); assertThat(response.getStatus(), Matchers.equalTo(ClusterHealthStatus.GREEN)); prepareCreate("test1").setSettings(Settings.builder().put("number_of_shards", 2).put("number_of_replicas", 1)).get(); - response = client().admin().cluster().prepareClusterStats().get(); + response = client().admin().cluster().prepareClusterStats().useAggregatedNodeLevelResponses(true).get(); assertThat(response.getStatus(), Matchers.equalTo(ClusterHealthStatus.YELLOW)); assertThat(response.indicesStats.getDocs().getCount(), Matchers.equalTo(0L)); assertThat(response.indicesStats.getIndexCount(), Matchers.equalTo(1)); @@ -195,14 +248,14 @@ public void testIndicesShardStats() throws ExecutionException, InterruptedExcept ensureGreen(); index("test1", "type", "1", "f", "f"); refresh(); // make the doc visible - response = client().admin().cluster().prepareClusterStats().get(); + response = client().admin().cluster().prepareClusterStats().useAggregatedNodeLevelResponses(true).get(); assertThat(response.getStatus(), Matchers.equalTo(ClusterHealthStatus.GREEN)); assertThat(response.indicesStats.getDocs().getCount(), Matchers.equalTo(1L)); assertShardStats(response.getIndicesStats().getShards(), 1, 4, 2, 1.0); prepareCreate("test2").setSettings(Settings.builder().put("number_of_shards", 3).put("number_of_replicas", 0)).get(); ensureGreen(); - response = client().admin().cluster().prepareClusterStats().get(); + response = client().admin().cluster().prepareClusterStats().useAggregatedNodeLevelResponses(true).get(); assertThat(response.getStatus(), Matchers.equalTo(ClusterHealthStatus.GREEN)); assertThat(response.indicesStats.getIndexCount(), Matchers.equalTo(2)); assertShardStats(response.getIndicesStats().getShards(), 2, 7, 5, 2.0 / 5); @@ -225,7 +278,11 @@ public void testValuesSmokeScreen() throws IOException, ExecutionException, Inte internalCluster().startNodes(randomIntBetween(1, 3)); index("test1", "type", "1", "f", "f"); - ClusterStatsResponse response = 
client().admin().cluster().prepareClusterStats().get(); + ClusterStatsResponse response = client().admin() + .cluster() + .prepareClusterStats() + .useAggregatedNodeLevelResponses(randomBoolean()) + .get(); String msg = response.toString(); assertThat(msg, response.getTimestamp(), Matchers.greaterThan(946681200000L)); // 1 Jan 2000 assertThat(msg, response.indicesStats.getStore().getSizeInBytes(), Matchers.greaterThan(0L)); @@ -265,13 +322,21 @@ public void testAllocatedProcessors() throws Exception { internalCluster().startNode(Settings.builder().put(OpenSearchExecutors.NODE_PROCESSORS_SETTING.getKey(), 7).build()); waitForNodes(1); - ClusterStatsResponse response = client().admin().cluster().prepareClusterStats().get(); + ClusterStatsResponse response = client().admin() + .cluster() + .prepareClusterStats() + .useAggregatedNodeLevelResponses(randomBoolean()) + .get(); assertThat(response.getNodesStats().getOs().getAllocatedProcessors(), equalTo(7)); } public void testClusterStatusWhenStateNotRecovered() throws Exception { internalCluster().startClusterManagerOnlyNode(Settings.builder().put("gateway.recover_after_nodes", 2).build()); - ClusterStatsResponse response = client().admin().cluster().prepareClusterStats().get(); + ClusterStatsResponse response = client().admin() + .cluster() + .prepareClusterStats() + .useAggregatedNodeLevelResponses(randomBoolean()) + .get(); assertThat(response.getStatus(), equalTo(ClusterHealthStatus.RED)); if (randomBoolean()) { @@ -281,14 +346,18 @@ public void testClusterStatusWhenStateNotRecovered() throws Exception { } // wait for the cluster status to settle ensureGreen(); - response = client().admin().cluster().prepareClusterStats().get(); + response = client().admin().cluster().prepareClusterStats().useAggregatedNodeLevelResponses(randomBoolean()).get(); assertThat(response.getStatus(), equalTo(ClusterHealthStatus.GREEN)); } public void testFieldTypes() { internalCluster().startNode(); ensureGreen(); - ClusterStatsResponse response = client().admin().cluster().prepareClusterStats().get(); + ClusterStatsResponse response = client().admin() + .cluster() + .prepareClusterStats() + .useAggregatedNodeLevelResponses(randomBoolean()) + .get(); assertThat(response.getStatus(), Matchers.equalTo(ClusterHealthStatus.GREEN)); assertTrue(response.getIndicesStats().getMappings().getFieldTypeStats().isEmpty()); @@ -301,7 +370,7 @@ public void testFieldTypes() { + "\"eggplant\":{\"type\":\"integer\"}}}}}" ) .get(); - response = client().admin().cluster().prepareClusterStats().get(); + response = client().admin().cluster().prepareClusterStats().useAggregatedNodeLevelResponses(randomBoolean()).get(); assertThat(response.getIndicesStats().getMappings().getFieldTypeStats().size(), equalTo(3)); Set stats = response.getIndicesStats().getMappings().getFieldTypeStats(); for (IndexFeatureStats stat : stats) { @@ -329,7 +398,11 @@ public void testNodeRolesWithMasterLegacySettings() throws ExecutionException, I Map expectedCounts = getExpectedCounts(0, 1, 1, 0, 1, 0, 0); Client client = client(); - ClusterStatsResponse clusterStatsResponse = client.admin().cluster().prepareClusterStats().get(); + ClusterStatsResponse clusterStatsResponse = client.admin() + .cluster() + .prepareClusterStats() + .useAggregatedNodeLevelResponses(randomBoolean()) + .get(); assertCounts(clusterStatsResponse.getNodesStats().getCounts(), total, expectedCounts); Set expectedRoles = Set.of( @@ -359,7 +432,11 @@ public void testNodeRolesWithClusterManagerRole() throws ExecutionException, Int Map 
expectedCounts = getExpectedCounts(0, 1, 1, 0, 1, 0, 0); Client client = client(); - ClusterStatsResponse clusterStatsResponse = client.admin().cluster().prepareClusterStats().get(); + ClusterStatsResponse clusterStatsResponse = client.admin() + .cluster() + .prepareClusterStats() + .useAggregatedNodeLevelResponses(randomBoolean()) + .get(); assertCounts(clusterStatsResponse.getNodesStats().getCounts(), total, expectedCounts); Set expectedRoles = Set.of( @@ -383,7 +460,11 @@ public void testNodeRolesWithSeedDataNodeLegacySettings() throws ExecutionExcept Map expectedRoleCounts = getExpectedCounts(1, 1, 1, 0, 1, 0, 0); Client client = client(); - ClusterStatsResponse clusterStatsResponse = client.admin().cluster().prepareClusterStats().get(); + ClusterStatsResponse clusterStatsResponse = client.admin() + .cluster() + .prepareClusterStats() + .useAggregatedNodeLevelResponses(randomBoolean()) + .get(); assertCounts(clusterStatsResponse.getNodesStats().getCounts(), total, expectedRoleCounts); Set expectedRoles = Set.of( @@ -410,7 +491,11 @@ public void testNodeRolesWithDataNodeLegacySettings() throws ExecutionException, Map expectedRoleCounts = getExpectedCounts(1, 1, 1, 0, 1, 0, 0); Client client = client(); - ClusterStatsResponse clusterStatsResponse = client.admin().cluster().prepareClusterStats().get(); + ClusterStatsResponse clusterStatsResponse = client.admin() + .cluster() + .prepareClusterStats() + .useAggregatedNodeLevelResponses(randomBoolean()) + .get(); assertCounts(clusterStatsResponse.getNodesStats().getCounts(), total, expectedRoleCounts); Set> expectedNodesRoles = Set.of( diff --git a/server/src/main/java/org/opensearch/action/admin/cluster/stats/ClusterStatsIndices.java b/server/src/main/java/org/opensearch/action/admin/cluster/stats/ClusterStatsIndices.java index 26e554f44fca1..03a73f45ffe81 100644 --- a/server/src/main/java/org/opensearch/action/admin/cluster/stats/ClusterStatsIndices.java +++ b/server/src/main/java/org/opensearch/action/admin/cluster/stats/ClusterStatsIndices.java @@ -78,26 +78,49 @@ public ClusterStatsIndices(List nodeResponses, Mapping this.segments = new SegmentsStats(); for (ClusterStatsNodeResponse r : nodeResponses) { - for (org.opensearch.action.admin.indices.stats.ShardStats shardStats : r.shardsStats()) { - ShardStats indexShardStats = countsPerIndex.get(shardStats.getShardRouting().getIndexName()); - if (indexShardStats == null) { - indexShardStats = new ShardStats(); - countsPerIndex.put(shardStats.getShardRouting().getIndexName(), indexShardStats); + // Aggregated response from the node + if (r.getAggregatedNodeLevelStats() != null) { + + for (Map.Entry entry : r.getAggregatedNodeLevelStats().indexStatsMap + .entrySet()) { + ShardStats indexShardStats = countsPerIndex.get(entry.getKey()); + if (indexShardStats == null) { + indexShardStats = new ShardStats(entry.getValue()); + countsPerIndex.put(entry.getKey(), indexShardStats); + } else { + indexShardStats.addStatsFrom(entry.getValue()); + } } - indexShardStats.total++; - - CommonStats shardCommonStats = shardStats.getStats(); - - if (shardStats.getShardRouting().primary()) { - indexShardStats.primaries++; - docs.add(shardCommonStats.docs); + docs.add(r.getAggregatedNodeLevelStats().commonStats.docs); + store.add(r.getAggregatedNodeLevelStats().commonStats.store); + fieldData.add(r.getAggregatedNodeLevelStats().commonStats.fieldData); + queryCache.add(r.getAggregatedNodeLevelStats().commonStats.queryCache); + completion.add(r.getAggregatedNodeLevelStats().commonStats.completion); + 
segments.add(r.getAggregatedNodeLevelStats().commonStats.segments); + } else { + // Default response from the node + for (org.opensearch.action.admin.indices.stats.ShardStats shardStats : r.shardsStats()) { + ShardStats indexShardStats = countsPerIndex.get(shardStats.getShardRouting().getIndexName()); + if (indexShardStats == null) { + indexShardStats = new ShardStats(); + countsPerIndex.put(shardStats.getShardRouting().getIndexName(), indexShardStats); + } + + indexShardStats.total++; + + CommonStats shardCommonStats = shardStats.getStats(); + + if (shardStats.getShardRouting().primary()) { + indexShardStats.primaries++; + docs.add(shardCommonStats.docs); + } + store.add(shardCommonStats.store); + fieldData.add(shardCommonStats.fieldData); + queryCache.add(shardCommonStats.queryCache); + completion.add(shardCommonStats.completion); + segments.add(shardCommonStats.segments); } - store.add(shardCommonStats.store); - fieldData.add(shardCommonStats.fieldData); - queryCache.add(shardCommonStats.queryCache); - completion.add(shardCommonStats.completion); - segments.add(shardCommonStats.segments); } } @@ -202,6 +225,11 @@ public static class ShardStats implements ToXContentFragment { public ShardStats() {} + public ShardStats(ClusterStatsNodeResponse.AggregatedIndexStats aggregatedIndexStats) { + this.total = aggregatedIndexStats.total; + this.primaries = aggregatedIndexStats.primaries; + } + /** * number of indices in the cluster */ @@ -329,6 +357,11 @@ public void addIndexShardCount(ShardStats indexShardCount) { } } + public void addStatsFrom(ClusterStatsNodeResponse.AggregatedIndexStats incomingStats) { + this.total += incomingStats.total; + this.primaries += incomingStats.primaries; + } + /** * Inner Fields used for creating XContent and parsing * diff --git a/server/src/main/java/org/opensearch/action/admin/cluster/stats/ClusterStatsNodeResponse.java b/server/src/main/java/org/opensearch/action/admin/cluster/stats/ClusterStatsNodeResponse.java index 1b25bf84356d6..133cf68f5f8c9 100644 --- a/server/src/main/java/org/opensearch/action/admin/cluster/stats/ClusterStatsNodeResponse.java +++ b/server/src/main/java/org/opensearch/action/admin/cluster/stats/ClusterStatsNodeResponse.java @@ -32,17 +32,29 @@ package org.opensearch.action.admin.cluster.stats; +import org.opensearch.Version; import org.opensearch.action.admin.cluster.node.info.NodeInfo; import org.opensearch.action.admin.cluster.node.stats.NodeStats; +import org.opensearch.action.admin.indices.stats.CommonStats; import org.opensearch.action.admin.indices.stats.ShardStats; import org.opensearch.action.support.nodes.BaseNodeResponse; import org.opensearch.cluster.health.ClusterHealthStatus; import org.opensearch.cluster.node.DiscoveryNode; import org.opensearch.common.Nullable; +import org.opensearch.common.annotation.PublicApi; import org.opensearch.core.common.io.stream.StreamInput; import org.opensearch.core.common.io.stream.StreamOutput; +import org.opensearch.core.common.io.stream.Writeable; +import org.opensearch.index.cache.query.QueryCacheStats; +import org.opensearch.index.engine.SegmentsStats; +import org.opensearch.index.fielddata.FieldDataStats; +import org.opensearch.index.shard.DocsStats; +import org.opensearch.index.store.StoreStats; +import org.opensearch.search.suggest.completion.CompletionStats; import java.io.IOException; +import java.util.HashMap; +import java.util.Map; /** * Transport action for obtaining cluster stats from node level @@ -55,6 +67,7 @@ public class ClusterStatsNodeResponse extends BaseNodeResponse { 
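// Editorial note: the additions below let each node ship either the raw per-shard stats or a
// pre-aggregated summary, so the coordinating node no longer folds every ShardStats itself.
// A consumer branches on whichever form arrived, roughly as ClusterStatsIndices does above
// (a sketch of the consumption pattern, not an authoritative implementation):
//
//     if (nodeResponse.getAggregatedNodeLevelStats() != null) {
//         // merge one CommonStats plus per-index primary/total counts per node
//     } else {
//         // legacy path: iterate nodeResponse.shardsStats() shard by shard
//     }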
private final NodeStats nodeStats; private final ShardStats[] shardsStats; private ClusterHealthStatus clusterStatus; + private AggregatedNodeLevelStats aggregatedNodeLevelStats; public ClusterStatsNodeResponse(StreamInput in) throws IOException { super(in); @@ -64,7 +77,12 @@ public ClusterStatsNodeResponse(StreamInput in) throws IOException { } this.nodeInfo = new NodeInfo(in); this.nodeStats = new NodeStats(in); - shardsStats = in.readArray(ShardStats::new, ShardStats[]::new); + if (in.getVersion().onOrAfter(Version.V_3_0_0)) { + this.shardsStats = in.readOptionalArray(ShardStats::new, ShardStats[]::new); + this.aggregatedNodeLevelStats = in.readOptionalWriteable(AggregatedNodeLevelStats::new); + } else { + this.shardsStats = in.readArray(ShardStats::new, ShardStats[]::new); + } } public ClusterStatsNodeResponse( @@ -81,6 +99,24 @@ public ClusterStatsNodeResponse( this.clusterStatus = clusterStatus; } + public ClusterStatsNodeResponse( + DiscoveryNode node, + @Nullable ClusterHealthStatus clusterStatus, + NodeInfo nodeInfo, + NodeStats nodeStats, + ShardStats[] shardsStats, + boolean useAggregatedNodeLevelResponses + ) { + super(node); + this.nodeInfo = nodeInfo; + this.nodeStats = nodeStats; + if (useAggregatedNodeLevelResponses) { + this.aggregatedNodeLevelStats = new AggregatedNodeLevelStats(node, shardsStats); + } + this.shardsStats = shardsStats; + this.clusterStatus = clusterStatus; + } + public NodeInfo nodeInfo() { return this.nodeInfo; } @@ -101,6 +137,10 @@ public ShardStats[] shardsStats() { return this.shardsStats; } + public AggregatedNodeLevelStats getAggregatedNodeLevelStats() { + return aggregatedNodeLevelStats; + } + public static ClusterStatsNodeResponse readNodeResponse(StreamInput in) throws IOException { return new ClusterStatsNodeResponse(in); } @@ -116,6 +156,95 @@ public void writeTo(StreamOutput out) throws IOException { } nodeInfo.writeTo(out); nodeStats.writeTo(out); - out.writeArray(shardsStats); + if (out.getVersion().onOrAfter(Version.V_3_0_0)) { + if (aggregatedNodeLevelStats != null) { + out.writeOptionalArray(null); + out.writeOptionalWriteable(aggregatedNodeLevelStats); + } else { + out.writeOptionalArray(shardsStats); + out.writeOptionalWriteable(null); + } + } else { + out.writeArray(shardsStats); + } + } + + /** + * Node level statistics used for ClusterStatsIndices for _cluster/stats call. 
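+ * Holds a single CommonStats accumulator (docs, store, fielddata, query cache, completion,
+ * segments) plus a per-index map of total/primary shard counts, computed once on the data
+ * node from its local shard stats so the coordinating node receives an already-reduced payload.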
+ */ + public class AggregatedNodeLevelStats extends BaseNodeResponse { + + CommonStats commonStats; + Map indexStatsMap; + + protected AggregatedNodeLevelStats(StreamInput in) throws IOException { + super(in); + commonStats = in.readOptionalWriteable(CommonStats::new); + indexStatsMap = in.readMap(StreamInput::readString, AggregatedIndexStats::new); + } + + protected AggregatedNodeLevelStats(DiscoveryNode node, ShardStats[] indexShardsStats) { + super(node); + this.commonStats = new CommonStats(); + this.commonStats.docs = new DocsStats(); + this.commonStats.store = new StoreStats(); + this.commonStats.fieldData = new FieldDataStats(); + this.commonStats.queryCache = new QueryCacheStats(); + this.commonStats.completion = new CompletionStats(); + this.commonStats.segments = new SegmentsStats(); + this.indexStatsMap = new HashMap<>(); + + // Index Level Stats + for (org.opensearch.action.admin.indices.stats.ShardStats shardStats : indexShardsStats) { + AggregatedIndexStats indexShardStats = this.indexStatsMap.get(shardStats.getShardRouting().getIndexName()); + if (indexShardStats == null) { + indexShardStats = new AggregatedIndexStats(); + this.indexStatsMap.put(shardStats.getShardRouting().getIndexName(), indexShardStats); + } + + indexShardStats.total++; + + CommonStats shardCommonStats = shardStats.getStats(); + + if (shardStats.getShardRouting().primary()) { + indexShardStats.primaries++; + this.commonStats.docs.add(shardCommonStats.docs); + } + this.commonStats.store.add(shardCommonStats.store); + this.commonStats.fieldData.add(shardCommonStats.fieldData); + this.commonStats.queryCache.add(shardCommonStats.queryCache); + this.commonStats.completion.add(shardCommonStats.completion); + this.commonStats.segments.add(shardCommonStats.segments); + } + } + + @Override + public void writeTo(StreamOutput out) throws IOException { + super.writeTo(out); + out.writeOptionalWriteable(commonStats); + out.writeMap(indexStatsMap, StreamOutput::writeString, (stream, stats) -> stats.writeTo(stream)); + } + } + + /** + * Node level statistics used for ClusterStatsIndices for _cluster/stats call. 
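+ * Serialized as two variable-length ints (total, then primaries); the enclosing node response
+ * only emits this aggregated form for transport versions on or after Version.V_3_0_0 and falls
+ * back to the legacy ShardStats array otherwise, as the writeTo above shows.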
+ */ + @PublicApi(since = "2.16.0") + public static class AggregatedIndexStats implements Writeable { + public int total = 0; + public int primaries = 0; + + public AggregatedIndexStats(StreamInput in) throws IOException { + total = in.readVInt(); + primaries = in.readVInt(); + } + + public AggregatedIndexStats() {} + + @Override + public void writeTo(StreamOutput out) throws IOException { + out.writeVInt(total); + out.writeVInt(primaries); + } } } diff --git a/server/src/main/java/org/opensearch/action/admin/cluster/stats/ClusterStatsRequest.java b/server/src/main/java/org/opensearch/action/admin/cluster/stats/ClusterStatsRequest.java index 6a99451c596ed..fdeb82a3466f2 100644 --- a/server/src/main/java/org/opensearch/action/admin/cluster/stats/ClusterStatsRequest.java +++ b/server/src/main/java/org/opensearch/action/admin/cluster/stats/ClusterStatsRequest.java @@ -32,6 +32,7 @@ package org.opensearch.action.admin.cluster.stats; +import org.opensearch.Version; import org.opensearch.action.support.nodes.BaseNodesRequest; import org.opensearch.common.annotation.PublicApi; import org.opensearch.core.common.io.stream.StreamInput; @@ -49,8 +50,13 @@ public class ClusterStatsRequest extends BaseNodesRequest { public ClusterStatsRequest(StreamInput in) throws IOException { super(in); + if (in.getVersion().onOrAfter(Version.V_3_0_0)) { + useAggregatedNodeLevelResponses = in.readOptionalBoolean(); + } } + private Boolean useAggregatedNodeLevelResponses = false; + /** * Get stats from nodes based on the nodes ids specified. If none are passed, stats * based on all nodes will be returned. @@ -59,9 +65,20 @@ public ClusterStatsRequest(String... nodesIds) { super(nodesIds); } + public boolean useAggregatedNodeLevelResponses() { + return useAggregatedNodeLevelResponses; + } + + public void useAggregatedNodeLevelResponses(boolean useAggregatedNodeLevelResponses) { + this.useAggregatedNodeLevelResponses = useAggregatedNodeLevelResponses; + } + @Override public void writeTo(StreamOutput out) throws IOException { super.writeTo(out); + if (out.getVersion().onOrAfter(Version.V_3_0_0)) { + out.writeOptionalBoolean(useAggregatedNodeLevelResponses); + } } } diff --git a/server/src/main/java/org/opensearch/action/admin/cluster/stats/ClusterStatsRequestBuilder.java b/server/src/main/java/org/opensearch/action/admin/cluster/stats/ClusterStatsRequestBuilder.java index 0dcb03dc26d0e..4d0932bd3927d 100644 --- a/server/src/main/java/org/opensearch/action/admin/cluster/stats/ClusterStatsRequestBuilder.java +++ b/server/src/main/java/org/opensearch/action/admin/cluster/stats/ClusterStatsRequestBuilder.java @@ -50,4 +50,9 @@ public class ClusterStatsRequestBuilder extends NodesOperationRequestBuilder< public ClusterStatsRequestBuilder(OpenSearchClient client, ClusterStatsAction action) { super(client, action, new ClusterStatsRequest()); } + + public final ClusterStatsRequestBuilder useAggregatedNodeLevelResponses(boolean useAggregatedNodeLevelResponses) { + request.useAggregatedNodeLevelResponses(useAggregatedNodeLevelResponses); + return this; + } } diff --git a/server/src/main/java/org/opensearch/action/admin/cluster/stats/TransportClusterStatsAction.java b/server/src/main/java/org/opensearch/action/admin/cluster/stats/TransportClusterStatsAction.java index c7d03596a2a36..be7d41a7ba75e 100644 --- a/server/src/main/java/org/opensearch/action/admin/cluster/stats/TransportClusterStatsAction.java +++ b/server/src/main/java/org/opensearch/action/admin/cluster/stats/TransportClusterStatsAction.java @@ -212,8 +212,14 @@ 
protected ClusterStatsNodeResponse nodeOperation(ClusterStatsNodeRequest nodeReq clusterStatus = new ClusterStateHealth(clusterService.state()).getStatus(); } - return new ClusterStatsNodeResponse(nodeInfo.getNode(), clusterStatus, nodeInfo, nodeStats, shardsStats.toArray(new ShardStats[0])); - + return new ClusterStatsNodeResponse( + nodeInfo.getNode(), + clusterStatus, + nodeInfo, + nodeStats, + shardsStats.toArray(new ShardStats[0]), + nodeRequest.request.useAggregatedNodeLevelResponses() + ); } /** diff --git a/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestClusterStatsAction.java b/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestClusterStatsAction.java index 913db3c81e951..d4426a004af8e 100644 --- a/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestClusterStatsAction.java +++ b/server/src/main/java/org/opensearch/rest/action/admin/cluster/RestClusterStatsAction.java @@ -67,6 +67,7 @@ public RestChannelConsumer prepareRequest(final RestRequest request, final NodeC ClusterStatsRequest clusterStatsRequest = new ClusterStatsRequest().nodesIds(request.paramAsStringArray("nodeId", null)); clusterStatsRequest.timeout(request.param("timeout")); clusterStatsRequest.setIncludeDiscoveryNodes(false); + clusterStatsRequest.useAggregatedNodeLevelResponses(true); return channel -> client.admin().cluster().clusterStats(clusterStatsRequest, new NodesResponseRestListener<>(channel)); } diff --git a/server/src/test/java/org/opensearch/action/admin/cluster/stats/ClusterStatsNodesTests.java b/server/src/test/java/org/opensearch/action/admin/cluster/stats/ClusterStatsNodesTests.java index 40a30342b86b9..1c4a77905d73f 100644 --- a/server/src/test/java/org/opensearch/action/admin/cluster/stats/ClusterStatsNodesTests.java +++ b/server/src/test/java/org/opensearch/action/admin/cluster/stats/ClusterStatsNodesTests.java @@ -32,16 +32,38 @@ package org.opensearch.action.admin.cluster.stats; +import org.opensearch.Build; +import org.opensearch.Version; import org.opensearch.action.admin.cluster.node.info.NodeInfo; import org.opensearch.action.admin.cluster.node.stats.NodeStats; import org.opensearch.action.admin.cluster.node.stats.NodeStatsTests; +import org.opensearch.action.admin.indices.stats.CommonStats; +import org.opensearch.action.admin.indices.stats.CommonStatsFlags; +import org.opensearch.action.admin.indices.stats.ShardStats; import org.opensearch.cluster.node.DiscoveryNode; +import org.opensearch.cluster.routing.ShardRouting; +import org.opensearch.cluster.routing.ShardRoutingState; +import org.opensearch.cluster.routing.TestShardRouting; +import org.opensearch.common.io.stream.BytesStreamOutput; import org.opensearch.common.network.NetworkModule; import org.opensearch.common.settings.Settings; +import org.opensearch.core.common.io.stream.StreamInput; +import org.opensearch.core.index.Index; import org.opensearch.core.xcontent.MediaTypeRegistry; +import org.opensearch.index.cache.query.QueryCacheStats; +import org.opensearch.index.engine.SegmentsStats; +import org.opensearch.index.fielddata.FieldDataStats; +import org.opensearch.index.flush.FlushStats; +import org.opensearch.index.shard.DocsStats; +import org.opensearch.index.shard.IndexingStats; +import org.opensearch.index.shard.ShardPath; +import org.opensearch.index.store.StoreStats; +import org.opensearch.search.suggest.completion.CompletionStats; import org.opensearch.test.OpenSearchTestCase; import java.io.IOException; +import java.nio.file.Path; +import java.util.ArrayList; import 
java.util.Arrays;
 import java.util.Collections;
 import java.util.Iterator;
@@ -158,6 +180,253 @@ public void testIngestStats() throws Exception {
         );
     }
 
+    public void testMultiVersionScenarioWithAggregatedNodeLevelStats() {
+        // Assuming the default behavior will be the type of response expected from a node of version prior to version containing
+        // aggregated node level information
+        int numberOfNodes = randomIntBetween(1, 4);
+        Index testIndex = new Index("test-index", "_na_");
+
+        List<ClusterStatsNodeResponse> defaultClusterStatsNodeResponses = new ArrayList<>();
+        List<ClusterStatsNodeResponse> aggregatedNodeLevelClusterStatsNodeResponses = new ArrayList<>();
+
+        for (int i = 0; i < numberOfNodes; i++) {
+            DiscoveryNode node = new DiscoveryNode("node-" + i, buildNewFakeTransportAddress(), Version.CURRENT);
+            CommonStats commonStats = createRandomCommonStats();
+            ShardStats[] shardStats = createshardStats(node, testIndex, commonStats);
+            ClusterStatsNodeResponse customClusterStatsResponse = createClusterStatsNodeResponse(node, shardStats, testIndex, true, false);
+            ClusterStatsNodeResponse customNodeLevelAggregatedClusterStatsResponse = createClusterStatsNodeResponse(
+                node,
+                shardStats,
+                testIndex,
+                false,
+                true
+            );
+            defaultClusterStatsNodeResponses.add(customClusterStatsResponse);
+            aggregatedNodeLevelClusterStatsNodeResponses.add(customNodeLevelAggregatedClusterStatsResponse);
+        }
+
+        ClusterStatsIndices defaultClusterStatsIndices = new ClusterStatsIndices(defaultClusterStatsNodeResponses, null, null);
+        ClusterStatsIndices aggregatedNodeLevelClusterStatsIndices = new ClusterStatsIndices(
+            aggregatedNodeLevelClusterStatsNodeResponses,
+            null,
+            null
+        );
+
+        assertClusterStatsIndicesEqual(defaultClusterStatsIndices, aggregatedNodeLevelClusterStatsIndices);
+    }
+
+    public void assertClusterStatsIndicesEqual(ClusterStatsIndices first, ClusterStatsIndices second) {
+        assertEquals(first.getIndexCount(), second.getIndexCount());
+
+        assertEquals(first.getShards().getIndices(), second.getShards().getIndices());
+        assertEquals(first.getShards().getTotal(), second.getShards().getTotal());
+        assertEquals(first.getShards().getPrimaries(), second.getShards().getPrimaries());
+        assertEquals(first.getShards().getMinIndexShards(), second.getShards().getMinIndexShards());
+        assertEquals(first.getShards().getMinIndexPrimaryShards(), second.getShards().getMinIndexPrimaryShards());
+
+        // assertEquals on doubles is deprecated without an explicit delta, so compare the floating-point stats directly
+        assertTrue(first.getShards().getReplication() == second.getShards().getReplication());
+        assertTrue(first.getShards().getAvgIndexShards() == second.getShards().getAvgIndexShards());
+        assertTrue(first.getShards().getMaxIndexPrimaryShards() == second.getShards().getMaxIndexPrimaryShards());
+        assertTrue(first.getShards().getAvgIndexPrimaryShards() == second.getShards().getAvgIndexPrimaryShards());
+        assertTrue(first.getShards().getMinIndexReplication() == second.getShards().getMinIndexReplication());
+        assertTrue(first.getShards().getAvgIndexReplication() == second.getShards().getAvgIndexReplication());
+        assertTrue(first.getShards().getMaxIndexReplication() == second.getShards().getMaxIndexReplication());
+
+        // Docs stats
+        assertEquals(first.getDocs().getAverageSizeInBytes(), second.getDocs().getAverageSizeInBytes());
+        assertEquals(first.getDocs().getDeleted(), second.getDocs().getDeleted());
+        assertEquals(first.getDocs().getCount(), second.getDocs().getCount());
+        assertEquals(first.getDocs().getTotalSizeInBytes(), second.getDocs().getTotalSizeInBytes());
+
+ // Store Stats + assertEquals(first.getStore().getSizeInBytes(), second.getStore().getSizeInBytes()); + assertEquals(first.getStore().getSize(), second.getStore().getSize()); + assertEquals(first.getStore().getReservedSize(), second.getStore().getReservedSize()); + + // Query Cache + assertEquals(first.getQueryCache().getCacheCount(), second.getQueryCache().getCacheCount()); + assertEquals(first.getQueryCache().getCacheSize(), second.getQueryCache().getCacheSize()); + assertEquals(first.getQueryCache().getEvictions(), second.getQueryCache().getEvictions()); + assertEquals(first.getQueryCache().getHitCount(), second.getQueryCache().getHitCount()); + assertEquals(first.getQueryCache().getTotalCount(), second.getQueryCache().getTotalCount()); + assertEquals(first.getQueryCache().getMissCount(), second.getQueryCache().getMissCount()); + assertEquals(first.getQueryCache().getMemorySize(), second.getQueryCache().getMemorySize()); + assertEquals(first.getQueryCache().getMemorySizeInBytes(), second.getQueryCache().getMemorySizeInBytes()); + + // Completion Stats + assertEquals(first.getCompletion().getSizeInBytes(), second.getCompletion().getSizeInBytes()); + assertEquals(first.getCompletion().getSize(), second.getCompletion().getSize()); + + // Segment Stats + assertEquals(first.getSegments().getBitsetMemory(), second.getSegments().getBitsetMemory()); + assertEquals(first.getSegments().getCount(), second.getSegments().getCount()); + assertEquals(first.getSegments().getBitsetMemoryInBytes(), second.getSegments().getBitsetMemoryInBytes()); + assertEquals(first.getSegments().getFileSizes(), second.getSegments().getFileSizes()); + assertEquals(first.getSegments().getIndexWriterMemoryInBytes(), second.getSegments().getIndexWriterMemoryInBytes()); + assertEquals(first.getSegments().getVersionMapMemory(), second.getSegments().getVersionMapMemory()); + assertEquals(first.getSegments().getVersionMapMemoryInBytes(), second.getSegments().getVersionMapMemoryInBytes()); + } + + public void testNodeIndexShardStatsSuccessfulSerializationDeserialization() throws IOException { + Index testIndex = new Index("test-index", "_na_"); + + DiscoveryNode node = new DiscoveryNode("node", buildNewFakeTransportAddress(), Version.CURRENT); + CommonStats commonStats = createRandomCommonStats(); + ShardStats[] shardStats = createshardStats(node, testIndex, commonStats); + ClusterStatsNodeResponse aggregatedNodeLevelClusterStatsNodeResponse = createClusterStatsNodeResponse( + node, + shardStats, + testIndex, + false, + true + ); + + BytesStreamOutput out = new BytesStreamOutput(); + aggregatedNodeLevelClusterStatsNodeResponse.writeTo(out); + StreamInput in = out.bytes().streamInput(); + + ClusterStatsNodeResponse newClusterStatsNodeRequest = new ClusterStatsNodeResponse(in); + + ClusterStatsIndices beforeSerialization = new ClusterStatsIndices(List.of(aggregatedNodeLevelClusterStatsNodeResponse), null, null); + ClusterStatsIndices afterSerialization = new ClusterStatsIndices(List.of(newClusterStatsNodeRequest), null, null); + + assertClusterStatsIndicesEqual(beforeSerialization, afterSerialization); + + } + + private ClusterStatsNodeResponse createClusterStatsNodeResponse( + DiscoveryNode node, + ShardStats[] shardStats, + Index index, + boolean defaultBehavior, + boolean aggregateNodeLevelStats + ) { + NodeInfo nodeInfo = new NodeInfo( + Version.CURRENT, + Build.CURRENT, + node, + null, + null, + null, + null, + null, + null, + null, + null, + null, + null, + null, + null + ); + + NodeStats nodeStats = new NodeStats( + node, 
+ randomNonNegativeLong(), + null, + null, + null, + null, + null, + null, + null, + null, + null, + null, + null, + null, + null, + null, + null, + null, + null, + null, + null, + null, + null, + null, + null, + null, + null, + null, + null + ); + if (defaultBehavior) { + return new ClusterStatsNodeResponse(node, null, nodeInfo, nodeStats, shardStats); + } else { + return new ClusterStatsNodeResponse(node, null, nodeInfo, nodeStats, shardStats, aggregateNodeLevelStats); + } + + } + + private CommonStats createRandomCommonStats() { + CommonStats commonStats = new CommonStats(CommonStatsFlags.NONE); + commonStats.docs = new DocsStats(randomLongBetween(0, 10000), randomLongBetween(0, 100), randomLongBetween(0, 1000)); + commonStats.store = new StoreStats(randomLongBetween(0, 100), randomLongBetween(0, 1000)); + commonStats.indexing = new IndexingStats(); + commonStats.completion = new CompletionStats(); + commonStats.flush = new FlushStats(randomLongBetween(0, 100), randomLongBetween(0, 100), randomLongBetween(0, 100)); + commonStats.fieldData = new FieldDataStats(randomLongBetween(0, 100), randomLongBetween(0, 100), null); + commonStats.queryCache = new QueryCacheStats( + randomLongBetween(0, 100), + randomLongBetween(0, 100), + randomLongBetween(0, 100), + randomLongBetween(0, 100), + randomLongBetween(0, 100) + ); + commonStats.segments = new SegmentsStats(); + + return commonStats; + } + + private ShardStats[] createshardStats(DiscoveryNode localNode, Index index, CommonStats commonStats) { + List shardStatsList = new ArrayList<>(); + for (int i = 0; i < 2; i++) { + ShardRoutingState shardRoutingState = ShardRoutingState.fromValue((byte) randomIntBetween(2, 3)); + ShardRouting shardRouting = TestShardRouting.newShardRouting( + index.getName(), + i, + localNode.getId(), + randomBoolean(), + shardRoutingState + ); + + Path path = createTempDir().resolve("indices") + .resolve(shardRouting.shardId().getIndex().getUUID()) + .resolve(String.valueOf(shardRouting.shardId().id())); + + ShardStats shardStats = new ShardStats( + shardRouting, + new ShardPath(false, path, path, shardRouting.shardId()), + commonStats, + null, + null, + null + ); + shardStatsList.add(shardStats); + } + + return shardStatsList.toArray(new ShardStats[0]); + } + + private class MockShardStats extends ClusterStatsIndices.ShardStats { + public boolean equals(ClusterStatsIndices.ShardStats shardStats) { + return this.getIndices() == shardStats.getIndices() + && this.getTotal() == shardStats.getTotal() + && this.getPrimaries() == shardStats.getPrimaries() + && this.getReplication() == shardStats.getReplication() + && this.getMaxIndexShards() == shardStats.getMaxIndexShards() + && this.getMinIndexShards() == shardStats.getMinIndexShards() + && this.getAvgIndexShards() == shardStats.getAvgIndexShards() + && this.getMaxIndexPrimaryShards() == shardStats.getMaxIndexPrimaryShards() + && this.getMinIndexPrimaryShards() == shardStats.getMinIndexPrimaryShards() + && this.getAvgIndexPrimaryShards() == shardStats.getAvgIndexPrimaryShards() + && this.getMinIndexReplication() == shardStats.getMinIndexReplication() + && this.getAvgIndexReplication() == shardStats.getAvgIndexReplication() + && this.getMaxIndexReplication() == shardStats.getMaxIndexReplication(); + } + } + private static NodeInfo createNodeInfo(String nodeId, String transportType, String httpType) { Settings.Builder settings = Settings.builder(); if (transportType != null) { From 349708198d01f205293d0ee5ca0bdae7b9ffd76a Mon Sep 17 00:00:00 2001 From: Gaurav Bafna 
<85113518+gbbafna@users.noreply.github.com>
Date: Tue, 23 Jul 2024 21:57:47 +0530
Subject: [PATCH 109/167] Fix constraint bug which allows more primary shards than average primary shards per index (#14908)

Signed-off-by: Gaurav Bafna
---
 .../opensearch/cluster/routing/allocation/ConstraintTypes.java | 2 +-
 .../cluster/routing/allocation/AllocationConstraintsTests.java | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/server/src/main/java/org/opensearch/cluster/routing/allocation/ConstraintTypes.java b/server/src/main/java/org/opensearch/cluster/routing/allocation/ConstraintTypes.java
index 08fe8f92d1f80..28ad199218884 100644
--- a/server/src/main/java/org/opensearch/cluster/routing/allocation/ConstraintTypes.java
+++ b/server/src/main/java/org/opensearch/cluster/routing/allocation/ConstraintTypes.java
@@ -70,7 +70,7 @@ public static Predicate<Constraint.ConstraintParams> isPerIndexPrimaryShardsPerN
         return (params) -> {
             int perIndexPrimaryShardCount = params.getNode().numPrimaryShards(params.getIndex());
             int perIndexAllowedPrimaryShardCount = (int) Math.ceil(params.getBalancer().avgPrimaryShardsPerNode(params.getIndex()));
-            return perIndexPrimaryShardCount > perIndexAllowedPrimaryShardCount;
+            return perIndexPrimaryShardCount >= perIndexAllowedPrimaryShardCount;
         };
     }

diff --git a/server/src/test/java/org/opensearch/cluster/routing/allocation/AllocationConstraintsTests.java b/server/src/test/java/org/opensearch/cluster/routing/allocation/AllocationConstraintsTests.java
index 90546620e9e3e..4c9fcd1650664 100644
--- a/server/src/test/java/org/opensearch/cluster/routing/allocation/AllocationConstraintsTests.java
+++ b/server/src/test/java/org/opensearch/cluster/routing/allocation/AllocationConstraintsTests.java
@@ -93,7 +93,7 @@ public void testPerIndexPrimaryShardsConstraint() {
 
         assertEquals(0, constraints.weight(balancer, node, indexName));
 
-        perIndexPrimaryShardCount = 3;
+        perIndexPrimaryShardCount = 2;
         when(node.numPrimaryShards(anyString())).thenReturn(perIndexPrimaryShardCount);
         assertEquals(CONSTRAINT_WEIGHT, constraints.weight(balancer, node, indexName));

From e46d1d8685a9b90a1f25920989e567373ee23284 Mon Sep 17 00:00:00 2001
From: rishavz_sagar
Date: Tue, 23 Jul 2024 22:27:45 +0530
Subject: [PATCH 110/167] Optimising AwarenessAllocationDecider for hashmap.get call (#14761)

Signed-off-by: RS146BIJAY
---
 .../decider/AwarenessAllocationDecider.java | 91 ++++++++++++-------
 1 file changed, 58 insertions(+), 33 deletions(-)

diff --git a/server/src/main/java/org/opensearch/cluster/routing/allocation/decider/AwarenessAllocationDecider.java b/server/src/main/java/org/opensearch/cluster/routing/allocation/decider/AwarenessAllocationDecider.java
index 5344d95b217a7..16c94acfbb553 100644
--- a/server/src/main/java/org/opensearch/cluster/routing/allocation/decider/AwarenessAllocationDecider.java
+++ b/server/src/main/java/org/opensearch/cluster/routing/allocation/decider/AwarenessAllocationDecider.java
@@ -111,7 +111,6 @@ public class AwarenessAllocationDecider extends AllocationDecider {
     );
 
     private volatile List<String> awarenessAttributes;
-
    private volatile Map<String, List<String>> forcedAwarenessAttributes;
 
     public AwarenessAllocationDecider(Settings settings, ClusterSettings clusterSettings) {
@@ -163,8 +162,8 @@ private Decision underCapacity(ShardRouting shardRouting, RoutingNode node, Rout
         IndexMetadata indexMetadata = allocation.metadata().getIndexSafe(shardRouting.index());
         int shardCount = indexMetadata.getNumberOfReplicas() + 1; // 1 for primary
         for (String awarenessAttribute : awarenessAttributes) {
-            // the node the shard exists on must be associated with an awareness attribute
-            if (node.node().getAttributes().containsKey(awarenessAttribute) == false) {
+            // the node the shard exists on must be associated with an awareness attribute.
+            if (isAwarenessAttributeAssociatedWithNode(node, awarenessAttribute) == false) {
                 return allocation.decision(
                     Decision.NO,
                     NAME,
@@ -175,36 +174,10 @@ private Decision underCapacity(ShardRouting shardRouting, RoutingNode node, Rout
                 );
             }
 
+            int currentNodeCount = getCurrentNodeCountForAttribute(shardRouting, node, allocation, moveToNode, awarenessAttribute);
+
             // build attr_value -> nodes map
             Set<String> nodesPerAttribute = allocation.routingNodes().nodesPerAttributesCounts(awarenessAttribute);
-
-            // build the count of shards per attribute value
-            Map<String, Integer> shardPerAttribute = new HashMap<>();
-            for (ShardRouting assignedShard : allocation.routingNodes().assignedShards(shardRouting.shardId())) {
-                if (assignedShard.started() || assignedShard.initializing()) {
-                    // Note: this also counts relocation targets as that will be the new location of the shard.
-                    // Relocation sources should not be counted as the shard is moving away
-                    RoutingNode routingNode = allocation.routingNodes().node(assignedShard.currentNodeId());
-                    shardPerAttribute.merge(routingNode.node().getAttributes().get(awarenessAttribute), 1, Integer::sum);
-                }
-            }
-
-            if (moveToNode) {
-                if (shardRouting.assignedToNode()) {
-                    String nodeId = shardRouting.relocating() ? shardRouting.relocatingNodeId() : shardRouting.currentNodeId();
-                    if (node.nodeId().equals(nodeId) == false) {
-                        // we work on different nodes, move counts around
-                        shardPerAttribute.compute(
-                            allocation.routingNodes().node(nodeId).node().getAttributes().get(awarenessAttribute),
-                            (k, v) -> (v == null) ? 0 : v - 1
-                        );
-                        shardPerAttribute.merge(node.node().getAttributes().get(awarenessAttribute), 1, Integer::sum);
-                    }
-                } else {
-                    shardPerAttribute.merge(node.node().getAttributes().get(awarenessAttribute), 1, Integer::sum);
-                }
-            }
-
             int numberOfAttributes = nodesPerAttribute.size();
             List<String> fullValues = forcedAwarenessAttributes.get(awarenessAttribute);
@@ -216,9 +189,8 @@ private Decision underCapacity(ShardRouting shardRouting, RoutingNode node, Rout
             }
             numberOfAttributes = attributesSet.size();
         }
-            // TODO should we remove ones that are not part of full list?
-            final int currentNodeCount = shardPerAttribute.get(node.node().getAttributes().get(awarenessAttribute));
+            // TODO should we remove ones that are not part of full list?
final int maximumNodeCount = (shardCount + numberOfAttributes - 1) / numberOfAttributes; // ceil(shardCount/numberOfAttributes)
            if (currentNodeCount > maximumNodeCount) {
                return allocation.decision(
@@ -238,4 +210,57 @@ private Decision underCapacity(ShardRouting shardRouting, RoutingNode node, Rout
 
        return allocation.decision(Decision.YES, NAME, "node meets all awareness attribute requirements");
    }
+
+    private int getCurrentNodeCountForAttribute(
+        ShardRouting shardRouting,
+        RoutingNode node,
+        RoutingAllocation allocation,
+        boolean moveToNode,
+        String awarenessAttribute
+    ) {
+        // build the count of shards per attribute value
+        final String shardAttributeForNode = getAttributeValueForNode(node, awarenessAttribute);
+        int currentNodeCount = 0;
+        final List<ShardRouting> assignedShards = allocation.routingNodes().assignedShards(shardRouting.shardId());
+
+        for (ShardRouting assignedShard : assignedShards) {
+            if (assignedShard.started() || assignedShard.initializing()) {
+                // Note: this also counts relocation targets as that will be the new location of the shard.
+                // Relocation sources should not be counted as the shard is moving away
+                RoutingNode routingNode = allocation.routingNodes().node(assignedShard.currentNodeId());
+                // Increase the node count when the assigned shard's node carries the same attribute value as the node being evaluated
+                if (getAttributeValueForNode(routingNode, awarenessAttribute).equals(shardAttributeForNode)) {
+                    ++currentNodeCount;
+                }
+            }
+        }
+
+        if (moveToNode) {
+            if (shardRouting.assignedToNode()) {
+                String nodeId = shardRouting.relocating() ? shardRouting.relocatingNodeId() : shardRouting.currentNodeId();
+                if (node.nodeId().equals(nodeId) == false) {
+                    // we work on different nodes, move counts around
+                    if (getAttributeValueForNode(allocation.routingNodes().node(nodeId), awarenessAttribute).equals(shardAttributeForNode)
+                        && currentNodeCount > 0) {
+                        --currentNodeCount;
+                    }
+
+                    ++currentNodeCount;
+                }
+            } else {
+                ++currentNodeCount;
+            }
+        }
+
+        return currentNodeCount;
+    }
+
+    private boolean isAwarenessAttributeAssociatedWithNode(RoutingNode node, String awarenessAttribute) {
+        return node.node().getAttributes().containsKey(awarenessAttribute);
+    }
+
+    private String getAttributeValueForNode(final RoutingNode node, final String awarenessAttribute) {
+        return node.node().getAttributes().get(awarenessAttribute);
+    }
+
 }

From 087355f0ee676064ea409ed68090b33e568ea941 Mon Sep 17 00:00:00 2001
From: Andrew Ross
Date: Tue, 23 Jul 2024 14:26:22 -0500
Subject: [PATCH 111/167] Fix IngestServiceTests.testBulkRequestExecutionWithFailures (#14918)

The test would previously fail if the randomness led to only a single
indexing request being included in the bulk payload. This change
guarantees multiple indexing requests in order to ensure the batch
logic kicks in. Also replace some unneeded mocks with real classes.
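For context on the flake, a minimal hypothetical reduction of the old setup
(illustrative only, not the removed code verbatim; randomNonIndexRequest()
and indexRequestWithPipeline() are stand-in helpers) looks like this:

    int numRequest = scaledRandomIntBetween(8, 64);
    for (int i = 0; i < numRequest; i++) {
        // each slot is typed by an independent coin flip, so an unlucky run
        // can produce zero or one IndexRequest, and the pipeline batching
        // path under test is never exercised
        bulkRequest.add(randomBoolean() ? randomNonIndexRequest() : indexRequestWithPipeline());
    }

Drawing the indexing and non-indexing counts independently, as the diff
below does, bounds both from below and makes the batched-execution scenario
deterministic.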
Signed-off-by: Andrew Ross --- .../opensearch/ingest/IngestServiceTests.java | 47 +++++++++---------- 1 file changed, 22 insertions(+), 25 deletions(-) diff --git a/server/src/test/java/org/opensearch/ingest/IngestServiceTests.java b/server/src/test/java/org/opensearch/ingest/IngestServiceTests.java index 9d03127692975..166b94966196c 100644 --- a/server/src/test/java/org/opensearch/ingest/IngestServiceTests.java +++ b/server/src/test/java/org/opensearch/ingest/IngestServiceTests.java @@ -78,6 +78,7 @@ import org.opensearch.test.OpenSearchTestCase; import org.opensearch.threadpool.ThreadPool; import org.opensearch.threadpool.ThreadPool.Names; +import org.hamcrest.MatcherAssert; import org.junit.Before; import java.nio.charset.StandardCharsets; @@ -104,15 +105,16 @@ import static java.util.Collections.emptyMap; import static java.util.Collections.emptySet; +import static org.hamcrest.Matchers.contains; import static org.hamcrest.Matchers.equalTo; import static org.hamcrest.Matchers.greaterThanOrEqualTo; +import static org.hamcrest.Matchers.hasSize; import static org.hamcrest.Matchers.instanceOf; import static org.hamcrest.Matchers.is; import static org.hamcrest.Matchers.notNullValue; import static org.hamcrest.Matchers.nullValue; import static org.hamcrest.Matchers.sameInstance; import static org.mockito.Mockito.any; -import static org.mockito.Mockito.anyInt; import static org.mockito.Mockito.anyString; import static org.mockito.Mockito.argThat; import static org.mockito.Mockito.doAnswer; @@ -1106,27 +1108,23 @@ public void testExecuteFailureWithNestedOnFailure() throws Exception { verify(completionHandler, times(1)).accept(Thread.currentThread(), null); } - public void testBulkRequestExecutionWithFailures() throws Exception { + public void testBulkRequestExecutionWithFailures() { BulkRequest bulkRequest = new BulkRequest(); String pipelineId = "_id"; - int numRequest = scaledRandomIntBetween(8, 64); - int numIndexRequests = 0; - for (int i = 0; i < numRequest; i++) { - DocWriteRequest request; + int numIndexRequests = scaledRandomIntBetween(4, 32); + for (int i = 0; i < numIndexRequests; i++) { + IndexRequest indexRequest = new IndexRequest("_index").id("_id").setPipeline(pipelineId).setFinalPipeline("_none"); + indexRequest.source(Requests.INDEX_CONTENT_TYPE, "field1", "value1"); + bulkRequest.add(indexRequest); + } + int numOtherRequests = scaledRandomIntBetween(4, 32); + for (int i = 0; i < numOtherRequests; i++) { if (randomBoolean()) { - if (randomBoolean()) { - request = new DeleteRequest("_index", "_id"); - } else { - request = new UpdateRequest("_index", "_id"); - } + bulkRequest.add(new DeleteRequest("_index", "_id")); } else { - IndexRequest indexRequest = new IndexRequest("_index").id("_id").setPipeline(pipelineId).setFinalPipeline("_none"); - indexRequest.source(Requests.INDEX_CONTENT_TYPE, "field1", "value1"); - request = indexRequest; - numIndexRequests++; + bulkRequest.add(new UpdateRequest("_index", "_id")); } - bulkRequest.add(request); } CompoundProcessor processor = mock(CompoundProcessor.class); @@ -1155,23 +1153,22 @@ public void testBulkRequestExecutionWithFailures() throws Exception { clusterState = IngestService.innerPut(putRequest, clusterState); ingestService.applyClusterState(new ClusterChangedEvent("", clusterState, previousClusterState)); - @SuppressWarnings("unchecked") - BiConsumer requestItemErrorHandler = mock(BiConsumer.class); - @SuppressWarnings("unchecked") - final BiConsumer completionHandler = mock(BiConsumer.class); + final Map errorHandler = new 
HashMap<>(); + final Map completionHandler = new HashMap<>(); ingestService.executeBulkRequest( - numRequest, + numIndexRequests + numOtherRequests, bulkRequest.requests(), - requestItemErrorHandler, - completionHandler, + errorHandler::put, + completionHandler::put, indexReq -> {}, Names.WRITE, bulkRequest ); - verify(requestItemErrorHandler, times(numIndexRequests)).accept(anyInt(), argThat(o -> o.getCause().equals(error))); + MatcherAssert.assertThat(errorHandler.entrySet(), hasSize(numIndexRequests)); + errorHandler.values().forEach(e -> assertEquals(e.getCause(), error)); - verify(completionHandler, times(1)).accept(Thread.currentThread(), null); + MatcherAssert.assertThat(completionHandler.keySet(), contains(Thread.currentThread())); } public void testBulkRequestExecution() throws Exception { From 312de9947b8848150743623009e8d4b95487e911 Mon Sep 17 00:00:00 2001 From: Bharathwaj G Date: Wed, 24 Jul 2024 08:54:27 +0530 Subject: [PATCH 112/167] [Star tree] Star tree merge changes (#14652) --------- Signed-off-by: Bharathwaj G --- .../composite/Composite99DocValuesReader.java | 10 +- .../composite/Composite99DocValuesWriter.java | 97 +- .../composite/CompositeIndexFieldInfo.java | 37 + .../codec/composite/CompositeIndexReader.java | 5 +- .../datacube/startree/StarTreeValues.java | 47 +- .../aggregators/CountValueAggregator.java | 11 +- .../aggregators/MetricAggregatorInfo.java | 21 +- .../aggregators/SumValueAggregator.java | 17 +- .../startree/aggregators/ValueAggregator.java | 6 +- .../aggregators/ValueAggregatorFactory.java | 9 +- .../startree/builder/BaseStarTreeBuilder.java | 258 +- .../builder/OnHeapStarTreeBuilder.java | 148 +- .../startree/builder/StarTreeBuilder.java | 18 +- .../StarTreeDocValuesIteratorAdapter.java | 82 - .../startree/builder/StarTreesBuilder.java | 61 +- .../datacube/startree/node/StarTreeNode.java | 112 + .../datacube/startree/node/package-info.java | 12 + .../utils/SequentialDocValuesIterator.java | 109 +- .../mapper/CompositeMappedFieldType.java | 4 + .../StarTreeDocValuesFormatTests.java | 172 +- .../CountValueAggregatorTests.java | 8 +- .../MetricAggregatorInfoTests.java | 34 +- .../aggregators/SumValueAggregatorTests.java | 15 +- .../ValueAggregatorFactoryTests.java | 2 +- .../builder/AbstractStarTreeBuilderTests.java | 2251 +++++++++++++++++ .../builder/BaseStarTreeBuilderTests.java | 25 +- .../builder/OnHeapStarTreeBuilderTests.java | 696 +---- ...StarTreeDocValuesIteratorAdapterTests.java | 139 - .../StarTreeValuesIteratorFactoryTests.java | 131 - .../builder/StarTreesBuilderTests.java | 14 +- .../SequentialDocValuesIteratorTests.java | 131 +- .../org/opensearch/index/MapperTestUtils.java | 34 + 32 files changed, 3281 insertions(+), 1435 deletions(-) create mode 100644 server/src/main/java/org/opensearch/index/codec/composite/CompositeIndexFieldInfo.java delete mode 100644 server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/builder/StarTreeDocValuesIteratorAdapter.java create mode 100644 server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/node/StarTreeNode.java create mode 100644 server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/node/package-info.java create mode 100644 server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/builder/AbstractStarTreeBuilderTests.java delete mode 100644 server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/builder/StarTreeDocValuesIteratorAdapterTests.java delete mode 100644 
server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/builder/StarTreeValuesIteratorFactoryTests.java diff --git a/server/src/main/java/org/opensearch/index/codec/composite/Composite99DocValuesReader.java b/server/src/main/java/org/opensearch/index/codec/composite/Composite99DocValuesReader.java index 82c844088cfd4..df5008a7f294e 100644 --- a/server/src/main/java/org/opensearch/index/codec/composite/Composite99DocValuesReader.java +++ b/server/src/main/java/org/opensearch/index/codec/composite/Composite99DocValuesReader.java @@ -17,9 +17,9 @@ import org.apache.lucene.index.SortedNumericDocValues; import org.apache.lucene.index.SortedSetDocValues; import org.opensearch.common.annotation.ExperimentalApi; -import org.opensearch.index.mapper.CompositeMappedFieldType; import java.io.IOException; +import java.util.ArrayList; import java.util.List; /** @@ -74,15 +74,13 @@ public void close() throws IOException { } @Override - public List getCompositeIndexFields() { + public List getCompositeIndexFields() { // todo : read from file formats and get the field names. - throw new UnsupportedOperationException(); - + return new ArrayList<>(); } @Override - public CompositeIndexValues getCompositeIndexValues(String field, CompositeMappedFieldType.CompositeFieldType fieldType) - throws IOException { + public CompositeIndexValues getCompositeIndexValues(CompositeIndexFieldInfo compositeIndexFieldInfo) throws IOException { // TODO : read compositeIndexValues [starTreeValues] from star tree files throw new UnsupportedOperationException(); } diff --git a/server/src/main/java/org/opensearch/index/codec/composite/Composite99DocValuesWriter.java b/server/src/main/java/org/opensearch/index/codec/composite/Composite99DocValuesWriter.java index 3753b20a8bea3..3859d3c998573 100644 --- a/server/src/main/java/org/opensearch/index/codec/composite/Composite99DocValuesWriter.java +++ b/server/src/main/java/org/opensearch/index/codec/composite/Composite99DocValuesWriter.java @@ -8,20 +8,29 @@ package org.opensearch.index.codec.composite; +import org.apache.logging.log4j.LogManager; +import org.apache.logging.log4j.Logger; import org.apache.lucene.codecs.DocValuesConsumer; import org.apache.lucene.codecs.DocValuesProducer; +import org.apache.lucene.index.DocValues; +import org.apache.lucene.index.DocValuesType; +import org.apache.lucene.index.EmptyDocValuesProducer; import org.apache.lucene.index.FieldInfo; import org.apache.lucene.index.MergeState; import org.apache.lucene.index.SegmentWriteState; +import org.apache.lucene.index.SortedNumericDocValues; import org.opensearch.common.annotation.ExperimentalApi; +import org.opensearch.index.codec.composite.datacube.startree.StarTreeValues; +import org.opensearch.index.compositeindex.datacube.startree.StarTreeField; import org.opensearch.index.compositeindex.datacube.startree.builder.StarTreesBuilder; import org.opensearch.index.mapper.CompositeMappedFieldType; import org.opensearch.index.mapper.MapperService; -import org.opensearch.index.mapper.StarTreeMapper; import java.io.IOException; +import java.util.Collections; import java.util.HashMap; import java.util.HashSet; +import java.util.List; import java.util.Map; import java.util.Set; import java.util.concurrent.atomic.AtomicReference; @@ -40,8 +49,10 @@ public class Composite99DocValuesWriter extends DocValuesConsumer { AtomicReference mergeState = new AtomicReference<>(); private final Set compositeMappedFieldTypes; private final Set compositeFieldSet; + private final Set segmentFieldSet; private 
final Map<String, DocValuesProducer> fieldProducerMap = new HashMap<>();
+    private static final Logger logger = LogManager.getLogger(Composite99DocValuesWriter.class);
 
     public Composite99DocValuesWriter(DocValuesConsumer delegate, SegmentWriteState segmentWriteState, MapperService mapperService) {
@@ -50,6 +61,12 @@ public Composite99DocValuesWriter(DocValuesConsumer delegate, SegmentWriteState
         this.mapperService = mapperService;
         this.compositeMappedFieldTypes = mapperService.getCompositeFieldTypes();
         compositeFieldSet = new HashSet<>();
+        segmentFieldSet = new HashSet<>();
+        for (FieldInfo fi : segmentWriteState.fieldInfos) {
+            if (DocValuesType.SORTED_NUMERIC.equals(fi.getDocValuesType())) {
+                segmentFieldSet.add(fi.name);
+            }
+        }
         for (CompositeMappedFieldType type : compositeMappedFieldTypes) {
             compositeFieldSet.addAll(type.fields());
         }
@@ -95,23 +112,91 @@ private void createCompositeIndicesIfPossible(DocValuesProducer valuesProducer,
             fieldProducerMap.put(field.name, valuesProducer);
             compositeFieldSet.remove(field.name);
         }
+        segmentFieldSet.remove(field.name);
+        if (segmentFieldSet.isEmpty()) {
+            Set<String> compositeFieldSetCopy = new HashSet<>(compositeFieldSet);
+            for (String compositeField : compositeFieldSetCopy) {
+                fieldProducerMap.put(compositeField, new EmptyDocValuesProducer() {
+                    @Override
+                    public SortedNumericDocValues getSortedNumeric(FieldInfo field) {
+                        return DocValues.emptySortedNumeric();
+                    }
+                });
+                compositeFieldSet.remove(compositeField);
+            }
+        }
         // we have all the required fields to build composite fields
         if (compositeFieldSet.isEmpty()) {
             for (CompositeMappedFieldType mappedType : compositeMappedFieldTypes) {
-                if (mappedType instanceof StarTreeMapper.StarTreeFieldType) {
-                    try (StarTreesBuilder starTreesBuilder = new StarTreesBuilder(fieldProducerMap, state, mapperService)) {
-                        starTreesBuilder.build();
+                if (mappedType.getCompositeIndexType().equals(CompositeMappedFieldType.CompositeFieldType.STAR_TREE)) {
+                    try (StarTreesBuilder starTreesBuilder = new StarTreesBuilder(state, mapperService)) {
+                        starTreesBuilder.build(fieldProducerMap);
                     }
                 }
            }
        }
+
    }

    @Override
    public void merge(MergeState mergeState) throws IOException {
        this.mergeState.compareAndSet(null, mergeState);
        super.merge(mergeState);
-        // TODO : handle merge star tree
-        // mergeStarTreeFields(mergeState);
+        mergeCompositeFields(mergeState);
+    }
+
+    /**
+     * Merges composite fields from multiple segments
+     * @param mergeState merge state
+     */
+    private void mergeCompositeFields(MergeState mergeState) throws IOException {
+        mergeStarTreeFields(mergeState);
+    }
+
+    /**
+     * Merges star tree data fields from multiple segments
+     * @param mergeState merge state
+     */
+    private void mergeStarTreeFields(MergeState mergeState) throws IOException {
+        Map<String, List<StarTreeValues>> starTreeSubsPerField = new HashMap<>();
+        StarTreeField starTreeField = null;
+        for (int i = 0; i < mergeState.docValuesProducers.length; i++) {
+            CompositeIndexReader reader = null;
+            if (mergeState.docValuesProducers[i] == null) {
+                continue;
+            }
+            if (mergeState.docValuesProducers[i] instanceof CompositeIndexReader) {
+                reader = (CompositeIndexReader) mergeState.docValuesProducers[i];
+            } else {
+                continue;
+            }
+
+            List<CompositeIndexFieldInfo> compositeFieldInfo = reader.getCompositeIndexFields();
+            for (CompositeIndexFieldInfo fieldInfo : compositeFieldInfo) {
+                if (fieldInfo.getType().equals(CompositeMappedFieldType.CompositeFieldType.STAR_TREE)) {
+                    CompositeIndexValues compositeIndexValues = reader.getCompositeIndexValues(fieldInfo);
+                    if (compositeIndexValues instanceof StarTreeValues) {
+                        StarTreeValues
starTreeValues = (StarTreeValues) compositeIndexValues; + List fieldsList = starTreeSubsPerField.getOrDefault(fieldInfo.getField(), Collections.emptyList()); + if (starTreeField == null) { + starTreeField = starTreeValues.getStarTreeField(); + } + // assert star tree configuration is same across segments + else { + if (starTreeField.equals(starTreeValues.getStarTreeField()) == false) { + throw new IllegalArgumentException( + "star tree field configuration must match the configuration of the field being merged" + ); + } + } + fieldsList.add(starTreeValues); + starTreeSubsPerField.put(fieldInfo.getField(), fieldsList); + } + } + } + } + try (StarTreesBuilder starTreesBuilder = new StarTreesBuilder(state, mapperService)) { + starTreesBuilder.buildDuringMerge(starTreeSubsPerField); + } } } diff --git a/server/src/main/java/org/opensearch/index/codec/composite/CompositeIndexFieldInfo.java b/server/src/main/java/org/opensearch/index/codec/composite/CompositeIndexFieldInfo.java new file mode 100644 index 0000000000000..8193fcc301e67 --- /dev/null +++ b/server/src/main/java/org/opensearch/index/codec/composite/CompositeIndexFieldInfo.java @@ -0,0 +1,37 @@ + +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +package org.opensearch.index.codec.composite; + +import org.opensearch.common.annotation.ExperimentalApi; +import org.opensearch.index.mapper.CompositeMappedFieldType; + +/** + * Field info details of composite index fields + * + * @opensearch.experimental + */ +@ExperimentalApi +public class CompositeIndexFieldInfo { + private final String field; + private final CompositeMappedFieldType.CompositeFieldType type; + + public CompositeIndexFieldInfo(String field, CompositeMappedFieldType.CompositeFieldType type) { + this.field = field; + this.type = type; + } + + public String getField() { + return field; + } + + public CompositeMappedFieldType.CompositeFieldType getType() { + return type; + } +} diff --git a/server/src/main/java/org/opensearch/index/codec/composite/CompositeIndexReader.java b/server/src/main/java/org/opensearch/index/codec/composite/CompositeIndexReader.java index d02438b75377d..a159b0619bcbb 100644 --- a/server/src/main/java/org/opensearch/index/codec/composite/CompositeIndexReader.java +++ b/server/src/main/java/org/opensearch/index/codec/composite/CompositeIndexReader.java @@ -9,7 +9,6 @@ package org.opensearch.index.codec.composite; import org.opensearch.common.annotation.ExperimentalApi; -import org.opensearch.index.mapper.CompositeMappedFieldType; import java.io.IOException; import java.util.List; @@ -25,10 +24,10 @@ public interface CompositeIndexReader { * Get list of composite index fields from the segment * */ - List getCompositeIndexFields(); + List getCompositeIndexFields(); /** * Get composite index values based on the field name and the field type */ - CompositeIndexValues getCompositeIndexValues(String field, CompositeMappedFieldType.CompositeFieldType fieldType) throws IOException; + CompositeIndexValues getCompositeIndexValues(CompositeIndexFieldInfo fieldInfo) throws IOException; } diff --git a/server/src/main/java/org/opensearch/index/codec/composite/datacube/startree/StarTreeValues.java b/server/src/main/java/org/opensearch/index/codec/composite/datacube/startree/StarTreeValues.java index 2a5b96ce2620a..8378a4063b7ca 100644 --- 
a/server/src/main/java/org/opensearch/index/codec/composite/datacube/startree/StarTreeValues.java
+++ b/server/src/main/java/org/opensearch/index/codec/composite/datacube/startree/StarTreeValues.java
@@ -8,10 +8,13 @@
 
 package org.opensearch.index.codec.composite.datacube.startree;
 
+import org.apache.lucene.search.DocIdSetIterator;
 import org.opensearch.common.annotation.ExperimentalApi;
 import org.opensearch.index.codec.composite.CompositeIndexValues;
+import org.opensearch.index.compositeindex.datacube.startree.StarTreeField;
+import org.opensearch.index.compositeindex.datacube.startree.node.StarTreeNode;
 
-import java.util.List;
+import java.util.Map;
 
 /**
  * Concrete class that holds the star tree associated values from the segment
@@ -20,16 +23,48 @@
  */
 @ExperimentalApi
 public class StarTreeValues implements CompositeIndexValues {
-    private final List<String> dimensionsOrder;
+    private final StarTreeField starTreeField;
+    private final StarTreeNode root;
+    private final Map<String, DocIdSetIterator> dimensionDocValuesIteratorMap;
+    private final Map<String, DocIdSetIterator> metricDocValuesIteratorMap;
+    private final Map<String, String> attributes;
 
-    // TODO : come up with full set of vales such as dimensions and metrics doc values + star tree
-    public StarTreeValues(List<String> dimensionsOrder) {
-        super();
-        this.dimensionsOrder = List.copyOf(dimensionsOrder);
+    public StarTreeValues(
+        StarTreeField starTreeField,
+        StarTreeNode root,
+        Map<String, DocIdSetIterator> dimensionDocValuesIteratorMap,
+        Map<String, DocIdSetIterator> metricDocValuesIteratorMap,
+        Map<String, String> attributes
+    ) {
+        this.starTreeField = starTreeField;
+        this.root = root;
+        this.dimensionDocValuesIteratorMap = dimensionDocValuesIteratorMap;
+        this.metricDocValuesIteratorMap = metricDocValuesIteratorMap;
+        this.attributes = attributes;
     }
 
     @Override
     public CompositeIndexValues getValues() {
         return this;
     }
+
+    public StarTreeField getStarTreeField() {
+        return starTreeField;
+    }
+
+    public StarTreeNode getRoot() {
+        return root;
+    }
+
+    public Map<String, DocIdSetIterator> getDimensionDocValuesIteratorMap() {
+        return dimensionDocValuesIteratorMap;
+    }
+
+    public Map<String, DocIdSetIterator> getMetricDocValuesIteratorMap() {
+        return metricDocValuesIteratorMap;
+    }
+
+    public Map<String, String> getAttributes() {
+        return attributes;
+    }
 }
diff --git a/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/CountValueAggregator.java b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/CountValueAggregator.java
index d72f4a292dc0a..5390b6728b9b6 100644
--- a/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/CountValueAggregator.java
+++ b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/CountValueAggregator.java
@@ -18,6 +18,11 @@ public class CountValueAggregator implements ValueAggregator<Long> {
 
     public static final StarTreeNumericType VALUE_AGGREGATOR_TYPE = StarTreeNumericType.LONG;
     public static final long DEFAULT_INITIAL_VALUE = 1L;
+    private StarTreeNumericType starTreeNumericType;
+
+    public CountValueAggregator(StarTreeNumericType starTreeNumericType) {
+        this.starTreeNumericType = starTreeNumericType;
+    }
 
     @Override
     public MetricStat getAggregationType() {
@@ -30,12 +35,12 @@ public StarTreeNumericType getAggregatedValueType() {
     }
 
     @Override
-    public Long getInitialAggregatedValueForSegmentDocValue(Long segmentDocValue, StarTreeNumericType starTreeNumericType) {
+    public Long getInitialAggregatedValueForSegmentDocValue(Long segmentDocValue) {
         return DEFAULT_INITIAL_VALUE;
     }
 
     @Override
-    public Long mergeAggregatedValueAndSegmentValue(Long value, Long segmentDocValue,
StarTreeNumericType starTreeNumericType) { + public Long mergeAggregatedValueAndSegmentValue(Long value, Long segmentDocValue) { return value + 1; } @@ -60,7 +65,7 @@ public Long toLongValue(Long value) { } @Override - public Long toStarTreeNumericTypeValue(Long value, StarTreeNumericType type) { + public Long toStarTreeNumericTypeValue(Long value) { return value; } } diff --git a/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/MetricAggregatorInfo.java b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/MetricAggregatorInfo.java index 46f1b1ac11063..a9209a38eca82 100644 --- a/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/MetricAggregatorInfo.java +++ b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/MetricAggregatorInfo.java @@ -9,7 +9,6 @@ import org.opensearch.index.compositeindex.datacube.MetricStat; import org.opensearch.index.compositeindex.datacube.startree.aggregators.numerictype.StarTreeNumericType; -import org.opensearch.index.compositeindex.datacube.startree.utils.SequentialDocValuesIterator; import org.opensearch.index.fielddata.IndexNumericFieldData; import java.util.Comparator; @@ -17,7 +16,6 @@ /** * Builds aggregation function and doc values field pair to support various aggregations - * * @opensearch.experimental */ public class MetricAggregatorInfo implements Comparable { @@ -29,22 +27,14 @@ public class MetricAggregatorInfo implements Comparable { private final String field; private final ValueAggregator valueAggregators; private final StarTreeNumericType starTreeNumericType; - private final SequentialDocValuesIterator metricStatReader; /** * Constructor for MetricAggregatorInfo */ - public MetricAggregatorInfo( - MetricStat metricStat, - String field, - String starFieldName, - IndexNumericFieldData.NumericType numericType, - SequentialDocValuesIterator metricStatReader - ) { + public MetricAggregatorInfo(MetricStat metricStat, String field, String starFieldName, IndexNumericFieldData.NumericType numericType) { this.metricStat = metricStat; - this.valueAggregators = ValueAggregatorFactory.getValueAggregator(metricStat); this.starTreeNumericType = StarTreeNumericType.fromNumericType(numericType); - this.metricStatReader = metricStatReader; + this.valueAggregators = ValueAggregatorFactory.getValueAggregator(metricStat, this.starTreeNumericType); this.field = field; this.starFieldName = starFieldName; this.metric = toFieldName(); @@ -85,13 +75,6 @@ public StarTreeNumericType getAggregatedValueType() { return starTreeNumericType; } - /** - * @return metric value reader iterator - */ - public SequentialDocValuesIterator getMetricStatReader() { - return metricStatReader; - } - /** * @return field name with metric type and field */ diff --git a/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/SumValueAggregator.java b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/SumValueAggregator.java index 543b0f7f42374..385549216e4d6 100644 --- a/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/SumValueAggregator.java +++ b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/SumValueAggregator.java @@ -24,6 +24,12 @@ public class SumValueAggregator implements ValueAggregator { private double compensation = 0; private CompensatedSum kahanSummation = new CompensatedSum(0, 0); + private 
StarTreeNumericType starTreeNumericType; + + public SumValueAggregator(StarTreeNumericType starTreeNumericType) { + this.starTreeNumericType = starTreeNumericType; + } + @Override public MetricStat getAggregationType() { return MetricStat.SUM; @@ -35,7 +41,7 @@ public StarTreeNumericType getAggregatedValueType() { } @Override - public Double getInitialAggregatedValueForSegmentDocValue(Long segmentDocValue, StarTreeNumericType starTreeNumericType) { + public Double getInitialAggregatedValueForSegmentDocValue(Long segmentDocValue) { kahanSummation.reset(0, 0); kahanSummation.add(starTreeNumericType.getDoubleValue(segmentDocValue)); compensation = kahanSummation.delta(); @@ -44,7 +50,7 @@ public Double getInitialAggregatedValueForSegmentDocValue(Long segmentDocValue, } @Override - public Double mergeAggregatedValueAndSegmentValue(Double value, Long segmentDocValue, StarTreeNumericType starTreeNumericType) { + public Double mergeAggregatedValueAndSegmentValue(Double value, Long segmentDocValue) { assert kahanSummation.value() == value; kahanSummation.reset(sum, compensation); kahanSummation.add(starTreeNumericType.getDoubleValue(segmentDocValue)); @@ -87,9 +93,12 @@ public Long toLongValue(Double value) { } @Override - public Double toStarTreeNumericTypeValue(Long value, StarTreeNumericType type) { + public Double toStarTreeNumericTypeValue(Long value) { try { - return type.getDoubleValue(value); + if (value == null) { + return 0.0; + } + return VALUE_AGGREGATOR_TYPE.getDoubleValue(value); } catch (Exception e) { throw new IllegalStateException("Cannot convert " + value + " to sortable aggregation type", e); } diff --git a/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/ValueAggregator.java b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/ValueAggregator.java index 3dd1f85845c17..93230ed012b13 100644 --- a/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/ValueAggregator.java +++ b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/ValueAggregator.java @@ -30,12 +30,12 @@ public interface ValueAggregator { /** * Returns the initial aggregated value. */ - A getInitialAggregatedValueForSegmentDocValue(Long segmentDocValue, StarTreeNumericType starTreeNumericType); + A getInitialAggregatedValueForSegmentDocValue(Long segmentDocValue); /** * Applies a segment doc value to the current aggregated value. */ - A mergeAggregatedValueAndSegmentValue(A value, Long segmentDocValue, StarTreeNumericType starTreeNumericType); + A mergeAggregatedValueAndSegmentValue(A value, Long segmentDocValue); /** * Applies an aggregated value to the current aggregated value. @@ -60,5 +60,5 @@ public interface ValueAggregator { /** * Converts an aggregated value from a Long type. 
*/ - A toStarTreeNumericTypeValue(Long rawValue, StarTreeNumericType type); + A toStarTreeNumericTypeValue(Long rawValue); } diff --git a/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/ValueAggregatorFactory.java b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/ValueAggregatorFactory.java index 4ee0b0b5b13f8..240bbd37a53ee 100644 --- a/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/ValueAggregatorFactory.java +++ b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/ValueAggregatorFactory.java @@ -21,16 +21,17 @@ private ValueAggregatorFactory() {} /** * Returns a new instance of value aggregator for the given aggregation type. * - * @param aggregationType Aggregation type + * @param aggregationType Aggregation type + * @param starTreeNumericType Numeric type associated with star tree field ( as specified in index mapping ) * @return Value aggregator */ - public static ValueAggregator getValueAggregator(MetricStat aggregationType) { + public static ValueAggregator getValueAggregator(MetricStat aggregationType, StarTreeNumericType starTreeNumericType) { switch (aggregationType) { // other metric types (count, min, max, avg) will be supported in the future case SUM: - return new SumValueAggregator(); + return new SumValueAggregator(starTreeNumericType); case COUNT: - return new CountValueAggregator(); + return new CountValueAggregator(starTreeNumericType); default: throw new IllegalStateException("Unsupported aggregation type: " + aggregationType); } diff --git a/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/builder/BaseStarTreeBuilder.java b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/builder/BaseStarTreeBuilder.java index 0a363bfad8fe1..7187fade882ea 100644 --- a/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/builder/BaseStarTreeBuilder.java +++ b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/builder/BaseStarTreeBuilder.java @@ -12,7 +12,11 @@ import org.apache.lucene.codecs.DocValuesProducer; import org.apache.lucene.index.DocValuesType; import org.apache.lucene.index.FieldInfo; +import org.apache.lucene.index.IndexOptions; import org.apache.lucene.index.SegmentWriteState; +import org.apache.lucene.index.VectorEncoding; +import org.apache.lucene.index.VectorSimilarityFunction; +import org.opensearch.index.codec.composite.datacube.startree.StarTreeValues; import org.opensearch.index.compositeindex.datacube.Dimension; import org.opensearch.index.compositeindex.datacube.Metric; import org.opensearch.index.compositeindex.datacube.MetricStat; @@ -21,7 +25,6 @@ import org.opensearch.index.compositeindex.datacube.startree.StarTreeFieldConfiguration; import org.opensearch.index.compositeindex.datacube.startree.aggregators.MetricAggregatorInfo; import org.opensearch.index.compositeindex.datacube.startree.aggregators.ValueAggregator; -import org.opensearch.index.compositeindex.datacube.startree.aggregators.numerictype.StarTreeNumericType; import org.opensearch.index.compositeindex.datacube.startree.utils.SequentialDocValuesIterator; import org.opensearch.index.compositeindex.datacube.startree.utils.TreeNode; import org.opensearch.index.fielddata.IndexNumericFieldData; @@ -32,11 +35,13 @@ import java.io.IOException; import java.util.ArrayList; import java.util.Arrays; +import java.util.Collections; import java.util.HashMap; import 
java.util.HashSet; import java.util.Iterator; import java.util.List; import java.util.Map; +import java.util.Objects; import java.util.Set; import static org.opensearch.index.compositeindex.datacube.startree.utils.TreeNode.ALL; @@ -54,8 +59,7 @@ public abstract class BaseStarTreeBuilder implements StarTreeBuilder { /** * Default value for star node */ - public static final int STAR_IN_DOC_VALUES_INDEX = -1; - + public static final Long STAR_IN_DOC_VALUES_INDEX = null; protected final Set skipStarNodeCreationForDimensions; protected final List metricAggregatorInfos; @@ -68,59 +72,41 @@ public abstract class BaseStarTreeBuilder implements StarTreeBuilder { protected final TreeNode rootNode = getNewNode(); - protected SequentialDocValuesIterator[] dimensionReaders; - - // We do not close these producers as they are empty doc value producers (where close() is unsupported) - protected Map fieldProducerMap; - - private final StarTreeDocValuesIteratorAdapter starTreeDocValuesIteratorAdapter; private final StarTreeField starTreeField; + private final MapperService mapperService; + private final SegmentWriteState state; + static String NUM_SEGMENT_DOCS = "numSegmentDocs"; /** * Reads all the configuration related to dimensions and metrics, builds a star-tree based on the different construction parameters. * * @param starTreeField holds the configuration for the star tree - * @param fieldProducerMap helps return the doc values iterator for each type based on field name * @param state stores the segment write state * @param mapperService helps to find the original type of the field */ - protected BaseStarTreeBuilder( - StarTreeField starTreeField, - Map fieldProducerMap, - SegmentWriteState state, - MapperService mapperService - ) throws IOException { - - logger.debug("Building in base star tree builder"); + protected BaseStarTreeBuilder(StarTreeField starTreeField, SegmentWriteState state, MapperService mapperService) { + logger.debug("Building star tree : {}", starTreeField.getName()); this.starTreeField = starTreeField; StarTreeFieldConfiguration starTreeFieldSpec = starTreeField.getStarTreeConfig(); - this.fieldProducerMap = fieldProducerMap; - this.starTreeDocValuesIteratorAdapter = new StarTreeDocValuesIteratorAdapter(); List dimensionsSplitOrder = starTreeField.getDimensionsOrder(); this.numDimensions = dimensionsSplitOrder.size(); this.skipStarNodeCreationForDimensions = new HashSet<>(); this.totalSegmentDocs = state.segmentInfo.maxDoc(); - this.dimensionReaders = new SequentialDocValuesIterator[numDimensions]; + this.mapperService = mapperService; + this.state = state; + Set skipStarNodeCreationForDimensions = starTreeFieldSpec.getSkipStarNodeCreationInDims(); for (int i = 0; i < numDimensions; i++) { - String dimension = dimensionsSplitOrder.get(i).getField(); if (skipStarNodeCreationForDimensions.contains(dimensionsSplitOrder.get(i).getField())) { this.skipStarNodeCreationForDimensions.add(i); } - FieldInfo dimensionFieldInfos = state.fieldInfos.fieldInfo(dimension); - DocValuesType dimensionDocValuesType = dimensionFieldInfos.getDocValuesType(); - dimensionReaders[i] = starTreeDocValuesIteratorAdapter.getDocValuesIterator( - dimensionDocValuesType, - dimensionFieldInfos, - fieldProducerMap.get(dimensionFieldInfos.name) - ); } - this.metricAggregatorInfos = generateMetricAggregatorInfos(mapperService, state); + this.metricAggregatorInfos = generateMetricAggregatorInfos(mapperService); this.numMetrics = metricAggregatorInfos.size(); this.maxLeafDocuments = starTreeFieldSpec.maxLeafDocs(); } 
@@ -130,13 +116,11 @@ protected BaseStarTreeBuilder( * * @return list of MetricAggregatorInfo */ - public List generateMetricAggregatorInfos(MapperService mapperService, SegmentWriteState state) - throws IOException { + public List generateMetricAggregatorInfos(MapperService mapperService) { List metricAggregatorInfos = new ArrayList<>(); for (Metric metric : this.starTreeField.getMetrics()) { for (MetricStat metricStat : metric.getMetrics()) { IndexNumericFieldData.NumericType numericType; - SequentialDocValuesIterator metricStatReader; Mapper fieldMapper = mapperService.documentMapper().mappers().getMapper(metric.getField()); if (fieldMapper instanceof NumberFieldMapper) { numericType = ((NumberFieldMapper) fieldMapper).fieldType().numericType(); @@ -145,24 +129,11 @@ public List generateMetricAggregatorInfos(MapperService ma throw new IllegalStateException("unsupported mapper type"); } - FieldInfo metricFieldInfos = state.fieldInfos.fieldInfo(metric.getField()); - DocValuesType metricDocValuesType = metricFieldInfos.getDocValuesType(); - if (metricStat != MetricStat.COUNT) { - metricStatReader = starTreeDocValuesIteratorAdapter.getDocValuesIterator( - metricDocValuesType, - metricFieldInfos, - fieldProducerMap.get(metricFieldInfos.name) - ); - } else { - metricStatReader = new SequentialDocValuesIterator(); - } - MetricAggregatorInfo metricAggregatorInfo = new MetricAggregatorInfo( metricStat, metric.getField(), starTreeField.getName(), - numericType, - metricStatReader + numericType ); metricAggregatorInfos.add(metricAggregatorInfo); } @@ -204,12 +175,17 @@ public List generateMetricAggregatorInfos(MapperService ma public abstract Long getDimensionValue(int docId, int dimensionId) throws IOException; /** - * Sorts and aggregates the star-tree document in the segment, and returns a star-tree document iterator for all the - * aggregated star-tree document. + * Sorts and aggregates all the documents in the segment as per the configuration, and returns a star-tree document iterator for all the + * aggregated star-tree documents. * + * @param dimensionReaders List of docValues readers to read dimensions from the segment + * @param metricReaders List of docValues readers to read metrics from the segment * @return Iterator for the aggregated star-tree document */ - public abstract Iterator sortAndAggregateStarTreeDocuments() throws IOException; + public abstract Iterator sortAndAggregateSegmentDocuments( + SequentialDocValuesIterator[] dimensionReaders, + List metricReaders + ) throws IOException; /** * Generates aggregated star-tree documents for star-node. 
@@ -223,13 +199,16 @@ public abstract Iterator generateStarTreeDocumentsForStarNode( throws IOException; /** - * Returns the star-tree document from the segment + * Returns the star-tree document from the segment based on the current doc id * - * @throws IOException when we are unable to build a star tree document from the segment */ - protected StarTreeDocument getSegmentStarTreeDocument(int currentDocId) throws IOException { - Long[] dimensions = getStarTreeDimensionsFromSegment(currentDocId); - Object[] metrics = getStarTreeMetricsFromSegment(currentDocId); + protected StarTreeDocument getSegmentStarTreeDocument( + int currentDocId, + SequentialDocValuesIterator[] dimensionReaders, + List metricReaders + ) throws IOException { + Long[] dimensions = getStarTreeDimensionsFromSegment(currentDocId, dimensionReaders); + Object[] metrics = getStarTreeMetricsFromSegment(currentDocId, metricReaders); return new StarTreeDocument(dimensions, metrics); } @@ -239,55 +218,48 @@ protected StarTreeDocument getSegmentStarTreeDocument(int currentDocId) throws I * @return dimension values for each of the star-tree dimension * @throws IOException when we are unable to iterate to the next doc for the given dimension readers */ - private Long[] getStarTreeDimensionsFromSegment(int currentDocId) throws IOException { + Long[] getStarTreeDimensionsFromSegment(int currentDocId, SequentialDocValuesIterator[] dimensionReaders) throws IOException { Long[] dimensions = new Long[numDimensions]; for (int i = 0; i < numDimensions; i++) { - try { - dimensions[i] = getValuesFromSegment(dimensionReaders[i], currentDocId); - } catch (Exception e) { - logger.error("unable to read the dimension values from the segment", e); - throw new IllegalStateException("unable to read the dimension values from the segment", e); + if (dimensionReaders[i] != null) { + try { + dimensionReaders[i].nextDoc(currentDocId); + } catch (IOException e) { + logger.error("unable to iterate to next doc", e); + throw new RuntimeException("unable to iterate to next doc", e); + } catch (Exception e) { + logger.error("unable to read the dimension values from the segment", e); + throw new IllegalStateException("unable to read the dimension values from the segment", e); + } + dimensions[i] = dimensionReaders[i].value(currentDocId); + } else { + throw new IllegalStateException("dimension readers are empty"); } - } return dimensions; } - /** - * Returns the next value from the iterator of respective field - * - * @param iterator respective field iterator - * @param currentDocId current document id - * @return the next value for the field - * @throws IOException when we are unable to iterate to the next doc for the given iterator - */ - private Long getValuesFromSegment(SequentialDocValuesIterator iterator, int currentDocId) throws IOException { - try { - starTreeDocValuesIteratorAdapter.nextDoc(iterator, currentDocId); - } catch (IOException e) { - logger.error("unable to iterate to next doc", e); - throw new RuntimeException("unable to iterate to next doc", e); - } - return starTreeDocValuesIteratorAdapter.getNextValue(iterator, currentDocId); - } - /** * Returns the metric values for the next document from the segment * * @return metric values for each of the star-tree metric * @throws IOException when we are unable to iterate to the next doc for the given metric readers */ - private Object[] getStarTreeMetricsFromSegment(int currentDocId) throws IOException { + private Object[] getStarTreeMetricsFromSegment(int currentDocId, List metricsReaders) throws 
IOException { Object[] metrics = new Object[numMetrics]; for (int i = 0; i < numMetrics; i++) { - SequentialDocValuesIterator metricStatReader = metricAggregatorInfos.get(i).getMetricStatReader(); + SequentialDocValuesIterator metricStatReader = metricsReaders.get(i); if (metricStatReader != null) { try { - metrics[i] = getValuesFromSegment(metricStatReader, currentDocId); + metricStatReader.nextDoc(currentDocId); + } catch (IOException e) { + logger.error("unable to iterate to next doc", e); + throw new RuntimeException("unable to iterate to next doc", e); } catch (Exception e) { logger.error("unable to read the metric values from the segment", e); throw new IllegalStateException("unable to read the metric values from the segment", e); } + metrics[i] = metricStatReader.value(currentDocId); } else { throw new IllegalStateException("metric readers are empty"); } @@ -306,7 +278,8 @@ private Object[] getStarTreeMetricsFromSegment(int currentDocId) throws IOExcept @SuppressWarnings({ "unchecked", "rawtypes" }) protected StarTreeDocument reduceSegmentStarTreeDocuments( StarTreeDocument aggregatedSegmentDocument, - StarTreeDocument segmentDocument + StarTreeDocument segmentDocument, + boolean isMerge ) { if (aggregatedSegmentDocument == null) { Long[] dimensions = Arrays.copyOf(segmentDocument.dimensions, numDimensions); @@ -314,11 +287,12 @@ protected StarTreeDocument reduceSegmentStarTreeDocuments( for (int i = 0; i < numMetrics; i++) { try { ValueAggregator metricValueAggregator = metricAggregatorInfos.get(i).getValueAggregators(); - StarTreeNumericType starTreeNumericType = metricAggregatorInfos.get(i).getAggregatedValueType(); - metrics[i] = metricValueAggregator.getInitialAggregatedValueForSegmentDocValue( - getLong(segmentDocument.metrics[i]), - starTreeNumericType - ); + if (isMerge) { + metrics[i] = metricValueAggregator.getInitialAggregatedValue(segmentDocument.metrics[i]); + } else { + metrics[i] = metricValueAggregator.getInitialAggregatedValueForSegmentDocValue(getLong(segmentDocument.metrics[i])); + } + } catch (Exception e) { logger.error("Cannot parse initial segment doc value", e); throw new IllegalStateException("Cannot parse initial segment doc value [" + segmentDocument.metrics[i] + "]"); @@ -329,12 +303,17 @@ protected StarTreeDocument reduceSegmentStarTreeDocuments( for (int i = 0; i < numMetrics; i++) { try { ValueAggregator metricValueAggregator = metricAggregatorInfos.get(i).getValueAggregators(); - StarTreeNumericType starTreeNumericType = metricAggregatorInfos.get(i).getAggregatedValueType(); - aggregatedSegmentDocument.metrics[i] = metricValueAggregator.mergeAggregatedValueAndSegmentValue( - aggregatedSegmentDocument.metrics[i], - getLong(segmentDocument.metrics[i]), - starTreeNumericType - ); + if (isMerge) { + aggregatedSegmentDocument.metrics[i] = metricValueAggregator.mergeAggregatedValues( + segmentDocument.metrics[i], + aggregatedSegmentDocument.metrics[i] + ); + } else { + aggregatedSegmentDocument.metrics[i] = metricValueAggregator.mergeAggregatedValueAndSegmentValue( + aggregatedSegmentDocument.metrics[i], + getLong(segmentDocument.metrics[i]) + ); + } } catch (Exception e) { logger.error("Cannot apply segment doc value for aggregation", e); throw new IllegalStateException("Cannot apply segment doc value for aggregation [" + segmentDocument.metrics[i] + "]"); @@ -364,7 +343,9 @@ private static long getLong(Object metric) { } if (metricValue == null) { - throw new IllegalStateException("unable to cast segment metric"); + return 0; + // TODO: handle this 
properly + // throw new IllegalStateException("unable to cast segment metric"); } return metricValue; } @@ -410,25 +391,88 @@ public StarTreeDocument reduceStarTreeDocuments(StarTreeDocument aggregatedDocum } /** - * Builds the star tree using total segment documents + * Builds the star tree from the original segment documents + * + * @param fieldProducerMap contains the docValues producer to get docValues associated with each field * * @throws IOException when we are unable to build star-tree */ - public void build() throws IOException { + public void build(Map fieldProducerMap) throws IOException { long startTime = System.currentTimeMillis(); logger.debug("Star-tree build is a go with star tree field {}", starTreeField.getName()); - if (totalSegmentDocs == 0) { logger.debug("No documents found in the segment"); return; } - - Iterator starTreeDocumentIterator = sortAndAggregateStarTreeDocuments(); + List metricReaders = getMetricReaders(state, fieldProducerMap); + List dimensionsSplitOrder = starTreeField.getDimensionsOrder(); + SequentialDocValuesIterator[] dimensionReaders = new SequentialDocValuesIterator[dimensionsSplitOrder.size()]; + for (int i = 0; i < numDimensions; i++) { + String dimension = dimensionsSplitOrder.get(i).getField(); + FieldInfo dimensionFieldInfo = state.fieldInfos.fieldInfo(dimension); + if (dimensionFieldInfo == null) { + dimensionFieldInfo = getFieldInfo(dimension); + } + dimensionReaders[i] = new SequentialDocValuesIterator( + fieldProducerMap.get(dimensionFieldInfo.name).getSortedNumeric(dimensionFieldInfo) + ); + } + Iterator starTreeDocumentIterator = sortAndAggregateSegmentDocuments(dimensionReaders, metricReaders); logger.debug("Sorting and aggregating star-tree in ms : {}", (System.currentTimeMillis() - startTime)); build(starTreeDocumentIterator); logger.debug("Finished Building star-tree in ms : {}", (System.currentTimeMillis() - startTime)); } + private static FieldInfo getFieldInfo(String field) { + return new FieldInfo( + field, + 1, + false, + false, + false, + IndexOptions.NONE, + DocValuesType.SORTED_NUMERIC, + -1, + Collections.emptyMap(), + 0, + 0, + 0, + 0, + VectorEncoding.FLOAT32, + VectorSimilarityFunction.EUCLIDEAN, + false, + false + ); + } + + /** + * Generates the metric readers needed to read the metric values for all the metrics on a field + * + * @return list of metric readers + */ + public List getMetricReaders(SegmentWriteState state, Map fieldProducerMap) + throws IOException { + List metricReaders = new ArrayList<>(); + for (Metric metric : this.starTreeField.getMetrics()) { + for (MetricStat metricStat : metric.getMetrics()) { + FieldInfo metricFieldInfo = state.fieldInfos.fieldInfo(metric.getField()); + if (metricFieldInfo == null) { + metricFieldInfo = getFieldInfo(metric.getField()); + } + // TODO + // if (metricStat != MetricStat.COUNT) { + // Need not initialize the metric reader for COUNT metric type + SequentialDocValuesIterator metricReader = new SequentialDocValuesIterator( + fieldProducerMap.get(metricFieldInfo.name).getSortedNumeric(metricFieldInfo) + ); + // } + + metricReaders.add(metricReader); + } + } + return metricReaders; + } + /** * Builds the star tree using Star-Tree Document * @@ -466,7 +510,6 @@ void build(Iterator starTreeDocumentIterator) throws IOExcepti // Create doc values indices in disk // Serialize and save in disk // Write star tree metadata for off heap implementation - } /** @@ -538,10 +581,10 @@ private Map constructNonStarNodes(int startDocId, int endDocId, Long nodeDimensionValue =
getDimensionValue(startDocId, dimensionId); for (int i = startDocId + 1; i < endDocId; i++) { Long dimensionValue = getDimensionValue(i, dimensionId); - if (!dimensionValue.equals(nodeDimensionValue)) { + if (Objects.equals(dimensionValue, nodeDimensionValue) == false) { TreeNode child = getNewNode(); child.dimensionId = dimensionId; - child.dimensionValue = nodeDimensionValue; + child.dimensionValue = nodeDimensionValue != null ? nodeDimensionValue : ALL; child.startDocId = nodeStartDocId; child.endDocId = i; nodes.put(nodeDimensionValue, child); @@ -552,7 +595,7 @@ private Map constructNonStarNodes(int startDocId, int endDocId, } TreeNode lastNode = getNewNode(); lastNode.dimensionId = dimensionId; - lastNode.dimensionValue = nodeDimensionValue; + lastNode.dimensionValue = nodeDimensionValue != null ? nodeDimensionValue : ALL; lastNode.startDocId = nodeStartDocId; lastNode.endDocId = endDocId; nodes.put(nodeDimensionValue, lastNode); @@ -607,7 +650,7 @@ private StarTreeDocument createAggregatedDocs(TreeNode node) throws IOException throw new IllegalStateException("aggregated star-tree document is null after reducing the documents"); } for (int i = node.dimensionId + 1; i < numDimensions; i++) { - aggregatedStarTreeDocument.dimensions[i] = Long.valueOf(STAR_IN_DOC_VALUES_INDEX); + aggregatedStarTreeDocument.dimensions[i] = STAR_IN_DOC_VALUES_INDEX; } node.aggregatedDocId = numStarTreeDocs; appendToStarTree(aggregatedStarTreeDocument); @@ -639,7 +682,7 @@ private StarTreeDocument createAggregatedDocs(TreeNode node) throws IOException throw new IllegalStateException("aggregated star-tree document is null after reducing the documents"); } for (int i = node.dimensionId + 1; i < numDimensions; i++) { - aggregatedStarTreeDocument.dimensions[i] = Long.valueOf(STAR_IN_DOC_VALUES_INDEX); + aggregatedStarTreeDocument.dimensions[i] = STAR_IN_DOC_VALUES_INDEX; } node.aggregatedDocId = numStarTreeDocs; appendToStarTree(aggregatedStarTreeDocument); @@ -665,4 +708,5 @@ public void close() throws IOException { } + abstract Iterator mergeStarTrees(List starTreeValues) throws IOException; } diff --git a/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/builder/OnHeapStarTreeBuilder.java b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/builder/OnHeapStarTreeBuilder.java index caeb24838da62..1599be2e76a56 100644 --- a/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/builder/OnHeapStarTreeBuilder.java +++ b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/builder/OnHeapStarTreeBuilder.java @@ -7,11 +7,14 @@ */ package org.opensearch.index.compositeindex.datacube.startree.builder; -import org.apache.lucene.codecs.DocValuesProducer; import org.apache.lucene.index.SegmentWriteState; +import org.apache.lucene.search.DocIdSetIterator; import org.opensearch.common.annotation.ExperimentalApi; +import org.opensearch.index.codec.composite.datacube.startree.StarTreeValues; +import org.opensearch.index.compositeindex.datacube.Dimension; import org.opensearch.index.compositeindex.datacube.startree.StarTreeDocument; import org.opensearch.index.compositeindex.datacube.startree.StarTreeField; +import org.opensearch.index.compositeindex.datacube.startree.utils.SequentialDocValuesIterator; import org.opensearch.index.mapper.MapperService; import java.io.IOException; @@ -36,27 +39,20 @@ public class OnHeapStarTreeBuilder extends BaseStarTreeBuilder { * Constructor for OnHeapStarTreeBuilder * * @param starTreeField star-tree 
field - * @param fieldProducerMap helps with document values producer for a particular field * @param segmentWriteState segment write state * @param mapperService helps with the numeric type of field - * @throws IOException throws an exception we are unable to construct an onheap star-tree */ - public OnHeapStarTreeBuilder( - StarTreeField starTreeField, - Map fieldProducerMap, - SegmentWriteState segmentWriteState, - MapperService mapperService - ) throws IOException { - super(starTreeField, fieldProducerMap, segmentWriteState, mapperService); + public OnHeapStarTreeBuilder(StarTreeField starTreeField, SegmentWriteState segmentWriteState, MapperService mapperService) { + super(starTreeField, segmentWriteState, mapperService); } @Override - public void appendStarTreeDocument(StarTreeDocument starTreeDocument) throws IOException { + public void appendStarTreeDocument(StarTreeDocument starTreeDocument) { starTreeDocuments.add(starTreeDocument); } @Override - public StarTreeDocument getStarTreeDocument(int docId) throws IOException { + public StarTreeDocument getStarTreeDocument(int docId) { return starTreeDocuments.get(docId); } @@ -66,34 +62,123 @@ public List getStarTreeDocuments() { } @Override - public Long getDimensionValue(int docId, int dimensionId) throws IOException { + public Long getDimensionValue(int docId, int dimensionId) { return starTreeDocuments.get(docId).dimensions[dimensionId]; } + /** + * Sorts and aggregates all the documents of the segment based on dimension and metrics configuration + * + * @param dimensionReaders List of docValues readers to read dimensions from the segment + * @param metricReaders List of docValues readers to read metrics from the segment + * @return Iterator of star-tree documents + * + */ @Override - public Iterator sortAndAggregateStarTreeDocuments() throws IOException { - int numDocs = totalSegmentDocs; - StarTreeDocument[] starTreeDocuments = new StarTreeDocument[numDocs]; - for (int currentDocId = 0; currentDocId < numDocs; currentDocId++) { - starTreeDocuments[currentDocId] = getSegmentStarTreeDocument(currentDocId); + public Iterator sortAndAggregateSegmentDocuments( + SequentialDocValuesIterator[] dimensionReaders, + List metricReaders + ) throws IOException { + StarTreeDocument[] starTreeDocuments = new StarTreeDocument[totalSegmentDocs]; + for (int currentDocId = 0; currentDocId < totalSegmentDocs; currentDocId++) { + // TODO : we can save empty iterator for dimensions which are not part of segment + starTreeDocuments[currentDocId] = getSegmentStarTreeDocument(currentDocId, dimensionReaders, metricReaders); } - return sortAndAggregateStarTreeDocuments(starTreeDocuments); } + @Override + public void build(List starTreeValuesSubs) throws IOException { + build(mergeStarTrees(starTreeValuesSubs)); + } + + /** + * Sorts and aggregates the star-tree documents from multiple segments and builds star tree based on the newly + * aggregated star-tree documents + * + * @param starTreeValuesSubs StarTreeValues from multiple segments + * @return iterator of star tree documents + */ + @Override + Iterator mergeStarTrees(List starTreeValuesSubs) throws IOException { + return sortAndAggregateStarTreeDocuments(getSegmentsStarTreeDocuments(starTreeValuesSubs), true); + } + + /** + * Returns an array of all the starTreeDocuments from all the segments + * We only take the non-star documents from all the segments. 
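Put differently, the merge path above reuses the flush machinery: the non-star rows of every input segment are concatenated and the same sort-and-aggregate pass runs over the union, as in this schematic sketch (Row and Segment are hypothetical stand-ins for StarTreeDocument and StarTreeValues):

    import java.util.ArrayList;
    import java.util.List;

    // Schematic merge input: concatenate the non-star rows of each input
    // segment; star rows are derivable, so they are rebuilt afterwards.
    class MergeInputSketch {
        record Row(long[] dims, double metric) {}
        record Segment(List<Row> nonStarRows) {}

        static List<Row> collect(List<Segment> segments) {
            List<Row> all = new ArrayList<>();
            for (Segment segment : segments) {
                all.addAll(segment.nonStarRows());
            }
            return all; // feed into the same sort-and-aggregate pass as flush
        }
    }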
+ * + * @param starTreeValuesSubs StarTreeValues from multiple segments + * @return array of star tree documents + */ + StarTreeDocument[] getSegmentsStarTreeDocuments(List starTreeValuesSubs) throws IOException { + List starTreeDocuments = new ArrayList<>(); + for (StarTreeValues starTreeValues : starTreeValuesSubs) { + List dimensionsSplitOrder = starTreeValues.getStarTreeField().getDimensionsOrder(); + SequentialDocValuesIterator[] dimensionReaders = new SequentialDocValuesIterator[dimensionsSplitOrder.size()]; + + for (int i = 0; i < dimensionsSplitOrder.size(); i++) { + String dimension = dimensionsSplitOrder.get(i).getField(); + dimensionReaders[i] = new SequentialDocValuesIterator(starTreeValues.getDimensionDocValuesIteratorMap().get(dimension)); + } + + List metricReaders = new ArrayList<>(); + for (Map.Entry metricDocValuesEntry : starTreeValues.getMetricDocValuesIteratorMap().entrySet()) { + metricReaders.add(new SequentialDocValuesIterator(metricDocValuesEntry.getValue())); + } + + boolean endOfDoc = false; + int currentDocId = 0; + int numSegmentDocs = Integer.parseInt( + starTreeValues.getAttributes().getOrDefault(NUM_SEGMENT_DOCS, String.valueOf(DocIdSetIterator.NO_MORE_DOCS)) + ); + while (currentDocId < numSegmentDocs) { + Long[] dims = new Long[dimensionsSplitOrder.size()]; + int i = 0; + for (SequentialDocValuesIterator dimensionDocValueIterator : dimensionReaders) { + dimensionDocValueIterator.nextDoc(currentDocId); + Long val = dimensionDocValueIterator.value(currentDocId); + dims[i] = val; + i++; + } + i = 0; + Object[] metrics = new Object[metricReaders.size()]; + for (SequentialDocValuesIterator metricDocValuesIterator : metricReaders) { + metricDocValuesIterator.nextDoc(currentDocId); + // As part of merge, we traverse the star tree doc values + // The type of data stored in metric fields is different from the + // actual indexing field they're based on + metrics[i] = metricAggregatorInfos.get(i) + .getValueAggregators() + .toStarTreeNumericTypeValue(metricDocValuesIterator.value(currentDocId)); + i++; + } + StarTreeDocument starTreeDocument = new StarTreeDocument(dims, metrics); + starTreeDocuments.add(starTreeDocument); + currentDocId++; + } + } + StarTreeDocument[] starTreeDocumentsArr = new StarTreeDocument[starTreeDocuments.size()]; + return starTreeDocuments.toArray(starTreeDocumentsArr); + } + + Iterator sortAndAggregateStarTreeDocuments(StarTreeDocument[] starTreeDocuments) { + return sortAndAggregateStarTreeDocuments(starTreeDocuments, false); + } + /** * Sort, aggregates and merges the star-tree documents * * @param starTreeDocuments star-tree documents * @return iterator for star-tree documents */ - Iterator sortAndAggregateStarTreeDocuments(StarTreeDocument[] starTreeDocuments) { + Iterator sortAndAggregateStarTreeDocuments(StarTreeDocument[] starTreeDocuments, boolean isMerge) { // sort all the documents sortStarTreeDocumentsFromDimensionId(starTreeDocuments, 0); // merge the documents - return mergeStarTreeDocuments(starTreeDocuments); + return mergeStarTreeDocuments(starTreeDocuments, isMerge); } /** @@ -102,7 +187,7 @@ Iterator sortAndAggregateStarTreeDocuments(StarTreeDocument[] * @param starTreeDocuments star-tree documents * @return iterator to aggregate star-tree documents */ - private Iterator mergeStarTreeDocuments(StarTreeDocument[] starTreeDocuments) { + private Iterator mergeStarTreeDocuments(StarTreeDocument[] starTreeDocuments, boolean isMerge) { return new Iterator<>() { boolean hasNext = true; StarTreeDocument currentStarTreeDocument = 
starTreeDocuments[0]; @@ -117,7 +202,7 @@ public boolean hasNext() { @Override public StarTreeDocument next() { // aggregate as we move on to the next doc - StarTreeDocument next = reduceSegmentStarTreeDocuments(null, currentStarTreeDocument); + StarTreeDocument next = reduceSegmentStarTreeDocuments(null, currentStarTreeDocument, isMerge); while (docId < starTreeDocuments.length) { StarTreeDocument starTreeDocument = starTreeDocuments[docId]; docId++; @@ -125,7 +210,7 @@ public StarTreeDocument next() { currentStarTreeDocument = starTreeDocument; return next; } else { - next = reduceSegmentStarTreeDocuments(next, starTreeDocument); + next = reduceSegmentStarTreeDocuments(next, starTreeDocument, isMerge); } } hasNext = false; @@ -141,11 +226,9 @@ public StarTreeDocument next() { * @param endDocId End document id (exclusive) in the star-tree * @param dimensionId Dimension id of the star-node * @return iterator for star-tree documents of star-node - * @throws IOException throws when unable to generate star-tree for star-node */ @Override - public Iterator generateStarTreeDocumentsForStarNode(int startDocId, int endDocId, int dimensionId) - throws IOException { + public Iterator generateStarTreeDocumentsForStarNode(int startDocId, int endDocId, int dimensionId) { int numDocs = endDocId - startDocId; StarTreeDocument[] starTreeDocuments = new StarTreeDocument[numDocs]; for (int i = 0; i < numDocs; i++) { @@ -177,7 +260,7 @@ public boolean hasNext() { @Override public StarTreeDocument next() { StarTreeDocument next = reduceStarTreeDocuments(null, currentStarTreeDocument); - next.dimensions[dimensionId] = Long.valueOf(STAR_IN_DOC_VALUES_INDEX); + next.dimensions[dimensionId] = STAR_IN_DOC_VALUES_INDEX; while (docId < numDocs) { StarTreeDocument starTreeDocument = starTreeDocuments[docId]; docId++; @@ -204,6 +287,15 @@ private void sortStarTreeDocumentsFromDimensionId(StarTreeDocument[] starTreeDoc Arrays.sort(starTreeDocuments, (o1, o2) -> { for (int i = dimensionId; i < numDimensions; i++) { if (!Objects.equals(o1.dimensions[i], o2.dimensions[i])) { + if (o1.dimensions[i] == null && o2.dimensions[i] == null) { + return 0; + } + if (o1.dimensions[i] == null) { + return 1; + } + if (o2.dimensions[i] == null) { + return -1; + } return Long.compare(o1.dimensions[i], o2.dimensions[i]); } } diff --git a/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/builder/StarTreeBuilder.java b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/builder/StarTreeBuilder.java index 20af1b3bc7935..94c9c9f2efb18 100644 --- a/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/builder/StarTreeBuilder.java +++ b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/builder/StarTreeBuilder.java @@ -8,10 +8,14 @@ package org.opensearch.index.compositeindex.datacube.startree.builder; +import org.apache.lucene.codecs.DocValuesProducer; import org.opensearch.common.annotation.ExperimentalApi; +import org.opensearch.index.codec.composite.datacube.startree.StarTreeValues; import java.io.Closeable; import java.io.IOException; +import java.util.List; +import java.util.Map; /** * A star-tree builder that builds a single star-tree. 
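A note on the comparator change shown above: the added null checks give missing dimension values a defined position, sorting them after every present value. The same ordering can be expressed with a null-safe comparator; a minimal sketch, not the patch's code:

    import java.util.Comparator;

    class NullLastDimOrderSketch {
        public static void main(String[] args) {
            // Equivalent ordering to the added null checks: nulls sort last.
            Comparator<Long> dimOrder = Comparator.nullsLast(Comparator.naturalOrder());
            System.out.println(dimOrder.compare(1L, null) < 0);    // true: values before nulls
            System.out.println(dimOrder.compare(null, null) == 0); // true: nulls tie
        }
    }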
@@ -20,10 +24,20 @@ */ @ExperimentalApi public interface StarTreeBuilder extends Closeable { + /** + * Builds the star tree from the original segment documents + * + * @param fieldProducerMap contains the docValues producer to get docValues associated with each field + * @throws IOException when we are unable to build star-tree + */ + + void build(Map fieldProducerMap) throws IOException; /** - * Builds the star tree based on star-tree field + * Builds the star tree using StarTree values from multiple segments + * + * @param starTreeValuesSubs contains the star tree values from multiple segments * @throws IOException when we are unable to build star-tree */ - void build() throws IOException; + void build(List starTreeValuesSubs) throws IOException; } diff --git a/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/builder/StarTreeDocValuesIteratorAdapter.java b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/builder/StarTreeDocValuesIteratorAdapter.java deleted file mode 100644 index cb0350bb110b0..0000000000000 --- a/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/builder/StarTreeDocValuesIteratorAdapter.java +++ /dev/null @@ -1,82 +0,0 @@ -/* - * SPDX-License-Identifier: Apache-2.0 - * - * The OpenSearch Contributors require contributions made to - * this file be licensed under the Apache-2.0 license or a - * compatible open source license. - */ - -package org.opensearch.index.compositeindex.datacube.startree.builder; - -import org.apache.lucene.codecs.DocValuesProducer; -import org.apache.lucene.index.DocValuesType; -import org.apache.lucene.index.FieldInfo; -import org.apache.lucene.index.SortedNumericDocValues; -import org.apache.lucene.search.DocIdSetIterator; -import org.opensearch.common.annotation.ExperimentalApi; -import org.opensearch.index.compositeindex.datacube.startree.utils.SequentialDocValuesIterator; - -import java.io.IOException; - -/** - * A factory class to return respective doc values iterator based on the doc volues type. 
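The revised StarTreeBuilder interface above splits building into a flush path (raw doc-values producers) and a merge path (per-segment star-tree values). A hypothetical call site might dispatch as below; the generic parameters and the surrounding plumbing are assumptions, not shown by this patch:

    import java.io.IOException;
    import java.util.List;
    import java.util.Map;

    import org.apache.lucene.codecs.DocValuesProducer;
    import org.opensearch.index.codec.composite.datacube.startree.StarTreeValues;
    import org.opensearch.index.compositeindex.datacube.startree.builder.StarTreeBuilder;

    // Hypothetical dispatch over the two build(...) entry points; the
    // builder and its inputs are assumed to be supplied by the codec's
    // flush and merge paths, and closing is left to the caller.
    class BuildDispatchSketch {
        static void flushOrMerge(StarTreeBuilder builder,
                                 Map<String, DocValuesProducer> fieldProducerMap,
                                 List<StarTreeValues> starTreeValuesSubs,
                                 boolean merging) throws IOException {
            if (merging) {
                builder.build(starTreeValuesSubs); // merge: re-aggregate per-segment trees
            } else {
                builder.build(fieldProducerMap);   // flush: read the live segment's doc values
            }
        }
    }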
- * - * @opensearch.experimental - */ -@ExperimentalApi -public class StarTreeDocValuesIteratorAdapter { - - /** - * Creates an iterator for the given doc values type and field using the doc values producer - */ - public SequentialDocValuesIterator getDocValuesIterator(DocValuesType type, FieldInfo field, DocValuesProducer producer) - throws IOException { - switch (type) { - case SORTED_NUMERIC: - return new SequentialDocValuesIterator(producer.getSortedNumeric(field)); - default: - throw new IllegalArgumentException("Unsupported DocValuesType: " + type); - } - } - - /** - * Returns the next value for the given iterator - */ - public Long getNextValue(SequentialDocValuesIterator sequentialDocValuesIterator, int currentDocId) throws IOException { - if (sequentialDocValuesIterator.getDocIdSetIterator() instanceof SortedNumericDocValues) { - SortedNumericDocValues sortedNumericDocValues = (SortedNumericDocValues) sequentialDocValuesIterator.getDocIdSetIterator(); - if (sequentialDocValuesIterator.getDocId() < 0 || sequentialDocValuesIterator.getDocId() == DocIdSetIterator.NO_MORE_DOCS) { - throw new IllegalStateException("invalid doc id to fetch the next value"); - } - - if (sequentialDocValuesIterator.getDocValue() == null) { - sequentialDocValuesIterator.setDocValue(sortedNumericDocValues.nextValue()); - return sequentialDocValuesIterator.getDocValue(); - } - - if (sequentialDocValuesIterator.getDocId() == currentDocId) { - Long nextValue = sequentialDocValuesIterator.getDocValue(); - sequentialDocValuesIterator.setDocValue(null); - return nextValue; - } else { - return null; - } - } else { - throw new IllegalStateException("Unsupported Iterator: " + sequentialDocValuesIterator.getDocIdSetIterator().toString()); - } - } - - /** - * Moves to the next doc in the iterator - * Returns the doc id for the next document from the given iterator - */ - public int nextDoc(SequentialDocValuesIterator iterator, int currentDocId) throws IOException { - if (iterator.getDocValue() != null) { - return iterator.getDocId(); - } - iterator.setDocId(iterator.getDocIdSetIterator().nextDoc()); - iterator.setDocValue(this.getNextValue(iterator, currentDocId)); - return iterator.getDocId(); - } - -} diff --git a/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/builder/StarTreesBuilder.java b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/builder/StarTreesBuilder.java index eaf9ae1dcdaa1..6c3d476aa3a55 100644 --- a/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/builder/StarTreesBuilder.java +++ b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/builder/StarTreesBuilder.java @@ -13,6 +13,7 @@ import org.apache.lucene.codecs.DocValuesProducer; import org.apache.lucene.index.SegmentWriteState; import org.opensearch.common.annotation.ExperimentalApi; +import org.opensearch.index.codec.composite.datacube.startree.StarTreeValues; import org.opensearch.index.compositeindex.datacube.startree.StarTreeField; import org.opensearch.index.mapper.CompositeMappedFieldType; import org.opensearch.index.mapper.MapperService; @@ -37,14 +38,9 @@ public class StarTreesBuilder implements Closeable { private final List starTreeFields; private final SegmentWriteState state; - private final Map fieldProducerMap; private final MapperService mapperService; - public StarTreesBuilder( - Map fieldProducerMap, - SegmentWriteState segmentWriteState, - MapperService mapperService - ) { + public StarTreesBuilder(SegmentWriteState 
segmentWriteState, MapperService mapperService) { List starTreeFields = new ArrayList<>(); for (CompositeMappedFieldType compositeMappedFieldType : mapperService.getCompositeFieldTypes()) { if (compositeMappedFieldType instanceof StarTreeMapper.StarTreeFieldType) { @@ -59,9 +55,7 @@ public StarTreesBuilder( ); } } - this.starTreeFields = starTreeFields; - this.fieldProducerMap = fieldProducerMap; this.state = segmentWriteState; this.mapperService = mapperService; } @@ -69,38 +63,67 @@ public StarTreesBuilder( /** * Builds the star-trees. */ - public void build() throws IOException { + public void build(Map fieldProducerMap) throws IOException { if (starTreeFields.isEmpty()) { logger.debug("no star-tree fields found, returning from star-tree builder"); return; } long startTime = System.currentTimeMillis(); + int numStarTrees = starTreeFields.size(); logger.debug("Starting building {} star-trees with star-tree fields", numStarTrees); // Build all star-trees for (StarTreeField starTreeField : starTreeFields) { - try (StarTreeBuilder starTreeBuilder = getStarTreeBuilder(starTreeField, fieldProducerMap, state, mapperService)) { - starTreeBuilder.build(); + try (StarTreeBuilder starTreeBuilder = getSingleTreeBuilder(starTreeField, state, mapperService)) { + starTreeBuilder.build(fieldProducerMap); } } - logger.debug("Took {} ms to building {} star-trees with star-tree fields", System.currentTimeMillis() - startTime, numStarTrees); + logger.debug("Took {} ms to build {} star-trees with star-tree fields", System.currentTimeMillis() - startTime, numStarTrees); } @Override public void close() throws IOException { + // TODO : close files + } + /** + * Merges star tree fields from multiple segments + * + * @param starTreeValuesSubsPerField starTreeValuesSubs per field + */ + public void buildDuringMerge(final Map> starTreeValuesSubsPerField) throws IOException { + logger.debug("Starting merge of {} star-trees with star-tree fields", starTreeValuesSubsPerField.size()); + long startTime = System.currentTimeMillis(); + for (Map.Entry> entry : starTreeValuesSubsPerField.entrySet()) { + List starTreeValuesList = entry.getValue(); + if (starTreeValuesList.isEmpty()) { + logger.debug("StarTreeValues is empty for all segments for field : {}", entry.getKey()); + continue; + } + StarTreeField starTreeField = starTreeValuesList.get(0).getStarTreeField(); + StarTreeBuilder builder = getSingleTreeBuilder(starTreeField, state, mapperService); + builder.build(starTreeValuesList); + builder.close(); + } + logger.debug( + "Took {} ms to merge {} star-trees with star-tree fields", + System.currentTimeMillis() - startTime, + starTreeValuesSubsPerField.size() + ); } - StarTreeBuilder getStarTreeBuilder( - StarTreeField starTreeField, - Map fieldProducerMap, - SegmentWriteState state, - MapperService mapperService - ) throws IOException { + /** + * Get star-tree builder based on build mode. 
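One design note on buildDuringMerge above: each per-field builder is closed with an explicit close() call, so an exception thrown by build(...) would skip the close. A try-with-resources variant keeps the close unconditional; a sketch with simplified stand-in types:

    import java.io.Closeable;
    import java.io.IOException;
    import java.util.List;
    import java.util.Map;
    import java.util.function.Function;

    // Sketch of the merge loop with try-with-resources instead of an
    // explicit close(); Builder and the factory are simplified stand-ins.
    class MergeLoopSketch {
        interface Builder extends Closeable {
            void build(List<Object> starTreeValuesSubs) throws IOException;
        }

        static void buildDuringMerge(Map<String, List<Object>> subsPerField,
                                     Function<String, Builder> builderFactory) throws IOException {
            for (Map.Entry<String, List<Object>> entry : subsPerField.entrySet()) {
                if (entry.getValue().isEmpty()) {
                    continue; // nothing to merge for this field
                }
                try (Builder builder = builderFactory.apply(entry.getKey())) {
                    builder.build(entry.getValue()); // builder closed even if build throws
                }
            }
        }
    }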
+ */ + StarTreeBuilder getSingleTreeBuilder(StarTreeField starTreeField, SegmentWriteState state, MapperService mapperService) + throws IOException { switch (starTreeField.getStarTreeConfig().getBuildMode()) { case ON_HEAP: - return new OnHeapStarTreeBuilder(starTreeField, fieldProducerMap, state, mapperService); + return new OnHeapStarTreeBuilder(starTreeField, state, mapperService); + case OFF_HEAP: + // TODO + // return new OffHeapStarTreeBuilder(starTreeField, state, mapperService); default: throw new IllegalArgumentException( String.format( diff --git a/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/node/StarTreeNode.java b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/node/StarTreeNode.java new file mode 100644 index 0000000000000..59522ffa4be89 --- /dev/null +++ b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/node/StarTreeNode.java @@ -0,0 +1,112 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +package org.opensearch.index.compositeindex.datacube.startree.node; + +import org.opensearch.common.annotation.ExperimentalApi; + +import java.io.IOException; +import java.util.Iterator; + +/** + * Interface that represents star tree node + * + * @opensearch.experimental + */ +@ExperimentalApi +public interface StarTreeNode { + long ALL = -1l; + + /** + * Returns the dimension ID of the current star-tree node. + * + * @return the dimension ID + * @throws IOException if an I/O error occurs while reading the dimension ID + */ + int getDimensionId() throws IOException; + + /** + * Returns the dimension value of the current star-tree node. + * + * @return the dimension value + * @throws IOException if an I/O error occurs while reading the dimension value + */ + long getDimensionValue() throws IOException; + + /** + * Returns the dimension ID of the child star-tree node. + * + * @return the child dimension ID + * @throws IOException if an I/O error occurs while reading the child dimension ID + */ + int getChildDimensionId() throws IOException; + + /** + * Returns the start document ID of the current star-tree node. + * + * @return the start document ID + * @throws IOException if an I/O error occurs while reading the start document ID + */ + int getStartDocId() throws IOException; + + /** + * Returns the end document ID of the current star-tree node. + * + * @return the end document ID + * @throws IOException if an I/O error occurs while reading the end document ID + */ + int getEndDocId() throws IOException; + + /** + * Returns the aggregated document ID of the current star-tree node. + * + * @return the aggregated document ID + * @throws IOException if an I/O error occurs while reading the aggregated document ID + */ + int getAggregatedDocId() throws IOException; + + /** + * Returns the number of children of the current star-tree node. + * + * @return the number of children + * @throws IOException if an I/O error occurs while reading the number of children + */ + int getNumChildren() throws IOException; + + /** + * Checks if the current node is a leaf star-tree node. + * + * @return true if the node is a leaf node, false otherwise + */ + boolean isLeaf(); + + /** + * Checks if the current node is a star node. 
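To make the node contract concrete, a hypothetical lookup that prefers an exact child and falls back to the star child; it only uses methods and the ALL sentinel declared on the StarTreeNode interface above, and assumes some implementation exists:

    import java.io.IOException;

    import org.opensearch.index.compositeindex.datacube.startree.node.StarTreeNode;

    // Hypothetical resolution of a dimension value against a node's
    // children: exact match first, then the star child (ALL sentinel).
    class ChildLookupSketch {
        static StarTreeNode resolve(StarTreeNode node, long dimensionValue) throws IOException {
            StarTreeNode exact = node.getChildForDimensionValue(dimensionValue);
            return exact != null ? exact : node.getChildForDimensionValue(StarTreeNode.ALL);
        }
    }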
+ * + * @return true if the node is a star node, false otherwise + * @throws IOException if an I/O error occurs while reading the star node status + */ + boolean isStarNode() throws IOException; + + /** + * Returns the child star-tree node for the given dimension value. + * + * @param dimensionValue the dimension value + * @return the child node for the given dimension value or null if child is not present + * @throws IOException if an I/O error occurs while retrieving the child node + */ + StarTreeNode getChildForDimensionValue(long dimensionValue) throws IOException; + + /** + * Returns an iterator over the children of the current star-tree node. + * + * @return an iterator over the children + * @throws IOException if an I/O error occurs while retrieving the children iterator + */ + Iterator getChildrenIterator() throws IOException; +} diff --git a/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/node/package-info.java b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/node/package-info.java new file mode 100644 index 0000000000000..516d5b5a012ab --- /dev/null +++ b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/node/package-info.java @@ -0,0 +1,12 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +/** + * Holds classes associated with star tree node + */ +package org.opensearch.index.compositeindex.datacube.startree.node; diff --git a/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/utils/SequentialDocValuesIterator.java b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/utils/SequentialDocValuesIterator.java index cf5f3e94c1ca6..400d7a1c00104 100644 --- a/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/utils/SequentialDocValuesIterator.java +++ b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/utils/SequentialDocValuesIterator.java @@ -1,3 +1,4 @@ + /* * SPDX-License-Identifier: Apache-2.0 * @@ -17,7 +18,6 @@ /** * Coordinates the reading of documents across multiple DocIdSetIterators. * It encapsulates a single DocIdSetIterator and maintains the latest document ID and its associated value. - * * @opensearch.experimental */ @ExperimentalApi @@ -28,15 +28,10 @@ public class SequentialDocValuesIterator { */ private final DocIdSetIterator docIdSetIterator; - /** - * The value associated with the latest document. - */ - private Long docValue; - /** * The id of the latest document. */ - private int docId; + private int docId = -1; /** * Constructs a new SequentialDocValuesIterator instance with the given DocIdSetIterator. @@ -47,85 +42,15 @@ public SequentialDocValuesIterator(DocIdSetIterator docIdSetIterator) { this.docIdSetIterator = docIdSetIterator; } - /** - * Constructs a new SequentialDocValuesIterator instance with the given SortedNumericDocValues. 
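The hunk that follows replaces the iterator's cached-value state with two forward-only methods, nextDoc and value. A simplified standalone model of that contract, with stand-in types rather than Lucene's:

    import java.util.Iterator;

    // Simplified model of the forward-only contract: nextDoc only
    // advances when the stored doc id is behind the requested one, and
    // value(...) returns null when the iterator is not positioned on
    // the requested doc.
    class ForwardOnlyValuesSketch {
        private final Iterator<long[]> docs; // each entry: { docId, value }
        private long[] current;              // last entry read, or null
        private int docId = -1;              // id of the latest document

        ForwardOnlyValuesSketch(Iterator<long[]> docs) {
            this.docs = docs;
        }

        int nextDoc(int requestedDocId) {
            if (docId >= requestedDocId) {
                return docId; // already at or past the requested doc
            }
            current = docs.hasNext() ? docs.next() : null;
            docId = current == null ? Integer.MAX_VALUE : (int) current[0];
            return docId;
        }

        Long value(int requestedDocId) {
            if (current == null || docId != requestedDocId) {
                return null; // this doc has no value for the field
            }
            return current[1];
        }
    }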
- * - */ - public SequentialDocValuesIterator() { - this.docIdSetIterator = new SortedNumericDocValues() { - @Override - public long nextValue() throws IOException { - return 0; - } - - @Override - public int docValueCount() { - return 0; - } - - @Override - public boolean advanceExact(int i) throws IOException { - return false; - } - - @Override - public int docID() { - return 0; - } - - @Override - public int nextDoc() throws IOException { - return 0; - } - - @Override - public int advance(int i) throws IOException { - return 0; - } - - @Override - public long cost() { - return 0; - } - }; - } - - /** - * Returns the value associated with the latest document. - * - * @return the value associated with the latest document - */ - public Long getDocValue() { - return docValue; - } - - /** - * Sets the value associated with the latest document. - * - * @param docValue the value to be associated with the latest document - */ - public void setDocValue(Long docValue) { - this.docValue = docValue; - } - /** * Returns the id of the latest document. * * @return the id of the latest document */ - public int getDocId() { + int getDocId() { return docId; } - /** - * Sets the id of the latest document. - * - * @param docId the ID of the latest document - */ - public void setDocId(int docId) { - this.docId = docId; - } - /** * Returns the DocIdSetIterator associated with this instance. * @@ -134,4 +59,32 @@ public void setDocId(int docId) { public DocIdSetIterator getDocIdSetIterator() { return docIdSetIterator; } + + public int nextDoc(int currentDocId) throws IOException { + // if the stored doc id is greater than or equal to the requested doc id, return the stored doc id + if (docId >= currentDocId) { + return docId; + } + docId = this.docIdSetIterator.nextDoc(); + return docId; + } + + public Long value(int currentDocId) throws IOException { + if (this.getDocIdSetIterator() instanceof SortedNumericDocValues) { + SortedNumericDocValues sortedNumericDocValues = (SortedNumericDocValues) this.getDocIdSetIterator(); + if (currentDocId < 0) { + throw new IllegalStateException("invalid doc id to fetch the next value"); + } + if (currentDocId == DocIdSetIterator.NO_MORE_DOCS) { + throw new IllegalStateException("DocValuesIterator is already exhausted"); + } + if (docId == DocIdSetIterator.NO_MORE_DOCS || docId != currentDocId) { + return null; + } + return sortedNumericDocValues.nextValue(); + + } else { + throw new IllegalStateException("Unsupported Iterator requested for SequentialDocValuesIterator"); + } + } } diff --git a/server/src/main/java/org/opensearch/index/mapper/CompositeMappedFieldType.java b/server/src/main/java/org/opensearch/index/mapper/CompositeMappedFieldType.java index e067e70621304..7239ddfb26c0d 100644 --- a/server/src/main/java/org/opensearch/index/mapper/CompositeMappedFieldType.java +++ b/server/src/main/java/org/opensearch/index/mapper/CompositeMappedFieldType.java @@ -72,6 +72,10 @@ public static CompositeFieldType fromName(String name) { } } + public CompositeFieldType getCompositeIndexType() { + return type; + } + public List fields() { return fields; } diff --git a/server/src/test/java/org/opensearch/index/codec/composite/datacube/startree/StarTreeDocValuesFormatTests.java b/server/src/test/java/org/opensearch/index/codec/composite/datacube/startree/StarTreeDocValuesFormatTests.java index 31df9a49bebfb..049d91bc42d9c 100644 --- a/server/src/test/java/org/opensearch/index/codec/composite/datacube/startree/StarTreeDocValuesFormatTests.java +++
b/server/src/test/java/org/opensearch/index/codec/composite/datacube/startree/StarTreeDocValuesFormatTests.java @@ -12,63 +12,165 @@ import org.apache.logging.log4j.Logger; import org.apache.lucene.codecs.Codec; import org.apache.lucene.codecs.lucene99.Lucene99Codec; +import org.apache.lucene.document.Document; +import org.apache.lucene.document.SortedNumericDocValuesField; +import org.apache.lucene.index.IndexWriterConfig; +import org.apache.lucene.store.Directory; import org.apache.lucene.tests.index.BaseDocValuesFormatTestCase; +import org.apache.lucene.tests.index.RandomIndexWriter; import org.apache.lucene.tests.util.LuceneTestCase; -import org.opensearch.common.Rounding; +import org.opensearch.Version; +import org.opensearch.cluster.ClusterModule; +import org.opensearch.cluster.metadata.IndexMetadata; +import org.opensearch.common.CheckedConsumer; +import org.opensearch.common.settings.Settings; +import org.opensearch.common.util.FeatureFlags; +import org.opensearch.common.xcontent.XContentFactory; +import org.opensearch.core.xcontent.NamedXContentRegistry; +import org.opensearch.core.xcontent.XContentBuilder; +import org.opensearch.index.MapperTestUtils; import org.opensearch.index.codec.composite.Composite99Codec; -import org.opensearch.index.compositeindex.datacube.DateDimension; -import org.opensearch.index.compositeindex.datacube.Dimension; -import org.opensearch.index.compositeindex.datacube.Metric; -import org.opensearch.index.compositeindex.datacube.MetricStat; -import org.opensearch.index.compositeindex.datacube.NumericDimension; -import org.opensearch.index.compositeindex.datacube.startree.StarTreeField; -import org.opensearch.index.compositeindex.datacube.startree.StarTreeFieldConfiguration; import org.opensearch.index.mapper.MapperService; -import org.opensearch.index.mapper.StarTreeMapper; +import org.opensearch.indices.IndicesModule; +import org.junit.After; +import org.junit.AfterClass; +import org.junit.BeforeClass; -import java.util.ArrayList; +import java.io.IOException; import java.util.Collections; -import java.util.List; -import java.util.Set; -import org.mockito.Mockito; +import static org.opensearch.common.util.FeatureFlags.STAR_TREE_INDEX; /** * Star tree doc values Lucene tests */ @LuceneTestCase.SuppressSysoutChecks(bugUrl = "we log a lot on purpose") public class StarTreeDocValuesFormatTests extends BaseDocValuesFormatTestCase { + MapperService mapperService = null; + + @BeforeClass + public static void createMapper() throws Exception { + FeatureFlags.initializeFeatureFlags(Settings.builder().put(STAR_TREE_INDEX, "true").build()); + } + + @AfterClass + public static void clearMapper() { + FeatureFlags.initializeFeatureFlags(Settings.EMPTY); + } + + @After + public void teardown() throws IOException { + mapperService.close(); + } + @Override protected Codec getCodec() { - MapperService service = Mockito.mock(MapperService.class); - Mockito.when(service.getCompositeFieldTypes()).thenReturn(Set.of(getStarTreeFieldType())); final Logger testLogger = LogManager.getLogger(StarTreeDocValuesFormatTests.class); - return new Composite99Codec(Lucene99Codec.Mode.BEST_SPEED, service, testLogger); + + try { + createMapperService(getExpandedMapping("status", "size")); + } catch (IOException e) { + throw new RuntimeException(e); + } + Codec codec = new Composite99Codec(Lucene99Codec.Mode.BEST_SPEED, mapperService, testLogger); + return codec; } - private StarTreeMapper.StarTreeFieldType getStarTreeFieldType() { - List m1 = new ArrayList<>(); - m1.add(MetricStat.MAX); - 
Metric metric = new Metric("sndv", m1); - List d1CalendarIntervals = new ArrayList<>(); - d1CalendarIntervals.add(Rounding.DateTimeUnit.HOUR_OF_DAY); - StarTreeField starTreeField = getStarTreeField(d1CalendarIntervals, metric); + public void testStarTreeDocValues() throws IOException { + Directory directory = newDirectory(); + IndexWriterConfig conf = newIndexWriterConfig(null); + conf.setMergePolicy(newLogMergePolicy()); + RandomIndexWriter iw = new RandomIndexWriter(random(), directory, conf); + Document doc = new Document(); + doc.add(new SortedNumericDocValuesField("sndv", 1)); + doc.add(new SortedNumericDocValuesField("dv", 1)); + doc.add(new SortedNumericDocValuesField("field", 1)); + iw.addDocument(doc); + doc.add(new SortedNumericDocValuesField("sndv", 1)); + doc.add(new SortedNumericDocValuesField("dv", 1)); + doc.add(new SortedNumericDocValuesField("field", 1)); + iw.addDocument(doc); + iw.forceMerge(1); + doc.add(new SortedNumericDocValuesField("sndv", 2)); + doc.add(new SortedNumericDocValuesField("dv", 2)); + doc.add(new SortedNumericDocValuesField("field", 2)); + iw.addDocument(doc); + doc.add(new SortedNumericDocValuesField("sndv", 2)); + doc.add(new SortedNumericDocValuesField("dv", 2)); + doc.add(new SortedNumericDocValuesField("field", 2)); + iw.addDocument(doc); + iw.forceMerge(1); + iw.close(); + + // TODO : validate star tree structures that got created + directory.close(); + } - return new StarTreeMapper.StarTreeFieldType("star_tree", starTreeField); + private XContentBuilder getExpandedMapping(String dim, String metric) throws IOException { + return topMapping(b -> { + b.startObject("composite"); + b.startObject("startree"); + b.field("type", "star_tree"); + b.startObject("config"); + b.field("max_leaf_docs", 100); + b.startArray("ordered_dimensions"); + b.startObject(); + b.field("name", "sndv"); + b.endObject(); + b.startObject(); + b.field("name", "dv"); + b.endObject(); + b.endArray(); + b.startArray("metrics"); + b.startObject(); + b.field("name", "field"); + b.startArray("stats"); + b.value("sum"); + b.value("count"); // TODO : THIS TEST FAILS. 
+ b.endArray(); + b.endObject(); + b.endArray(); + b.endObject(); + b.endObject(); + b.endObject(); + b.startObject("properties"); + b.startObject("sndv"); + b.field("type", "integer"); + b.endObject(); + b.startObject("dv"); + b.field("type", "integer"); + b.endObject(); + b.startObject("field"); + b.field("type", "integer"); + b.endObject(); + b.endObject(); + }); } - private static StarTreeField getStarTreeField(List d1CalendarIntervals, Metric metric1) { - DateDimension d1 = new DateDimension("field", d1CalendarIntervals); - NumericDimension d2 = new NumericDimension("dv"); + private XContentBuilder topMapping(CheckedConsumer buildFields) throws IOException { + XContentBuilder builder = XContentFactory.jsonBuilder().startObject().startObject("_doc"); + buildFields.accept(builder); + return builder.endObject().endObject(); + } - List metrics = List.of(metric1); - List dims = List.of(d1, d2); - StarTreeFieldConfiguration config = new StarTreeFieldConfiguration( - 100, - Collections.emptySet(), - StarTreeFieldConfiguration.StarTreeBuildMode.OFF_HEAP + private void createMapperService(XContentBuilder builder) throws IOException { + IndexMetadata indexMetadata = IndexMetadata.builder("test") + .settings( + Settings.builder() + .put(IndexMetadata.SETTING_VERSION_CREATED, Version.CURRENT) + .put(IndexMetadata.SETTING_NUMBER_OF_SHARDS, 1) + .put(IndexMetadata.SETTING_NUMBER_OF_REPLICAS, 1) + ) + .putMapping(builder.toString()) + .build(); + IndicesModule indicesModule = new IndicesModule(Collections.emptyList()); + mapperService = MapperTestUtils.newMapperServiceWithHelperAnalyzer( + new NamedXContentRegistry(ClusterModule.getNamedXWriteables()), + createTempDir(), + Settings.EMPTY, + indicesModule, + "test" ); - - return new StarTreeField("starTree", dims, metrics, config); + mapperService.merge(indexMetadata, MapperService.MergeReason.INDEX_TEMPLATE); } } diff --git a/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/CountValueAggregatorTests.java b/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/CountValueAggregatorTests.java index e30e203406a6c..8e6e9e9974646 100644 --- a/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/CountValueAggregatorTests.java +++ b/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/CountValueAggregatorTests.java @@ -13,7 +13,7 @@ import org.opensearch.test.OpenSearchTestCase; public class CountValueAggregatorTests extends OpenSearchTestCase { - private final CountValueAggregator aggregator = new CountValueAggregator(); + private final CountValueAggregator aggregator = new CountValueAggregator(StarTreeNumericType.LONG); public void testGetAggregationType() { assertEquals(MetricStat.COUNT.getTypeName(), aggregator.getAggregationType().getTypeName()); @@ -24,11 +24,11 @@ public void testGetAggregatedValueType() { } public void testGetInitialAggregatedValueForSegmentDocValue() { - assertEquals(1L, aggregator.getInitialAggregatedValueForSegmentDocValue(randomLong(), StarTreeNumericType.LONG), 0.0); + assertEquals(1L, aggregator.getInitialAggregatedValueForSegmentDocValue(randomLong()), 0.0); } public void testMergeAggregatedValueAndSegmentValue() { - assertEquals(3L, aggregator.mergeAggregatedValueAndSegmentValue(2L, 3L, StarTreeNumericType.LONG), 0.0); + assertEquals(3L, aggregator.mergeAggregatedValueAndSegmentValue(2L, 3L), 0.0); } public void testMergeAggregatedValues() { @@ -48,6 +48,6 @@ public void testToLongValue() 
{ } public void testToStarTreeNumericTypeValue() { - assertEquals(3L, aggregator.toStarTreeNumericTypeValue(3L, StarTreeNumericType.LONG), 0.0); + assertEquals(3L, aggregator.toStarTreeNumericTypeValue(3L), 0.0); } } diff --git a/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/MetricAggregatorInfoTests.java b/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/MetricAggregatorInfoTests.java index d08f637a3f0a9..73e6aeb44cfd7 100644 --- a/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/MetricAggregatorInfoTests.java +++ b/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/MetricAggregatorInfoTests.java @@ -19,8 +19,7 @@ public void testConstructor() { MetricStat.SUM, "column1", "star_tree_field", - IndexNumericFieldData.NumericType.DOUBLE, - null + IndexNumericFieldData.NumericType.DOUBLE ); assertEquals(MetricStat.SUM, pair.getMetricStat()); assertEquals("column1", pair.getField()); @@ -31,8 +30,7 @@ public void testCountStarConstructor() { MetricStat.COUNT, "anything", "star_tree_field", - IndexNumericFieldData.NumericType.DOUBLE, - null + IndexNumericFieldData.NumericType.DOUBLE ); assertEquals(MetricStat.COUNT, pair.getMetricStat()); assertEquals("anything", pair.getField()); @@ -43,8 +41,7 @@ public void testToFieldName() { MetricStat.SUM, "column2", "star_tree_field", - IndexNumericFieldData.NumericType.DOUBLE, - null + IndexNumericFieldData.NumericType.DOUBLE ); assertEquals("star_tree_field_column2_sum", pair.toFieldName()); } @@ -54,24 +51,22 @@ public void testEquals() { MetricStat.SUM, "column1", "star_tree_field", - IndexNumericFieldData.NumericType.DOUBLE, - null + IndexNumericFieldData.NumericType.DOUBLE ); MetricAggregatorInfo pair2 = new MetricAggregatorInfo( MetricStat.SUM, "column1", "star_tree_field", - IndexNumericFieldData.NumericType.DOUBLE, - null + IndexNumericFieldData.NumericType.DOUBLE ); assertEquals(pair1, pair2); assertNotEquals( pair1, - new MetricAggregatorInfo(MetricStat.COUNT, "column1", "star_tree_field", IndexNumericFieldData.NumericType.DOUBLE, null) + new MetricAggregatorInfo(MetricStat.COUNT, "column1", "star_tree_field", IndexNumericFieldData.NumericType.DOUBLE) ); assertNotEquals( pair1, - new MetricAggregatorInfo(MetricStat.SUM, "column2", "star_tree_field", IndexNumericFieldData.NumericType.DOUBLE, null) + new MetricAggregatorInfo(MetricStat.SUM, "column2", "star_tree_field", IndexNumericFieldData.NumericType.DOUBLE) ); } @@ -80,15 +75,13 @@ public void testHashCode() { MetricStat.SUM, "column1", "star_tree_field", - IndexNumericFieldData.NumericType.DOUBLE, - null + IndexNumericFieldData.NumericType.DOUBLE ); MetricAggregatorInfo pair2 = new MetricAggregatorInfo( MetricStat.SUM, "column1", "star_tree_field", - IndexNumericFieldData.NumericType.DOUBLE, - null + IndexNumericFieldData.NumericType.DOUBLE ); assertEquals(pair1.hashCode(), pair2.hashCode()); } @@ -98,22 +91,19 @@ public void testCompareTo() { MetricStat.SUM, "column1", "star_tree_field", - IndexNumericFieldData.NumericType.DOUBLE, - null + IndexNumericFieldData.NumericType.DOUBLE ); MetricAggregatorInfo pair2 = new MetricAggregatorInfo( MetricStat.SUM, "column2", "star_tree_field", - IndexNumericFieldData.NumericType.DOUBLE, - null + IndexNumericFieldData.NumericType.DOUBLE ); MetricAggregatorInfo pair3 = new MetricAggregatorInfo( MetricStat.COUNT, "column1", "star_tree_field", - IndexNumericFieldData.NumericType.DOUBLE, - null + 
IndexNumericFieldData.NumericType.DOUBLE ); assertTrue(pair1.compareTo(pair2) < 0); assertTrue(pair2.compareTo(pair1) > 0); diff --git a/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/SumValueAggregatorTests.java b/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/SumValueAggregatorTests.java index 3fb627e7cd434..dd66d4344c9e8 100644 --- a/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/SumValueAggregatorTests.java +++ b/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/SumValueAggregatorTests.java @@ -20,7 +20,7 @@ public class SumValueAggregatorTests extends OpenSearchTestCase { @Before public void setup() { - aggregator = new SumValueAggregator(); + aggregator = new SumValueAggregator(StarTreeNumericType.LONG); } public void testGetAggregationType() { @@ -32,21 +32,18 @@ public void testGetAggregatedValueType() { } public void testGetInitialAggregatedValueForSegmentDocValue() { - assertEquals(1.0, aggregator.getInitialAggregatedValueForSegmentDocValue(1L, StarTreeNumericType.LONG), 0.0); - assertThrows( - NullPointerException.class, - () -> aggregator.getInitialAggregatedValueForSegmentDocValue(null, StarTreeNumericType.DOUBLE) - ); + assertEquals(1.0, aggregator.getInitialAggregatedValueForSegmentDocValue(1L), 0.0); + assertThrows(NullPointerException.class, () -> aggregator.getInitialAggregatedValueForSegmentDocValue(null)); } public void testMergeAggregatedValueAndSegmentValue() { aggregator.getInitialAggregatedValue(2.0); - assertEquals(5.0, aggregator.mergeAggregatedValueAndSegmentValue(2.0, 3L, StarTreeNumericType.LONG), 0.0); + assertEquals(5.0, aggregator.mergeAggregatedValueAndSegmentValue(2.0, 3L), 0.0); } public void testMergeAggregatedValueAndSegmentValue_nullSegmentDocValue() { aggregator.getInitialAggregatedValue(2.0); - assertThrows(NullPointerException.class, () -> aggregator.mergeAggregatedValueAndSegmentValue(2.0, null, StarTreeNumericType.LONG)); + assertThrows(NullPointerException.class, () -> aggregator.mergeAggregatedValueAndSegmentValue(2.0, null)); } public void testMergeAggregatedValues() { @@ -67,6 +64,6 @@ public void testToLongValue() { } public void testToStarTreeNumericTypeValue() { - assertEquals(NumericUtils.sortableLongToDouble(3L), aggregator.toStarTreeNumericTypeValue(3L, StarTreeNumericType.DOUBLE), 0.0); + assertEquals(NumericUtils.sortableLongToDouble(3L), aggregator.toStarTreeNumericTypeValue(3L), 0.0); } } diff --git a/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/ValueAggregatorFactoryTests.java b/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/ValueAggregatorFactoryTests.java index ce61ab839cc61..428668511fb2e 100644 --- a/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/ValueAggregatorFactoryTests.java +++ b/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/ValueAggregatorFactoryTests.java @@ -15,7 +15,7 @@ public class ValueAggregatorFactoryTests extends OpenSearchTestCase { public void testGetValueAggregatorForSumType() { - ValueAggregator aggregator = ValueAggregatorFactory.getValueAggregator(MetricStat.SUM); + ValueAggregator aggregator = ValueAggregatorFactory.getValueAggregator(MetricStat.SUM, StarTreeNumericType.LONG); assertNotNull(aggregator); assertEquals(SumValueAggregator.class, aggregator.getClass()); } diff --git 
a/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/builder/AbstractStarTreeBuilderTests.java b/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/builder/AbstractStarTreeBuilderTests.java new file mode 100644 index 0000000000000..76a7875919a8b --- /dev/null +++ b/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/builder/AbstractStarTreeBuilderTests.java @@ -0,0 +1,2251 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +package org.opensearch.index.compositeindex.datacube.startree.builder; + +import org.apache.lucene.codecs.DocValuesProducer; +import org.apache.lucene.codecs.lucene99.Lucene99Codec; +import org.apache.lucene.index.DocValues; +import org.apache.lucene.index.DocValuesType; +import org.apache.lucene.index.EmptyDocValuesProducer; +import org.apache.lucene.index.FieldInfo; +import org.apache.lucene.index.FieldInfos; +import org.apache.lucene.index.IndexOptions; +import org.apache.lucene.index.SegmentInfo; +import org.apache.lucene.index.SegmentWriteState; +import org.apache.lucene.index.SortedNumericDocValues; +import org.apache.lucene.index.VectorEncoding; +import org.apache.lucene.index.VectorSimilarityFunction; +import org.apache.lucene.sandbox.document.HalfFloatPoint; +import org.apache.lucene.search.DocIdSetIterator; +import org.apache.lucene.store.Directory; +import org.apache.lucene.util.InfoStream; +import org.apache.lucene.util.NumericUtils; +import org.apache.lucene.util.Version; +import org.opensearch.common.settings.Settings; +import org.opensearch.index.codec.composite.datacube.startree.StarTreeValues; +import org.opensearch.index.compositeindex.datacube.Dimension; +import org.opensearch.index.compositeindex.datacube.Metric; +import org.opensearch.index.compositeindex.datacube.MetricStat; +import org.opensearch.index.compositeindex.datacube.NumericDimension; +import org.opensearch.index.compositeindex.datacube.startree.StarTreeDocument; +import org.opensearch.index.compositeindex.datacube.startree.StarTreeField; +import org.opensearch.index.compositeindex.datacube.startree.StarTreeFieldConfiguration; +import org.opensearch.index.compositeindex.datacube.startree.utils.SequentialDocValuesIterator; +import org.opensearch.index.compositeindex.datacube.startree.utils.TreeNode; +import org.opensearch.index.mapper.ContentPath; +import org.opensearch.index.mapper.DocumentMapper; +import org.opensearch.index.mapper.Mapper; +import org.opensearch.index.mapper.MapperService; +import org.opensearch.index.mapper.MappingLookup; +import org.opensearch.index.mapper.NumberFieldMapper; +import org.opensearch.test.OpenSearchTestCase; +import org.junit.Before; + +import java.io.IOException; +import java.nio.charset.StandardCharsets; +import java.util.ArrayDeque; +import java.util.ArrayList; +import java.util.Collections; +import java.util.HashMap; +import java.util.HashSet; +import java.util.Iterator; +import java.util.List; +import java.util.Map; +import java.util.Objects; +import java.util.Queue; +import java.util.Set; +import java.util.UUID; + +import static org.opensearch.index.compositeindex.datacube.startree.builder.BaseStarTreeBuilder.NUM_SEGMENT_DOCS; +import static org.mockito.Mockito.mock; +import static org.mockito.Mockito.when; + +public abstract class AbstractStarTreeBuilderTests extends OpenSearchTestCase { + protected MapperService 
mapperService; + protected List dimensionsOrder; + protected List fields = List.of(); + protected List metrics; + protected Directory directory; + protected FieldInfo[] fieldsInfo; + protected StarTreeField compositeField; + protected Map fieldProducerMap; + protected SegmentWriteState writeState; + private BaseStarTreeBuilder builder; + + @Before + public void setup() throws IOException { + fields = List.of("field1", "field2", "field3", "field4", "field5", "field6", "field7", "field8", "field9", "field10"); + + dimensionsOrder = List.of( + new NumericDimension("field1"), + new NumericDimension("field3"), + new NumericDimension("field5"), + new NumericDimension("field8") + ); + metrics = List.of( + new Metric("field2", List.of(MetricStat.SUM)), + new Metric("field4", List.of(MetricStat.SUM)), + new Metric("field6", List.of(MetricStat.COUNT)) + ); + + DocValuesProducer docValuesProducer = mock(DocValuesProducer.class); + + compositeField = new StarTreeField( + "test", + dimensionsOrder, + metrics, + new StarTreeFieldConfiguration(1, Set.of("field8"), StarTreeFieldConfiguration.StarTreeBuildMode.ON_HEAP) + ); + directory = newFSDirectory(createTempDir()); + + fieldsInfo = new FieldInfo[fields.size()]; + fieldProducerMap = new HashMap<>(); + for (int i = 0; i < fieldsInfo.length; i++) { + fieldsInfo[i] = new FieldInfo( + fields.get(i), + i, + false, + false, + true, + IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS, + DocValuesType.SORTED_NUMERIC, + -1, + Collections.emptyMap(), + 0, + 0, + 0, + 0, + VectorEncoding.FLOAT32, + VectorSimilarityFunction.EUCLIDEAN, + false, + false + ); + fieldProducerMap.put(fields.get(i), docValuesProducer); + } + writeState = getWriteState(5); + + mapperService = mock(MapperService.class); + DocumentMapper documentMapper = mock(DocumentMapper.class); + when(mapperService.documentMapper()).thenReturn(documentMapper); + Settings settings = Settings.builder().put(settings(org.opensearch.Version.CURRENT).build()).build(); + NumberFieldMapper numberFieldMapper1 = new NumberFieldMapper.Builder("field2", NumberFieldMapper.NumberType.DOUBLE, false, true) + .build(new Mapper.BuilderContext(settings, new ContentPath())); + NumberFieldMapper numberFieldMapper2 = new NumberFieldMapper.Builder("field4", NumberFieldMapper.NumberType.DOUBLE, false, true) + .build(new Mapper.BuilderContext(settings, new ContentPath())); + NumberFieldMapper numberFieldMapper3 = new NumberFieldMapper.Builder("field6", NumberFieldMapper.NumberType.DOUBLE, false, true) + .build(new Mapper.BuilderContext(settings, new ContentPath())); + MappingLookup fieldMappers = new MappingLookup( + Set.of(numberFieldMapper1, numberFieldMapper2, numberFieldMapper3), + Collections.emptyList(), + Collections.emptyList(), + 0, + null + ); + when(documentMapper.mappers()).thenReturn(fieldMappers); + } + + private SegmentWriteState getWriteState(int numDocs) { + FieldInfos fieldInfos = new FieldInfos(fieldsInfo); + SegmentInfo segmentInfo = new SegmentInfo( + directory, + Version.LATEST, + Version.LUCENE_9_11_0, + "test_segment", + numDocs, + false, + false, + new Lucene99Codec(), + new HashMap<>(), + UUID.randomUUID().toString().substring(0, 16).getBytes(StandardCharsets.UTF_8), + new HashMap<>(), + null + ); + return new SegmentWriteState(InfoStream.getDefault(), segmentInfo.dir, segmentInfo, fieldInfos, null, newIOContext(random())); + } + + public abstract BaseStarTreeBuilder getStarTreeBuilder( + StarTreeField starTreeField, + SegmentWriteState segmentWriteState, + MapperService mapperService + ) throws 
IOException; + + public void test_sortAndAggregateStarTreeDocuments() throws IOException { + + int noOfStarTreeDocuments = 5; + StarTreeDocument[] starTreeDocuments = new StarTreeDocument[noOfStarTreeDocuments]; + + starTreeDocuments[0] = new StarTreeDocument(new Long[] { 2L, 4L, 3L, 4L }, new Double[] { 12.0, 10.0, randomDouble() }); + starTreeDocuments[1] = new StarTreeDocument(new Long[] { 3L, 4L, 2L, 1L }, new Double[] { 10.0, 6.0, randomDouble() }); + starTreeDocuments[2] = new StarTreeDocument(new Long[] { 3L, 4L, 2L, 1L }, new Double[] { 14.0, 12.0, randomDouble() }); + starTreeDocuments[3] = new StarTreeDocument(new Long[] { 2L, 4L, 3L, 4L }, new Double[] { 9.0, 4.0, randomDouble() }); + starTreeDocuments[4] = new StarTreeDocument(new Long[] { 3L, 4L, 2L, 1L }, new Double[] { 11.0, 16.0, randomDouble() }); + + StarTreeDocument[] segmentStarTreeDocuments = new StarTreeDocument[noOfStarTreeDocuments]; + for (int i = 0; i < noOfStarTreeDocuments; i++) { + long metric1 = NumericUtils.doubleToSortableLong((Double) starTreeDocuments[i].metrics[0]); + long metric2 = NumericUtils.doubleToSortableLong((Double) starTreeDocuments[i].metrics[1]); + long metric3 = NumericUtils.doubleToSortableLong((Double) starTreeDocuments[i].metrics[2]); + segmentStarTreeDocuments[i] = new StarTreeDocument(starTreeDocuments[i].dimensions, new Long[] { metric1, metric2, metric3 }); + } + List inorderStarTreeDocuments = List.of( + new StarTreeDocument(new Long[] { 2L, 4L, 3L, 4L }, new Object[] { 21.0, 14.0, 2L }), + new StarTreeDocument(new Long[] { 3L, 4L, 2L, 1L }, new Object[] { 35.0, 34.0, 3L }) + ); + Iterator expectedStarTreeDocumentIterator = inorderStarTreeDocuments.iterator(); + + SequentialDocValuesIterator[] dimsIterators = getDimensionIterators(segmentStarTreeDocuments); + List metricsIterators = getMetricIterators(segmentStarTreeDocuments); + builder = getStarTreeBuilder(compositeField, writeState, mapperService); + Iterator segmentStarTreeDocumentIterator = builder.sortAndAggregateSegmentDocuments( + dimsIterators, + metricsIterators + ); + + int numOfAggregatedDocuments = 0; + while (segmentStarTreeDocumentIterator.hasNext() && expectedStarTreeDocumentIterator.hasNext()) { + StarTreeDocument resultStarTreeDocument = segmentStarTreeDocumentIterator.next(); + StarTreeDocument expectedStarTreeDocument = expectedStarTreeDocumentIterator.next(); + + assertEquals(expectedStarTreeDocument.dimensions[0], resultStarTreeDocument.dimensions[0]); + assertEquals(expectedStarTreeDocument.dimensions[1], resultStarTreeDocument.dimensions[1]); + assertEquals(expectedStarTreeDocument.dimensions[2], resultStarTreeDocument.dimensions[2]); + assertEquals(expectedStarTreeDocument.dimensions[3], resultStarTreeDocument.dimensions[3]); + assertEquals(expectedStarTreeDocument.metrics[0], resultStarTreeDocument.metrics[0]); + assertEquals(expectedStarTreeDocument.metrics[1], resultStarTreeDocument.metrics[1]); + assertEquals(expectedStarTreeDocument.metrics[2], resultStarTreeDocument.metrics[2]); + + numOfAggregatedDocuments++; + } + + assertEquals(inorderStarTreeDocuments.size(), numOfAggregatedDocuments); + } + + SequentialDocValuesIterator[] getDimensionIterators(StarTreeDocument[] starTreeDocuments) { + SequentialDocValuesIterator[] sequentialDocValuesIterators = + new SequentialDocValuesIterator[starTreeDocuments[0].dimensions.length]; + for (int j = 0; j < starTreeDocuments[0].dimensions.length; j++) { + List dimList = new ArrayList<>(); + List docsWithField = new ArrayList<>(); + + for (int i = 0; i < 
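
Editor's note on the encoding used throughout these tests: every Double metric is pushed through NumericUtils.doubleToSortableLong before it is handed to the doc-values iterators, because Lucene numeric doc values store longs. The encoding is exactly invertible and order-preserving, which is what makes sorting and summing on the encoded column safe. A quick sanity check (variable names are illustrative):

    import org.apache.lucene.util.NumericUtils;

    long encoded = NumericUtils.doubleToSortableLong(12.0);
    // exact round trip back to the double the aggregator reports
    assert NumericUtils.sortableLongToDouble(encoded) == 12.0;
    // ordering of doubles is preserved on the encoded longs
    assert NumericUtils.doubleToSortableLong(10.0) < NumericUtils.doubleToSortableLong(12.0);
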
starTreeDocuments.length; i++) { + if (starTreeDocuments[i].dimensions[j] != null) { + dimList.add(starTreeDocuments[i].dimensions[j]); + docsWithField.add(i); + } + } + sequentialDocValuesIterators[j] = new SequentialDocValuesIterator(getSortedNumericMock(dimList, docsWithField)); + } + return sequentialDocValuesIterators; + } + + List getMetricIterators(StarTreeDocument[] starTreeDocuments) { + List sequentialDocValuesIterators = new ArrayList<>(); + for (int j = 0; j < starTreeDocuments[0].metrics.length; j++) { + List metricslist = new ArrayList<>(); + List docsWithField = new ArrayList<>(); + + for (int i = 0; i < starTreeDocuments.length; i++) { + if (starTreeDocuments[i].metrics[j] != null) { + metricslist.add((long) starTreeDocuments[i].metrics[j]); + docsWithField.add(i); + } + } + sequentialDocValuesIterators.add(new SequentialDocValuesIterator(getSortedNumericMock(metricslist, docsWithField))); + } + return sequentialDocValuesIterators; + } + + public void test_sortAndAggregateStarTreeDocuments_nullMetric() throws IOException { + + int noOfStarTreeDocuments = 5; + StarTreeDocument[] starTreeDocuments = new StarTreeDocument[noOfStarTreeDocuments]; + + starTreeDocuments[0] = new StarTreeDocument(new Long[] { 2L, 4L, 3L, 4L }, new Double[] { 12.0, 10.0, randomDouble() }); + starTreeDocuments[1] = new StarTreeDocument(new Long[] { 3L, 4L, 2L, 1L }, new Double[] { 10.0, 6.0, randomDouble() }); + starTreeDocuments[2] = new StarTreeDocument(new Long[] { 3L, 4L, 2L, 1L }, new Double[] { 14.0, 12.0, randomDouble() }); + starTreeDocuments[3] = new StarTreeDocument(new Long[] { 2L, 4L, 3L, 4L }, new Double[] { 9.0, 4.0, randomDouble() }); + starTreeDocuments[4] = new StarTreeDocument(new Long[] { 3L, 4L, 2L, 1L }, new Double[] { 11.0, null, randomDouble() }); + + List inorderStarTreeDocuments = List.of( + new StarTreeDocument(new Long[] { 2L, 4L, 3L, 4L }, new Object[] { 21.0, 14.0, 2L }), + new StarTreeDocument(new Long[] { 3L, 4L, 2L, 1L }, new Object[] { 35.0, 18.0, 3L }) + ); + Iterator expectedStarTreeDocumentIterator = inorderStarTreeDocuments.iterator(); + + StarTreeDocument[] segmentStarTreeDocuments = new StarTreeDocument[noOfStarTreeDocuments]; + for (int i = 0; i < noOfStarTreeDocuments; i++) { + Long metric1 = NumericUtils.doubleToSortableLong((Double) starTreeDocuments[i].metrics[0]); + Long metric2 = starTreeDocuments[i].metrics[1] != null + ? 
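
The two helpers above wrap each column in a SequentialDocValuesIterator over a mocked SortedNumericDocValues produced by getSortedNumericMock, which is defined further down in this file and not shown in this excerpt. A minimal single-valued stand-in consistent with how the helpers use it might look like the following; the method name and the one-value-per-document behavior are assumptions for illustration, not the test's actual implementation:

    import java.io.IOException;
    import java.util.List;

    import org.apache.lucene.index.SortedNumericDocValues;

    // Hypothetical stand-in: values.get(i) belongs to document
    // docsWithField.get(i), exactly one value per matching document.
    static SortedNumericDocValues singleValuedMock(List<Long> values, List<Integer> docsWithField) {
        return new SortedNumericDocValues() {
            private int idx = -1;

            @Override
            public long nextValue() {
                return values.get(idx);          // current document's single value
            }

            @Override
            public int docValueCount() {
                return 1;                        // single-valued by construction
            }

            @Override
            public boolean advanceExact(int target) {
                idx = docsWithField.indexOf(target);
                return idx >= 0;                 // false when the doc has no value
            }

            @Override
            public int docID() {
                if (idx < 0) return -1;
                return idx < docsWithField.size() ? docsWithField.get(idx) : NO_MORE_DOCS;
            }

            @Override
            public int nextDoc() {
                idx++;
                return docID();
            }

            @Override
            public int advance(int target) throws IOException {
                while (docID() < target) {
                    nextDoc();
                }
                return docID();
            }

            @Override
            public long cost() {
                return docsWithField.size();
            }
        };
    }
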
NumericUtils.doubleToSortableLong((Double) starTreeDocuments[i].metrics[1]) + : null; + Long metric3 = NumericUtils.doubleToSortableLong((Double) starTreeDocuments[i].metrics[2]); + segmentStarTreeDocuments[i] = new StarTreeDocument(starTreeDocuments[i].dimensions, new Object[] { metric1, metric2, metric3 }); + } + SequentialDocValuesIterator[] dimsIterators = getDimensionIterators(segmentStarTreeDocuments); + List metricsIterators = getMetricIterators(segmentStarTreeDocuments); + builder = getStarTreeBuilder(compositeField, writeState, mapperService); + Iterator segmentStarTreeDocumentIterator = builder.sortAndAggregateSegmentDocuments( + dimsIterators, + metricsIterators + ); + + while (segmentStarTreeDocumentIterator.hasNext() && expectedStarTreeDocumentIterator.hasNext()) { + StarTreeDocument resultStarTreeDocument = segmentStarTreeDocumentIterator.next(); + StarTreeDocument expectedStarTreeDocument = expectedStarTreeDocumentIterator.next(); + assertEquals(expectedStarTreeDocument.dimensions[0], resultStarTreeDocument.dimensions[0]); + assertEquals(expectedStarTreeDocument.dimensions[1], resultStarTreeDocument.dimensions[1]); + assertEquals(expectedStarTreeDocument.dimensions[2], resultStarTreeDocument.dimensions[2]); + assertEquals(expectedStarTreeDocument.dimensions[3], resultStarTreeDocument.dimensions[3]); + assertEquals(expectedStarTreeDocument.metrics[0], resultStarTreeDocument.metrics[0]); + assertEquals(expectedStarTreeDocument.metrics[1], resultStarTreeDocument.metrics[1]); + assertEquals(expectedStarTreeDocument.metrics[2], resultStarTreeDocument.metrics[2]); + } + } + + public void test_sortAndAggregateStarTreeDocuments_nullMetricField() throws IOException { + + int noOfStarTreeDocuments = 5; + StarTreeDocument[] starTreeDocuments = new StarTreeDocument[noOfStarTreeDocuments]; + // Setting second metric iterator as empty sorted numeric , indicating a metric field is null + starTreeDocuments[0] = new StarTreeDocument(new Long[] { 2L, 4L, 3L, 4L }, new Double[] { 12.0, null, randomDouble() }); + starTreeDocuments[1] = new StarTreeDocument(new Long[] { 3L, 4L, 2L, 1L }, new Double[] { 10.0, null, randomDouble() }); + starTreeDocuments[2] = new StarTreeDocument(new Long[] { 3L, 4L, 2L, 1L }, new Double[] { 14.0, null, randomDouble() }); + starTreeDocuments[3] = new StarTreeDocument(new Long[] { 2L, 4L, 3L, 4L }, new Double[] { 9.0, null, randomDouble() }); + starTreeDocuments[4] = new StarTreeDocument(new Long[] { 3L, 4L, 2L, 1L }, new Double[] { 11.0, null, randomDouble() }); + + List inorderStarTreeDocuments = List.of( + new StarTreeDocument(new Long[] { 2L, 4L, 3L, 4L }, new Object[] { 21.0, 0.0, 2L }), + new StarTreeDocument(new Long[] { 3L, 4L, 2L, 1L }, new Object[] { 35.0, 0.0, 3L }) + ); + Iterator expectedStarTreeDocumentIterator = inorderStarTreeDocuments.iterator(); + + StarTreeDocument[] segmentStarTreeDocuments = new StarTreeDocument[noOfStarTreeDocuments]; + for (int i = 0; i < noOfStarTreeDocuments; i++) { + Long metric1 = NumericUtils.doubleToSortableLong((Double) starTreeDocuments[i].metrics[0]); + Long metric2 = starTreeDocuments[i].metrics[1] != null + ? 
NumericUtils.doubleToSortableLong((Double) starTreeDocuments[i].metrics[1]) + : null; + Long metric3 = NumericUtils.doubleToSortableLong((Double) starTreeDocuments[i].metrics[2]); + segmentStarTreeDocuments[i] = new StarTreeDocument(starTreeDocuments[i].dimensions, new Object[] { metric1, metric2, metric3 }); + } + SequentialDocValuesIterator[] dimsIterators = getDimensionIterators(segmentStarTreeDocuments); + List metricsIterators = getMetricIterators(segmentStarTreeDocuments); + builder = getStarTreeBuilder(compositeField, writeState, mapperService); + Iterator segmentStarTreeDocumentIterator = builder.sortAndAggregateSegmentDocuments( + dimsIterators, + metricsIterators + ); + + while (segmentStarTreeDocumentIterator.hasNext() && expectedStarTreeDocumentIterator.hasNext()) { + StarTreeDocument resultStarTreeDocument = segmentStarTreeDocumentIterator.next(); + StarTreeDocument expectedStarTreeDocument = expectedStarTreeDocumentIterator.next(); + assertEquals(expectedStarTreeDocument.dimensions[0], resultStarTreeDocument.dimensions[0]); + assertEquals(expectedStarTreeDocument.dimensions[1], resultStarTreeDocument.dimensions[1]); + assertEquals(expectedStarTreeDocument.dimensions[2], resultStarTreeDocument.dimensions[2]); + assertEquals(expectedStarTreeDocument.dimensions[3], resultStarTreeDocument.dimensions[3]); + assertEquals(expectedStarTreeDocument.metrics[0], resultStarTreeDocument.metrics[0]); + assertEquals(expectedStarTreeDocument.metrics[1], resultStarTreeDocument.metrics[1]); + assertEquals(expectedStarTreeDocument.metrics[2], resultStarTreeDocument.metrics[2]); + } + } + + public void test_sortAndAggregateStarTreeDocuments_nullDimensionField() throws IOException { + int noOfStarTreeDocuments = 5; + StarTreeDocument[] starTreeDocuments = new StarTreeDocument[noOfStarTreeDocuments]; + // Setting second metric iterator as empty sorted numeric , indicating a metric field is null + starTreeDocuments[0] = new StarTreeDocument(new Long[] { 2L, null, 3L, 4L }, new Double[] { 12.0, null, randomDouble() }); + starTreeDocuments[1] = new StarTreeDocument(new Long[] { 3L, 4L, 2L, 1L }, new Double[] { 10.0, null, randomDouble() }); + starTreeDocuments[2] = new StarTreeDocument(new Long[] { 3L, 4L, 2L, 1L }, new Double[] { 14.0, null, randomDouble() }); + starTreeDocuments[3] = new StarTreeDocument(new Long[] { 2L, null, 3L, 4L }, new Double[] { 9.0, null, randomDouble() }); + starTreeDocuments[4] = new StarTreeDocument(new Long[] { 3L, 4L, 2L, 1L }, new Double[] { 11.0, null, randomDouble() }); + + List inorderStarTreeDocuments = List.of( + new StarTreeDocument(new Long[] { 2L, null, 3L, 4L }, new Object[] { 21.0, 0.0, 2L }), + new StarTreeDocument(new Long[] { 3L, 4L, 2L, 1L }, new Object[] { 35.0, 0.0, 3L }) + ); + Iterator expectedStarTreeDocumentIterator = inorderStarTreeDocuments.iterator(); + + StarTreeDocument[] segmentStarTreeDocuments = new StarTreeDocument[noOfStarTreeDocuments]; + for (int i = 0; i < noOfStarTreeDocuments; i++) { + Long metric1 = NumericUtils.doubleToSortableLong((Double) starTreeDocuments[i].metrics[0]); + Long metric2 = starTreeDocuments[i].metrics[1] != null + ? 
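
The null-handling tests above pin down two behaviors: a metric column that is missing for every document still aggregates, with the sum falling back to its identity 0.0 while the count metric keeps counting documents, and a null dimension value acts as an ordinary grouping key of its own. A hand check of the expected metrics[1] column (names are illustrative):

    // metric[1] is null for all five input documents
    Double[] metric2Column = { null, null, null, null, null };
    double sum = 0.0;                  // identity element of sum
    for (Double v : metric2Column) {
        if (v != null) {
            sum += v;                  // nulls contribute nothing
        }
    }
    // sum == 0.0, matching metrics[1] of both expected documents above
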
NumericUtils.doubleToSortableLong((Double) starTreeDocuments[i].metrics[1]) + : null; + Long metric3 = NumericUtils.doubleToSortableLong((Double) starTreeDocuments[i].metrics[2]); + segmentStarTreeDocuments[i] = new StarTreeDocument(starTreeDocuments[i].dimensions, new Object[] { metric1, metric2, metric3 }); + } + SequentialDocValuesIterator[] dimsIterators = getDimensionIterators(segmentStarTreeDocuments); + List metricsIterators = getMetricIterators(segmentStarTreeDocuments); + builder = getStarTreeBuilder(compositeField, writeState, mapperService); + Iterator segmentStarTreeDocumentIterator = builder.sortAndAggregateSegmentDocuments( + dimsIterators, + metricsIterators + ); + + while (segmentStarTreeDocumentIterator.hasNext() && expectedStarTreeDocumentIterator.hasNext()) { + StarTreeDocument resultStarTreeDocument = segmentStarTreeDocumentIterator.next(); + StarTreeDocument expectedStarTreeDocument = expectedStarTreeDocumentIterator.next(); + assertEquals(expectedStarTreeDocument.dimensions[0], resultStarTreeDocument.dimensions[0]); + assertEquals(expectedStarTreeDocument.dimensions[1], resultStarTreeDocument.dimensions[1]); + assertEquals(expectedStarTreeDocument.dimensions[2], resultStarTreeDocument.dimensions[2]); + assertEquals(expectedStarTreeDocument.dimensions[3], resultStarTreeDocument.dimensions[3]); + assertEquals(expectedStarTreeDocument.metrics[0], resultStarTreeDocument.metrics[0]); + assertEquals(expectedStarTreeDocument.metrics[1], resultStarTreeDocument.metrics[1]); + assertEquals(expectedStarTreeDocument.metrics[2], resultStarTreeDocument.metrics[2]); + } + } + + public void test_sortAndAggregateStarTreeDocuments_nullDimensionsAndNullMetrics() throws IOException { + int noOfStarTreeDocuments = 5; + StarTreeDocument[] starTreeDocuments = new StarTreeDocument[noOfStarTreeDocuments]; + // Setting second metric iterator as empty sorted numeric , indicating a metric field is null + starTreeDocuments[0] = new StarTreeDocument(new Long[] { null, null, null, null }, new Double[] { null, null, null }); + starTreeDocuments[1] = new StarTreeDocument(new Long[] { null, null, null, null }, new Double[] { null, null, null }); + starTreeDocuments[2] = new StarTreeDocument(new Long[] { null, null, null, null }, new Double[] { null, null, null }); + starTreeDocuments[3] = new StarTreeDocument(new Long[] { null, null, null, null }, new Double[] { null, null, null }); + starTreeDocuments[4] = new StarTreeDocument(new Long[] { null, null, null, null }, new Double[] { null, null, null }); + + List inorderStarTreeDocuments = List.of(); + Iterator expectedStarTreeDocumentIterator = inorderStarTreeDocuments.iterator(); + + StarTreeDocument[] segmentStarTreeDocuments = new StarTreeDocument[noOfStarTreeDocuments]; + for (int i = 0; i < noOfStarTreeDocuments; i++) { + Long metric1 = starTreeDocuments[i].metrics[1] != null + ? NumericUtils.doubleToSortableLong((Double) starTreeDocuments[i].metrics[0]) + : null; + Long metric2 = starTreeDocuments[i].metrics[1] != null + ? NumericUtils.doubleToSortableLong((Double) starTreeDocuments[i].metrics[1]) + : null; + Long metric3 = starTreeDocuments[i].metrics[1] != null + ? 
NumericUtils.doubleToSortableLong((Double) starTreeDocuments[i].metrics[2]) + : null; + segmentStarTreeDocuments[i] = new StarTreeDocument(starTreeDocuments[i].dimensions, new Object[] { metric1, metric2, metric3 }); + } + SequentialDocValuesIterator[] dimsIterators = getDimensionIterators(segmentStarTreeDocuments); + List metricsIterators = getMetricIterators(segmentStarTreeDocuments); + builder = getStarTreeBuilder(compositeField, writeState, mapperService); + Iterator segmentStarTreeDocumentIterator = builder.sortAndAggregateSegmentDocuments( + dimsIterators, + metricsIterators + ); + + while (segmentStarTreeDocumentIterator.hasNext() && expectedStarTreeDocumentIterator.hasNext()) { + StarTreeDocument resultStarTreeDocument = segmentStarTreeDocumentIterator.next(); + StarTreeDocument expectedStarTreeDocument = expectedStarTreeDocumentIterator.next(); + assertEquals(expectedStarTreeDocument.dimensions[0], resultStarTreeDocument.dimensions[0]); + assertEquals(expectedStarTreeDocument.dimensions[1], resultStarTreeDocument.dimensions[1]); + assertEquals(expectedStarTreeDocument.dimensions[2], resultStarTreeDocument.dimensions[2]); + assertEquals(expectedStarTreeDocument.dimensions[3], resultStarTreeDocument.dimensions[3]); + assertEquals(expectedStarTreeDocument.metrics[0], resultStarTreeDocument.metrics[0]); + assertEquals(expectedStarTreeDocument.metrics[1], resultStarTreeDocument.metrics[1]); + assertEquals(expectedStarTreeDocument.metrics[2], resultStarTreeDocument.metrics[2]); + } + } + + public void test_sortAndAggregateStarTreeDocuments_emptyDimensions() throws IOException { + + int noOfStarTreeDocuments = 5; + StarTreeDocument[] starTreeDocuments = new StarTreeDocument[noOfStarTreeDocuments]; + // Setting second metric iterator as empty sorted numeric , indicating a metric field is null + starTreeDocuments[0] = new StarTreeDocument(new Long[] { null, null, null, null }, new Double[] { 12.0, null, randomDouble() }); + starTreeDocuments[1] = new StarTreeDocument(new Long[] { null, null, null, null }, new Double[] { 10.0, null, randomDouble() }); + starTreeDocuments[2] = new StarTreeDocument(new Long[] { null, null, null, null }, new Double[] { 14.0, null, randomDouble() }); + starTreeDocuments[3] = new StarTreeDocument(new Long[] { null, null, null, null }, new Double[] { 9.0, null, randomDouble() }); + starTreeDocuments[4] = new StarTreeDocument(new Long[] { null, null, null, null }, new Double[] { 11.0, null, randomDouble() }); + + List inorderStarTreeDocuments = List.of( + new StarTreeDocument(new Long[] { null, null, null, null }, new Object[] { 56.0, 0.0, 5L }) + ); + Iterator expectedStarTreeDocumentIterator = inorderStarTreeDocuments.iterator(); + + StarTreeDocument[] segmentStarTreeDocuments = new StarTreeDocument[noOfStarTreeDocuments]; + for (int i = 0; i < noOfStarTreeDocuments; i++) { + Long metric1 = NumericUtils.doubleToSortableLong((Double) starTreeDocuments[i].metrics[0]); + Long metric2 = starTreeDocuments[i].metrics[1] != null + ? 
NumericUtils.doubleToSortableLong((Double) starTreeDocuments[i].metrics[1]) + : null; + Long metric3 = NumericUtils.doubleToSortableLong((Double) starTreeDocuments[i].metrics[2]); + segmentStarTreeDocuments[i] = new StarTreeDocument(starTreeDocuments[i].dimensions, new Object[] { metric1, metric2, metric3 }); + } + SequentialDocValuesIterator[] dimsIterators = getDimensionIterators(segmentStarTreeDocuments); + List metricsIterators = getMetricIterators(segmentStarTreeDocuments); + builder = getStarTreeBuilder(compositeField, writeState, mapperService); + Iterator segmentStarTreeDocumentIterator = builder.sortAndAggregateSegmentDocuments( + dimsIterators, + metricsIterators + ); + + while (segmentStarTreeDocumentIterator.hasNext() && expectedStarTreeDocumentIterator.hasNext()) { + StarTreeDocument resultStarTreeDocument = segmentStarTreeDocumentIterator.next(); + StarTreeDocument expectedStarTreeDocument = expectedStarTreeDocumentIterator.next(); + assertEquals(expectedStarTreeDocument.dimensions[0], resultStarTreeDocument.dimensions[0]); + assertEquals(expectedStarTreeDocument.dimensions[1], resultStarTreeDocument.dimensions[1]); + assertEquals(expectedStarTreeDocument.dimensions[2], resultStarTreeDocument.dimensions[2]); + assertEquals(expectedStarTreeDocument.dimensions[3], resultStarTreeDocument.dimensions[3]); + assertEquals(expectedStarTreeDocument.metrics[0], resultStarTreeDocument.metrics[0]); + assertEquals(expectedStarTreeDocument.metrics[1], resultStarTreeDocument.metrics[1]); + assertEquals(expectedStarTreeDocument.metrics[2], resultStarTreeDocument.metrics[2]); + } + } + + public void test_sortAndAggregateStarTreeDocument_longMaxAndLongMinDimensions() throws IOException { + + int noOfStarTreeDocuments = 5; + StarTreeDocument[] starTreeDocuments = new StarTreeDocument[noOfStarTreeDocuments]; + + starTreeDocuments[0] = new StarTreeDocument(new Long[] { Long.MIN_VALUE, 4L, 3L, 4L }, new Double[] { 12.0, 10.0, randomDouble() }); + starTreeDocuments[1] = new StarTreeDocument(new Long[] { 3L, 4L, 2L, Long.MAX_VALUE }, new Double[] { 10.0, 6.0, randomDouble() }); + starTreeDocuments[2] = new StarTreeDocument(new Long[] { 3L, 4L, 2L, Long.MAX_VALUE }, new Double[] { 14.0, 12.0, randomDouble() }); + starTreeDocuments[3] = new StarTreeDocument(new Long[] { Long.MIN_VALUE, 4L, 3L, 4L }, new Double[] { 9.0, 4.0, randomDouble() }); + starTreeDocuments[4] = new StarTreeDocument(new Long[] { 3L, 4L, 2L, Long.MAX_VALUE }, new Double[] { 11.0, 16.0, randomDouble() }); + + List inorderStarTreeDocuments = List.of( + new StarTreeDocument(new Long[] { Long.MIN_VALUE, 4L, 3L, 4L }, new Object[] { 21.0, 14.0, 2L }), + new StarTreeDocument(new Long[] { 3L, 4L, 2L, Long.MAX_VALUE }, new Object[] { 35.0, 34.0, 3L }) + ); + Iterator expectedStarTreeDocumentIterator = inorderStarTreeDocuments.iterator(); + + StarTreeDocument[] segmentStarTreeDocuments = new StarTreeDocument[noOfStarTreeDocuments]; + for (int i = 0; i < noOfStarTreeDocuments; i++) { + long metric1 = NumericUtils.doubleToSortableLong((Double) starTreeDocuments[i].metrics[0]); + long metric2 = NumericUtils.doubleToSortableLong((Double) starTreeDocuments[i].metrics[1]); + long metric3 = NumericUtils.doubleToSortableLong((Double) starTreeDocuments[i].metrics[2]); + segmentStarTreeDocuments[i] = new StarTreeDocument(starTreeDocuments[i].dimensions, new Long[] { metric1, metric2, metric3 }); + } + + SequentialDocValuesIterator[] dimsIterators = getDimensionIterators(segmentStarTreeDocuments); + List metricsIterators = 
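
The empty-dimensions case above is the degenerate grouping: all five documents share the key [null, null, null, null], so they collapse into a single aggregated document, and the expected metrics follow directly (names are illustrative):

    double expectedSum = 12.0 + 10.0 + 14.0 + 9.0 + 11.0;  // 56.0 -> metrics[0]
    double expectedNullColumn = 0.0;                        // metrics[1], all nulls
    long expectedCount = 5L;                                // metrics[2]
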
getMetricIterators(segmentStarTreeDocuments); + builder = getStarTreeBuilder(compositeField, writeState, mapperService); + Iterator segmentStarTreeDocumentIterator = builder.sortAndAggregateSegmentDocuments( + dimsIterators, + metricsIterators + ); + int numOfAggregatedDocuments = 0; + while (segmentStarTreeDocumentIterator.hasNext() && expectedStarTreeDocumentIterator.hasNext()) { + StarTreeDocument resultStarTreeDocument = segmentStarTreeDocumentIterator.next(); + StarTreeDocument expectedStarTreeDocument = expectedStarTreeDocumentIterator.next(); + + assertEquals(expectedStarTreeDocument.dimensions[0], resultStarTreeDocument.dimensions[0]); + assertEquals(expectedStarTreeDocument.dimensions[1], resultStarTreeDocument.dimensions[1]); + assertEquals(expectedStarTreeDocument.dimensions[2], resultStarTreeDocument.dimensions[2]); + assertEquals(expectedStarTreeDocument.dimensions[3], resultStarTreeDocument.dimensions[3]); + assertEquals(expectedStarTreeDocument.metrics[0], resultStarTreeDocument.metrics[0]); + assertEquals(expectedStarTreeDocument.metrics[1], resultStarTreeDocument.metrics[1]); + assertEquals(expectedStarTreeDocument.metrics[2], resultStarTreeDocument.metrics[2]); + + numOfAggregatedDocuments++; + } + + assertEquals(inorderStarTreeDocuments.size(), numOfAggregatedDocuments); + + } + + public void test_sortAndAggregateStarTreeDocument_DoubleMaxAndDoubleMinMetrics() throws IOException { + + int noOfStarTreeDocuments = 5; + StarTreeDocument[] starTreeDocuments = new StarTreeDocument[noOfStarTreeDocuments]; + + starTreeDocuments[0] = new StarTreeDocument(new Long[] { 2L, 4L, 3L, 4L }, new Double[] { Double.MAX_VALUE, 10.0, randomDouble() }); + starTreeDocuments[1] = new StarTreeDocument(new Long[] { 3L, 4L, 2L, 1L }, new Double[] { 10.0, 6.0, randomDouble() }); + starTreeDocuments[2] = new StarTreeDocument(new Long[] { 3L, 4L, 2L, 1L }, new Double[] { 14.0, Double.MIN_VALUE, randomDouble() }); + starTreeDocuments[3] = new StarTreeDocument(new Long[] { 2L, 4L, 3L, 4L }, new Double[] { 9.0, 4.0, randomDouble() }); + starTreeDocuments[4] = new StarTreeDocument(new Long[] { 3L, 4L, 2L, 1L }, new Double[] { 11.0, 16.0, randomDouble() }); + + List inorderStarTreeDocuments = List.of( + new StarTreeDocument(new Long[] { 2L, 4L, 3L, 4L }, new Object[] { Double.MAX_VALUE + 9, 14.0, 2L }), + new StarTreeDocument(new Long[] { 3L, 4L, 2L, 1L }, new Object[] { 35.0, Double.MIN_VALUE + 22, 3L }) + ); + Iterator expectedStarTreeDocumentIterator = inorderStarTreeDocuments.iterator(); + + StarTreeDocument[] segmentStarTreeDocuments = new StarTreeDocument[noOfStarTreeDocuments]; + for (int i = 0; i < noOfStarTreeDocuments; i++) { + long metric1 = NumericUtils.doubleToSortableLong((Double) starTreeDocuments[i].metrics[0]); + long metric2 = NumericUtils.doubleToSortableLong((Double) starTreeDocuments[i].metrics[1]); + long metric3 = NumericUtils.doubleToSortableLong((Double) starTreeDocuments[i].metrics[2]); + segmentStarTreeDocuments[i] = new StarTreeDocument(starTreeDocuments[i].dimensions, new Long[] { metric1, metric2, metric3 }); + } + + SequentialDocValuesIterator[] dimsIterators = getDimensionIterators(segmentStarTreeDocuments); + List metricsIterators = getMetricIterators(segmentStarTreeDocuments); + builder = getStarTreeBuilder(compositeField, writeState, mapperService); + Iterator segmentStarTreeDocumentIterator = builder.sortAndAggregateSegmentDocuments( + dimsIterators, + metricsIterators + ); + int numOfAggregatedDocuments = 0; + while (segmentStarTreeDocumentIterator.hasNext() && 
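
A note on the expected metrics in the double min/max test above: they are written as Double.MAX_VALUE + 9 and Double.MIN_VALUE + 22, but both additions are absorbed by ordinary double rounding, so the assertions effectively compare against Double.MAX_VALUE and 22.0:

    // ulp(Double.MAX_VALUE) is roughly 2e292, so + 9 cannot change the value
    assert Double.MAX_VALUE + 9 == Double.MAX_VALUE;
    // Double.MIN_VALUE is the smallest positive subnormal, about 4.9e-324
    assert Double.MIN_VALUE + 22 == 22.0;
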
expectedStarTreeDocumentIterator.hasNext()) { + StarTreeDocument resultStarTreeDocument = segmentStarTreeDocumentIterator.next(); + StarTreeDocument expectedStarTreeDocument = expectedStarTreeDocumentIterator.next(); + + assertEquals(expectedStarTreeDocument.dimensions[0], resultStarTreeDocument.dimensions[0]); + assertEquals(expectedStarTreeDocument.dimensions[1], resultStarTreeDocument.dimensions[1]); + assertEquals(expectedStarTreeDocument.dimensions[2], resultStarTreeDocument.dimensions[2]); + assertEquals(expectedStarTreeDocument.dimensions[3], resultStarTreeDocument.dimensions[3]); + assertEquals(expectedStarTreeDocument.metrics[0], resultStarTreeDocument.metrics[0]); + assertEquals(expectedStarTreeDocument.metrics[1], resultStarTreeDocument.metrics[1]); + assertEquals(expectedStarTreeDocument.metrics[2], resultStarTreeDocument.metrics[2]); + + numOfAggregatedDocuments++; + } + + assertEquals(inorderStarTreeDocuments.size(), numOfAggregatedDocuments); + + } + + public void test_build_halfFloatMetrics() throws IOException { + + mapperService = mock(MapperService.class); + DocumentMapper documentMapper = mock(DocumentMapper.class); + when(mapperService.documentMapper()).thenReturn(documentMapper); + Settings settings = Settings.builder().put(settings(org.opensearch.Version.CURRENT).build()).build(); + NumberFieldMapper numberFieldMapper1 = new NumberFieldMapper.Builder("field2", NumberFieldMapper.NumberType.HALF_FLOAT, false, true) + .build(new Mapper.BuilderContext(settings, new ContentPath())); + NumberFieldMapper numberFieldMapper2 = new NumberFieldMapper.Builder("field4", NumberFieldMapper.NumberType.HALF_FLOAT, false, true) + .build(new Mapper.BuilderContext(settings, new ContentPath())); + NumberFieldMapper numberFieldMapper3 = new NumberFieldMapper.Builder("field6", NumberFieldMapper.NumberType.HALF_FLOAT, false, true) + .build(new Mapper.BuilderContext(settings, new ContentPath())); + MappingLookup fieldMappers = new MappingLookup( + Set.of(numberFieldMapper1, numberFieldMapper2, numberFieldMapper3), + Collections.emptyList(), + Collections.emptyList(), + 0, + null + ); + when(documentMapper.mappers()).thenReturn(fieldMappers); + + int noOfStarTreeDocuments = 5; + StarTreeDocument[] starTreeDocuments = new StarTreeDocument[noOfStarTreeDocuments]; + + starTreeDocuments[0] = new StarTreeDocument( + new Long[] { 2L, 4L, 3L, 4L }, + new HalfFloatPoint[] { new HalfFloatPoint("hf1", 12), new HalfFloatPoint("hf6", 10), new HalfFloatPoint("field6", 10) } + ); + starTreeDocuments[1] = new StarTreeDocument( + new Long[] { 3L, 4L, 2L, 1L }, + new HalfFloatPoint[] { new HalfFloatPoint("hf2", 10), new HalfFloatPoint("hf7", 6), new HalfFloatPoint("field6", 10) } + ); + starTreeDocuments[2] = new StarTreeDocument( + new Long[] { 3L, 4L, 2L, 1L }, + new HalfFloatPoint[] { new HalfFloatPoint("hf3", 14), new HalfFloatPoint("hf8", 12), new HalfFloatPoint("field6", 10) } + ); + starTreeDocuments[3] = new StarTreeDocument( + new Long[] { 2L, 4L, 3L, 4L }, + new HalfFloatPoint[] { new HalfFloatPoint("hf4", 9), new HalfFloatPoint("hf9", 4), new HalfFloatPoint("field6", 10) } + ); + starTreeDocuments[4] = new StarTreeDocument( + new Long[] { 3L, 4L, 2L, 1L }, + new HalfFloatPoint[] { new HalfFloatPoint("hf5", 11), new HalfFloatPoint("hf10", 16), new HalfFloatPoint("field6", 10) } + ); + + StarTreeDocument[] segmentStarTreeDocuments = new StarTreeDocument[noOfStarTreeDocuments]; + for (int i = 0; i < noOfStarTreeDocuments; i++) { + long metric1 = HalfFloatPoint.halfFloatToSortableShort( + 
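
test_build_halfFloatMetrics encodes each metric with HalfFloatPoint.halfFloatToSortableShort, the half-precision analogue of the sortable-long encoding used elsewhere in these tests. The round trip is exact for values representable at half precision (11 significand bits), which includes the small integers used here:

    import org.apache.lucene.sandbox.document.HalfFloatPoint;

    short sortable = HalfFloatPoint.halfFloatToSortableShort(12f);
    // small integers are exactly representable at half precision
    assert HalfFloatPoint.sortableShortToHalfFloat(sortable) == 12f;
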
((HalfFloatPoint) starTreeDocuments[i].metrics[0]).numericValue().floatValue() + ); + long metric2 = HalfFloatPoint.halfFloatToSortableShort( + ((HalfFloatPoint) starTreeDocuments[i].metrics[1]).numericValue().floatValue() + ); + long metric3 = HalfFloatPoint.halfFloatToSortableShort( + ((HalfFloatPoint) starTreeDocuments[i].metrics[2]).numericValue().floatValue() + ); + segmentStarTreeDocuments[i] = new StarTreeDocument(starTreeDocuments[i].dimensions, new Long[] { metric1, metric2, metric3 }); + } + + SequentialDocValuesIterator[] dimsIterators = getDimensionIterators(segmentStarTreeDocuments); + List metricsIterators = getMetricIterators(segmentStarTreeDocuments); + builder = getStarTreeBuilder(compositeField, writeState, mapperService); + + Iterator segmentStarTreeDocumentIterator = builder.sortAndAggregateSegmentDocuments( + dimsIterators, + metricsIterators + ); + builder.build(segmentStarTreeDocumentIterator); + List resultStarTreeDocuments = builder.getStarTreeDocuments(); + assertEquals(7, resultStarTreeDocuments.size()); + + Iterator expectedStarTreeDocumentIterator = getExpectedStarTreeDocumentIterator(); + assertStarTreeDocuments(resultStarTreeDocuments, expectedStarTreeDocumentIterator); + } + + public void test_build_floatMetrics() throws IOException { + + mapperService = mock(MapperService.class); + DocumentMapper documentMapper = mock(DocumentMapper.class); + when(mapperService.documentMapper()).thenReturn(documentMapper); + Settings settings = Settings.builder().put(settings(org.opensearch.Version.CURRENT).build()).build(); + NumberFieldMapper numberFieldMapper1 = new NumberFieldMapper.Builder("field2", NumberFieldMapper.NumberType.FLOAT, false, true) + .build(new Mapper.BuilderContext(settings, new ContentPath())); + NumberFieldMapper numberFieldMapper2 = new NumberFieldMapper.Builder("field4", NumberFieldMapper.NumberType.FLOAT, false, true) + .build(new Mapper.BuilderContext(settings, new ContentPath())); + NumberFieldMapper numberFieldMapper3 = new NumberFieldMapper.Builder("field6", NumberFieldMapper.NumberType.FLOAT, false, true) + .build(new Mapper.BuilderContext(settings, new ContentPath())); + MappingLookup fieldMappers = new MappingLookup( + Set.of(numberFieldMapper1, numberFieldMapper2, numberFieldMapper3), + Collections.emptyList(), + Collections.emptyList(), + 0, + null + ); + when(documentMapper.mappers()).thenReturn(fieldMappers); + + int noOfStarTreeDocuments = 5; + StarTreeDocument[] starTreeDocuments = new StarTreeDocument[noOfStarTreeDocuments]; + + starTreeDocuments[0] = new StarTreeDocument(new Long[] { 2L, 4L, 3L, 4L }, new Float[] { 12.0F, 10.0F, randomFloat() }); + starTreeDocuments[1] = new StarTreeDocument(new Long[] { 3L, 4L, 2L, 1L }, new Float[] { 10.0F, 6.0F, randomFloat() }); + starTreeDocuments[2] = new StarTreeDocument(new Long[] { 3L, 4L, 2L, 1L }, new Float[] { 14.0F, 12.0F, randomFloat() }); + starTreeDocuments[3] = new StarTreeDocument(new Long[] { 2L, 4L, 3L, 4L }, new Float[] { 9.0F, 4.0F, randomFloat() }); + starTreeDocuments[4] = new StarTreeDocument(new Long[] { 3L, 4L, 2L, 1L }, new Float[] { 11.0F, 16.0F, randomFloat() }); + + StarTreeDocument[] segmentStarTreeDocuments = new StarTreeDocument[noOfStarTreeDocuments]; + for (int i = 0; i < noOfStarTreeDocuments; i++) { + long metric1 = NumericUtils.floatToSortableInt((Float) starTreeDocuments[i].metrics[0]); + long metric2 = NumericUtils.floatToSortableInt((Float) starTreeDocuments[i].metrics[1]); + long metric3 = NumericUtils.floatToSortableInt((Float) 
starTreeDocuments[i].metrics[2]); + segmentStarTreeDocuments[i] = new StarTreeDocument(starTreeDocuments[i].dimensions, new Long[] { metric1, metric2, metric3 }); + } + + SequentialDocValuesIterator[] dimsIterators = getDimensionIterators(segmentStarTreeDocuments); + List metricsIterators = getMetricIterators(segmentStarTreeDocuments); + builder = getStarTreeBuilder(compositeField, writeState, mapperService); + Iterator segmentStarTreeDocumentIterator = builder.sortAndAggregateSegmentDocuments( + dimsIterators, + metricsIterators + ); + builder.build(segmentStarTreeDocumentIterator); + + List resultStarTreeDocuments = builder.getStarTreeDocuments(); + assertEquals(7, resultStarTreeDocuments.size()); + + Iterator expectedStarTreeDocumentIterator = getExpectedStarTreeDocumentIterator(); + assertStarTreeDocuments(resultStarTreeDocuments, expectedStarTreeDocumentIterator); + } + + public void test_build_longMetrics() throws IOException { + + mapperService = mock(MapperService.class); + DocumentMapper documentMapper = mock(DocumentMapper.class); + when(mapperService.documentMapper()).thenReturn(documentMapper); + Settings settings = Settings.builder().put(settings(org.opensearch.Version.CURRENT).build()).build(); + NumberFieldMapper numberFieldMapper1 = new NumberFieldMapper.Builder("field2", NumberFieldMapper.NumberType.LONG, false, true) + .build(new Mapper.BuilderContext(settings, new ContentPath())); + NumberFieldMapper numberFieldMapper2 = new NumberFieldMapper.Builder("field4", NumberFieldMapper.NumberType.LONG, false, true) + .build(new Mapper.BuilderContext(settings, new ContentPath())); + NumberFieldMapper numberFieldMapper3 = new NumberFieldMapper.Builder("field6", NumberFieldMapper.NumberType.LONG, false, true) + .build(new Mapper.BuilderContext(settings, new ContentPath())); + MappingLookup fieldMappers = new MappingLookup( + Set.of(numberFieldMapper1, numberFieldMapper2, numberFieldMapper3), + Collections.emptyList(), + Collections.emptyList(), + 0, + null + ); + when(documentMapper.mappers()).thenReturn(fieldMappers); + + int noOfStarTreeDocuments = 5; + StarTreeDocument[] starTreeDocuments = new StarTreeDocument[noOfStarTreeDocuments]; + + starTreeDocuments[0] = new StarTreeDocument(new Long[] { 2L, 4L, 3L, 4L }, new Long[] { 12L, 10L, randomLong() }); + starTreeDocuments[1] = new StarTreeDocument(new Long[] { 3L, 4L, 2L, 1L }, new Long[] { 10L, 6L, randomLong() }); + starTreeDocuments[2] = new StarTreeDocument(new Long[] { 3L, 4L, 2L, 1L }, new Long[] { 14L, 12L, randomLong() }); + starTreeDocuments[3] = new StarTreeDocument(new Long[] { 2L, 4L, 3L, 4L }, new Long[] { 9L, 4L, randomLong() }); + starTreeDocuments[4] = new StarTreeDocument(new Long[] { 3L, 4L, 2L, 1L }, new Long[] { 11L, 16L, randomLong() }); + + StarTreeDocument[] segmentStarTreeDocuments = new StarTreeDocument[noOfStarTreeDocuments]; + for (int i = 0; i < noOfStarTreeDocuments; i++) { + long metric1 = (Long) starTreeDocuments[i].metrics[0]; + long metric2 = (Long) starTreeDocuments[i].metrics[1]; + long metric3 = (Long) starTreeDocuments[i].metrics[2]; + segmentStarTreeDocuments[i] = new StarTreeDocument(starTreeDocuments[i].dimensions, new Long[] { metric1, metric2, metric3 }); + } + + SequentialDocValuesIterator[] dimsIterators = getDimensionIterators(segmentStarTreeDocuments); + List metricsIterators = getMetricIterators(segmentStarTreeDocuments); + builder = getStarTreeBuilder(compositeField, writeState, mapperService); + Iterator segmentStarTreeDocumentIterator = builder.sortAndAggregateSegmentDocuments( + 
dimsIterators, + metricsIterators + ); + builder.build(segmentStarTreeDocumentIterator); + + List resultStarTreeDocuments = builder.getStarTreeDocuments(); + assertEquals(7, resultStarTreeDocuments.size()); + + Iterator expectedStarTreeDocumentIterator = getExpectedStarTreeDocumentIterator(); + assertStarTreeDocuments(resultStarTreeDocuments, expectedStarTreeDocumentIterator); + } + + private static Iterator getExpectedStarTreeDocumentIterator() { + List expectedStarTreeDocuments = List.of( + new StarTreeDocument(new Long[] { 2L, 4L, 3L, 4L }, new Object[] { 21.0, 14.0, 2L }), + new StarTreeDocument(new Long[] { 3L, 4L, 2L, 1L }, new Object[] { 35.0, 34.0, 3L }), + new StarTreeDocument(new Long[] { null, 4L, 2L, 1L }, new Object[] { 35.0, 34.0, 3L }), + new StarTreeDocument(new Long[] { null, 4L, 3L, 4L }, new Object[] { 21.0, 14.0, 2L }), + new StarTreeDocument(new Long[] { null, 4L, null, 1L }, new Object[] { 35.0, 34.0, 3L }), + new StarTreeDocument(new Long[] { null, 4L, null, 4L }, new Object[] { 21.0, 14.0, 2L }), + new StarTreeDocument(new Long[] { null, 4L, null, null }, new Object[] { 56.0, 48.0, 5L }), + new StarTreeDocument(new Long[] { null, null, null, null }, new Object[] { 56.0, 48.0, 5L }) + ); + return expectedStarTreeDocuments.iterator(); + } + + public void test_build() throws IOException { + + int noOfStarTreeDocuments = 5; + StarTreeDocument[] starTreeDocuments = new StarTreeDocument[noOfStarTreeDocuments]; + + starTreeDocuments[0] = new StarTreeDocument(new Long[] { 2L, 4L, 3L, 4L }, new Double[] { 12.0, 10.0, randomDouble() }); + starTreeDocuments[1] = new StarTreeDocument(new Long[] { 3L, 4L, 2L, 1L }, new Double[] { 10.0, 6.0, randomDouble() }); + starTreeDocuments[2] = new StarTreeDocument(new Long[] { 3L, 4L, 2L, 1L }, new Double[] { 14.0, 12.0, randomDouble() }); + starTreeDocuments[3] = new StarTreeDocument(new Long[] { 2L, 4L, 3L, 4L }, new Double[] { 9.0, 4.0, randomDouble() }); + starTreeDocuments[4] = new StarTreeDocument(new Long[] { 3L, 4L, 2L, 1L }, new Double[] { 11.0, 16.0, randomDouble() }); + + StarTreeDocument[] segmentStarTreeDocuments = new StarTreeDocument[noOfStarTreeDocuments]; + for (int i = 0; i < noOfStarTreeDocuments; i++) { + long metric1 = NumericUtils.doubleToSortableLong((Double) starTreeDocuments[i].metrics[0]); + long metric2 = NumericUtils.doubleToSortableLong((Double) starTreeDocuments[i].metrics[1]); + long metric3 = NumericUtils.doubleToSortableLong((Double) starTreeDocuments[i].metrics[2]); + segmentStarTreeDocuments[i] = new StarTreeDocument(starTreeDocuments[i].dimensions, new Long[] { metric1, metric2, metric3 }); + } + + SequentialDocValuesIterator[] dimsIterators = getDimensionIterators(segmentStarTreeDocuments); + List metricsIterators = getMetricIterators(segmentStarTreeDocuments); + builder = getStarTreeBuilder(compositeField, writeState, mapperService); + Iterator segmentStarTreeDocumentIterator = builder.sortAndAggregateSegmentDocuments( + dimsIterators, + metricsIterators + ); + builder.build(segmentStarTreeDocumentIterator); + + List resultStarTreeDocuments = builder.getStarTreeDocuments(); + assertEquals(7, resultStarTreeDocuments.size()); + + Iterator expectedStarTreeDocumentIterator = getExpectedStarTreeDocumentIterator(); + assertStarTreeDocuments(resultStarTreeDocuments, expectedStarTreeDocumentIterator); + } + + private void assertStarTreeDocuments( + List resultStarTreeDocuments, + Iterator expectedStarTreeDocumentIterator + ) { + Iterator resultStarTreeDocumentIterator = resultStarTreeDocuments.iterator(); + 
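
getExpectedStarTreeDocumentIterator() above is the expanded tree for the two aggregated leaves [2, 4, 3, 4] -> (21.0, 14.0, 2) and [3, 4, 2, 1] -> (35.0, 34.0, 3): each star document nulls out a subset of dimensions and re-aggregates the leaves it now covers, so the entries covering both leaves are just the grand totals (names are illustrative):

    double starSum1 = 21.0 + 35.0;   // 56.0 -> metrics[0] of [null, 4, null, null]
    double starSum2 = 14.0 + 34.0;   // 48.0 -> metrics[1]
    long starCount = 2L + 3L;        // 5L   -> metrics[2]
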
while (resultStarTreeDocumentIterator.hasNext() && expectedStarTreeDocumentIterator.hasNext()) { + StarTreeDocument resultStarTreeDocument = resultStarTreeDocumentIterator.next(); + StarTreeDocument expectedStarTreeDocument = expectedStarTreeDocumentIterator.next(); + + assertEquals(expectedStarTreeDocument.dimensions[0], resultStarTreeDocument.dimensions[0]); + assertEquals(expectedStarTreeDocument.dimensions[1], resultStarTreeDocument.dimensions[1]); + assertEquals(expectedStarTreeDocument.dimensions[2], resultStarTreeDocument.dimensions[2]); + assertEquals(expectedStarTreeDocument.dimensions[3], resultStarTreeDocument.dimensions[3]); + assertEquals(expectedStarTreeDocument.metrics[0], resultStarTreeDocument.metrics[0]); + assertEquals(expectedStarTreeDocument.metrics[1], resultStarTreeDocument.metrics[1]); + assertEquals(expectedStarTreeDocument.metrics[2], resultStarTreeDocument.metrics[2]); + } + } + + public void test_build_starTreeDataset() throws IOException { + + fields = List.of("fieldC", "fieldB", "fieldL", "fieldI"); + + dimensionsOrder = List.of(new NumericDimension("fieldC"), new NumericDimension("fieldB"), new NumericDimension("fieldL")); + metrics = List.of(new Metric("fieldI", List.of(MetricStat.SUM))); + + DocValuesProducer docValuesProducer = mock(DocValuesProducer.class); + + compositeField = new StarTreeField( + "test", + dimensionsOrder, + metrics, + new StarTreeFieldConfiguration(1, Set.of(), StarTreeFieldConfiguration.StarTreeBuildMode.ON_HEAP) + ); + SegmentInfo segmentInfo = new SegmentInfo( + directory, + Version.LATEST, + Version.LUCENE_9_11_0, + "test_segment", + 7, + false, + false, + new Lucene99Codec(), + new HashMap<>(), + UUID.randomUUID().toString().substring(0, 16).getBytes(StandardCharsets.UTF_8), + new HashMap<>(), + null + ); + + fieldsInfo = new FieldInfo[fields.size()]; + fieldProducerMap = new HashMap<>(); + for (int i = 0; i < fieldsInfo.length; i++) { + fieldsInfo[i] = new FieldInfo( + fields.get(i), + i, + false, + false, + true, + IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS, + DocValuesType.SORTED_NUMERIC, + -1, + Collections.emptyMap(), + 0, + 0, + 0, + 0, + VectorEncoding.FLOAT32, + VectorSimilarityFunction.EUCLIDEAN, + false, + false + ); + fieldProducerMap.put(fields.get(i), docValuesProducer); + } + FieldInfos fieldInfos = new FieldInfos(fieldsInfo); + writeState = new SegmentWriteState(InfoStream.getDefault(), segmentInfo.dir, segmentInfo, fieldInfos, null, newIOContext(random())); + + mapperService = mock(MapperService.class); + DocumentMapper documentMapper = mock(DocumentMapper.class); + when(mapperService.documentMapper()).thenReturn(documentMapper); + Settings settings = Settings.builder().put(settings(org.opensearch.Version.CURRENT).build()).build(); + NumberFieldMapper numberFieldMapper1 = new NumberFieldMapper.Builder("fieldI", NumberFieldMapper.NumberType.DOUBLE, false, true) + .build(new Mapper.BuilderContext(settings, new ContentPath())); + MappingLookup fieldMappers = new MappingLookup( + Set.of(numberFieldMapper1), + Collections.emptyList(), + Collections.emptyList(), + 0, + null + ); + when(documentMapper.mappers()).thenReturn(fieldMappers); + + int noOfStarTreeDocuments = 7; + StarTreeDocument[] starTreeDocuments = new StarTreeDocument[noOfStarTreeDocuments]; + starTreeDocuments[0] = new StarTreeDocument(new Long[] { 1L, 11L, 21L }, new Double[] { 400.0 }); + starTreeDocuments[1] = new StarTreeDocument(new Long[] { 1L, 12L, 22L }, new Double[] { 200.0 }); + starTreeDocuments[2] = new StarTreeDocument(new Long[] { 
2L, 13L, 23L }, new Double[] { 300.0 }); + starTreeDocuments[3] = new StarTreeDocument(new Long[] { 2L, 13L, 21L }, new Double[] { 100.0 }); + starTreeDocuments[4] = new StarTreeDocument(new Long[] { 3L, 11L, 21L }, new Double[] { 600.0 }); + starTreeDocuments[5] = new StarTreeDocument(new Long[] { 3L, 12L, 23L }, new Double[] { 200.0 }); + starTreeDocuments[6] = new StarTreeDocument(new Long[] { 3L, 12L, 21L }, new Double[] { 400.0 }); + + StarTreeDocument[] segmentStarTreeDocuments = new StarTreeDocument[noOfStarTreeDocuments]; + for (int i = 0; i < noOfStarTreeDocuments; i++) { + long metric1 = NumericUtils.doubleToSortableLong((Double) starTreeDocuments[i].metrics[0]); + segmentStarTreeDocuments[i] = new StarTreeDocument(starTreeDocuments[i].dimensions, new Long[] { metric1 }); + } + + SequentialDocValuesIterator[] dimsIterators = getDimensionIterators(segmentStarTreeDocuments); + List metricsIterators = getMetricIterators(segmentStarTreeDocuments); + builder = getStarTreeBuilder(compositeField, writeState, mapperService); + Iterator segmentStarTreeDocumentIterator = builder.sortAndAggregateSegmentDocuments( + dimsIterators, + metricsIterators + ); + builder.build(segmentStarTreeDocumentIterator); + + List resultStarTreeDocuments = builder.getStarTreeDocuments(); + Iterator expectedStarTreeDocumentIterator = expectedStarTreeDocuments(); + Iterator resultStarTreeDocumentIterator = resultStarTreeDocuments.iterator(); + Map> dimValueToDocIdMap = new HashMap<>(); + builder.rootNode.isStarNode = true; + traverseStarTree(builder.rootNode, dimValueToDocIdMap, true); + + Map> expectedDimToValueMap = getExpectedDimToValueMap(); + for (Map.Entry> entry : dimValueToDocIdMap.entrySet()) { + int dimId = entry.getKey(); + if (dimId == -1) continue; + Map map = expectedDimToValueMap.get(dimId); + for (Map.Entry dimValueToDocIdEntry : entry.getValue().entrySet()) { + long dimValue = dimValueToDocIdEntry.getKey(); + int docId = dimValueToDocIdEntry.getValue(); + if (map.get(dimValue) != null) { + assertEquals(map.get(dimValue), resultStarTreeDocuments.get(docId).metrics[0]); + } + } + } + + while (resultStarTreeDocumentIterator.hasNext() && expectedStarTreeDocumentIterator.hasNext()) { + StarTreeDocument resultStarTreeDocument = resultStarTreeDocumentIterator.next(); + StarTreeDocument expectedStarTreeDocument = expectedStarTreeDocumentIterator.next(); + assertEquals(expectedStarTreeDocument.dimensions[0], resultStarTreeDocument.dimensions[0]); + assertEquals(expectedStarTreeDocument.dimensions[1], resultStarTreeDocument.dimensions[1]); + assertEquals(expectedStarTreeDocument.dimensions[2], resultStarTreeDocument.dimensions[2]); + assertEquals(expectedStarTreeDocument.metrics[0], resultStarTreeDocument.metrics[0]); + } + } + + private static Map> getExpectedDimToValueMap() { + Map> expectedDimToValueMap = new HashMap<>(); + Map dimValueMap = new HashMap<>(); + dimValueMap.put(1L, 600.0); + dimValueMap.put(2L, 400.0); + dimValueMap.put(3L, 1200.0); + expectedDimToValueMap.put(0, dimValueMap); + + dimValueMap = new HashMap<>(); + dimValueMap.put(11L, 1000.0); + dimValueMap.put(12L, 800.0); + dimValueMap.put(13L, 400.0); + expectedDimToValueMap.put(1, dimValueMap); + + dimValueMap = new HashMap<>(); + dimValueMap.put(21L, 1500.0); + dimValueMap.put(22L, 200.0); + dimValueMap.put(23L, 500.0); + expectedDimToValueMap.put(2, dimValueMap); + return expectedDimToValueMap; + } + + private Iterator expectedStarTreeDocuments() { + List expectedStarTreeDocuments = List.of( + new StarTreeDocument(new Long[] { 1L, 
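
The per-dimension totals wired into getExpectedDimToValueMap() are plain grouped sums over the seven input documents; for example, for the first dimension (fieldC), with illustrative names:

    double sumFieldC1 = 400.0 + 200.0;          // docs [1,11,21] and [1,12,22] -> 600.0
    double sumFieldC2 = 300.0 + 100.0;          // docs [2,13,23] and [2,13,21] -> 400.0
    double sumFieldC3 = 600.0 + 200.0 + 400.0;  // the three fieldC == 3 docs   -> 1200.0
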
11L, 21L }, new Object[] { 400.0 }), + new StarTreeDocument(new Long[] { 1L, 12L, 22L }, new Object[] { 200.0 }), + new StarTreeDocument(new Long[] { 2L, 13L, 21L }, new Object[] { 100.0 }), + new StarTreeDocument(new Long[] { 2L, 13L, 23L }, new Object[] { 300.0 }), + new StarTreeDocument(new Long[] { 3L, 11L, 21L }, new Object[] { 600.0 }), + new StarTreeDocument(new Long[] { 3L, 12L, 21L }, new Object[] { 400.0 }), + new StarTreeDocument(new Long[] { 3L, 12L, 23L }, new Object[] { 200.0 }), + new StarTreeDocument(new Long[] { null, 11L, 21L }, new Object[] { 1000.0 }), + new StarTreeDocument(new Long[] { null, 12L, 21L }, new Object[] { 400.0 }), + new StarTreeDocument(new Long[] { null, 12L, 22L }, new Object[] { 200.0 }), + new StarTreeDocument(new Long[] { null, 12L, 23L }, new Object[] { 200.0 }), + new StarTreeDocument(new Long[] { null, 13L, 21L }, new Object[] { 100.0 }), + new StarTreeDocument(new Long[] { null, 13L, 23L }, new Object[] { 300.0 }), + new StarTreeDocument(new Long[] { null, null, 21L }, new Object[] { 1500.0 }), + new StarTreeDocument(new Long[] { null, null, 22L }, new Object[] { 200.0 }), + new StarTreeDocument(new Long[] { null, null, 23L }, new Object[] { 500.0 }), + new StarTreeDocument(new Long[] { null, null, null }, new Object[] { 2200.0 }), + new StarTreeDocument(new Long[] { null, 12L, null }, new Object[] { 800.0 }), + new StarTreeDocument(new Long[] { null, 13L, null }, new Object[] { 400.0 }), + new StarTreeDocument(new Long[] { 1L, null, 21L }, new Object[] { 400.0 }), + new StarTreeDocument(new Long[] { 1L, null, 22L }, new Object[] { 200.0 }), + new StarTreeDocument(new Long[] { 1L, null, null }, new Object[] { 600.0 }), + new StarTreeDocument(new Long[] { 2L, 13L, null }, new Object[] { 400.0 }), + new StarTreeDocument(new Long[] { 3L, null, 21L }, new Object[] { 1000.0 }), + new StarTreeDocument(new Long[] { 3L, null, 23L }, new Object[] { 200.0 }), + new StarTreeDocument(new Long[] { 3L, null, null }, new Object[] { 1200.0 }), + new StarTreeDocument(new Long[] { 3L, 12L, null }, new Object[] { 600.0 }) + ); + + return expectedStarTreeDocuments.iterator(); + } + + public void testFlushFlow() throws IOException { + List dimList = List.of(0L, 1L, 3L, 4L, 5L); + List docsWithField = List.of(0, 1, 3, 4, 5); + List dimList2 = List.of(0L, 1L, 2L, 3L, 4L, 5L); + List docsWithField2 = List.of(0, 1, 2, 3, 4, 5); + + List metricsList = List.of( + getLongFromDouble(0.0), + getLongFromDouble(10.0), + getLongFromDouble(20.0), + getLongFromDouble(30.0), + getLongFromDouble(40.0), + getLongFromDouble(50.0) + ); + List metricsWithField = List.of(0, 1, 2, 3, 4, 5); + + StarTreeField sf = getStarTreeFieldWithMultipleMetrics(); + SortedNumericDocValues d1sndv = getSortedNumericMock(dimList, docsWithField); + SortedNumericDocValues d2sndv = getSortedNumericMock(dimList2, docsWithField2); + SortedNumericDocValues m1sndv = getSortedNumericMock(metricsList, metricsWithField); + SortedNumericDocValues m2sndv = getSortedNumericMock(metricsList, metricsWithField); + + OnHeapStarTreeBuilder builder = new OnHeapStarTreeBuilder(sf, getWriteState(6), mapperService); + SequentialDocValuesIterator[] dimDvs = { new SequentialDocValuesIterator(d1sndv), new SequentialDocValuesIterator(d2sndv) }; + Iterator starTreeDocumentIterator = builder.sortAndAggregateSegmentDocuments( + dimDvs, + List.of(new SequentialDocValuesIterator(m1sndv), new SequentialDocValuesIterator(m2sndv)) + ); + /** + * Asserting following dim / metrics [ dim1, dim2 / Sum [metric], count [metric] ] + [0, 0] | 
[0.0, 1] + [1, 1] | [10.0, 1] + [3, 3] | [30.0, 1] + [4, 4] | [40.0, 1] + [5, 5] | [50.0, 1] + [null, 2] | [20.0, 1] + */ + int count = 0; + while (starTreeDocumentIterator.hasNext()) { + count++; + StarTreeDocument starTreeDocument = starTreeDocumentIterator.next(); + assertEquals( + starTreeDocument.dimensions[0] != null ? starTreeDocument.dimensions[0] * 1 * 10.0 : 20.0, + starTreeDocument.metrics[0] + ); + assertEquals(1L, starTreeDocument.metrics[1]); + } + assertEquals(6, count); + } + + public void testFlushFlowBuild() throws IOException { + List dimList = new ArrayList<>(100); + List docsWithField = new ArrayList<>(100); + for (int i = 0; i < 100; i++) { + dimList.add((long) i); + docsWithField.add(i); + } + + List dimList2 = new ArrayList<>(100); + List docsWithField2 = new ArrayList<>(100); + for (int i = 0; i < 100; i++) { + dimList2.add((long) i); + docsWithField2.add(i); + } + + List metricsList = new ArrayList<>(100); + List metricsWithField = new ArrayList<>(100); + for (int i = 0; i < 100; i++) { + metricsList.add(getLongFromDouble(i * 10.0)); + metricsWithField.add(i); + } + + Dimension d1 = new NumericDimension("field1"); + Dimension d2 = new NumericDimension("field3"); + Metric m1 = new Metric("field2", List.of(MetricStat.SUM)); + List dims = List.of(d1, d2); + List metrics = List.of(m1); + StarTreeFieldConfiguration c = new StarTreeFieldConfiguration( + 1, + new HashSet<>(), + StarTreeFieldConfiguration.StarTreeBuildMode.ON_HEAP + ); + StarTreeField sf = new StarTreeField("sf", dims, metrics, c); + SortedNumericDocValues d1sndv = getSortedNumericMock(dimList, docsWithField); + SortedNumericDocValues d2sndv = getSortedNumericMock(dimList2, docsWithField2); + SortedNumericDocValues m1sndv = getSortedNumericMock(metricsList, metricsWithField); + + BaseStarTreeBuilder builder = getStarTreeBuilder(sf, getWriteState(100), mapperService); + + DocValuesProducer d1vp = getDocValuesProducer(d1sndv); + DocValuesProducer d2vp = getDocValuesProducer(d2sndv); + DocValuesProducer m1vp = getDocValuesProducer(m1sndv); + Map fieldProducerMap = Map.of("field1", d1vp, "field3", d2vp, "field2", m1vp); + builder.build(fieldProducerMap); + /** + * Asserting following dim / metrics [ dim1, dim2 / Sum [ metric] ] + [0, 0] | [0.0] + [1, 1] | [10.0] + [2, 2] | [20.0] + [3, 3] | [30.0] + [4, 4] | [40.0] + .... + [null, 0] | [0.0] + [null, 1] | [10.0] + ... + [null, null] | [49500.0] + */ + List starTreeDocuments = builder.getStarTreeDocuments(); + for (StarTreeDocument starTreeDocument : starTreeDocuments) { + assertEquals( + starTreeDocument.dimensions[1] != null ? 
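
The flush-flow tests feed their metric columns through getLongFromDouble, a helper defined further down in this file and not shown in this excerpt. Given that the assertions read plain doubles back out, it is presumably the same sortable-long encoding used by the other tests; an assumed stand-in:

    import org.apache.lucene.util.NumericUtils;

    // Assumed shape of the helper (its actual body is not in this excerpt):
    static long getLongFromDouble(double d) {
        return NumericUtils.doubleToSortableLong(d);
    }
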
starTreeDocument.dimensions[1] * 10.0 : 49500.0, + starTreeDocument.metrics[0] + ); + } + builder.close(); + } + + private static DocValuesProducer getDocValuesProducer(SortedNumericDocValues sndv) { + return new EmptyDocValuesProducer() { + @Override + public SortedNumericDocValues getSortedNumeric(FieldInfo field) throws IOException { + return sndv; + } + }; + } + + private static StarTreeField getStarTreeFieldWithMultipleMetrics() { + Dimension d1 = new NumericDimension("field1"); + Dimension d2 = new NumericDimension("field3"); + Metric m1 = new Metric("field2", List.of(MetricStat.SUM)); + Metric m2 = new Metric("field2", List.of(MetricStat.COUNT)); + List dims = List.of(d1, d2); + List metrics = List.of(m1, m2); + StarTreeFieldConfiguration c = new StarTreeFieldConfiguration( + 1000, + new HashSet<>(), + StarTreeFieldConfiguration.StarTreeBuildMode.ON_HEAP + ); + StarTreeField sf = new StarTreeField("sf", dims, metrics, c); + return sf; + } + + public void testMergeFlowWithSum() throws IOException { + List dimList = List.of(0L, 1L, 3L, 4L, 5L, 6L); + List docsWithField = List.of(0, 1, 3, 4, 5, 6); + List dimList2 = List.of(0L, 1L, 2L, 3L, 4L, 5L, -1L); + List docsWithField2 = List.of(0, 1, 2, 3, 4, 5, 6); + + List metricsList = List.of( + getLongFromDouble(0.0), + getLongFromDouble(10.0), + getLongFromDouble(20.0), + getLongFromDouble(30.0), + getLongFromDouble(40.0), + getLongFromDouble(50.0), + getLongFromDouble(60.0) + + ); + List metricsWithField = List.of(0, 1, 2, 3, 4, 5, 6); + + StarTreeField sf = getStarTreeField(MetricStat.SUM); + StarTreeValues starTreeValues = getStarTreeValues( + getSortedNumericMock(dimList, docsWithField), + getSortedNumericMock(dimList2, docsWithField2), + getSortedNumericMock(metricsList, metricsWithField), + sf, + "6" + ); + + StarTreeValues starTreeValues2 = getStarTreeValues( + getSortedNumericMock(dimList, docsWithField), + getSortedNumericMock(dimList2, docsWithField2), + getSortedNumericMock(metricsList, metricsWithField), + sf, + "6" + ); + OnHeapStarTreeBuilder builder = new OnHeapStarTreeBuilder(sf, getWriteState(6), mapperService); + Iterator starTreeDocumentIterator = builder.mergeStarTrees(List.of(starTreeValues, starTreeValues2)); + /** + * Asserting following dim / metrics [ dim1, dim2 / Sum [ metric] ] + * [0, 0] | [0.0] + * [1, 1] | [20.0] + * [3, 3] | [60.0] + * [4, 4] | [80.0] + * [5, 5] | [100.0] + * [null, 2] | [40.0] + * ------------------ We only take non star docs + * [6,-1] | [120.0] + */ + int count = 0; + while (starTreeDocumentIterator.hasNext()) { + count++; + StarTreeDocument starTreeDocument = starTreeDocumentIterator.next(); + assertEquals( + starTreeDocument.dimensions[0] != null ? 
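
testMergeFlowWithSum merges two identical StarTreeValues, so every group's sum is exactly doubled relative to a single segment, which is what the two branches of the assertion below encode (names are illustrative):

    // dimension value d contributes d * 10.0 per segment, twice after the merge
    double mergedForDim5 = 5 * 10.0 * 2;   // 100.0
    // the [null, 2] group carries 20.0 per segment
    double mergedForNullDim = 20.0 * 2;    // 40.0
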
+        while (starTreeDocumentIterator.hasNext()) {
+            count++;
+            StarTreeDocument starTreeDocument = starTreeDocumentIterator.next();
+            assertEquals(
+                starTreeDocument.dimensions[0] != null ? starTreeDocument.dimensions[0] * 2 * 10.0 : 40.0,
+                starTreeDocument.metrics[0]
+            );
+        }
+        assertEquals(6, count);
+    }
+
+    public void testMergeFlowWithCount() throws IOException {
+        List<Long> dimList = List.of(0L, 1L, 3L, 4L, 5L, 6L);
+        List<Integer> docsWithField = List.of(0, 1, 3, 4, 5, 6);
+        List<Long> dimList2 = List.of(0L, 1L, 2L, 3L, 4L, 5L, -1L);
+        List<Integer> docsWithField2 = List.of(0, 1, 2, 3, 4, 5, 6);
+
+        List<Long> metricsList = List.of(0L, 1L, 2L, 3L, 4L, 5L, 6L);
+        List<Integer> metricsWithField = List.of(0, 1, 2, 3, 4, 5, 6);
+
+        StarTreeField sf = getStarTreeField(MetricStat.COUNT);
+        StarTreeValues starTreeValues = getStarTreeValues(
+            getSortedNumericMock(dimList, docsWithField),
+            getSortedNumericMock(dimList2, docsWithField2),
+            getSortedNumericMock(metricsList, metricsWithField),
+            sf,
+            "6"
+        );
+
+        StarTreeValues starTreeValues2 = getStarTreeValues(
+            getSortedNumericMock(dimList, docsWithField),
+            getSortedNumericMock(dimList2, docsWithField2),
+            getSortedNumericMock(metricsList, metricsWithField),
+            sf,
+            "6"
+        );
+        OnHeapStarTreeBuilder builder = new OnHeapStarTreeBuilder(sf, getWriteState(6), mapperService);
+        Iterator<StarTreeDocument> starTreeDocumentIterator = builder.mergeStarTrees(List.of(starTreeValues, starTreeValues2));
+        /**
+         * Asserting following dim / metrics [ dim1, dim2 / Count [ metric] ]
+         [0, 0] | [0]
+         [1, 1] | [2]
+         [3, 3] | [6]
+         [4, 4] | [8]
+         [5, 5] | [10]
+         [null, 2] | [4]
+         ---------------
+         [6,-1] | [12]
+         */
+        int count = 0;
+        while (starTreeDocumentIterator.hasNext()) {
+            count++;
+            StarTreeDocument starTreeDocument = starTreeDocumentIterator.next();
+            assertEquals(starTreeDocument.dimensions[0] != null ? starTreeDocument.dimensions[0] * 2 : 4, starTreeDocument.metrics[0]);
+        }
+        assertEquals(6, count);
+    }
+
+    private StarTreeValues getStarTreeValues(
+        SortedNumericDocValues dimList,
+        SortedNumericDocValues dimList2,
+        SortedNumericDocValues metricsList,
+        StarTreeField sf,
+        String number
+    ) {
+        SortedNumericDocValues d1sndv = dimList;
+        SortedNumericDocValues d2sndv = dimList2;
+        SortedNumericDocValues m1sndv = metricsList;
+        Map<String, DocIdSetIterator> dimDocIdSetIterators = Map.of("field1", d1sndv, "field3", d2sndv);
+        Map<String, DocIdSetIterator> metricDocIdSetIterators = Map.of("field2", m1sndv);
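+        // "numSegmentDocs" tells the merge how many documents to read from this segment's iterators.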
+        StarTreeValues starTreeValues = new StarTreeValues(
+            sf,
+            null,
+            dimDocIdSetIterators,
+            metricDocIdSetIterators,
+            Map.of("numSegmentDocs", number)
+        );
+        return starTreeValues;
+    }
+
+    public void testMergeFlowWithDifferentDocsFromSegments() throws IOException {
+        List<Long> dimList = List.of(0L, 1L, 3L, 4L, 5L, 6L);
+        List<Integer> docsWithField = List.of(0, 1, 3, 4, 5, 6);
+        List<Long> dimList2 = List.of(0L, 1L, 2L, 3L, 4L, 5L, -1L);
+        List<Integer> docsWithField2 = List.of(0, 1, 2, 3, 4, 5, 6);
+
+        List<Long> metricsList = List.of(0L, 1L, 2L, 3L, 4L, 5L, 6L);
+        List<Integer> metricsWithField = List.of(0, 1, 2, 3, 4, 5, 6);
+
+        List<Long> dimList3 = List.of(5L, 6L, 8L, -1L);
+        List<Integer> docsWithField3 = List.of(0, 1, 3, 4);
+        List<Long> dimList4 = List.of(5L, 6L, 7L, 8L, -1L);
+        List<Integer> docsWithField4 = List.of(0, 1, 2, 3, 4);
+
+        List<Long> metricsList2 = List.of(5L, 6L, 7L, 8L, 9L);
+        List<Integer> metricsWithField2 = List.of(0, 1, 2, 3, 4);
+
+        StarTreeField sf = getStarTreeField(MetricStat.COUNT);
+        StarTreeValues starTreeValues = getStarTreeValues(
+            getSortedNumericMock(dimList, docsWithField),
+            getSortedNumericMock(dimList2, docsWithField2),
+            getSortedNumericMock(metricsList, metricsWithField),
+            sf,
+            "6"
+        );
+
+        StarTreeValues starTreeValues2 = getStarTreeValues(
+            getSortedNumericMock(dimList3, docsWithField3),
+            getSortedNumericMock(dimList4, docsWithField4),
+            getSortedNumericMock(metricsList2, metricsWithField2),
+            sf,
+            "4"
+        );
+        OnHeapStarTreeBuilder builder = new OnHeapStarTreeBuilder(sf, getWriteState(4), mapperService);
+        Iterator<StarTreeDocument> starTreeDocumentIterator = builder.mergeStarTrees(List.of(starTreeValues, starTreeValues2));
+        /**
+         * Asserting following dim / metrics [ dim1, dim2 / Count [ metric] ]
+         [0, 0] | [0]
+         [1, 1] | [1]
+         [3, 3] | [3]
+         [4, 4] | [4]
+         [5, 5] | [10]
+         [6, 6] | [6]
+         [8, 8] | [8]
+         [null, 2] | [2]
+         [null, 7] | [7]
+         */
+        int count = 0;
+        while (starTreeDocumentIterator.hasNext()) {
+            count++;
+            StarTreeDocument starTreeDocument = starTreeDocumentIterator.next();
+            if (Objects.equals(starTreeDocument.dimensions[0], 5L)) {
+                assertEquals(starTreeDocument.dimensions[0] * 2, starTreeDocument.metrics[0]);
+            } else {
+                assertEquals(starTreeDocument.dimensions[1], starTreeDocument.metrics[0]);
+            }
+        }
+        assertEquals(9, count);
+    }
+
+    public void testMergeFlowWithMissingDocs() throws IOException {
+        List<Long> dimList = List.of(0L, 1L, 2L, 3L, 4L, 6L);
+        List<Integer> docsWithField = List.of(0, 1, 2, 3, 4, 6);
+        List<Long> dimList2 = List.of(0L, 1L, 2L, 3L, 4L, 5L, -1L);
+        List<Integer> docsWithField2 = List.of(0, 1, 2, 3, 4, 5, 6);
+
+        List<Long> metricsList = List.of(0L, 1L, 2L, 3L, 4L, 5L, 6L);
+        List<Integer> metricsWithField = List.of(0, 1, 2, 3, 4, 5, 6);
+
+        List<Long> dimList3 = List.of(5L, 6L, 8L, -1L);
+        List<Integer> docsWithField3 = List.of(0, 1, 3, 4);
+        List<Long> dimList4 = List.of(5L, 6L, 7L, 8L, -1L);
+        List<Integer> docsWithField4 = List.of(0, 1, 2, 3, 4);
+
+        List<Long> metricsList2 = List.of(5L, 6L, 7L, 8L, 9L);
+        List<Integer> metricsWithField2 = List.of(0, 1, 2, 3, 4);
+
+        StarTreeField sf = getStarTreeField(MetricStat.COUNT);
+        StarTreeValues starTreeValues = getStarTreeValues(
+            getSortedNumericMock(dimList, docsWithField),
+            getSortedNumericMock(dimList2, docsWithField2),
+            getSortedNumericMock(metricsList, metricsWithField),
+            sf,
+            "6"
+        );
+
+        StarTreeValues starTreeValues2 = getStarTreeValues(
+            getSortedNumericMock(dimList3, docsWithField3),
+            getSortedNumericMock(dimList4, docsWithField4),
+            getSortedNumericMock(metricsList2, metricsWithField2),
+            sf,
+            "4"
+        );
+        OnHeapStarTreeBuilder builder = new OnHeapStarTreeBuilder(sf, getWriteState(4), mapperService);
+        Iterator<StarTreeDocument> starTreeDocumentIterator = builder.mergeStarTrees(List.of(starTreeValues, starTreeValues2));
+        /**
+         * Asserting following dim / metrics [ dim1, dim2 / Count [ metric] ]
+         [0, 0] | [0]
+         [1, 1] | [1]
+         [2, 2] | [2]
+         [3, 3] | [3]
+         [4, 4] | [4]
+         [5, 5] | [5]
+         [6, 6] | [6]
+         [8, 8] | [8]
+         [null, 5] | [5]
+         [null, 7] | [7]
+         */
+        int count = 0;
+        while (starTreeDocumentIterator.hasNext()) {
+            count++;
+            StarTreeDocument starTreeDocument = starTreeDocumentIterator.next();
+            if (starTreeDocument.dimensions[0] == null) {
+                assertTrue(List.of(5L, 7L).contains(starTreeDocument.dimensions[1]));
+            }
+            assertEquals(starTreeDocument.dimensions[1], starTreeDocument.metrics[0]);
+        }
+        assertEquals(10, count);
+    }
+
+    public void testMergeFlowWithMissingDocsInSecondDim() throws IOException {
+        List<Long> dimList2 = List.of(0L, 1L, 2L, 3L, 4L, 6L);
+        List<Integer> docsWithField2 = List.of(0, 1, 2, 3, 4, 6);
+        List<Long> dimList = List.of(0L, 1L, 2L, 3L, 4L, 5L, -1L);
+        List<Integer> docsWithField = List.of(0, 1, 2, 3, 4, 5, 6);
+
+        List<Long> metricsList = List.of(0L, 1L, 2L, 3L, 4L, 5L, 6L);
+        List<Integer> metricsWithField = List.of(0, 1, 2, 3, 4, 5, 6);
+
+        List<Long> dimList3 = List.of(5L, 6L, 8L, -1L);
+        List<Integer> docsWithField3 = List.of(0, 1, 3, 4);
+        List<Long> dimList4 = List.of(5L, 6L, 7L, 8L, -1L);
+        List<Integer> docsWithField4 = List.of(0, 1, 2, 3, 4);
+
+        List<Long> metricsList2 = List.of(5L, 6L, 7L, 8L, 9L);
+        List<Integer> metricsWithField2 = List.of(0, 1, 2, 3, 4);
+
+        StarTreeField sf = getStarTreeField(MetricStat.COUNT);
+        StarTreeValues starTreeValues = getStarTreeValues(
+            getSortedNumericMock(dimList, docsWithField),
+            getSortedNumericMock(dimList2, docsWithField2),
+            getSortedNumericMock(metricsList, metricsWithField),
+            sf,
+            "6"
+        );
+
+        StarTreeValues starTreeValues2 = getStarTreeValues(
+            getSortedNumericMock(dimList3, docsWithField3),
+            getSortedNumericMock(dimList4, docsWithField4),
+            getSortedNumericMock(metricsList2, metricsWithField2),
+            sf,
+            "4"
+        );
+        OnHeapStarTreeBuilder builder = new OnHeapStarTreeBuilder(sf, getWriteState(4), mapperService);
+        Iterator<StarTreeDocument> starTreeDocumentIterator = builder.mergeStarTrees(List.of(starTreeValues, starTreeValues2));
+        /**
+         * Asserting following dim / metrics [ dim1, dim2 / Count [ metric] ]
+         [0, 0] | [0]
+         [1, 1] | [1]
+         [2, 2] | [2]
+         [3, 3] | [3]
+         [4, 4] | [4]
+         [5, 5] | [5]
+         [5, null] | [5]
+         [6, 6] | [6]
+         [8, 8] | [8]
+         [null, 7] | [7]
+         */
+        int count = 0;
+        while (starTreeDocumentIterator.hasNext()) {
+            count++;
+            StarTreeDocument starTreeDocument = starTreeDocumentIterator.next();
+            if (starTreeDocument.dimensions[0] != null && starTreeDocument.dimensions[0] == 5) {
+                assertEquals(starTreeDocument.dimensions[0], starTreeDocument.metrics[0]);
+            } else {
+                assertEquals(starTreeDocument.dimensions[1], starTreeDocument.metrics[0]);
+            }
+        }
+        assertEquals(10, count);
+    }
+
+    public void testMergeFlowWithDocsMissingAtTheEnd() throws IOException {
+        List<Long> dimList = List.of(0L, 1L, 2L, 3L, 4L);
+        List<Integer> docsWithField = List.of(0, 1, 2, 3, 4);
+        List<Long> dimList2 = List.of(0L, 1L, 2L, 3L, 4L, 5L, -1L);
+        List<Integer> docsWithField2 = List.of(0, 1, 2, 3, 4, 5, 6);
+
+        List<Long> metricsList = List.of(0L, 1L, 2L, 3L, 4L, 5L, 6L);
+        List<Integer> metricsWithField = List.of(0, 1, 2, 3, 4, 5, 6);
+
+        List<Long> dimList3 = List.of(5L, 6L, 8L, -1L);
+        List<Integer> docsWithField3 = List.of(0, 1, 3, 4);
+        List<Long> dimList4 = List.of(5L, 6L, 7L, 8L, -1L);
+        List<Integer> docsWithField4 = List.of(0, 1, 2, 3, 4);
+
+        List<Long> metricsList2 = List.of(5L, 6L, 7L, 8L, 9L);
+        List<Integer> metricsWithField2 = List.of(0, 1, 2, 3, 4);
+
+        StarTreeField sf = getStarTreeField(MetricStat.COUNT);
+        StarTreeValues starTreeValues = getStarTreeValues(
+            getSortedNumericMock(dimList, docsWithField),
+            getSortedNumericMock(dimList2, docsWithField2),
+            getSortedNumericMock(metricsList, metricsWithField),
+            sf,
+            "6"
+        );
+
+        StarTreeValues starTreeValues2 = getStarTreeValues(
+            getSortedNumericMock(dimList3, docsWithField3),
+            getSortedNumericMock(dimList4, docsWithField4),
+            getSortedNumericMock(metricsList2, metricsWithField2),
+            sf,
+            "4"
+        );
+        OnHeapStarTreeBuilder builder = new OnHeapStarTreeBuilder(sf, writeState, mapperService);
+        Iterator<StarTreeDocument> starTreeDocumentIterator = builder.mergeStarTrees(List.of(starTreeValues, starTreeValues2));
+        /**
+         * Asserting following dim / metrics [ dim1, dim2 / Count [ metric] ]
+         [0, 0] | [0]
+         [1, 1] | [1]
+         [2, 2] | [2]
+         [3, 3] | [3]
+         [4, 4] | [4]
+         [5, 5] | [5]
+         [6, 6] | [6]
+         [8, 8] | [8]
+         [null, 5] | [5]
+         [null, 7] | [7]
+         */
+        int count = 0;
+        while (starTreeDocumentIterator.hasNext()) {
+            count++;
+            StarTreeDocument starTreeDocument = starTreeDocumentIterator.next();
+            if (starTreeDocument.dimensions[0] == null) {
+                assertTrue(List.of(5L, 7L).contains(starTreeDocument.dimensions[1]));
+            }
+            assertEquals(starTreeDocument.dimensions[1], starTreeDocument.metrics[0]);
+        }
+        assertEquals(10, count);
+    }
+
+    public void testMergeFlowWithEmptyFieldsInOneSegment() throws IOException {
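+        // The second segment below contributes no fields at all, so the merged result should match the first segment alone.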
+        List<Long> dimList = List.of(0L, 1L, 2L, 3L, 4L);
+        List<Integer> docsWithField = List.of(0, 1, 2, 3, 4);
+        List<Long> dimList2 = List.of(0L, 1L, 2L, 3L, 4L, 5L, -1L);
+        List<Integer> docsWithField2 = List.of(0, 1, 2, 3, 4, 5, 6);
+
+        List<Long> metricsList = List.of(0L, 1L, 2L, 3L, 4L, 5L, 6L);
+        List<Integer> metricsWithField = List.of(0, 1, 2, 3, 4, 5, 6);
+
+        StarTreeField sf = getStarTreeField(MetricStat.COUNT);
+        StarTreeValues starTreeValues = getStarTreeValues(
+            getSortedNumericMock(dimList, docsWithField),
+            getSortedNumericMock(dimList2, docsWithField2),
+            getSortedNumericMock(metricsList, metricsWithField),
+            sf,
+            "6"
+        );
+
+        StarTreeValues starTreeValues2 = getStarTreeValues(
+            DocValues.emptySortedNumeric(),
+            DocValues.emptySortedNumeric(),
+            DocValues.emptySortedNumeric(),
+            sf,
+            "0"
+        );
+        OnHeapStarTreeBuilder builder = new OnHeapStarTreeBuilder(sf, getWriteState(0), mapperService);
+        Iterator<StarTreeDocument> starTreeDocumentIterator = builder.mergeStarTrees(List.of(starTreeValues, starTreeValues2));
+        /**
+         * Asserting following dim / metrics [ dim1, dim2 / Count [ metric] ]
+         [0, 0] | [0]
+         [1, 1] | [1]
+         [2, 2] | [2]
+         [3, 3] | [3]
+         [4, 4] | [4]
+         [null, 5] | [5]
+         */
+        int count = 0;
+        while (starTreeDocumentIterator.hasNext()) {
+            count++;
+            StarTreeDocument starTreeDocument = starTreeDocumentIterator.next();
+            if (starTreeDocument.dimensions[0] == null) {
+                assertEquals(5L, (long) starTreeDocument.dimensions[1]);
+            }
+            assertEquals(starTreeDocument.dimensions[1], starTreeDocument.metrics[0]);
+        }
+        assertEquals(6, count);
+    }
+
+    public void testMergeFlowWithDuplicateDimensionValues() throws IOException {
+        List<Long> dimList1 = new ArrayList<>(500);
+        List<Integer> docsWithField1 = new ArrayList<>(500);
+        for (int i = 0; i < 100; i++) {
+            for (int j = 0; j < 5; j++) {
+                dimList1.add((long) i);
+                docsWithField1.add(i * 5 + j);
+            }
+        }
+
+        List<Long> dimList2 = new ArrayList<>(500);
+        List<Integer> docsWithField2 = new ArrayList<>(500);
+        for (int i = 0; i < 100; i++) {
+            for (int j = 0; j < 5; j++) {
+                dimList2.add((long) i);
+                docsWithField2.add(i * 5 + j);
+            }
+        }
+
+        List<Long> dimList3 = new ArrayList<>(500);
+        List<Integer> docsWithField3 = new ArrayList<>(500);
+        for (int i = 0; i < 100; i++) {
+            for (int j = 0; j < 5; j++) {
+                dimList3.add((long) i);
+                docsWithField3.add(i * 5 + j);
+            }
+        }
+
+        List<Long> dimList4 = new ArrayList<>(500);
+        List<Integer> docsWithField4 = new ArrayList<>(500);
+        for (int i = 0; i < 100; i++) {
+            for (int j = 0; j < 5; j++) {
+                dimList4.add((long) i);
+                docsWithField4.add(i * 5 + j);
+            }
+        }
+
+        List<Long> metricsList = new ArrayList<>(100);
+        List<Integer> metricsWithField = new ArrayList<>(100);
+        for (int i = 0; i < 500; i++) {
+            metricsList.add(getLongFromDouble(i * 10.0));
+            metricsWithField.add(i);
+        }
+
+        StarTreeField sf = getStarTreeField(1);
+        StarTreeValues starTreeValues = getStarTreeValues(
+            dimList1,
+            docsWithField1,
+            dimList2,
+            docsWithField2,
+            dimList3,
+            docsWithField3,
+            dimList4,
+            docsWithField4,
+            metricsList,
+            metricsWithField,
+            sf
+        );
+
+        StarTreeValues starTreeValues2 = getStarTreeValues(
+            dimList1,
+            docsWithField1,
+            dimList2,
+            docsWithField2,
+            dimList3,
+            docsWithField3,
+            dimList4,
+            docsWithField4,
+            metricsList,
+            metricsWithField,
+            sf
+        );
+        OnHeapStarTreeBuilder builder = new OnHeapStarTreeBuilder(sf, writeState, mapperService);
+        builder.build(List.of(starTreeValues, starTreeValues2));
+        List<StarTreeDocument> starTreeDocuments = builder.getStarTreeDocuments();
+        assertEquals(401, starTreeDocuments.size());
+        int count = 0;
+        double sum = 0;
+        /**
+         401 docs get generated
+         [0, 0, 0, 0] | [200.0]
+         [1, 1, 1, 1] | [700.0]
+         [2, 2, 2, 2] | [1200.0]
+         [3, 3, 3, 3] | [1700.0]
+         [4, 4, 4, 4] | [2200.0]
+         .....
+         [null, null, null, 99] | [49700.0]
+         [null, null, null, null] | [2495000.0]
+         */
+        for (StarTreeDocument starTreeDocument : starTreeDocuments) {
+            if (starTreeDocument.dimensions[3] == null) {
+                assertEquals(sum, starTreeDocument.metrics[0]);
+            } else {
+                if (starTreeDocument.dimensions[0] != null) {
+                    sum += (double) starTreeDocument.metrics[0];
+                }
+                assertEquals(starTreeDocument.dimensions[3] * 500 + 200.0, starTreeDocument.metrics[0]);
+            }
+            count++;
+        }
+        assertEquals(401, count);
+        builder.close();
+    }
+
+    public void testMergeFlowWithMaxLeafDocs() throws IOException {
+        List<Long> dimList1 = new ArrayList<>(500);
+        List<Integer> docsWithField1 = new ArrayList<>(500);
+
+        for (int i = 0; i < 20; i++) {
+            for (int j = 0; j < 20; j++) {
+                dimList1.add((long) i);
+                docsWithField1.add(i * 20 + j);
+            }
+        }
+        for (int i = 80; i < 100; i++) {
+            for (int j = 0; j < 5; j++) {
+                dimList1.add((long) i);
+                docsWithField1.add(i * 5 + j);
+            }
+        }
+        List<Long> dimList3 = new ArrayList<>(500);
+        List<Integer> docsWithField3 = new ArrayList<>(500);
+        for (int i = 0; i < 100; i++) {
+            for (int j = 0; j < 5; j++) {
+                dimList3.add((long) i);
+                docsWithField3.add(i * 5 + j);
+            }
+        }
+        List<Long> dimList2 = new ArrayList<>(500);
+        List<Integer> docsWithField2 = new ArrayList<>(500);
+        for (int i = 0; i < 10; i++) {
+            for (int j = 0; j < 50; j++) {
+                dimList2.add((long) i);
+                docsWithField2.add(i * 50 + j);
+            }
+        }
+
+        List<Long> dimList4 = new ArrayList<>(500);
+        List<Integer> docsWithField4 = new ArrayList<>(500);
+        for (int i = 0; i < 100; i++) {
+            for (int j = 0; j < 5; j++) {
+                dimList4.add((long) i);
+                docsWithField4.add(i * 5 + j);
+            }
+        }
+
+        List<Long> metricsList = new ArrayList<>(100);
+        List<Integer> metricsWithField = new ArrayList<>(100);
+        for (int i = 0; i < 500; i++) {
+            metricsList.add(getLongFromDouble(i * 10.0));
+            metricsWithField.add(i);
+        }
+
+        StarTreeField sf = getStarTreeField(3);
+        StarTreeValues starTreeValues = getStarTreeValues(
+            dimList1,
+            docsWithField1,
+            dimList2,
+            docsWithField2,
+            dimList3,
+            docsWithField3,
+            dimList4,
+            docsWithField4,
+            metricsList,
+            metricsWithField,
+            sf
+        );
+
+        StarTreeValues starTreeValues2 = getStarTreeValues(
+            dimList1,
+            docsWithField1,
+            dimList2,
+            docsWithField2,
+            dimList3,
+            docsWithField3,
+            dimList4,
+            docsWithField4,
+            metricsList,
+            metricsWithField,
+            sf
+        );
+
+        OnHeapStarTreeBuilder builder = new OnHeapStarTreeBuilder(sf, writeState, mapperService);
+        builder.build(List.of(starTreeValues, starTreeValues2));
+        List<StarTreeDocument> starTreeDocuments = builder.getStarTreeDocuments();
+        /**
+         635 docs get generated
+         [0, 0, 0, 0] | [200.0]
+         [1, 1, 1, 1] | [700.0]
+         [2, 2, 2, 2] | [1200.0]
+         [3, 3, 3, 3] | [1700.0]
+         [4, 4, 4, 4] | [2200.0]
+         .....
+         [null, null, null, 99] | [49700.0]
+         .....
+         [null, null, null, null] | [2495000.0]
+         */
+        assertEquals(635, starTreeDocuments.size());
+        builder.close();
+    }
+
+    private StarTreeValues getStarTreeValues(
+        List<Long> dimList1,
+        List<Integer> docsWithField1,
+        List<Long> dimList2,
+        List<Integer> docsWithField2,
+        List<Long> dimList3,
+        List<Integer> docsWithField3,
+        List<Long> dimList4,
+        List<Integer> docsWithField4,
+        List<Long> metricsList,
+        List<Integer> metricsWithField,
+        StarTreeField sf
+    ) {
+        SortedNumericDocValues d1sndv = getSortedNumericMock(dimList1, docsWithField1);
+        SortedNumericDocValues d2sndv = getSortedNumericMock(dimList2, docsWithField2);
+        SortedNumericDocValues d3sndv = getSortedNumericMock(dimList3, docsWithField3);
+        SortedNumericDocValues d4sndv = getSortedNumericMock(dimList4, docsWithField4);
+        SortedNumericDocValues m1sndv = getSortedNumericMock(metricsList, metricsWithField);
+        Map<String, DocIdSetIterator> dimDocIdSetIterators = Map.of("field1", d1sndv, "field3", d2sndv, "field5", d3sndv, "field8", d4sndv);
+        Map<String, DocIdSetIterator> metricDocIdSetIterators = Map.of("field2", m1sndv);
+        StarTreeValues starTreeValues = new StarTreeValues(sf, null, dimDocIdSetIterators, metricDocIdSetIterators, getAttributes(500));
+        return starTreeValues;
+    }
+
+    public void testMergeFlowWithDuplicateDimensionValueWithMaxLeafDocs() throws IOException {
+        List<Long> dimList1 = new ArrayList<>(500);
+        List<Integer> docsWithField1 = new ArrayList<>(500);
+
+        for (int i = 0; i < 20; i++) {
+            for (int j = 0; j < 20; j++) {
+                dimList1.add((long) i);
+                docsWithField1.add(i * 20 + j);
+            }
+        }
+        for (int i = 80; i < 100; i++) {
+            for (int j = 0; j < 5; j++) {
+                dimList1.add((long) i);
+                docsWithField1.add(i * 5 + j);
+            }
+        }
+        List<Long> dimList3 = new ArrayList<>(500);
+        List<Integer> docsWithField3 = new ArrayList<>(500);
+        for (int i = 0; i < 100; i++) {
+            for (int j = 0; j < 5; j++) {
+                dimList3.add((long) i);
+                docsWithField3.add(i * 5 + j);
+            }
+        }
+        List<Long> dimList2 = new ArrayList<>(500);
+        List<Integer> docsWithField2 = new ArrayList<>(500);
+        for (int i = 0; i < 500; i++) {
+            dimList2.add((long) 1);
+            docsWithField2.add(i);
+        }
+
+        List<Long> dimList4 = new ArrayList<>(500);
+        List<Integer> docsWithField4 = new ArrayList<>(500);
+        for (int i = 0; i < 100; i++) {
+            for (int j = 0; j < 5; j++) {
+                dimList4.add((long) i);
+                docsWithField4.add(i * 5 + j);
+            }
+        }
+
+        List<Long> metricsList = new ArrayList<>(100);
+        List<Integer> metricsWithField = new ArrayList<>(100);
+        for (int i = 0; i < 500; i++) {
+            metricsList.add(getLongFromDouble(i * 10.0));
+            metricsWithField.add(i);
+        }
+
+        StarTreeField sf = getStarTreeField(3);
+        StarTreeValues starTreeValues = getStarTreeValues(
+            dimList1,
+            docsWithField1,
+            dimList2,
+            docsWithField2,
+            dimList3,
+            docsWithField3,
+            dimList4,
+            docsWithField4,
+            metricsList,
+            metricsWithField,
+            sf
+        );
+
+        StarTreeValues starTreeValues2 = getStarTreeValues(
+            dimList1,
+            docsWithField1,
+            dimList2,
+            docsWithField2,
+            dimList3,
+            docsWithField3,
+            dimList4,
+            docsWithField4,
+            metricsList,
+            metricsWithField,
+            sf
+        );
+        OnHeapStarTreeBuilder builder = new OnHeapStarTreeBuilder(sf, writeState, mapperService);
+        builder.build(List.of(starTreeValues, starTreeValues2));
+        List<StarTreeDocument> starTreeDocuments = builder.getStarTreeDocuments();
+        assertEquals(401, starTreeDocuments.size());
+        builder.close();
+    }
+
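+    // Packs the double's IEEE-754 bit pattern into a long, since metric values travel through doc values as longs.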
+    public static long getLongFromDouble(double value) {
+        return Double.doubleToLongBits(value);
+    }
+
+    public void testMergeFlowWithMaxLeafDocsAndStarTreeNodesAssertion() throws IOException {
+        List<Long> dimList1 = new ArrayList<>(500);
+        List<Integer> docsWithField1 = new ArrayList<>(500);
+        Map<Integer, Map<Long, Double>> expectedDimToValueMap = new HashMap<>();
+        Map<Long, Double> dimValueMap = new HashMap<>();
+        for (int i = 0; i < 20; i++) {
+            for (int j = 0; j < 20; j++) {
+                dimList1.add((long) i);
+                docsWithField1.add(i * 20 + j);
+            }
+            // metric = no of docs * 10.0
+            dimValueMap.put((long) i, 200.0);
+        }
+        for (int i = 80; i < 100; i++) {
+            for (int j = 0; j < 5; j++) {
+                dimList1.add((long) i);
+                docsWithField1.add(i * 5 + j);
+            }
+            // metric = no of docs * 10.0
+            dimValueMap.put((long) i, 50.0);
+        }
+        dimValueMap.put(Long.MAX_VALUE, 5000.0);
+        expectedDimToValueMap.put(0, dimValueMap);
+        dimValueMap = new HashMap<>();
+        List<Long> dimList3 = new ArrayList<>(500);
+        List<Integer> docsWithField3 = new ArrayList<>(500);
+        for (int i = 0; i < 500; i++) {
+            dimList3.add((long) 1);
+            docsWithField3.add(i);
+            dimValueMap.put((long) i, 10.0);
+        }
+        dimValueMap.put(Long.MAX_VALUE, 5000.0);
+        expectedDimToValueMap.put(2, dimValueMap);
+        dimValueMap = new HashMap<>();
+        List<Long> dimList2 = new ArrayList<>(500);
+        List<Integer> docsWithField2 = new ArrayList<>(500);
+        for (int i = 0; i < 500; i++) {
+            dimList2.add((long) i);
+            docsWithField2.add(i);
+            dimValueMap.put((long) i, 10.0);
+        }
+        dimValueMap.put(Long.MAX_VALUE, 200.0);
+        expectedDimToValueMap.put(1, dimValueMap);
+        dimValueMap = new HashMap<>();
+        List<Long> dimList4 = new ArrayList<>(500);
+        List<Integer> docsWithField4 = new ArrayList<>(500);
+        for (int i = 0; i < 500; i++) {
+            dimList4.add((long) 1);
+            docsWithField4.add(i);
+            dimValueMap.put((long) i, 10.0);
+        }
+        dimValueMap.put(Long.MAX_VALUE, 5000.0);
+        expectedDimToValueMap.put(3, dimValueMap);
+        List<Long> metricsList = new ArrayList<>(100);
+        List<Integer> metricsWithField = new ArrayList<>(100);
+        for (int i = 0; i < 500; i++) {
+            metricsList.add(getLongFromDouble(10.0));
+            metricsWithField.add(i);
+        }
+
+        StarTreeField sf = getStarTreeField(10);
+        StarTreeValues starTreeValues = getStarTreeValues(
+            dimList1,
+            docsWithField1,
+            dimList2,
+            docsWithField2,
+            dimList3,
+            docsWithField3,
+            dimList4,
+            docsWithField4,
+            metricsList,
+            metricsWithField,
+            sf
+        );
+
+        StarTreeValues starTreeValues2 = getStarTreeValues(
+            dimList1,
+            docsWithField1,
+            dimList2,
+            docsWithField2,
+            dimList3,
+            docsWithField3,
+            dimList4,
+            docsWithField4,
+            metricsList,
+            metricsWithField,
+            sf
+        );
+        OnHeapStarTreeBuilder builder = new OnHeapStarTreeBuilder(sf, writeState, mapperService);
+        builder.build(List.of(starTreeValues, starTreeValues2));
+        List<StarTreeDocument> starTreeDocuments = builder.getStarTreeDocuments();
+        Map<Integer, Map<Long, Integer>> dimValueToDocIdMap = new HashMap<>();
+        traverseStarTree(builder.rootNode, dimValueToDocIdMap, true);
+        for (Map.Entry<Integer, Map<Long, Integer>> entry : dimValueToDocIdMap.entrySet()) {
+            int dimId = entry.getKey();
+            if (dimId == -1) continue;
+            Map<Long, Double> map = expectedDimToValueMap.get(dimId);
+            for (Map.Entry<Long, Integer> dimValueToDocIdEntry : entry.getValue().entrySet()) {
+                long dimValue = dimValueToDocIdEntry.getKey();
+                int docId = dimValueToDocIdEntry.getValue();
+                assertEquals(map.get(dimValue) * 2, starTreeDocuments.get(docId).metrics[0]);
+            }
+        }
+        assertEquals(1041, starTreeDocuments.size());
+        builder.close();
+    }
+
+    private static StarTreeField getStarTreeField(int maxLeafDocs) {
+        Dimension d1 = new NumericDimension("field1");
+        Dimension d2 = new NumericDimension("field3");
+        Dimension d3 = new NumericDimension("field5");
+        Dimension d4 = new NumericDimension("field8");
+        List<Dimension> dims = List.of(d1, d2, d3, d4);
+        Metric m1 = new Metric("field2", List.of(MetricStat.SUM));
+        List<Metric> metrics = List.of(m1);
+        StarTreeFieldConfiguration c = new StarTreeFieldConfiguration(
+            maxLeafDocs,
+            new HashSet<>(),
+            StarTreeFieldConfiguration.StarTreeBuildMode.OFF_HEAP
+        );
+        StarTreeField sf = new StarTreeField("sf", dims, metrics, c);
+        return sf;
+    }
+
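+    // BFS over the built tree, mapping each dimension id to (dimension value -> aggregated doc id); star nodes are keyed by Long.MAX_VALUE.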
StarTreeField("sf", dims, metrics, c); + return sf; + } + + private void traverseStarTree(TreeNode root, Map> dimValueToDocIdMap, boolean traverStarNodes) { + TreeNode starTree = root; + // Use BFS to traverse the star tree + Queue queue = new ArrayDeque<>(); + queue.add(starTree); + int currentDimensionId = -1; + TreeNode starTreeNode; + List docIds = new ArrayList<>(); + while ((starTreeNode = queue.poll()) != null) { + int dimensionId = starTreeNode.dimensionId; + if (dimensionId > currentDimensionId) { + currentDimensionId = dimensionId; + } + + // store aggregated document of the node + int docId = starTreeNode.aggregatedDocId; + Map map = dimValueToDocIdMap.getOrDefault(dimensionId, new HashMap<>()); + if (starTreeNode.isStarNode) { + map.put(Long.MAX_VALUE, docId); + } else { + map.put(starTreeNode.dimensionValue, docId); + } + dimValueToDocIdMap.put(dimensionId, map); + + if (starTreeNode.children != null && (!traverStarNodes || starTreeNode.isStarNode)) { + Iterator childrenIterator = starTreeNode.children.values().iterator(); + while (childrenIterator.hasNext()) { + TreeNode childNode = childrenIterator.next(); + queue.add(childNode); + } + } + } + } + + public void testMergeFlow() throws IOException { + List dimList1 = new ArrayList<>(1000); + List docsWithField1 = new ArrayList<>(1000); + for (int i = 0; i < 1000; i++) { + dimList1.add((long) i); + docsWithField1.add(i); + } + + List dimList2 = new ArrayList<>(1000); + List docsWithField2 = new ArrayList<>(1000); + for (int i = 0; i < 1000; i++) { + dimList2.add((long) i); + docsWithField2.add(i); + } + + List dimList3 = new ArrayList<>(1000); + List docsWithField3 = new ArrayList<>(1000); + for (int i = 0; i < 1000; i++) { + dimList3.add((long) i); + docsWithField3.add(i); + } + + List dimList4 = new ArrayList<>(1000); + List docsWithField4 = new ArrayList<>(1000); + for (int i = 0; i < 1000; i++) { + dimList4.add((long) i); + docsWithField4.add(i); + } + + List dimList5 = new ArrayList<>(1000); + List docsWithField5 = new ArrayList<>(1000); + for (int i = 0; i < 1000; i++) { + dimList5.add((long) i); + docsWithField5.add(i); + } + + List metricsList = new ArrayList<>(1000); + List metricsWithField = new ArrayList<>(1000); + for (int i = 0; i < 1000; i++) { + metricsList.add(getLongFromDouble(i * 10.0)); + metricsWithField.add(i); + } + + Dimension d1 = new NumericDimension("field1"); + Dimension d2 = new NumericDimension("field3"); + Dimension d3 = new NumericDimension("field5"); + Dimension d4 = new NumericDimension("field8"); + // Dimension d5 = new NumericDimension("field5"); + Metric m1 = new Metric("field2", List.of(MetricStat.SUM)); + List dims = List.of(d1, d2, d3, d4); + List metrics = List.of(m1); + StarTreeFieldConfiguration c = new StarTreeFieldConfiguration( + 1, + new HashSet<>(), + StarTreeFieldConfiguration.StarTreeBuildMode.ON_HEAP + ); + StarTreeField sf = new StarTreeField("sf", dims, metrics, c); + SortedNumericDocValues d1sndv = getSortedNumericMock(dimList1, docsWithField1); + SortedNumericDocValues d2sndv = getSortedNumericMock(dimList2, docsWithField2); + SortedNumericDocValues d3sndv = getSortedNumericMock(dimList3, docsWithField3); + SortedNumericDocValues d4sndv = getSortedNumericMock(dimList4, docsWithField4); + SortedNumericDocValues m1sndv = getSortedNumericMock(metricsList, metricsWithField); + Map dimDocIdSetIterators = Map.of("field1", d1sndv, "field3", d2sndv, "field5", d3sndv, "field8", d4sndv); + Map metricDocIdSetIterators = Map.of("field2", m1sndv); + StarTreeValues starTreeValues = new 
+        StarTreeValues starTreeValues = new StarTreeValues(sf, null, dimDocIdSetIterators, metricDocIdSetIterators, getAttributes(1000));
+
+        SortedNumericDocValues f2d1sndv = getSortedNumericMock(dimList1, docsWithField1);
+        SortedNumericDocValues f2d2sndv = getSortedNumericMock(dimList2, docsWithField2);
+        SortedNumericDocValues f2d3sndv = getSortedNumericMock(dimList3, docsWithField3);
+        SortedNumericDocValues f2d4sndv = getSortedNumericMock(dimList4, docsWithField4);
+        SortedNumericDocValues f2m1sndv = getSortedNumericMock(metricsList, metricsWithField);
+        Map<String, DocIdSetIterator> f2dimDocIdSetIterators = Map.of(
+            "field1",
+            f2d1sndv,
+            "field3",
+            f2d2sndv,
+            "field5",
+            f2d3sndv,
+            "field8",
+            f2d4sndv
+        );
+        Map<String, DocIdSetIterator> f2metricDocIdSetIterators = Map.of("field2", f2m1sndv);
+        StarTreeValues starTreeValues2 = new StarTreeValues(
+            sf,
+            null,
+            f2dimDocIdSetIterators,
+            f2metricDocIdSetIterators,
+            getAttributes(1000)
+        );
+
+        BaseStarTreeBuilder builder = getStarTreeBuilder(sf, writeState, mapperService);
+        Iterator<StarTreeDocument> starTreeDocumentIterator = builder.mergeStarTrees(List.of(starTreeValues, starTreeValues2));
+        /**
+         [0, 0, 0, 0] | [0.0]
+         [1, 1, 1, 1] | [20.0]
+         [2, 2, 2, 2] | [40.0]
+         [3, 3, 3, 3] | [60.0]
+         [4, 4, 4, 4] | [80.0]
+         [5, 5, 5, 5] | [100.0]
+         ...
+         [999, 999, 999, 999] | [19980.0]
+         */
+        while (starTreeDocumentIterator.hasNext()) {
+            StarTreeDocument starTreeDocument = starTreeDocumentIterator.next();
+            assertEquals(starTreeDocument.dimensions[0] * 20.0, starTreeDocument.metrics[0]);
+        }
+        builder.close();
+    }
+
+    Map<String, String> getAttributes(int numSegmentDocs) {
+        return Map.of(String.valueOf(NUM_SEGMENT_DOCS), String.valueOf(numSegmentDocs));
+    }
+
+    private static StarTreeField getStarTreeField(MetricStat count) {
+        Dimension d1 = new NumericDimension("field1");
+        Dimension d2 = new NumericDimension("field3");
+        Metric m1 = new Metric("field2", List.of(count));
+        List<Dimension> dims = List.of(d1, d2);
+        List<Metric> metrics = List.of(m1);
+        StarTreeFieldConfiguration c = new StarTreeFieldConfiguration(
+            1000,
+            new HashSet<>(),
+            StarTreeFieldConfiguration.StarTreeBuildMode.ON_HEAP
+        );
+        return new StarTreeField("sf", dims, metrics, c);
+    }
+
+    private Long getLongFromDouble(Double num) {
+        if (num == null) {
+            return null;
+        }
+        return NumericUtils.doubleToSortableLong(num);
+    }
+
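+    // Minimal SortedNumericDocValues stub: serves one value per doc in docsWithField order; the remaining iterator methods are unused defaults.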
+    SortedNumericDocValues getSortedNumericMock(List<Long> dimList, List<Integer> docsWithField) {
+        return new SortedNumericDocValues() {
+            int index = -1;
+
+            @Override
+            public long nextValue() {
+                return dimList.get(index);
+            }
+
+            @Override
+            public int docValueCount() {
+                return 0;
+            }
+
+            @Override
+            public boolean advanceExact(int target) {
+                return false;
+            }
+
+            @Override
+            public int docID() {
+                return index;
+            }
+
+            @Override
+            public int nextDoc() {
+                if (index == docsWithField.size() - 1) {
+                    return NO_MORE_DOCS;
+                }
+                index++;
+                return docsWithField.get(index);
+            }
+
+            @Override
+            public int advance(int target) {
+                return 0;
+            }
+
+            @Override
+            public long cost() {
+                return 0;
+            }
+        };
+    }
+
+    @Override
+    public void tearDown() throws Exception {
+        super.tearDown();
+        if (builder != null) {
+            builder.close();
+        }
+        directory.close();
+    }
+}
diff --git a/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/builder/BaseStarTreeBuilderTests.java b/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/builder/BaseStarTreeBuilderTests.java
index b78130e72aba1..51ebc02ea8243 100644
--- a/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/builder/BaseStarTreeBuilderTests.java
+++ b/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/builder/BaseStarTreeBuilderTests.java
@@ -22,6 +22,7 @@
 import org.apache.lucene.util.InfoStream;
 import org.apache.lucene.util.Version;
 import org.opensearch.common.settings.Settings;
+import org.opensearch.index.codec.composite.datacube.startree.StarTreeValues;
 import org.opensearch.index.compositeindex.datacube.Dimension;
 import org.opensearch.index.compositeindex.datacube.Metric;
 import org.opensearch.index.compositeindex.datacube.MetricStat;
@@ -30,6 +31,7 @@
 import org.opensearch.index.compositeindex.datacube.startree.StarTreeField;
 import org.opensearch.index.compositeindex.datacube.startree.StarTreeFieldConfiguration;
 import org.opensearch.index.compositeindex.datacube.startree.aggregators.MetricAggregatorInfo;
+import org.opensearch.index.compositeindex.datacube.startree.utils.SequentialDocValuesIterator;
 import org.opensearch.index.fielddata.IndexNumericFieldData;
 import org.opensearch.index.mapper.ContentPath;
 import org.opensearch.index.mapper.DocumentMapper;
@@ -155,7 +157,10 @@ public static void setup() throws IOException {
         );
         when(documentMapper.mappers()).thenReturn(fieldMappers);
 
-        builder = new BaseStarTreeBuilder(starTreeField, fieldProducerMap, state, mapperService) {
+        builder = new BaseStarTreeBuilder(starTreeField, state, mapperService) {
+            @Override
+            public void build(List<StarTreeValues> starTreeValuesSubs) throws IOException {}
+
             @Override
             public void appendStarTreeDocument(StarTreeDocument starTreeDocument) throws IOException {}
 
@@ -171,11 +176,14 @@ public List<StarTreeDocument> getStarTreeDocuments() {
 
             @Override
             public Long getDimensionValue(int docId, int dimensionId) throws IOException {
-                return 0L;
+                return 0l;
             }
 
             @Override
-            public Iterator<StarTreeDocument> sortAndAggregateStarTreeDocuments() throws IOException {
+            public Iterator<StarTreeDocument> sortAndAggregateSegmentDocuments(
+                SequentialDocValuesIterator[] dimensionReaders,
+                List<SequentialDocValuesIterator> metricReaders
+            ) throws IOException {
                 return null;
             }
 
@@ -184,14 +192,19 @@ public Iterator<StarTreeDocument> generateStarTreeDocumentsForStarNode(int start
                 throws IOException {
                 return null;
             }
+
+            @Override
+            Iterator<StarTreeDocument> mergeStarTrees(List<StarTreeValues> starTreeValues) throws IOException {
+                return null;
+            }
         };
     }
 
     public void test_generateMetricAggregatorInfos() throws IOException {
-        List<MetricAggregatorInfo> metricAggregatorInfos = builder.generateMetricAggregatorInfos(mapperService, state);
+        List<MetricAggregatorInfo> metricAggregatorInfos = builder.generateMetricAggregatorInfos(mapperService);
         List<MetricAggregatorInfo> expectedMetricAggregatorInfos = List.of(
-            new MetricAggregatorInfo(MetricStat.SUM, "field2", starTreeField.getName(), IndexNumericFieldData.NumericType.DOUBLE, null),
-            new MetricAggregatorInfo(MetricStat.SUM, "field4", starTreeField.getName(), IndexNumericFieldData.NumericType.DOUBLE, null)
+            new MetricAggregatorInfo(MetricStat.SUM, "field2", starTreeField.getName(), IndexNumericFieldData.NumericType.DOUBLE),
+            new MetricAggregatorInfo(MetricStat.SUM, "field4", starTreeField.getName(), IndexNumericFieldData.NumericType.DOUBLE)
         );
         assertEquals(metricAggregatorInfos, expectedMetricAggregatorInfos);
     }
diff --git a/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/builder/OnHeapStarTreeBuilderTests.java b/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/builder/OnHeapStarTreeBuilderTests.java
index 4e107e78d27be..aed08b7727be7 100644
--- a/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/builder/OnHeapStarTreeBuilderTests.java
+++ b/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/builder/OnHeapStarTreeBuilderTests.java
@@ -8,699 +8,17 @@
 package org.opensearch.index.compositeindex.datacube.startree.builder;
 
-import org.apache.lucene.codecs.DocValuesProducer;
-import org.apache.lucene.codecs.lucene99.Lucene99Codec;
-import org.apache.lucene.index.DocValuesType;
-import org.apache.lucene.index.FieldInfo;
-import org.apache.lucene.index.FieldInfos;
-import org.apache.lucene.index.IndexOptions;
-import org.apache.lucene.index.SegmentInfo;
 import org.apache.lucene.index.SegmentWriteState;
-import org.apache.lucene.index.VectorEncoding;
-import org.apache.lucene.index.VectorSimilarityFunction;
-import org.apache.lucene.sandbox.document.HalfFloatPoint;
-import org.apache.lucene.store.Directory;
-import org.apache.lucene.util.InfoStream;
-import org.apache.lucene.util.NumericUtils;
-import org.apache.lucene.util.Version;
-import org.opensearch.common.settings.Settings;
-import org.opensearch.index.compositeindex.datacube.Dimension;
-import org.opensearch.index.compositeindex.datacube.Metric;
-import org.opensearch.index.compositeindex.datacube.MetricStat;
-import org.opensearch.index.compositeindex.datacube.NumericDimension;
-import org.opensearch.index.compositeindex.datacube.startree.StarTreeDocument;
 import org.opensearch.index.compositeindex.datacube.startree.StarTreeField;
-import org.opensearch.index.compositeindex.datacube.startree.StarTreeFieldConfiguration;
-import org.opensearch.index.mapper.ContentPath;
-import org.opensearch.index.mapper.DocumentMapper;
-import org.opensearch.index.mapper.Mapper;
 import org.opensearch.index.mapper.MapperService;
-import org.opensearch.index.mapper.MappingLookup;
-import org.opensearch.index.mapper.NumberFieldMapper;
-import org.opensearch.test.OpenSearchTestCase;
-import org.junit.Before;
-
-import java.io.IOException;
-import java.nio.charset.StandardCharsets;
-import java.util.Collections;
-import java.util.HashMap;
-import java.util.Iterator;
-import java.util.List;
-import java.util.Map;
-import java.util.Set;
-import java.util.UUID;
-
-import static org.mockito.Mockito.mock;
-import static org.mockito.Mockito.when;
-
-public class OnHeapStarTreeBuilderTests extends OpenSearchTestCase {
-
-    private OnHeapStarTreeBuilder builder;
-    private MapperService mapperService;
-    private List<Dimension> dimensionsOrder;
-    private List<String> fields = List.of();
-    private List<Metric> metrics;
-    private Directory directory;
-    private FieldInfo[] fieldsInfo;
-    private StarTreeField compositeField;
-    private Map<String, DocValuesProducer> fieldProducerMap;
-    private SegmentWriteState writeState;
-
-    @Before
-    public void setup() throws IOException {
-        fields = List.of("field1", "field2", "field3", "field4", "field5", "field6", "field7", "field8", "field9", "field10");
-
-        dimensionsOrder = List.of(
-            new NumericDimension("field1"),
-            new NumericDimension("field3"),
-            new NumericDimension("field5"),
-            new NumericDimension("field8")
-        );
-        metrics = List.of(
-            new Metric("field2", List.of(MetricStat.SUM)),
-            new Metric("field4", List.of(MetricStat.SUM)),
-            new Metric("field6", List.of(MetricStat.COUNT))
-        );
-
-        DocValuesProducer docValuesProducer = mock(DocValuesProducer.class);
-
-        compositeField = new StarTreeField(
-            "test",
-            dimensionsOrder,
-            metrics,
-            new StarTreeFieldConfiguration(1, Set.of("field8"), StarTreeFieldConfiguration.StarTreeBuildMode.ON_HEAP)
-        );
-        directory = newFSDirectory(createTempDir());
-        SegmentInfo segmentInfo = new SegmentInfo(
-            directory,
-            Version.LATEST,
-            Version.LUCENE_9_11_0,
"test_segment", - 5, - false, - false, - new Lucene99Codec(), - new HashMap<>(), - UUID.randomUUID().toString().substring(0, 16).getBytes(StandardCharsets.UTF_8), - new HashMap<>(), - null - ); - - fieldsInfo = new FieldInfo[fields.size()]; - fieldProducerMap = new HashMap<>(); - for (int i = 0; i < fieldsInfo.length; i++) { - fieldsInfo[i] = new FieldInfo( - fields.get(i), - i, - false, - false, - true, - IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS, - DocValuesType.SORTED_NUMERIC, - -1, - Collections.emptyMap(), - 0, - 0, - 0, - 0, - VectorEncoding.FLOAT32, - VectorSimilarityFunction.EUCLIDEAN, - false, - false - ); - fieldProducerMap.put(fields.get(i), docValuesProducer); - } - FieldInfos fieldInfos = new FieldInfos(fieldsInfo); - writeState = new SegmentWriteState(InfoStream.getDefault(), segmentInfo.dir, segmentInfo, fieldInfos, null, newIOContext(random())); - - mapperService = mock(MapperService.class); - DocumentMapper documentMapper = mock(DocumentMapper.class); - when(mapperService.documentMapper()).thenReturn(documentMapper); - Settings settings = Settings.builder().put(settings(org.opensearch.Version.CURRENT).build()).build(); - NumberFieldMapper numberFieldMapper1 = new NumberFieldMapper.Builder("field2", NumberFieldMapper.NumberType.DOUBLE, false, true) - .build(new Mapper.BuilderContext(settings, new ContentPath())); - NumberFieldMapper numberFieldMapper2 = new NumberFieldMapper.Builder("field4", NumberFieldMapper.NumberType.DOUBLE, false, true) - .build(new Mapper.BuilderContext(settings, new ContentPath())); - NumberFieldMapper numberFieldMapper3 = new NumberFieldMapper.Builder("field6", NumberFieldMapper.NumberType.DOUBLE, false, true) - .build(new Mapper.BuilderContext(settings, new ContentPath())); - MappingLookup fieldMappers = new MappingLookup( - Set.of(numberFieldMapper1, numberFieldMapper2, numberFieldMapper3), - Collections.emptyList(), - Collections.emptyList(), - 0, - null - ); - when(documentMapper.mappers()).thenReturn(fieldMappers); - builder = new OnHeapStarTreeBuilder(compositeField, fieldProducerMap, writeState, mapperService); - } - - public void test_sortAndAggregateStarTreeDocuments() throws IOException { - - int noOfStarTreeDocuments = 5; - StarTreeDocument[] starTreeDocuments = new StarTreeDocument[noOfStarTreeDocuments]; - - starTreeDocuments[0] = new StarTreeDocument(new Long[] { 2L, 4L, 3L, 4L }, new Double[] { 12.0, 10.0, randomDouble() }); - starTreeDocuments[1] = new StarTreeDocument(new Long[] { 3L, 4L, 2L, 1L }, new Double[] { 10.0, 6.0, randomDouble() }); - starTreeDocuments[2] = new StarTreeDocument(new Long[] { 3L, 4L, 2L, 1L }, new Double[] { 14.0, 12.0, randomDouble() }); - starTreeDocuments[3] = new StarTreeDocument(new Long[] { 2L, 4L, 3L, 4L }, new Double[] { 9.0, 4.0, randomDouble() }); - starTreeDocuments[4] = new StarTreeDocument(new Long[] { 3L, 4L, 2L, 1L }, new Double[] { 11.0, 16.0, randomDouble() }); - - List inorderStarTreeDocuments = List.of( - new StarTreeDocument(new Long[] { 2L, 4L, 3L, 4L }, new Object[] { 21.0, 14.0, 2L }), - new StarTreeDocument(new Long[] { 3L, 4L, 2L, 1L }, new Object[] { 35.0, 34.0, 3L }) - ); - Iterator expectedStarTreeDocumentIterator = inorderStarTreeDocuments.iterator(); - - StarTreeDocument[] segmentStarTreeDocuments = new StarTreeDocument[noOfStarTreeDocuments]; - for (int i = 0; i < noOfStarTreeDocuments; i++) { - long metric1 = NumericUtils.doubleToSortableLong((Double) starTreeDocuments[i].metrics[0]); - long metric2 = NumericUtils.doubleToSortableLong((Double) 
-            long metric3 = NumericUtils.doubleToSortableLong((Double) starTreeDocuments[i].metrics[2]);
-            segmentStarTreeDocuments[i] = new StarTreeDocument(starTreeDocuments[i].dimensions, new Long[] { metric1, metric2, metric3 });
-        }
-
-        Iterator<StarTreeDocument> segmentStarTreeDocumentIterator = builder.sortAndAggregateStarTreeDocuments(segmentStarTreeDocuments);
-        int numOfAggregatedDocuments = 0;
-        while (segmentStarTreeDocumentIterator.hasNext() && expectedStarTreeDocumentIterator.hasNext()) {
-            StarTreeDocument resultStarTreeDocument = segmentStarTreeDocumentIterator.next();
-            StarTreeDocument expectedStarTreeDocument = expectedStarTreeDocumentIterator.next();
-
-            assertEquals(expectedStarTreeDocument.dimensions[0], resultStarTreeDocument.dimensions[0]);
-            assertEquals(expectedStarTreeDocument.dimensions[1], resultStarTreeDocument.dimensions[1]);
-            assertEquals(expectedStarTreeDocument.dimensions[2], resultStarTreeDocument.dimensions[2]);
-            assertEquals(expectedStarTreeDocument.dimensions[3], resultStarTreeDocument.dimensions[3]);
-            assertEquals(expectedStarTreeDocument.metrics[0], resultStarTreeDocument.metrics[0]);
-            assertEquals(expectedStarTreeDocument.metrics[1], resultStarTreeDocument.metrics[1]);
-            assertEquals(expectedStarTreeDocument.metrics[2], resultStarTreeDocument.metrics[2]);
-
-            numOfAggregatedDocuments++;
-        }
-
-        assertEquals(inorderStarTreeDocuments.size(), numOfAggregatedDocuments);
-
-    }
-
-    public void test_sortAndAggregateStarTreeDocuments_nullMetric() throws IOException {
-
-        int noOfStarTreeDocuments = 5;
-        StarTreeDocument[] starTreeDocuments = new StarTreeDocument[noOfStarTreeDocuments];
-
-        starTreeDocuments[0] = new StarTreeDocument(new Long[] { 2L, 4L, 3L, 4L }, new Double[] { 12.0, 10.0, randomDouble() });
-        starTreeDocuments[1] = new StarTreeDocument(new Long[] { 3L, 4L, 2L, 1L }, new Double[] { 10.0, 6.0, randomDouble() });
-        starTreeDocuments[2] = new StarTreeDocument(new Long[] { 3L, 4L, 2L, 1L }, new Double[] { 14.0, 12.0, randomDouble() });
-        starTreeDocuments[3] = new StarTreeDocument(new Long[] { 2L, 4L, 3L, 4L }, new Double[] { 9.0, 4.0, randomDouble() });
-        starTreeDocuments[4] = new StarTreeDocument(new Long[] { 3L, 4L, 2L, 1L }, new Double[] { 11.0, null, randomDouble() });
-        StarTreeDocument expectedStarTreeDocument = new StarTreeDocument(new Long[] { 2L, 4L, 3L, 4L }, new Double[] { 21.0, 14.0, 2.0 });
-
-        StarTreeDocument[] segmentStarTreeDocuments = new StarTreeDocument[noOfStarTreeDocuments];
-        for (int i = 0; i < noOfStarTreeDocuments; i++) {
-            Long metric1 = NumericUtils.doubleToSortableLong((Double) starTreeDocuments[i].metrics[0]);
-            Long metric2 = starTreeDocuments[i].metrics[1] != null
-                ? NumericUtils.doubleToSortableLong((Double) starTreeDocuments[i].metrics[1])
-                : null;
-            Long metric3 = NumericUtils.doubleToSortableLong((Double) starTreeDocuments[i].metrics[2]);
-            segmentStarTreeDocuments[i] = new StarTreeDocument(starTreeDocuments[i].dimensions, new Object[] { metric1, metric2, metric3 });
-        }
-
-        Iterator<StarTreeDocument> segmentStarTreeDocumentIterator = builder.sortAndAggregateStarTreeDocuments(segmentStarTreeDocuments);
-
-        StarTreeDocument resultStarTreeDocument = segmentStarTreeDocumentIterator.next();
-        assertEquals(expectedStarTreeDocument.dimensions[0], resultStarTreeDocument.dimensions[0]);
-        assertEquals(expectedStarTreeDocument.dimensions[1], resultStarTreeDocument.dimensions[1]);
-        assertEquals(expectedStarTreeDocument.dimensions[2], resultStarTreeDocument.dimensions[2]);
-        assertEquals(expectedStarTreeDocument.dimensions[3], resultStarTreeDocument.dimensions[3]);
-        assertEquals(expectedStarTreeDocument.metrics[0], resultStarTreeDocument.metrics[0]);
-        assertEquals(expectedStarTreeDocument.metrics[1], resultStarTreeDocument.metrics[1]);
-
-        assertThrows(
-            "Null metric should have resulted in IllegalStateException",
-            IllegalStateException.class,
-            segmentStarTreeDocumentIterator::next
-        );
-
-    }
-
-    public void test_sortAndAggregateStarTreeDocument_longMaxAndLongMinDimensions() throws IOException {
-
-        int noOfStarTreeDocuments = 5;
-        StarTreeDocument[] starTreeDocuments = new StarTreeDocument[noOfStarTreeDocuments];
-
-        starTreeDocuments[0] = new StarTreeDocument(new Long[] { Long.MIN_VALUE, 4L, 3L, 4L }, new Double[] { 12.0, 10.0, randomDouble() });
-        starTreeDocuments[1] = new StarTreeDocument(new Long[] { 3L, 4L, 2L, Long.MAX_VALUE }, new Double[] { 10.0, 6.0, randomDouble() });
-        starTreeDocuments[2] = new StarTreeDocument(new Long[] { 3L, 4L, 2L, Long.MAX_VALUE }, new Double[] { 14.0, 12.0, randomDouble() });
-        starTreeDocuments[3] = new StarTreeDocument(new Long[] { Long.MIN_VALUE, 4L, 3L, 4L }, new Double[] { 9.0, 4.0, randomDouble() });
-        starTreeDocuments[4] = new StarTreeDocument(new Long[] { 3L, 4L, 2L, Long.MAX_VALUE }, new Double[] { 11.0, 16.0, randomDouble() });
-
-        List<StarTreeDocument> inorderStarTreeDocuments = List.of(
-            new StarTreeDocument(new Long[] { Long.MIN_VALUE, 4L, 3L, 4L }, new Object[] { 21.0, 14.0, 2L }),
-            new StarTreeDocument(new Long[] { 3L, 4L, 2L, Long.MAX_VALUE }, new Object[] { 35.0, 34.0, 3L })
-        );
-        Iterator<StarTreeDocument> expectedStarTreeDocumentIterator = inorderStarTreeDocuments.iterator();
-
-        StarTreeDocument[] segmentStarTreeDocuments = new StarTreeDocument[noOfStarTreeDocuments];
-        for (int i = 0; i < noOfStarTreeDocuments; i++) {
-            long metric1 = NumericUtils.doubleToSortableLong((Double) starTreeDocuments[i].metrics[0]);
-            long metric2 = NumericUtils.doubleToSortableLong((Double) starTreeDocuments[i].metrics[1]);
-            long metric3 = NumericUtils.doubleToSortableLong((Double) starTreeDocuments[i].metrics[2]);
-            segmentStarTreeDocuments[i] = new StarTreeDocument(starTreeDocuments[i].dimensions, new Long[] { metric1, metric2, metric3 });
-        }
-
-        Iterator<StarTreeDocument> segmentStarTreeDocumentIterator = builder.sortAndAggregateStarTreeDocuments(segmentStarTreeDocuments);
-        int numOfAggregatedDocuments = 0;
-        while (segmentStarTreeDocumentIterator.hasNext() && expectedStarTreeDocumentIterator.hasNext()) {
-            StarTreeDocument resultStarTreeDocument = segmentStarTreeDocumentIterator.next();
-            StarTreeDocument expectedStarTreeDocument = expectedStarTreeDocumentIterator.next();
-
-            assertEquals(expectedStarTreeDocument.dimensions[0], resultStarTreeDocument.dimensions[0]);
-            assertEquals(expectedStarTreeDocument.dimensions[1], resultStarTreeDocument.dimensions[1]);
-            assertEquals(expectedStarTreeDocument.dimensions[2], resultStarTreeDocument.dimensions[2]);
-            assertEquals(expectedStarTreeDocument.dimensions[3], resultStarTreeDocument.dimensions[3]);
-            assertEquals(expectedStarTreeDocument.metrics[0], resultStarTreeDocument.metrics[0]);
-            assertEquals(expectedStarTreeDocument.metrics[1], resultStarTreeDocument.metrics[1]);
-            assertEquals(expectedStarTreeDocument.metrics[2], resultStarTreeDocument.metrics[2]);
-
-            numOfAggregatedDocuments++;
-        }
-
-        assertEquals(inorderStarTreeDocuments.size(), numOfAggregatedDocuments);
-
-    }
-
-    public void test_sortAndAggregateStarTreeDocument_DoubleMaxAndDoubleMinMetrics() throws IOException {
-
-        int noOfStarTreeDocuments = 5;
-        StarTreeDocument[] starTreeDocuments = new StarTreeDocument[noOfStarTreeDocuments];
-
-        starTreeDocuments[0] = new StarTreeDocument(new Long[] { 2L, 4L, 3L, 4L }, new Double[] { Double.MAX_VALUE, 10.0, randomDouble() });
-        starTreeDocuments[1] = new StarTreeDocument(new Long[] { 3L, 4L, 2L, 1L }, new Double[] { 10.0, 6.0, randomDouble() });
-        starTreeDocuments[2] = new StarTreeDocument(new Long[] { 3L, 4L, 2L, 1L }, new Double[] { 14.0, Double.MIN_VALUE, randomDouble() });
-        starTreeDocuments[3] = new StarTreeDocument(new Long[] { 2L, 4L, 3L, 4L }, new Double[] { 9.0, 4.0, randomDouble() });
-        starTreeDocuments[4] = new StarTreeDocument(new Long[] { 3L, 4L, 2L, 1L }, new Double[] { 11.0, 16.0, randomDouble() });
-
-        List<StarTreeDocument> inorderStarTreeDocuments = List.of(
-            new StarTreeDocument(new Long[] { 2L, 4L, 3L, 4L }, new Object[] { Double.MAX_VALUE + 9, 14.0, 2L }),
-            new StarTreeDocument(new Long[] { 3L, 4L, 2L, 1L }, new Object[] { 35.0, Double.MIN_VALUE + 22, 3L })
-        );
-        Iterator<StarTreeDocument> expectedStarTreeDocumentIterator = inorderStarTreeDocuments.iterator();
-
-        StarTreeDocument[] segmentStarTreeDocuments = new StarTreeDocument[noOfStarTreeDocuments];
-        for (int i = 0; i < noOfStarTreeDocuments; i++) {
-            long metric1 = NumericUtils.doubleToSortableLong((Double) starTreeDocuments[i].metrics[0]);
-            long metric2 = NumericUtils.doubleToSortableLong((Double) starTreeDocuments[i].metrics[1]);
-            long metric3 = NumericUtils.doubleToSortableLong((Double) starTreeDocuments[i].metrics[2]);
-            segmentStarTreeDocuments[i] = new StarTreeDocument(starTreeDocuments[i].dimensions, new Long[] { metric1, metric2, metric3 });
-        }
-
-        Iterator<StarTreeDocument> segmentStarTreeDocumentIterator = builder.sortAndAggregateStarTreeDocuments(segmentStarTreeDocuments);
-        int numOfAggregatedDocuments = 0;
-        while (segmentStarTreeDocumentIterator.hasNext() && expectedStarTreeDocumentIterator.hasNext()) {
-            StarTreeDocument resultStarTreeDocument = segmentStarTreeDocumentIterator.next();
-            StarTreeDocument expectedStarTreeDocument = expectedStarTreeDocumentIterator.next();
-
-            assertEquals(expectedStarTreeDocument.dimensions[0], resultStarTreeDocument.dimensions[0]);
-            assertEquals(expectedStarTreeDocument.dimensions[1], resultStarTreeDocument.dimensions[1]);
-            assertEquals(expectedStarTreeDocument.dimensions[2], resultStarTreeDocument.dimensions[2]);
-            assertEquals(expectedStarTreeDocument.dimensions[3], resultStarTreeDocument.dimensions[3]);
-            assertEquals(expectedStarTreeDocument.metrics[0], resultStarTreeDocument.metrics[0]);
-            assertEquals(expectedStarTreeDocument.metrics[1], resultStarTreeDocument.metrics[1]);
-            assertEquals(expectedStarTreeDocument.metrics[2], resultStarTreeDocument.metrics[2]);
-
-            numOfAggregatedDocuments++;
-        }
-
-        assertEquals(inorderStarTreeDocuments.size(), numOfAggregatedDocuments);
-
-    }
-
-    public void test_build_halfFloatMetrics() throws IOException {
-
-        mapperService = mock(MapperService.class);
-        DocumentMapper documentMapper = mock(DocumentMapper.class);
-        when(mapperService.documentMapper()).thenReturn(documentMapper);
-        Settings settings = Settings.builder().put(settings(org.opensearch.Version.CURRENT).build()).build();
-        NumberFieldMapper numberFieldMapper1 = new NumberFieldMapper.Builder("field2", NumberFieldMapper.NumberType.HALF_FLOAT, false, true)
-            .build(new Mapper.BuilderContext(settings, new ContentPath()));
-        NumberFieldMapper numberFieldMapper2 = new NumberFieldMapper.Builder("field4", NumberFieldMapper.NumberType.HALF_FLOAT, false, true)
-            .build(new Mapper.BuilderContext(settings, new ContentPath()));
-        NumberFieldMapper numberFieldMapper3 = new NumberFieldMapper.Builder("field6", NumberFieldMapper.NumberType.HALF_FLOAT, false, true)
-            .build(new Mapper.BuilderContext(settings, new ContentPath()));
-        MappingLookup fieldMappers = new MappingLookup(
-            Set.of(numberFieldMapper1, numberFieldMapper2, numberFieldMapper3),
-            Collections.emptyList(),
-            Collections.emptyList(),
-            0,
-            null
-        );
-        when(documentMapper.mappers()).thenReturn(fieldMappers);
-        builder = new OnHeapStarTreeBuilder(compositeField, fieldProducerMap, writeState, mapperService);
-
-        int noOfStarTreeDocuments = 5;
-        StarTreeDocument[] starTreeDocuments = new StarTreeDocument[noOfStarTreeDocuments];
-
-        starTreeDocuments[0] = new StarTreeDocument(
-            new Long[] { 2L, 4L, 3L, 4L },
-            new HalfFloatPoint[] { new HalfFloatPoint("hf1", 12), new HalfFloatPoint("hf6", 10), new HalfFloatPoint("field6", 10) }
-        );
-        starTreeDocuments[1] = new StarTreeDocument(
-            new Long[] { 3L, 4L, 2L, 1L },
-            new HalfFloatPoint[] { new HalfFloatPoint("hf2", 10), new HalfFloatPoint("hf7", 6), new HalfFloatPoint("field6", 10) }
-        );
-        starTreeDocuments[2] = new StarTreeDocument(
-            new Long[] { 3L, 4L, 2L, 1L },
-            new HalfFloatPoint[] { new HalfFloatPoint("hf3", 14), new HalfFloatPoint("hf8", 12), new HalfFloatPoint("field6", 10) }
-        );
-        starTreeDocuments[3] = new StarTreeDocument(
-            new Long[] { 2L, 4L, 3L, 4L },
-            new HalfFloatPoint[] { new HalfFloatPoint("hf4", 9), new HalfFloatPoint("hf9", 4), new HalfFloatPoint("field6", 10) }
-        );
-        starTreeDocuments[4] = new StarTreeDocument(
-            new Long[] { 3L, 4L, 2L, 1L },
-            new HalfFloatPoint[] { new HalfFloatPoint("hf5", 11), new HalfFloatPoint("hf10", 16), new HalfFloatPoint("field6", 10) }
-        );
-
-        StarTreeDocument[] segmentStarTreeDocuments = new StarTreeDocument[noOfStarTreeDocuments];
-        for (int i = 0; i < noOfStarTreeDocuments; i++) {
-            long metric1 = HalfFloatPoint.halfFloatToSortableShort(
-                ((HalfFloatPoint) starTreeDocuments[i].metrics[0]).numericValue().floatValue()
-            );
-            long metric2 = HalfFloatPoint.halfFloatToSortableShort(
-                ((HalfFloatPoint) starTreeDocuments[i].metrics[1]).numericValue().floatValue()
-            );
-            long metric3 = HalfFloatPoint.halfFloatToSortableShort(
-                ((HalfFloatPoint) starTreeDocuments[i].metrics[2]).numericValue().floatValue()
-            );
-            segmentStarTreeDocuments[i] = new StarTreeDocument(starTreeDocuments[i].dimensions, new Long[] { metric1, metric2, metric3 });
-        }
-
-        Iterator<StarTreeDocument> segmentStarTreeDocumentIterator = builder.sortAndAggregateStarTreeDocuments(segmentStarTreeDocuments);
-        builder.build(segmentStarTreeDocumentIterator);
-
-        List<StarTreeDocument> resultStarTreeDocuments = builder.getStarTreeDocuments();
-        assertEquals(7, resultStarTreeDocuments.size());
-
-        Iterator<StarTreeDocument> expectedStarTreeDocumentIterator = getExpectedStarTreeDocumentIterator();
-        assertStarTreeDocuments(resultStarTreeDocuments, expectedStarTreeDocumentIterator);
-    }
-
-    public void test_build_floatMetrics() throws IOException {
-
-        mapperService = mock(MapperService.class);
-        DocumentMapper documentMapper = mock(DocumentMapper.class);
-        when(mapperService.documentMapper()).thenReturn(documentMapper);
-        Settings settings = Settings.builder().put(settings(org.opensearch.Version.CURRENT).build()).build();
-        NumberFieldMapper numberFieldMapper1 = new NumberFieldMapper.Builder("field2", NumberFieldMapper.NumberType.FLOAT, false, true)
-            .build(new Mapper.BuilderContext(settings, new ContentPath()));
-        NumberFieldMapper numberFieldMapper2 = new NumberFieldMapper.Builder("field4", NumberFieldMapper.NumberType.FLOAT, false, true)
-            .build(new Mapper.BuilderContext(settings, new ContentPath()));
-        NumberFieldMapper numberFieldMapper3 = new NumberFieldMapper.Builder("field6", NumberFieldMapper.NumberType.FLOAT, false, true)
-            .build(new Mapper.BuilderContext(settings, new ContentPath()));
-        MappingLookup fieldMappers = new MappingLookup(
-            Set.of(numberFieldMapper1, numberFieldMapper2, numberFieldMapper3),
-            Collections.emptyList(),
-            Collections.emptyList(),
-            0,
-            null
-        );
-        when(documentMapper.mappers()).thenReturn(fieldMappers);
-        builder = new OnHeapStarTreeBuilder(compositeField, fieldProducerMap, writeState, mapperService);
-
-        int noOfStarTreeDocuments = 5;
-        StarTreeDocument[] starTreeDocuments = new StarTreeDocument[noOfStarTreeDocuments];
-
-        starTreeDocuments[0] = new StarTreeDocument(new Long[] { 2L, 4L, 3L, 4L }, new Float[] { 12.0F, 10.0F, randomFloat() });
-        starTreeDocuments[1] = new StarTreeDocument(new Long[] { 3L, 4L, 2L, 1L }, new Float[] { 10.0F, 6.0F, randomFloat() });
-        starTreeDocuments[2] = new StarTreeDocument(new Long[] { 3L, 4L, 2L, 1L }, new Float[] { 14.0F, 12.0F, randomFloat() });
-        starTreeDocuments[3] = new StarTreeDocument(new Long[] { 2L, 4L, 3L, 4L }, new Float[] { 9.0F, 4.0F, randomFloat() });
-        starTreeDocuments[4] = new StarTreeDocument(new Long[] { 3L, 4L, 2L, 1L }, new Float[] { 11.0F, 16.0F, randomFloat() });
-
-        StarTreeDocument[] segmentStarTreeDocuments = new StarTreeDocument[noOfStarTreeDocuments];
-        for (int i = 0; i < noOfStarTreeDocuments; i++) {
-            long metric1 = NumericUtils.floatToSortableInt((Float) starTreeDocuments[i].metrics[0]);
-            long metric2 = NumericUtils.floatToSortableInt((Float) starTreeDocuments[i].metrics[1]);
-            long metric3 = NumericUtils.floatToSortableInt((Float) starTreeDocuments[i].metrics[2]);
-            segmentStarTreeDocuments[i] = new StarTreeDocument(starTreeDocuments[i].dimensions, new Long[] { metric1, metric2, metric3 });
-        }
-
-        Iterator<StarTreeDocument> segmentStarTreeDocumentIterator = builder.sortAndAggregateStarTreeDocuments(segmentStarTreeDocuments);
-        builder.build(segmentStarTreeDocumentIterator);
-
-        List<StarTreeDocument> resultStarTreeDocuments = builder.getStarTreeDocuments();
-        assertEquals(7, resultStarTreeDocuments.size());
-
-        Iterator<StarTreeDocument> expectedStarTreeDocumentIterator = getExpectedStarTreeDocumentIterator();
-        assertStarTreeDocuments(resultStarTreeDocuments, expectedStarTreeDocumentIterator);
-    }
-
-    public void test_build_longMetrics() throws IOException {
-
-        mapperService = mock(MapperService.class);
-        DocumentMapper documentMapper = mock(DocumentMapper.class);
-        when(mapperService.documentMapper()).thenReturn(documentMapper);
-        Settings settings = Settings.builder().put(settings(org.opensearch.Version.CURRENT).build()).build();
NumberFieldMapper numberFieldMapper1 = new NumberFieldMapper.Builder("field2", NumberFieldMapper.NumberType.LONG, false, true) - .build(new Mapper.BuilderContext(settings, new ContentPath())); - NumberFieldMapper numberFieldMapper2 = new NumberFieldMapper.Builder("field4", NumberFieldMapper.NumberType.LONG, false, true) - .build(new Mapper.BuilderContext(settings, new ContentPath())); - NumberFieldMapper numberFieldMapper3 = new NumberFieldMapper.Builder("field6", NumberFieldMapper.NumberType.LONG, false, true) - .build(new Mapper.BuilderContext(settings, new ContentPath())); - MappingLookup fieldMappers = new MappingLookup( - Set.of(numberFieldMapper1, numberFieldMapper2, numberFieldMapper3), - Collections.emptyList(), - Collections.emptyList(), - 0, - null - ); - when(documentMapper.mappers()).thenReturn(fieldMappers); - builder = new OnHeapStarTreeBuilder(compositeField, fieldProducerMap, writeState, mapperService); - - int noOfStarTreeDocuments = 5; - StarTreeDocument[] starTreeDocuments = new StarTreeDocument[noOfStarTreeDocuments]; - - starTreeDocuments[0] = new StarTreeDocument(new Long[] { 2L, 4L, 3L, 4L }, new Long[] { 12L, 10L, randomLong() }); - starTreeDocuments[1] = new StarTreeDocument(new Long[] { 3L, 4L, 2L, 1L }, new Long[] { 10L, 6L, randomLong() }); - starTreeDocuments[2] = new StarTreeDocument(new Long[] { 3L, 4L, 2L, 1L }, new Long[] { 14L, 12L, randomLong() }); - starTreeDocuments[3] = new StarTreeDocument(new Long[] { 2L, 4L, 3L, 4L }, new Long[] { 9L, 4L, randomLong() }); - starTreeDocuments[4] = new StarTreeDocument(new Long[] { 3L, 4L, 2L, 1L }, new Long[] { 11L, 16L, randomLong() }); - - StarTreeDocument[] segmentStarTreeDocuments = new StarTreeDocument[noOfStarTreeDocuments]; - for (int i = 0; i < noOfStarTreeDocuments; i++) { - long metric1 = (Long) starTreeDocuments[i].metrics[0]; - long metric2 = (Long) starTreeDocuments[i].metrics[1]; - long metric3 = (Long) starTreeDocuments[i].metrics[2]; - segmentStarTreeDocuments[i] = new StarTreeDocument(starTreeDocuments[i].dimensions, new Long[] { metric1, metric2, metric3 }); - } - - Iterator segmentStarTreeDocumentIterator = builder.sortAndAggregateStarTreeDocuments(segmentStarTreeDocuments); - builder.build(segmentStarTreeDocumentIterator); - - List resultStarTreeDocuments = builder.getStarTreeDocuments(); - assertEquals(7, resultStarTreeDocuments.size()); - - Iterator expectedStarTreeDocumentIterator = getExpectedStarTreeDocumentIterator(); - assertStarTreeDocuments(resultStarTreeDocuments, expectedStarTreeDocumentIterator); - } - - private static Iterator getExpectedStarTreeDocumentIterator() { - List expectedStarTreeDocuments = List.of( - new StarTreeDocument(new Long[] { 2L, 4L, 3L, 4L }, new Object[] { 21.0, 14.0, 2L }), - new StarTreeDocument(new Long[] { 3L, 4L, 2L, 1L }, new Object[] { 35.0, 34.0, 3L }), - new StarTreeDocument(new Long[] { -1L, 4L, 2L, 1L }, new Object[] { 35.0, 34.0, 3L }), - new StarTreeDocument(new Long[] { -1L, 4L, 3L, 4L }, new Object[] { 21.0, 14.0, 2L }), - new StarTreeDocument(new Long[] { -1L, 4L, -1L, 1L }, new Object[] { 35.0, 34.0, 3L }), - new StarTreeDocument(new Long[] { -1L, 4L, -1L, 4L }, new Object[] { 21.0, 14.0, 2L }), - new StarTreeDocument(new Long[] { -1L, 4L, -1L, -1L }, new Object[] { 56.0, 48.0, 5L }) - ); - return expectedStarTreeDocuments.iterator(); - } - - public void test_build() throws IOException { - - int noOfStarTreeDocuments = 5; - StarTreeDocument[] starTreeDocuments = new StarTreeDocument[noOfStarTreeDocuments]; - - starTreeDocuments[0] = new 
StarTreeDocument(new Long[] { 2L, 4L, 3L, 4L }, new Double[] { 12.0, 10.0, randomDouble() }); - starTreeDocuments[1] = new StarTreeDocument(new Long[] { 3L, 4L, 2L, 1L }, new Double[] { 10.0, 6.0, randomDouble() }); - starTreeDocuments[2] = new StarTreeDocument(new Long[] { 3L, 4L, 2L, 1L }, new Double[] { 14.0, 12.0, randomDouble() }); - starTreeDocuments[3] = new StarTreeDocument(new Long[] { 2L, 4L, 3L, 4L }, new Double[] { 9.0, 4.0, randomDouble() }); - starTreeDocuments[4] = new StarTreeDocument(new Long[] { 3L, 4L, 2L, 1L }, new Double[] { 11.0, 16.0, randomDouble() }); - - StarTreeDocument[] segmentStarTreeDocuments = new StarTreeDocument[noOfStarTreeDocuments]; - for (int i = 0; i < noOfStarTreeDocuments; i++) { - long metric1 = NumericUtils.doubleToSortableLong((Double) starTreeDocuments[i].metrics[0]); - long metric2 = NumericUtils.doubleToSortableLong((Double) starTreeDocuments[i].metrics[1]); - long metric3 = NumericUtils.doubleToSortableLong((Double) starTreeDocuments[i].metrics[2]); - segmentStarTreeDocuments[i] = new StarTreeDocument(starTreeDocuments[i].dimensions, new Long[] { metric1, metric2, metric3 }); - } - - Iterator segmentStarTreeDocumentIterator = builder.sortAndAggregateStarTreeDocuments(segmentStarTreeDocuments); - builder.build(segmentStarTreeDocumentIterator); - - List resultStarTreeDocuments = builder.getStarTreeDocuments(); - assertEquals(7, resultStarTreeDocuments.size()); - - Iterator expectedStarTreeDocumentIterator = getExpectedStarTreeDocumentIterator(); - assertStarTreeDocuments(resultStarTreeDocuments, expectedStarTreeDocumentIterator); - } - - private void assertStarTreeDocuments( - List resultStarTreeDocuments, - Iterator expectedStarTreeDocumentIterator - ) { - Iterator resultStarTreeDocumentIterator = resultStarTreeDocuments.iterator(); - while (resultStarTreeDocumentIterator.hasNext() && expectedStarTreeDocumentIterator.hasNext()) { - StarTreeDocument resultStarTreeDocument = resultStarTreeDocumentIterator.next(); - StarTreeDocument expectedStarTreeDocument = expectedStarTreeDocumentIterator.next(); - - assertEquals(expectedStarTreeDocument.dimensions[0], resultStarTreeDocument.dimensions[0]); - assertEquals(expectedStarTreeDocument.dimensions[1], resultStarTreeDocument.dimensions[1]); - assertEquals(expectedStarTreeDocument.dimensions[2], resultStarTreeDocument.dimensions[2]); - assertEquals(expectedStarTreeDocument.dimensions[3], resultStarTreeDocument.dimensions[3]); - assertEquals(expectedStarTreeDocument.metrics[0], resultStarTreeDocument.metrics[0]); - assertEquals(expectedStarTreeDocument.metrics[1], resultStarTreeDocument.metrics[1]); - assertEquals(expectedStarTreeDocument.metrics[2], resultStarTreeDocument.metrics[2]); - } - } - - public void test_build_starTreeDataset() throws IOException { - - fields = List.of("fieldC", "fieldB", "fieldL", "fieldI"); - - dimensionsOrder = List.of(new NumericDimension("fieldC"), new NumericDimension("fieldB"), new NumericDimension("fieldL")); - metrics = List.of(new Metric("fieldI", List.of(MetricStat.SUM))); - - DocValuesProducer docValuesProducer = mock(DocValuesProducer.class); - - compositeField = new StarTreeField( - "test", - dimensionsOrder, - metrics, - new StarTreeFieldConfiguration(1, Set.of(), StarTreeFieldConfiguration.StarTreeBuildMode.ON_HEAP) - ); - SegmentInfo segmentInfo = new SegmentInfo( - directory, - Version.LATEST, - Version.LUCENE_9_11_0, - "test_segment", - 7, - false, - false, - new Lucene99Codec(), - new HashMap<>(), - UUID.randomUUID().toString().substring(0, 
16).getBytes(StandardCharsets.UTF_8), - new HashMap<>(), - null - ); - - fieldsInfo = new FieldInfo[fields.size()]; - fieldProducerMap = new HashMap<>(); - for (int i = 0; i < fieldsInfo.length; i++) { - fieldsInfo[i] = new FieldInfo( - fields.get(i), - i, - false, - false, - true, - IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS, - DocValuesType.SORTED_NUMERIC, - -1, - Collections.emptyMap(), - 0, - 0, - 0, - 0, - VectorEncoding.FLOAT32, - VectorSimilarityFunction.EUCLIDEAN, - false, - false - ); - fieldProducerMap.put(fields.get(i), docValuesProducer); - } - FieldInfos fieldInfos = new FieldInfos(fieldsInfo); - writeState = new SegmentWriteState(InfoStream.getDefault(), segmentInfo.dir, segmentInfo, fieldInfos, null, newIOContext(random())); - - mapperService = mock(MapperService.class); - DocumentMapper documentMapper = mock(DocumentMapper.class); - when(mapperService.documentMapper()).thenReturn(documentMapper); - Settings settings = Settings.builder().put(settings(org.opensearch.Version.CURRENT).build()).build(); - NumberFieldMapper numberFieldMapper1 = new NumberFieldMapper.Builder("fieldI", NumberFieldMapper.NumberType.DOUBLE, false, true) - .build(new Mapper.BuilderContext(settings, new ContentPath())); - MappingLookup fieldMappers = new MappingLookup( - Set.of(numberFieldMapper1), - Collections.emptyList(), - Collections.emptyList(), - 0, - null - ); - when(documentMapper.mappers()).thenReturn(fieldMappers); - builder = new OnHeapStarTreeBuilder(compositeField, fieldProducerMap, writeState, mapperService); - - int noOfStarTreeDocuments = 7; - StarTreeDocument[] starTreeDocuments = new StarTreeDocument[noOfStarTreeDocuments]; - starTreeDocuments[0] = new StarTreeDocument(new Long[] { 1L, 11L, 21L }, new Double[] { 400.0 }); - starTreeDocuments[1] = new StarTreeDocument(new Long[] { 1L, 12L, 22L }, new Double[] { 200.0 }); - starTreeDocuments[2] = new StarTreeDocument(new Long[] { 2L, 13L, 23L }, new Double[] { 300.0 }); - starTreeDocuments[3] = new StarTreeDocument(new Long[] { 2L, 13L, 21L }, new Double[] { 100.0 }); - starTreeDocuments[4] = new StarTreeDocument(new Long[] { 3L, 11L, 21L }, new Double[] { 600.0 }); - starTreeDocuments[5] = new StarTreeDocument(new Long[] { 3L, 12L, 23L }, new Double[] { 200.0 }); - starTreeDocuments[6] = new StarTreeDocument(new Long[] { 3L, 12L, 21L }, new Double[] { 400.0 }); - - StarTreeDocument[] segmentStarTreeDocuments = new StarTreeDocument[noOfStarTreeDocuments]; - for (int i = 0; i < noOfStarTreeDocuments; i++) { - long metric1 = NumericUtils.doubleToSortableLong((Double) starTreeDocuments[i].metrics[0]); - segmentStarTreeDocuments[i] = new StarTreeDocument(starTreeDocuments[i].dimensions, new Long[] { metric1 }); - } - - Iterator segmentStarTreeDocumentIterator = builder.sortAndAggregateStarTreeDocuments(segmentStarTreeDocuments); - builder.build(segmentStarTreeDocumentIterator); - - List resultStarTreeDocuments = builder.getStarTreeDocuments(); - List expectedStarTreeDocuments = List.of( - new StarTreeDocument(new Long[] { 1L, 11L, 21L }, new Object[] { 400.0 }), - new StarTreeDocument(new Long[] { 1L, 12L, 22L }, new Object[] { 200.0 }), - new StarTreeDocument(new Long[] { 2L, 13L, 21L }, new Object[] { 100.0 }), - new StarTreeDocument(new Long[] { 2L, 13L, 23L }, new Object[] { 300.0 }), - new StarTreeDocument(new Long[] { 3L, 11L, 21L }, new Object[] { 600.0 }), - new StarTreeDocument(new Long[] { 3L, 12L, 21L }, new Object[] { 400.0 }), - new StarTreeDocument(new Long[] { 3L, 12L, 23L }, new Object[] { 200.0 }), - new 
StarTreeDocument(new Long[] { -1L, 11L, 21L }, new Object[] { 1000.0 }), - new StarTreeDocument(new Long[] { -1L, 12L, 21L }, new Object[] { 400.0 }), - new StarTreeDocument(new Long[] { -1L, 12L, 22L }, new Object[] { 200.0 }), - new StarTreeDocument(new Long[] { -1L, 12L, 23L }, new Object[] { 200.0 }), - new StarTreeDocument(new Long[] { -1L, 13L, 21L }, new Object[] { 100.0 }), - new StarTreeDocument(new Long[] { -1L, 13L, 23L }, new Object[] { 300.0 }), - new StarTreeDocument(new Long[] { -1L, -1L, 21L }, new Object[] { 1500.0 }), - new StarTreeDocument(new Long[] { -1L, -1L, 22L }, new Object[] { 200.0 }), - new StarTreeDocument(new Long[] { -1L, -1L, 23L }, new Object[] { 500.0 }), - new StarTreeDocument(new Long[] { -1L, -1L, -1L }, new Object[] { 2200.0 }), - new StarTreeDocument(new Long[] { -1L, 12L, -1L }, new Object[] { 800.0 }), - new StarTreeDocument(new Long[] { -1L, 13L, -1L }, new Object[] { 400.0 }), - new StarTreeDocument(new Long[] { 1L, -1L, 21L }, new Object[] { 400.0 }), - new StarTreeDocument(new Long[] { 1L, -1L, 22L }, new Object[] { 200.0 }), - new StarTreeDocument(new Long[] { 1L, -1L, -1L }, new Object[] { 600.0 }), - new StarTreeDocument(new Long[] { 2L, 13L, -1L }, new Object[] { 400.0 }), - new StarTreeDocument(new Long[] { 3L, -1L, 21L }, new Object[] { 1000.0 }), - new StarTreeDocument(new Long[] { 3L, -1L, 23L }, new Object[] { 200.0 }), - new StarTreeDocument(new Long[] { 3L, -1L, -1L }, new Object[] { 1200.0 }), - new StarTreeDocument(new Long[] { 3L, 12L, -1L }, new Object[] { 600.0 }) - ); - - Iterator expectedStarTreeDocumentIterator = expectedStarTreeDocuments.iterator(); - Iterator resultStarTreeDocumentIterator = resultStarTreeDocuments.iterator(); - while (resultStarTreeDocumentIterator.hasNext() && expectedStarTreeDocumentIterator.hasNext()) { - StarTreeDocument resultStarTreeDocument = resultStarTreeDocumentIterator.next(); - StarTreeDocument expectedStarTreeDocument = expectedStarTreeDocumentIterator.next(); - - assertEquals(expectedStarTreeDocument.dimensions[0], resultStarTreeDocument.dimensions[0]); - assertEquals(expectedStarTreeDocument.dimensions[1], resultStarTreeDocument.dimensions[1]); - assertEquals(expectedStarTreeDocument.dimensions[2], resultStarTreeDocument.dimensions[2]); - assertEquals(expectedStarTreeDocument.metrics[0], resultStarTreeDocument.metrics[0]); - } - - } +public class OnHeapStarTreeBuilderTests extends AbstractStarTreeBuilderTests { @Override - public void tearDown() throws Exception { - super.tearDown(); - directory.close(); + public BaseStarTreeBuilder getStarTreeBuilder( + StarTreeField starTreeField, + SegmentWriteState segmentWriteState, + MapperService mapperService + ) { + return new OnHeapStarTreeBuilder(starTreeField, segmentWriteState, mapperService); } } diff --git a/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/builder/StarTreeDocValuesIteratorAdapterTests.java b/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/builder/StarTreeDocValuesIteratorAdapterTests.java deleted file mode 100644 index 9c2621401faa4..0000000000000 --- a/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/builder/StarTreeDocValuesIteratorAdapterTests.java +++ /dev/null @@ -1,139 +0,0 @@ -/* - * SPDX-License-Identifier: Apache-2.0 - * - * The OpenSearch Contributors require contributions made to - * this file be licensed under the Apache-2.0 license or a - * compatible open source license. 
- */
-
-package org.opensearch.index.compositeindex.datacube.startree.builder;
-
-import org.apache.lucene.codecs.DocValuesProducer;
-import org.apache.lucene.index.DocValuesType;
-import org.apache.lucene.index.FieldInfo;
-import org.apache.lucene.index.IndexOptions;
-import org.apache.lucene.index.SortedNumericDocValues;
-import org.apache.lucene.index.VectorEncoding;
-import org.apache.lucene.index.VectorSimilarityFunction;
-import org.apache.lucene.search.DocIdSetIterator;
-import org.opensearch.index.compositeindex.datacube.startree.utils.SequentialDocValuesIterator;
-import org.opensearch.test.OpenSearchTestCase;
-
-import java.io.IOException;
-import java.util.Collections;
-
-import static org.mockito.ArgumentMatchers.any;
-import static org.mockito.Mockito.mock;
-import static org.mockito.Mockito.when;
-
-public class StarTreeDocValuesIteratorAdapterTests extends OpenSearchTestCase {
-
-    private StarTreeDocValuesIteratorAdapter adapter;
-
-    @Override
-    public void setUp() throws Exception {
-        super.setUp();
-        adapter = new StarTreeDocValuesIteratorAdapter();
-    }
-
-    public void testGetDocValuesIterator() throws IOException {
-        DocValuesProducer mockProducer = mock(DocValuesProducer.class);
-        SortedNumericDocValues mockSortedNumericDocValues = mock(SortedNumericDocValues.class);
-
-        when(mockProducer.getSortedNumeric(any())).thenReturn(mockSortedNumericDocValues);
-
-        SequentialDocValuesIterator iterator = adapter.getDocValuesIterator(DocValuesType.SORTED_NUMERIC, any(), mockProducer);
-
-        assertNotNull(iterator);
-        assertEquals(mockSortedNumericDocValues, iterator.getDocIdSetIterator());
-    }
-
-    public void testGetDocValuesIteratorWithUnsupportedType() {
-        DocValuesProducer mockProducer = mock(DocValuesProducer.class);
-        FieldInfo fieldInfo = new FieldInfo(
-            "random_field",
-            0,
-            false,
-            false,
-            true,
-            IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS,
-            DocValuesType.SORTED_NUMERIC,
-            -1,
-            Collections.emptyMap(),
-            0,
-            0,
-            0,
-            0,
-            VectorEncoding.FLOAT32,
-            VectorSimilarityFunction.EUCLIDEAN,
-            false,
-            false
-        );
-        IllegalArgumentException exception = expectThrows(IllegalArgumentException.class, () -> {
-            adapter.getDocValuesIterator(DocValuesType.BINARY, fieldInfo, mockProducer);
-        });
-
-        assertEquals("Unsupported DocValuesType: BINARY", exception.getMessage());
-    }
-
-    public void testGetNextValue() throws IOException {
-        SortedNumericDocValues mockSortedNumericDocValues = mock(SortedNumericDocValues.class);
-        SequentialDocValuesIterator iterator = new SequentialDocValuesIterator(mockSortedNumericDocValues);
-        iterator.setDocId(1);
-        when(mockSortedNumericDocValues.nextValue()).thenReturn(42L);
-
-        Long nextValue = adapter.getNextValue(iterator, 1);
-
-        assertEquals(Long.valueOf(42L), nextValue);
-        assertEquals(Long.valueOf(42L), iterator.getDocValue());
-    }
-
-    public void testGetNextValueWithInvalidDocId() {
-        SortedNumericDocValues mockSortedNumericDocValues = mock(SortedNumericDocValues.class);
-        SequentialDocValuesIterator iterator = new SequentialDocValuesIterator(mockSortedNumericDocValues);
-        iterator.setDocId(DocIdSetIterator.NO_MORE_DOCS);
-
-        IllegalStateException exception = expectThrows(IllegalStateException.class, () -> { adapter.getNextValue(iterator, 1); });
-
-        assertEquals("invalid doc id to fetch the next value", exception.getMessage());
-    }
-
-    public void testGetNextValueWithUnsupportedIterator() {
-        DocIdSetIterator mockIterator = mock(DocIdSetIterator.class);
-        SequentialDocValuesIterator iterator = new SequentialDocValuesIterator(mockIterator);
-
-        IllegalStateException exception = expectThrows(IllegalStateException.class, () -> { adapter.getNextValue(iterator, 1); });
-
-        assertEquals("Unsupported Iterator: " + mockIterator.toString(), exception.getMessage());
-    }
-
-    public void testNextDoc() throws IOException {
-        SortedNumericDocValues mockSortedNumericDocValues = mock(SortedNumericDocValues.class);
-        SequentialDocValuesIterator iterator = new SequentialDocValuesIterator(mockSortedNumericDocValues);
-        when(mockSortedNumericDocValues.nextDoc()).thenReturn(2, 3, DocIdSetIterator.NO_MORE_DOCS);
-        when(mockSortedNumericDocValues.nextValue()).thenReturn(42L, 32L);
-
-        int nextDocId = adapter.nextDoc(iterator, 1);
-        assertEquals(2, nextDocId);
-        assertEquals(Long.valueOf(42L), adapter.getNextValue(iterator, nextDocId));
-
-        nextDocId = adapter.nextDoc(iterator, 2);
-        assertEquals(3, nextDocId);
-        when(mockSortedNumericDocValues.nextValue()).thenReturn(42L, 32L);
-
-    }
-
-    public void testNextDoc_noMoreDocs() throws IOException {
-        SortedNumericDocValues mockSortedNumericDocValues = mock(SortedNumericDocValues.class);
-        SequentialDocValuesIterator iterator = new SequentialDocValuesIterator(mockSortedNumericDocValues);
-        when(mockSortedNumericDocValues.nextDoc()).thenReturn(2, DocIdSetIterator.NO_MORE_DOCS);
-        when(mockSortedNumericDocValues.nextValue()).thenReturn(42L, 32L);
-
-        int nextDocId = adapter.nextDoc(iterator, 1);
-        assertEquals(2, nextDocId);
-        assertEquals(Long.valueOf(42L), adapter.getNextValue(iterator, nextDocId));
-
-        assertThrows(IllegalStateException.class, () -> adapter.nextDoc(iterator, 2));
-
-    }
-}
diff --git a/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/builder/StarTreeValuesIteratorFactoryTests.java b/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/builder/StarTreeValuesIteratorFactoryTests.java
deleted file mode 100644
index 1aba67533d52e..0000000000000
--- a/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/builder/StarTreeValuesIteratorFactoryTests.java
+++ /dev/null
@@ -1,131 +0,0 @@
-/*
- * SPDX-License-Identifier: Apache-2.0
- *
- * The OpenSearch Contributors require contributions made to
- * this file be licensed under the Apache-2.0 license or a
- * compatible open source license.
- */
-
-package org.opensearch.index.compositeindex.datacube.startree.builder;
-
-import org.apache.lucene.codecs.DocValuesProducer;
-import org.apache.lucene.index.DocValuesType;
-import org.apache.lucene.index.FieldInfo;
-import org.apache.lucene.index.IndexOptions;
-import org.apache.lucene.index.SortedNumericDocValues;
-import org.apache.lucene.index.VectorEncoding;
-import org.apache.lucene.index.VectorSimilarityFunction;
-import org.apache.lucene.search.DocIdSetIterator;
-import org.opensearch.index.compositeindex.datacube.startree.utils.SequentialDocValuesIterator;
-import org.opensearch.test.OpenSearchTestCase;
-import org.junit.BeforeClass;
-
-import java.io.IOException;
-import java.util.Collections;
-
-import org.mockito.Mockito;
-
-import static org.mockito.Mockito.when;
-
-public class StarTreeValuesIteratorFactoryTests extends OpenSearchTestCase {
-
-    private static StarTreeDocValuesIteratorAdapter starTreeDocValuesIteratorAdapter;
-    private static FieldInfo mockFieldInfo;
-
-    @BeforeClass
-    public static void setup() {
-        starTreeDocValuesIteratorAdapter = new StarTreeDocValuesIteratorAdapter();
-        mockFieldInfo = new FieldInfo(
-            "field",
-            1,
-            false,
-            false,
-            true,
-            IndexOptions.NONE,
-            DocValuesType.NONE,
-            -1,
-            Collections.emptyMap(),
-            0,
-            0,
-            0,
-            0,
-            VectorEncoding.FLOAT32,
-            VectorSimilarityFunction.EUCLIDEAN,
-            false,
-            false
-        );
-    }
-
-    public void testCreateIterator_SortedNumeric() throws IOException {
-        DocValuesProducer producer = Mockito.mock(DocValuesProducer.class);
-        SortedNumericDocValues iterator = Mockito.mock(SortedNumericDocValues.class);
-        when(producer.getSortedNumeric(mockFieldInfo)).thenReturn(iterator);
-        SequentialDocValuesIterator result = starTreeDocValuesIteratorAdapter.getDocValuesIterator(
-            DocValuesType.SORTED_NUMERIC,
-            mockFieldInfo,
-            producer
-        );
-        assertEquals(iterator.getClass(), result.getDocIdSetIterator().getClass());
-    }
-
-    public void testCreateIterator_UnsupportedType() {
-        DocValuesProducer producer = Mockito.mock(DocValuesProducer.class);
-        IllegalArgumentException exception = expectThrows(IllegalArgumentException.class, () -> {
-            starTreeDocValuesIteratorAdapter.getDocValuesIterator(DocValuesType.BINARY, mockFieldInfo, producer);
-        });
-        assertEquals("Unsupported DocValuesType: BINARY", exception.getMessage());
-    }
-
-    public void testGetNextValue_SortedNumeric() throws IOException {
-        SortedNumericDocValues iterator = Mockito.mock(SortedNumericDocValues.class);
-        when(iterator.nextDoc()).thenReturn(0);
-        when(iterator.nextValue()).thenReturn(123L);
-        SequentialDocValuesIterator sequentialDocValuesIterator = new SequentialDocValuesIterator(iterator);
-        sequentialDocValuesIterator.getDocIdSetIterator().nextDoc();
-        long result = starTreeDocValuesIteratorAdapter.getNextValue(sequentialDocValuesIterator, 0);
-        assertEquals(123L, result);
-    }
-
-    public void testGetNextValue_UnsupportedIterator() {
-        DocIdSetIterator iterator = Mockito.mock(DocIdSetIterator.class);
-        SequentialDocValuesIterator sequentialDocValuesIterator = new SequentialDocValuesIterator(iterator);
-
-        IllegalStateException exception = expectThrows(IllegalStateException.class, () -> {
-            starTreeDocValuesIteratorAdapter.getNextValue(sequentialDocValuesIterator, 0);
-        });
-        assertEquals("Unsupported Iterator: " + iterator.toString(), exception.getMessage());
-    }
-
-    public void testNextDoc() throws IOException {
-        SortedNumericDocValues iterator = Mockito.mock(SortedNumericDocValues.class);
-        SequentialDocValuesIterator sequentialDocValuesIterator = new SequentialDocValuesIterator(iterator);
-        when(iterator.nextDoc()).thenReturn(5);
-
-        int result = starTreeDocValuesIteratorAdapter.nextDoc(sequentialDocValuesIterator, 5);
-        assertEquals(5, result);
-    }
-
-    public void test_multipleCoordinatedDocumentReader() throws IOException {
-        SortedNumericDocValues iterator1 = Mockito.mock(SortedNumericDocValues.class);
-        SortedNumericDocValues iterator2 = Mockito.mock(SortedNumericDocValues.class);
-
-        SequentialDocValuesIterator sequentialDocValuesIterator1 = new SequentialDocValuesIterator(iterator1);
-        SequentialDocValuesIterator sequentialDocValuesIterator2 = new SequentialDocValuesIterator(iterator2);
-
-        when(iterator1.nextDoc()).thenReturn(0);
-        when(iterator2.nextDoc()).thenReturn(1);
-
-        when(iterator1.nextValue()).thenReturn(9L);
-        when(iterator2.nextValue()).thenReturn(9L);
-
-        starTreeDocValuesIteratorAdapter.nextDoc(sequentialDocValuesIterator1, 0);
-        starTreeDocValuesIteratorAdapter.nextDoc(sequentialDocValuesIterator2, 0);
-        assertEquals(0, sequentialDocValuesIterator1.getDocId());
-        assertEquals(9L, (long) sequentialDocValuesIterator1.getDocValue());
-        assertNotEquals(0, sequentialDocValuesIterator2.getDocId());
-        assertEquals(1, sequentialDocValuesIterator2.getDocId());
-        assertEquals(9L, (long) sequentialDocValuesIterator2.getDocValue());
-
-    }
-
-}
diff --git a/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/builder/StarTreesBuilderTests.java b/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/builder/StarTreesBuilderTests.java
index 518c6729c2e1a..564ab110fa7a5 100644
--- a/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/builder/StarTreesBuilderTests.java
+++ b/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/builder/StarTreesBuilderTests.java
@@ -88,16 +88,16 @@ public void setUp() throws Exception {
 
     public void test_buildWithNoStarTreeFields() throws IOException {
         when(mapperService.getCompositeFieldTypes()).thenReturn(new HashSet<>());
-        StarTreesBuilder starTreesBuilder = new StarTreesBuilder(fieldProducerMap, segmentWriteState, mapperService);
-        starTreesBuilder.build();
+        StarTreesBuilder starTreesBuilder = new StarTreesBuilder(segmentWriteState, mapperService);
+        starTreesBuilder.build(fieldProducerMap);
 
         verifyNoInteractions(docValuesProducer);
     }
 
     public void test_getStarTreeBuilder() throws IOException {
         when(mapperService.getCompositeFieldTypes()).thenReturn(Set.of(starTreeFieldType));
-        StarTreesBuilder starTreesBuilder = new StarTreesBuilder(fieldProducerMap, segmentWriteState, mapperService);
-        StarTreeBuilder starTreeBuilder = starTreesBuilder.getStarTreeBuilder(starTreeField, fieldProducerMap, segmentWriteState, mapperService);
+        StarTreesBuilder starTreesBuilder = new StarTreesBuilder(segmentWriteState, mapperService);
+        StarTreeBuilder starTreeBuilder = starTreesBuilder.getSingleTreeBuilder(starTreeField, segmentWriteState, mapperService);
         assertTrue(starTreeBuilder instanceof OnHeapStarTreeBuilder);
     }
 
@@ -105,8 +105,8 @@ public void test_getStarTreeBuilder_illegalArgument() {
         when(mapperService.getCompositeFieldTypes()).thenReturn(Set.of(starTreeFieldType));
         StarTreeFieldConfiguration starTreeFieldConfiguration = new StarTreeFieldConfiguration(1, new HashSet<>(), StarTreeFieldConfiguration.StarTreeBuildMode.OFF_HEAP);
         StarTreeField starTreeField = new StarTreeField("star_tree", new ArrayList<>(), new ArrayList<>(), starTreeFieldConfiguration);
-        StarTreesBuilder starTreesBuilder = new StarTreesBuilder(fieldProducerMap, segmentWriteState, mapperService);
-        assertThrows(IllegalArgumentException.class, () -> starTreesBuilder.getStarTreeBuilder(starTreeField, fieldProducerMap, segmentWriteState, mapperService));
+        StarTreesBuilder starTreesBuilder = new StarTreesBuilder(segmentWriteState, mapperService);
+        assertThrows(IllegalArgumentException.class, () -> starTreesBuilder.getSingleTreeBuilder(starTreeField, segmentWriteState, mapperService));
     }
 
     public void test_closeWithNoStarTreeFields() throws IOException {
@@ -118,7 +118,7 @@ public void test_closeWithNoStarTreeFields() throws IOException {
         StarTreeField starTreeField = new StarTreeField("star_tree", new ArrayList<>(), new ArrayList<>(), starTreeFieldConfiguration);
         starTreeFieldType = new StarTreeMapper.StarTreeFieldType("star_tree", starTreeField);
         when(mapperService.getCompositeFieldTypes()).thenReturn(Set.of(starTreeFieldType));
-        StarTreesBuilder starTreesBuilder = new StarTreesBuilder(fieldProducerMap, segmentWriteState, mapperService);
+        StarTreesBuilder starTreesBuilder = new StarTreesBuilder(segmentWriteState, mapperService);
         starTreesBuilder.close();
 
         verifyNoInteractions(docValuesProducer);
diff --git a/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/utils/SequentialDocValuesIteratorTests.java b/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/utils/SequentialDocValuesIteratorTests.java
index 76b612e3677f7..dfc83125b2806 100644
--- a/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/utils/SequentialDocValuesIteratorTests.java
+++ b/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/utils/SequentialDocValuesIteratorTests.java
@@ -8,39 +8,126 @@
 
 package org.opensearch.index.compositeindex.datacube.startree.utils;
 
+import org.apache.lucene.codecs.DocValuesProducer;
+import org.apache.lucene.index.BinaryDocValues;
+import org.apache.lucene.index.DocValuesType;
+import org.apache.lucene.index.FieldInfo;
+import org.apache.lucene.index.IndexOptions;
 import org.apache.lucene.index.SortedNumericDocValues;
-import org.opensearch.index.fielddata.AbstractNumericDocValues;
+import org.apache.lucene.index.VectorEncoding;
+import org.apache.lucene.index.VectorSimilarityFunction;
+import org.apache.lucene.search.DocIdSetIterator;
+import org.apache.lucene.util.BytesRef;
 import org.opensearch.test.OpenSearchTestCase;
+import org.junit.BeforeClass;
 
 import java.io.IOException;
+import java.util.Collections;
+
+import org.mockito.Mockito;
+
+import static org.mockito.Mockito.when;
 
 public class SequentialDocValuesIteratorTests extends OpenSearchTestCase {
 
-    public void test_sequentialDocValuesIterator() {
-        SequentialDocValuesIterator sequentialDocValuesIterator = new SequentialDocValuesIterator(new AbstractNumericDocValues() {
-            @Override
-            public long longValue() throws IOException {
-                return 0;
-            }
-
-            @Override
-            public boolean advanceExact(int i) throws IOException {
-                return false;
-            }
-
-            @Override
-            public int docID() {
-                return 0;
-            }
+    private static FieldInfo mockFieldInfo;
+
+    @BeforeClass
+    public static void setup() {
+        mockFieldInfo = new FieldInfo(
+            "field",
+            1,
+            false,
+            false,
+            true,
+            IndexOptions.NONE,
+            DocValuesType.NONE,
+            -1,
+            Collections.emptyMap(),
+            0,
+            0,
+            0,
+            0,
+            VectorEncoding.FLOAT32,
+            VectorSimilarityFunction.EUCLIDEAN,
+            false,
+            false
+        );
+    }
+
+    public void testCreateIterator_SortedNumeric() throws IOException {
+        DocValuesProducer producer = Mockito.mock(DocValuesProducer.class);
+        SortedNumericDocValues iterator = Mockito.mock(SortedNumericDocValues.class);
+        when(producer.getSortedNumeric(mockFieldInfo)).thenReturn(iterator);
+        SequentialDocValuesIterator result = new SequentialDocValuesIterator(producer.getSortedNumeric(mockFieldInfo));
+        assertEquals(iterator.getClass(), result.getDocIdSetIterator().getClass());
+    }
+
+    public void testCreateIterator_UnsupportedType() throws IOException {
+        DocValuesProducer producer = Mockito.mock(DocValuesProducer.class);
+        BinaryDocValues iterator = Mockito.mock(BinaryDocValues.class);
+        when(producer.getBinary(mockFieldInfo)).thenReturn(iterator);
+        SequentialDocValuesIterator result = new SequentialDocValuesIterator(producer.getBinary(mockFieldInfo));
+        assertEquals(iterator.getClass(), result.getDocIdSetIterator().getClass());
+        when(iterator.nextDoc()).thenReturn(0);
+        when(iterator.binaryValue()).thenReturn(new BytesRef("123"));
+
+        IllegalStateException exception = expectThrows(IllegalStateException.class, () -> {
+            result.nextDoc(0);
+            result.value(0);
         });
+        assertEquals("Unsupported Iterator requested for SequentialDocValuesIterator", exception.getMessage());
+    }
+
+    public void testGetNextValue_SortedNumeric() throws IOException {
+        SortedNumericDocValues iterator = Mockito.mock(SortedNumericDocValues.class);
+        when(iterator.nextDoc()).thenReturn(0);
+        when(iterator.nextValue()).thenReturn(123L);
+        SequentialDocValuesIterator sequentialDocValuesIterator = new SequentialDocValuesIterator(iterator);
+        sequentialDocValuesIterator.nextDoc(0);
+        long result = sequentialDocValuesIterator.value(0);
+        assertEquals(123L, result);
+    }
+
+    public void testGetNextValue_UnsupportedIterator() {
+        DocIdSetIterator iterator = Mockito.mock(DocIdSetIterator.class);
+        SequentialDocValuesIterator sequentialDocValuesIterator = new SequentialDocValuesIterator(iterator);
+
+        IllegalStateException exception = expectThrows(IllegalStateException.class, () -> { sequentialDocValuesIterator.value(0); });
+        assertEquals("Unsupported Iterator requested for SequentialDocValuesIterator", exception.getMessage());
+    }
+
+    public void testNextDoc() throws IOException {
+        SortedNumericDocValues iterator = Mockito.mock(SortedNumericDocValues.class);
+        SequentialDocValuesIterator sequentialDocValuesIterator = new SequentialDocValuesIterator(iterator);
+        when(iterator.nextDoc()).thenReturn(5);
 
-        assertTrue(sequentialDocValuesIterator.getDocIdSetIterator() instanceof AbstractNumericDocValues);
-        assertEquals(sequentialDocValuesIterator.getDocId(), 0);
+        int result = sequentialDocValuesIterator.nextDoc(5);
+        assertEquals(5, result);
     }
 
-    public void test_sequentialDocValuesIterator_default() {
-        SequentialDocValuesIterator sequentialDocValuesIterator = new SequentialDocValuesIterator();
-        assertTrue(sequentialDocValuesIterator.getDocIdSetIterator() instanceof SortedNumericDocValues);
+    public void test_multipleCoordinatedDocumentReader() throws IOException {
+        SortedNumericDocValues iterator1 = Mockito.mock(SortedNumericDocValues.class);
+        SortedNumericDocValues iterator2 = Mockito.mock(SortedNumericDocValues.class);
+
+        SequentialDocValuesIterator sequentialDocValuesIterator1 = new SequentialDocValuesIterator(iterator1);
+        SequentialDocValuesIterator sequentialDocValuesIterator2 = new SequentialDocValuesIterator(iterator2);
+
+        when(iterator1.nextDoc()).thenReturn(0);
+        when(iterator2.nextDoc()).thenReturn(1);
+
+        when(iterator1.nextValue()).thenReturn(9L);
+        when(iterator2.nextValue()).thenReturn(9L);
+
+        sequentialDocValuesIterator1.nextDoc(0);
+        sequentialDocValuesIterator2.nextDoc(0);
+        assertEquals(0, sequentialDocValuesIterator1.getDocId());
+        assertEquals(9L, (long) sequentialDocValuesIterator1.value(0));
+        assertNull(sequentialDocValuesIterator2.value(0));
+        assertNotEquals(0, sequentialDocValuesIterator2.getDocId());
+        assertEquals(1, sequentialDocValuesIterator2.getDocId());
+        assertEquals(9L, (long) sequentialDocValuesIterator2.value(1));
+    }
 }
diff --git a/test/framework/src/main/java/org/opensearch/index/MapperTestUtils.java b/test/framework/src/main/java/org/opensearch/index/MapperTestUtils.java
index 108492c1cf8f9..302180fcf95df 100644
--- a/test/framework/src/main/java/org/opensearch/index/MapperTestUtils.java
+++ b/test/framework/src/main/java/org/opensearch/index/MapperTestUtils.java
@@ -38,6 +38,7 @@
 import org.opensearch.common.settings.Settings;
 import org.opensearch.core.xcontent.NamedXContentRegistry;
 import org.opensearch.env.Environment;
+import org.opensearch.index.analysis.AnalysisTestsHelper;
 import org.opensearch.index.analysis.IndexAnalyzers;
 import org.opensearch.index.mapper.DocumentMapper;
 import org.opensearch.index.mapper.DocumentMapperParser;
@@ -46,6 +47,7 @@
 import org.opensearch.index.similarity.SimilarityService;
 import org.opensearch.indices.IndicesModule;
 import org.opensearch.indices.mapper.MapperRegistry;
+import org.opensearch.plugins.AnalysisPlugin;
 import org.opensearch.test.IndexSettingsModule;
 
 import java.io.IOException;
@@ -97,6 +99,38 @@ public static MapperService newMapperService(
         );
     }
 
+    public static MapperService newMapperServiceWithHelperAnalyzer(
+        NamedXContentRegistry xContentRegistry,
+        Path tempDir,
+        Settings settings,
+        IndicesModule indicesModule,
+        String indexName
+    ) throws IOException {
+        Settings.Builder settingsBuilder = Settings.builder().put(Environment.PATH_HOME_SETTING.getKey(), tempDir).put(settings);
+        if (settings.get(IndexMetadata.SETTING_VERSION_CREATED) == null) {
+            settingsBuilder.put(IndexMetadata.SETTING_VERSION_CREATED, Version.CURRENT);
+        }
+        Settings finalSettings = settingsBuilder.build();
+        MapperRegistry mapperRegistry = indicesModule.getMapperRegistry();
+        IndexSettings indexSettings = IndexSettingsModule.newIndexSettings(indexName, finalSettings);
+        IndexAnalyzers indexAnalyzers = createMockTestAnalysis(finalSettings);
+        SimilarityService similarityService = new SimilarityService(indexSettings, null, Collections.emptyMap());
+        return new MapperService(
+            indexSettings,
+            indexAnalyzers,
+            xContentRegistry,
+            similarityService,
+            mapperRegistry,
+            () -> null,
+            () -> false,
+            null
+        );
+    }
+
+    public static IndexAnalyzers createMockTestAnalysis(Settings nodeSettings, AnalysisPlugin... analysisPlugins) throws IOException {
+        return AnalysisTestsHelper.createTestAnalysisFromSettings(nodeSettings, analysisPlugins).indexAnalyzers;
+    }
+
     public static void assertConflicts(String mapping1, String mapping2, DocumentMapperParser parser, String... conflicts) throws IOException {
         DocumentMapper docMapper = parser.parse("type", new CompressedXContent(mapping1));

From e749424db053ad31db1c4f1ab9374251ca9b737d Mon Sep 17 00:00:00 2001
From: Rishabh Singh
Date: Tue, 23 Jul 2024 20:24:35 -0700
Subject: [PATCH 113/167] Security fixes and updates (#14928)

Signed-off-by: Rishabh Singh
---
 .github/workflows/add-performance-comment.yml |  5 ++-
 .github/workflows/benchmark-pull-request.yml  | 34 +++++++++----------
 2 files changed, 21 insertions(+), 18 deletions(-)

diff --git a/.github/workflows/add-performance-comment.yml b/.github/workflows/add-performance-comment.yml
index b522d348c84b2..fc272714c5628 100644
--- a/.github/workflows/add-performance-comment.yml
+++ b/.github/workflows/add-performance-comment.yml
@@ -6,7 +6,10 @@ on:
 
 jobs:
   add-comment:
-    if: github.event.label.name == 'Performance'
+    if: |
+      github.event.label.name == 'Performance' ||
+      github.event.label.name == 'Search:Performance' ||
+      github.event.label.name == 'Indexing:Performance'
     runs-on: ubuntu-latest
     permissions:
      pull-requests: write
diff --git a/.github/workflows/benchmark-pull-request.yml b/.github/workflows/benchmark-pull-request.yml
index 9d83331e81d5a..47abcc1178572 100644
--- a/.github/workflows/benchmark-pull-request.yml
+++ b/.github/workflows/benchmark-pull-request.yml
@@ -77,18 +77,6 @@ jobs:
         run: |
           echo "Invalid comment format detected. Failing the workflow."
           exit 1
-      - id: get_approvers
-        run: |
-          echo "approvers=$(cat .github/CODEOWNERS | grep '^\*' | tr -d '* ' | sed 's/@/,/g' | sed 's/,//1')" >> $GITHUB_OUTPUT
-      - uses: trstringer/manual-approval@v1
-        if: (!contains(steps.get_approvers.outputs.approvers, github.event.comment.user.login))
-        with:
-          secret: ${{ github.TOKEN }}
-          approvers: ${{ steps.get_approvers.outputs.approvers }}
-          minimum-approvals: 1
-          issue-title: 'Request to approve/deny benchmark run for PR #${{ env.PR_NUMBER }}'
-          issue-body: "Please approve or deny the benchmark run for PR #${{ env.PR_NUMBER }}"
-          exclude-workflow-initiator-as-approver: false
       - name: Get PR Details
         id: get_pr
         uses: actions/github-script@v7
@@ -106,21 +94,33 @@ jobs:
 
           return {
             "headRepoFullName": pull_request.head.repo.full_name,
-            "headRef": pull_request.head.ref
+            "headRefSha": pull_request.head.sha
           };
      - name: Set pr details env vars
        run: |
          echo '${{ steps.get_pr.outputs.result }}' | jq -r '.headRepoFullName'
-         echo '${{ steps.get_pr.outputs.result }}' | jq -r '.headRef'
+         echo '${{ steps.get_pr.outputs.result }}' | jq -r '.headRefSha'
          headRepo=$(echo '${{ steps.get_pr.outputs.result }}' | jq -r '.headRepoFullName')
-         headRef=$(echo '${{ steps.get_pr.outputs.result }}' | jq -r '.headRef')
+         headRefSha=$(echo '${{ steps.get_pr.outputs.result }}' | jq -r '.headRefSha')
          echo "prHeadRepo=$headRepo" >> $GITHUB_ENV
-         echo "prHeadRef=$headRef" >> $GITHUB_ENV
+         echo "prHeadRefSha=$headRefSha" >> $GITHUB_ENV
+      - id: get_approvers
+        run: |
+          echo "approvers=$(cat .github/CODEOWNERS | grep '^\*' | tr -d '* ' | sed 's/@/,/g' | sed 's/,//1')" >> $GITHUB_OUTPUT
+      - uses: trstringer/manual-approval@v1
+        if: (!contains(steps.get_approvers.outputs.approvers, github.event.comment.user.login))
+        with:
+          secret: ${{ github.TOKEN }}
+          approvers: ${{ steps.get_approvers.outputs.approvers }}
+          minimum-approvals: 1
+          issue-title: 'Request to approve/deny benchmark run for PR #${{ env.PR_NUMBER }}'
+          issue-body: "Please approve or deny the benchmark run for PR #${{ env.PR_NUMBER }}"
+          exclude-workflow-initiator-as-approver: false
      - name: Checkout PR Repo
        uses: actions/checkout@v4
        with:
          repository: ${{ env.prHeadRepo }}
-         ref: ${{ env.prHeadRef }}
+         ref: ${{ env.prHeadRefSha }}
          token: ${{ secrets.GITHUB_TOKEN }}
      - name: Setup Java
        uses: actions/setup-java@v1

From 2def4fd302b71a6d3ed2ce3efc3cce6800fbdd3f Mon Sep 17 00:00:00 2001
From: Sooraj Sinha <81695996+soosinha@users.noreply.github.com>
Date: Wed, 24 Jul 2024 10:36:06 +0530
Subject: [PATCH 114/167] Create new IndexInput for multi part upload (#14888)

* Create new IndexInput for multi part upload

Signed-off-by: Sooraj Sinha
---
 .../transfer/BlobStoreTransferService.java    | 35 ++++++++--------
 .../blobstore/ChecksumBlobStoreFormat.java    | 35 ++++++++--------
 .../blobstore/ConfigBlobStoreFormat.java      | 40 +++++++++++--------
 .../BlobStoreTransferServiceTests.java        | 30 +++++++++++++-
 4 files changed, 86 insertions(+), 54 deletions(-)

diff --git a/server/src/main/java/org/opensearch/index/translog/transfer/BlobStoreTransferService.java b/server/src/main/java/org/opensearch/index/translog/transfer/BlobStoreTransferService.java
index d55abb40dec48..22bb4cf0514bf 100644
--- a/server/src/main/java/org/opensearch/index/translog/transfer/BlobStoreTransferService.java
+++ b/server/src/main/java/org/opensearch/index/translog/transfer/BlobStoreTransferService.java
@@ -131,20 +131,18 @@ public void uploadBlob(
         }
         final String resourceDescription = "BlobStoreTransferService.uploadBlob(blob=\"" + fileName + "\")";
         byte[] bytes = inputStream.readAllBytes();
-        try (IndexInput input = new ByteArrayIndexInput(resourceDescription, bytes)) {
-            long expectedChecksum = computeChecksum(input, resourceDescription);
-            uploadBlobAsyncInternal(
-                fileName,
-                fileName,
-                bytes.length,
-                blobPath,
-                writePriority,
-                (size, position) -> new OffsetRangeIndexInputStream(input, size, position),
-                expectedChecksum,
-                listener,
-                null
-            );
-        }
+        long expectedChecksum = computeChecksum(bytes, resourceDescription);
+        uploadBlobAsyncInternal(
+            fileName,
+            fileName,
+            bytes.length,
+            blobPath,
+            writePriority,
+            (size, position) -> new OffsetRangeIndexInputStream(new ByteArrayIndexInput(resourceDescription, bytes), size, position),
+            expectedChecksum,
+            listener,
+            null
+        );
     }
 
     // Builds a metadata map containing the Base64-encoded checkpoint file data associated with a translog file.
@@ -220,7 +218,8 @@ private void uploadBlob(
 
     }
 
-    private void uploadBlobAsyncInternal(
+    // package private for testing
+    void uploadBlobAsyncInternal(
         String fileName,
         String remoteFileName,
         long contentLength,
@@ -335,10 +334,10 @@ public void listAllInSortedOrderAsync(
         threadPool.executor(threadpoolName).execute(() -> { listAllInSortedOrder(path, filenamePrefix, limit, listener); });
     }
 
-    private static long computeChecksum(IndexInput indexInput, String resourceDescription) throws ChecksumCombinationException {
+    private static long computeChecksum(byte[] bytes, String resourceDescription) throws ChecksumCombinationException {
         long expectedChecksum;
-        try {
-            expectedChecksum = checksumOfChecksum(indexInput.clone(), CHECKSUM_BYTES_LENGTH);
+        try (IndexInput indexInput = new ByteArrayIndexInput(resourceDescription, bytes)) {
+            expectedChecksum = checksumOfChecksum(indexInput, CHECKSUM_BYTES_LENGTH);
         } catch (Exception e) {
             throw new ChecksumCombinationException(
                 "Potentially corrupted file: Checksum combination failed while combining stored checksum "
diff --git a/server/src/main/java/org/opensearch/repositories/blobstore/ChecksumBlobStoreFormat.java b/server/src/main/java/org/opensearch/repositories/blobstore/ChecksumBlobStoreFormat.java
index e567e1b626c5a..3a49fed4be282 100644
--- a/server/src/main/java/org/opensearch/repositories/blobstore/ChecksumBlobStoreFormat.java
+++ b/server/src/main/java/org/opensearch/repositories/blobstore/ChecksumBlobStoreFormat.java
@@ -223,10 +223,11 @@ private void writeAsyncWithPriority(
             return;
         }
         final String blobName = blobName(name);
-        final BytesReference bytes = serialize(obj, blobName, compressor, params);
+        final BytesReference bytesReference = serialize(obj, blobName, compressor, params);
         final String resourceDescription = "ChecksumBlobStoreFormat.writeAsyncWithPriority(blob=\"" + blobName + "\")";
-        try (IndexInput input = new ByteArrayIndexInput(resourceDescription, BytesReference.toBytes(bytes))) {
-            long expectedChecksum;
+        byte[] bytes = BytesReference.toBytes(bytesReference);
+        long expectedChecksum;
+        try (IndexInput input = new ByteArrayIndexInput(resourceDescription, bytes)) {
             try {
                 expectedChecksum = checksumOfChecksum(input.clone(), 8);
             } catch (Exception e) {
@@ -237,21 +238,21 @@ private void writeAsyncWithPriority(
                     e
                 );
             }
+        }
 
-            try (
-                RemoteTransferContainer remoteTransferContainer = new RemoteTransferContainer(
-                    blobName,
-                    blobName,
-                    bytes.length(),
-                    true,
-                    priority,
-                    (size, position) -> new OffsetRangeIndexInputStream(input, size, position),
-                    expectedChecksum,
-                    ((AsyncMultiStreamBlobContainer) blobContainer).remoteIntegrityCheckSupported()
-                )
-            ) {
-                ((AsyncMultiStreamBlobContainer) blobContainer).asyncBlobUpload(remoteTransferContainer.createWriteContext(), listener);
-            }
+        try (
+            RemoteTransferContainer remoteTransferContainer = new RemoteTransferContainer(
+                blobName,
+                blobName,
+                bytes.length,
+                true,
+                priority,
+                (size, position) -> new OffsetRangeIndexInputStream(new ByteArrayIndexInput(resourceDescription, bytes), size, position),
+                expectedChecksum,
+                ((AsyncMultiStreamBlobContainer) blobContainer).remoteIntegrityCheckSupported()
+            )
+        ) {
+            ((AsyncMultiStreamBlobContainer) blobContainer).asyncBlobUpload(remoteTransferContainer.createWriteContext(), listener);
         }
     }
diff --git a/server/src/main/java/org/opensearch/repositories/blobstore/ConfigBlobStoreFormat.java b/server/src/main/java/org/opensearch/repositories/blobstore/ConfigBlobStoreFormat.java
index 18c718ca2110e..8127bf8c2a2a2 100644
--- a/server/src/main/java/org/opensearch/repositories/blobstore/ConfigBlobStoreFormat.java
+++ b/server/src/main/java/org/opensearch/repositories/blobstore/ConfigBlobStoreFormat.java
@@ -8,7 +8,6 @@
 
 package org.opensearch.repositories.blobstore;
 
-import org.apache.lucene.store.IndexInput;
 import org.opensearch.common.blobstore.AsyncMultiStreamBlobContainer;
 import org.opensearch.common.blobstore.BlobContainer;
 import org.opensearch.common.blobstore.stream.write.WritePriority;
@@ -51,23 +50,30 @@ public void writeAsyncWithUrgentPriority(T obj, BlobContainer blobContainer, Str
             return;
         }
         String blobName = blobName(name);
-        BytesReference bytes = serialize(obj, blobName, new NoneCompressor(), ToXContent.EMPTY_PARAMS, XContentType.JSON, null, null);
+        BytesReference bytesReference = serialize(
+            obj,
+            blobName,
+            new NoneCompressor(),
+            ToXContent.EMPTY_PARAMS,
+            XContentType.JSON,
+            null,
+            null
+        );
         String resourceDescription = "BlobStoreFormat.writeAsyncWithPriority(blob=\"" + blobName + "\")";
-        try (IndexInput input = new ByteArrayIndexInput(resourceDescription, BytesReference.toBytes(bytes))) {
-            try (
-                RemoteTransferContainer remoteTransferContainer = new RemoteTransferContainer(
-                    blobName,
-                    blobName,
-                    bytes.length(),
-                    true,
-                    WritePriority.URGENT,
-                    (size, position) -> new OffsetRangeIndexInputStream(input, size, position),
-                    null,
-                    false
-                )
-            ) {
-                ((AsyncMultiStreamBlobContainer) blobContainer).asyncBlobUpload(remoteTransferContainer.createWriteContext(), listener);
-            }
+        byte[] bytes = BytesReference.toBytes(bytesReference);
+        try (
+            RemoteTransferContainer remoteTransferContainer = new RemoteTransferContainer(
+                blobName,
+                blobName,
+                bytes.length,
+                true,
+                WritePriority.URGENT,
+                (size, position) -> new OffsetRangeIndexInputStream(new ByteArrayIndexInput(resourceDescription, bytes), size, position),
+                null,
+                false
+            )
+        ) {
+            ((AsyncMultiStreamBlobContainer) blobContainer).asyncBlobUpload(remoteTransferContainer.createWriteContext(), listener);
         }
     }
 }
diff --git a/server/src/test/java/org/opensearch/index/translog/transfer/BlobStoreTransferServiceTests.java b/server/src/test/java/org/opensearch/index/translog/transfer/BlobStoreTransferServiceTests.java
index cd78aead80923..10e4cc6cfb1ef 100644
--- a/server/src/test/java/org/opensearch/index/translog/transfer/BlobStoreTransferServiceTests.java
+++ b/server/src/test/java/org/opensearch/index/translog/transfer/BlobStoreTransferServiceTests.java
@@ -22,6 +22,8 @@
 import org.opensearch.common.blobstore.stream.read.ReadContext;
 import org.opensearch.common.blobstore.stream.write.WriteContext;
 import org.opensearch.common.blobstore.stream.write.WritePriority;
+import org.opensearch.common.blobstore.transfer.RemoteTransferContainer;
+import org.opensearch.common.blobstore.transfer.stream.OffsetRangeInputStream;
 import org.opensearch.common.settings.ClusterSettings;
 import org.opensearch.common.settings.Settings;
 import org.opensearch.core.action.ActionListener;
@@ -54,9 +56,13 @@
 import java.util.concurrent.TimeUnit;
 import java.util.concurrent.atomic.AtomicBoolean;
 
+import org.mockito.ArgumentCaptor;
+import org.mockito.Mockito;
+
 import static org.opensearch.index.translog.transfer.TranslogTransferManager.CHECKPOINT_FILE_DATA_KEY;
 import static org.mockito.ArgumentMatchers.any;
 import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.verify;
 import static org.mockito.Mockito.when;
 
 public class BlobStoreTransferServiceTests extends OpenSearchTestCase {
@@ -139,8 +145,28 @@ public void testUploadBlobFromInputStreamAsyncFSRepo() throws IOException, Inter
         FsBlobStore fsBlobStore = mock(FsBlobStore.class);
         when(fsBlobStore.blobContainer(any())).thenReturn(mockAsyncFsContainer);
 
-        TransferService transferService = new BlobStoreTransferService(fsBlobStore, threadPool);
-        uploadBlobFromInputStream(transferService);
+        BlobStoreTransferService transferServiceSpy = Mockito.spy(new BlobStoreTransferService(fsBlobStore, threadPool));
+        uploadBlobFromInputStream(transferServiceSpy);
+
+        ArgumentCaptor<RemoteTransferContainer.OffsetRangeInputStreamSupplier> inputStreamCaptor = ArgumentCaptor.forClass(
+            RemoteTransferContainer.OffsetRangeInputStreamSupplier.class
+        );
+        verify(transferServiceSpy).uploadBlobAsyncInternal(
+            Mockito.anyString(),
+            Mockito.anyString(),
+            Mockito.anyLong(),
+            Mockito.any(),
+            Mockito.any(),
+            inputStreamCaptor.capture(),
+            Mockito.anyLong(),
+            Mockito.any(),
+            Mockito.any()
+        );
+        RemoteTransferContainer.OffsetRangeInputStreamSupplier inputStreamSupplier = inputStreamCaptor.getValue();
+        OffsetRangeInputStream inputStream1 = inputStreamSupplier.get(1, 0);
+        OffsetRangeInputStream inputStream2 = inputStreamSupplier.get(1, 2);
+        assertNotEquals(inputStream1, inputStream2);
+        assertNotEquals(inputStream1.getFilePointer(), inputStream2.getFilePointer());
     }
 
     private IndexMetadata getIndexMetadata() {

From 7673a7733ccecc8730e8a3ecff898b72dc3deaa6 Mon Sep 17 00:00:00 2001
From: Pranshu Shukla <55992439+Pranshu-S@users.noreply.github.com>
Date: Wed, 24 Jul 2024 10:54:22 +0530
Subject: [PATCH 115/167] Updating Cluster Stats Optimisation Versions to 2.16 (#14914)

* Updating Cluster Stats Optimisation Versions to 2.16

Signed-off-by: Pranshu Shukla
---
 .../action/admin/cluster/stats/ClusterStatsNodeResponse.java | 4 ++--
 .../action/admin/cluster/stats/ClusterStatsRequest.java      | 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/server/src/main/java/org/opensearch/action/admin/cluster/stats/ClusterStatsNodeResponse.java b/server/src/main/java/org/opensearch/action/admin/cluster/stats/ClusterStatsNodeResponse.java
index 133cf68f5f8c9..6ed3ca7c409e7 100644
--- a/server/src/main/java/org/opensearch/action/admin/cluster/stats/ClusterStatsNodeResponse.java
+++ b/server/src/main/java/org/opensearch/action/admin/cluster/stats/ClusterStatsNodeResponse.java
@@ -77,7 +77,7 @@ public ClusterStatsNodeResponse(StreamInput in) throws IOException {
         }
         this.nodeInfo = new NodeInfo(in);
         this.nodeStats = new NodeStats(in);
-        if (in.getVersion().onOrAfter(Version.V_3_0_0)) {
+        if (in.getVersion().onOrAfter(Version.V_2_16_0)) {
             this.shardsStats = in.readOptionalArray(ShardStats::new, ShardStats[]::new);
             this.aggregatedNodeLevelStats = in.readOptionalWriteable(AggregatedNodeLevelStats::new);
         } else {
@@ -156,7 +156,7 @@ public void writeTo(StreamOutput out) throws IOException {
         }
         nodeInfo.writeTo(out);
         nodeStats.writeTo(out);
-        if (out.getVersion().onOrAfter(Version.V_3_0_0)) {
+        if (out.getVersion().onOrAfter(Version.V_2_16_0)) {
             if (aggregatedNodeLevelStats != null) {
                 out.writeOptionalArray(null);
                 out.writeOptionalWriteable(aggregatedNodeLevelStats);
diff --git a/server/src/main/java/org/opensearch/action/admin/cluster/stats/ClusterStatsRequest.java b/server/src/main/java/org/opensearch/action/admin/cluster/stats/ClusterStatsRequest.java
index fdeb82a3466f2..bd75b2210e474 100644
--- a/server/src/main/java/org/opensearch/action/admin/cluster/stats/ClusterStatsRequest.java
+++ b/server/src/main/java/org/opensearch/action/admin/cluster/stats/ClusterStatsRequest.java
@@ -50,7 +50,7 @@ public class ClusterStatsRequest extends BaseNodesRequest<ClusterStatsRequest> {
 
     public ClusterStatsRequest(StreamInput in) throws IOException {
         super(in);
-        if (in.getVersion().onOrAfter(Version.V_3_0_0)) {
+        if (in.getVersion().onOrAfter(Version.V_2_16_0)) {
             useAggregatedNodeLevelResponses = in.readOptionalBoolean();
         }
     }
@@ -76,7 +76,7 @@ public void useAggregatedNodeLevelResponses(boolean useAggregatedNodeLevelRespon
 
     @Override
     public void writeTo(StreamOutput out) throws IOException {
         super.writeTo(out);
-        if (out.getVersion().onOrAfter(Version.V_3_0_0)) {
+        if (out.getVersion().onOrAfter(Version.V_2_16_0)) {
             out.writeOptionalBoolean(useAggregatedNodeLevelResponses);
         }
     }

From 5744eae80dfe466397f4254acf995794855db370 Mon Sep 17 00:00:00 2001
From: shailendra0811 <167273922+shailendra0811@users.noreply.github.com>
Date: Wed, 24 Jul 2024 14:59:30 +0530
Subject: [PATCH 116/167] Fix read/write method for Diff Manifest in case Shard diff file is null. (#14938)

Signed-off-by: Shailendra Singh
---
 .../gateway/remote/ClusterStateDiffManifest.java          | 8 ++++----
 .../opensearch/gateway/remote/RemotePersistenceStats.java | 4 ++--
 .../remote/RemoteClusterStateCleanupManagerTests.java     | 2 +-
 3 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/server/src/main/java/org/opensearch/gateway/remote/ClusterStateDiffManifest.java b/server/src/main/java/org/opensearch/gateway/remote/ClusterStateDiffManifest.java
index ab7fa1fddf4bf..a3b36ddcff1a7 100644
--- a/server/src/main/java/org/opensearch/gateway/remote/ClusterStateDiffManifest.java
+++ b/server/src/main/java/org/opensearch/gateway/remote/ClusterStateDiffManifest.java
@@ -129,7 +129,6 @@ public ClusterStateDiffManifest(
         clusterStateCustomUpdated = new ArrayList<>(clusterStateCustomDiff.getDiffs().keySet());
         clusterStateCustomUpdated.addAll(clusterStateCustomDiff.getUpserts().keySet());
         clusterStateCustomDeleted = clusterStateCustomDiff.getDeletes();
-        List<String> indicie1s = indicesRoutingUpdated;
     }
 
     public ClusterStateDiffManifest(
@@ -190,7 +189,7 @@ public ClusterStateDiffManifest(StreamInput in) throws IOException {
         this.hashesOfConsistentSettingsUpdated = in.readBoolean();
         this.clusterStateCustomUpdated = in.readStringList();
         this.clusterStateCustomDeleted = in.readStringList();
-        this.indicesRoutingDiffPath = in.readString();
+        this.indicesRoutingDiffPath = in.readOptionalString();
     }
 
     @Override
@@ -535,7 +534,8 @@ public int hashCode() {
             indicesRoutingDeleted,
             hashesOfConsistentSettingsUpdated,
             clusterStateCustomUpdated,
-            clusterStateCustomDeleted
+            clusterStateCustomDeleted,
+            indicesRoutingDiffPath
         );
     }
 
@@ -562,7 +562,7 @@ public void writeTo(StreamOutput out) throws IOException {
         out.writeBoolean(hashesOfConsistentSettingsUpdated);
         out.writeStringCollection(clusterStateCustomUpdated);
         out.writeStringCollection(clusterStateCustomDeleted);
-        out.writeString(indicesRoutingDiffPath);
+        out.writeOptionalString(indicesRoutingDiffPath);
     }
 
     /**
diff --git a/server/src/main/java/org/opensearch/gateway/remote/RemotePersistenceStats.java b/server/src/main/java/org/opensearch/gateway/remote/RemotePersistenceStats.java
index efd73e11e46b5..1e7f8f278fb0f 100644
--- a/server/src/main/java/org/opensearch/gateway/remote/RemotePersistenceStats.java
+++ b/server/src/main/java/org/opensearch/gateway/remote/RemotePersistenceStats.java
@@ -51,10 +51,10 @@ public long getIndexRoutingFilesCleanupAttemptFailedCount() {
     }
 
     public void indicesRoutingDiffFileCleanupAttemptFailed() {
-        indexRoutingFilesCleanupAttemptFailedCount.incrementAndGet();
+        indicesRoutingDiffFilesCleanupAttemptFailedCount.incrementAndGet();
     }
 
     public long getIndicesRoutingDiffFileCleanupAttemptFailedCount() {
-        return indexRoutingFilesCleanupAttemptFailedCount.get();
+        return indicesRoutingDiffFilesCleanupAttemptFailedCount.get();
     }
 }
diff --git a/server/src/test/java/org/opensearch/gateway/remote/RemoteClusterStateCleanupManagerTests.java b/server/src/test/java/org/opensearch/gateway/remote/RemoteClusterStateCleanupManagerTests.java
index b86f23f3d37aa..920a48f02b99a 100644
--- a/server/src/test/java/org/opensearch/gateway/remote/RemoteClusterStateCleanupManagerTests.java
+++ b/server/src/test/java/org/opensearch/gateway/remote/RemoteClusterStateCleanupManagerTests.java
@@ -652,7 +652,7 @@ public void testIndicesRoutingDiffFilesCleanupFailureStats() throws Exception {
             assertEquals(0, remoteClusterStateCleanupManager.getStats().getIndicesRoutingDiffFileCleanupAttemptFailedCount());
         });
 
-        doThrow(IOException.class).when(remoteRoutingTableService).deleteStaleIndexRoutingPaths(any());
+        doThrow(IOException.class).when(remoteRoutingTableService).deleteStaleIndexRoutingDiffPaths(any());
         remoteClusterStateCleanupManager.deleteClusterMetadata(clusterName, clusterUUID, activeBlobs, inactiveBlobs);
         assertBusy(() -> {
             // wait for stats to get updated

From 2a14c2772cc53bf2941e80c911307eaaacca055d Mon Sep 17 00:00:00 2001
From: Bukhtawar Khan
Date: Wed, 24 Jul 2024 17:23:55 +0530
Subject: [PATCH 117/167] Make reroute iteration time-bound for large shard
 allocations (#14848)

* Make reroute iteration time-bound for large shard allocations

Signed-off-by: Bukhtawar Khan
Co-authored-by: Rishab Nahata
---
 CHANGELOG.md                                  |   1 +
 .../gateway/RecoveryFromGatewayIT.java        | 128 +++++++++++++++++-
 .../routing/allocation/AllocationService.java |   5 +-
 .../allocation/ExistingShardsAllocator.java   |   7 +-
 .../common/settings/ClusterSettings.java      |   2 +
 .../common/util/BatchRunnableExecutor.java    |  66 +++++++++
 .../util/concurrent/TimeoutAwareRunnable.java |  19 +++
 .../gateway/BaseGatewayShardAllocator.java    |  21 +++
 .../gateway/ShardsBatchGatewayAllocator.java  |  86 ++++++++++--
 .../ExistingShardsAllocatorTests.java         | 118 ++++++++++++++++
 .../util/BatchRunnableExecutorTests.java      |  97 +++++++++++++
 .../gateway/GatewayAllocatorTests.java        |  32 +++++
 .../PrimaryShardBatchAllocatorTests.java      |  47 +++++++
 .../ReplicaShardBatchAllocatorTests.java      |  27 ++++
 .../TestShardBatchGatewayAllocator.java       |   5 +-
 15 files changed, 645 insertions(+), 16 deletions(-)
 create mode 100644 server/src/main/java/org/opensearch/common/util/BatchRunnableExecutor.java
 create mode 100644 server/src/main/java/org/opensearch/common/util/concurrent/TimeoutAwareRunnable.java
 create mode 100644 server/src/test/java/org/opensearch/cluster/routing/allocation/ExistingShardsAllocatorTests.java
 create mode 100644 server/src/test/java/org/opensearch/common/util/BatchRunnableExecutorTests.java

diff --git a/CHANGELOG.md b/CHANGELOG.md
index 6aa3d7a58dda4..edc0ca2732f25 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -62,6 +62,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
 - Allow @InternalApi annotation on classes not meant to be constructed outside of the OpenSearch core ([#14575](https://github.com/opensearch-project/OpenSearch/pull/14575))
 - Add @InternalApi annotation to japicmp exclusions ([#14597](https://github.com/opensearch-project/OpenSearch/pull/14597))
 - Allow system index warning in OpenSearchRestTestCase.refreshAllIndices ([#14635](https://github.com/opensearch-project/OpenSearch/pull/14635))
+- Make reroute iteration time-bound for large shard allocations ([#14848](https://github.com/opensearch-project/OpenSearch/pull/14848))
 
 ### Deprecated
 - Deprecate batch_size parameter on bulk API ([#14725](https://github.com/opensearch-project/OpenSearch/pull/14725))
diff --git a/server/src/internalClusterTest/java/org/opensearch/gateway/RecoveryFromGatewayIT.java b/server/src/internalClusterTest/java/org/opensearch/gateway/RecoveryFromGatewayIT.java
index 6296608c64d37..4085cc3890f30 100644
--- a/server/src/internalClusterTest/java/org/opensearch/gateway/RecoveryFromGatewayIT.java
+++ b/server/src/internalClusterTest/java/org/opensearch/gateway/RecoveryFromGatewayIT.java
@@ -769,7 +769,7 @@ public void testMessyElectionsStillMakeClusterGoGreen() throws Exception {
         ensureGreen("test");
     }
 
-    public void testBatchModeEnabled() throws Exception {
+    public void testBatchModeEnabledWithoutTimeout() throws Exception {
         internalCluster().startClusterManagerOnlyNodes(
             1,
             Settings.builder().put(ExistingShardsAllocator.EXISTING_SHARDS_ALLOCATOR_BATCH_MODE.getKey(), true).build()
@@ -810,6 +810,132 @@ public void testBatchModeEnabled() throws Exception {
         assertEquals(0, gatewayAllocator.getNumberOfInFlightFetches());
     }
 
+    public void testBatchModeEnabledWithSufficientTimeoutAndClusterGreen() throws Exception {
+        internalCluster().startClusterManagerOnlyNodes(
+            1,
+            Settings.builder()
+                .put(ExistingShardsAllocator.EXISTING_SHARDS_ALLOCATOR_BATCH_MODE.getKey(), true)
+                .put(ShardsBatchGatewayAllocator.PRIMARY_BATCH_ALLOCATOR_TIMEOUT_SETTING.getKey(), "20s")
+                .put(ShardsBatchGatewayAllocator.REPLICA_BATCH_ALLOCATOR_TIMEOUT_SETTING.getKey(), "20s")
+                .build()
+        );
+        List<String> dataOnlyNodes = internalCluster().startDataOnlyNodes(2);
+        createIndex(
+            "test",
+            Settings.builder().put(IndexMetadata.SETTING_NUMBER_OF_SHARDS, 1).put(IndexMetadata.SETTING_NUMBER_OF_REPLICAS, 1).build()
+        );
+        ensureGreen("test");
+        Settings node0DataPathSettings = internalCluster().dataPathSettings(dataOnlyNodes.get(0));
+        Settings node1DataPathSettings = internalCluster().dataPathSettings(dataOnlyNodes.get(1));
+        internalCluster().stopRandomNode(InternalTestCluster.nameFilter(dataOnlyNodes.get(0)));
+        internalCluster().stopRandomNode(InternalTestCluster.nameFilter(dataOnlyNodes.get(1)));
+        ensureRed("test");
+        ensureStableCluster(1);
+
+        logger.info("--> Now do a protective reroute");
+        ClusterRerouteResponse clusterRerouteResponse = client().admin().cluster().prepareReroute().setRetryFailed(true).get();
+        assertTrue(clusterRerouteResponse.isAcknowledged());
+
+        ShardsBatchGatewayAllocator gatewayAllocator = internalCluster().getInstance(
+            ShardsBatchGatewayAllocator.class,
+            internalCluster().getClusterManagerName()
+        );
+        assertTrue(ExistingShardsAllocator.EXISTING_SHARDS_ALLOCATOR_BATCH_MODE.get(internalCluster().clusterService().getSettings()));
+        assertEquals(1, gatewayAllocator.getNumberOfStartedShardBatches());
+        assertEquals(1, gatewayAllocator.getNumberOfStoreShardBatches());
+
+        // Now start both data nodes and ensure batch mode is working
+        logger.info("--> restarting the stopped nodes");
+        internalCluster().startDataOnlyNode(Settings.builder().put("node.name", dataOnlyNodes.get(0)).put(node0DataPathSettings).build());
+        internalCluster().startDataOnlyNode(Settings.builder().put("node.name", dataOnlyNodes.get(1)).put(node1DataPathSettings).build());
+        ensureStableCluster(3);
+        ensureGreen("test");
+        assertEquals(0, gatewayAllocator.getNumberOfStartedShardBatches());
+        assertEquals(0, gatewayAllocator.getNumberOfStoreShardBatches());
+        assertEquals(0, 
gatewayAllocator.getNumberOfInFlightFetches()); + } + + public void testBatchModeEnabledWithInSufficientTimeoutButClusterGreen() throws Exception { + + internalCluster().startClusterManagerOnlyNodes( + 1, + Settings.builder().put(ExistingShardsAllocator.EXISTING_SHARDS_ALLOCATOR_BATCH_MODE.getKey(), true).build() + ); + List dataOnlyNodes = internalCluster().startDataOnlyNodes(2); + createNIndices(50, "test"); // this will create 50p, 50r shards + ensureStableCluster(3); + IndicesStatsResponse indicesStats = dataNodeClient().admin().indices().prepareStats().get(); + assertThat(indicesStats.getSuccessfulShards(), equalTo(100)); + ClusterHealthResponse health = client().admin() + .cluster() + .health(Requests.clusterHealthRequest().waitForGreenStatus().timeout("1m")) + .actionGet(); + assertFalse(health.isTimedOut()); + assertEquals(GREEN, health.getStatus()); + + String clusterManagerName = internalCluster().getClusterManagerName(); + Settings clusterManagerDataPathSettings = internalCluster().dataPathSettings(clusterManagerName); + Settings node0DataPathSettings = internalCluster().dataPathSettings(dataOnlyNodes.get(0)); + Settings node1DataPathSettings = internalCluster().dataPathSettings(dataOnlyNodes.get(1)); + + internalCluster().stopCurrentClusterManagerNode(); + internalCluster().stopRandomNode(InternalTestCluster.nameFilter(dataOnlyNodes.get(0))); + internalCluster().stopRandomNode(InternalTestCluster.nameFilter(dataOnlyNodes.get(1))); + + // Now start cluster manager node and post that verify batches created + internalCluster().startClusterManagerOnlyNodes( + 1, + Settings.builder() + .put("node.name", clusterManagerName) + .put(clusterManagerDataPathSettings) + .put(ShardsBatchGatewayAllocator.GATEWAY_ALLOCATOR_BATCH_SIZE.getKey(), 5) + .put(ShardsBatchGatewayAllocator.PRIMARY_BATCH_ALLOCATOR_TIMEOUT_SETTING.getKey(), "10ms") + .put(ShardsBatchGatewayAllocator.REPLICA_BATCH_ALLOCATOR_TIMEOUT_SETTING.getKey(), "10ms") + .put(ExistingShardsAllocator.EXISTING_SHARDS_ALLOCATOR_BATCH_MODE.getKey(), true) + .build() + ); + ensureStableCluster(1); + + logger.info("--> Now do a protective reroute"); // to avoid any race condition in test + ClusterRerouteResponse clusterRerouteResponse = client().admin().cluster().prepareReroute().setRetryFailed(true).get(); + assertTrue(clusterRerouteResponse.isAcknowledged()); + + ShardsBatchGatewayAllocator gatewayAllocator = internalCluster().getInstance( + ShardsBatchGatewayAllocator.class, + internalCluster().getClusterManagerName() + ); + + assertTrue(ExistingShardsAllocator.EXISTING_SHARDS_ALLOCATOR_BATCH_MODE.get(internalCluster().clusterService().getSettings())); + assertEquals(10, gatewayAllocator.getNumberOfStartedShardBatches()); + assertEquals(10, gatewayAllocator.getNumberOfStoreShardBatches()); + health = client(internalCluster().getClusterManagerName()).admin().cluster().health(Requests.clusterHealthRequest()).actionGet(); + assertFalse(health.isTimedOut()); + assertEquals(RED, health.getStatus()); + assertEquals(100, health.getUnassignedShards()); + assertEquals(0, health.getInitializingShards()); + assertEquals(0, health.getActiveShards()); + assertEquals(0, health.getRelocatingShards()); + assertEquals(0, health.getNumberOfDataNodes()); + + // Now start both data nodes and ensure batch mode is working + logger.info("--> restarting the stopped nodes"); + internalCluster().startDataOnlyNode(Settings.builder().put("node.name", dataOnlyNodes.get(0)).put(node0DataPathSettings).build()); + 
internalCluster().startDataOnlyNode(Settings.builder().put("node.name", dataOnlyNodes.get(1)).put(node1DataPathSettings).build()); + ensureStableCluster(3); + + // wait for cluster to turn green + health = client().admin().cluster().health(Requests.clusterHealthRequest().waitForGreenStatus().timeout("5m")).actionGet(); + assertFalse(health.isTimedOut()); + assertEquals(GREEN, health.getStatus()); + assertEquals(0, health.getUnassignedShards()); + assertEquals(0, health.getInitializingShards()); + assertEquals(100, health.getActiveShards()); + assertEquals(0, health.getRelocatingShards()); + assertEquals(2, health.getNumberOfDataNodes()); + assertEquals(0, gatewayAllocator.getNumberOfStartedShardBatches()); + assertEquals(0, gatewayAllocator.getNumberOfStoreShardBatches()); + } + public void testBatchModeDisabled() throws Exception { internalCluster().startClusterManagerOnlyNodes( 1, diff --git a/server/src/main/java/org/opensearch/cluster/routing/allocation/AllocationService.java b/server/src/main/java/org/opensearch/cluster/routing/allocation/AllocationService.java index 5ad3a2fd47ce3..e29a81a2c131f 100644 --- a/server/src/main/java/org/opensearch/cluster/routing/allocation/AllocationService.java +++ b/server/src/main/java/org/opensearch/cluster/routing/allocation/AllocationService.java @@ -72,6 +72,7 @@ import java.util.Iterator; import java.util.List; import java.util.Map; +import java.util.Optional; import java.util.Set; import java.util.function.Function; import java.util.stream.Collectors; @@ -617,10 +618,10 @@ private void allocateExistingUnassignedShards(RoutingAllocation allocation) { private void allocateAllUnassignedShards(RoutingAllocation allocation) { ExistingShardsAllocator allocator = existingShardsAllocators.get(ShardsBatchGatewayAllocator.ALLOCATOR_NAME); - allocator.allocateAllUnassignedShards(allocation, true); + Optional.ofNullable(allocator.allocateAllUnassignedShards(allocation, true)).ifPresent(Runnable::run); allocator.afterPrimariesBeforeReplicas(allocation); // Replicas Assignment - allocator.allocateAllUnassignedShards(allocation, false); + Optional.ofNullable(allocator.allocateAllUnassignedShards(allocation, false)).ifPresent(Runnable::run); } private void disassociateDeadNodes(RoutingAllocation allocation) { diff --git a/server/src/main/java/org/opensearch/cluster/routing/allocation/ExistingShardsAllocator.java b/server/src/main/java/org/opensearch/cluster/routing/allocation/ExistingShardsAllocator.java index fb2a37237f8b6..eb7a1e7209c37 100644 --- a/server/src/main/java/org/opensearch/cluster/routing/allocation/ExistingShardsAllocator.java +++ b/server/src/main/java/org/opensearch/cluster/routing/allocation/ExistingShardsAllocator.java @@ -41,6 +41,7 @@ import org.opensearch.gateway.GatewayAllocator; import org.opensearch.gateway.ShardsBatchGatewayAllocator; +import java.util.ArrayList; import java.util.List; /** @@ -108,14 +109,16 @@ void allocateUnassigned( * * Allocation service will currently run the default implementation of it implemented by {@link ShardsBatchGatewayAllocator} */ - default void allocateAllUnassignedShards(RoutingAllocation allocation, boolean primary) { + default Runnable allocateAllUnassignedShards(RoutingAllocation allocation, boolean primary) { RoutingNodes.UnassignedShards.UnassignedIterator iterator = allocation.routingNodes().unassigned().iterator(); + List runnables = new ArrayList<>(); while (iterator.hasNext()) { ShardRouting shardRouting = iterator.next(); if (shardRouting.primary() == primary) { - 
allocateUnassigned(shardRouting, allocation, iterator); + runnables.add(() -> allocateUnassigned(shardRouting, allocation, iterator)); } } + return () -> runnables.forEach(Runnable::run); } /** diff --git a/server/src/main/java/org/opensearch/common/settings/ClusterSettings.java b/server/src/main/java/org/opensearch/common/settings/ClusterSettings.java index 49801fd3834b8..2f60c731bc554 100644 --- a/server/src/main/java/org/opensearch/common/settings/ClusterSettings.java +++ b/server/src/main/java/org/opensearch/common/settings/ClusterSettings.java @@ -343,6 +343,8 @@ public void apply(Settings value, Settings current, Settings previous) { GatewayService.RECOVER_AFTER_NODES_SETTING, GatewayService.RECOVER_AFTER_TIME_SETTING, ShardsBatchGatewayAllocator.GATEWAY_ALLOCATOR_BATCH_SIZE, + ShardsBatchGatewayAllocator.PRIMARY_BATCH_ALLOCATOR_TIMEOUT_SETTING, + ShardsBatchGatewayAllocator.REPLICA_BATCH_ALLOCATOR_TIMEOUT_SETTING, PersistedClusterStateService.SLOW_WRITE_LOGGING_THRESHOLD, NetworkModule.HTTP_DEFAULT_TYPE_SETTING, NetworkModule.TRANSPORT_DEFAULT_TYPE_SETTING, diff --git a/server/src/main/java/org/opensearch/common/util/BatchRunnableExecutor.java b/server/src/main/java/org/opensearch/common/util/BatchRunnableExecutor.java new file mode 100644 index 0000000000000..d3d3304cb909a --- /dev/null +++ b/server/src/main/java/org/opensearch/common/util/BatchRunnableExecutor.java @@ -0,0 +1,66 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +package org.opensearch.common.util; + +import org.apache.logging.log4j.LogManager; +import org.apache.logging.log4j.Logger; +import org.opensearch.common.Randomness; +import org.opensearch.common.unit.TimeValue; +import org.opensearch.common.util.concurrent.TimeoutAwareRunnable; + +import java.util.List; +import java.util.concurrent.TimeUnit; +import java.util.function.Supplier; + +/** + * A {@link Runnable} that iteratively executes a batch of {@link TimeoutAwareRunnable}s. If the elapsed time exceeds the timeout defined by {@link TimeValue} timeout, then all subsequent {@link TimeoutAwareRunnable}s will have their {@link TimeoutAwareRunnable#onTimeout} method invoked and will not be run. 
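Aside: the BatchRunnableExecutor added by this patch reduces to a simple time-budgeted loop, which is worth seeing in isolation. A minimal, self-contained sketch of the same pattern, with hypothetical Task and runBatch names rather than the patch's real classes:

```java
import java.util.List;

// Stand-in for TimeoutAwareRunnable: a task that can be told it was skipped.
interface Task extends Runnable {
    void onTimeout(); // invoked instead of run() once the budget is exhausted
}

final class TimeBoundedBatch {
    // A negative budget means "no timeout", mirroring TimeValue.MINUS_ONE above.
    static void runBatch(List<Task> tasks, long budgetNanos) {
        long start = System.nanoTime();
        for (Task task : tasks) {
            if (budgetNanos < 0 || System.nanoTime() - start < budgetNanos) {
                task.run();
            } else {
                task.onTimeout(); // skipped tasks still receive a completion signal
            }
        }
    }
}
```

Shuffling the list before the loop, as the patch does with Randomness.shuffle, keeps the same batches from being starved on every reroute when the budget cannot cover all of them.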
+ * + * @opensearch.internal + */ +public class BatchRunnableExecutor implements Runnable { + + private final Supplier timeoutSupplier; + + private final List timeoutAwareRunnables; + + private static final Logger logger = LogManager.getLogger(BatchRunnableExecutor.class); + + public BatchRunnableExecutor(List timeoutAwareRunnables, Supplier timeoutSupplier) { + this.timeoutSupplier = timeoutSupplier; + this.timeoutAwareRunnables = timeoutAwareRunnables; + } + + // for tests + public List getTimeoutAwareRunnables() { + return this.timeoutAwareRunnables; + } + + @Override + public void run() { + logger.debug("Starting execution of runnable of size [{}]", timeoutAwareRunnables.size()); + long startTime = System.nanoTime(); + if (timeoutAwareRunnables.isEmpty()) { + return; + } + Randomness.shuffle(timeoutAwareRunnables); + for (TimeoutAwareRunnable runnable : timeoutAwareRunnables) { + if (timeoutSupplier.get().nanos() < 0 || System.nanoTime() - startTime < timeoutSupplier.get().nanos()) { + runnable.run(); + } else { + logger.debug("Executing timeout for runnable of size [{}]", timeoutAwareRunnables.size()); + runnable.onTimeout(); + } + } + logger.debug( + "Time taken to execute timed runnables in this cycle:[{}ms]", + TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - startTime) + ); + } + +} diff --git a/server/src/main/java/org/opensearch/common/util/concurrent/TimeoutAwareRunnable.java b/server/src/main/java/org/opensearch/common/util/concurrent/TimeoutAwareRunnable.java new file mode 100644 index 0000000000000..8d3357ad93095 --- /dev/null +++ b/server/src/main/java/org/opensearch/common/util/concurrent/TimeoutAwareRunnable.java @@ -0,0 +1,19 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. 
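Aside: the executor reads its budget through a Supplier<TimeValue> rather than holding a fixed value, so each run() observes the latest dynamically updated cluster setting without the executor being rebuilt. A hedged sketch of that wiring; the setting key below is illustrative, not the one this patch registers:

```java
import org.opensearch.common.settings.Setting;
import org.opensearch.common.unit.TimeValue;

import java.util.function.Supplier;

class TimeoutHolder {
    // Hypothetical key, for illustration only.
    static final Setting<TimeValue> EXAMPLE_TIMEOUT_SETTING = Setting.timeSetting(
        "example.allocator_timeout",
        TimeValue.MINUS_ONE, // -1 disables the timeout entirely
        Setting.Property.NodeScope,
        Setting.Property.Dynamic
    );

    private volatile TimeValue timeout = TimeValue.MINUS_ONE;

    // In the patch this setter is registered via ClusterSettings#addSettingsUpdateConsumer.
    void setTimeout(TimeValue timeout) {
        this.timeout = timeout;
    }

    Supplier<TimeValue> timeoutSupplier() {
        return () -> timeout; // the executor always sees the most recent value
    }
}
```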
+ */ + +package org.opensearch.common.util.concurrent; + +/** + * Runnable that is aware of a timeout + * + * @opensearch.internal + */ +public interface TimeoutAwareRunnable extends Runnable { + + void onTimeout(); +} diff --git a/server/src/main/java/org/opensearch/gateway/BaseGatewayShardAllocator.java b/server/src/main/java/org/opensearch/gateway/BaseGatewayShardAllocator.java index 58982e869794f..0d6af943d39e0 100644 --- a/server/src/main/java/org/opensearch/gateway/BaseGatewayShardAllocator.java +++ b/server/src/main/java/org/opensearch/gateway/BaseGatewayShardAllocator.java @@ -36,6 +36,7 @@ import org.apache.logging.log4j.Logger; import org.opensearch.cluster.routing.RecoverySource; import org.opensearch.cluster.routing.RoutingNode; +import org.opensearch.cluster.routing.RoutingNodes; import org.opensearch.cluster.routing.ShardRouting; import org.opensearch.cluster.routing.allocation.AllocateUnassignedDecision; import org.opensearch.cluster.routing.allocation.AllocationDecision; @@ -43,9 +44,12 @@ import org.opensearch.cluster.routing.allocation.NodeAllocationResult; import org.opensearch.cluster.routing.allocation.RoutingAllocation; import org.opensearch.cluster.routing.allocation.decider.Decision; +import org.opensearch.core.index.shard.ShardId; import java.util.ArrayList; +import java.util.HashSet; import java.util.List; +import java.util.Set; /** * An abstract class that implements basic functionality for allocating @@ -78,6 +82,23 @@ public void allocateUnassigned( executeDecision(shardRouting, allocateUnassignedDecision, allocation, unassignedAllocationHandler); } + protected void allocateUnassignedBatchOnTimeout(List shardRoutings, RoutingAllocation allocation, boolean primary) { + Set shardIdsFromBatch = new HashSet<>(); + for (ShardRouting shardRouting : shardRoutings) { + ShardId shardId = shardRouting.shardId(); + shardIdsFromBatch.add(shardId); + } + RoutingNodes.UnassignedShards.UnassignedIterator iterator = allocation.routingNodes().unassigned().iterator(); + while (iterator.hasNext()) { + ShardRouting unassignedShard = iterator.next(); + AllocateUnassignedDecision allocationDecision; + if (unassignedShard.primary() == primary && shardIdsFromBatch.contains(unassignedShard.shardId())) { + allocationDecision = AllocateUnassignedDecision.throttle(null); + executeDecision(unassignedShard, allocationDecision, allocation, iterator); + } + } + } + protected void executeDecision( ShardRouting shardRouting, AllocateUnassignedDecision allocateUnassignedDecision, diff --git a/server/src/main/java/org/opensearch/gateway/ShardsBatchGatewayAllocator.java b/server/src/main/java/org/opensearch/gateway/ShardsBatchGatewayAllocator.java index 3c0797cd450d2..55f5388d8f454 100644 --- a/server/src/main/java/org/opensearch/gateway/ShardsBatchGatewayAllocator.java +++ b/server/src/main/java/org/opensearch/gateway/ShardsBatchGatewayAllocator.java @@ -27,9 +27,13 @@ import org.opensearch.common.UUIDs; import org.opensearch.common.inject.Inject; import org.opensearch.common.lease.Releasables; +import org.opensearch.common.settings.ClusterSettings; import org.opensearch.common.settings.Setting; import org.opensearch.common.settings.Settings; +import org.opensearch.common.unit.TimeValue; +import org.opensearch.common.util.BatchRunnableExecutor; import org.opensearch.common.util.concurrent.ConcurrentCollections; +import org.opensearch.common.util.concurrent.TimeoutAwareRunnable; import org.opensearch.common.util.set.Sets; import org.opensearch.core.action.ActionListener; import 
org.opensearch.core.index.shard.ShardId; @@ -41,6 +45,7 @@ import org.opensearch.indices.store.TransportNodesListShardStoreMetadataHelper; import org.opensearch.indices.store.TransportNodesListShardStoreMetadataHelper.StoreFilesMetadata; +import java.util.ArrayList; import java.util.Collections; import java.util.HashMap; import java.util.HashSet; @@ -68,6 +73,14 @@ public class ShardsBatchGatewayAllocator implements ExistingShardsAllocator { private final long maxBatchSize; private static final short DEFAULT_SHARD_BATCH_SIZE = 2000; + private static final String PRIMARY_BATCH_ALLOCATOR_TIMEOUT_SETTING_KEY = + "cluster.routing.allocation.shards_batch_gateway_allocator.primary_allocator_timeout"; + private static final String REPLICA_BATCH_ALLOCATOR_TIMEOUT_SETTING_KEY = + "cluster.routing.allocation.shards_batch_gateway_allocator.replica_allocator_timeout"; + + private TimeValue primaryShardsBatchGatewayAllocatorTimeout; + private TimeValue replicaShardsBatchGatewayAllocatorTimeout; + /** * Number of shards we send in one batch to data nodes for fetching metadata */ @@ -79,6 +92,20 @@ public class ShardsBatchGatewayAllocator implements ExistingShardsAllocator { Setting.Property.NodeScope ); + public static final Setting PRIMARY_BATCH_ALLOCATOR_TIMEOUT_SETTING = Setting.timeSetting( + PRIMARY_BATCH_ALLOCATOR_TIMEOUT_SETTING_KEY, + TimeValue.MINUS_ONE, + Setting.Property.NodeScope, + Setting.Property.Dynamic + ); + + public static final Setting REPLICA_BATCH_ALLOCATOR_TIMEOUT_SETTING = Setting.timeSetting( + REPLICA_BATCH_ALLOCATOR_TIMEOUT_SETTING_KEY, + TimeValue.MINUS_ONE, + Setting.Property.NodeScope, + Setting.Property.Dynamic + ); + private final RerouteService rerouteService; private final PrimaryShardBatchAllocator primaryShardBatchAllocator; private final ReplicaShardBatchAllocator replicaShardBatchAllocator; @@ -97,7 +124,8 @@ public ShardsBatchGatewayAllocator( RerouteService rerouteService, TransportNodesListGatewayStartedShardsBatch batchStartedAction, TransportNodesListShardStoreMetadataBatch batchStoreAction, - Settings settings + Settings settings, + ClusterSettings clusterSettings ) { this.rerouteService = rerouteService; this.primaryShardBatchAllocator = new InternalPrimaryBatchShardAllocator(); @@ -105,6 +133,10 @@ public ShardsBatchGatewayAllocator( this.batchStartedAction = batchStartedAction; this.batchStoreAction = batchStoreAction; this.maxBatchSize = GATEWAY_ALLOCATOR_BATCH_SIZE.get(settings); + this.primaryShardsBatchGatewayAllocatorTimeout = PRIMARY_BATCH_ALLOCATOR_TIMEOUT_SETTING.get(settings); + clusterSettings.addSettingsUpdateConsumer(PRIMARY_BATCH_ALLOCATOR_TIMEOUT_SETTING, this::setPrimaryBatchAllocatorTimeout); + this.replicaShardsBatchGatewayAllocatorTimeout = REPLICA_BATCH_ALLOCATOR_TIMEOUT_SETTING.get(settings); + clusterSettings.addSettingsUpdateConsumer(REPLICA_BATCH_ALLOCATOR_TIMEOUT_SETTING, this::setReplicaBatchAllocatorTimeout); } @Override @@ -127,7 +159,10 @@ protected ShardsBatchGatewayAllocator(long batchSize) { this.batchStoreAction = null; this.replicaShardBatchAllocator = null; this.maxBatchSize = batchSize; + this.primaryShardsBatchGatewayAllocatorTimeout = null; + this.replicaShardsBatchGatewayAllocatorTimeout = null; } + // for tests @Override @@ -187,14 +222,14 @@ public void allocateUnassigned( } @Override - public void allocateAllUnassignedShards(final RoutingAllocation allocation, boolean primary) { + public BatchRunnableExecutor allocateAllUnassignedShards(final RoutingAllocation allocation, boolean primary) { assert 
primaryShardBatchAllocator != null; assert replicaShardBatchAllocator != null; - innerAllocateUnassignedBatch(allocation, primaryShardBatchAllocator, replicaShardBatchAllocator, primary); + return innerAllocateUnassignedBatch(allocation, primaryShardBatchAllocator, replicaShardBatchAllocator, primary); } - protected void innerAllocateUnassignedBatch( + protected BatchRunnableExecutor innerAllocateUnassignedBatch( RoutingAllocation allocation, PrimaryShardBatchAllocator primaryBatchShardAllocator, ReplicaShardBatchAllocator replicaBatchShardAllocator, @@ -203,20 +238,45 @@ protected void innerAllocateUnassignedBatch( // create batches for unassigned shards Set batchesToAssign = createAndUpdateBatches(allocation, primary); if (batchesToAssign.isEmpty()) { - return; + return null; } + List runnables = new ArrayList<>(); if (primary) { batchIdToStartedShardBatch.values() .stream() .filter(batch -> batchesToAssign.contains(batch.batchId)) - .forEach( - shardsBatch -> primaryBatchShardAllocator.allocateUnassignedBatch(shardsBatch.getBatchedShardRoutings(), allocation) - ); + .forEach(shardsBatch -> runnables.add(new TimeoutAwareRunnable() { + @Override + public void onTimeout() { + primaryBatchShardAllocator.allocateUnassignedBatchOnTimeout( + shardsBatch.getBatchedShardRoutings(), + allocation, + true + ); + } + + @Override + public void run() { + primaryBatchShardAllocator.allocateUnassignedBatch(shardsBatch.getBatchedShardRoutings(), allocation); + } + })); + return new BatchRunnableExecutor(runnables, () -> primaryShardsBatchGatewayAllocatorTimeout); } else { batchIdToStoreShardBatch.values() .stream() .filter(batch -> batchesToAssign.contains(batch.batchId)) - .forEach(batch -> replicaBatchShardAllocator.allocateUnassignedBatch(batch.getBatchedShardRoutings(), allocation)); + .forEach(batch -> runnables.add(new TimeoutAwareRunnable() { + @Override + public void onTimeout() { + replicaBatchShardAllocator.allocateUnassignedBatchOnTimeout(batch.getBatchedShardRoutings(), allocation, false); + } + + @Override + public void run() { + replicaBatchShardAllocator.allocateUnassignedBatch(batch.getBatchedShardRoutings(), allocation); + } + })); + return new BatchRunnableExecutor(runnables, () -> replicaShardsBatchGatewayAllocatorTimeout); } } @@ -721,4 +781,12 @@ public int getNumberOfStartedShardBatches() { public int getNumberOfStoreShardBatches() { return batchIdToStoreShardBatch.size(); } + + private void setPrimaryBatchAllocatorTimeout(TimeValue primaryShardsBatchGatewayAllocatorTimeout) { + this.primaryShardsBatchGatewayAllocatorTimeout = primaryShardsBatchGatewayAllocatorTimeout; + } + + private void setReplicaBatchAllocatorTimeout(TimeValue replicaShardsBatchGatewayAllocatorTimeout) { + this.replicaShardsBatchGatewayAllocatorTimeout = replicaShardsBatchGatewayAllocatorTimeout; + } } diff --git a/server/src/test/java/org/opensearch/cluster/routing/allocation/ExistingShardsAllocatorTests.java b/server/src/test/java/org/opensearch/cluster/routing/allocation/ExistingShardsAllocatorTests.java new file mode 100644 index 0000000000000..1da8f5ef7f695 --- /dev/null +++ b/server/src/test/java/org/opensearch/cluster/routing/allocation/ExistingShardsAllocatorTests.java @@ -0,0 +1,118 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. 
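Aside: the onTimeout() branches above do not drop the shards they skip; allocateUnassignedBatchOnTimeout re-marks every matching unassigned shard as throttled, so the next reroute retries the batch. A conceptual sketch, with ShardRef and markThrottled as stand-ins rather than the patch's real types:

```java
import java.util.List;

final class TimeoutFallback {
    interface ShardRef {
        boolean primary();

        void markThrottled(); // analogous to AllocateUnassignedDecision.throttle(null)
    }

    // Mirrors allocateUnassignedBatchOnTimeout: only shards of the requested
    // role (primary or replica) in the timed-out batch are deferred.
    static void onTimeout(List<ShardRef> batch, boolean primary) {
        for (ShardRef shard : batch) {
            if (shard.primary() == primary) {
                shard.markThrottled(); // deferred, not failed: retried on the next reroute
            }
        }
    }
}
```

This is why the integration test above sees RED health under a 10ms budget yet still converges to GREEN once the data nodes return and later reroutes get enough time.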
+ */ + +package org.opensearch.cluster.routing.allocation; + +import org.opensearch.Version; +import org.opensearch.cluster.ClusterName; +import org.opensearch.cluster.ClusterState; +import org.opensearch.cluster.OpenSearchAllocationTestCase; +import org.opensearch.cluster.metadata.IndexMetadata; +import org.opensearch.cluster.metadata.Metadata; +import org.opensearch.cluster.node.DiscoveryNodes; +import org.opensearch.cluster.routing.RoutingTable; +import org.opensearch.cluster.routing.ShardRouting; +import org.opensearch.common.settings.Settings; + +import java.util.List; +import java.util.concurrent.CountDownLatch; +import java.util.concurrent.TimeUnit; + +public class ExistingShardsAllocatorTests extends OpenSearchAllocationTestCase { + + public void testRunnablesExecutedForUnassignedShards() throws InterruptedException { + + Metadata metadata = Metadata.builder() + .put(IndexMetadata.builder("test").settings(settings(Version.CURRENT)).numberOfShards(3).numberOfReplicas(2)) + .build(); + RoutingTable initialRoutingTable = RoutingTable.builder().addAsNew(metadata.index("test")).build(); + + ClusterState clusterState = ClusterState.builder(ClusterName.CLUSTER_NAME_SETTING.getDefault(Settings.EMPTY)) + .metadata(metadata) + .routingTable(initialRoutingTable) + .build(); + clusterState = ClusterState.builder(clusterState) + .nodes(DiscoveryNodes.builder().add(newNode("node1")).add(newNode("node2")).add(newNode("node3"))) + .build(); + RoutingAllocation allocation = new RoutingAllocation( + yesAllocationDeciders(), + clusterState.getRoutingNodes(), + clusterState, + null, + null, + 0L + ); + CountDownLatch expectedStateLatch = new CountDownLatch(3); + TestAllocator testAllocator = new TestAllocator(expectedStateLatch); + testAllocator.allocateAllUnassignedShards(allocation, true).run(); + // if the below condition is passed, then we are sure runnable executed for all primary shards + assertTrue(expectedStateLatch.await(30, TimeUnit.SECONDS)); + + expectedStateLatch = new CountDownLatch(6); + testAllocator = new TestAllocator(expectedStateLatch); + testAllocator.allocateAllUnassignedShards(allocation, false).run(); + // if the below condition is passed, then we are sure runnable executed for all replica shards + assertTrue(expectedStateLatch.await(30, TimeUnit.SECONDS)); + } + + private static class TestAllocator implements ExistingShardsAllocator { + + final CountDownLatch countDownLatch; + + TestAllocator(CountDownLatch latch) { + this.countDownLatch = latch; + } + + @Override + public void beforeAllocation(RoutingAllocation allocation) { + + } + + @Override + public void afterPrimariesBeforeReplicas(RoutingAllocation allocation) { + + } + + @Override + public void allocateUnassigned( + ShardRouting shardRouting, + RoutingAllocation allocation, + UnassignedAllocationHandler unassignedAllocationHandler + ) { + countDownLatch.countDown(); + } + + @Override + public AllocateUnassignedDecision explainUnassignedShardAllocation( + ShardRouting unassignedShard, + RoutingAllocation routingAllocation + ) { + return null; + } + + @Override + public void cleanCaches() { + + } + + @Override + public void applyStartedShards(List startedShards, RoutingAllocation allocation) { + + } + + @Override + public void applyFailedShards(List failedShards, RoutingAllocation allocation) { + + } + + @Override + public int getNumberOfInFlightFetches() { + return 0; + } + } +} diff --git a/server/src/test/java/org/opensearch/common/util/BatchRunnableExecutorTests.java 
b/server/src/test/java/org/opensearch/common/util/BatchRunnableExecutorTests.java new file mode 100644 index 0000000000000..269f89faec54d --- /dev/null +++ b/server/src/test/java/org/opensearch/common/util/BatchRunnableExecutorTests.java @@ -0,0 +1,97 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +package org.opensearch.common.util; + +import org.opensearch.common.unit.TimeValue; +import org.opensearch.common.util.concurrent.TimeoutAwareRunnable; +import org.opensearch.test.OpenSearchTestCase; + +import java.util.Arrays; +import java.util.Collections; +import java.util.List; +import java.util.function.Supplier; + +import static org.mockito.Mockito.atMost; +import static org.mockito.Mockito.doAnswer; +import static org.mockito.Mockito.mock; +import static org.mockito.Mockito.never; +import static org.mockito.Mockito.times; +import static org.mockito.Mockito.verify; + +public class BatchRunnableExecutorTests extends OpenSearchTestCase { + private Supplier timeoutSupplier; + private TimeoutAwareRunnable runnable1; + private TimeoutAwareRunnable runnable2; + private TimeoutAwareRunnable runnable3; + private List runnableList; + + public void setupRunnables() { + timeoutSupplier = mock(Supplier.class); + runnable1 = mock(TimeoutAwareRunnable.class); + runnable2 = mock(TimeoutAwareRunnable.class); + runnable3 = mock(TimeoutAwareRunnable.class); + runnableList = Arrays.asList(runnable1, runnable2, runnable3); + } + + public void testRunWithoutTimeout() { + setupRunnables(); + timeoutSupplier = () -> TimeValue.timeValueSeconds(1); + BatchRunnableExecutor executor = new BatchRunnableExecutor(runnableList, timeoutSupplier); + executor.run(); + verify(runnable1, times(1)).run(); + verify(runnable2, times(1)).run(); + verify(runnable3, times(1)).run(); + verify(runnable1, never()).onTimeout(); + verify(runnable2, never()).onTimeout(); + verify(runnable3, never()).onTimeout(); + } + + public void testRunWithTimeout() { + setupRunnables(); + timeoutSupplier = () -> TimeValue.timeValueNanos(1); + BatchRunnableExecutor executor = new BatchRunnableExecutor(runnableList, timeoutSupplier); + executor.run(); + verify(runnable1, times(1)).onTimeout(); + verify(runnable2, times(1)).onTimeout(); + verify(runnable3, times(1)).onTimeout(); + verify(runnable1, never()).run(); + verify(runnable2, never()).run(); + verify(runnable3, never()).run(); + } + + public void testRunWithPartialTimeout() { + setupRunnables(); + timeoutSupplier = () -> TimeValue.timeValueMillis(50); + BatchRunnableExecutor executor = new BatchRunnableExecutor(runnableList, timeoutSupplier); + doAnswer(invocation -> { + Thread.sleep(100); + return null; + }).when(runnable1).run(); + executor.run(); + verify(runnable1, atMost(1)).run(); + verify(runnable2, atMost(1)).run(); + verify(runnable3, atMost(1)).run(); + verify(runnable2, atMost(1)).onTimeout(); + verify(runnable3, atMost(1)).onTimeout(); + verify(runnable2, atMost(1)).onTimeout(); + verify(runnable3, atMost(1)).onTimeout(); + } + + public void testRunWithEmptyRunnableList() { + setupRunnables(); + BatchRunnableExecutor executor = new BatchRunnableExecutor(Collections.emptyList(), timeoutSupplier); + executor.run(); + verify(runnable1, never()).onTimeout(); + verify(runnable2, never()).onTimeout(); + verify(runnable3, never()).onTimeout(); + verify(runnable1, never()).run(); + verify(runnable2, never()).run(); + 
verify(runnable3, never()).run(); + } +} diff --git a/server/src/test/java/org/opensearch/gateway/GatewayAllocatorTests.java b/server/src/test/java/org/opensearch/gateway/GatewayAllocatorTests.java index aa31c710c1fbd..bd56123f6df1f 100644 --- a/server/src/test/java/org/opensearch/gateway/GatewayAllocatorTests.java +++ b/server/src/test/java/org/opensearch/gateway/GatewayAllocatorTests.java @@ -32,6 +32,7 @@ import org.opensearch.cluster.routing.allocation.decider.AllocationDeciders; import org.opensearch.common.collect.Tuple; import org.opensearch.common.settings.Settings; +import org.opensearch.common.util.BatchRunnableExecutor; import org.opensearch.common.util.set.Sets; import org.opensearch.core.index.shard.ShardId; import org.opensearch.snapshots.SnapshotShardSizeInfo; @@ -61,6 +62,13 @@ public void setUp() throws Exception { testShardsBatchGatewayAllocator = new TestShardBatchGatewayAllocator(); } + public void testExecutorNotNull() { + createIndexAndUpdateClusterState(1, 3, 1); + createBatchesAndAssert(1); + BatchRunnableExecutor executor = testShardsBatchGatewayAllocator.allocateAllUnassignedShards(testAllocation, true); + assertNotNull(executor); + } + public void testSingleBatchCreation() { createIndexAndUpdateClusterState(1, 3, 1); createBatchesAndAssert(1); @@ -336,6 +344,30 @@ public void testGetBatchIdNonExisting() { allShardRoutings.forEach(shard -> assertNull(testShardsBatchGatewayAllocator.getBatchId(shard, shard.primary()))); } + public void testCreatePrimaryAndReplicaExecutorOfSizeOne() { + createIndexAndUpdateClusterState(1, 3, 2); + BatchRunnableExecutor executor = testShardsBatchGatewayAllocator.allocateAllUnassignedShards(testAllocation, true); + assertEquals(executor.getTimeoutAwareRunnables().size(), 1); + executor = testShardsBatchGatewayAllocator.allocateAllUnassignedShards(testAllocation, false); + assertEquals(executor.getTimeoutAwareRunnables().size(), 1); + } + + public void testCreatePrimaryExecutorOfSizeOneAndReplicaExecutorOfSizeZero() { + createIndexAndUpdateClusterState(1, 3, 0); + BatchRunnableExecutor executor = testShardsBatchGatewayAllocator.allocateAllUnassignedShards(testAllocation, true); + assertEquals(executor.getTimeoutAwareRunnables().size(), 1); + executor = testShardsBatchGatewayAllocator.allocateAllUnassignedShards(testAllocation, false); + assertNull(executor); + } + + public void testCreatePrimaryAndReplicaExecutorOfSizeTwo() { + createIndexAndUpdateClusterState(2, 1001, 1); + BatchRunnableExecutor executor = testShardsBatchGatewayAllocator.allocateAllUnassignedShards(testAllocation, true); + assertEquals(executor.getTimeoutAwareRunnables().size(), 2); + executor = testShardsBatchGatewayAllocator.allocateAllUnassignedShards(testAllocation, false); + assertEquals(executor.getTimeoutAwareRunnables().size(), 2); + } + private void createIndexAndUpdateClusterState(int count, int numberOfShards, int numberOfReplicas) { if (count == 0) return; Metadata.Builder metadata = Metadata.builder(); diff --git a/server/src/test/java/org/opensearch/gateway/PrimaryShardBatchAllocatorTests.java b/server/src/test/java/org/opensearch/gateway/PrimaryShardBatchAllocatorTests.java index 8ad8bcda95f40..270cf465d0f80 100644 --- a/server/src/test/java/org/opensearch/gateway/PrimaryShardBatchAllocatorTests.java +++ b/server/src/test/java/org/opensearch/gateway/PrimaryShardBatchAllocatorTests.java @@ -41,6 +41,7 @@ import org.junit.Before; import java.util.ArrayList; +import java.util.Arrays; import java.util.Collections; import java.util.HashMap; import 
java.util.HashSet; @@ -256,6 +257,52 @@ public void testAllocateUnassignedBatchThrottlingAllocationDeciderIsHonoured() { assertEquals(UnassignedInfo.AllocationStatus.DECIDERS_THROTTLED, ignoredShards.get(0).unassignedInfo().getLastAllocationStatus()); } + public void testAllocateUnassignedBatchOnTimeoutWithMatchingPrimaryShards() { + ClusterSettings clusterSettings = new ClusterSettings(Settings.EMPTY, ClusterSettings.BUILT_IN_CLUSTER_SETTINGS); + AllocationDeciders allocationDeciders = randomAllocationDeciders(Settings.builder().build(), clusterSettings, random()); + setUpShards(1); + final RoutingAllocation routingAllocation = routingAllocationWithOnePrimary(allocationDeciders, CLUSTER_RECOVERED, "allocId-0"); + ShardRouting shardRouting = routingAllocation.routingTable().getIndicesRouting().get("test").shard(shardId.id()).primaryShard(); + + List shardRoutings = Arrays.asList(shardRouting); + batchAllocator.allocateUnassignedBatchOnTimeout(shardRoutings, routingAllocation, true); + + List ignoredShards = routingAllocation.routingNodes().unassigned().ignored(); + assertEquals(1, ignoredShards.size()); + assertEquals(UnassignedInfo.AllocationStatus.DECIDERS_THROTTLED, ignoredShards.get(0).unassignedInfo().getLastAllocationStatus()); + } + + public void testAllocateUnassignedBatchOnTimeoutWithNoMatchingPrimaryShards() { + ClusterSettings clusterSettings = new ClusterSettings(Settings.EMPTY, ClusterSettings.BUILT_IN_CLUSTER_SETTINGS); + AllocationDeciders allocationDeciders = randomAllocationDeciders(Settings.builder().build(), clusterSettings, random()); + setUpShards(1); + final RoutingAllocation routingAllocation = routingAllocationWithOnePrimary(allocationDeciders, CLUSTER_RECOVERED, "allocId-0"); + List shardRoutings = new ArrayList<>(); + batchAllocator.allocateUnassignedBatchOnTimeout(shardRoutings, routingAllocation, true); + + List ignoredShards = routingAllocation.routingNodes().unassigned().ignored(); + assertEquals(0, ignoredShards.size()); + } + + public void testAllocateUnassignedBatchOnTimeoutWithNonPrimaryShards() { + ClusterSettings clusterSettings = new ClusterSettings(Settings.EMPTY, ClusterSettings.BUILT_IN_CLUSTER_SETTINGS); + AllocationDeciders allocationDeciders = randomAllocationDeciders(Settings.builder().build(), clusterSettings, random()); + setUpShards(1); + final RoutingAllocation routingAllocation = routingAllocationWithOnePrimary(allocationDeciders, CLUSTER_RECOVERED, "allocId-0"); + + ShardRouting shardRouting = routingAllocation.routingTable() + .getIndicesRouting() + .get("test") + .shard(shardId.id()) + .replicaShards() + .get(0); + List shardRoutings = Arrays.asList(shardRouting); + batchAllocator.allocateUnassignedBatchOnTimeout(shardRoutings, routingAllocation, false); + + List ignoredShards = routingAllocation.routingNodes().unassigned().ignored(); + assertEquals(1, ignoredShards.size()); + } + private RoutingAllocation routingAllocationWithOnePrimary( AllocationDeciders deciders, UnassignedInfo.Reason reason, diff --git a/server/src/test/java/org/opensearch/gateway/ReplicaShardBatchAllocatorTests.java b/server/src/test/java/org/opensearch/gateway/ReplicaShardBatchAllocatorTests.java index 526a3990955b8..435fd78be2bcd 100644 --- a/server/src/test/java/org/opensearch/gateway/ReplicaShardBatchAllocatorTests.java +++ b/server/src/test/java/org/opensearch/gateway/ReplicaShardBatchAllocatorTests.java @@ -717,6 +717,33 @@ public void testAllocateUnassignedBatchThrottlingAllocationDeciderIsHonoured() t 
assertEquals(UnassignedInfo.AllocationStatus.DECIDERS_THROTTLED, allocateUnassignedDecision.getAllocationStatus()); } + public void testAllocateUnassignedBatchOnTimeoutWithUnassignedReplicaShard() { + RoutingAllocation allocation = onePrimaryOnNode1And1Replica(yesAllocationDeciders()); + final RoutingNodes.UnassignedShards.UnassignedIterator iterator = allocation.routingNodes().unassigned().iterator(); + List shards = new ArrayList<>(); + while (iterator.hasNext()) { + shards.add(iterator.next()); + } + testBatchAllocator.allocateUnassignedBatchOnTimeout(shards, allocation, false); + assertThat(allocation.routingNodes().unassigned().ignored().size(), equalTo(1)); + assertThat(allocation.routingNodes().unassigned().ignored().get(0).shardId(), equalTo(shardId)); + assertEquals( + UnassignedInfo.AllocationStatus.NO_ATTEMPT, + allocation.routingNodes().unassigned().ignored().get(0).unassignedInfo().getLastAllocationStatus() + ); + } + + public void testAllocateUnassignedBatchOnTimeoutWithAlreadyRecoveringReplicaShard() { + RoutingAllocation allocation = onePrimaryOnNode1And1ReplicaRecovering(yesAllocationDeciders()); + final RoutingNodes.UnassignedShards.UnassignedIterator iterator = allocation.routingNodes().unassigned().iterator(); + List shards = new ArrayList<>(); + while (iterator.hasNext()) { + shards.add(iterator.next()); + } + testBatchAllocator.allocateUnassignedBatchOnTimeout(shards, allocation, false); + assertThat(allocation.routingNodes().unassigned().ignored().size(), equalTo(0)); + } + private RoutingAllocation onePrimaryOnNode1And1Replica(AllocationDeciders deciders) { return onePrimaryOnNode1And1Replica(deciders, Settings.EMPTY, UnassignedInfo.Reason.CLUSTER_RECOVERED); } diff --git a/test/framework/src/main/java/org/opensearch/test/gateway/TestShardBatchGatewayAllocator.java b/test/framework/src/main/java/org/opensearch/test/gateway/TestShardBatchGatewayAllocator.java index fbb39c284f0ff..0eb4bb6935bac 100644 --- a/test/framework/src/main/java/org/opensearch/test/gateway/TestShardBatchGatewayAllocator.java +++ b/test/framework/src/main/java/org/opensearch/test/gateway/TestShardBatchGatewayAllocator.java @@ -13,6 +13,7 @@ import org.opensearch.cluster.routing.ShardRouting; import org.opensearch.cluster.routing.allocation.AllocateUnassignedDecision; import org.opensearch.cluster.routing.allocation.RoutingAllocation; +import org.opensearch.common.util.BatchRunnableExecutor; import org.opensearch.core.index.shard.ShardId; import org.opensearch.gateway.AsyncShardFetch; import org.opensearch.gateway.PrimaryShardBatchAllocator; @@ -102,9 +103,9 @@ protected boolean hasInitiatedFetching(ShardRouting shard) { }; @Override - public void allocateAllUnassignedShards(RoutingAllocation allocation, boolean primary) { + public BatchRunnableExecutor allocateAllUnassignedShards(RoutingAllocation allocation, boolean primary) { currentNodes = allocation.nodes(); - innerAllocateUnassignedBatch(allocation, primaryBatchShardAllocator, replicaBatchShardAllocator, primary); + return innerAllocateUnassignedBatch(allocation, primaryBatchShardAllocator, replicaBatchShardAllocator, primary); } @Override From 1fe58b5d712cfef525abfbd2dfaf398c0368745f Mon Sep 17 00:00:00 2001 From: gaobinlong Date: Wed, 24 Jul 2024 19:54:04 +0800 Subject: [PATCH 118/167] Fix the documentation url of the Create or Update alias API in rest-api-spec (#14935) Signed-off-by: Gao Binlong --- .../src/main/resources/rest-api-spec/api/indices.put_alias.json | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git 
a/rest-api-spec/src/main/resources/rest-api-spec/api/indices.put_alias.json b/rest-api-spec/src/main/resources/rest-api-spec/api/indices.put_alias.json index d99edcf5513f9..14427b00f1bb3 100644 --- a/rest-api-spec/src/main/resources/rest-api-spec/api/indices.put_alias.json +++ b/rest-api-spec/src/main/resources/rest-api-spec/api/indices.put_alias.json @@ -1,7 +1,7 @@ { "indices.put_alias":{ "documentation":{ - "url":"https://opensearch.org/docs/latest/api-reference/index-apis/alias/", + "url":"https://opensearch.org/docs/latest/api-reference/index-apis/update-alias/", "description":"Creates or updates an alias." }, "stability":"stable", From c76bfebd49c8129b564edc68ce59f01853dc6722 Mon Sep 17 00:00:00 2001 From: Mohit Godwani <81609427+mgodwan@users.noreply.github.com> Date: Wed, 24 Jul 2024 18:44:03 +0530 Subject: [PATCH 119/167] Template creation using context (#14811) * Template creation using context Signed-off-by: Mohit Godwani --- CHANGELOG.md | 1 + .../TransportSimulateIndexTemplateAction.java | 3 +- .../post/TransportSimulateTemplateAction.java | 3 +- .../SystemTemplateMetadata.java | 29 +- .../SystemTemplatesService.java | 2 +- .../TemplateRepositoryMetadata.java | 20 + .../coordination/OpenSearchNodeCommand.java | 6 +- .../metadata/ComposableIndexTemplate.java | 45 +- .../opensearch/cluster/metadata/Context.java | 130 +++++ .../opensearch/cluster/metadata/Metadata.java | 48 +- .../MetadataIndexTemplateService.java | 117 ++++- .../SystemTemplatesServiceTests.java | 42 +- .../MetadataIndexTemplateServiceTests.java | 459 +++++++++++++++++- 13 files changed, 862 insertions(+), 43 deletions(-) create mode 100644 server/src/main/java/org/opensearch/cluster/metadata/Context.java diff --git a/CHANGELOG.md b/CHANGELOG.md index edc0ca2732f25..00560d68e4051 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -31,6 +31,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), - Create listener to refresh search thread resource usage ([#14832](https://github.com/opensearch-project/OpenSearch/pull/14832)) - Add rest, transport layer changes for hot to warm tiering - dedicated setup (([#13980](https://github.com/opensearch-project/OpenSearch/pull/13980)) - Optimize Cluster Stats Indices to precomute node level stats ([#14426](https://github.com/opensearch-project/OpenSearch/pull/14426)) +- Add logic to create index templates (v2) using context field ([#14811](https://github.com/opensearch-project/OpenSearch/pull/14811)) ### Dependencies - Bump `org.gradle.test-retry` from 1.5.8 to 1.5.9 ([#13442](https://github.com/opensearch-project/OpenSearch/pull/13442)) diff --git a/server/src/main/java/org/opensearch/action/admin/indices/template/post/TransportSimulateIndexTemplateAction.java b/server/src/main/java/org/opensearch/action/admin/indices/template/post/TransportSimulateIndexTemplateAction.java index c1a02d813ffb2..22f1831a54164 100644 --- a/server/src/main/java/org/opensearch/action/admin/indices/template/post/TransportSimulateIndexTemplateAction.java +++ b/server/src/main/java/org/opensearch/action/admin/indices/template/post/TransportSimulateIndexTemplateAction.java @@ -140,7 +140,8 @@ protected void clusterManagerOperation( MetadataIndexTemplateService.validateV2TemplateRequest( state.metadata(), simulateTemplateToAdd, - request.getIndexTemplateRequest().indexTemplate() + request.getIndexTemplateRequest().indexTemplate(), + clusterService.getClusterSettings() ); stateWithTemplate = indexTemplateService.addIndexTemplateV2( state, diff 
--git a/server/src/main/java/org/opensearch/action/admin/indices/template/post/TransportSimulateTemplateAction.java b/server/src/main/java/org/opensearch/action/admin/indices/template/post/TransportSimulateTemplateAction.java index 6565896fd3db2..03190445647ad 100644 --- a/server/src/main/java/org/opensearch/action/admin/indices/template/post/TransportSimulateTemplateAction.java +++ b/server/src/main/java/org/opensearch/action/admin/indices/template/post/TransportSimulateTemplateAction.java @@ -134,7 +134,8 @@ protected void clusterManagerOperation( MetadataIndexTemplateService.validateV2TemplateRequest( state.metadata(), simulateTemplateToAdd, - request.getIndexTemplateRequest().indexTemplate() + request.getIndexTemplateRequest().indexTemplate(), + clusterService.getClusterSettings() ); stateWithTemplate = indexTemplateService.addIndexTemplateV2( state, diff --git a/server/src/main/java/org/opensearch/cluster/applicationtemplates/SystemTemplateMetadata.java b/server/src/main/java/org/opensearch/cluster/applicationtemplates/SystemTemplateMetadata.java index 9bbe27ac0e281..227b70ffa2ef5 100644 --- a/server/src/main/java/org/opensearch/cluster/applicationtemplates/SystemTemplateMetadata.java +++ b/server/src/main/java/org/opensearch/cluster/applicationtemplates/SystemTemplateMetadata.java @@ -10,6 +10,8 @@ import org.opensearch.common.annotation.ExperimentalApi; +import java.util.Objects; + /** * Metadata information about a template available in a template repository. */ @@ -48,13 +50,14 @@ public long version() { * @return Metadata object based on name */ public static SystemTemplateMetadata fromComponentTemplate(String fullyQualifiedName) { - assert fullyQualifiedName.length() > 1 : "System template name must have at least one component"; - assert fullyQualifiedName.substring(1, fullyQualifiedName.indexOf(DELIMITER, 1)).equals(COMPONENT_TEMPLATE_TYPE); + assert fullyQualifiedName.length() > DELIMITER.length() * 3 + 2 + COMPONENT_TEMPLATE_TYPE.length() + : "System template name must have all defined components"; + assert (DELIMITER + fullyQualifiedName.substring(1, fullyQualifiedName.indexOf(DELIMITER, 1))).equals(COMPONENT_TEMPLATE_TYPE); return new SystemTemplateMetadata( - Long.parseLong(fullyQualifiedName.substring(fullyQualifiedName.lastIndexOf(DELIMITER))), + Long.parseLong(fullyQualifiedName.substring(fullyQualifiedName.lastIndexOf(DELIMITER) + 1)), COMPONENT_TEMPLATE_TYPE, - fullyQualifiedName.substring(0, fullyQualifiedName.lastIndexOf(DELIMITER)) + fullyQualifiedName.substring(fullyQualifiedName.indexOf(DELIMITER, 2) + 1, fullyQualifiedName.lastIndexOf(DELIMITER)) ); } @@ -65,4 +68,22 @@ public static SystemTemplateMetadata fromComponentTemplateInfo(String name, long public final String fullyQualifiedName() { return type + DELIMITER + name + DELIMITER + version; } + + @Override + public boolean equals(Object o) { + if (this == o) return true; + if (o == null || getClass() != o.getClass()) return false; + SystemTemplateMetadata that = (SystemTemplateMetadata) o; + return version == that.version && Objects.equals(type, that.type) && Objects.equals(name, that.name); + } + + @Override + public int hashCode() { + return Objects.hash(version, type, name); + } + + @Override + public String toString() { + return "SystemTemplateMetadata{" + "version=" + version + ", type='" + type + '\'' + ", name='" + name + '\'' + '}'; + } } diff --git a/server/src/main/java/org/opensearch/cluster/applicationtemplates/SystemTemplatesService.java 
b/server/src/main/java/org/opensearch/cluster/applicationtemplates/SystemTemplatesService.java index ccb9272fa57b1..90652192e5c28 100644 --- a/server/src/main/java/org/opensearch/cluster/applicationtemplates/SystemTemplatesService.java +++ b/server/src/main/java/org/opensearch/cluster/applicationtemplates/SystemTemplatesService.java @@ -85,7 +85,7 @@ void refreshTemplates(boolean verification) { int failedLoadingRepositories = 0; List exceptions = new ArrayList<>(); - if (loaded.compareAndSet(false, true) && enabledTemplates) { + if ((verification || loaded.compareAndSet(false, true)) && enabledTemplates) { for (SystemTemplatesPlugin plugin : systemTemplatesPluginList) { try (SystemTemplateRepository repository = plugin.loadRepository()) { diff --git a/server/src/main/java/org/opensearch/cluster/applicationtemplates/TemplateRepositoryMetadata.java b/server/src/main/java/org/opensearch/cluster/applicationtemplates/TemplateRepositoryMetadata.java index 7ab4553aade0e..1fa79d291480b 100644 --- a/server/src/main/java/org/opensearch/cluster/applicationtemplates/TemplateRepositoryMetadata.java +++ b/server/src/main/java/org/opensearch/cluster/applicationtemplates/TemplateRepositoryMetadata.java @@ -10,6 +10,8 @@ import org.opensearch.common.annotation.ExperimentalApi; +import java.util.Objects; + /** * The information to uniquely identify a template repository. */ @@ -31,4 +33,22 @@ public String id() { public long version() { return version; } + + @Override + public boolean equals(Object o) { + if (this == o) return true; + if (o == null || getClass() != o.getClass()) return false; + TemplateRepositoryMetadata that = (TemplateRepositoryMetadata) o; + return version == that.version && Objects.equals(id, that.id); + } + + @Override + public int hashCode() { + return Objects.hash(id, version); + } + + @Override + public String toString() { + return "TemplateRepositoryMetadata{" + "id='" + id + '\'' + ", version=" + version + '}'; + } } diff --git a/server/src/main/java/org/opensearch/cluster/coordination/OpenSearchNodeCommand.java b/server/src/main/java/org/opensearch/cluster/coordination/OpenSearchNodeCommand.java index 259d8961a3e78..896fe6fc8024b 100644 --- a/server/src/main/java/org/opensearch/cluster/coordination/OpenSearchNodeCommand.java +++ b/server/src/main/java/org/opensearch/cluster/coordination/OpenSearchNodeCommand.java @@ -47,6 +47,7 @@ import org.opensearch.cluster.ClusterName; import org.opensearch.cluster.ClusterState; import org.opensearch.cluster.Diff; +import org.opensearch.cluster.metadata.ComponentTemplateMetadata; import org.opensearch.cluster.metadata.DataStreamMetadata; import org.opensearch.cluster.metadata.Metadata; import org.opensearch.common.collect.Tuple; @@ -94,9 +95,10 @@ public abstract class OpenSearchNodeCommand extends EnvironmentAwareCommand { public T parseNamedObject(Class categoryClass, String name, XContentParser parser, C context) throws IOException { // Currently, two unknown top-level objects are present if (Metadata.Custom.class.isAssignableFrom(categoryClass)) { - if (DataStreamMetadata.TYPE.equals(name)) { + if (DataStreamMetadata.TYPE.equals(name) || ComponentTemplateMetadata.TYPE.equals(name)) { // DataStreamMetadata is used inside Metadata class for validation purposes and building the indicesLookup, - // therefor even es node commands need to be able to parse it. + // ComponentTemplateMetadata is used inside Metadata class for building the systemTemplatesLookup, + // therefor even OpenSearch node commands need to be able to parse it. 
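Aside: the off-by-one fixes in SystemTemplateMetadata.fromComponentTemplate a few hunks back hinge on the fully-qualified name layout type@name@version, where the type itself starts with the delimiter. A sketch of the index arithmetic, using a sample name; the constants here are illustrative:

```java
// Parsing "@abc_template@logs@1" (sample): type "@abc_template", name "logs", version 1.
final class TemplateNameParser {
    private static final String DELIMITER = "@";

    static String[] parse(String fqn) {
        int typeEnd = fqn.indexOf(DELIMITER, 1);           // '@' separating type from name
        int nameStart = fqn.indexOf(DELIMITER, 2) + 1;     // +1 steps past that '@'
        int versionStart = fqn.lastIndexOf(DELIMITER) + 1; // this +1 is what the fix adds
        return new String[] {
            fqn.substring(0, typeEnd),
            fqn.substring(nameStart, versionStart - 1),
            fqn.substring(versionStart) };
    }
}
```

Without the trailing + 1 the version substring would begin with the delimiter itself and Long.parseLong would throw, which is exactly the failure the corrected offsets and assertions guard against.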
return super.parseNamedObject(categoryClass, name, parser, context); // TODO: Try to parse other named objects (e.g. stored scripts, ingest pipelines) that are part of core es as well? // Note that supporting PersistentTasksCustomMetadata is trickier, because PersistentTaskParams is a named object too. diff --git a/server/src/main/java/org/opensearch/cluster/metadata/ComposableIndexTemplate.java b/server/src/main/java/org/opensearch/cluster/metadata/ComposableIndexTemplate.java index e7f1b97f28842..594dda83c41e2 100644 --- a/server/src/main/java/org/opensearch/cluster/metadata/ComposableIndexTemplate.java +++ b/server/src/main/java/org/opensearch/cluster/metadata/ComposableIndexTemplate.java @@ -32,6 +32,7 @@ package org.opensearch.cluster.metadata; +import org.opensearch.Version; import org.opensearch.cluster.AbstractDiffable; import org.opensearch.cluster.Diff; import org.opensearch.cluster.metadata.DataStream.TimestampField; @@ -75,6 +76,7 @@ public class ComposableIndexTemplate extends AbstractDiffable PARSER = new ConstructingObjectParser<>( @@ -87,7 +89,8 @@ public class ComposableIndexTemplate extends AbstractDiffable) a[5], - (DataStreamTemplate) a[6] + (DataStreamTemplate) a[6], + (Context) a[7] ) ); @@ -99,6 +102,7 @@ public class ComposableIndexTemplate extends AbstractDiffable p.map(), METADATA); PARSER.declareObject(ConstructingObjectParser.optionalConstructorArg(), DataStreamTemplate.PARSER, DATA_STREAM); + PARSER.declareObject(ConstructingObjectParser.optionalConstructorArg(), Context.PARSER, CONTEXT); } private final List indexPatterns; @@ -114,6 +118,8 @@ public class ComposableIndexTemplate extends AbstractDiffable metadata; @Nullable private final DataStreamTemplate dataStreamTemplate; + @Nullable + private final Context context; static Diff readITV2DiffFrom(StreamInput in) throws IOException { return AbstractDiffable.readDiffFrom(ComposableIndexTemplate::new, in); @@ -131,7 +137,7 @@ public ComposableIndexTemplate( @Nullable Long version, @Nullable Map metadata ) { - this(indexPatterns, template, componentTemplates, priority, version, metadata, null); + this(indexPatterns, template, componentTemplates, priority, version, metadata, null, null); } public ComposableIndexTemplate( @@ -142,6 +148,19 @@ public ComposableIndexTemplate( @Nullable Long version, @Nullable Map metadata, @Nullable DataStreamTemplate dataStreamTemplate + ) { + this(indexPatterns, template, componentTemplates, priority, version, metadata, dataStreamTemplate, null); + } + + public ComposableIndexTemplate( + List indexPatterns, + @Nullable Template template, + @Nullable List componentTemplates, + @Nullable Long priority, + @Nullable Long version, + @Nullable Map metadata, + @Nullable DataStreamTemplate dataStreamTemplate, + @Nullable Context context ) { this.indexPatterns = indexPatterns; this.template = template; @@ -150,6 +169,7 @@ public ComposableIndexTemplate( this.version = version; this.metadata = metadata; this.dataStreamTemplate = dataStreamTemplate; + this.context = context; } public ComposableIndexTemplate(StreamInput in) throws IOException { @@ -164,6 +184,11 @@ public ComposableIndexTemplate(StreamInput in) throws IOException { this.version = in.readOptionalVLong(); this.metadata = in.readMap(); this.dataStreamTemplate = in.readOptionalWriteable(DataStreamTemplate::new); + if (in.getVersion().onOrAfter(Version.V_3_0_0)) { + this.context = in.readOptionalWriteable(Context::new); + } else { + this.context = null; + } } public List indexPatterns() { @@ -205,6 +230,10 @@ public 
DataStreamTemplate getDataStreamTemplate() { return dataStreamTemplate; } + public Context context() { + return context; + } + @Override public void writeTo(StreamOutput out) throws IOException { out.writeStringCollection(this.indexPatterns); @@ -219,6 +248,9 @@ public void writeTo(StreamOutput out) throws IOException { out.writeOptionalVLong(this.version); out.writeMap(this.metadata); out.writeOptionalWriteable(dataStreamTemplate); + if (out.getVersion().onOrAfter(Version.V_3_0_0)) { + out.writeOptionalWriteable(context); + } } @Override @@ -243,6 +275,9 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws if (this.dataStreamTemplate != null) { builder.field(DATA_STREAM.getPreferredName(), dataStreamTemplate); } + if (this.context != null) { + builder.field(CONTEXT.getPreferredName(), context); + } builder.endObject(); return builder; } @@ -256,7 +291,8 @@ public int hashCode() { this.priority, this.version, this.metadata, - this.dataStreamTemplate + this.dataStreamTemplate, + this.context ); } @@ -275,7 +311,8 @@ public boolean equals(Object obj) { && Objects.equals(this.priority, other.priority) && Objects.equals(this.version, other.version) && Objects.equals(this.metadata, other.metadata) - && Objects.equals(this.dataStreamTemplate, other.dataStreamTemplate); + && Objects.equals(this.dataStreamTemplate, other.dataStreamTemplate) + && Objects.equals(this.context, other.context); } @Override diff --git a/server/src/main/java/org/opensearch/cluster/metadata/Context.java b/server/src/main/java/org/opensearch/cluster/metadata/Context.java new file mode 100644 index 0000000000000..4bd6134e8a318 --- /dev/null +++ b/server/src/main/java/org/opensearch/cluster/metadata/Context.java @@ -0,0 +1,130 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +package org.opensearch.cluster.metadata; + +import org.opensearch.cluster.AbstractDiffable; +import org.opensearch.common.annotation.ExperimentalApi; +import org.opensearch.core.ParseField; +import org.opensearch.core.common.io.stream.StreamInput; +import org.opensearch.core.common.io.stream.StreamOutput; +import org.opensearch.core.xcontent.ConstructingObjectParser; +import org.opensearch.core.xcontent.ToXContentObject; +import org.opensearch.core.xcontent.XContentBuilder; +import org.opensearch.core.xcontent.XContentParser; + +import java.io.IOException; +import java.util.Map; +import java.util.Objects; + +/** + * Class encapsulating the context metadata associated with an index template/index. 
+ */ +@ExperimentalApi +public class Context extends AbstractDiffable implements ToXContentObject { + + private static final ParseField NAME = new ParseField("name"); + private static final ParseField VERSION = new ParseField("version"); + private static final ParseField PARAMS = new ParseField("params"); + + public static final String LATEST_VERSION = "_latest"; + + private String name; + private String version = LATEST_VERSION; + private Map params; + + public static final ConstructingObjectParser PARSER = new ConstructingObjectParser<>( + "index_template", + false, + a -> new Context((String) a[0], (String) a[1], (Map) a[2]) + ); + + static { + PARSER.declareString(ConstructingObjectParser.constructorArg(), NAME); + PARSER.declareString(ConstructingObjectParser.optionalConstructorArg(), VERSION); + PARSER.declareObject(ConstructingObjectParser.optionalConstructorArg(), (p, c) -> p.map(), PARAMS); + } + + public Context(String name) { + this(name, LATEST_VERSION, Map.of()); + } + + public Context(String name, String version, Map params) { + this.name = name; + if (version != null) { + this.version = version; + } + this.params = params; + } + + public Context(StreamInput in) throws IOException { + this.name = in.readString(); + this.version = in.readOptionalString(); + this.params = in.readMap(); + } + + public String name() { + return name; + } + + public void name(String name) { + this.name = name; + } + + public String version() { + return version; + } + + public void version(String version) { + this.version = version; + } + + public Map params() { + return params; + } + + public void params(Map params) { + this.params = params; + } + + @Override + public void writeTo(StreamOutput out) throws IOException { + out.writeString(name); + out.writeOptionalString(version); + out.writeMap(params); + } + + @Override + public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + builder.startObject(); + builder.field(NAME.getPreferredName(), this.name); + builder.field("version", this.version); + if (params != null) { + builder.field("params", this.params); + } + builder.endObject(); + return builder; + } + + public static Context fromXContent(XContentParser parser) { + return PARSER.apply(parser, null); + } + + @Override + public boolean equals(Object o) { + if (this == o) return true; + if (o == null || getClass() != o.getClass()) return false; + Context context = (Context) o; + return Objects.equals(name, context.name) && Objects.equals(version, context.version) && Objects.equals(params, context.params); + } + + @Override + public int hashCode() { + return Objects.hash(name, version, params); + } +} diff --git a/server/src/main/java/org/opensearch/cluster/metadata/Metadata.java b/server/src/main/java/org/opensearch/cluster/metadata/Metadata.java index 2a54f6444ffda..440b9e267cf0a 100644 --- a/server/src/main/java/org/opensearch/cluster/metadata/Metadata.java +++ b/server/src/main/java/org/opensearch/cluster/metadata/Metadata.java @@ -43,6 +43,7 @@ import org.opensearch.cluster.DiffableUtils; import org.opensearch.cluster.NamedDiffable; import org.opensearch.cluster.NamedDiffableValueSerializer; +import org.opensearch.cluster.applicationtemplates.SystemTemplateMetadata; import org.opensearch.cluster.block.ClusterBlock; import org.opensearch.cluster.block.ClusterBlockLevel; import org.opensearch.cluster.coordination.CoordinationMetadata; @@ -280,6 +281,8 @@ static Custom fromXContent(XContentParser parser, String name) throws IOExceptio private final 
SortedMap indicesLookup; + private final Map> systemTemplatesLookup; + Metadata( String clusterUUID, boolean clusterUUIDCommitted, @@ -297,7 +300,8 @@ static Custom fromXContent(XContentParser parser, String name) throws IOExceptio String[] visibleOpenIndices, String[] allClosedIndices, String[] visibleClosedIndices, - SortedMap indicesLookup + SortedMap indicesLookup, + Map> systemTemplatesLookup ) { this.clusterUUID = clusterUUID; this.clusterUUIDCommitted = clusterUUIDCommitted; @@ -328,6 +332,7 @@ static Custom fromXContent(XContentParser parser, String name) throws IOExceptio this.allClosedIndices = allClosedIndices; this.visibleClosedIndices = visibleClosedIndices; this.indicesLookup = indicesLookup; + this.systemTemplatesLookup = systemTemplatesLookup; } public long version() { @@ -828,6 +833,10 @@ public Map componentTemplates() { .orElse(Collections.emptyMap()); } + public Map> systemTemplatesLookup() { + return systemTemplatesLookup; + } + public Map templatesV2() { return Optional.ofNullable((ComposableIndexTemplateMetadata) this.custom(ComposableIndexTemplateMetadata.TYPE)) .map(ComposableIndexTemplateMetadata::indexTemplates) @@ -1189,6 +1198,8 @@ public static class Builder { private final Map customs; private final Metadata previousMetadata; + private Map> systemTemplatesLookup; + public Builder() { clusterUUID = UNKNOWN_CLUSTER_UUID; indices = new HashMap<>(); @@ -1554,6 +1565,8 @@ public Metadata build() { ? (DataStreamMetadata) this.previousMetadata.customs.get(DataStreamMetadata.TYPE) : null; + buildSystemTemplatesLookup(); + boolean recomputeRequiredforIndicesLookups = (previousMetadata == null) || (indices.equals(previousMetadata.indices) == false) || (previousDataStreamMetadata != null && previousDataStreamMetadata.equals(dataStreamMetadata) == false) @@ -1564,6 +1577,33 @@ public Metadata build() { : buildMetadataWithRecomputedIndicesLookups(); } + private void buildSystemTemplatesLookup() { + if (previousMetadata != null + && Objects.equals( + previousMetadata.customs.get(ComponentTemplateMetadata.TYPE), + this.customs.get(ComponentTemplateMetadata.TYPE) + )) { + systemTemplatesLookup = Collections.unmodifiableMap(previousMetadata.systemTemplatesLookup); + } else { + systemTemplatesLookup = new HashMap<>(); + Optional.ofNullable((ComponentTemplateMetadata) this.customs.get(ComponentTemplateMetadata.TYPE)) + .map(ComponentTemplateMetadata::componentTemplates) + .orElseGet(Collections::emptyMap) + .forEach((k, v) -> { + if (MetadataIndexTemplateService.isSystemTemplate(v)) { + SystemTemplateMetadata templateMetadata = SystemTemplateMetadata.fromComponentTemplate(k); + systemTemplatesLookup.compute(templateMetadata.name(), (ik, iv) -> { + if (iv == null) { + iv = new TreeMap<>(); + } + iv.put(templateMetadata.version(), k); + return iv; + }); + } + }); + } + } + protected Metadata buildMetadataWithPreviousIndicesLookups() { return new Metadata( clusterUUID, @@ -1582,7 +1622,8 @@ protected Metadata buildMetadataWithPreviousIndicesLookups() { Arrays.copyOf(previousMetadata.visibleOpenIndices, previousMetadata.visibleOpenIndices.length), Arrays.copyOf(previousMetadata.allClosedIndices, previousMetadata.allClosedIndices.length), Arrays.copyOf(previousMetadata.visibleClosedIndices, previousMetadata.visibleClosedIndices.length), - Collections.unmodifiableSortedMap(previousMetadata.indicesLookup) + Collections.unmodifiableSortedMap(previousMetadata.indicesLookup), + systemTemplatesLookup ); } @@ -1705,7 +1746,8 @@ protected Metadata buildMetadataWithRecomputedIndicesLookups() 
{ visibleOpenIndicesArray, allClosedIndicesArray, visibleClosedIndicesArray, - indicesLookup + indicesLookup, + systemTemplatesLookup ); } diff --git a/server/src/main/java/org/opensearch/cluster/metadata/MetadataIndexTemplateService.java b/server/src/main/java/org/opensearch/cluster/metadata/MetadataIndexTemplateService.java index 5b03d3f7b19ce..7bc3d279513cd 100644 --- a/server/src/main/java/org/opensearch/cluster/metadata/MetadataIndexTemplateService.java +++ b/server/src/main/java/org/opensearch/cluster/metadata/MetadataIndexTemplateService.java @@ -42,6 +42,9 @@ import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.cluster.ClusterState; import org.opensearch.cluster.ClusterStateUpdateTask; +import org.opensearch.cluster.applicationtemplates.ClusterStateSystemTemplateLoader; +import org.opensearch.cluster.applicationtemplates.SystemTemplateMetadata; +import org.opensearch.cluster.applicationtemplates.SystemTemplatesService; import org.opensearch.cluster.service.ClusterManagerTaskKeys; import org.opensearch.cluster.service.ClusterManagerTaskThrottler; import org.opensearch.cluster.service.ClusterService; @@ -53,9 +56,11 @@ import org.opensearch.common.inject.Inject; import org.opensearch.common.logging.HeaderWarning; import org.opensearch.common.regex.Regex; +import org.opensearch.common.settings.ClusterSettings; import org.opensearch.common.settings.IndexScopedSettings; import org.opensearch.common.settings.Settings; import org.opensearch.common.unit.TimeValue; +import org.opensearch.common.util.FeatureFlags; import org.opensearch.common.util.set.Sets; import org.opensearch.common.xcontent.XContentFactory; import org.opensearch.core.action.ActionListener; @@ -72,6 +77,7 @@ import org.opensearch.indices.IndexTemplateMissingException; import org.opensearch.indices.IndicesService; import org.opensearch.indices.InvalidIndexTemplateException; +import org.opensearch.threadpool.ThreadPool; import java.io.IOException; import java.io.UncheckedIOException; @@ -94,6 +100,7 @@ import static org.opensearch.cluster.metadata.MetadataCreateDataStreamService.validateTimestampFieldMapping; import static org.opensearch.cluster.metadata.MetadataCreateIndexService.validateRefreshIntervalSettings; +import static org.opensearch.common.util.concurrent.ThreadContext.ACTION_ORIGIN_TRANSIENT_NAME; import static org.opensearch.indices.cluster.IndicesClusterStateService.AllocatedIndices.IndexRemovalReason.NO_LONGER_ASSIGNED; /** @@ -116,6 +123,7 @@ public class MetadataIndexTemplateService { private final ClusterManagerTaskThrottler.ThrottlingKey removeIndexTemplateV2TaskKey; private final ClusterManagerTaskThrottler.ThrottlingKey createComponentTemplateTaskKey; private final ClusterManagerTaskThrottler.ThrottlingKey removeComponentTemplateTaskKey; + private final ThreadPool threadPool; @Inject public MetadataIndexTemplateService( @@ -124,7 +132,8 @@ public MetadataIndexTemplateService( AliasValidator aliasValidator, IndicesService indicesService, IndexScopedSettings indexScopedSettings, - NamedXContentRegistry xContentRegistry + NamedXContentRegistry xContentRegistry, + ThreadPool threadPool ) { this.clusterService = clusterService; this.aliasValidator = aliasValidator; @@ -132,6 +141,7 @@ public MetadataIndexTemplateService( this.metadataCreateIndexService = metadataCreateIndexService; this.indexScopedSettings = indexScopedSettings; this.xContentRegistry = xContentRegistry; + this.threadPool = threadPool; // Task is onboarded for throttling, it will get retried from associated 
TransportClusterManagerNodeAction. createIndexTemplateTaskKey = clusterService.registerClusterManagerTask(ClusterManagerTaskKeys.CREATE_INDEX_TEMPLATE_KEY, true); @@ -209,6 +219,7 @@ public void putComponentTemplate( final ComponentTemplate template, final ActionListener listener ) { + validateComponentTemplateRequest(template); clusterService.submitStateUpdateTask( "create-component-template [" + name + "], cause [" + cause + "]", new ClusterStateUpdateTask(Priority.URGENT) { @@ -378,6 +389,7 @@ public void removeComponentTemplate( final ActionListener listener ) { validateNotInUse(clusterService.state().metadata(), name); + validateComponentTemplateRequest(clusterService.state().metadata().componentTemplates().get(name)); clusterService.submitStateUpdateTask("remove-component-template [" + name + "]", new ClusterStateUpdateTask(Priority.URGENT) { @Override @@ -439,7 +451,12 @@ static void validateNotInUse(Metadata metadata, String templateNameOrWildcard) { .collect(Collectors.toSet()); final Set componentsBeingUsed = new HashSet<>(); final List templatesStillUsing = metadata.templatesV2().entrySet().stream().filter(e -> { - Set intersecting = Sets.intersection(new HashSet<>(e.getValue().composedOf()), matchingComponentTemplates); + Set referredComponentTemplates = new HashSet<>(e.getValue().composedOf()); + String systemTemplateUsed = findContextTemplate(metadata, e.getValue().context()); + if (systemTemplateUsed != null) { + referredComponentTemplates.add(systemTemplateUsed); + } + Set intersecting = Sets.intersection(referredComponentTemplates, matchingComponentTemplates); if (intersecting.size() > 0) { componentsBeingUsed.addAll(intersecting); return true; @@ -469,7 +486,7 @@ public void putIndexTemplateV2( final ComposableIndexTemplate template, final ActionListener listener ) { - validateV2TemplateRequest(clusterService.state().metadata(), name, template); + validateV2TemplateRequest(clusterService.state().metadata(), name, template, clusterService.getClusterSettings()); clusterService.submitStateUpdateTask( "create-index-template-v2 [" + name + "], cause [" + cause + "]", new ClusterStateUpdateTask(Priority.URGENT) { @@ -502,7 +519,12 @@ public void clusterStateProcessed(String source, ClusterState oldState, ClusterS ); } - public static void validateV2TemplateRequest(Metadata metadata, String name, ComposableIndexTemplate template) { + public static void validateV2TemplateRequest( + Metadata metadata, + String name, + ComposableIndexTemplate template, + ClusterSettings settings + ) { if (template.indexPatterns().stream().anyMatch(Regex::isMatchAllPattern)) { Settings mergedSettings = resolveSettings(metadata, template); if (IndexMetadata.INDEX_HIDDEN_SETTING.exists(mergedSettings)) { @@ -514,6 +536,8 @@ public static void validateV2TemplateRequest(Metadata metadata, String name, Com } final Map componentTemplates = metadata.componentTemplates(); + final boolean isContextAllowed = FeatureFlags.isEnabled(FeatureFlags.APPLICATION_BASED_CONFIGURATION_TEMPLATES); + final List missingComponentTemplates = template.composedOf() .stream() .filter(componentTemplate -> componentTemplates.containsKey(componentTemplate) == false) @@ -525,6 +549,59 @@ public static void validateV2TemplateRequest(Metadata metadata, String name, Com "index template [" + name + "] specifies component templates " + missingComponentTemplates + " that do not exist" ); } + + if (template.context() != null && !isContextAllowed) { + throw new InvalidIndexTemplateException( + name, + "index template [" + + name + + "] 
specifies a context which cannot be used without enabling: " + SystemTemplatesService.SETTING_APPLICATION_BASED_CONFIGURATION_TEMPLATES_ENABLED.getKey() + ); + } + + if (isContextAllowed + && template.composedOf().stream().anyMatch(componentTemplate -> isSystemTemplate(componentTemplates.get(componentTemplate)))) { + throw new InvalidIndexTemplateException( + name, + "index template [" + name + "] specifies component templates which can only be used in context." + ); + } + + if (template.context() != null && findContextTemplate(metadata, template.context()) == null) { + throw new InvalidIndexTemplateException( + name, + "index template [" + name + "] specifies a context which is not loaded on the cluster." + ); + } + } + + private void validateComponentTemplateRequest(ComponentTemplate componentTemplate) { + if (isSystemTemplate(componentTemplate) + && !ClusterStateSystemTemplateLoader.TEMPLATE_LOADER_IDENTIFIER.equals( + threadPool.getThreadContext().getTransient(ACTION_ORIGIN_TRANSIENT_NAME) + )) { + throw new IllegalArgumentException("A system template can only be created/updated/deleted with a repository"); + } + } + + private static String findContextTemplate(Metadata metadata, Context context) { + if (context == null) { + return null; + } + final boolean searchSpecificVersion = !Context.LATEST_VERSION.equals(context.version()); + return Optional.ofNullable(metadata.systemTemplatesLookup()) + .map(coll -> coll.get(context.name())) + .map(coll -> coll.get(searchSpecificVersion ? Long.parseLong(context.version()) : coll.lastKey())) + .orElse(null); + } + + public static boolean isSystemTemplate(ComponentTemplate componentTemplate) { + return Optional.ofNullable(componentTemplate) + .map(ComponentTemplate::metadata) + .map(md -> md.get(ClusterStateSystemTemplateLoader.TEMPLATE_TYPE_KEY)) + .filter(ob -> SystemTemplateMetadata.COMPONENT_TEMPLATE_TYPE.equals(ob.toString())) + .isPresent(); } public ClusterState addIndexTemplateV2( @@ -613,7 +690,8 @@ public ClusterState addIndexTemplateV2( template.priority(), template.version(), template.metadata(), - template.getDataStreamTemplate() + template.getDataStreamTemplate(), + template.context() ); } @@ -1140,7 +1218,7 @@ public static List collectMappings(final ClusterState state, .map(Template::mappings) .filter(Objects::nonNull) .collect(Collectors.toCollection(LinkedList::new)); - // Add the actual index template's mappings, since it takes the highest precedence + // Add the actual index template's mappings, since it takes the next precedence Optional.ofNullable(template.template()).map(Template::mappings).ifPresent(mappings::add); if (template.getDataStreamTemplate() != null && indexName.startsWith(DataStream.BACKING_INDEX_PREFIX)) { // add a default mapping for the timestamp field, at the lowest precedence, to make bootstrapping data streams more @@ -1165,6 +1243,15 @@ public static List collectMappings(final ClusterState state, }) .ifPresent(mappings::add); } + + // Now use context mappings which take the highest precedence + Optional.ofNullable(template.context()) + .map(ctx -> findContextTemplate(state.metadata(), ctx)) + .map(name -> state.metadata().componentTemplates().get(name)) + .map(ComponentTemplate::template) + .map(Template::mappings) + .ifPresent(mappings::add); + return Collections.unmodifiableList(mappings); } @@ -1226,8 +1313,14 @@ private static Settings resolveSettings(Metadata metadata, ComposableIndexTempla Settings.Builder templateSettings = Settings.builder(); componentSettings.forEach(templateSettings::put); -
// Add the actual index template's settings to the end, since it takes the highest precedence. + // Add the actual index template's settings now, since it takes the next precedence. Optional.ofNullable(template.template()).map(Template::settings).ifPresent(templateSettings::put); + + // Add the template referred by context since it will take the highest precedence. + final String systemTemplate = findContextTemplate(metadata, template.context()); + final ComponentTemplate componentTemplate = metadata.componentTemplates().get(systemTemplate); + Optional.ofNullable(componentTemplate).map(ComponentTemplate::template).map(Template::settings).ifPresent(templateSettings::put); + return templateSettings.build(); } @@ -1269,8 +1362,16 @@ public static List> resolveAliases(final Metadata met .filter(Objects::nonNull) .collect(Collectors.toList()); - // Add the actual index template's aliases to the end if they exist + // Add the actual index template's aliases now if they exist Optional.ofNullable(template.template()).map(Template::aliases).ifPresent(aliases::add); + + // Now use context referenced template's aliases which take the highest precedence + if (template.context() != null) { + final String systemTemplate = findContextTemplate(metadata, template.context()); + final ComponentTemplate componentTemplate = metadata.componentTemplates().get(systemTemplate); + Optional.ofNullable(componentTemplate.template()).map(Template::aliases).ifPresent(aliases::add); + } + // Aliases are applied in order, but subsequent alias configuration from the same name is // ignored, so in order for the order to be correct, alias configuration should be in order // of precedence (with the index template first) diff --git a/server/src/test/java/org/opensearch/cluster/applicationtemplates/SystemTemplatesServiceTests.java b/server/src/test/java/org/opensearch/cluster/applicationtemplates/SystemTemplatesServiceTests.java index 4addf3802b40d..affb017264fdf 100644 --- a/server/src/test/java/org/opensearch/cluster/applicationtemplates/SystemTemplatesServiceTests.java +++ b/server/src/test/java/org/opensearch/cluster/applicationtemplates/SystemTemplatesServiceTests.java @@ -32,39 +32,51 @@ public class SystemTemplatesServiceTests extends OpenSearchTestCase { public void testSystemTemplatesLoaded() throws IOException { setupService(true); - systemTemplatesService.onClusterManager(); - SystemTemplatesService.Stats stats = systemTemplatesService.stats(); - assertNotNull(stats); - assertEquals(stats.getTemplatesLoaded(), 1L); - assertEquals(stats.getFailedLoadingTemplates(), 0L); - assertEquals(stats.getFailedLoadingRepositories(), 1L); + // First time load should happen, second time should short circuit. 
+ for (int iter = 1; iter <= 2; iter++) { + systemTemplatesService.onClusterManager(); + SystemTemplatesService.Stats stats = systemTemplatesService.stats(); + assertNotNull(stats); + assertEquals(stats.getTemplatesLoaded(), iter % 2); + assertEquals(stats.getFailedLoadingTemplates(), 0L); + assertEquals(stats.getFailedLoadingRepositories(), iter % 2); + } } - public void testSystemTemplatesVerify() throws IOException { + public void testSystemTemplatesVerifyAndLoad() throws IOException { setupService(false); systemTemplatesService.verifyRepositories(); - SystemTemplatesService.Stats stats = systemTemplatesService.stats(); assertNotNull(stats); assertEquals(stats.getTemplatesLoaded(), 0L); assertEquals(stats.getFailedLoadingTemplates(), 0L); assertEquals(stats.getFailedLoadingRepositories(), 0L); + + systemTemplatesService.onClusterManager(); + stats = systemTemplatesService.stats(); + assertNotNull(stats); + assertEquals(stats.getTemplatesLoaded(), 1L); + assertEquals(stats.getFailedLoadingTemplates(), 0L); + assertEquals(stats.getFailedLoadingRepositories(), 0L); } public void testSystemTemplatesVerifyWithFailingRepository() throws IOException { setupService(true); - assertThrows(IllegalStateException.class, () -> systemTemplatesService.verifyRepositories()); + // Do it multiple times to ensure verify checks are always executed. + for (int i = 0; i < 2; i++) { + assertThrows(IllegalStateException.class, () -> systemTemplatesService.verifyRepositories()); - SystemTemplatesService.Stats stats = systemTemplatesService.stats(); - assertNotNull(stats); - assertEquals(stats.getTemplatesLoaded(), 0L); - assertEquals(stats.getFailedLoadingTemplates(), 0L); - assertEquals(stats.getFailedLoadingRepositories(), 1L); + SystemTemplatesService.Stats stats = systemTemplatesService.stats(); + assertNotNull(stats); + assertEquals(stats.getTemplatesLoaded(), 0L); + assertEquals(stats.getFailedLoadingTemplates(), 0L); + assertEquals(stats.getFailedLoadingRepositories(), 1L); + } } - void setupService(boolean errorFromMockPlugin) throws IOException { + private void setupService(boolean errorFromMockPlugin) throws IOException { FeatureFlags.initializeFeatureFlags(Settings.builder().put(FeatureFlags.APPLICATION_BASED_CONFIGURATION_TEMPLATES, true).build()); ThreadPool mockPool = Mockito.mock(ThreadPool.class); diff --git a/server/src/test/java/org/opensearch/cluster/metadata/MetadataIndexTemplateServiceTests.java b/server/src/test/java/org/opensearch/cluster/metadata/MetadataIndexTemplateServiceTests.java index 6643d6e13289b..f26f45b69d133 100644 --- a/server/src/test/java/org/opensearch/cluster/metadata/MetadataIndexTemplateServiceTests.java +++ b/server/src/test/java/org/opensearch/cluster/metadata/MetadataIndexTemplateServiceTests.java @@ -36,6 +36,8 @@ import org.opensearch.action.admin.indices.alias.Alias; import org.opensearch.action.support.master.AcknowledgedResponse; import org.opensearch.cluster.ClusterState; +import org.opensearch.cluster.applicationtemplates.ClusterStateSystemTemplateLoader; +import org.opensearch.cluster.applicationtemplates.SystemTemplateMetadata; import org.opensearch.cluster.metadata.MetadataIndexTemplateService.PutRequest; import org.opensearch.cluster.routing.allocation.AwarenessReplicaBalance; import org.opensearch.cluster.service.ClusterService; @@ -45,6 +47,8 @@ import org.opensearch.common.settings.Settings; import org.opensearch.common.settings.SettingsException; import org.opensearch.common.unit.TimeValue; +import org.opensearch.common.util.FeatureFlags; +import 
org.opensearch.common.util.concurrent.ThreadContext; import org.opensearch.common.xcontent.LoggingDeprecationHandler; import org.opensearch.common.xcontent.XContentFactory; import org.opensearch.core.action.ActionListener; @@ -53,6 +57,8 @@ import org.opensearch.core.xcontent.NamedXContentRegistry; import org.opensearch.core.xcontent.XContentParser; import org.opensearch.env.Environment; +import org.opensearch.index.codec.CodecService; +import org.opensearch.index.engine.EngineConfig; import org.opensearch.index.mapper.MapperParsingException; import org.opensearch.index.mapper.MapperService; import org.opensearch.indices.DefaultRemoteStoreSettings; @@ -62,6 +68,7 @@ import org.opensearch.indices.SystemIndices; import org.opensearch.repositories.RepositoriesService; import org.opensearch.test.OpenSearchSingleNodeTestCase; +import org.opensearch.threadpool.ThreadPool; import java.io.IOException; import java.util.ArrayList; @@ -76,10 +83,14 @@ import java.util.concurrent.CountDownLatch; import java.util.concurrent.TimeUnit; import java.util.concurrent.atomic.AtomicReference; +import java.util.function.Function; import java.util.stream.Collectors; import static java.util.Collections.singletonList; +import static org.opensearch.cluster.applicationtemplates.ClusterStateSystemTemplateLoader.TEMPLATE_LOADER_IDENTIFIER; +import static org.opensearch.cluster.applicationtemplates.SystemTemplateMetadata.fromComponentTemplateInfo; import static org.opensearch.common.settings.Settings.builder; +import static org.opensearch.common.util.concurrent.ThreadContext.ACTION_ORIGIN_TRANSIENT_NAME; import static org.opensearch.env.Environment.PATH_HOME_SETTING; import static org.opensearch.index.mapper.DataStreamFieldMapper.Defaults.TIMESTAMP_FIELD; import static org.opensearch.indices.ShardLimitValidatorTests.createTestShardLimitService; @@ -656,6 +667,306 @@ public void onFailure(Exception e) { ); } + public void testPutGlobalV2TemplateWhichProvidesContextWithContextDisabled() throws Exception { + MetadataIndexTemplateService metadataIndexTemplateService = getMetadataIndexTemplateService(); + ComposableIndexTemplate globalIndexTemplate = new ComposableIndexTemplate( + List.of("*"), + null, + List.of(), + null, + null, + null, + null, + new Context("any") + ); + InvalidIndexTemplateException ex = expectThrows( + InvalidIndexTemplateException.class, + () -> metadataIndexTemplateService.putIndexTemplateV2( + "testing", + true, + "template-referencing-context-as-ct", + TimeValue.timeValueSeconds(30L), + globalIndexTemplate, + new ActionListener() { + @Override + public void onResponse(AcknowledgedResponse response) { + fail("the listener should not be invoked as validation should fail"); + } + + @Override + public void onFailure(Exception e) { + fail("the listener should not be invoked as validation should fail"); + } + } + ) + ); + assertTrue( + "Invalid exception message." 
+ ex.getMessage(), + ex.getMessage().contains("specifies a context which cannot be used without enabling") + ); + } + + public void testPutGlobalV2TemplateWhichProvidesContextNotPresentInState() throws Exception { + FeatureFlags.initializeFeatureFlags(Settings.builder().put(FeatureFlags.APPLICATION_BASED_CONFIGURATION_TEMPLATES, true).build()); + MetadataIndexTemplateService metadataIndexTemplateService = getMetadataIndexTemplateService(); + ComposableIndexTemplate globalIndexTemplate = new ComposableIndexTemplate( + List.of("*"), + null, + List.of(), + null, + null, + null, + null, + new Context("any") + ); + InvalidIndexTemplateException ex = expectThrows( + InvalidIndexTemplateException.class, + () -> metadataIndexTemplateService.putIndexTemplateV2( + "testing", + true, + "template-referencing-context-as-ct", + TimeValue.timeValueSeconds(30L), + globalIndexTemplate, + new ActionListener() { + @Override + public void onResponse(AcknowledgedResponse response) { + fail("the listener should not be invoked as validation should fail"); + } + + @Override + public void onFailure(Exception e) { + fail("the listener should not be invoked as validation should fail"); + } + } + ) + ); + assertTrue( + "Invalid exception message." + ex.getMessage(), + ex.getMessage().contains("specifies a context which is not loaded on the cluster") + ); + } + + public void testPutGlobalV2TemplateWhichProvidesContextWithNonExistingVersion() throws Exception { + FeatureFlags.initializeFeatureFlags(Settings.builder().put(FeatureFlags.APPLICATION_BASED_CONFIGURATION_TEMPLATES, true).build()); + MetadataIndexTemplateService metadataIndexTemplateService = getMetadataIndexTemplateService(); + + Function templateApplier = codec -> new Template( + Settings.builder().put(EngineConfig.INDEX_CODEC_SETTING.getKey(), codec).build(), + null, + null + ); + ComponentTemplate systemTemplate = new ComponentTemplate( + templateApplier.apply(CodecService.BEST_COMPRESSION_CODEC), + 1L, + Map.of(ClusterStateSystemTemplateLoader.TEMPLATE_TYPE_KEY, SystemTemplateMetadata.COMPONENT_TEMPLATE_TYPE, "_version", 1L) + ); + SystemTemplateMetadata systemTemplateMetadata = fromComponentTemplateInfo("ct-best-compression-codec" + System.nanoTime(), 1); + + CountDownLatch waitToCreateComponentTemplate = new CountDownLatch(1); + ActionListener createComponentTemplateListener = new ActionListener() { + + @Override + public void onResponse(AcknowledgedResponse response) { + waitToCreateComponentTemplate.countDown(); + } + + @Override + public void onFailure(Exception e) { + fail("expecting the component template PUT to succeed but got: " + e.getMessage()); + } + }; + + ThreadContext threadContext = getInstanceFromNode(ThreadPool.class).getThreadContext(); + try (ThreadContext.StoredContext ignore = threadContext.stashContext()) { + getInstanceFromNode(ThreadPool.class).getThreadContext().putTransient(ACTION_ORIGIN_TRANSIENT_NAME, TEMPLATE_LOADER_IDENTIFIER); + metadataIndexTemplateService.putComponentTemplate( + "test", + true, + systemTemplateMetadata.fullyQualifiedName(), + TimeValue.timeValueSeconds(30L), + systemTemplate, + createComponentTemplateListener + ); + } + + assertTrue("Could not create component templates", waitToCreateComponentTemplate.await(10, TimeUnit.SECONDS)); + + Context context = new Context(systemTemplateMetadata.name(), Long.toString(2L), Map.of()); + ComposableIndexTemplate globalIndexTemplate = new ComposableIndexTemplate( + List.of("*"), + templateApplier.apply(CodecService.LZ4), + List.of(), + null, + null, + null, + null, + 
context + ); + + InvalidIndexTemplateException ex = expectThrows( + InvalidIndexTemplateException.class, + () -> metadataIndexTemplateService.putIndexTemplateV2( + "testing", + true, + "template-referencing-context-as-ct", + TimeValue.timeValueSeconds(30L), + globalIndexTemplate, + new ActionListener() { + @Override + public void onResponse(AcknowledgedResponse response) { + fail("the listener should not be invoked as validation should fail"); + } + + @Override + public void onFailure(Exception e) { + fail("the listener should not be invoked as validation should fail"); + } + } + ) + ); + assertTrue( + "Invalid exception message." + ex.getMessage(), + ex.getMessage().contains("specifies a context which is not loaded on the cluster") + ); + } + + public void testPutGlobalV2TemplateWhichProvidesContextInComposedOfSection() throws Exception { + FeatureFlags.initializeFeatureFlags(Settings.builder().put(FeatureFlags.APPLICATION_BASED_CONFIGURATION_TEMPLATES, true).build()); + MetadataIndexTemplateService metadataIndexTemplateService = getMetadataIndexTemplateService(); + + Function templateApplier = codec -> new Template( + Settings.builder().put(EngineConfig.INDEX_CODEC_SETTING.getKey(), codec).build(), + null, + null + ); + ComponentTemplate systemTemplate = new ComponentTemplate( + templateApplier.apply(CodecService.BEST_COMPRESSION_CODEC), + 1L, + Map.of(ClusterStateSystemTemplateLoader.TEMPLATE_TYPE_KEY, SystemTemplateMetadata.COMPONENT_TEMPLATE_TYPE, "_version", 1L) + ); + SystemTemplateMetadata systemTemplateMetadata = fromComponentTemplateInfo("context-best-compression-codec" + System.nanoTime(), 1); + + CountDownLatch waitToCreateComponentTemplate = new CountDownLatch(1); + ActionListener createComponentTemplateListener = new ActionListener() { + + @Override + public void onResponse(AcknowledgedResponse response) { + waitToCreateComponentTemplate.countDown(); + } + + @Override + public void onFailure(Exception e) { + fail("expecting the component template PUT to succeed but got: " + e.getMessage()); + } + }; + ThreadContext context = getInstanceFromNode(ThreadPool.class).getThreadContext(); + try (ThreadContext.StoredContext ignore = context.stashContext()) { + getInstanceFromNode(ThreadPool.class).getThreadContext().putTransient(ACTION_ORIGIN_TRANSIENT_NAME, TEMPLATE_LOADER_IDENTIFIER); + metadataIndexTemplateService.putComponentTemplate( + "test", + true, + systemTemplateMetadata.fullyQualifiedName(), + TimeValue.timeValueSeconds(30L), + systemTemplate, + createComponentTemplateListener + ); + } + assertTrue("Could not create component templates", waitToCreateComponentTemplate.await(10, TimeUnit.SECONDS)); + + ComposableIndexTemplate globalIndexTemplate = new ComposableIndexTemplate( + List.of("*"), + templateApplier.apply(CodecService.LZ4), + List.of(systemTemplateMetadata.fullyQualifiedName()), + null, + null, + null, + null + ); + InvalidIndexTemplateException ex = expectThrows( + InvalidIndexTemplateException.class, + () -> metadataIndexTemplateService.putIndexTemplateV2( + "testing", + true, + "template-referencing-context-as-ct", + TimeValue.timeValueSeconds(30L), + globalIndexTemplate, + new ActionListener() { + @Override + public void onResponse(AcknowledgedResponse response) { + fail("the listener should not be invoked as validation should fail"); + } + + @Override + public void onFailure(Exception e) { + fail("the listener should not be invoked as validation should fail"); + } + } + ) + ); + assertTrue( + "Invalid exception message." 
+ ex.getMessage(), + ex.getMessage().contains("specifies component templates which can only be used in context") + ); + } + + public void testPutGlobalV2TemplateWhichProvidesContextWithSpecificVersion() throws Exception { + verifyTemplateCreationUsingContext("1"); + } + + public void testPutGlobalV2TemplateWhichProvidesContextWithLatestVersion() throws Exception { + verifyTemplateCreationUsingContext("_latest"); + } + + public void testModifySystemTemplateViaUnknownSource() throws Exception { + FeatureFlags.initializeFeatureFlags(Settings.builder().put(FeatureFlags.APPLICATION_BASED_CONFIGURATION_TEMPLATES, true).build()); + MetadataIndexTemplateService metadataIndexTemplateService = getMetadataIndexTemplateService(); + + Function templateApplier = codec -> new Template( + Settings.builder().put(EngineConfig.INDEX_CODEC_SETTING.getKey(), codec).build(), + null, + null + ); + + ComponentTemplate systemTemplate = new ComponentTemplate( + templateApplier.apply(CodecService.BEST_COMPRESSION_CODEC), + 1L, + Map.of(ClusterStateSystemTemplateLoader.TEMPLATE_TYPE_KEY, SystemTemplateMetadata.COMPONENT_TEMPLATE_TYPE, "_version", 1L) + ); + SystemTemplateMetadata systemTemplateMetadata = fromComponentTemplateInfo("ct-best-compression-codec" + System.nanoTime(), 1); + + IllegalArgumentException ex = expectThrows( + IllegalArgumentException.class, + () -> metadataIndexTemplateService.putComponentTemplate( + "test", + true, + systemTemplateMetadata.fullyQualifiedName(), + TimeValue.timeValueSeconds(30L), + systemTemplate, + ActionListener.wrap(() -> {}) + ) + ); + assertTrue( + "Invalid exception message." + ex.getMessage(), + ex.getMessage().contains("A system template can only be created/updated/deleted with a repository") + ); + } + + public void testResolveSettingsWithContextVersion() throws Exception { + ClusterService clusterService = node().injector().getInstance(ClusterService.class); + final String indexTemplateName = verifyTemplateCreationUsingContext("1"); + + Settings settings = MetadataIndexTemplateService.resolveSettings(clusterService.state().metadata(), indexTemplateName); + assertThat(settings.get("index.codec"), equalTo(CodecService.BEST_COMPRESSION_CODEC)); + } + + public void testResolveSettingsWithContextLatest() throws Exception { + ClusterService clusterService = node().injector().getInstance(ClusterService.class); + final String indexTemplateName = verifyTemplateCreationUsingContext(Context.LATEST_VERSION); + + Settings settings = MetadataIndexTemplateService.resolveSettings(clusterService.state().metadata(), indexTemplateName); + assertThat(settings.get("index.codec"), equalTo(CodecService.ZLIB)); + } + /** * Test that if we have a pre-existing v2 template and put a "*" v1 template, we generate a warning */ @@ -1513,6 +1824,16 @@ public void testResolveAliases() throws Exception { ComponentTemplate ct2 = new ComponentTemplate(new Template(null, null, a2), null, null); state = service.addComponentTemplate(state, true, "ct_high", ct1); state = service.addComponentTemplate(state, true, "ct_low", ct2); + + Map a4 = Map.of("sys", AliasMetadata.builder("sys").build()); + ComponentTemplate sysTemplate = new ComponentTemplate( + new Template(null, null, a4), + 1L, + Map.of(ClusterStateSystemTemplateLoader.TEMPLATE_TYPE_KEY, SystemTemplateMetadata.COMPONENT_TEMPLATE_TYPE, "_version", 1) + ); + SystemTemplateMetadata systemTemplateMetadata = SystemTemplateMetadata.fromComponentTemplateInfo("sys-template", 1L); + state = service.addComponentTemplate(state, true,
systemTemplateMetadata.fullyQualifiedName(), sysTemplate); + ComposableIndexTemplate it = new ComposableIndexTemplate( Collections.singletonList("i*"), new Template(null, null, a3), @@ -1520,14 +1841,15 @@ public void testResolveAliases() throws Exception { 0L, 1L, null, - null + null, + new Context(systemTemplateMetadata.name()) ); state = service.addIndexTemplateV2(state, true, "my-template", it); List> resolvedAliases = MetadataIndexTemplateService.resolveAliases(state.metadata(), "my-template"); - // These should be order of precedence, so the index template (a3), then ct_high (a1), then ct_low (a2) - assertThat(resolvedAliases, equalTo(Arrays.asList(a3, a1, a2))); + // These should be order of precedence, so the context(a4), index template (a3), then ct_high (a1), then ct_low (a2) + assertThat(resolvedAliases, equalTo(Arrays.asList(a4, a3, a1, a2))); } public void testAddInvalidTemplate() throws Exception { @@ -2067,7 +2389,8 @@ private static List putTemplate(NamedXContentRegistry xContentRegistr new AliasValidator(), null, new IndexScopedSettings(Settings.EMPTY, IndexScopedSettings.BUILT_IN_INDEX_SETTINGS), - xContentRegistry + xContentRegistry, + null ); final List throwables = new ArrayList<>(); @@ -2190,4 +2513,132 @@ public static void assertTemplatesEqual(ComposableIndexTemplate actual, Composab } } } + + private String verifyTemplateCreationUsingContext(String contextVersion) throws Exception { + FeatureFlags.initializeFeatureFlags(Settings.builder().put(FeatureFlags.APPLICATION_BASED_CONFIGURATION_TEMPLATES, true).build()); + MetadataIndexTemplateService metadataIndexTemplateService = getMetadataIndexTemplateService(); + + Function templateApplier = codec -> new Template( + Settings.builder().put(EngineConfig.INDEX_CODEC_SETTING.getKey(), codec).build(), + null, + null + ); + + ComponentTemplate componentTemplate = new ComponentTemplate(templateApplier.apply(CodecService.DEFAULT_CODEC), 1L, new HashMap<>()); + + ComponentTemplate systemTemplate = new ComponentTemplate( + templateApplier.apply(CodecService.BEST_COMPRESSION_CODEC), + 1L, + Map.of(ClusterStateSystemTemplateLoader.TEMPLATE_TYPE_KEY, SystemTemplateMetadata.COMPONENT_TEMPLATE_TYPE, "_version", 1L) + ); + SystemTemplateMetadata systemTemplateMetadata = fromComponentTemplateInfo("ct-best-compression-codec" + System.nanoTime(), 1); + + ComponentTemplate systemTemplateV2 = new ComponentTemplate( + templateApplier.apply(CodecService.ZLIB), + 2L, + Map.of(ClusterStateSystemTemplateLoader.TEMPLATE_TYPE_KEY, SystemTemplateMetadata.COMPONENT_TEMPLATE_TYPE, "_version", 2L) + ); + SystemTemplateMetadata systemTemplateV2Metadata = fromComponentTemplateInfo(systemTemplateMetadata.name(), 2); + + CountDownLatch waitToCreateComponentTemplate = new CountDownLatch(3); + ActionListener createComponentTemplateListener = new ActionListener() { + + @Override + public void onResponse(AcknowledgedResponse response) { + waitToCreateComponentTemplate.countDown(); + } + + @Override + public void onFailure(Exception e) { + fail("expecting the component template PUT to succeed but got: " + e.getMessage()); + } + }; + + String componentTemplateName = "ct-default-codec" + System.nanoTime(); + metadataIndexTemplateService.putComponentTemplate( + "test", + true, + componentTemplateName, + TimeValue.timeValueSeconds(30L), + componentTemplate, + createComponentTemplateListener + ); + + ThreadContext threadContext = getInstanceFromNode(ThreadPool.class).getThreadContext(); + try (ThreadContext.StoredContext ignore = threadContext.stashContext()) { + 
getInstanceFromNode(ThreadPool.class).getThreadContext().putTransient(ACTION_ORIGIN_TRANSIENT_NAME, TEMPLATE_LOADER_IDENTIFIER); + metadataIndexTemplateService.putComponentTemplate( + "test", + true, + systemTemplateMetadata.fullyQualifiedName(), + TimeValue.timeValueSeconds(30L), + systemTemplate, + createComponentTemplateListener + ); + + metadataIndexTemplateService.putComponentTemplate( + "test", + true, + systemTemplateV2Metadata.fullyQualifiedName(), + TimeValue.timeValueSeconds(30L), + systemTemplateV2, + createComponentTemplateListener + ); + } + + assertTrue("Could not create component templates", waitToCreateComponentTemplate.await(10, TimeUnit.SECONDS)); + + Context context = new Context(systemTemplateMetadata.name(), contextVersion, Map.of()); + ComposableIndexTemplate globalIndexTemplate = new ComposableIndexTemplate( + List.of("*"), + templateApplier.apply(CodecService.LZ4), + List.of(componentTemplateName), + null, + null, + null, + null, + context + ); + + String indexTemplateName = "template-referencing-ct-and-context"; + CountDownLatch waitForIndexTemplate = new CountDownLatch(1); + metadataIndexTemplateService.putIndexTemplateV2( + "testing", + true, + indexTemplateName, + TimeValue.timeValueSeconds(30L), + globalIndexTemplate, + new ActionListener() { + @Override + public void onResponse(AcknowledgedResponse response) { + waitForIndexTemplate.countDown(); + } + + @Override + public void onFailure(Exception e) { + fail("the listener should not be invoked as the template should succeed"); + } + } + ); + assertTrue("Expected index template to have been created.", waitForIndexTemplate.await(10, TimeUnit.SECONDS)); + assertTemplatesEqual( + node().injector().getInstance(ClusterService.class).state().metadata().templatesV2().get(indexTemplateName), + globalIndexTemplate + ); + + return indexTemplateName; + } + + @Override + protected boolean resetNodeAfterTest() { + return true; + } + + @Override + protected Settings featureFlagSettings() { + return Settings.builder() + .put(super.featureFlagSettings()) + .put(FeatureFlags.APPLICATION_BASED_CONFIGURATION_TEMPLATES, false) + .build(); + } } From 712ebfdac5c1a22acb0f6aff55170ce8336a718d Mon Sep 17 00:00:00 2001 From: Sooraj Sinha <81695996+soosinha@users.noreply.github.com> Date: Wed, 24 Jul 2024 20:28:15 +0530 Subject: [PATCH 120/167] Add changelog for remote state multi part upload fix (#14958) Signed-off-by: Sooraj Sinha --- CHANGELOG.md | 1 + 1 file changed, 1 insertion(+) diff --git a/CHANGELOG.md b/CHANGELOG.md index 00560d68e4051..0d6312a76e0d0 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -95,6 +95,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), - Fix NPE in ReplicaShardAllocator ([#14385](https://github.com/opensearch-project/OpenSearch/pull/14385)) - Fix constant_keyword field type used when creating index ([#14807](https://github.com/opensearch-project/OpenSearch/pull/14807)) - Use circuit breaker in InternalHistogram when adding empty buckets ([#14754](https://github.com/opensearch-project/OpenSearch/pull/14754)) +- Create new IndexInput for multi part upload ([#14888](https://github.com/opensearch-project/OpenSearch/pull/14888)) - Fix searchable snapshot failure with scripted fields ([#14411](https://github.com/opensearch-project/OpenSearch/pull/14411)) ### Security From 76be6155f5e637252dc1b60b51462a6787736b51 Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Wed, 24 Jul 2024 11:10:43 
-0400 Subject: [PATCH 121/167] Bump org.apache.commons:commons-lang3 from 3.14.0 to 3.15.0 in /plugins/repository-hdfs (#14861) * Bump org.apache.commons:commons-lang3 in /plugins/repository-hdfs Bumps org.apache.commons:commons-lang3 from 3.14.0 to 3.15.0. --- updated-dependencies: - dependency-name: org.apache.commons:commons-lang3 dependency-type: direct:production update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] * Updating SHAs Signed-off-by: dependabot[bot] * Update changelog Signed-off-by: dependabot[bot] --------- Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: dependabot[bot] --- CHANGELOG.md | 1 + plugins/repository-hdfs/build.gradle | 2 +- plugins/repository-hdfs/licenses/commons-lang3-3.14.0.jar.sha1 | 1 - plugins/repository-hdfs/licenses/commons-lang3-3.15.0.jar.sha1 | 1 + 4 files changed, 3 insertions(+), 2 deletions(-) delete mode 100644 plugins/repository-hdfs/licenses/commons-lang3-3.14.0.jar.sha1 create mode 100644 plugins/repository-hdfs/licenses/commons-lang3-3.15.0.jar.sha1 diff --git a/CHANGELOG.md b/CHANGELOG.md index 0d6312a76e0d0..3dc2f38b5f998 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -54,6 +54,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), - Bump `jackson` from 2.17.1 to 2.17.2 ([#14687](https://github.com/opensearch-project/OpenSearch/pull/14687)) - Bump `net.minidev:json-smart` from 2.5.0 to 2.5.1 ([#14748](https://github.com/opensearch-project/OpenSearch/pull/14748)) - Bump `actions/checkout` from 2 to 4 ([#14858](https://github.com/opensearch-project/OpenSearch/pull/14858)) +- Bump `org.apache.commons:commons-lang3` from 3.14.0 to 3.15.0 ([#14861](https://github.com/opensearch-project/OpenSearch/pull/14861)) ### Changed - [Tiered Caching] Move query recomputation logic outside write lock ([#14187](https://github.com/opensearch-project/OpenSearch/pull/14187)) diff --git a/plugins/repository-hdfs/build.gradle b/plugins/repository-hdfs/build.gradle index 63eb783649884..884fb1333404a 100644 --- a/plugins/repository-hdfs/build.gradle +++ b/plugins/repository-hdfs/build.gradle @@ -76,7 +76,7 @@ dependencies { api "org.apache.commons:commons-compress:${versions.commonscompress}" api 'org.apache.commons:commons-configuration2:2.11.0' api "commons-io:commons-io:${versions.commonsio}" - api 'org.apache.commons:commons-lang3:3.14.0' + api 'org.apache.commons:commons-lang3:3.15.0' implementation 'com.google.re2j:re2j:1.7' api 'javax.servlet:servlet-api:2.5' api "org.slf4j:slf4j-api:${versions.slf4j}" diff --git a/plugins/repository-hdfs/licenses/commons-lang3-3.14.0.jar.sha1 b/plugins/repository-hdfs/licenses/commons-lang3-3.14.0.jar.sha1 deleted file mode 100644 index d783e07e40902..0000000000000 --- a/plugins/repository-hdfs/licenses/commons-lang3-3.14.0.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -1ed471194b02f2c6cb734a0cd6f6f107c673afae \ No newline at end of file diff --git a/plugins/repository-hdfs/licenses/commons-lang3-3.15.0.jar.sha1 b/plugins/repository-hdfs/licenses/commons-lang3-3.15.0.jar.sha1 new file mode 100644 index 0000000000000..4b1179c935946 --- /dev/null +++ b/plugins/repository-hdfs/licenses/commons-lang3-3.15.0.jar.sha1 @@ -0,0 +1 @@ +21581109b4be710ea4b195d5760392ec284f9f11 \ No newline at end of file From fcc231dfc349e092c3f68e49f49e32a062313f71 Mon Sep 17 00:00:00 2001 From: zhichao-aws Date: Wed, 24 Jul 2024 23:13:02 +0800 Subject: [PATCH 122/167] [BUG FIX] Fix the 
visit of inner query for NestedQueryBuilder (#14739) * fix nested query visit subquery Signed-off-by: zhichao-aws * add change log Signed-off-by: zhichao-aws --------- Signed-off-by: zhichao-aws Signed-off-by: Daniel (dB.) Doubrovkine Co-authored-by: Daniel (dB.) Doubrovkine --- CHANGELOG.md | 1 + .../opensearch/index/query/NestedQueryBuilder.java | 9 +++++++++ .../index/query/NestedQueryBuilderTests.java | 11 +++++++++++ 3 files changed, 21 insertions(+) diff --git a/CHANGELOG.md b/CHANGELOG.md index 3dc2f38b5f998..fb1d060be684b 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -98,6 +98,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), - Use circuit breaker in InternalHistogram when adding empty buckets ([#14754](https://github.com/opensearch-project/OpenSearch/pull/14754)) - Create new IndexInput for multi part upload ([#14888](https://github.com/opensearch-project/OpenSearch/pull/14888)) - Fix searchable snapshot failure with scripted fields ([#14411](https://github.com/opensearch-project/OpenSearch/pull/14411)) +- Fix the visit of inner query for NestedQueryBuilder ([#14739](https://github.com/opensearch-project/OpenSearch/pull/14739)) ### Security diff --git a/server/src/main/java/org/opensearch/index/query/NestedQueryBuilder.java b/server/src/main/java/org/opensearch/index/query/NestedQueryBuilder.java index b5ba79632b622..5908882472ce7 100644 --- a/server/src/main/java/org/opensearch/index/query/NestedQueryBuilder.java +++ b/server/src/main/java/org/opensearch/index/query/NestedQueryBuilder.java @@ -34,6 +34,7 @@ import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.index.ReaderUtil; +import org.apache.lucene.search.BooleanClause; import org.apache.lucene.search.MatchNoDocsQuery; import org.apache.lucene.search.MultiCollector; import org.apache.lucene.search.Query; @@ -505,4 +506,12 @@ public TopDocsAndMaxScore topDocs(SearchHit hit) throws IOException { } } } + + @Override + public void visit(QueryBuilderVisitor visitor) { + visitor.accept(this); + if (query != null) { + visitor.getChildVisitor(BooleanClause.Occur.MUST).accept(query); + } + } } diff --git a/server/src/test/java/org/opensearch/index/query/NestedQueryBuilderTests.java b/server/src/test/java/org/opensearch/index/query/NestedQueryBuilderTests.java index f72bd76913c8f..351011eb1b812 100644 --- a/server/src/test/java/org/opensearch/index/query/NestedQueryBuilderTests.java +++ b/server/src/test/java/org/opensearch/index/query/NestedQueryBuilderTests.java @@ -59,8 +59,10 @@ import org.hamcrest.Matchers; import java.io.IOException; +import java.util.ArrayList; import java.util.Collections; import java.util.HashMap; +import java.util.List; import java.util.Map; import java.util.Optional; @@ -565,4 +567,13 @@ void doWithDepth(int depth, ThrowingConsumer test) throws Exc ); } } + + public void testVisit() { + NestedQueryBuilder builder = new NestedQueryBuilder("path", new MatchAllQueryBuilder(), ScoreMode.None); + + List visitedQueries = new ArrayList<>(); + builder.visit(createTestVisitor(visitedQueries)); + + assertEquals(2, visitedQueries.size()); + } } From 157d27700157f3e24ef8b150b542e78d788bddab Mon Sep 17 00:00:00 2001 From: Andrew Ross Date: Thu, 25 Jul 2024 11:15:21 -0500 Subject: [PATCH 123/167] Forward port 2.16 release notes (#14975) Signed-off-by: Andrew Ross --- CHANGELOG.md | 83 ----------------- .../opensearch.release-notes-2.16.0.md | 92 +++++++++++++++++++ 2 files changed, 92 insertions(+), 83 deletions(-) create mode 100644 
release-notes/opensearch.release-notes-2.16.0.md diff --git a/CHANGELOG.md b/CHANGELOG.md index fb1d060be684b..e88a084f7d7f6 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -5,100 +5,17 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), ## [Unreleased 2.x] ### Added -- Add fingerprint ingest processor ([#13724](https://github.com/opensearch-project/OpenSearch/pull/13724)) -- [Remote Store] Rate limiter for remote store low priority uploads ([#14374](https://github.com/opensearch-project/OpenSearch/pull/14374/)) -- Apply the date histogram rewrite optimization to range aggregation ([#13865](https://github.com/opensearch-project/OpenSearch/pull/13865)) -- [Writable Warm] Add composite directory implementation and integrate it with FileCache ([12782](https://github.com/opensearch-project/OpenSearch/pull/12782)) -- [Workload Management] Add QueryGroup schema ([13669](https://github.com/opensearch-project/OpenSearch/pull/13669)) -- Add batching supported processor base type AbstractBatchingProcessor ([#14554](https://github.com/opensearch-project/OpenSearch/pull/14554)) -- Fix race condition while parsing derived fields from search definition ([14445](https://github.com/opensearch-project/OpenSearch/pull/14445)) -- Add `strict_allow_templates` dynamic mapping option ([#14555](https://github.com/opensearch-project/OpenSearch/pull/14555)) -- Add allowlist setting for ingest-common and search-pipeline-common processors ([#14439](https://github.com/opensearch-project/OpenSearch/issues/14439)) -- [Workload Management] add queryGroupId header propagator across requests and nodes ([#14614](https://github.com/opensearch-project/OpenSearch/pull/14614)) -- Create SystemIndexRegistry with helper method matchesSystemIndex ([#14415](https://github.com/opensearch-project/OpenSearch/pull/14415)) -- Print reason why parent task was cancelled ([#14604](https://github.com/opensearch-project/OpenSearch/issues/14604)) -- Add matchesPluginSystemIndexPattern to SystemIndexRegistry ([#14750](https://github.com/opensearch-project/OpenSearch/pull/14750)) -- Add Plugin interface for loading application based configuration templates (([#14659](https://github.com/opensearch-project/OpenSearch/issues/14659))) -- Refactor remote-routing-table service inline with remote state interfaces([#14668](https://github.com/opensearch-project/OpenSearch/pull/14668)) -- Add shard-diff path to diff manifest to reduce number of read calls remote store (([#14684](https://github.com/opensearch-project/OpenSearch/pull/14684))) -- Add SortResponseProcessor to Search Pipelines (([#14785](https://github.com/opensearch-project/OpenSearch/issues/14785))) -- Add prefix mode verification setting for repository verification (([#14790](https://github.com/opensearch-project/OpenSearch/pull/14790))) -- Add SplitResponseProcessor to Search Pipelines (([#14800](https://github.com/opensearch-project/OpenSearch/issues/14800))) -- Optimize TransportNodesAction to not send DiscoveryNodes for NodeStats, NodesInfo and ClusterStats call ([14749](https://github.com/opensearch-project/OpenSearch/pull/14749)) -- Reduce logging in DEBUG for MasterService:run ([#14795](https://github.com/opensearch-project/OpenSearch/pull/14795)) -- Enabling term version check on local state for all ClusterManager Read Transport Actions 
([#14273](https://github.com/opensearch-project/OpenSearch/pull/14273)) -- Add persian_stem filter (([#14847](https://github.com/opensearch-project/OpenSearch/pull/14847))) -- Create listener to refresh search thread resource usage ([#14832](https://github.com/opensearch-project/OpenSearch/pull/14832)) -- Add rest, transport layer changes for hot to warm tiering - dedicated setup (([#13980](https://github.com/opensearch-project/OpenSearch/pull/13980)) -- Optimize Cluster Stats Indices to precomute node level stats ([#14426](https://github.com/opensearch-project/OpenSearch/pull/14426)) -- Add logic to create index templates (v2) using context field ([#14811](https://github.com/opensearch-project/OpenSearch/pull/14811)) ### Dependencies -- Bump `org.gradle.test-retry` from 1.5.8 to 1.5.9 ([#13442](https://github.com/opensearch-project/OpenSearch/pull/13442)) -- Update to Apache Lucene 9.11.0 ([#14042](https://github.com/opensearch-project/OpenSearch/pull/14042)) -- Bump `netty` from 4.1.110.Final to 4.1.111.Final ([#14356](https://github.com/opensearch-project/OpenSearch/pull/14356)) -- Bump `org.wiremock:wiremock-standalone` from 3.3.1 to 3.6.0 ([#14361](https://github.com/opensearch-project/OpenSearch/pull/14361)) -- Bump `reactor` from 3.5.17 to 3.5.19 ([#14395](https://github.com/opensearch-project/OpenSearch/pull/14395), [#14697](https://github.com/opensearch-project/OpenSearch/pull/14697)) -- Bump `reactor-netty` from 1.1.19 to 1.1.21 ([#14395](https://github.com/opensearch-project/OpenSearch/pull/14395), [#14697](https://github.com/opensearch-project/OpenSearch/pull/14697)) -- Bump `commons-net:commons-net` from 3.10.0 to 3.11.1 ([#14396](https://github.com/opensearch-project/OpenSearch/pull/14396)) -- Bump `com.nimbusds:nimbus-jose-jwt` from 9.37.3 to 9.40 ([#14398](https://github.com/opensearch-project/OpenSearch/pull/14398)) -- Bump `org.apache.commons:commons-configuration2` from 2.10.1 to 2.11.0 ([#14399](https://github.com/opensearch-project/OpenSearch/pull/14399)) -- Bump `com.gradle.develocity` from 3.17.4 to 3.17.6 ([#14397](https://github.com/opensearch-project/OpenSearch/pull/14397), [#14856](https://github.com/opensearch-project/OpenSearch/pull/14856)) -- Bump `opentelemetry` from 1.36.0 to 1.40.0 ([#14457](https://github.com/opensearch-project/OpenSearch/pull/14457), [#14674](https://github.com/opensearch-project/OpenSearch/pull/14674)) -- Bump `opentelemetry-semconv` from 1.25.0-alpha to 1.26.0-alpha ([#14674](https://github.com/opensearch-project/OpenSearch/pull/14674)) -- Bump `azure-identity` from 1.11.4 to 1.13.0, Bump `msal4j` from 1.14.3 to 1.15.1, Bump `msal4j-persistence-extension` from 1.2.0 to 1.3.0 ([#14506](https://github.com/opensearch-project/OpenSearch/pull/14673)) -- Bump `com.azure:azure-storage-common` from 12.21.2 to 12.25.1 ([#14517](https://github.com/opensearch-project/OpenSearch/pull/14517)) -- Bump `com.microsoft.azure:msal4j` from 1.15.1 to 1.16.1 ([#14610](https://github.com/opensearch-project/OpenSearch/pull/14610), [#14857](https://github.com/opensearch-project/OpenSearch/pull/14857)) -- Bump `com.github.spullara.mustache.java:compiler` from 0.9.13 to 0.9.14 ([#14672](https://github.com/opensearch-project/OpenSearch/pull/14672)) -- Bump `net.minidev:accessors-smart` from 2.5.0 to 2.5.1 
([#14673](https://github.com/opensearch-project/OpenSearch/pull/14673)) -- Bump `jackson` from 2.17.1 to 2.17.2 ([#14687](https://github.com/opensearch-project/OpenSearch/pull/14687)) -- Bump `net.minidev:json-smart` from 2.5.0 to 2.5.1 ([#14748](https://github.com/opensearch-project/OpenSearch/pull/14748)) -- Bump `actions/checkout` from 2 to 4 ([#14858](https://github.com/opensearch-project/OpenSearch/pull/14858)) - Bump `org.apache.commons:commons-lang3` from 3.14.0 to 3.15.0 ([#14861](https://github.com/opensearch-project/OpenSearch/pull/14861)) ### Changed -- [Tiered Caching] Move query recomputation logic outside write lock ([#14187](https://github.com/opensearch-project/OpenSearch/pull/14187)) -- unsignedLongRangeQuery now returns MatchNoDocsQuery if the lower bounds are greater than the upper bounds ([#14416](https://github.com/opensearch-project/OpenSearch/pull/14416)) -- Updated the `indices.query.bool.max_clause_count` setting from being static to dynamically updateable ([#13568](https://github.com/opensearch-project/OpenSearch/pull/13568)) -- Make the class CommunityIdProcessor final ([#14448](https://github.com/opensearch-project/OpenSearch/pull/14448)) -- Allow @InternalApi annotation on classes not meant to be constructed outside of the OpenSearch core ([#14575](https://github.com/opensearch-project/OpenSearch/pull/14575)) -- Add @InternalApi annotation to japicmp exclusions ([#14597](https://github.com/opensearch-project/OpenSearch/pull/14597)) -- Allow system index warning in OpenSearchRestTestCase.refreshAllIndices ([#14635](https://github.com/opensearch-project/OpenSearch/pull/14635)) -- Make reroute iteration time-bound for large shard allocations ([#14848](https://github.com/opensearch-project/OpenSearch/pull/14848)) ### Deprecated -- Deprecate batch_size parameter on bulk API ([#14725](https://github.com/opensearch-project/OpenSearch/pull/14725)) ### Removed -- Remove query categorization changes ([#14759](https://github.com/opensearch-project/OpenSearch/pull/14759)) ### Fixed -- Fix allowUnmappedFields, mapUnmappedFieldAsString settings are not applied when parsing certain types of query string query ([#13957](https://github.com/opensearch-project/OpenSearch/pull/13957)) -- Fix bug in SBP cancellation logic ([#13259](https://github.com/opensearch-project/OpenSearch/pull/13474)) -- Fix handling of Short and Byte data types in ScriptProcessor ingest pipeline ([#14379](https://github.com/opensearch-project/OpenSearch/issues/14379)) -- Switch to iterative version of WKT format parser ([#14086](https://github.com/opensearch-project/OpenSearch/pull/14086)) -- Fix match_phrase_prefix_query not working on text field with multiple values and index_prefixes ([#10959](https://github.com/opensearch-project/OpenSearch/pull/10959)) -- Fix the computed max shards of cluster to avoid int overflow ([#14155](https://github.com/opensearch-project/OpenSearch/pull/14155)) -- Fixed rest-high-level client searchTemplate & mtermVectors endpoints to have a leading slash ([#14465](https://github.com/opensearch-project/OpenSearch/pull/14465)) -- Write shard level metadata blob when snapshotting searchable snapshot indexes ([#13190](https://github.com/opensearch-project/OpenSearch/pull/13190)) -- Fix aggs result of NestedAggregator with sub NestedAggregator ([#13324](https://github.com/opensearch-project/OpenSearch/pull/13324)) 
-- Fix fs info reporting negative available size ([#11573](https://github.com/opensearch-project/OpenSearch/pull/11573)) -- Add ListPitInfo::getKeepAlive() getter ([#14495](https://github.com/opensearch-project/OpenSearch/pull/14495)) -- Fix FuzzyQuery in keyword field will use IndexOrDocValuesQuery when both of index and doc_value are true ([#14378](https://github.com/opensearch-project/OpenSearch/pull/14378)) -- Fix file cache initialization ([#14004](https://github.com/opensearch-project/OpenSearch/pull/14004)) -- Handle NPE in GetResult if "found" field is missing ([#14552](https://github.com/opensearch-project/OpenSearch/pull/14552)) -- Fix create or update alias API doesn't throw exception for unsupported parameters ([#14719](https://github.com/opensearch-project/OpenSearch/pull/14719)) -- Refactoring FilterPath.parse by using an iterative approach ([#14200](https://github.com/opensearch-project/OpenSearch/pull/14200)) -- Refactoring Grok.validatePatternBank by using an iterative approach ([#14206](https://github.com/opensearch-project/OpenSearch/pull/14206)) -- Fix NPE when creating index with index.number_of_replicas set to null ([#14812](https://github.com/opensearch-project/OpenSearch/pull/14812)) -- Update help output for _cat ([#14722](https://github.com/opensearch-project/OpenSearch/pull/14722)) -- Fix bulk upsert ignores the default_pipeline and final_pipeline when auto-created index matches the index template ([#12891](https://github.com/opensearch-project/OpenSearch/pull/12891)) -- Fix NPE in ReplicaShardAllocator ([#14385](https://github.com/opensearch-project/OpenSearch/pull/14385)) -- Fix constant_keyword field type used when creating index ([#14807](https://github.com/opensearch-project/OpenSearch/pull/14807)) -- Use circuit breaker in InternalHistogram when adding empty buckets ([#14754](https://github.com/opensearch-project/OpenSearch/pull/14754)) -- Create new IndexInput for multi part upload ([#14888](https://github.com/opensearch-project/OpenSearch/pull/14888)) -- Fix searchable snapshot failure with scripted fields ([#14411](https://github.com/opensearch-project/OpenSearch/pull/14411)) -- Fix the visit of inner query for NestedQueryBuilder ([#14739](https://github.com/opensearch-project/OpenSearch/pull/14739)) ### Security diff --git a/release-notes/opensearch.release-notes-2.16.0.md b/release-notes/opensearch.release-notes-2.16.0.md new file mode 100644 index 0000000000000..193aa6b53714c --- /dev/null +++ b/release-notes/opensearch.release-notes-2.16.0.md @@ -0,0 +1,92 @@ +## 2024-07-24 Version 2.16.0 Release Notes + +## [2.16.0] +### Added +- Add fingerprint ingest processor ([#13724](https://github.com/opensearch-project/OpenSearch/pull/13724)) +- [Remote Store] Rate limiter for remote store low priority uploads ([#14374](https://github.com/opensearch-project/OpenSearch/pull/14374/)) +- Apply the date histogram rewrite optimization to range aggregation ([#13865](https://github.com/opensearch-project/OpenSearch/pull/13865)) +- [Writable Warm] Add composite directory implementation and integrate it with FileCache ([12782](https://github.com/opensearch-project/OpenSearch/pull/12782)) +- [Workload Management] Add QueryGroup schema ([13669](https://github.com/opensearch-project/OpenSearch/pull/13669)) +- Add batching supported processor base type AbstractBatchingProcessor 
([#14554](https://github.com/opensearch-project/OpenSearch/pull/14554)) +- Fix race condition while parsing derived fields from search definition ([14445](https://github.com/opensearch-project/OpenSearch/pull/14445)) +- Add `strict_allow_templates` dynamic mapping option ([#14555](https://github.com/opensearch-project/OpenSearch/pull/14555)) +- Add allowlist setting for ingest-common and search-pipeline-common processors ([#14439](https://github.com/opensearch-project/OpenSearch/issues/14439)) +- [Workload Management] add queryGroupId header propagator across requests and nodes ([#14614](https://github.com/opensearch-project/OpenSearch/pull/14614)) +- Create SystemIndexRegistry with helper method matchesSystemIndex ([#14415](https://github.com/opensearch-project/OpenSearch/pull/14415)) +- Print reason why parent task was cancelled ([#14604](https://github.com/opensearch-project/OpenSearch/issues/14604)) +- Add matchesPluginSystemIndexPattern to SystemIndexRegistry ([#14750](https://github.com/opensearch-project/OpenSearch/pull/14750)) +- Add Plugin interface for loading application based configuration templates (([#14659](https://github.com/opensearch-project/OpenSearch/issues/14659))) +- Refactor remote-routing-table service inline with remote state interfaces([#14668](https://github.com/opensearch-project/OpenSearch/pull/14668)) +- Add shard-diff path to diff manifest to reduce number of read calls remote store (([#14684](https://github.com/opensearch-project/OpenSearch/pull/14684))) +- Add SortResponseProcessor to Search Pipelines (([#14785](https://github.com/opensearch-project/OpenSearch/issues/14785))) +- Add prefix mode verification setting for repository verification (([#14790](https://github.com/opensearch-project/OpenSearch/pull/14790))) +- Add SplitResponseProcessor to Search Pipelines (([#14800](https://github.com/opensearch-project/OpenSearch/issues/14800))) +- Optimize TransportNodesAction to not send DiscoveryNodes for NodeStats, NodesInfo and ClusterStats call ([14749](https://github.com/opensearch-project/OpenSearch/pull/14749)) +- Reduce logging in DEBUG for MasterService:run ([#14795](https://github.com/opensearch-project/OpenSearch/pull/14795)) +- Refactor remote-routing-table service inline with remote state interfaces([#14668](https://github.com/opensearch-project/OpenSearch/pull/14668)) +- Add rest, transport layer changes for hot to warm tiering - dedicated setup (([#13980](https://github.com/opensearch-project/OpenSearch/pull/13980)) +- Enabling term version check on local state for all ClusterManager Read Transport Actions ([#14273](https://github.com/opensearch-project/OpenSearch/pull/14273)) +- Optimize Cluster Stats Indices to precomute node level stats ([#14426](https://github.com/opensearch-project/OpenSearch/pull/14426)) +- Create listener to refresh search thread resource usage ([#14832](https://github.com/opensearch-project/OpenSearch/pull/14832)) +- Add logic to create index templates (v2) using context field ([#14811](https://github.com/opensearch-project/OpenSearch/pull/14811)) + +### Dependencies +- Update to Apache Lucene 9.11.1 ([#14042](https://github.com/opensearch-project/OpenSearch/pull/14042), [#14576](https://github.com/opensearch-project/OpenSearch/pull/14576)) +- Bump `netty` from 4.1.110.Final to 4.1.111.Final 
([#14356](https://github.com/opensearch-project/OpenSearch/pull/14356)) +- Bump `org.wiremock:wiremock-standalone` from 3.3.1 to 3.6.0 ([#14361](https://github.com/opensearch-project/OpenSearch/pull/14361)) +- Bump `reactor` from 3.5.17 to 3.5.19 ([#14395](https://github.com/opensearch-project/OpenSearch/pull/14395), [#14697](https://github.com/opensearch-project/OpenSearch/pull/14697)) +- Bump `reactor-netty` from 1.1.19 to 1.1.21 ([#14395](https://github.com/opensearch-project/OpenSearch/pull/14395), [#14697](https://github.com/opensearch-project/OpenSearch/pull/14697)) +- Bump `commons-net:commons-net` from 3.10.0 to 3.11.1 ([#14396](https://github.com/opensearch-project/OpenSearch/pull/14396)) +- Bump `com.nimbusds:nimbus-jose-jwt` from 9.37.3 to 9.40 ([#14398](https://github.com/opensearch-project/OpenSearch/pull/14398)) +- Bump `org.apache.commons:commons-configuration2` from 2.10.1 to 2.11.0 ([#14399](https://github.com/opensearch-project/OpenSearch/pull/14399)) +- Bump `com.gradle.develocity` from 3.17.4 to 3.17.5 ([#14397](https://github.com/opensearch-project/OpenSearch/pull/14397)) +- Bump `opentelemetry` from 1.36.0 to 1.40.0 ([#14457](https://github.com/opensearch-project/OpenSearch/pull/14457), [#14674](https://github.com/opensearch-project/OpenSearch/pull/14674)) +- Bump `opentelemetry-semconv` from 1.25.0-alpha to 1.26.0-alpha ([#14674](https://github.com/opensearch-project/OpenSearch/pull/14674)) +- Bump `azure-identity` from 1.11.4 to 1.13.0, Bump `msal4j` from 1.14.3 to 1.15.1, Bump `msal4j-persistence-extension` from 1.2.0 to 1.3.0 ([#14506](https://github.com/opensearch-project/OpenSearch/pull/14673)) +- Bump `com.azure:azure-storage-common` from 12.21.2 to 12.25.1 ([#14517](https://github.com/opensearch-project/OpenSearch/pull/14517)) +- Bump `com.microsoft.azure:msal4j` from 1.15.1 to 1.16.0 ([#14610](https://github.com/opensearch-project/OpenSearch/pull/14610)) +- Bump `com.github.spullara.mustache.java:compiler` from 0.9.13 to 0.9.14 ([#14672](https://github.com/opensearch-project/OpenSearch/pull/14672)) +- Bump `net.minidev:accessors-smart` from 2.5.0 to 2.5.1 ([#14673](https://github.com/opensearch-project/OpenSearch/pull/14673)) +- Bump `jackson` from 2.17.1 to 2.17.2 ([#14687](https://github.com/opensearch-project/OpenSearch/pull/14687)) +- Bump `net.minidev:json-smart` from 2.5.0 to 2.5.1 ([#14748](https://github.com/opensearch-project/OpenSearch/pull/14748)) + +### Changed +- [Tiered Caching] Move query recomputation logic outside write lock ([#14187](https://github.com/opensearch-project/OpenSearch/pull/14187)) +- unsignedLongRangeQuery now returns MatchNoDocsQuery if the lower bounds are greater than the upper bounds ([#14416](https://github.com/opensearch-project/OpenSearch/pull/14416)) +- Make the class CommunityIdProcessor final ([#14448](https://github.com/opensearch-project/OpenSearch/pull/14448)) +- Updated the `indices.query.bool.max_clause_count` setting from being static to dynamically updateable ([#13568](https://github.com/opensearch-project/OpenSearch/pull/13568)) +- Allow @InternalApi annotation on classes not meant to be constructed outside of the OpenSearch core ([#14575](https://github.com/opensearch-project/OpenSearch/pull/14575)) +- Add @InternalApi annotation to japicmp exclusions ([#14597](https://github.com/opensearch-project/OpenSearch/pull/14597)) +- Allow 
system index warning in OpenSearchRestTestCase.refreshAllIndices ([#14635](https://github.com/opensearch-project/OpenSearch/pull/14635)) +- Make reroute iteration time-bound for large shard allocations ([#14848](https://github.com/opensearch-project/OpenSearch/pull/14848)) + +### Deprecated +- Deprecate batch_size parameter on bulk API ([#14725](https://github.com/opensearch-project/OpenSearch/pull/14725)) + +### Removed +- Remove query categorization changes ([#14759](https://github.com/opensearch-project/OpenSearch/pull/14759)) + +### Fixed +- Fix bug in SBP cancellation logic ([#13259](https://github.com/opensearch-project/OpenSearch/pull/13474)) +- Fix handling of Short and Byte data types in ScriptProcessor ingest pipeline ([#14379](https://github.com/opensearch-project/OpenSearch/issues/14379)) +- Switch to iterative version of WKT format parser ([#14086](https://github.com/opensearch-project/OpenSearch/pull/14086)) +- Fix match_phrase_prefix_query not working on text field with multiple values and index_prefixes ([#10959](https://github.com/opensearch-project/OpenSearch/pull/10959)) +- Fix the computed max shards of cluster to avoid int overflow ([#14155](https://github.com/opensearch-project/OpenSearch/pull/14155)) +- Fixed rest-high-level client searchTemplate & mtermVectors endpoints to have a leading slash ([#14465](https://github.com/opensearch-project/OpenSearch/pull/14465)) +- Write shard level metadata blob when snapshotting searchable snapshot indexes ([#13190](https://github.com/opensearch-project/OpenSearch/pull/13190)) +- Fix aggs result of NestedAggregator with sub NestedAggregator ([#13324](https://github.com/opensearch-project/OpenSearch/pull/13324)) +- Fix fs info reporting negative available size ([#11573](https://github.com/opensearch-project/OpenSearch/pull/11573)) +- Add ListPitInfo::getKeepAlive() getter ([#14495](https://github.com/opensearch-project/OpenSearch/pull/14495)) +- Fix FuzzyQuery in keyword field will use IndexOrDocValuesQuery when both of index and doc_value are true ([#14378](https://github.com/opensearch-project/OpenSearch/pull/14378)) +- Fix file cache initialization ([#14004](https://github.com/opensearch-project/OpenSearch/pull/14004)) +- Handle NPE in GetResult if "found" field is missing ([#14552](https://github.com/opensearch-project/OpenSearch/pull/14552)) +- Fix create or update alias API doesn't throw exception for unsupported parameters ([#14719](https://github.com/opensearch-project/OpenSearch/pull/14719)) +- Refactoring FilterPath.parse by using an iterative approach ([#14200](https://github.com/opensearch-project/OpenSearch/pull/14200)) +- Refactoring Grok.validatePatternBank by using an iterative approach ([#14206](https://github.com/opensearch-project/OpenSearch/pull/14206)) +- Fix NPE when creating index with index.number_of_replicas set to null ([#14812](https://github.com/opensearch-project/OpenSearch/pull/14812)) +- Update help output for _cat ([#14722](https://github.com/opensearch-project/OpenSearch/pull/14722)) +- Fix bulk upsert ignores the default_pipeline and final_pipeline when auto-created index matches the index template ([#12891](https://github.com/opensearch-project/OpenSearch/pull/12891)) +- Fix NPE in ReplicaShardAllocator ([#14385](https://github.com/opensearch-project/OpenSearch/pull/14385)) +- Use circuit breaker in InternalHistogram when adding 
empty buckets ([#14754](https://github.com/opensearch-project/OpenSearch/pull/14754)) +- Create new IndexInput for multi part upload ([#14888](https://github.com/opensearch-project/OpenSearch/pull/14888)) +- Fix searchable snapshot failure with scripted fields ([#14411](https://github.com/opensearch-project/OpenSearch/pull/14411)) From 59302a3d5ea255be7f2bb72187b8df1f0aa33572 Mon Sep 17 00:00:00 2001 From: Mohit Godwani <81609427+mgodwan@users.noreply.github.com> Date: Fri, 26 Jul 2024 18:37:24 +0530 Subject: [PATCH 124/167] Fix version check after backport (#14985) Signed-off-by: Mohit Godwani --- .../opensearch/cluster/metadata/ComposableIndexTemplate.java | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/server/src/main/java/org/opensearch/cluster/metadata/ComposableIndexTemplate.java b/server/src/main/java/org/opensearch/cluster/metadata/ComposableIndexTemplate.java index 594dda83c41e2..63bbe4144c4fb 100644 --- a/server/src/main/java/org/opensearch/cluster/metadata/ComposableIndexTemplate.java +++ b/server/src/main/java/org/opensearch/cluster/metadata/ComposableIndexTemplate.java @@ -184,7 +184,7 @@ public ComposableIndexTemplate(StreamInput in) throws IOException { this.version = in.readOptionalVLong(); this.metadata = in.readMap(); this.dataStreamTemplate = in.readOptionalWriteable(DataStreamTemplate::new); - if (in.getVersion().onOrAfter(Version.V_3_0_0)) { + if (in.getVersion().onOrAfter(Version.V_2_16_0)) { this.context = in.readOptionalWriteable(Context::new); } else { this.context = null; @@ -248,7 +248,7 @@ public void writeTo(StreamOutput out) throws IOException { out.writeOptionalVLong(this.version); out.writeMap(this.metadata); out.writeOptionalWriteable(dataStreamTemplate); - if (out.getVersion().onOrAfter(Version.V_3_0_0)) { + if (out.getVersion().onOrAfter(Version.V_2_16_0)) { out.writeOptionalWriteable(context); } } From d08c4253e18981d688253d20fb967e614923a957 Mon Sep 17 00:00:00 2001 From: Rahul Karajgikar <50844303+rahulkarajgikar@users.noreply.github.com> Date: Mon, 29 Jul 2024 18:18:44 +0530 Subject: [PATCH 125/167] [Batch Fetch] Fix for hasInitiatedFetching to fix allocation explain and manual reroute APIs (#14972) * Fix for hasInitiatedFetching() in batch mode Signed-off-by: Rahul Karajgikar --- CHANGELOG.md | 1 + .../gateway/RecoveryFromGatewayIT.java | 160 +++++++++++++++++- .../gateway/AsyncShardBatchFetch.java | 8 + .../gateway/ReplicaShardBatchAllocator.java | 2 +- .../gateway/ShardsBatchGatewayAllocator.java | 31 +++- 5 files changed, 191 insertions(+), 11 deletions(-) diff --git a/CHANGELOG.md b/CHANGELOG.md index e88a084f7d7f6..d4c8c955bced4 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -5,6 +5,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), ## [Unreleased 2.x] ### Added +- Fix for hasInitiatedFetching to fix allocation explain and manual reroute APIs (([#14972](https://github.com/opensearch-project/OpenSearch/pull/14972)) ### Dependencies - Bump `org.apache.commons:commons-lang3` from 3.14.0 to 3.15.0 ([#14861](https://github.com/opensearch-project/OpenSearch/pull/14861)) diff --git a/server/src/internalClusterTest/java/org/opensearch/gateway/RecoveryFromGatewayIT.java b/server/src/internalClusterTest/java/org/opensearch/gateway/RecoveryFromGatewayIT.java index 4085cc3890f30..eccc903dfac82 100644 --- a/server/src/internalClusterTest/java/org/opensearch/gateway/RecoveryFromGatewayIT.java +++ 
b/server/src/internalClusterTest/java/org/opensearch/gateway/RecoveryFromGatewayIT.java @@ -57,6 +57,7 @@ import org.opensearch.cluster.routing.ShardRouting; import org.opensearch.cluster.routing.ShardRoutingState; import org.opensearch.cluster.routing.UnassignedInfo; +import org.opensearch.cluster.routing.allocation.AllocateUnassignedDecision; import org.opensearch.cluster.routing.allocation.AllocationDecision; import org.opensearch.cluster.routing.allocation.ExistingShardsAllocator; import org.opensearch.cluster.service.ClusterService; @@ -797,11 +798,26 @@ public void testBatchModeEnabledWithoutTimeout() throws Exception { ); assertTrue(ExistingShardsAllocator.EXISTING_SHARDS_ALLOCATOR_BATCH_MODE.get(internalCluster().clusterService().getSettings())); assertEquals(1, gatewayAllocator.getNumberOfStartedShardBatches()); - assertEquals(1, gatewayAllocator.getNumberOfStoreShardBatches()); + // Replica shard would be marked ineligible since there are no data nodes. + // It would then be removed from any batch and batches would get deleted, so we would have 0 replica batches + assertEquals(0, gatewayAllocator.getNumberOfStoreShardBatches()); - // Now start both data nodes and ensure batch mode is working - logger.info("--> restarting the stopped nodes"); + // Now start one data node + logger.info("--> restarting the first stopped node"); internalCluster().startDataOnlyNode(Settings.builder().put("node.name", dataOnlyNodes.get(0)).put(node0DataPathSettings).build()); + ensureStableCluster(2); + ensureYellow("test"); + assertEquals(0, gatewayAllocator.getNumberOfStartedShardBatches()); + assertEquals(0, gatewayAllocator.getNumberOfStoreShardBatches()); + assertEquals(0, gatewayAllocator.getNumberOfInFlightFetches()); + + // calling reroute and asserting on reroute response + logger.info("--> calling reroute while cluster is yellow"); + clusterRerouteResponse = client().admin().cluster().prepareReroute().setRetryFailed(true).get(); + assertTrue(clusterRerouteResponse.isAcknowledged()); + + // Now start last data node and ensure batch mode is working and cluster goes green + logger.info("--> restarting the second stopped node"); internalCluster().startDataOnlyNode(Settings.builder().put("node.name", dataOnlyNodes.get(1)).put(node1DataPathSettings).build()); ensureStableCluster(3); ensureGreen("test"); @@ -842,11 +858,26 @@ public void testBatchModeEnabledWithSufficientTimeoutAndClusterGreen() throws Ex ); assertTrue(ExistingShardsAllocator.EXISTING_SHARDS_ALLOCATOR_BATCH_MODE.get(internalCluster().clusterService().getSettings())); assertEquals(1, gatewayAllocator.getNumberOfStartedShardBatches()); - assertEquals(1, gatewayAllocator.getNumberOfStoreShardBatches()); + // Replica shard would be marked ineligible since there are no data nodes. 
+ // It would then be removed from any batch and batches would get deleted, so we would have 0 replica batches + assertEquals(0, gatewayAllocator.getNumberOfStoreShardBatches()); - // Now start both data nodes and ensure batch mode is working - logger.info("--> restarting the stopped nodes"); + // Now start one data nodes and ensure batch mode is working + logger.info("--> restarting the first stopped node"); internalCluster().startDataOnlyNode(Settings.builder().put("node.name", dataOnlyNodes.get(0)).put(node0DataPathSettings).build()); + ensureStableCluster(2); + ensureYellow("test"); + assertEquals(0, gatewayAllocator.getNumberOfStartedShardBatches()); + assertEquals(0, gatewayAllocator.getNumberOfStoreShardBatches()); + assertEquals(0, gatewayAllocator.getNumberOfInFlightFetches()); + + // calling reroute and asserting on reroute response + logger.info("--> calling reroute while cluster is yellow"); + clusterRerouteResponse = client().admin().cluster().prepareReroute().setRetryFailed(true).get(); + assertTrue(clusterRerouteResponse.isAcknowledged()); + + // Now start last data node and ensure batch mode is working and cluster goes green + logger.info("--> restarting the second stopped node"); internalCluster().startDataOnlyNode(Settings.builder().put("node.name", dataOnlyNodes.get(1)).put(node1DataPathSettings).build()); ensureStableCluster(3); ensureGreen("test"); @@ -907,7 +938,9 @@ public void testBatchModeEnabledWithInSufficientTimeoutButClusterGreen() throws assertTrue(ExistingShardsAllocator.EXISTING_SHARDS_ALLOCATOR_BATCH_MODE.get(internalCluster().clusterService().getSettings())); assertEquals(10, gatewayAllocator.getNumberOfStartedShardBatches()); - assertEquals(10, gatewayAllocator.getNumberOfStoreShardBatches()); + // All replica shards would be marked ineligible since there are no data nodes. 
+ // They would then be removed from any batch and batches would get deleted, so we would have 0 replica batches + assertEquals(0, gatewayAllocator.getNumberOfStoreShardBatches()); health = client(internalCluster().getClusterManagerName()).admin().cluster().health(Requests.clusterHealthRequest()).actionGet(); assertFalse(health.isTimedOut()); assertEquals(RED, health.getStatus()); @@ -1051,6 +1084,18 @@ public void testMultipleReplicaShardAssignmentWithDelayedAllocationAndDifferentN ensureGreen("test"); } + public void testAllocationExplainReturnsNoWhenExtraReplicaShardInNonBatchMode() throws Exception { + // Non batch mode - This test is to validate that we don't return AWAITING_INFO in allocation explain API when the deciders are + // returning NO + this.allocationExplainReturnsNoWhenExtraReplicaShard(false); + } + + public void testAllocationExplainReturnsNoWhenExtraReplicaShardInBatchMode() throws Exception { + // Batch mode - This test is to validate that we don't return AWAITING_INFO in allocation explain API when the deciders are + // returning NO + this.allocationExplainReturnsNoWhenExtraReplicaShard(true); + } + public void testNBatchesCreationAndAssignment() throws Exception { // we will reduce batch size to 5 to make sure we have enough batches to test assignment // Total number of primary shards = 50 (50 indices*1) @@ -1104,7 +1149,9 @@ public void testNBatchesCreationAndAssignment() throws Exception { ); assertTrue(ExistingShardsAllocator.EXISTING_SHARDS_ALLOCATOR_BATCH_MODE.get(internalCluster().clusterService().getSettings())); assertEquals(10, gatewayAllocator.getNumberOfStartedShardBatches()); - assertEquals(10, gatewayAllocator.getNumberOfStoreShardBatches()); + // All replica shards would be marked ineligible since there are no data nodes. + // They would then be removed from any batch and batches would get deleted, so we would have 0 replica batches + assertEquals(0, gatewayAllocator.getNumberOfStoreShardBatches()); health = client(internalCluster().getClusterManagerName()).admin().cluster().health(Requests.clusterHealthRequest()).actionGet(); assertFalse(health.isTimedOut()); assertEquals(RED, health.getStatus()); @@ -1193,7 +1240,9 @@ public void testCulpritShardInBatch() throws Exception { ); assertTrue(ExistingShardsAllocator.EXISTING_SHARDS_ALLOCATOR_BATCH_MODE.get(internalCluster().clusterService().getSettings())); assertEquals(1, gatewayAllocator.getNumberOfStartedShardBatches()); - assertEquals(1, gatewayAllocator.getNumberOfStoreShardBatches()); + // Replica shard would be marked ineligible since there are no data nodes. 
+ // It would then be removed from any batch and batches would get deleted, so we would have 0 replica batches + assertEquals(0, gatewayAllocator.getNumberOfStoreShardBatches()); assertTrue(clusterRerouteResponse.isAcknowledged()); health = client(internalCluster().getClusterManagerName()).admin().cluster().health(Requests.clusterHealthRequest()).actionGet(); assertFalse(health.isTimedOut()); @@ -1511,4 +1560,97 @@ private List<String> findNodesWithShard(final boolean primary) { Collections.shuffle(requiredStartedShards, random()); return requiredStartedShards.stream().map(shard -> state.nodes().get(shard.currentNodeId()).getName()).collect(Collectors.toList()); } + + private void allocationExplainReturnsNoWhenExtraReplicaShard(boolean batchModeEnabled) throws Exception { + internalCluster().startClusterManagerOnlyNodes( + 1, + Settings.builder().put(ExistingShardsAllocator.EXISTING_SHARDS_ALLOCATOR_BATCH_MODE.getKey(), batchModeEnabled).build() + ); + internalCluster().startDataOnlyNodes(5); + createIndex( + "test", + Settings.builder().put(IndexMetadata.SETTING_NUMBER_OF_SHARDS, 1).put(IndexMetadata.SETTING_NUMBER_OF_REPLICAS, 4).build() + ); + ensureGreen("test"); + ensureStableCluster(6); + + // Stop one of the nodes to make the cluster yellow + // We cannot directly create an index with replica = data node count because then the whole flow will get skipped due to + // INDEX_CREATED + List<String> nodesWithReplicaShards = findNodesWithShard(false); + Settings replicaNodeDataPathSettings = internalCluster().dataPathSettings(nodesWithReplicaShards.get(0)); + internalCluster().stopRandomNode(InternalTestCluster.nameFilter(nodesWithReplicaShards.get(0))); + + ensureStableCluster(5); + ensureYellow("test"); + + logger.info("--> calling allocation explain API"); + // shard should have decision NO because there is no valid node for the extra replica to go to + AllocateUnassignedDecision aud = client().admin() + .cluster() + .prepareAllocationExplain() + .setIndex("test") + .setShard(0) + .setPrimary(false) + .get() + .getExplanation() + .getShardAllocationDecision() + .getAllocateDecision(); + + assertEquals(AllocationDecision.NO, aud.getAllocationDecision()); + assertEquals("cannot allocate because allocation is not permitted to any of the nodes", aud.getExplanation()); + + // Now creating a new index with too many replicas and trying again + createIndex( + "test2", + Settings.builder().put(IndexMetadata.SETTING_NUMBER_OF_SHARDS, 1).put(IndexMetadata.SETTING_NUMBER_OF_REPLICAS, 5).build() + ); + + ensureYellowAndNoInitializingShards("test2"); + + logger.info("--> calling allocation explain API again"); + // shard should have decision NO because there are 6 replicas and 4 data nodes + aud = client().admin() + .cluster()
+ .prepareAllocationExplain() + .setIndex("test2") + .setShard(0) + .setPrimary(false) + .get() + .getExplanation() + .getShardAllocationDecision() + .getAllocateDecision(); + + assertEquals(AllocationDecision.NO, aud.getAllocationDecision()); + assertEquals("cannot allocate because allocation is not permitted to any of the nodes", aud.getExplanation()); + + internalCluster().startDataOnlyNodes(1); + + ensureStableCluster(7); + ensureGreen("test2"); + } } diff --git a/server/src/main/java/org/opensearch/gateway/AsyncShardBatchFetch.java b/server/src/main/java/org/opensearch/gateway/AsyncShardBatchFetch.java index 4f39a39cea678..df642a9f5a743 100644 --- a/server/src/main/java/org/opensearch/gateway/AsyncShardBatchFetch.java +++ b/server/src/main/java/org/opensearch/gateway/AsyncShardBatchFetch.java @@ -80,6 +80,14 @@ public synchronized void clearShard(ShardId shardId) { this.cache.deleteShard(shardId); } + public boolean hasEmptyCache() { + return this.cache.getCache().isEmpty(); + } + + public AsyncShardFetchCache getCache() { + return this.cache; + } + /** * Cache implementation of transport actions returning batch of shards related data in the response. * Store node level responses of transport actions like {@link TransportNodesListGatewayStartedShardsBatch} or diff --git a/server/src/main/java/org/opensearch/gateway/ReplicaShardBatchAllocator.java b/server/src/main/java/org/opensearch/gateway/ReplicaShardBatchAllocator.java index 7c75f2a5d1a8f..0818b187271cb 100644 --- a/server/src/main/java/org/opensearch/gateway/ReplicaShardBatchAllocator.java +++ b/server/src/main/java/org/opensearch/gateway/ReplicaShardBatchAllocator.java @@ -183,7 +183,7 @@ private AllocateUnassignedDecision getUnassignedShardAllocationDecision( if (allocationDecision.type() != Decision.Type.YES && (!explain || !hasInitiatedFetching(shardRouting))) { // only return early if we are not in explain mode, or we are in explain mode but we have not // yet attempted to fetch any shard data - logger.trace("{}: ignoring allocation, can't be allocated on any node", shardRouting); + logger.trace("{}: ignoring allocation, can't be allocated on any node. Decision: {}", shardRouting, allocationDecision.type()); return AllocateUnassignedDecision.no( UnassignedInfo.AllocationStatus.fromDecision(allocationDecision.type()), result.v2() != null ? new ArrayList<>(result.v2().values()) : null diff --git a/server/src/main/java/org/opensearch/gateway/ShardsBatchGatewayAllocator.java b/server/src/main/java/org/opensearch/gateway/ShardsBatchGatewayAllocator.java index 55f5388d8f454..673ed8dbaa1c3 100644 --- a/server/src/main/java/org/opensearch/gateway/ShardsBatchGatewayAllocator.java +++ b/server/src/main/java/org/opensearch/gateway/ShardsBatchGatewayAllocator.java @@ -576,8 +576,37 @@ protected AsyncShardFetch.FetchResult Date: Mon, 29 Jul 2024 10:26:49 -0400 Subject: [PATCH 126/167] Bump com.microsoft.azure:msal4j from 1.16.1 to 1.16.2 in /plugins/repository-azure (#14995) * Bump com.microsoft.azure:msal4j in /plugins/repository-azure Bumps [com.microsoft.azure:msal4j](https://github.com/AzureAD/microsoft-authentication-library-for-java) from 1.16.1 to 1.16.2. 
- [Release notes](https://github.com/AzureAD/microsoft-authentication-library-for-java/releases) - [Changelog](https://github.com/AzureAD/microsoft-authentication-library-for-java/blob/dev/changelog.txt) - [Commits](https://github.com/AzureAD/microsoft-authentication-library-for-java/compare/v1.16.1...v1.16.2) --- updated-dependencies: - dependency-name: com.microsoft.azure:msal4j dependency-type: direct:production update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] * Updating SHAs Signed-off-by: dependabot[bot] * Update changelog Signed-off-by: dependabot[bot] --------- Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: dependabot[bot] --- CHANGELOG.md | 1 + plugins/repository-azure/build.gradle | 2 +- plugins/repository-azure/licenses/msal4j-1.16.1.jar.sha1 | 1 - plugins/repository-azure/licenses/msal4j-1.16.2.jar.sha1 | 1 + 4 files changed, 3 insertions(+), 2 deletions(-) delete mode 100644 plugins/repository-azure/licenses/msal4j-1.16.1.jar.sha1 create mode 100644 plugins/repository-azure/licenses/msal4j-1.16.2.jar.sha1 diff --git a/CHANGELOG.md b/CHANGELOG.md index d4c8c955bced4..138efce1c29e7 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -9,6 +9,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), ### Dependencies - Bump `org.apache.commons:commons-lang3` from 3.14.0 to 3.15.0 ([#14861](https://github.com/opensearch-project/OpenSearch/pull/14861)) +- Bump `com.microsoft.azure:msal4j` from 1.16.1 to 1.16.2 ([#14995](https://github.com/opensearch-project/OpenSearch/pull/14995)) ### Changed diff --git a/plugins/repository-azure/build.gradle b/plugins/repository-azure/build.gradle index 7bd7be1481a2f..15e3158f2dbc4 100644 --- a/plugins/repository-azure/build.gradle +++ b/plugins/repository-azure/build.gradle @@ -61,7 +61,7 @@ dependencies { // Start of transitive dependencies for azure-identity api 'com.microsoft.azure:msal4j-persistence-extension:1.3.0' api "net.java.dev.jna:jna-platform:${versions.jna}" - api 'com.microsoft.azure:msal4j:1.16.1' + api 'com.microsoft.azure:msal4j:1.16.2' api 'com.nimbusds:oauth2-oidc-sdk:11.9.1' api 'com.nimbusds:nimbus-jose-jwt:9.40' api 'com.nimbusds:content-type:2.3' diff --git a/plugins/repository-azure/licenses/msal4j-1.16.1.jar.sha1 b/plugins/repository-azure/licenses/msal4j-1.16.1.jar.sha1 deleted file mode 100644 index 7d24922196be4..0000000000000 --- a/plugins/repository-azure/licenses/msal4j-1.16.1.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -4ad89b4632ef9abab883114e77c079843a206862 \ No newline at end of file diff --git a/plugins/repository-azure/licenses/msal4j-1.16.2.jar.sha1 b/plugins/repository-azure/licenses/msal4j-1.16.2.jar.sha1 new file mode 100644 index 0000000000000..1363e5a0793d2 --- /dev/null +++ b/plugins/repository-azure/licenses/msal4j-1.16.2.jar.sha1 @@ -0,0 +1 @@ +b43ec4dd657f8ed5922bc0a8ccbe49000968bd15 \ No newline at end of file From 122f3f0ab7448f06a20b46919c6e23e74ce1fa9c Mon Sep 17 00:00:00 2001 From: Rishab Nahata Date: Mon, 29 Jul 2024 20:20:26 +0530 Subject: [PATCH 127/167] Cache index shard limit to optimise ShardsLimitAllocationDecider (#14962) * Cache index shard limit per node Signed-off-by: Rishab Nahata --- .../routing/allocation/RerouteBenchmark.java | 135 ++++++++++++++++++ .../cluster/metadata/IndexMetadata.java | 16 ++- .../decider/ShardsLimitAllocationDecider.java | 4 +- 3 files changed, 150 insertions(+), 5 deletions(-) create mode 100644
benchmarks/src/main/java/org/opensearch/benchmark/routing/allocation/RerouteBenchmark.java diff --git a/benchmarks/src/main/java/org/opensearch/benchmark/routing/allocation/RerouteBenchmark.java b/benchmarks/src/main/java/org/opensearch/benchmark/routing/allocation/RerouteBenchmark.java new file mode 100644 index 0000000000000..e54bca579423b --- /dev/null +++ b/benchmarks/src/main/java/org/opensearch/benchmark/routing/allocation/RerouteBenchmark.java @@ -0,0 +1,135 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +package org.opensearch.benchmark.routing.allocation; + +import org.opensearch.Version; +import org.opensearch.cluster.ClusterName; +import org.opensearch.cluster.ClusterState; +import org.opensearch.cluster.metadata.IndexMetadata; +import org.opensearch.cluster.metadata.Metadata; +import org.opensearch.cluster.node.DiscoveryNodes; +import org.opensearch.cluster.routing.RoutingTable; +import org.opensearch.cluster.routing.ShardRouting; +import org.opensearch.cluster.routing.allocation.AllocationService; +import org.opensearch.common.logging.LogConfigurator; +import org.opensearch.common.settings.Settings; +import org.openjdk.jmh.annotations.Benchmark; +import org.openjdk.jmh.annotations.BenchmarkMode; +import org.openjdk.jmh.annotations.Fork; +import org.openjdk.jmh.annotations.Measurement; +import org.openjdk.jmh.annotations.Mode; +import org.openjdk.jmh.annotations.OutputTimeUnit; +import org.openjdk.jmh.annotations.Param; +import org.openjdk.jmh.annotations.Scope; +import org.openjdk.jmh.annotations.Setup; +import org.openjdk.jmh.annotations.State; +import org.openjdk.jmh.annotations.Warmup; + +import java.util.HashMap; +import java.util.List; +import java.util.Map; +import java.util.concurrent.TimeUnit; + +import static org.opensearch.cluster.routing.ShardRoutingState.INITIALIZING; + +@Fork(1) +@Warmup(iterations = 3) +@Measurement(iterations = 3) +@BenchmarkMode(Mode.AverageTime) +@OutputTimeUnit(TimeUnit.MILLISECONDS) +@State(Scope.Benchmark) +@SuppressWarnings("unused") // invoked by benchmarking framework +public class RerouteBenchmark { + @Param({ + // indices| nodes + " 10000| 500|", }) + public String indicesNodes = "1|1"; + public int numIndices; + public int numNodes; + public int numShards = 10; + public int numReplicas = 1; + + private AllocationService allocationService; + private ClusterState initialClusterState; + + @Setup + public void setUp() throws Exception { + LogConfigurator.setNodeName("test"); + final String[] params = indicesNodes.split("\\|"); + numIndices = toInt(params[0]); + numNodes = toInt(params[1]); + + int totalShardCount = (numReplicas + 1) * numShards * numIndices; + Metadata.Builder mb = Metadata.builder(); + for (int i = 1; i <= numIndices; i++) { + mb.put( + IndexMetadata.builder("test_" + i) + .settings(Settings.builder().put("index.version.created", Version.CURRENT)) + .numberOfShards(numShards) + .numberOfReplicas(numReplicas) + ); + } + + Metadata metadata = mb.build(); + RoutingTable.Builder rb = RoutingTable.builder(); + for (int i = 1; i <= numIndices; i++) { + rb.addAsNew(metadata.index("test_" + i)); + } + RoutingTable routingTable = rb.build(); + initialClusterState = ClusterState.builder(ClusterName.CLUSTER_NAME_SETTING.getDefault(Settings.EMPTY)) + .metadata(metadata) + .routingTable(routingTable) + .nodes(setUpClusterNodes(numNodes)) + .build(); + } + + @Benchmark 
+ public ClusterState measureShardAllocationEmptyCluster() throws Exception { + ClusterState clusterState = initialClusterState; + allocationService = Allocators.createAllocationService( + Settings.builder() + .put("cluster.routing.allocation.awareness.attributes", "zone") + .put("cluster.routing.allocation.load_awareness.provisioned_capacity", numNodes) + .put("cluster.routing.allocation.load_awareness.skew_factor", "50") + .put("cluster.routing.allocation.node_concurrent_recoveries", "2") + .build() + ); + clusterState = allocationService.reroute(clusterState, "reroute"); + while (clusterState.getRoutingNodes().hasUnassignedShards()) { + clusterState = startInitializingShardsAndReroute(allocationService, clusterState); + } + return clusterState; + } + + private int toInt(String v) { + return Integer.valueOf(v.trim()); + } + + private DiscoveryNodes.Builder setUpClusterNodes(int nodes) { + DiscoveryNodes.Builder nb = DiscoveryNodes.builder(); + for (int i = 1; i <= nodes; i++) { + Map<String, String> attributes = new HashMap<>(); + attributes.put("zone", "zone_" + (i % 3)); + nb.add(Allocators.newNode("node_0_" + i, attributes)); + } + return nb; + } + + private static ClusterState startInitializingShardsAndReroute(AllocationService allocationService, ClusterState clusterState) { + return startShardsAndReroute(allocationService, clusterState, clusterState.routingTable().shardsWithState(INITIALIZING)); + } + + private static ClusterState startShardsAndReroute( + AllocationService allocationService, + ClusterState clusterState, + List<ShardRouting> initializingShards + ) { + return allocationService.reroute(allocationService.applyStartedShards(clusterState, initializingShards), "reroute after starting"); + } +} diff --git a/server/src/main/java/org/opensearch/cluster/metadata/IndexMetadata.java b/server/src/main/java/org/opensearch/cluster/metadata/IndexMetadata.java index 9e7fe23f29872..df0d2609ad83d 100644 --- a/server/src/main/java/org/opensearch/cluster/metadata/IndexMetadata.java +++ b/server/src/main/java/org/opensearch/cluster/metadata/IndexMetadata.java @@ -43,6 +43,7 @@ import org.opensearch.cluster.block.ClusterBlockLevel; import org.opensearch.cluster.node.DiscoveryNodeFilters; import org.opensearch.cluster.routing.allocation.IndexMetadataUpdater; +import org.opensearch.cluster.routing.allocation.decider.ShardsLimitAllocationDecider; import org.opensearch.common.Nullable; import org.opensearch.common.annotation.PublicApi; import org.opensearch.common.collect.MapBuilder; @@ -686,6 +687,8 @@ public static APIBlock readFrom(StreamInput input) throws IOException { private final boolean isSystem; private final boolean isRemoteSnapshot; + private final int indexTotalShardsPerNodeLimit; + private IndexMetadata( final Index index, final long version, @@ -711,7 +714,8 @@ private IndexMetadata( final int routingPartitionSize, final ActiveShardCount waitForActiveShards, final Map<String, RolloverInfo> rolloverInfos, - final boolean isSystem + final boolean isSystem, + final int indexTotalShardsPerNodeLimit ) { this.index = index; @@ -746,6 +750,7 @@ private IndexMetadata( this.rolloverInfos = Collections.unmodifiableMap(rolloverInfos); this.isSystem = isSystem; this.isRemoteSnapshot = IndexModule.Type.REMOTE_SNAPSHOT.match(this.settings); + this.indexTotalShardsPerNodeLimit = indexTotalShardsPerNodeLimit; assert numberOfShards * routingFactor == routingNumShards : routingNumShards + " must be a multiple of " + numberOfShards; } @@ -899,6 +904,10 @@ public Set<String> inSyncAllocationIds(int shardId) { return inSyncAllocationIds.get(shardId); } +
public int getIndexTotalShardsPerNodeLimit() { + return this.indexTotalShardsPerNodeLimit; + } + @Nullable public DiscoveryNodeFilters requireFilters() { return requireFilters; @@ -1583,6 +1592,8 @@ public IndexMetadata build() { ); } + final int indexTotalShardsPerNodeLimit = ShardsLimitAllocationDecider.INDEX_TOTAL_SHARDS_PER_NODE_SETTING.get(settings); + final String uuid = settings.get(SETTING_INDEX_UUID, INDEX_UUID_NA_VALUE); return new IndexMetadata( @@ -1610,7 +1621,8 @@ public IndexMetadata build() { routingPartitionSize, waitForActiveShards, rolloverInfos, - isSystem + isSystem, + indexTotalShardsPerNodeLimit ); } diff --git a/server/src/main/java/org/opensearch/cluster/routing/allocation/decider/ShardsLimitAllocationDecider.java b/server/src/main/java/org/opensearch/cluster/routing/allocation/decider/ShardsLimitAllocationDecider.java index c008102554e8c..6f211f370de95 100644 --- a/server/src/main/java/org/opensearch/cluster/routing/allocation/decider/ShardsLimitAllocationDecider.java +++ b/server/src/main/java/org/opensearch/cluster/routing/allocation/decider/ShardsLimitAllocationDecider.java @@ -32,7 +32,6 @@ package org.opensearch.cluster.routing.allocation.decider; -import org.opensearch.cluster.metadata.IndexMetadata; import org.opensearch.cluster.routing.RoutingNode; import org.opensearch.cluster.routing.ShardRouting; import org.opensearch.cluster.routing.ShardRoutingState; @@ -125,8 +124,7 @@ private Decision doDecide( RoutingAllocation allocation, BiPredicate<Integer, Integer> decider ) { - IndexMetadata indexMd = allocation.metadata().getIndexSafe(shardRouting.index()); - final int indexShardLimit = INDEX_TOTAL_SHARDS_PER_NODE_SETTING.get(indexMd.getSettings(), settings); + final int indexShardLimit = allocation.metadata().getIndexSafe(shardRouting.index()).getIndexTotalShardsPerNodeLimit(); // Capture the limit here in case it changes during this method's // execution final int clusterShardLimit = this.clusterShardLimit; From 691f78ca480588df3d27dcad96601b22e77b6386 Mon Sep 17 00:00:00 2001 From: Andriy Redko Date: Mon, 29 Jul 2024 11:50:29 -0400 Subject: [PATCH 128/167] OpenJDK Update (July 2024 Patch releases) (#14998) Signed-off-by: Andriy Redko --- CHANGELOG.md | 1 + .../java/org/opensearch/gradle/test/DistroTestPlugin.java | 4 ++-- buildSrc/version.properties | 2 +- 3 files changed, 4 insertions(+), 3 deletions(-) diff --git a/CHANGELOG.md b/CHANGELOG.md index 138efce1c29e7..1b21fded97e47 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -9,6 +9,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), ### Dependencies - Bump `org.apache.commons:commons-lang3` from 3.14.0 to 3.15.0 ([#14861](https://github.com/opensearch-project/OpenSearch/pull/14861)) +- OpenJDK Update (July 2024 Patch releases) ([#14998](https://github.com/opensearch-project/OpenSearch/pull/14998)) - Bump `com.microsoft.azure:msal4j` from 1.16.1 to 1.16.2 ([#14995](https://github.com/opensearch-project/OpenSearch/pull/14995)) ### Changed diff --git a/buildSrc/src/main/java/org/opensearch/gradle/test/DistroTestPlugin.java b/buildSrc/src/main/java/org/opensearch/gradle/test/DistroTestPlugin.java index b2b3e3003e572..8d5ce9143cbac 100644 --- a/buildSrc/src/main/java/org/opensearch/gradle/test/DistroTestPlugin.java +++ b/buildSrc/src/main/java/org/opensearch/gradle/test/DistroTestPlugin.java @@ -77,9 +77,9 @@ import java.util.stream.Stream; public class DistroTestPlugin implements Plugin<Project> { - private static final String SYSTEM_JDK_VERSION = "21.0.3+9"; + private static
final String SYSTEM_JDK_VERSION = "21.0.4+7"; private static final String SYSTEM_JDK_VENDOR = "adoptium"; - private static final String GRADLE_JDK_VERSION = "21.0.3+9"; + private static final String GRADLE_JDK_VERSION = "21.0.4+7"; private static final String GRADLE_JDK_VENDOR = "adoptium"; // all distributions used by distro tests. this is temporary until tests are per distribution diff --git a/buildSrc/version.properties b/buildSrc/version.properties index 855ccc1f87413..7d32ed3df7b76 100644 --- a/buildSrc/version.properties +++ b/buildSrc/version.properties @@ -2,7 +2,7 @@ opensearch = 3.0.0 lucene = 9.12.0-snapshot-847316d bundled_jdk_vendor = adoptium -bundled_jdk = 21.0.3+9 +bundled_jdk = 21.0.4+7 # optional dependencies spatial4j = 0.7 From f5b0ebaab8632932faa362caffa412b7eb6eb23a Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Mon, 29 Jul 2024 12:03:23 -0500 Subject: [PATCH 129/167] Bump actions/github-script from 6 to 7 (#14997) * Bump actions/github-script from 6 to 7 Bumps [actions/github-script](https://github.com/actions/github-script) from 6 to 7. - [Release notes](https://github.com/actions/github-script/releases) - [Commits](https://github.com/actions/github-script/compare/v6...v7) --- updated-dependencies: - dependency-name: actions/github-script dependency-type: direct:production update-type: version-update:semver-major ... Signed-off-by: dependabot[bot] * Update changelog Signed-off-by: dependabot[bot] --------- Signed-off-by: dependabot[bot] Signed-off-by: Daniel (dB.) Doubrovkine Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: dependabot[bot] Co-authored-by: Daniel (dB.) Doubrovkine --- .github/workflows/add-performance-comment.yml | 2 +- .github/workflows/benchmark-pull-request.yml | 6 +++--- .github/workflows/maintainer-approval.yml | 2 +- .github/workflows/triage.yml | 2 +- .github/workflows/version.yml | 2 +- CHANGELOG.md | 1 + 6 files changed, 8 insertions(+), 7 deletions(-) diff --git a/.github/workflows/add-performance-comment.yml b/.github/workflows/add-performance-comment.yml index fc272714c5628..6a310bff4c0a1 100644 --- a/.github/workflows/add-performance-comment.yml +++ b/.github/workflows/add-performance-comment.yml @@ -16,7 +16,7 @@ jobs: steps: - name: Add comment to PR - uses: actions/github-script@v6 + uses: actions/github-script@v7 with: github-token: ${{secrets.GITHUB_TOKEN}} script: | diff --git a/.github/workflows/benchmark-pull-request.yml b/.github/workflows/benchmark-pull-request.yml index 47abcc1178572..98dd39b1dad54 100644 --- a/.github/workflows/benchmark-pull-request.yml +++ b/.github/workflows/benchmark-pull-request.yml @@ -25,7 +25,7 @@ jobs: echo "USER_TAGS=pull_request_number:${{ github.event.issue.number }},repository:OpenSearch" >> $GITHUB_ENV - name: Check comment format id: check_comment - uses: actions/github-script@v6 + uses: actions/github-script@v7 with: script: | const fs = require('fs'); @@ -62,7 +62,7 @@ jobs: } - name: Post invalid format comment if: steps.check_comment.outputs.invalid == 'true' - uses: actions/github-script@v6 + uses: actions/github-script@v7 with: github-token: ${{secrets.GITHUB_TOKEN}} script: | @@ -150,7 +150,7 @@ jobs: cat $GITHUB_ENV bash opensearch-build/scripts/benchmark/benchmark-pull-request.sh ${{ secrets.JENKINS_PR_BENCHMARK_GENERIC_WEBHOOK_TOKEN }} - name: Update PR with Job Url - uses: actions/github-script@v6 + uses: actions/github-script@v7 with: github-token: ${{ 
secrets.GITHUB_TOKEN }} script: | diff --git a/.github/workflows/maintainer-approval.yml b/.github/workflows/maintainer-approval.yml index fdc2bf16937b4..34e8f57cc1878 100644 --- a/.github/workflows/maintainer-approval.yml +++ b/.github/workflows/maintainer-approval.yml @@ -9,7 +9,7 @@ jobs: runs-on: ubuntu-latest steps: - id: find-maintainers - uses: actions/github-script@v7.0.1 + uses: actions/github-script@v7 with: github-token: ${{ secrets.GITHUB_TOKEN }} result-encoding: string diff --git a/.github/workflows/triage.yml b/.github/workflows/triage.yml index 83bf4926a8c2d..c305818bdb0a9 100644 --- a/.github/workflows/triage.yml +++ b/.github/workflows/triage.yml @@ -9,7 +9,7 @@ jobs: if: github.repository == 'opensearch-project/OpenSearch' runs-on: ubuntu-latest steps: - - uses: actions/github-script@v7.0.1 + - uses: actions/github-script@v7 with: script: | const { issue, repository } = context.payload; diff --git a/.github/workflows/version.yml b/.github/workflows/version.yml index 7f120b65d7c2e..2de54716256ff 100644 --- a/.github/workflows/version.yml +++ b/.github/workflows/version.yml @@ -129,7 +129,7 @@ jobs: - name: Create tracking issue id: create-issue - uses: actions/github-script@v7.0.1 + uses: actions/github-script@v7 with: script: | const body = ` diff --git a/CHANGELOG.md b/CHANGELOG.md index 1b21fded97e47..7cc918b2ac089 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -11,6 +11,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), - Bump `org.apache.commons:commons-lang3` from 3.14.0 to 3.15.0 ([#14861](https://github.com/opensearch-project/OpenSearch/pull/14861)) - OpenJDK Update (July 2024 Patch releases) ([#14998](https://github.com/opensearch-project/OpenSearch/pull/14998)) - Bump `com.microsoft.azure:msal4j` from 1.16.1 to 1.16.2 ([#14995](https://github.com/opensearch-project/OpenSearch/pull/14995)) +- Bump `actions/github-script` from 6 to 7 ([#14997](https://github.com/opensearch-project/OpenSearch/pull/14997)) ### Changed From e26608b1492a8c1dcc61b8f3965564a40b3c0401 Mon Sep 17 00:00:00 2001 From: Marc Handalian Date: Mon, 29 Jul 2024 13:43:22 -0700 Subject: [PATCH 130/167] [Derived Fields] Add aggregation support for derived fields (#14618) * Add aggregation support for derived fields Signed-off-by: Marc Handalian * add unit test for a terms agg with derived fields Signed-off-by: Marc Handalian * Fix license header and add changelog entry Signed-off-by: Marc Handalian * move matrix_stats tests to aggs-matrix-stats module Signed-off-by: Marc Handalian * Move matrix tests back and add dependency to painless module Signed-off-by: Marc Handalian * add tests for all aggregations types and support ip_range Signed-off-by: Marc Handalian * Add tests for agg script returned from DerivedFieldType Signed-off-by: Marc Handalian * remove children aggs test as its not yet supported Signed-off-by: Marc Handalian * Add more tests Signed-off-by: Marc Handalian * fix changelog Signed-off-by: Marc Handalian --------- Signed-off-by: Marc Handalian --- CHANGELOG.md | 1 + modules/lang-painless/build.gradle | 1 + .../derived_fields/60_derived_field_aggs.yml | 1521 +++++++++++++++++ .../index/mapper/DerivedFieldType.java | 75 +- .../index/mapper/ObjectDerivedFieldType.java | 31 +- .../support/ValuesSourceConfig.java | 9 +- .../support/values/ScriptBytesValues.java | 7 +- .../index/mapper/DerivedFieldTypeTests.java | 69 + .../terms/DerivedFieldAggregationTests.java | 146 ++ .../support/ValuesSourceConfigTests.java | 39 + 10 files 
changed, 1889 insertions(+), 10 deletions(-) create mode 100644 modules/lang-painless/src/yamlRestTest/resources/rest-api-spec/test/painless/derived_fields/60_derived_field_aggs.yml create mode 100644 server/src/test/java/org/opensearch/search/aggregations/bucket/terms/DerivedFieldAggregationTests.java diff --git a/CHANGELOG.md b/CHANGELOG.md index 7cc918b2ac089..36cd33cc40453 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -6,6 +6,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), ## [Unreleased 2.x] ### Added - Fix for hasInitiatedFetching to fix allocation explain and manual reroute APIs (([#14972](https://github.com/opensearch-project/OpenSearch/pull/14972)) +- Add basic aggregation support for derived fields ([#14618](https://github.com/opensearch-project/OpenSearch/pull/14618)) ### Dependencies - Bump `org.apache.commons:commons-lang3` from 3.14.0 to 3.15.0 ([#14861](https://github.com/opensearch-project/OpenSearch/pull/14861)) diff --git a/modules/lang-painless/build.gradle b/modules/lang-painless/build.gradle index fb51a0bb7f157..7b828109139c8 100644 --- a/modules/lang-painless/build.gradle +++ b/modules/lang-painless/build.gradle @@ -46,6 +46,7 @@ ext { testClusters.all { module ':modules:mapper-extras' + module ':modules:aggs-matrix-stats' systemProperty 'opensearch.scripting.update.ctx_in_params', 'false' // TODO: remove this once cname is prepended to transport.publish_address by default in 8.0 systemProperty 'opensearch.transport.cname_in_publish_address', 'true' diff --git a/modules/lang-painless/src/yamlRestTest/resources/rest-api-spec/test/painless/derived_fields/60_derived_field_aggs.yml b/modules/lang-painless/src/yamlRestTest/resources/rest-api-spec/test/painless/derived_fields/60_derived_field_aggs.yml new file mode 100644 index 0000000000000..ba879a5fd73c3 --- /dev/null +++ b/modules/lang-painless/src/yamlRestTest/resources/rest-api-spec/test/painless/derived_fields/60_derived_field_aggs.yml @@ -0,0 +1,1521 @@ +--- +setup: +- skip: + version: " - 2.14.99" + reason: "derived_field feature was added in 2.15" + +# -- NOT SUPPORTED: -- +# geobounds +# scripted metric +# -- NOT SUPPORTED: -- +# Any geo agg +# sig terms/text + +- do: + indices.create: + index: test + body: + mappings: + properties: + text: + type: text + keyword: + type: keyword + os: + type: keyword + long: + type: long + float: + type: float + double: + type: double + date: + type: date + geo: + type: geo_point + ip: + type: ip + boolean: + type: boolean + array_of_long: + type: long + json_field: + type: text + derived: + derived_text: + type: text + script: "emit(params._source[\"text\"])" + derived_text_prefilter_field: + type: text + script: "emit(params._source[\"text\"])" + prefilter_field: "text" + derived_keyword: + type: keyword + script: "emit(params._source[\"keyword\"])" + derived_os: + type: keyword + script: "emit(params._source[\"os\"])" + derived_long: + type: long + script: "emit(params._source[\"long\"])" + derived_float: + type: float + script: "emit(params._source[\"float\"])" + derived_double: + type: double + script: "emit(params._source[\"double\"])" + derived_date: + type: date + script: "emit(ZonedDateTime.parse(params._source[\"date\"]).toInstant().toEpochMilli())" + derived_geo: + type: geo_point + script: "emit(params._source[\"geo\"][0], params._source[\"geo\"][1])" + derived_ip: + type: ip + script: "emit(params._source[\"ip\"])" + derived_boolean: + type: boolean + script: "emit(params._source[\"boolean\"])" + 
derived_array_of_long: + type: long + script: "emit(params._source[\"array_of_long\"][0]);emit(params._source[\"array_of_long\"][1]);" + derived_object: + type: object + properties: + keyword: keyword + ip: ip + os: keyword + script: "emit(params._source[\"json_field\"])" + prefilter_field: "json_field" + +- do: + bulk: + refresh: true + body: + - index: + _index: test + _id: 1 + - text: "peter piper" + keyword: "foo" + os: "mac" + long: 1 + float: 1.0 + double: 1.0 + date: "2017-01-01T00:00:00Z" + geo: [ -74.0060, 40.7128 ] + ip: "192.168.0.1" + boolean: true + array_of_long: [ 1, 2 ] + json_field: "{\"text\":\"peter piper\",\"keyword\":\"foo\",\"os\":\"mac\",\"long\":1,\"float\":1.0,\"double\":1.0,\"date\":\"2017-01-01T00:00:00Z\",\"ip\":\"192.168.0.1\",\"boolean\":true, \"array_of_long\": [1, 2]}" + - index: + _index: test + _id: 2 + - text: "piper picked a peck" + keyword: "bar" + os: "windows" + long: 2 + float: 2.0 + double: 2.0 + date: "2017-01-02T00:00:00Z" + geo: [ -118.2437, 34.0522 ] + ip: "10.0.0.1" + boolean: false + array_of_long: [ 2, 3 ] + json_field: "{\"keyword\":\"bar\",\"long\":2,\"float\":2.0,\"os\":\"windows\",\"double\":2.0,\"date\":\"2017-01-02T00:00:00Z\",\"ip\":\"10.0.0.1\",\"boolean\":false, \"array_of_long\": [2, 3]}" + - index: + _index: test + _id: 3 + - text: "peck of pickled peppers" + keyword: "baz" + os: "mac" + long: -3 + float: -3.0 + double: -3.0 + date: "2017-01-03T00:00:00Z" + geo: [ -87.6298, 41.87 ] + ip: "172.16.0.1" + boolean: true + array_of_long: [ 3, 4 ] + json_field: "{\"keyword\":\"baz\",\"long\":-3,\"float\":-3.0,\"os\":\"mac\",\"double\":-3.0,\"date\":\"2017-01-03T00:00:00Z\",\"ip\":\"172.16.0.1\",\"boolean\":true, \"array_of_long\": [3, 4]}" + - index: + _index: test + _id: 4 + - text: "pickled peppers" + keyword: "qux" + os: "windows" + long: 4 + float: 4.0 + double: 4.0 + date: "2017-01-04T00:00:00Z" + geo: [ -74.0060, 40.7128 ] + ip: "192.168.0.2" + boolean: false + array_of_long: [ 4, 5 ] + json_field: "{\"keyword\":\"qux\",\"long\":4,\"float\":4.0,\"os\":\"windows\",\"double\":4.0,\"date\":\"2017-01-04T00:00:00Z\",\"ip\":\"192.168.0.2\",\"boolean\":false, \"array_of_long\": [4, 5]}" + - index: + _index: test + _id: 5 + - text: "peppers" + keyword: "quux" + os: "mac" + long: 5 + float: 5.0 + double: 5.0 + date: "2017-01-05T00:00:00Z" + geo: [ -87.6298, 41.87 ] + ip: "10.0.0.2" + boolean: true + array_of_long: [ 5, 6 ] + json_field: "{\"keyword\":\"quux\",\"long\":5,\"float\":5.0,\"os\":\"mac\",\"double\":5.0,\"date\":\"2017-01-05T00:00:00Z\",\"ip\":\"10.0.0.2\",\"boolean\":true, \"array_of_long\": [5, 6]}" + +- do: + indices.refresh: + index: [test] + +### BUCKET AGGS +--- +"Test terms aggregation on derived_keyword from search definition": +- do: + search: + index: test + body: + derived: + derived_keyword_search_definition: + type: keyword + script: "emit(params._source[\"keyword\"])" + size: 0 + aggs: + keywords: + terms: + field: derived_keyword_search_definition + +- match: { hits.total.value: 5 } +- length: { aggregations.keywords.buckets: 5 } +- match: { aggregations.keywords.buckets.0.key: "bar" } +- match: { aggregations.keywords.buckets.0.doc_count: 1 } +- match: { aggregations.keywords.buckets.1.key: "baz" } +- match: { aggregations.keywords.buckets.1.doc_count: 1 } +- match: { aggregations.keywords.buckets.2.key: "foo" } +- match: { aggregations.keywords.buckets.2.doc_count: 1 } +- match: { aggregations.keywords.buckets.3.key: "quux" } +- match: { aggregations.keywords.buckets.3.doc_count: 1 } +- match: { 
aggregations.keywords.buckets.4.key: "qux" } +- match: { aggregations.keywords.buckets.4.doc_count: 1 } + +--- +"Test terms aggregation on derived_keyword": +- do: + search: + index: test + body: + size: 0 + aggs: + keywords: + terms: + field: derived_keyword + +- match: { hits.total.value: 5 } +- length: { aggregations.keywords.buckets: 5 } +- match: { aggregations.keywords.buckets.0.key: "bar" } +- match: { aggregations.keywords.buckets.0.doc_count: 1 } +- match: { aggregations.keywords.buckets.1.key: "baz" } +- match: { aggregations.keywords.buckets.1.doc_count: 1 } +- match: { aggregations.keywords.buckets.2.key: "foo" } +- match: { aggregations.keywords.buckets.2.doc_count: 1 } +- match: { aggregations.keywords.buckets.3.key: "quux" } +- match: { aggregations.keywords.buckets.3.doc_count: 1 } +- match: { aggregations.keywords.buckets.4.key: "qux" } +- match: { aggregations.keywords.buckets.4.doc_count: 1 } + +--- +"Test range aggregation on derived_long": +- do: + search: + index: test + body: + size: 0 + aggs: + long_ranges: + range: + field: derived_long + ranges: + - to: 0 + - from: 0 + to: 3 + - from: 3 + +- match: { hits.total.value: 5 } +- length: { aggregations.long_ranges.buckets: 3 } +- match: { aggregations.long_ranges.buckets.0.doc_count: 1 } +- match: { aggregations.long_ranges.buckets.1.doc_count: 2 } +- match: { aggregations.long_ranges.buckets.2.doc_count: 2 } + +--- +"Test histogram aggregation on derived_float": +- do: + search: + index: test + body: + size: 0 + aggs: + float_histogram: + histogram: + field: derived_float + interval: 2 + +- match: { hits.total.value: 5 } +- length: { aggregations.float_histogram.buckets: 5 } +- match: { aggregations.float_histogram.buckets.0.key: -4.0 } +- match: { aggregations.float_histogram.buckets.0.doc_count: 1 } + +--- +"Test date_histogram aggregation on derived_date": +- do: + search: + index: test + body: + size: 0 + aggs: + date_histogram: + date_histogram: + field: derived_date + calendar_interval: day + +- match: { hits.total.value: 5 } +- length: { aggregations.date_histogram.buckets: 5 } +- match: { aggregations.date_histogram.buckets.0.key_as_string: "2017-01-01T00:00:00.000Z" } +- match: { aggregations.date_histogram.buckets.0.doc_count: 1 } +- match: { aggregations.date_histogram.buckets.1.key_as_string: "2017-01-02T00:00:00.000Z" } +- match: { aggregations.date_histogram.buckets.1.doc_count: 1 } +- match: { aggregations.date_histogram.buckets.2.key_as_string: "2017-01-03T00:00:00.000Z" } +- match: { aggregations.date_histogram.buckets.2.doc_count: 1 } +- match: { aggregations.date_histogram.buckets.3.key_as_string: "2017-01-04T00:00:00.000Z" } +- match: { aggregations.date_histogram.buckets.3.doc_count: 1 } +- match: { aggregations.date_histogram.buckets.4.key_as_string: "2017-01-05T00:00:00.000Z" } +- match: { aggregations.date_histogram.buckets.4.doc_count: 1 } + +--- +"Test date_range aggregation on derived_date": +- do: + search: + index: test + body: + size: 0 + aggs: + date_range: + date_range: + field: derived_date + ranges: + - to: "2017-01-03T00:00:00Z" + - from: "2017-01-03T00:00:00Z" + +- match: { hits.total.value: 5 } +- match: { aggregations.date_range.buckets.0.key: "*-2017-01-03T00:00:00.000Z" } +- match: { aggregations.date_range.buckets.0.doc_count: 2 } +- match: { aggregations.date_range.buckets.1.key: "2017-01-03T00:00:00.000Z-*" } +- match: { aggregations.date_range.buckets.1.doc_count: 3 } + +--- +"Test filters aggregation on derived_boolean": +- do: + search: + index: test + body: + size: 0 + 
aggs: + boolean_filters: + filters: + filters: + true_values: + term: + derived_boolean: true + false_values: + term: + derived_boolean: false + +- match: { hits.total.value: 5 } +- match: { aggregations.boolean_filters.buckets.true_values.doc_count: 3 } +- match: { aggregations.boolean_filters.buckets.false_values.doc_count: 2 } + +--- +"Test adjacency matrix aggregation on derived_long": +- do: + search: + index: test + body: + size: 0 + aggs: + adj_matrix: + adjacency_matrix: + filters: + high_num: + range: + derived_long: + gte: 3 + low_num: + range: + derived_long: + lt: 3 +- match: { hits.total.value: 5 } +- length: { aggregations.adj_matrix.buckets: 2 } +- match: { aggregations.adj_matrix.buckets.0.key: "high_num" } +- match: { aggregations.adj_matrix.buckets.0.doc_count: 2 } +- match: { aggregations.adj_matrix.buckets.1.key: "low_num" } +- match: { aggregations.adj_matrix.buckets.1.doc_count: 3 } + +### METRIC AGGS + +--- +"Test stats aggregation on derived_array_of_long": +- do: + search: + index: test + body: + size: 0 + aggs: + long_array_stats: + stats: + field: derived_array_of_long + +- match: { hits.total.value: 5 } +- match: { aggregations.long_array_stats.count: 10 } +- match: { aggregations.long_array_stats.min: 1 } +- match: { aggregations.long_array_stats.max: 6 } +- match: { aggregations.long_array_stats.avg: 3.5 } +- match: { aggregations.long_array_stats.sum: 35 } + +--- +"Test cardinality aggregation on derived_keyword": +- do: + search: + index: test + body: + size: 0 + aggs: + unique_keywords: + cardinality: + field: derived_keyword + +- match: { hits.total.value: 5 } +- match: { aggregations.unique_keywords.value: 5 } + +--- +"Test percentiles aggregation on derived_double": +- do: + search: + index: test + body: + size: 0 + aggs: + double_percentiles: + percentiles: + field: derived_double + percents: [ 25, 50, 75 ] + +- match: { hits.total.value: 5 } +- match: { aggregations.double_percentiles.values.25\.0: 1.0 } +- match: { aggregations.double_percentiles.values.50\.0: 2.0 } +- match: { aggregations.double_percentiles.values.75\.0: 4.0 } + +--- +"Test percentile ranks aggregation on derived_long": +- do: + search: + index: test + body: + size: 0 + aggs: + long_percentile_ranks: + percentile_ranks: + field: derived_long + values: [ 2, 4 ] + +- match: { hits.total.value: 5 } +- match: { aggregations.long_percentile_ranks.values.2\.0: 50.0 } +- match: { aggregations.long_percentile_ranks.values.4\.0: 70.0 } + +--- +"Test top hits aggregation on derived_keyword": +- do: + search: + index: test + body: + size: 0 + aggs: + top_keywords: + terms: + field: derived_keyword + aggs: + top_hits: + top_hits: + size: 1 +- match: { hits.total.value: 5 } +- length: { aggregations.top_keywords.buckets: 5 } +- match: { aggregations.top_keywords.buckets.0.key: "bar" } +- match: { aggregations.top_keywords.buckets.0.doc_count: 1 } +- length: { aggregations.top_keywords.buckets.0.top_hits.hits.hits: 1 } + +--- +"Test matrix stats aggregation on derived_long and float": +- do: + search: + index: test + body: + size: 0 + aggs: + matrix_stats: + matrix_stats: + fields: [ derived_long, derived_float ] +- match: { hits.total.value: 5 } +- length: { aggregations.matrix_stats.fields: 2 } +- match: { aggregations.matrix_stats.fields.0.name: "derived_float" } +- match: { aggregations.matrix_stats.fields.0.count: 5 } +- match: { aggregations.matrix_stats.fields.1.name: "derived_long" } +- match: { aggregations.matrix_stats.fields.1.count: 5 } + +--- +"Test median absolute deviation 
aggregation on derived_long": +- do: + search: + index: test + body: + size: 0 + aggs: + mad_long: + median_absolute_deviation: + field: derived_long +- match: { hits.total.value: 5 } +- match: { aggregations.mad_long.value: 2.0 } + +## Pipeline agg +--- +"Test simple pipeline agg with derived_keyword and long": +- do: + search: + index: test + body: + size: 0 + aggs: + keywords: + terms: + field: derived_keyword + aggs: + sum_derived_longs: + sum: + field: derived_long + sum_total: + sum_bucket: + buckets_path: "keywords>sum_derived_longs" +- match: { hits.total.value: 5 } +- match: { aggregations.keywords.buckets.0.key: "bar" } +- match: { aggregations.keywords.buckets.0.sum_derived_longs.value: 2 } +- match: { aggregations.keywords.buckets.1.key: "baz" } +- match: { aggregations.keywords.buckets.1.sum_derived_longs.value: -3 } +- match: { aggregations.keywords.buckets.2.key: "foo" } +- match: { aggregations.keywords.buckets.2.sum_derived_longs.value: 1 } +- match: { aggregations.keywords.buckets.3.key: "quux" } +- match: { aggregations.keywords.buckets.3.sum_derived_longs.value: 5 } +- match: { aggregations.keywords.buckets.4.key: "qux" } +- match: { aggregations.keywords.buckets.4.sum_derived_longs.value: 4 } +- match: { aggregations.sum_total.value: 9 } + + +--- +"Test terms aggregation on derived_ip": +- do: + search: + index: test + body: + size: 0 + aggs: + ip_terms: + terms: + field: derived_ip + +- match: { hits.total.value: 5 } +- length: { aggregations.ip_terms.buckets: 5 } +- match: { aggregations.ip_terms.buckets.0.key: "10.0.0.1" } +- match: { aggregations.ip_terms.buckets.0.doc_count: 1 } + +--- +"Test range aggregation on derived_ip": +- do: + search: + index: test + body: + size: 0 + aggs: + ip_ranges: + ip_range: + field: derived_ip + ranges: + - to: "10.0.0.0" + - from: "10.0.0.0" + to: "172.16.0.0" + - from: "172.16.0.0" + +- match: { hits.total.value: 5 } +- length: { aggregations.ip_ranges.buckets: 3 } +- match: { aggregations.ip_ranges.buckets.0.doc_count: 0 } +- match: { aggregations.ip_ranges.buckets.1.doc_count: 2 } +- match: { aggregations.ip_ranges.buckets.2.doc_count: 3 } + +--- +"Test cardinality aggregation on derived_ip": +- do: + search: + index: test + body: + size: 0 + aggs: + unique_ips: + cardinality: + field: derived_ip + +- match: { hits.total.value: 5 } +- match: { aggregations.unique_ips.value: 5 } + +--- +"Test missing aggregation on derived_ip": +- do: + search: + index: test + body: + size: 0 + aggs: + missing_ips: + missing: + field: derived_ip + +- match: { hits.total.value: 5 } +- match: { aggregations.missing_ips.doc_count: 0 } + +--- +"Test value count aggregation on derived_ip": +- do: + search: + index: test + body: + size: 0 + aggs: + ip_count: + value_count: + field: derived_ip + +- match: { hits.total.value: 5 } +- match: { aggregations.ip_count.value: 5 } + +--- +"Test composite agg": +- do: + search: + index: test + body: + size: 0 + aggs: + test_composite_agg: + composite: + size: 10 + sources: + - os: + terms: + field: derived_os + - keyword: + terms: + field: derived_keyword + - is_true: + terms: + field: derived_boolean + aggs: + avg_long: + avg: + field: derived_long +- match: { aggregations.test_composite_agg.buckets.0.key.os: "mac" } +- match: { aggregations.test_composite_agg.buckets.0.key.keyword: "baz" } +- match: { aggregations.test_composite_agg.buckets.0.key.is_true: true } +- match: { aggregations.test_composite_agg.buckets.0.doc_count: 1 } +- match: { aggregations.test_composite_agg.buckets.0.avg_long.value: -3.0 } +- 
match: { aggregations.test_composite_agg.buckets.1.key.os: "mac" } +- match: { aggregations.test_composite_agg.buckets.1.key.keyword: "foo" } +- match: { aggregations.test_composite_agg.buckets.1.key.is_true: true } +- match: { aggregations.test_composite_agg.buckets.1.doc_count: 1 } +- match: { aggregations.test_composite_agg.buckets.1.avg_long.value: 1.0 } +- match: { aggregations.test_composite_agg.buckets.2.key.os: "mac" } +- match: { aggregations.test_composite_agg.buckets.2.key.keyword: "quux" } +- match: { aggregations.test_composite_agg.buckets.2.key.is_true: true } +- match: { aggregations.test_composite_agg.buckets.2.doc_count: 1 } +- match: { aggregations.test_composite_agg.buckets.2.avg_long.value: 5.0 } +- match: { aggregations.test_composite_agg.buckets.3.key.os: "windows" } +- match: { aggregations.test_composite_agg.buckets.3.key.keyword: "bar" } +- match: { aggregations.test_composite_agg.buckets.3.key.is_true: false } +- match: { aggregations.test_composite_agg.buckets.3.doc_count: 1 } +- match: { aggregations.test_composite_agg.buckets.3.avg_long.value: 2.0 } +- match: { aggregations.test_composite_agg.buckets.4.key.os: "windows" } +- match: { aggregations.test_composite_agg.buckets.4.key.keyword: "qux" } +- match: { aggregations.test_composite_agg.buckets.4.key.is_true: false } +- match: { aggregations.test_composite_agg.buckets.4.doc_count: 1 } +- match: { aggregations.test_composite_agg.buckets.4.avg_long.value: 4.0 } + +--- +"Test auto date histogram": +- do: + search: + rest_total_hits_as_int: true + index: test + body: + size: 0 + aggs: + test_auto_date_histogram: + auto_date_histogram: + field: "derived_date" + buckets: 10 + format: "yyyy-MM-dd" + aggs: + avg_long: + avg: + field: derived_long +- match: { hits.total: 5 } +- length: { aggregations.test_auto_date_histogram.buckets: 9 } +- match: { aggregations.test_auto_date_histogram.buckets.0.key_as_string: "2017-01-01"} +- match: { aggregations.test_auto_date_histogram.buckets.0.avg_long.value: 1.0} + +--- +"Test variable_width_histogram aggregation": +- do: + search: + index: test + body: + size: 0 + aggs: + var_width_hist: + variable_width_histogram: + field: derived_long + buckets: 3 + +- match: { hits.total.value: 5 } +- length: { aggregations.var_width_hist.buckets: 3 } + +--- +"Test extended_stats aggregation": +- do: + search: + index: test + body: + size: 0 + aggs: + extended_stats_agg: + extended_stats: + field: derived_long + +- match: { hits.total.value: 5 } +- match: { aggregations.extended_stats_agg.count: 5 } +- match: { aggregations.extended_stats_agg.min: -3 } +- match: { aggregations.extended_stats_agg.max: 5 } +- is_true: aggregations.extended_stats_agg.avg +- is_true: aggregations.extended_stats_agg.sum +- is_true: aggregations.extended_stats_agg.sum_of_squares +- is_true: aggregations.extended_stats_agg.variance +- is_true: aggregations.extended_stats_agg.std_deviation + +--- +"Test rare_terms aggregation": +- do: + search: + index: test + body: + size: 0 + aggs: + rare_terms_agg: + rare_terms: + field: derived_keyword + max_doc_count: 1 + +- match: { hits.total.value: 5 } +- length: { aggregations.rare_terms_agg.buckets: 5 } + +--- +"Test global aggregation": +- do: + search: + index: test + body: + query: + term: + derived_keyword: "foo" + aggs: + all_docs: + global: {} + aggs: + avg_long: + avg: + field: derived_long + +- match: { hits.total.value: 1 } +- match: { aggregations.all_docs.doc_count: 5 } +- match: { aggregations.all_docs.avg_long.value: 1.8 } + +--- +"Test missing aggregation": 
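+# The missing agg counts docs that produce no value for the field; every doc
+# in this fixture carries a keyword in _source, so the derived script emits a
+# value for all five docs and the expected missing doc_count is 0.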
+- do: + search: + index: test + body: + size: 0 + aggs: + missing_agg: + missing: + field: derived_keyword + +- match: { hits.total.value: 5 } +- match: { aggregations.missing_agg.doc_count: 0 } + +--- +"Test value_count aggregation": +- do: + search: + index: test + body: + size: 0 + aggs: + value_count_agg: + value_count: + field: derived_long + +- match: { hits.total.value: 5 } +- match: { aggregations.value_count_agg.value: 5 } + +--- +"Test weighted_avg aggregation": +- do: + search: + index: test + body: + size: 0 + aggs: + weighted_avg_agg: + weighted_avg: + value: + field: derived_long + weight: + field: derived_float + +- match: { hits.total.value: 5 } +- is_true: aggregations.weighted_avg_agg.value + +--- +"Test diversified_sampler aggregation": +- do: + search: + index: test + body: + size: 0 + aggs: + diversified_sampler_agg: + diversified_sampler: + field: derived_keyword + max_docs_per_value: 1 + aggs: + avg_long: + avg: + field: derived_long + +- match: { hits.total.value: 5 } +- match: { aggregations.diversified_sampler_agg.doc_count: 5 } +- match: { aggregations.diversified_sampler_agg.avg_long.value: 1.8 } + +--- +"Test sampler aggregation": +- do: + search: + index: test + body: + size: 0 + aggs: + sampler_agg: + sampler: + shard_size: 2 + aggs: + avg_long: + avg: + field: derived_long + +- match: { hits.total.value: 5 } +- is_true: aggregations.sampler_agg.doc_count +- is_true: aggregations.sampler_agg.avg_long.value + +--- +"Test multi_terms aggregation": +- do: + search: + index: test + body: + size: 0 + aggs: + multi_terms_agg: + multi_terms: + terms: + - field: derived_keyword + - field: derived_os + size: 10 + +- match: { hits.total.value: 5 } +- length: { aggregations.multi_terms_agg.buckets: 5 } + +#### SAME TESTS WITH DERIVED_OBJECT +--- +"Test terms aggregation on derived_object.keyword": +- do: + search: + index: test + body: + size: 0 + aggs: + keywords: + terms: + field: derived_object.keyword + +- match: { hits.total.value: 5 } +- length: { aggregations.keywords.buckets: 5 } +- match: { aggregations.keywords.buckets.0.key: "bar" } +- match: { aggregations.keywords.buckets.0.doc_count: 1 } +- match: { aggregations.keywords.buckets.1.key: "baz" } +- match: { aggregations.keywords.buckets.1.doc_count: 1 } +- match: { aggregations.keywords.buckets.2.key: "foo" } +- match: { aggregations.keywords.buckets.2.doc_count: 1 } +- match: { aggregations.keywords.buckets.3.key: "quux" } +- match: { aggregations.keywords.buckets.3.doc_count: 1 } +- match: { aggregations.keywords.buckets.4.key: "qux" } +- match: { aggregations.keywords.buckets.4.doc_count: 1 } + +--- +"Test range aggregation on derived_object.long": +- do: + search: + index: test + body: + size: 0 + aggs: + long_ranges: + range: + field: derived_object.long + ranges: + - to: 0 + - from: 0 + to: 3 + - from: 3 + +- match: { hits.total.value: 5 } +- length: { aggregations.long_ranges.buckets: 3 } +- match: { aggregations.long_ranges.buckets.0.doc_count: 1 } +- match: { aggregations.long_ranges.buckets.1.doc_count: 2 } +- match: { aggregations.long_ranges.buckets.2.doc_count: 2 } + +--- +"Test histogram aggregation on derived_object.float": +- do: + search: + index: test + body: + size: 0 + aggs: + float_histogram: + histogram: + field: derived_object.float + interval: 2 + +- match: { hits.total.value: 5 } +- length: { aggregations.float_histogram.buckets: 5 } +- match: { aggregations.float_histogram.buckets.0.key: -4.0 } +- match: { aggregations.float_histogram.buckets.0.doc_count: 1 } + +--- +"Test 
date_histogram aggregation on derived_object.date": +- do: + search: + index: test + body: + size: 0 + aggs: + date_histogram: + date_histogram: + field: derived_object.date + calendar_interval: day + +- match: { hits.total.value: 5 } +- length: { aggregations.date_histogram.buckets: 5 } +- match: { aggregations.date_histogram.buckets.0.key_as_string: "2017-01-01T00:00:00.000Z" } +- match: { aggregations.date_histogram.buckets.0.doc_count: 1 } +- match: { aggregations.date_histogram.buckets.1.key_as_string: "2017-01-02T00:00:00.000Z" } +- match: { aggregations.date_histogram.buckets.1.doc_count: 1 } +- match: { aggregations.date_histogram.buckets.2.key_as_string: "2017-01-03T00:00:00.000Z" } +- match: { aggregations.date_histogram.buckets.2.doc_count: 1 } +- match: { aggregations.date_histogram.buckets.3.key_as_string: "2017-01-04T00:00:00.000Z" } +- match: { aggregations.date_histogram.buckets.3.doc_count: 1 } +- match: { aggregations.date_histogram.buckets.4.key_as_string: "2017-01-05T00:00:00.000Z" } +- match: { aggregations.date_histogram.buckets.4.doc_count: 1 } + +--- +"Test date_range aggregation on derived_object.date": +- do: + search: + index: test + body: + size: 0 + aggs: + date_range: + date_range: + field: derived_object.date + ranges: + - to: "2017-01-03T00:00:00Z" + - from: "2017-01-03T00:00:00Z" + +- match: { hits.total.value: 5 } +- match: { aggregations.date_range.buckets.0.key: "*-2017-01-03T00:00:00.000Z" } +- match: { aggregations.date_range.buckets.0.doc_count: 2 } +- match: { aggregations.date_range.buckets.1.key: "2017-01-03T00:00:00.000Z-*" } +- match: { aggregations.date_range.buckets.1.doc_count: 3 } + +--- +"Test filters aggregation on derived_object.boolean": +- do: + search: + index: test + body: + size: 0 + aggs: + boolean_filters: + filters: + filters: + true_values: + term: + derived_object.boolean: true + false_values: + term: + derived_object.boolean: false + +- match: { hits.total.value: 5 } +- match: { aggregations.boolean_filters.buckets.true_values.doc_count: 3 } +- match: { aggregations.boolean_filters.buckets.false_values.doc_count: 2 } + +--- +"Test adjacency matrix aggregation on derived_object.long": +- do: + search: + index: test + body: + size: 0 + aggs: + adj_matrix: + adjacency_matrix: + filters: + high_num: + range: + derived_object.long: + gte: 3 + low_num: + range: + derived_object.long: + lt: 3 +- match: { hits.total.value: 5 } +- length: { aggregations.adj_matrix.buckets: 2 } +- match: { aggregations.adj_matrix.buckets.0.key: "high_num" } +- match: { aggregations.adj_matrix.buckets.0.doc_count: 2 } +- match: { aggregations.adj_matrix.buckets.1.key: "low_num" } +- match: { aggregations.adj_matrix.buckets.1.doc_count: 3 } + +--- +"Test stats aggregation on derived_object.array_of_long": +- do: + search: + index: test + body: + size: 0 + aggs: + long_array_stats: + stats: + field: derived_object.array_of_long + +- match: { hits.total.value: 5 } +- match: { aggregations.long_array_stats.count: 10 } +- match: { aggregations.long_array_stats.min: 1 } +- match: { aggregations.long_array_stats.max: 6 } +- match: { aggregations.long_array_stats.avg: 3.5 } +- match: { aggregations.long_array_stats.sum: 35 } + +--- +"Test cardinality aggregation on derived_object_keyword": +- do: + search: + index: test + body: + size: 0 + aggs: + unique_keywords: + cardinality: + field: derived_object.keyword + +- match: { hits.total.value: 5 } +- match: { aggregations.unique_keywords.value: 5 } + +--- +"Test percentiles aggregation on derived_object.double": +- 
do: + search: + index: test + body: + size: 0 + aggs: + double_percentiles: + percentiles: + field: derived_object.double + percents: [ 25, 50, 75 ] + +- match: { hits.total.value: 5 } +- match: { aggregations.double_percentiles.values.25\.0: 1.0 } +- match: { aggregations.double_percentiles.values.50\.0: 2.0 } +- match: { aggregations.double_percentiles.values.75\.0: 4.0 } + +--- +"Test percentile ranks aggregation on derived_object.long": +- do: + search: + index: test + body: + size: 0 + aggs: + long_percentile_ranks: + percentile_ranks: + field: derived_object.long + values: [ 2, 4 ] + +- match: { hits.total.value: 5 } +- match: { aggregations.long_percentile_ranks.values.2\.0: 50.0 } +- match: { aggregations.long_percentile_ranks.values.4\.0: 70.0 } + +--- +"Test top hits aggregation on derived_object.keyword": +- do: + search: + index: test + body: + size: 0 + aggs: + top_keywords: + terms: + field: derived_object.keyword + aggs: + top_hits: + top_hits: + size: 1 +- match: { hits.total.value: 5 } +- length: { aggregations.top_keywords.buckets: 5 } +- match: { aggregations.top_keywords.buckets.0.key: "bar" } +- match: { aggregations.top_keywords.buckets.0.doc_count: 1 } +- length: { aggregations.top_keywords.buckets.0.top_hits.hits.hits: 1 } + +--- +"Test matrix stats aggregation on derived_object.long and float": +- do: + search: + index: test + body: + size: 0 + aggs: + matrix_stats: + matrix_stats: + fields: [ derived_object.long, derived_object.float ] +- match: { hits.total.value: 5 } +- length: { aggregations.matrix_stats.fields: 2 } +- match: { aggregations.matrix_stats.fields.0.name: "derived_object.long" } +- match: { aggregations.matrix_stats.fields.0.count: 5 } +- match: { aggregations.matrix_stats.fields.1.name: "derived_object.float" } +- match: { aggregations.matrix_stats.fields.1.count: 5 } + +--- +"Test median absolute deviation aggregation on derived_object.long": +- do: + search: + index: test + body: + size: 0 + aggs: + mad_long: + median_absolute_deviation: + field: derived_object.long +- match: { hits.total.value: 5 } +- match: { aggregations.mad_long.value: 2.0 } + +--- +"Test simple pipeline agg derived_object": +- do: + search: + index: test + body: + size: 0 + aggs: + keywords: + terms: + field: derived_object.keyword + aggs: + sum_derived_longs: + sum: + field: derived_object.long + sum_total: + sum_bucket: + buckets_path: "keywords>sum_derived_longs" +- match: { hits.total.value: 5 } +- match: { aggregations.keywords.buckets.0.key: "bar" } +- match: { aggregations.keywords.buckets.0.sum_derived_longs.value: 2 } +- match: { aggregations.keywords.buckets.1.key: "baz" } +- match: { aggregations.keywords.buckets.1.sum_derived_longs.value: -3 } +- match: { aggregations.keywords.buckets.2.key: "foo" } +- match: { aggregations.keywords.buckets.2.sum_derived_longs.value: 1 } +- match: { aggregations.keywords.buckets.3.key: "quux" } +- match: { aggregations.keywords.buckets.3.sum_derived_longs.value: 5 } +- match: { aggregations.keywords.buckets.4.key: "qux" } +- match: { aggregations.keywords.buckets.4.sum_derived_longs.value: 4 } +- match: { aggregations.sum_total.value: 9 } + + +--- +"Test composite agg on derived_object": +- do: + search: + index: test + body: + size: 0 + aggs: + test_composite_agg: + composite: + size: 10 + sources: + - os: + terms: + field: derived_object.os + - keyword: + terms: + field: derived_object.keyword + - is_true: + terms: + field: derived_object.boolean + aggs: + avg_long: + avg: + field: derived_object.long +- length: { 
aggregations.test_composite_agg.buckets: 5 } +- match: { aggregations.test_composite_agg.buckets.0.key.os: "mac" } +- match: { aggregations.test_composite_agg.buckets.0.key.keyword: "baz" } +- match: { aggregations.test_composite_agg.buckets.0.key.is_true: true } +- match: { aggregations.test_composite_agg.buckets.0.doc_count: 1 } +- match: { aggregations.test_composite_agg.buckets.0.avg_long.value: -3.0 } +- match: { aggregations.test_composite_agg.buckets.1.key.os: "mac" } +- match: { aggregations.test_composite_agg.buckets.1.key.keyword: "foo" } +- match: { aggregations.test_composite_agg.buckets.1.key.is_true: true } +- match: { aggregations.test_composite_agg.buckets.1.doc_count: 1 } +- match: { aggregations.test_composite_agg.buckets.1.avg_long.value: 1.0 } +- match: { aggregations.test_composite_agg.buckets.2.key.os: "mac" } +- match: { aggregations.test_composite_agg.buckets.2.key.keyword: "quux" } +- match: { aggregations.test_composite_agg.buckets.2.key.is_true: true } +- match: { aggregations.test_composite_agg.buckets.2.doc_count: 1 } +- match: { aggregations.test_composite_agg.buckets.2.avg_long.value: 5.0 } +- match: { aggregations.test_composite_agg.buckets.3.key.os: "windows" } +- match: { aggregations.test_composite_agg.buckets.3.key.keyword: "bar" } +- match: { aggregations.test_composite_agg.buckets.3.key.is_true: false } +- match: { aggregations.test_composite_agg.buckets.3.doc_count: 1 } +- match: { aggregations.test_composite_agg.buckets.3.avg_long.value: 2.0 } +- match: { aggregations.test_composite_agg.buckets.4.key.os: "windows" } +- match: { aggregations.test_composite_agg.buckets.4.key.keyword: "qux" } +- match: { aggregations.test_composite_agg.buckets.4.key.is_true: false } +- match: { aggregations.test_composite_agg.buckets.4.doc_count: 1 } +- match: { aggregations.test_composite_agg.buckets.4.avg_long.value: 4.0 } + +--- +"Test auto date histogram on derived_object": +- do: + search: + rest_total_hits_as_int: true + index: test + body: + size: 0 + aggs: + test_auto_date_histogram: + auto_date_histogram: + field: "derived_object.date" + buckets: 10 + format: "yyyy-MM-dd" + aggs: + avg_long: + avg: + field: derived_object.long +- match: { hits.total: 5 } +- length: { aggregations.test_auto_date_histogram.buckets: 9 } +- match: { aggregations.test_auto_date_histogram.buckets.0.key_as_string: "2017-01-01"} +- match: { aggregations.test_auto_date_histogram.buckets.0.avg_long.value: 1.0} + +--- +"Test variable_width_histogram aggregation on derived_object": +- do: + search: + index: test + body: + size: 0 + aggs: + var_width_hist: + variable_width_histogram: + field: derived_object.long + buckets: 3 + +- match: { hits.total.value: 5 } +- length: { aggregations.var_width_hist.buckets: 3 } + +--- +"Test extended_stats aggregation on derived_object": +- do: + search: + index: test + body: + size: 0 + aggs: + extended_stats_agg: + extended_stats: + field: derived_object.long + +- match: { hits.total.value: 5 } +- match: { aggregations.extended_stats_agg.count: 5 } +- match: { aggregations.extended_stats_agg.min: -3 } +- match: { aggregations.extended_stats_agg.max: 5 } +- is_true: aggregations.extended_stats_agg.avg +- is_true: aggregations.extended_stats_agg.sum +- is_true: aggregations.extended_stats_agg.sum_of_squares +- is_true: aggregations.extended_stats_agg.variance +- is_true: aggregations.extended_stats_agg.std_deviation + +--- +"Test rare_terms aggregation on derived_object": +- do: + search: + index: test + body: + size: 0 + aggs: + rare_terms_agg: + 
rare_terms: + field: derived_object.keyword + max_doc_count: 1 + +- match: { hits.total.value: 5 } +- length: { aggregations.rare_terms_agg.buckets: 5 } + +--- +"Test global aggregation on derived_object": +- do: + search: + index: test + body: + query: + term: + derived_object.keyword: "foo" + aggs: + all_docs: + global: {} + aggs: + avg_long: + avg: + field: derived_object.long + +- match: { hits.total.value: 1 } +- match: { aggregations.all_docs.doc_count: 5 } +- match: { aggregations.all_docs.avg_long.value: 1.8 } + +--- +"Test value_count aggregation on derived_object": +- do: + search: + index: test + body: + size: 0 + aggs: + value_count_agg: + value_count: + field: derived_object.long + +- match: { hits.total.value: 5 } +- match: { aggregations.value_count_agg.value: 5 } + +--- +"Test multi_terms aggregation on derived_object": +- do: + search: + index: test + body: + size: 0 + aggs: + multi_terms_agg: + multi_terms: + terms: + - field: derived_object.keyword + - field: derived_object.os + size: 10 + +- match: { hits.total.value: 5 } +- length: { aggregations.multi_terms_agg.buckets: 5 } + + +### IP specific tests +--- +"Test terms aggregation on derived_object_ip": +- do: + search: + index: test + body: + size: 0 + aggs: + ip_terms: + terms: + field: derived_object.ip + +- match: { hits.total.value: 5 } +- length: { aggregations.ip_terms.buckets: 5 } +- match: { aggregations.ip_terms.buckets.0.key: "10.0.0.1" } +- match: { aggregations.ip_terms.buckets.0.doc_count: 1 } + +--- +"Test range aggregation on derived_object_ip": +- do: + search: + index: test + body: + size: 0 + aggs: + ip_ranges: + ip_range: + field: derived_object.ip + ranges: + - to: "10.0.0.0" + - from: "10.0.0.0" + to: "172.16.0.0" + - from: "172.16.0.0" + +- match: { hits.total.value: 5 } +- length: { aggregations.ip_ranges.buckets: 3 } +- match: { aggregations.ip_ranges.buckets.0.doc_count: 0 } +- match: { aggregations.ip_ranges.buckets.1.doc_count: 2 } +- match: { aggregations.ip_ranges.buckets.2.doc_count: 3 } + +--- +"Test cardinality aggregation on derived_object_ip": +- do: + search: + index: test + body: + size: 0 + aggs: + unique_ips: + cardinality: + field: derived_object.ip + +- match: { hits.total.value: 5 } +- match: { aggregations.unique_ips.value: 5 } + +--- +"Test missing aggregation on derived_object_ip": +- do: + search: + index: test + body: + size: 0 + aggs: + missing_ips: + missing: + field: derived_object.ip + +- match: { hits.total.value: 5 } +- match: { aggregations.missing_ips.doc_count: 0 } + +--- +"Test value count aggregation on derived_object_ip": +- do: + search: + index: test + body: + size: 0 + aggs: + ip_count: + value_count: + field: derived_object.ip + +- match: { hits.total.value: 5 } +- match: { aggregations.ip_count.value: 5 } + +### TEST UNSUPPORTED AGG TYPES +--- +"Test sig terms not supported": +- do: + catch: /illegal_argument_exception/ + search: + rest_total_hits_as_int: true + index: test + body: + query: + terms: + derived_keyword: ["foo"] + aggs: + significant_os: + significant_terms: + field: "derived_os" + min_doc_count: 1 + size: 10 + +--- +"Test significant text": +- do: + catch: /illegal_argument_exception/ + search: + rest_total_hits_as_int: true + index: test + body: + query: + terms: + derived_keyword: ["foo"] + aggs: + significant_words: + significant_text: + field: "derived_text" + size: 10 + min_doc_count: 1 + +--- +"Test scripted_metric aggregation": +- do: + catch: /A document doesn't have a value for a field/ + search: + index: test + body: + size: 0 + 
aggs: + scripted_metric_agg: + scripted_metric: + init_script: "state.arr = []" + map_script: "state.arr.add(doc.derived_long.value)" + combine_script: "return 0" + reduce_script: "return 0" + +--- +"Test geo_distance aggregation on derived_geo": +- do: + catch: /aggregation_execution_exception/ + search: + index: test + rest_total_hits_as_int: true + body: + size: 0 + aggs: + distance: + geo_distance: + field: derived_geo + origin: "35.7796, -78.6382" + ranges: + - to: 1000000 + - from: 1000000 + to: 5000000 + - from: 5000000 diff --git a/server/src/main/java/org/opensearch/index/mapper/DerivedFieldType.java b/server/src/main/java/org/opensearch/index/mapper/DerivedFieldType.java index f0200e72c3bc2..e230e37e6d826 100644 --- a/server/src/main/java/org/opensearch/index/mapper/DerivedFieldType.java +++ b/server/src/main/java/org/opensearch/index/mapper/DerivedFieldType.java @@ -9,38 +9,50 @@ package org.opensearch.index.mapper; import org.apache.lucene.analysis.TokenStream; +import org.apache.lucene.document.InetAddressPoint; import org.apache.lucene.index.IndexableField; +import org.apache.lucene.index.LeafReaderContext; import org.apache.lucene.queries.spans.SpanMultiTermQueryWrapper; import org.apache.lucene.queries.spans.SpanQuery; import org.apache.lucene.search.BooleanClause; import org.apache.lucene.search.BooleanQuery; import org.apache.lucene.search.MultiTermQuery; import org.apache.lucene.search.Query; +import org.apache.lucene.util.BytesRef; import org.opensearch.common.Nullable; import org.opensearch.common.geo.ShapeRelation; +import org.opensearch.common.network.InetAddresses; import org.opensearch.common.time.DateFormatter; import org.opensearch.common.time.DateMathParser; import org.opensearch.common.unit.Fuzziness; import org.opensearch.geometry.Geometry; import org.opensearch.index.analysis.IndexAnalyzers; import org.opensearch.index.analysis.NamedAnalyzer; +import org.opensearch.index.fielddata.IndexFieldData; import org.opensearch.index.query.DerivedFieldQuery; import org.opensearch.index.query.QueryShardContext; +import org.opensearch.script.AggregationScript; import org.opensearch.script.DerivedFieldScript; import org.opensearch.script.Script; +import org.opensearch.search.DocValueFormat; +import org.opensearch.search.lookup.LeafSearchLookup; import org.opensearch.search.lookup.SearchLookup; import java.io.IOException; +import java.net.InetAddress; import java.time.ZoneId; import java.util.Collections; import java.util.List; import java.util.Map; import java.util.Optional; import java.util.function.Function; +import java.util.function.Supplier; +import java.util.stream.Collectors; /** * MappedFieldType for Derived Fields * Contains logic to execute different type of queries on a derived field of given type. 
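+ * For aggregations, this type delegates doc value formatting and field data
+ * building to the underlying type's field mapper, and exposes an
+ * {@link AggregationScript} that recomputes values from _source via the
+ * derived field script, so derived fields resolve through the script-backed
+ * values source path.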
+ * * @opensearch.internal */ @@ -49,6 +61,11 @@ public class DerivedFieldType extends MappedFieldType implements GeoShapeQueryab final FieldMapper typeFieldMapper; final Function indexableFieldGenerator; + @Override + public DocValueFormat docValueFormat(String format, ZoneId timeZone) { + return typeFieldMapper.mappedFieldType.docValueFormat(format, timeZone); + } + public DerivedFieldType( DerivedField derivedField, boolean isIndexed, @@ -134,6 +151,11 @@ public DerivedFieldValueFetcher valueFetcher(QueryShardContext context, SearchLo ); } + @Override + public IndexFieldData.Builder fielddataBuilder(String fullyQualifiedIndexName, Supplier searchLookup) { + return getFieldMapper().mappedFieldType.fielddataBuilder(fullyQualifiedIndexName, searchLookup); + } + @Override public Query termQuery(Object value, QueryShardContext context) { Query query = typeFieldMapper.mappedFieldType.termQuery(value, context); @@ -503,7 +525,7 @@ public Query existsQuery(QueryShardContext context) { @Override public boolean isAggregatable() { - return false; + return true; } private Query createConjuctionQuery(Query filterQuery, DerivedFieldQuery derivedFieldQuery) { @@ -529,4 +551,55 @@ public static DerivedFieldScript.LeafFactory getDerivedFieldLeafFactory( DerivedFieldScript.Factory factory = context.compile(script, DerivedFieldScript.CONTEXT); return factory.newFactory(script.getParams(), searchLookup); } + + public AggregationScript.LeafFactory getAggregationScript(QueryShardContext context) { + return new AggregationScript.LeafFactory() { + @Override + public AggregationScript newInstance(LeafReaderContext ctx) throws IOException { + final DerivedFieldValueFetcher derivedFieldValueFetcher = valueFetcher(context, context.lookup(), null); + derivedFieldValueFetcher.setNextReader(ctx); + final LeafSearchLookup leafSearchLookup = context.lookup().getLeafSearchLookup(ctx); + + return new AggregationScript(derivedField.getScript().getParams(), context.lookup(), ctx) { + @Override + public Object execute() { + return formatValues(derivedFieldValueFetcher.fetchValuesInternal(leafSearchLookup.source())); + } + + @Override + public void setDocument(int docid) { + super.setDocument(docid); + leafSearchLookup.source().setSegmentAndDocument(ctx, docid); + } + }; + } + + @Override + public boolean needs_score() { + return false; + } + }; + } + + // perform any formatting on the returned Object before passing to + // any values source. + private Object formatValues(List objects) { + // ips are returned as raw strings, format them as BytesRefs + // This ensures that ip_range aggs compare the bytesRef against ranges computed in the + // same way. + if (typeFieldMapper instanceof IpFieldMapper) { + return objects.stream().map(o -> (String) o).map(this::toBytesRef).collect(Collectors.toList()); + } + return objects; + } + + // format the ip string as BytesRef. 
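+    // Lucene's InetAddressPoint.encode() normalizes every address (including
+    // IPv4) to a 16-byte form, the same encoding IpFieldMapper uses for
+    // indexed values and range endpoints, so lexicographic BytesRef
+    // comparisons in ip_range/terms aggs stay consistent.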
+ private BytesRef toBytesRef(String ip) { + if (ip == null) { + return null; + } + InetAddress address = InetAddresses.forString(ip); + byte[] bytes = InetAddressPoint.encode(address); + return new BytesRef(bytes); + } } diff --git a/server/src/main/java/org/opensearch/index/mapper/ObjectDerivedFieldType.java b/server/src/main/java/org/opensearch/index/mapper/ObjectDerivedFieldType.java index 7e5c9a3f3da93..3d0165f702fda 100644 --- a/server/src/main/java/org/opensearch/index/mapper/ObjectDerivedFieldType.java +++ b/server/src/main/java/org/opensearch/index/mapper/ObjectDerivedFieldType.java @@ -91,21 +91,22 @@ public DerivedFieldValueFetcher valueFetcher(QueryShardContext context, SearchLo derivedField.getFormat() != null ? DateFormatter.forPattern(derivedField.getFormat()) : null ); - Function valueForDisplayUpdated = derivedField.getType().equals(DerivedFieldSupportedTypes.DATE.getName()) ? (o -> { + Function dateFormatter = derivedField.getType().equals(DerivedFieldSupportedTypes.DATE.getName()) ? (o -> { // this is needed to support date type for nested fields as they are required to be converted to long if (o instanceof String) { - return valueForDisplay.apply(((DateFieldMapper) typeFieldMapper).fieldType().parse((String) o)); + return ((DateFieldMapper) typeFieldMapper).fieldType().parse((String) o); } else { - return valueForDisplay.apply(o); + return o; } - }) : valueForDisplay; + }) : null; String subFieldName = name().substring(name().indexOf(".") + 1); return new ObjectDerivedFieldValueFetcher( subFieldName, getDerivedFieldLeafFactory(derivedField.getScript(), context, searchLookup == null ? context.lookup() : searchLookup), - valueForDisplayUpdated, - derivedField.getIgnoreMalformed() + valueForDisplay, + derivedField.getIgnoreMalformed(), + dateFormatter ); } @@ -115,6 +116,8 @@ static class ObjectDerivedFieldValueFetcher extends DerivedFieldValueFetcher { // TODO add it as part of index setting? private final boolean ignoreOnMalFormed; + private final Function dateFormatter; + ObjectDerivedFieldValueFetcher( String subField, DerivedFieldScript.LeafFactory derivedFieldScriptFactory, @@ -124,6 +127,20 @@ static class ObjectDerivedFieldValueFetcher extends DerivedFieldValueFetcher { super(derivedFieldScriptFactory, valueForDisplay); this.subField = subField; this.ignoreOnMalFormed = ignoreOnMalFormed; + this.dateFormatter = null; + } + + ObjectDerivedFieldValueFetcher( + String subField, + DerivedFieldScript.LeafFactory derivedFieldScriptFactory, + Function valueForDisplay, + boolean ignoreOnMalFormed, + Function dateFormatter + ) { + super(derivedFieldScriptFactory, valueForDisplay); + this.subField = subField; + this.ignoreOnMalFormed = ignoreOnMalFormed; + this.dateFormatter = dateFormatter; } @Override @@ -140,7 +157,7 @@ public List fetchValuesInternal(SourceLookup lookup) { if (nestedFieldObj instanceof List) { result.addAll((List) nestedFieldObj); } else { - result.add(nestedFieldObj); + result.add(dateFormatter != null ? 
dateFormatter.apply(nestedFieldObj) : nestedFieldObj); } } catch (OpenSearchParseException e) { if (!ignoreOnMalFormed) { diff --git a/server/src/main/java/org/opensearch/search/aggregations/support/ValuesSourceConfig.java b/server/src/main/java/org/opensearch/search/aggregations/support/ValuesSourceConfig.java index d006b15df327c..b6c8fe5d4802c 100644 --- a/server/src/main/java/org/opensearch/search/aggregations/support/ValuesSourceConfig.java +++ b/server/src/main/java/org/opensearch/search/aggregations/support/ValuesSourceConfig.java @@ -36,6 +36,7 @@ import org.opensearch.index.fielddata.IndexFieldData; import org.opensearch.index.fielddata.IndexGeoPointFieldData; import org.opensearch.index.fielddata.IndexNumericFieldData; +import org.opensearch.index.mapper.DerivedFieldType; import org.opensearch.index.mapper.MappedFieldType; import org.opensearch.index.mapper.RangeFieldMapper; import org.opensearch.index.query.QueryShardContext; @@ -183,6 +184,12 @@ private static ValuesSourceConfig internalResolve( valuesSourceType = defaultValueSourceType; } DocValueFormat docValueFormat = resolveFormat(format, valuesSourceType, timeZone, fieldType); + + // If we are aggregating on derived field set the agg script. + if (fieldType instanceof DerivedFieldType) { + aggregationScript = ((DerivedFieldType) fieldType).getAggregationScript(context); + } + config = new ValuesSourceConfig( valuesSourceType, fieldContext, @@ -336,7 +343,7 @@ private ValuesSource ConstructValuesSource(Object missing, DocValueFormat format if (this.unmapped) { vs = valueSourceType().getEmpty(); } else { - if (fieldContext() == null) { + if (fieldContext() == null || fieldType() instanceof DerivedFieldType) { // Script case vs = valueSourceType().getScript(script(), scriptValueType()); } else { diff --git a/server/src/main/java/org/opensearch/search/aggregations/support/values/ScriptBytesValues.java b/server/src/main/java/org/opensearch/search/aggregations/support/values/ScriptBytesValues.java index 349bd8e14edf6..30f7494ea2d18 100644 --- a/server/src/main/java/org/opensearch/search/aggregations/support/values/ScriptBytesValues.java +++ b/server/src/main/java/org/opensearch/search/aggregations/support/values/ScriptBytesValues.java @@ -32,6 +32,7 @@ package org.opensearch.search.aggregations.support.values; import org.apache.lucene.search.Scorable; +import org.apache.lucene.util.BytesRef; import org.opensearch.common.lucene.ScorerAware; import org.opensearch.core.common.util.CollectionUtils; import org.opensearch.index.fielddata.SortedBinaryDocValues; @@ -61,7 +62,11 @@ private void set(int i, Object o) { values[i].clear(); } else { CollectionUtils.ensureNoSelfReferences(o, "ScriptBytesValues value"); - values[i].copyChars(o.toString()); + if (o instanceof BytesRef) { + values[i].copyBytes((BytesRef) o); + } else { + values[i].copyChars(o.toString()); + } } } diff --git a/server/src/test/java/org/opensearch/index/mapper/DerivedFieldTypeTests.java b/server/src/test/java/org/opensearch/index/mapper/DerivedFieldTypeTests.java index f65acd0db0627..fe9db24f494ad 100644 --- a/server/src/test/java/org/opensearch/index/mapper/DerivedFieldTypeTests.java +++ b/server/src/test/java/org/opensearch/index/mapper/DerivedFieldTypeTests.java @@ -15,14 +15,30 @@ import org.apache.lucene.document.LatLonPoint; import org.apache.lucene.document.LongField; import org.apache.lucene.document.LongPoint; +import org.apache.lucene.index.LeafReaderContext; +import org.apache.lucene.index.memory.MemoryIndex; +import org.apache.lucene.util.BytesRef; import 
org.opensearch.OpenSearchException; import org.opensearch.common.collect.Tuple; +import org.opensearch.common.network.InetAddresses; +import org.opensearch.index.query.QueryShardContext; +import org.opensearch.script.AggregationScript; import org.opensearch.script.Script; +import org.opensearch.search.lookup.LeafSearchLookup; +import org.opensearch.search.lookup.SearchLookup; +import org.opensearch.search.lookup.SourceLookup; +import java.io.IOException; import java.util.List; import static org.apache.lucene.index.IndexOptions.NONE; +import static org.mockito.Mockito.any; +import static org.mockito.Mockito.doReturn; +import static org.mockito.Mockito.eq; import static org.mockito.Mockito.mock; +import static org.mockito.Mockito.spy; +import static org.mockito.Mockito.times; +import static org.mockito.Mockito.verify; import static org.mockito.Mockito.when; public class DerivedFieldTypeTests extends FieldTypeTestCase { @@ -100,4 +116,57 @@ public void testObjectType() { public void testUnsupportedType() { expectThrows(IllegalArgumentException.class, () -> createDerivedFieldType("match_only_text")); } + + public void testGetAggregationScript_keyword() throws IOException { + DerivedFieldType dft = spy(createDerivedFieldType("keyword")); + assertTrue(dft.isAggregatable()); + QueryShardContext mockContext = mock(QueryShardContext.class); + List expected = List.of("foo"); + mockValueFetcherForAggs(mockContext, dft, expected); + + AggregationScript.LeafFactory aggregationScript = dft.getAggregationScript(mockContext); + // have to use a memoryIndex because we can't mock leafReaderContext + MemoryIndex index = new MemoryIndex(); + LeafReaderContext leafReaderContext = index.createSearcher().getIndexReader().leaves().get(0); + AggregationScript script = aggregationScript.newInstance(leafReaderContext); + + Object result = script.execute(); + assertEquals(expected, result); + } + + public void testGetAggregationScript_ip() throws IOException { + DerivedFieldType dft = spy(createDerivedFieldType("ip")); + assertTrue(dft.isAggregatable()); + QueryShardContext mockContext = mock(QueryShardContext.class); + List expected = List.of("192.168.0.1"); + LeafSearchLookup leafSearchLookup = mockValueFetcherForAggs(mockContext, dft, expected); + SourceLookup sourceLookup = mock(SourceLookup.class); + when(leafSearchLookup.source()).thenReturn(sourceLookup); + AggregationScript.LeafFactory aggregationScript = dft.getAggregationScript(mockContext); + assertFalse(aggregationScript.needs_score()); + // have to use a memoryIndex because we can't mock leafReaderContext + MemoryIndex index = new MemoryIndex(); + LeafReaderContext leafReaderContext = index.createSearcher().getIndexReader().leaves().get(0); + AggregationScript script = aggregationScript.newInstance(leafReaderContext); + + // test setDocument + int docid = 1; + script.setDocument(docid); + verify(sourceLookup, times(1)).setSegmentAndDocument(any(), eq(docid)); + + // test execute + List result = (List) script.execute(); + assertEquals(new BytesRef(InetAddressPoint.encode(InetAddresses.forString((String) expected.get(0)))), result.get(0)); + } + + private static LeafSearchLookup mockValueFetcherForAggs(QueryShardContext mockContext, DerivedFieldType dft, List expected) { + SearchLookup searchLookup = mock(SearchLookup.class); + LeafSearchLookup leafLookup = mock(LeafSearchLookup.class); + when(searchLookup.getLeafSearchLookup(any())).thenReturn(leafLookup); + when(mockContext.lookup()).thenReturn(searchLookup); + DerivedFieldValueFetcher valueFetcher = 
mock(DerivedFieldValueFetcher.class); + when(valueFetcher.fetchValuesInternal(any())).thenReturn(expected); + doReturn(valueFetcher).when(dft).valueFetcher(any(), any(), any()); + return leafLookup; + } } diff --git a/server/src/test/java/org/opensearch/search/aggregations/bucket/terms/DerivedFieldAggregationTests.java b/server/src/test/java/org/opensearch/search/aggregations/bucket/terms/DerivedFieldAggregationTests.java new file mode 100644 index 0000000000000..2fb65d7fe3c46 --- /dev/null +++ b/server/src/test/java/org/opensearch/search/aggregations/bucket/terms/DerivedFieldAggregationTests.java @@ -0,0 +1,146 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +package org.opensearch.search.aggregations.bucket.terms; + +import org.apache.lucene.document.Document; +import org.apache.lucene.document.Field; +import org.apache.lucene.document.KeywordField; +import org.apache.lucene.document.TextField; +import org.apache.lucene.search.MatchAllDocsQuery; +import org.opensearch.Version; +import org.opensearch.cluster.metadata.IndexMetadata; +import org.opensearch.common.settings.Settings; +import org.opensearch.core.index.Index; +import org.opensearch.index.IndexSettings; +import org.opensearch.index.mapper.DerivedField; +import org.opensearch.index.mapper.DerivedFieldResolver; +import org.opensearch.index.mapper.DerivedFieldResolverFactory; +import org.opensearch.index.mapper.DerivedFieldType; +import org.opensearch.index.mapper.DerivedFieldValueFetcher; +import org.opensearch.index.mapper.KeywordFieldMapper; +import org.opensearch.index.mapper.MappedFieldType; +import org.opensearch.index.mapper.MapperService; +import org.opensearch.index.query.QueryShardContext; +import org.opensearch.script.DerivedFieldScript; +import org.opensearch.script.Script; +import org.opensearch.search.aggregations.AggregatorTestCase; +import org.opensearch.search.lookup.LeafSearchLookup; +import org.opensearch.search.lookup.SearchLookup; +import org.opensearch.search.lookup.SourceLookup; +import org.junit.Before; + +import java.io.IOException; +import java.util.ArrayList; +import java.util.Collections; +import java.util.List; + +import static org.mockito.Mockito.any; +import static org.mockito.Mockito.doReturn; +import static org.mockito.Mockito.mock; +import static org.mockito.Mockito.spy; +import static org.mockito.Mockito.when; + +public class DerivedFieldAggregationTests extends AggregatorTestCase { + + private QueryShardContext mockContext; + private List docs; + + private static final String[][] raw_requests = new String[][] { + { "40.135.0.0 GET /images/hm_bg.jpg HTTP/1.0", "200", "40.135.0.0" }, + { "232.0.0.0 GET /images/hm_bg.jpg HTTP/1.0", "400", "232.0.0.0" }, + { "26.1.0.0 GET /images/hm_bg.jpg HTTP/1.0", "200", "26.1.0.0" }, + { "247.37.0.0 GET /french/splash_inet.html HTTP/1.0", "400", "247.37.0.0" }, + { "247.37.0.0 GET /french/splash_inet.html HTTP/1.0", "400", "247.37.0.0" }, + { "247.37.0.0 GET /french/splash_inet.html HTTP/1.0", "200", "247.37.0.0" } }; + + @Before + public void init() { + super.initValuesSourceRegistry(); + // Create a mock QueryShardContext + mockContext = mock(QueryShardContext.class); + when(mockContext.index()).thenReturn(new Index("test_index", "uuid")); + when(mockContext.allowExpensiveQueries()).thenReturn(true); + + MapperService mockMapperService = mock(MapperService.class); + 
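+        // Wire the mocked MapperService and a concrete IndexSettings into the mocked
+        // context; the derived-field resolution in the test below reads them from there.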
when(mockContext.getMapperService()).thenReturn(mockMapperService); + Settings indexSettings = Settings.builder() + .put(IndexMetadata.SETTING_VERSION_CREATED, Version.CURRENT) + .put(IndexMetadata.SETTING_NUMBER_OF_SHARDS, 1) + .put(IndexMetadata.SETTING_NUMBER_OF_REPLICAS, 1) + .build(); + // Mock IndexSettings + IndexSettings mockIndexSettings = new IndexSettings( + IndexMetadata.builder("test_index").settings(indexSettings).build(), + Settings.EMPTY + ); + when(mockMapperService.getIndexSettings()).thenReturn(mockIndexSettings); + when(mockContext.getIndexSettings()).thenReturn(mockIndexSettings); + docs = new ArrayList<>(); + for (String[] request : raw_requests) { + Document document = new Document(); + document.add(new TextField("raw_request", request[0], Field.Store.YES)); + document.add(new KeywordField("status", request[1], Field.Store.YES)); + docs.add(document); + } + } + + public void testSimpleTermsAggregationWithDerivedField() throws IOException { + MappedFieldType keywordFieldType = new KeywordFieldMapper.KeywordFieldType("status"); + + SearchLookup searchLookup = mock(SearchLookup.class); + SourceLookup sourceLookup = new SourceLookup(); + LeafSearchLookup leafLookup = mock(LeafSearchLookup.class); + when(leafLookup.source()).thenReturn(sourceLookup); + + // Mock DerivedFieldScript.Factory + DerivedFieldScript.Factory factory = (params, lookup) -> (DerivedFieldScript.LeafFactory) ctx -> { + when(searchLookup.getLeafSearchLookup(any())).thenReturn(leafLookup); + return new DerivedFieldScript(params, lookup, ctx) { + @Override + public void execute() { + addEmittedValue(raw_requests[sourceLookup.docId()][1]); + } + + @Override + public void setDocument(int docid) { + sourceLookup.setSegmentAndDocument(ctx, docid); + } + }; + }; + + DerivedField derivedField = new DerivedField("derived_field", "keyword", new Script("")); + DerivedFieldResolver resolver = DerivedFieldResolverFactory.createResolver( + mockContext, + Collections.emptyMap(), + Collections.singletonList(derivedField), + true + ); + + // spy on the resolved type so we can mock the valuefetcher + DerivedFieldType derivedFieldType = spy((DerivedFieldType) resolver.resolve("derived_field")); + DerivedFieldScript.LeafFactory leafFactory = factory.newFactory((new Script("")).getParams(), searchLookup); + DerivedFieldValueFetcher valueFetcher = new DerivedFieldValueFetcher(leafFactory, null); + doReturn(valueFetcher).when(derivedFieldType).valueFetcher(any(), any(), any()); + + TermsAggregationBuilder aggregationBuilder = new TermsAggregationBuilder("derived_terms").field("status").size(10); + + testCase(aggregationBuilder, new MatchAllDocsQuery(), iw -> { + for (Document d : docs) { + iw.addDocument(d); + } + }, (InternalTerms result) -> { + assertEquals(2, result.getBuckets().size()); + List buckets = result.getBuckets(); + assertEquals("200", buckets.get(0).getKey()); + assertEquals(3, buckets.get(0).getDocCount()); + assertEquals("400", buckets.get(1).getKey()); + assertEquals(3, buckets.get(1).getDocCount()); + }, keywordFieldType, derivedFieldType); + } +} diff --git a/server/src/test/java/org/opensearch/search/aggregations/support/ValuesSourceConfigTests.java b/server/src/test/java/org/opensearch/search/aggregations/support/ValuesSourceConfigTests.java index 33d9a63f61a35..568c3c950f588 100644 --- a/server/src/test/java/org/opensearch/search/aggregations/support/ValuesSourceConfigTests.java +++ b/server/src/test/java/org/opensearch/search/aggregations/support/ValuesSourceConfigTests.java @@ -37,9 +37,12 @@ import 
org.apache.lucene.util.BytesRef; import org.opensearch.action.support.WriteRequest; import org.opensearch.common.settings.Settings; +import org.opensearch.common.xcontent.XContentFactory; +import org.opensearch.core.xcontent.XContentBuilder; import org.opensearch.index.IndexService; import org.opensearch.index.engine.Engine; import org.opensearch.index.fielddata.SortedBinaryDocValues; +import org.opensearch.index.mapper.MapperService; import org.opensearch.index.query.QueryShardContext; import org.opensearch.test.OpenSearchSingleNodeTestCase; @@ -334,4 +337,40 @@ public void testFieldAlias() throws Exception { assertEquals(new BytesRef("value"), values.nextValue()); } } + + public void testDerivedField() throws Exception { + String script = "derived_field_script"; + String derived_field = "derived_keyword"; + + XContentBuilder mapping = XContentFactory.jsonBuilder() + .startObject() + .startObject("derived") + .startObject(derived_field) + .field("type", "keyword") + .startObject("script") + .field("source", script) + .field("lang", "mockscript") + .endObject() + .endObject() + .endObject() + .endObject(); + IndexService indexService = createIndex("index", Settings.EMPTY, MapperService.SINGLE_MAPPING_NAME, mapping); + client().prepareIndex("index").setId("1").setSource("field", "value").setRefreshPolicy(WriteRequest.RefreshPolicy.IMMEDIATE).get(); + + try (Engine.Searcher searcher = indexService.getShard(0).acquireSearcher("test")) { + QueryShardContext context = indexService.newQueryShardContext(0, searcher, () -> 42L, null); + ValuesSourceConfig config = ValuesSourceConfig.resolve( + context, + null, + derived_field, + null, + null, + null, + null, + CoreValuesSourceType.BYTES + ); + assertNotNull(script); + assertEquals(ValuesSource.Bytes.Script.class, config.getValuesSource().getClass()); + } + } } From 6dbb079ba85ad14e261f0a402a118992482a848e Mon Sep 17 00:00:00 2001 From: David Zane <38449481+dzane17@users.noreply.github.com> Date: Mon, 29 Jul 2024 14:16:39 -0700 Subject: [PATCH 131/167] Remove mmap.extensions setting (#9392) * Remove mmap.extensions setting in favor of nio.extensions Signed-off-by: David Zane * Update CHANGELOG-3.0.md Co-authored-by: Andriy Redko Signed-off-by: David Zane <38449481+dzane17@users.noreply.github.com> --------- Signed-off-by: David Zane Signed-off-by: David Zane <38449481+dzane17@users.noreply.github.com> Co-authored-by: Andriy Redko --- CHANGELOG-3.0.md | 1 + .../common/settings/IndexScopedSettings.java | 1 - .../org/opensearch/index/IndexModule.java | 77 +----------------- .../index/store/FsDirectoryFactory.java | 19 +---- .../index/store/FsDirectoryFactoryTests.java | 79 +------------------ 5 files changed, 5 insertions(+), 172 deletions(-) diff --git a/CHANGELOG-3.0.md b/CHANGELOG-3.0.md index 06b761b1df8bd..48d978bede420 100644 --- a/CHANGELOG-3.0.md +++ b/CHANGELOG-3.0.md @@ -43,6 +43,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), - Remove LegacyESVersion.V_7_10_ Constants ([#5018](https://github.com/opensearch-project/OpenSearch/pull/5018)) - Remove Version.V_1_ Constants ([#5021](https://github.com/opensearch-project/OpenSearch/pull/5021)) - Remove custom Map, List and Set collection classes ([#6871](https://github.com/opensearch-project/OpenSearch/pull/6871)) +- Remove `index.store.hybrid.mmap.extensions` setting in favor of `index.store.hybrid.nio.extensions` setting ([#9392](https://github.com/opensearch-project/OpenSearch/pull/9392)) ### Fixed - Fix 
'org.apache.hc.core5.http.ParseException: Invalid protocol version' under JDK 16+ ([#4827](https://github.com/opensearch-project/OpenSearch/pull/4827)) diff --git a/server/src/main/java/org/opensearch/common/settings/IndexScopedSettings.java b/server/src/main/java/org/opensearch/common/settings/IndexScopedSettings.java index 6e7d77d0c00d4..a4d60bc76127c 100644 --- a/server/src/main/java/org/opensearch/common/settings/IndexScopedSettings.java +++ b/server/src/main/java/org/opensearch/common/settings/IndexScopedSettings.java @@ -191,7 +191,6 @@ public final class IndexScopedSettings extends AbstractScopedSettings { BitsetFilterCache.INDEX_LOAD_RANDOM_ACCESS_FILTERS_EAGERLY_SETTING, IndexModule.INDEX_STORE_TYPE_SETTING, IndexModule.INDEX_STORE_PRE_LOAD_SETTING, - IndexModule.INDEX_STORE_HYBRID_MMAP_EXTENSIONS, IndexModule.INDEX_STORE_HYBRID_NIO_EXTENSIONS, IndexModule.INDEX_RECOVERY_TYPE_SETTING, IndexModule.INDEX_QUERY_CACHE_ENABLED_SETTING, diff --git a/server/src/main/java/org/opensearch/index/IndexModule.java b/server/src/main/java/org/opensearch/index/IndexModule.java index 93ff1b78b1ac5..eab070e1c6c10 100644 --- a/server/src/main/java/org/opensearch/index/IndexModule.java +++ b/server/src/main/java/org/opensearch/index/IndexModule.java @@ -97,7 +97,6 @@ import java.util.Collections; import java.util.HashMap; import java.util.HashSet; -import java.util.Iterator; import java.util.List; import java.util.Map; import java.util.Objects; @@ -183,52 +182,7 @@ public final class IndexModule { Property.PrivateIndex ); - /** Which lucene file extensions to load with the mmap directory when using hybridfs store. This settings is ignored if {@link #INDEX_STORE_HYBRID_NIO_EXTENSIONS} is set. - * This is an expert setting. - * @see Lucene File Extensions. - * - * @deprecated This setting will be removed in OpenSearch 3.x. Use {@link #INDEX_STORE_HYBRID_NIO_EXTENSIONS} instead. - */ - @Deprecated - public static final Setting> INDEX_STORE_HYBRID_MMAP_EXTENSIONS = Setting.listSetting( - "index.store.hybrid.mmap.extensions", - List.of("nvd", "dvd", "tim", "tip", "dim", "kdd", "kdi", "cfs", "doc"), - Function.identity(), - new Setting.Validator>() { - - @Override - public void validate(final List value) {} - - @Override - public void validate(final List value, final Map, Object> settings) { - if (value.equals(INDEX_STORE_HYBRID_MMAP_EXTENSIONS.getDefault(Settings.EMPTY)) == false) { - final List nioExtensions = (List) settings.get(INDEX_STORE_HYBRID_NIO_EXTENSIONS); - final List defaultNioExtensions = INDEX_STORE_HYBRID_NIO_EXTENSIONS.getDefault(Settings.EMPTY); - if (nioExtensions.equals(defaultNioExtensions) == false) { - throw new IllegalArgumentException( - "Settings " - + INDEX_STORE_HYBRID_NIO_EXTENSIONS.getKey() - + " & " - + INDEX_STORE_HYBRID_MMAP_EXTENSIONS.getKey() - + " cannot both be set. Use " - + INDEX_STORE_HYBRID_NIO_EXTENSIONS.getKey() - + " only." - ); - } - } - } - - @Override - public Iterator> settings() { - return List.>of(INDEX_STORE_HYBRID_NIO_EXTENSIONS).iterator(); - } - }, - Property.IndexScope, - Property.NodeScope, - Property.Deprecated - ); - - /** Which lucene file extensions to load with nio. All others will default to mmap. Takes precedence over {@link #INDEX_STORE_HYBRID_MMAP_EXTENSIONS}. + /** Which lucene file extensions to load with nio. All others will default to mmap. * This is an expert setting. * @see Lucene File Extensions. 
*/ @@ -253,35 +207,6 @@ public Iterator> settings() { "vem" ), Function.identity(), - new Setting.Validator>() { - - @Override - public void validate(final List value) {} - - @Override - public void validate(final List value, final Map, Object> settings) { - if (value.equals(INDEX_STORE_HYBRID_NIO_EXTENSIONS.getDefault(Settings.EMPTY)) == false) { - final List mmapExtensions = (List) settings.get(INDEX_STORE_HYBRID_MMAP_EXTENSIONS); - final List defaultMmapExtensions = INDEX_STORE_HYBRID_MMAP_EXTENSIONS.getDefault(Settings.EMPTY); - if (mmapExtensions.equals(defaultMmapExtensions) == false) { - throw new IllegalArgumentException( - "Settings " - + INDEX_STORE_HYBRID_NIO_EXTENSIONS.getKey() - + " & " - + INDEX_STORE_HYBRID_MMAP_EXTENSIONS.getKey() - + " cannot both be set. Use " - + INDEX_STORE_HYBRID_NIO_EXTENSIONS.getKey() - + " only." - ); - } - } - } - - @Override - public Iterator> settings() { - return List.>of(INDEX_STORE_HYBRID_MMAP_EXTENSIONS).iterator(); - } - }, Property.IndexScope, Property.NodeScope ); diff --git a/server/src/main/java/org/opensearch/index/store/FsDirectoryFactory.java b/server/src/main/java/org/opensearch/index/store/FsDirectoryFactory.java index a46b641d1423f..c963f8aa95b8d 100644 --- a/server/src/main/java/org/opensearch/index/store/FsDirectoryFactory.java +++ b/server/src/main/java/org/opensearch/index/store/FsDirectoryFactory.java @@ -45,7 +45,6 @@ import org.apache.lucene.store.SimpleFSLockFactory; import org.opensearch.common.settings.Setting; import org.opensearch.common.settings.Setting.Property; -import org.opensearch.common.settings.Settings; import org.opensearch.common.util.io.IOUtils; import org.opensearch.index.IndexModule; import org.opensearch.index.IndexSettings; @@ -57,8 +56,6 @@ import java.nio.file.Path; import java.util.HashSet; import java.util.Set; -import java.util.stream.Collectors; -import java.util.stream.Stream; /** * Factory for a filesystem directory @@ -100,21 +97,7 @@ protected Directory newFSDirectory(Path location, LockFactory lockFactory, Index case HYBRIDFS: // Use Lucene defaults final FSDirectory primaryDirectory = FSDirectory.open(location, lockFactory); - final Set nioExtensions; - final Set mmapExtensions = Set.copyOf(indexSettings.getValue(IndexModule.INDEX_STORE_HYBRID_MMAP_EXTENSIONS)); - if (mmapExtensions.equals( - new HashSet(IndexModule.INDEX_STORE_HYBRID_MMAP_EXTENSIONS.getDefault(Settings.EMPTY)) - ) == false) { - // If the mmap extension setting was defined, then compute nio extensions by subtracting out the - // mmap extensions from the set of all extensions. 
- nioExtensions = Stream.concat( - IndexModule.INDEX_STORE_HYBRID_NIO_EXTENSIONS.getDefault(Settings.EMPTY).stream(), - IndexModule.INDEX_STORE_HYBRID_MMAP_EXTENSIONS.getDefault(Settings.EMPTY).stream() - ).filter(e -> mmapExtensions.contains(e) == false).collect(Collectors.toUnmodifiableSet()); - } else { - // Otherwise, get the list of nio extensions from the nio setting - nioExtensions = Set.copyOf(indexSettings.getValue(IndexModule.INDEX_STORE_HYBRID_NIO_EXTENSIONS)); - } + final Set nioExtensions = new HashSet<>(indexSettings.getValue(IndexModule.INDEX_STORE_HYBRID_NIO_EXTENSIONS)); if (primaryDirectory instanceof MMapDirectory) { MMapDirectory mMapDirectory = (MMapDirectory) primaryDirectory; return new HybridDirectory(lockFactory, setPreload(mMapDirectory, lockFactory, preLoadExtensions), nioExtensions); diff --git a/server/src/test/java/org/opensearch/index/store/FsDirectoryFactoryTests.java b/server/src/test/java/org/opensearch/index/store/FsDirectoryFactoryTests.java index 2fffebbcf5f1f..95113b7eeb370 100644 --- a/server/src/test/java/org/opensearch/index/store/FsDirectoryFactoryTests.java +++ b/server/src/test/java/org/opensearch/index/store/FsDirectoryFactoryTests.java @@ -96,7 +96,7 @@ public void testPreload() throws IOException { build = Settings.builder() .put(IndexModule.INDEX_STORE_TYPE_SETTING.getKey(), IndexModule.Type.HYBRIDFS.name().toLowerCase(Locale.ROOT)) .putList(IndexModule.INDEX_STORE_PRE_LOAD_SETTING.getKey(), "nvd", "dvd", "cfs") - .putList(IndexModule.INDEX_STORE_HYBRID_NIO_EXTENSIONS.getKey(), "tip", "dim", "kdd", "kdi", "cfs", "doc") + .putList(IndexModule.INDEX_STORE_HYBRID_NIO_EXTENSIONS.getKey(), "tip", "dim", "kdd", "kdi", "cfs", "doc", "new") .build(); try (Directory directory = newDirectory(build)) { assertTrue(FsDirectoryFactory.isHybridFs(directory)); @@ -108,7 +108,7 @@ public void testPreload() throws IOException { assertTrue(hybridDirectory.useDelegate("foo.tim")); assertTrue(hybridDirectory.useDelegate("foo.pos")); assertTrue(hybridDirectory.useDelegate("foo.pay")); - assertTrue(hybridDirectory.useDelegate("foo.new")); + assertFalse(hybridDirectory.useDelegate("foo.new")); assertFalse(hybridDirectory.useDelegate("foo.tip")); assertFalse(hybridDirectory.useDelegate("foo.dim")); assertFalse(hybridDirectory.useDelegate("foo.kdd")); @@ -123,63 +123,6 @@ public void testPreload() throws IOException { assertTrue(preLoadMMapDirectory.useDelegate("foo.cfs")); assertTrue(preLoadMMapDirectory.useDelegate("foo.nvd")); } - build = Settings.builder() - .put(IndexModule.INDEX_STORE_TYPE_SETTING.getKey(), IndexModule.Type.HYBRIDFS.name().toLowerCase(Locale.ROOT)) - .putList(IndexModule.INDEX_STORE_PRE_LOAD_SETTING.getKey(), "nvd", "dvd", "cfs") - .putList(IndexModule.INDEX_STORE_HYBRID_MMAP_EXTENSIONS.getKey(), "nvd", "dvd", "tim", "pos") - .build(); - try (Directory directory = newDirectory(build)) { - assertTrue(FsDirectoryFactory.isHybridFs(directory)); - FsDirectoryFactory.HybridDirectory hybridDirectory = (FsDirectoryFactory.HybridDirectory) directory; - // test custom hybrid mmap extensions - // true->mmap, false->nio - assertTrue(hybridDirectory.useDelegate("foo.nvd")); - assertTrue(hybridDirectory.useDelegate("foo.dvd")); - assertTrue(hybridDirectory.useDelegate("foo.tim")); - assertTrue(hybridDirectory.useDelegate("foo.pos")); - assertTrue(hybridDirectory.useDelegate("foo.new")); - assertFalse(hybridDirectory.useDelegate("foo.pay")); - assertFalse(hybridDirectory.useDelegate("foo.tip")); - assertFalse(hybridDirectory.useDelegate("foo.dim")); - 
assertFalse(hybridDirectory.useDelegate("foo.kdd")); - assertFalse(hybridDirectory.useDelegate("foo.kdi")); - assertFalse(hybridDirectory.useDelegate("foo.cfs")); - assertFalse(hybridDirectory.useDelegate("foo.doc")); - MMapDirectory delegate = hybridDirectory.getDelegate(); - assertThat(delegate, Matchers.instanceOf(FsDirectoryFactory.PreLoadMMapDirectory.class)); - assertWarnings( - "[index.store.hybrid.mmap.extensions] setting was deprecated in OpenSearch and will be removed in a future release!" - + " See the breaking changes documentation for the next major version." - ); - } - build = Settings.builder() - .put(IndexModule.INDEX_STORE_TYPE_SETTING.getKey(), IndexModule.Type.HYBRIDFS.name().toLowerCase(Locale.ROOT)) - .putList(IndexModule.INDEX_STORE_PRE_LOAD_SETTING.getKey(), "nvd", "dvd", "cfs") - .putList(IndexModule.INDEX_STORE_HYBRID_MMAP_EXTENSIONS.getKey(), "nvd", "dvd", "tim", "pos") - .putList(IndexModule.INDEX_STORE_HYBRID_NIO_EXTENSIONS.getKey(), "nvd", "dvd", "tim", "pos") - .build(); - try { - newDirectory(build); - } catch (final Exception e) { - assertEquals( - "Settings index.store.hybrid.nio.extensions & index.store.hybrid.mmap.extensions cannot both be set. Use index.store.hybrid.nio.extensions only.", - e.getMessage() - ); - } - build = Settings.builder() - .put(IndexModule.INDEX_STORE_TYPE_SETTING.getKey(), IndexModule.Type.HYBRIDFS.name().toLowerCase(Locale.ROOT)) - .putList(IndexModule.INDEX_STORE_PRE_LOAD_SETTING.getKey(), "nvd", "dvd", "cfs") - .putList(IndexModule.INDEX_STORE_HYBRID_NIO_EXTENSIONS.getKey(), "nvd", "dvd", "tim", "pos") - .putList(IndexModule.INDEX_STORE_HYBRID_MMAP_EXTENSIONS.getKey(), "nvd", "dvd", "tim", "pos") - .build(); - try { - newDirectory(build); - } catch (final Exception e) { - assertEquals( - "Settings index.store.hybrid.nio.extensions & index.store.hybrid.mmap.extensions cannot both be set. 
Use index.store.hybrid.nio.extensions only.", e.getMessage() ); } build = Settings.builder() .put(IndexModule.INDEX_STORE_TYPE_SETTING.getKey(), IndexModule.Type.HYBRIDFS.name().toLowerCase(Locale.ROOT)) .putList(IndexModule.INDEX_STORE_PRE_LOAD_SETTING.getKey(), "nvd", "dvd", "cfs") @@ -198,24 +141,6 @@ public void testPreload() throws IOException { MMapDirectory delegate = hybridDirectory.getDelegate(); assertThat(delegate, Matchers.instanceOf(FsDirectoryFactory.PreLoadMMapDirectory.class)); } - build = Settings.builder() - .put(IndexModule.INDEX_STORE_TYPE_SETTING.getKey(), IndexModule.Type.HYBRIDFS.name().toLowerCase(Locale.ROOT)) - .putList(IndexModule.INDEX_STORE_PRE_LOAD_SETTING.getKey(), "nvd", "dvd", "cfs") - .putList(IndexModule.INDEX_STORE_HYBRID_MMAP_EXTENSIONS.getKey()) - .build(); - try (Directory directory = newDirectory(build)) { - assertTrue(FsDirectoryFactory.isHybridFs(directory)); - FsDirectoryFactory.HybridDirectory hybridDirectory = (FsDirectoryFactory.HybridDirectory) directory; - // test custom hybrid mmap extensions - // true->mmap, false->nio - assertTrue(hybridDirectory.useDelegate("foo.new")); - assertFalse(hybridDirectory.useDelegate("foo.nvd")); - assertFalse(hybridDirectory.useDelegate("foo.dvd")); - assertFalse(hybridDirectory.useDelegate("foo.cfs")); - assertFalse(hybridDirectory.useDelegate("foo.doc")); - MMapDirectory delegate = hybridDirectory.getDelegate(); - assertThat(delegate, Matchers.instanceOf(FsDirectoryFactory.PreLoadMMapDirectory.class)); - } } private Directory newDirectory(Settings settings) throws IOException { From 0cde7baf438e6ab994114b71dd9fadcacac4e443 Mon Sep 17 00:00:00 2001 From: Marc Handalian Date: Mon, 29 Jul 2024 17:08:54 -0700 Subject: [PATCH 132/167] Fix derived field tests for percentile ranks. (#15015) These tests fail to backport to 2.x because 2.x uses a different branch of tdigest that computes percentiles differently. Rather than chase these over time, change the assertions to check for the length of results returned instead of their values.
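As a minimal sketch of this assertion strategy (all names here are invented for the example, assuming a hypothetical map of requested percentile to computed value as an aggregation would return it):

import java.util.List;
import java.util.Map;

public class PercentileAssertionSketch {
    // Assert on how many percentile entries came back, not on their exact
    // values, since different tdigest branches may compute them differently.
    static void assertPercentileCount(Map<Double, Double> aggValues, List<Double> requested) {
        if (aggValues.size() != requested.size()) {
            throw new AssertionError("expected " + requested.size() + " percentiles, got " + aggValues.size());
        }
    }

    public static void main(String[] args) {
        // On one tdigest branch these might be 1.0 / 2.0 / 4.0; on another, slightly different.
        Map<Double, Double> aggValues = Map.of(25.0, 1.0, 50.0, 2.0, 75.0, 4.0);
        assertPercentileCount(aggValues, List.of(25.0, 50.0, 75.0));
    }
}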
Signed-off-by: Marc Handalian --- .../derived_fields/60_derived_field_aggs.yml | 14 ++++---------- 1 file changed, 4 insertions(+), 10 deletions(-) diff --git a/modules/lang-painless/src/yamlRestTest/resources/rest-api-spec/test/painless/derived_fields/60_derived_field_aggs.yml b/modules/lang-painless/src/yamlRestTest/resources/rest-api-spec/test/painless/derived_fields/60_derived_field_aggs.yml index ba879a5fd73c3..87c260ce5f308 100644 --- a/modules/lang-painless/src/yamlRestTest/resources/rest-api-spec/test/painless/derived_fields/60_derived_field_aggs.yml +++ b/modules/lang-painless/src/yamlRestTest/resources/rest-api-spec/test/painless/derived_fields/60_derived_field_aggs.yml @@ -413,9 +413,7 @@ setup: percents: [ 25, 50, 75 ] - match: { hits.total.value: 5 } -- match: { aggregations.double_percentiles.values.25\.0: 1.0 } -- match: { aggregations.double_percentiles.values.50\.0: 2.0 } -- match: { aggregations.double_percentiles.values.75\.0: 4.0 } +- length: { aggregations.double_percentiles.values: 3} --- "Test percentile ranks aggregation on derived_long": @@ -431,8 +429,7 @@ setup: values: [ 2, 4 ] - match: { hits.total.value: 5 } -- match: { aggregations.long_percentile_ranks.values.2\.0: 50.0 } -- match: { aggregations.long_percentile_ranks.values.4\.0: 70.0 } +- length: { aggregations.long_percentile_ranks.values: 2} --- "Test top hits aggregation on derived_keyword": @@ -1071,9 +1068,7 @@ setup: percents: [ 25, 50, 75 ] - match: { hits.total.value: 5 } -- match: { aggregations.double_percentiles.values.25\.0: 1.0 } -- match: { aggregations.double_percentiles.values.50\.0: 2.0 } -- match: { aggregations.double_percentiles.values.75\.0: 4.0 } +- length: { aggregations.double_percentiles.values: 3} --- "Test percentile ranks aggregation on derived_object.long": @@ -1089,8 +1084,7 @@ setup: values: [ 2, 4 ] - match: { hits.total.value: 5 } -- match: { aggregations.long_percentile_ranks.values.2\.0: 50.0 } -- match: { aggregations.long_percentile_ranks.values.4\.0: 70.0 } +- length: { aggregations.long_percentile_ranks.values: 2} --- "Test top hits aggregation on derived_object.keyword": From 03b1306b3cf2f4a37634ea6aca89512803541de6 Mon Sep 17 00:00:00 2001 From: gaobinlong Date: Tue, 30 Jul 2024 19:45:15 +0800 Subject: [PATCH 133/167] Fix version check in yml test for the bug fix of constant_keyword field type not working (#15019) Signed-off-by: Gao Binlong --- .../rest-api-spec/test/index/110_constant_keyword.yml | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/rest-api-spec/src/main/resources/rest-api-spec/test/index/110_constant_keyword.yml b/rest-api-spec/src/main/resources/rest-api-spec/test/index/110_constant_keyword.yml index 9864bfbbb26e9..f4f8b3752bec8 100644 --- a/rest-api-spec/src/main/resources/rest-api-spec/test/index/110_constant_keyword.yml +++ b/rest-api-spec/src/main/resources/rest-api-spec/test/index/110_constant_keyword.yml @@ -8,8 +8,8 @@ "Mappings and Supported queries": - skip: - version: " - 2.99.99" - reason: "fixed in 3.0.0" + version: " - 2.15.99" + reason: "fixed in 2.16.0" # Create index with constant_keyword field type - do: From ffa67f9ad7b00739d7471166ba1f2cc5ec1ecbf5 Mon Sep 17 00:00:00 2001 From: panguixin Date: Tue, 30 Jul 2024 23:24:07 +0800 Subject: [PATCH 134/167] Fix missing value of FieldSort for unsigned_long (#14963) * Fix missing value of FieldSort for unsigned_long Signed-off-by: panguixin * add changelog Signed-off-by: panguixin * apply review comments Signed-off-by: panguixin --------- Signed-off-by: panguixin --- 
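A standalone sketch of the conversion rule this change enforces for a sort's `missing` value on an unsigned_long field; the class and method names are invented for illustration, while the real logic lives in UnsignedLongValuesComparatorSource.missingObject (see the diff below):

import java.math.BigInteger;

public class UnsignedLongMissingSketch {
    // Mirrors the fix: a string-valued "missing" is parsed as a BigInteger and
    // rejected when negative, instead of being passed through unchecked.
    static BigInteger parseMissing(String missingValue) {
        BigInteger missing = new BigInteger(missingValue);
        if (missing.signum() < 0) {
            throw new IllegalArgumentException("Value [" + missingValue + "] is out of range for an unsigned long");
        }
        return missing;
    }

    public static void main(String[] args) {
        System.out.println(parseMissing("2")); // accepted
        try {
            parseMissing("-1"); // rejected, matching the FieldSortIT assertion
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}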
CHANGELOG.md | 1 + .../opensearch/search/sort/FieldSortIT.java | 46 ++++++++++++++++++- .../UnsignedLongValuesComparatorSource.java | 8 +++- 3 files changed, 52 insertions(+), 3 deletions(-) diff --git a/CHANGELOG.md b/CHANGELOG.md index 36cd33cc40453..f619b6b85c649 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -21,6 +21,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), ### Removed ### Fixed +- Fix missing value of FieldSort for unsigned_long ([#14963](https://github.com/opensearch-project/OpenSearch/pull/14963)) ### Security diff --git a/server/src/internalClusterTest/java/org/opensearch/search/sort/FieldSortIT.java b/server/src/internalClusterTest/java/org/opensearch/search/sort/FieldSortIT.java index e40928f15e8a8..fdb12639c65be 100644 --- a/server/src/internalClusterTest/java/org/opensearch/search/sort/FieldSortIT.java +++ b/server/src/internalClusterTest/java/org/opensearch/search/sort/FieldSortIT.java @@ -42,6 +42,7 @@ import org.opensearch.action.bulk.BulkRequestBuilder; import org.opensearch.action.index.IndexRequestBuilder; import org.opensearch.action.search.SearchPhaseExecutionException; +import org.opensearch.action.search.SearchRequestBuilder; import org.opensearch.action.search.SearchResponse; import org.opensearch.action.search.ShardSearchFailure; import org.opensearch.cluster.metadata.IndexMetadata; @@ -90,6 +91,7 @@ import static org.opensearch.script.MockScriptPlugin.NAME; import static org.opensearch.search.SearchService.CLUSTER_CONCURRENT_SEGMENT_SEARCH_SETTING; import static org.opensearch.test.hamcrest.OpenSearchAssertions.assertAcked; +import static org.opensearch.test.hamcrest.OpenSearchAssertions.assertFailures; import static org.opensearch.test.hamcrest.OpenSearchAssertions.assertFirstHit; import static org.opensearch.test.hamcrest.OpenSearchAssertions.assertHitCount; import static org.opensearch.test.hamcrest.OpenSearchAssertions.assertNoFailures; @@ -919,7 +921,7 @@ public void testSortMissingNumbers() throws Exception { client().prepareIndex("test") .setId("3") .setSource( - jsonBuilder().startObject().field("id", "3").field("i_value", 2).field("d_value", 2.2).field("u_value", 2).endObject() + jsonBuilder().startObject().field("id", "3").field("i_value", 2).field("d_value", 2.2).field("u_value", 3).endObject() ) .get(); @@ -964,6 +966,18 @@ public void testSortMissingNumbers() throws Exception { assertThat(searchResponse.getHits().getAt(1).getId(), equalTo("1")); assertThat(searchResponse.getHits().getAt(2).getId(), equalTo("3")); + logger.info("--> sort with custom missing value"); + searchResponse = client().prepareSearch() + .setQuery(matchAllQuery()) + .addSort(SortBuilders.fieldSort("i_value").order(SortOrder.ASC).missing(randomBoolean() ? 
1 : "1")) + .get(); + assertNoFailures(searchResponse); + + assertThat(searchResponse.getHits().getTotalHits().value, equalTo(3L)); + assertThat(searchResponse.getHits().getAt(0).getId(), equalTo("1")); + assertThat(searchResponse.getHits().getAt(1).getId(), equalTo("2")); + assertThat(searchResponse.getHits().getAt(2).getId(), equalTo("3")); + // FLOAT logger.info("--> sort with no missing (same as missing _last)"); searchResponse = client().prepareSearch() @@ -1001,6 +1015,18 @@ public void testSortMissingNumbers() throws Exception { assertThat(searchResponse.getHits().getAt(1).getId(), equalTo("1")); assertThat(searchResponse.getHits().getAt(2).getId(), equalTo("3")); + logger.info("--> sort with custom missing value"); + searchResponse = client().prepareSearch() + .setQuery(matchAllQuery()) + .addSort(SortBuilders.fieldSort("d_value").order(SortOrder.ASC).missing(randomBoolean() ? 1.1 : "1.1")) + .get(); + assertNoFailures(searchResponse); + + assertThat(searchResponse.getHits().getTotalHits().value, equalTo(3L)); + assertThat(searchResponse.getHits().getAt(0).getId(), equalTo("1")); + assertThat(searchResponse.getHits().getAt(1).getId(), equalTo("2")); + assertThat(searchResponse.getHits().getAt(2).getId(), equalTo("3")); + // UNSIGNED_LONG logger.info("--> sort with no missing (same as missing _last)"); searchResponse = client().prepareSearch() @@ -1037,6 +1063,24 @@ public void testSortMissingNumbers() throws Exception { assertThat(searchResponse.getHits().getAt(0).getId(), equalTo("2")); assertThat(searchResponse.getHits().getAt(1).getId(), equalTo("1")); assertThat(searchResponse.getHits().getAt(2).getId(), equalTo("3")); + + logger.info("--> sort with custom missing value"); + searchResponse = client().prepareSearch() + .setQuery(matchAllQuery()) + .addSort(SortBuilders.fieldSort("u_value").order(SortOrder.ASC).missing(randomBoolean() ? 2 : "2")) + .get(); + assertNoFailures(searchResponse); + + assertThat(searchResponse.getHits().getTotalHits().value, equalTo(3L)); + assertThat(searchResponse.getHits().getAt(0).getId(), equalTo("1")); + assertThat(searchResponse.getHits().getAt(1).getId(), equalTo("2")); + assertThat(searchResponse.getHits().getAt(2).getId(), equalTo("3")); + + logger.info("--> sort with negative missing value"); + SearchRequestBuilder searchRequestBuilder = client().prepareSearch() + .setQuery(matchAllQuery()) + .addSort(SortBuilders.fieldSort("u_value").order(SortOrder.ASC).missing(randomBoolean() ? -1 : "-1")); + assertFailures(searchRequestBuilder, RestStatus.BAD_REQUEST, containsString("Value [-1] is out of range for an unsigned long")); } public void testSortMissingNumbersMinMax() throws Exception { diff --git a/server/src/main/java/org/opensearch/index/fielddata/fieldcomparator/UnsignedLongValuesComparatorSource.java b/server/src/main/java/org/opensearch/index/fielddata/fieldcomparator/UnsignedLongValuesComparatorSource.java index 3714561b63e44..9db5817450cd0 100644 --- a/server/src/main/java/org/opensearch/index/fielddata/fieldcomparator/UnsignedLongValuesComparatorSource.java +++ b/server/src/main/java/org/opensearch/index/fielddata/fieldcomparator/UnsignedLongValuesComparatorSource.java @@ -81,9 +81,13 @@ public Object missingObject(Object missingValue, boolean reversed) { return min ? 
Numbers.MIN_UNSIGNED_LONG_VALUE : Numbers.MAX_UNSIGNED_LONG_VALUE; } else { if (missingValue instanceof Number) { - return ((Number) missingValue); + return Numbers.toUnsignedLongExact((Number) missingValue); } else { - return new BigInteger(missingValue.toString()); + BigInteger missing = new BigInteger(missingValue.toString()); + if (missing.signum() < 0) { + throw new IllegalArgumentException("Value [" + missingValue + "] is out of range for an unsigned long"); + } + return missing; } } } From 09276b372269b48187976ddc2c39bb95dd862544 Mon Sep 17 00:00:00 2001 From: Rishab Nahata Date: Tue, 30 Jul 2024 23:15:34 +0530 Subject: [PATCH 135/167] Add lower limit for primary and replica batch allocators timeout (#14979) * Add lower limit for primary and replica batch allocators Signed-off-by: Rishab Nahata --- CHANGELOG.md | 1 + .../gateway/RecoveryFromGatewayIT.java | 6 +- .../gateway/ShardsBatchGatewayAllocator.java | 39 ++++++++++++- .../gateway/GatewayAllocatorTests.java | 55 +++++++++++++++++++ 4 files changed, 96 insertions(+), 5 deletions(-) diff --git a/CHANGELOG.md b/CHANGELOG.md index f619b6b85c649..a5a3e9c60b664 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -15,6 +15,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), - Bump `actions/github-script` from 6 to 7 ([#14997](https://github.com/opensearch-project/OpenSearch/pull/14997)) ### Changed +- Add lower limit for primary and replica batch allocators timeout ([#14979](https://github.com/opensearch-project/OpenSearch/pull/14979)) ### Deprecated diff --git a/server/src/internalClusterTest/java/org/opensearch/gateway/RecoveryFromGatewayIT.java b/server/src/internalClusterTest/java/org/opensearch/gateway/RecoveryFromGatewayIT.java index eccc903dfac82..bcf23a37c0010 100644 --- a/server/src/internalClusterTest/java/org/opensearch/gateway/RecoveryFromGatewayIT.java +++ b/server/src/internalClusterTest/java/org/opensearch/gateway/RecoveryFromGatewayIT.java @@ -886,7 +886,7 @@ public void testBatchModeEnabledWithSufficientTimeoutAndClusterGreen() throws Ex assertEquals(0, gatewayAllocator.getNumberOfInFlightFetches()); } - public void testBatchModeEnabledWithInSufficientTimeoutButClusterGreen() throws Exception { + public void testBatchModeEnabledWithDisabledTimeoutAndClusterGreen() throws Exception { internalCluster().startClusterManagerOnlyNodes( 1, @@ -920,8 +920,8 @@ public void testBatchModeEnabledWithInSufficientTimeoutButClusterGreen() throws .put("node.name", clusterManagerName) .put(clusterManagerDataPathSettings) .put(ShardsBatchGatewayAllocator.GATEWAY_ALLOCATOR_BATCH_SIZE.getKey(), 5) - .put(ShardsBatchGatewayAllocator.PRIMARY_BATCH_ALLOCATOR_TIMEOUT_SETTING.getKey(), "10ms") - .put(ShardsBatchGatewayAllocator.REPLICA_BATCH_ALLOCATOR_TIMEOUT_SETTING.getKey(), "10ms") + .put(ShardsBatchGatewayAllocator.PRIMARY_BATCH_ALLOCATOR_TIMEOUT_SETTING.getKey(), "-1") + .put(ShardsBatchGatewayAllocator.REPLICA_BATCH_ALLOCATOR_TIMEOUT_SETTING.getKey(), "-1") .put(ExistingShardsAllocator.EXISTING_SHARDS_ALLOCATOR_BATCH_MODE.getKey(), true) .build() ); diff --git a/server/src/main/java/org/opensearch/gateway/ShardsBatchGatewayAllocator.java b/server/src/main/java/org/opensearch/gateway/ShardsBatchGatewayAllocator.java index 673ed8dbaa1c3..6c6b1126a78d6 100644 --- a/server/src/main/java/org/opensearch/gateway/ShardsBatchGatewayAllocator.java +++ b/server/src/main/java/org/opensearch/gateway/ShardsBatchGatewayAllocator.java @@ -73,13 +73,14 @@ public class ShardsBatchGatewayAllocator implements 
ExistingShardsAllocator { private final long maxBatchSize; private static final short DEFAULT_SHARD_BATCH_SIZE = 2000; - private static final String PRIMARY_BATCH_ALLOCATOR_TIMEOUT_SETTING_KEY = + public static final String PRIMARY_BATCH_ALLOCATOR_TIMEOUT_SETTING_KEY = "cluster.routing.allocation.shards_batch_gateway_allocator.primary_allocator_timeout"; - private static final String REPLICA_BATCH_ALLOCATOR_TIMEOUT_SETTING_KEY = + public static final String REPLICA_BATCH_ALLOCATOR_TIMEOUT_SETTING_KEY = "cluster.routing.allocation.shards_batch_gateway_allocator.replica_allocator_timeout"; private TimeValue primaryShardsBatchGatewayAllocatorTimeout; private TimeValue replicaShardsBatchGatewayAllocatorTimeout; + public static final TimeValue MIN_ALLOCATOR_TIMEOUT = TimeValue.timeValueSeconds(20); /** * Number of shards we send in one batch to data nodes for fetching metadata @@ -92,16 +93,50 @@ public class ShardsBatchGatewayAllocator implements ExistingShardsAllocator { Setting.Property.NodeScope ); + /** + * Timeout for existing primary shards batch allocator. + * Timeout value must be greater than or equal to 20s or -1ms to effectively disable timeout + */ public static final Setting PRIMARY_BATCH_ALLOCATOR_TIMEOUT_SETTING = Setting.timeSetting( PRIMARY_BATCH_ALLOCATOR_TIMEOUT_SETTING_KEY, TimeValue.MINUS_ONE, + TimeValue.MINUS_ONE, + new Setting.Validator<>() { + @Override + public void validate(TimeValue timeValue) { + if (timeValue.compareTo(MIN_ALLOCATOR_TIMEOUT) < 0 && timeValue.compareTo(TimeValue.MINUS_ONE) != 0) { + throw new IllegalArgumentException( + "Setting [" + + PRIMARY_BATCH_ALLOCATOR_TIMEOUT_SETTING.getKey() + + "] should be more than 20s or -1ms to disable timeout" + ); + } + } + }, Setting.Property.NodeScope, Setting.Property.Dynamic ); + /** + * Timeout for existing replica shards batch allocator. 
+ * Timeout value must be greater than or equal to 20s or -1ms to effectively disable timeout + */ public static final Setting REPLICA_BATCH_ALLOCATOR_TIMEOUT_SETTING = Setting.timeSetting( REPLICA_BATCH_ALLOCATOR_TIMEOUT_SETTING_KEY, TimeValue.MINUS_ONE, + TimeValue.MINUS_ONE, + new Setting.Validator<>() { + @Override + public void validate(TimeValue timeValue) { + if (timeValue.compareTo(MIN_ALLOCATOR_TIMEOUT) < 0 && timeValue.compareTo(TimeValue.MINUS_ONE) != 0) { + throw new IllegalArgumentException( + "Setting [" + + REPLICA_BATCH_ALLOCATOR_TIMEOUT_SETTING.getKey() + + "] should be more than 20s or -1ms to disable timeout" + ); + } + } + }, Setting.Property.NodeScope, Setting.Property.Dynamic ); diff --git a/server/src/test/java/org/opensearch/gateway/GatewayAllocatorTests.java b/server/src/test/java/org/opensearch/gateway/GatewayAllocatorTests.java index bd56123f6df1f..1596a0b566b28 100644 --- a/server/src/test/java/org/opensearch/gateway/GatewayAllocatorTests.java +++ b/server/src/test/java/org/opensearch/gateway/GatewayAllocatorTests.java @@ -47,6 +47,11 @@ import java.util.Set; import java.util.stream.Collectors; +import static org.opensearch.gateway.ShardsBatchGatewayAllocator.PRIMARY_BATCH_ALLOCATOR_TIMEOUT_SETTING; +import static org.opensearch.gateway.ShardsBatchGatewayAllocator.PRIMARY_BATCH_ALLOCATOR_TIMEOUT_SETTING_KEY; +import static org.opensearch.gateway.ShardsBatchGatewayAllocator.REPLICA_BATCH_ALLOCATOR_TIMEOUT_SETTING; +import static org.opensearch.gateway.ShardsBatchGatewayAllocator.REPLICA_BATCH_ALLOCATOR_TIMEOUT_SETTING_KEY; + public class GatewayAllocatorTests extends OpenSearchAllocationTestCase { private final Logger logger = LogManager.getLogger(GatewayAllocatorTests.class); @@ -368,6 +373,56 @@ public void testCreatePrimaryAndReplicaExecutorOfSizeTwo() { assertEquals(executor.getTimeoutAwareRunnables().size(), 2); } + public void testPrimaryAllocatorTimeout() { + // Valid setting with timeout = 20s + Settings build = Settings.builder().put(PRIMARY_BATCH_ALLOCATOR_TIMEOUT_SETTING_KEY, "20s").build(); + assertEquals(20, PRIMARY_BATCH_ALLOCATOR_TIMEOUT_SETTING.get(build).getSeconds()); + + // Valid setting with timeout > 20s + build = Settings.builder().put(PRIMARY_BATCH_ALLOCATOR_TIMEOUT_SETTING_KEY, "30000ms").build(); + assertEquals(30, PRIMARY_BATCH_ALLOCATOR_TIMEOUT_SETTING.get(build).getSeconds()); + + // Invalid setting with timeout < 20s + Settings lessThan20sSetting = Settings.builder().put(PRIMARY_BATCH_ALLOCATOR_TIMEOUT_SETTING_KEY, "10s").build(); + IllegalArgumentException iae = expectThrows( + IllegalArgumentException.class, + () -> PRIMARY_BATCH_ALLOCATOR_TIMEOUT_SETTING.get(lessThan20sSetting) + ); + assertEquals( + "Setting [" + PRIMARY_BATCH_ALLOCATOR_TIMEOUT_SETTING.getKey() + "] should be more than 20s or -1ms to disable timeout", + iae.getMessage() + ); + + // Valid setting with timeout = -1 + build = Settings.builder().put(PRIMARY_BATCH_ALLOCATOR_TIMEOUT_SETTING_KEY, "-1").build(); + assertEquals(-1, PRIMARY_BATCH_ALLOCATOR_TIMEOUT_SETTING.get(build).getMillis()); + } + + public void testReplicaAllocatorTimeout() { + // Valid setting with timeout = 20s + Settings build = Settings.builder().put(REPLICA_BATCH_ALLOCATOR_TIMEOUT_SETTING_KEY, "20s").build(); + assertEquals(20, REPLICA_BATCH_ALLOCATOR_TIMEOUT_SETTING.get(build).getSeconds()); + + // Valid setting with timeout > 20s + build = Settings.builder().put(REPLICA_BATCH_ALLOCATOR_TIMEOUT_SETTING_KEY, "30000ms").build(); + assertEquals(30, 
REPLICA_BATCH_ALLOCATOR_TIMEOUT_SETTING.get(build).getSeconds()); + + // Invalid setting with timeout < 20s + Settings lessThan20sSetting = Settings.builder().put(REPLICA_BATCH_ALLOCATOR_TIMEOUT_SETTING_KEY, "10s").build(); + IllegalArgumentException iae = expectThrows( + IllegalArgumentException.class, + () -> REPLICA_BATCH_ALLOCATOR_TIMEOUT_SETTING.get(lessThan20sSetting) + ); + assertEquals( + "Setting [" + REPLICA_BATCH_ALLOCATOR_TIMEOUT_SETTING.getKey() + "] should be more than 20s or -1ms to disable timeout", + iae.getMessage() + ); + + // Valid setting with timeout = -1 + build = Settings.builder().put(REPLICA_BATCH_ALLOCATOR_TIMEOUT_SETTING_KEY, "-1").build(); + assertEquals(-1, REPLICA_BATCH_ALLOCATOR_TIMEOUT_SETTING.get(build).getMillis()); + } + private void createIndexAndUpdateClusterState(int count, int numberOfShards, int numberOfReplicas) { if (count == 0) return; Metadata.Builder metadata = Metadata.builder(); From f977f196c48570f3b16f305b39becae6838e7271 Mon Sep 17 00:00:00 2001 From: Marc Handalian Date: Tue, 30 Jul 2024 12:31:33 -0700 Subject: [PATCH 136/167] Fix test RestStatusTests.testStatusReturnsFailureStatusWhenFailuresExist (#15011) This test has a reproducible failure when the highest "failure" status is 100 level. This happens because RestStatus.status treats these as OK. Signed-off-by: Marc Handalian --- .../src/test/java/org/opensearch/core/RestStatusTests.java | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-) diff --git a/server/src/test/java/org/opensearch/core/RestStatusTests.java b/server/src/test/java/org/opensearch/core/RestStatusTests.java index f8dba99aa8b60..fbd238bd035d0 100644 --- a/server/src/test/java/org/opensearch/core/RestStatusTests.java +++ b/server/src/test/java/org/opensearch/core/RestStatusTests.java @@ -55,7 +55,11 @@ public void testStatusReturnsFailureStatusWhenFailuresExist() { heapOfFailures.add(failure); } - assertEquals(heapOfFailures.peek().status(), RestStatus.status(successfulShards, totalShards, failures)); + final RestStatus status = heapOfFailures.peek().status(); + // RestStatus.status will return RestStatus.OK when the highest failure code is 100 level. + final RestStatus expected = status.getStatusFamilyCode() == 1 ? 
RestStatus.OK : status; + + assertEquals(expected, RestStatus.status(successfulShards, totalShards, failures)); } public void testSerialization() throws IOException { From eb306d2bab43de789b59adc01265c683a8fb69fb Mon Sep 17 00:00:00 2001 From: Kaushal Kumar Date: Tue, 30 Jul 2024 15:26:44 -0700 Subject: [PATCH 137/167] Add queryGroupId to search workload tasks at co-ordinator and data node level (#14708) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * add logic to add headers to Task Signed-off-by: Kaushal Kumar * add logic to add queryGroupId to task headers Signed-off-by: Kaushal Kumar * remove redundant code Signed-off-by: Kaushal Kumar * add changelog entry Signed-off-by: Kaushal Kumar * address comments Signed-off-by: Kaushal Kumar * fix precommit Signed-off-by: Kaushal Kumar * Add UTs for RemoteIndexMetadataManager (#14660) Signed-off-by: Shivansh Arora Co-authored-by: Arpit-Bandejiya Signed-off-by: Kaushal Kumar * Fix match_phrase_prefix_query not working on text field with multiple values and index_prefixes (#10959) * Fix match_phrase_prefix_query not working on text field with multiple values and index_prefixes Signed-off-by: Gao Binlong * Add more test Signed-off-by: Gao Binlong * modify change log Signed-off-by: Gao Binlong * Fix test failure Signed-off-by: Gao Binlong * Change the indexAnalyzer used by prefix field Signed-off-by: Gao Binlong * Skip old version for yaml test Signed-off-by: Gao Binlong * Optimize some code Signed-off-by: Gao Binlong * Fix test failure Signed-off-by: Gao Binlong * Modify yaml test description Signed-off-by: Gao Binlong * Remove the name parameter for setAnalyzer() Signed-off-by: Gao Binlong --------- Signed-off-by: Gao Binlong Signed-off-by: Kaushal Kumar * Offline calculation of total shard per node and caching it for weight calculation inside LocalShardBalancer (#14675) Signed-off-by: RS146BIJAY Signed-off-by: Kaushal Kumar * [bug fix] validate lower bound for top n size (#14587) Signed-off-by: Chenyang Ji Signed-off-by: Kaushal Kumar * Create SystemIndexRegistry with helper method matchesSystemIndex (#14415) * Create new extension point in SystemIndexPlugin for a single plugin to get registered system indices Signed-off-by: Craig Perkins * Add to CHANGELOG Signed-off-by: Craig Perkins * WIP on system indices from IndexNameExpressionResolver Signed-off-by: Craig Perkins * Add test in IndexNameExpressionResolverTests Signed-off-by: Craig Perkins * Remove changes in SystemIndexPlugin Signed-off-by: Craig Perkins * Add method in IndexNameExpressionResolver to get matching system indices Signed-off-by: Craig Perkins * Show how resolver can be chained to get system indices Signed-off-by: Craig Perkins * Fix forbiddenApis check Signed-off-by: Craig Perkins * Update CHANGELOG Signed-off-by: Craig Perkins * Make SystemIndices internal Signed-off-by: Craig Perkins * Remove unneeded changes Signed-off-by: Craig Perkins * Fix CI failures Signed-off-by: Craig Perkins * Fix precommit errors Signed-off-by: Craig Perkins * Use Regex instead of WildcardMatcher Signed-off-by: Craig Perkins * Address code review feedback Signed-off-by: Craig Perkins * Allow caller to pass index expressions Signed-off-by: Craig Perkins * Create SystemIndexRegistry Signed-off-by: Craig Perkins * Update CHANGELOG Signed-off-by: Craig Perkins * Remove singleton limitation Signed-off-by: Craig Perkins * Add javadoc Signed-off-by: Craig Perkins * Add @ExperimentalApi annotation Signed-off-by: Craig Perkins --------- Signed-off-by: Craig 
Perkins Signed-off-by: Kaushal Kumar * Refactor Grok validate pattern to iterative approach (#14206) * grok validate patterns recursion to iterative Signed-off-by: Sandesh Kumar * Add max depth in resolving a pattern to avoid OOM Signed-off-by: Sandesh Kumar * change path from deque to arraylist Signed-off-by: Sandesh Kumar * rename queue to stack Signed-off-by: Sandesh Kumar * Change max depth to 500 Signed-off-by: Sandesh Kumar * typo originPatternName fix Signed-off-by: Sandesh Kumar * spotless Signed-off-by: Sandesh Kumar --------- Signed-off-by: Sandesh Kumar Signed-off-by: Kaushal Kumar * Bump opentelemetry from 1.39.0 to 1.40.0 (#14674) Signed-off-by: Andriy Redko Signed-off-by: Kaushal Kumar * Bump jackson from 2.17.1 to 2.17.2 (#14687) Signed-off-by: Andriy Redko Signed-off-by: Kaushal Kumar * Add release notes for release 1.3.18 (#14699) Signed-off-by: Zelin Hao Signed-off-by: Kaushal Kumar * Bump reactor from 3.5.19 to 3.5.20 (#14697) Signed-off-by: Andriy Redko Signed-off-by: Kaushal Kumar * Add unit tests for read flow of RemoteClusterStateService and bug fix for transient settings (#14476) Signed-off-by: Shivansh Arora Signed-off-by: Kaushal Kumar * Update version check for the bug fix of match_phrase_prefix_query not working on text field with multiple values and index_prefixes (#14703) Signed-off-by: Gao Binlong Signed-off-by: Kaushal Kumar * Remove unnecessary cast to int from test (#14696) Signed-off-by: Lukáš Vlček Signed-off-by: Kaushal Kumar * print reason why parent task was cancelled (#14604) Signed-off-by: kkewwei Signed-off-by: Kaushal Kumar * Use set of shard routing for shard in unassigned shard batch check. (#14533) Signed-off-by: Swetha Guptha Signed-off-by: Kaushal Kumar * Add versioning for UploadedIndexMetadata (#14677) * Add versioning for UploadedIndexMetadata * Handle componentPrefix for backward compatibility Signed-off-by: Sooraj Sinha Signed-off-by: Kaushal Kumar * Fix: update help output for _cat (#14722) * fixed help output for _cat Signed-off-by: ahmedsobeh * updated changelog Signed-off-by: ahmedsobeh * updated changelog Signed-off-by: ahmedsobeh --------- Signed-off-by: ahmedsobeh Signed-off-by: Kaushal Kumar * Fix hdfs-fixture kerb-admin & hadoop-minicluster dependencies are not being updated / false positive reports on CVEs (#14729) Signed-off-by: Andriy Redko Signed-off-by: Kaushal Kumar * Update to Gradle 8.9 (#14574) Signed-off-by: Andriy Redko Signed-off-by: Kaushal Kumar * Fix hdfs-fixture hadoop-minicluster dependencies are not being updated / false positive reports on CVEs (#14732) Signed-off-by: Andriy Redko Signed-off-by: Kaushal Kumar * Add `strict_allow_templates` dynamic mapping option (#14555) * The dynamic mapping parameter supports strict_allow_templates Signed-off-by: Gao Binlong * Modify change log Signed-off-by: Gao Binlong * Modify skip version in yml test file Signed-off-by: Gao Binlong * Refactor some code Signed-off-by: Gao Binlong * Keep the old methods Signed-off-by: Gao Binlong * change public to private Signed-off-by: Gao Binlong * Optimize some code Signed-off-by: Gao Binlong * Do not override toString method for Dynamic Signed-off-by: Gao Binlong * Optimize some code and modify the changelog Signed-off-by: Gao Binlong --------- Signed-off-by: Gao Binlong Signed-off-by: Kaushal Kumar * Bump net.minidev:json-smart from 2.5.0 to 2.5.1 in /plugins/repository-azure (#14748) * Bump net.minidev:json-smart in /plugins/repository-azure Bumps [net.minidev:json-smart](https://github.com/netplex/json-smart-v2) from 2.5.0 
to 2.5.1. - [Release notes](https://github.com/netplex/json-smart-v2/releases) - [Commits](https://github.com/netplex/json-smart-v2/compare/2.5.0...2.5.1) --- updated-dependencies: - dependency-name: net.minidev:json-smart dependency-type: direct:production update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] * Updating SHAs Signed-off-by: dependabot[bot] * Update changelog Signed-off-by: dependabot[bot] --------- Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: dependabot[bot] Signed-off-by: Kaushal Kumar * remove query insights plugin from core (#14743) Signed-off-by: Chenyang Ji Signed-off-by: Kaushal Kumar * Add `strict_allow_templates` dynamic mapping option (#14555) (#14737) (#14742) * The dynamic mapping parameter supports strict_allow_templates * Modify change log * Modify skip version in yml test file * Refactor some code * Keep the old methods * change public to private * Optimize some code * Do not override toString method for Dynamic * Optimize some code and modify the changelog --------- (cherry picked from commit 6b8b3efe01a62c221f308a2e3b019d75a7f5ad8a) Signed-off-by: Gao Binlong Signed-off-by: github-actions[bot] Signed-off-by: Andriy Redko Co-authored-by: opensearch-trigger-bot[bot] <98922864+opensearch-trigger-bot[bot]@users.noreply.github.com> Co-authored-by: github-actions[bot] Signed-off-by: Kaushal Kumar * Fix create or update alias API doesn't throw exception for unsupported parameters (#14719) * Fix create or update alias API doesn't throw exception for unsupported parameters Signed-off-by: Gao Binlong * Update version check in yml test Signed-off-by: Gao Binlong * modify change log Signed-off-by: Gao Binlong --------- Signed-off-by: Gao Binlong Signed-off-by: Kaushal Kumar * Remove query categorization from core (#14759) * Remove query categorization from core Signed-off-by: Siddhant Deshmukh * Add changelog Signed-off-by: Siddhant Deshmukh * Trigger Build Signed-off-by: Siddhant Deshmukh --------- Signed-off-by: Siddhant Deshmukh Signed-off-by: Kaushal Kumar * Add changes to propagate queryGroupId across child requests and nodes (#14614) * add query group header propagator Signed-off-by: Kaushal Kumar * apply spotless check Signed-off-by: Kaushal Kumar * add new propagator in ThreadContext Signed-off-by: Kaushal Kumar * spotlessApply Signed-off-by: Kaushal Kumar * address comments Signed-off-by: Kaushal Kumar * Bump com.microsoft.azure:msal4j from 1.15.1 to 1.16.0 in /plugins/repository-azure (#14610) * Bump com.microsoft.azure:msal4j in /plugins/repository-azure Bumps [com.microsoft.azure:msal4j](https://github.com/AzureAD/microsoft-authentication-library-for-java) from 1.15.1 to 1.16.0. - [Release notes](https://github.com/AzureAD/microsoft-authentication-library-for-java/releases) - [Changelog](https://github.com/AzureAD/microsoft-authentication-library-for-java/blob/dev/changelog.txt) - [Commits](https://github.com/AzureAD/microsoft-authentication-library-for-java/compare/v1.15.1...v1.16.0) --- updated-dependencies: - dependency-name: com.microsoft.azure:msal4j dependency-type: direct:production update-type: version-update:semver-minor ... 
Signed-off-by: dependabot[bot] * Updating SHAs Signed-off-by: dependabot[bot] * Update changelog Signed-off-by: dependabot[bot] --------- Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: dependabot[bot] Signed-off-by: Kaushal Kumar * [Bugfix] Fix ICacheKeySerializerTests flakiness (#14564) * Fix testInvalidInput flakiness Signed-off-by: Peter Alfonsi * Addressed andrross's comment Signed-off-by: Peter Alfonsi * rerun security check Signed-off-by: Peter Alfonsi --------- Signed-off-by: Peter Alfonsi Co-authored-by: Peter Alfonsi Signed-off-by: Kaushal Kumar * Correct typo in method name (#14621) Signed-off-by: vatsal Signed-off-by: Kaushal Kumar * Refactoring FilterPath.parse by using an iterative approach instead of recursion. (#14200) * Refactor FilterPath parse function (#12067) Signed-off-by: Robin Friedmann * Implement unit tests for FilterPathTests (#12067) Signed-off-by: Robin Friedmann * Write warn log if Filter is empty; Add comments (#12067) Signed-off-by: Robin Friedmann * Add changelog Signed-off-by: Siddhant Deshmukh * Remove unnecessary log statement Signed-off-by: Siddhant Deshmukh * Remove unused logger Signed-off-by: Siddhant Deshmukh * Spotless apply Signed-off-by: Siddhant Deshmukh * Remove incorrect changelog Signed-off-by: Siddhant Deshmukh --------- Signed-off-by: Siddhant Deshmukh Co-authored-by: Robin Friedmann Signed-off-by: Kaushal Kumar * Removing String format in RemoteStoreMigrationAllocationDecider to optimise performance(#14612) Signed-off-by: RS146BIJAY Signed-off-by: Kaushal Kumar * Clear templates before Adding; Use NamedWriteableAwareStreamInput for RemoteCustomMetadata; Correct the check for deciding upload of HashesOfConsistentSettings (#14513) * Clear templates before Adding; Use NamedWriteableAwareStreamInput for RemoteCustomMetadata * Correct the check for deciding upload of hashes of consistent settings Signed-off-by: Sooraj Sinha Signed-off-by: Kaushal Kumar * add changelog Signed-off-by: Kaushal Kumar * add PR link changelog Signed-off-by: Kaushal Kumar * Improve reroute performance by optimising List.removeAll in LocalShardsBalancer to filter remote search shard from relocation decision (#14613) Signed-off-by: RS146BIJAY Signed-off-by: Kaushal Kumar * Fix assertion failure while deleting remote backed index (#14601) Signed-off-by: Sachin Kale Signed-off-by: Kaushal Kumar * Allow system index warning in OpenSearchRestTestCase.refreshAllIndices (#14635) * Allow system index warning Signed-off-by: Craig Perkins * Add to CHANGELOG Signed-off-by: Craig Perkins * Address code review comments Signed-off-by: Craig Perkins --------- Signed-off-by: Craig Perkins Signed-off-by: Kaushal Kumar * Star tree codec changes (#14514) --------- Signed-off-by: Bharathwaj G Signed-off-by: Kaushal Kumar * Bump com.github.spullara.mustache.java:compiler from 0.9.13 to 0.9.14 in /modules/lang-mustache (#14672) * Bump com.github.spullara.mustache.java:compiler Bumps [com.github.spullara.mustache.java:compiler](https://github.com/spullara/mustache.java) from 0.9.13 to 0.9.14. - [Commits](https://github.com/spullara/mustache.java/compare/mustache.java-0.9.13...mustache.java-0.9.14) --- updated-dependencies: - dependency-name: com.github.spullara.mustache.java:compiler dependency-type: direct:production update-type: version-update:semver-patch ... 
Signed-off-by: dependabot[bot] * Updating SHAs Signed-off-by: dependabot[bot] * Update changelog Signed-off-by: dependabot[bot] --------- Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: dependabot[bot] Signed-off-by: Kaushal Kumar * Bump net.minidev:accessors-smart from 2.5.0 to 2.5.1 in /plugins/repository-azure (#14673) * Bump net.minidev:accessors-smart in /plugins/repository-azure Bumps [net.minidev:accessors-smart](https://github.com/netplex/json-smart-v2) from 2.5.0 to 2.5.1. - [Release notes](https://github.com/netplex/json-smart-v2/releases) - [Commits](https://github.com/netplex/json-smart-v2/compare/2.5.0...2.5.1) --- updated-dependencies: - dependency-name: net.minidev:accessors-smart dependency-type: direct:production update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] * Updating SHAs Signed-off-by: dependabot[bot] * Update changelog Signed-off-by: dependabot[bot] --------- Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: dependabot[bot] Signed-off-by: Kaushal Kumar * move query group thread context propagator out of ThreadContext Signed-off-by: Kaushal Kumar --------- Signed-off-by: Kaushal Kumar Signed-off-by: dependabot[bot] Signed-off-by: Peter Alfonsi Signed-off-by: vatsal Signed-off-by: Siddhant Deshmukh Signed-off-by: RS146BIJAY Signed-off-by: Sooraj Sinha Signed-off-by: Sachin Kale Signed-off-by: Craig Perkins Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: dependabot[bot] Co-authored-by: Peter Alfonsi Co-authored-by: Peter Alfonsi Co-authored-by: Vatsal <36672090+imvtsl@users.noreply.github.com> Co-authored-by: Siddhant Deshmukh Co-authored-by: Robin Friedmann Co-authored-by: rishavz_sagar Co-authored-by: Sooraj Sinha <81695996+soosinha@users.noreply.github.com> Co-authored-by: Sachin Kale Co-authored-by: Craig Perkins Co-authored-by: Bharathwaj G Signed-off-by: Kaushal Kumar * Add consumers to remote store based index settings (#14764) Signed-off-by: Shourya Dutta Biswas <114977491+shourya035@users.noreply.github.com> Signed-off-by: Kaushal Kumar * Add matchesPluginSystemIndexPattern to SystemIndexRegistry (#14750) * Add matchesPluginSystemIndexPattern to SystemIndexRegistry Signed-off-by: Craig Perkins * Add to CHANGELOG Signed-off-by: Craig Perkins * Use single data structure to keep track of system indices Signed-off-by: Craig Perkins * Address code review comments Signed-off-by: Craig Perkins * Add test for getAllDescriptors Signed-off-by: Craig Perkins * Update server/src/main/java/org/opensearch/indices/SystemIndexRegistry.java Co-authored-by: Andriy Redko Signed-off-by: Craig Perkins --------- Signed-off-by: Craig Perkins Signed-off-by: Craig Perkins Co-authored-by: Andriy Redko Signed-off-by: Kaushal Kumar * SPI for loading ABC templates (#14659) * SPI for loading ABC templates Signed-off-by: mgodwan Signed-off-by: Kaushal Kumar * Fix bulk upsert ignores the default_pipeline and final_pipeline when the auto-created index matches the index template (#12891) * Fix bulk upsert ignores the default_pipeline and final_pipeline when auto-created index matches with the index template Signed-off-by: Gao Binlong * Modify changelog & comment Signed-off-by: Gao Binlong * Use new approach Signed-off-by: Gao Binlong * Fix test failure Signed-off-by: Gao Binlong --------- Signed-off-by: Gao Binlong Signed-off-by: 
Kaushal Kumar * Fix flaky test due to node being used across all tests (#14787) Signed-off-by: Mohit Godwani Signed-off-by: Kaushal Kumar * Star Tree Implementation [OnHeap] (#14512) --------- Signed-off-by: Sarthak Aggarwal Signed-off-by: Kaushal Kumar * Add Gao Binlong as maintainer (#14796) Signed-off-by: Andriy Redko Signed-off-by: Kaushal Kumar * Clear ehcache disk cache files during initialization (#14738) * Clear ehcache disk cache files during initialization Signed-off-by: Sagar Upadhyaya * Adding UT to fix line coverage Signed-off-by: Sagar Upadhyaya * Addressing comment Signed-off-by: Sagar Upadhyaya * Adding more Uts for better line coverage Signed-off-by: Sagar Upadhyaya * Throwing exception in case we fail to clear cache files during startup Signed-off-by: Sagar Upadhyaya * Adding more UTs Signed-off-by: Sagar Upadhyaya * Adding a UT for more coverage Signed-off-by: Sagar Upadhyaya * Fixing gradle build Signed-off-by: Sagar Upadhyaya * Update ehcache disk cache close() logic Signed-off-by: Sagar Upadhyaya --------- Signed-off-by: Sagar Upadhyaya Signed-off-by: Kaushal Kumar * Refactor remote-routing-table service inline with remote state interfaces (#14668) --------- Signed-off-by: Arpit Bandejiya Signed-off-by: Arpit-Bandejiya Signed-off-by: Kaushal Kumar * Set version to 2.15 for determining metadata during migration to remote store Signed-off-by: Sandeep Kumawat Co-authored-by: Sandeep Kumawat Signed-off-by: Kaushal Kumar * Fix bulk upsert ignores the default_pipeline and final_pipeline when the auto-created index matches the index template (#14793) Signed-off-by: Andriy Redko Signed-off-by: Kaushal Kumar * Fix create or update alias API doesn't throw exception for unsupported parameters (#14769) Signed-off-by: Andriy Redko Signed-off-by: Kaushal Kumar * Change RCSS info logs to debug (#14814) Signed-off-by: Shivansh Arora Signed-off-by: Kaushal Kumar * [Bugfix] Fix NPE in ReplicaShardAllocator (#13993) (#14385) * [Bugfix] Fix NPE in ReplicaShardAllocator (#13993) Signed-off-by: Daniil Roman * Add fix info to CHANGELOG.md Signed-off-by: Daniil Roman --------- Signed-off-by: Daniil Roman Signed-off-by: Daniil Roman Signed-off-by: Kaushal Kumar * Run performance benchmark on pull requests (#14760) * add performance benchmark workflow for pull requests Signed-off-by: Rishabh Singh * Update PERFORMANCE_BENCHMARKS.md Co-authored-by: Andriy Redko Signed-off-by: Rishabh Singh * Update PERFORMANCE_BENCHMARKS.md Co-authored-by: Andriy Redko Signed-off-by: Rishabh Singh * Update .github/workflows/benchmark-pull-request.yml Co-authored-by: Andriy Redko Signed-off-by: Rishabh Singh * Update .github/workflows/benchmark-pull-request.yml Co-authored-by: Andriy Redko Signed-off-by: Rishabh Singh * Update .github/workflows/benchmark-pull-request.yml Co-authored-by: Andriy Redko Signed-off-by: Rishabh Singh * Update .github/workflows/benchmark-pull-request.yml Co-authored-by: Andriy Redko Signed-off-by: Rishabh Singh --------- Signed-off-by: Rishabh Singh Signed-off-by: Rishabh Singh Co-authored-by: Andriy Redko Signed-off-by: Kaushal Kumar * fix constant_keyword field type (#14807) Signed-off-by: kkewwei test Signed-off-by: Daniel (dB.) Doubrovkine Co-authored-by: Daniel (dB.) 
Doubrovkine Signed-off-by: Kaushal Kumar * [Remote Store Migration] Reconcile remote store based index settings during STRICT mode switch (#14792) Signed-off-by: Shourya Dutta Biswas <114977491+shourya035@users.noreply.github.com> Signed-off-by: Kaushal Kumar * Add prefix mode verification setting for repository verification (#14790) * Add prefix mode verification setting for repository verification Signed-off-by: Ashish Singh * Add UTs and randomise prefix mode repository verification Signed-off-by: Ashish Singh * Incorporate PR review feedback Signed-off-by: Ashish Singh --------- Signed-off-by: Ashish Singh Signed-off-by: Kaushal Kumar * add length check on comment body for benchmark workflow (#14834) Signed-off-by: Rishabh Singh Signed-off-by: Kaushal Kumar * Add restore-from-snapshot test procedure for snapshot run benchmark config (#14842) Signed-off-by: Rishabh Singh Signed-off-by: Kaushal Kumar * Fix env variable name typo (#14843) Signed-off-by: Rishabh Singh Signed-off-by: Kaushal Kumar * Use circuit breaker in InternalHistogram when adding empty buckets (#14754) * introduce circuit breaker in InternalHistogram Signed-off-by: bowenlan-amzn * use circuit breaker from reduce context Signed-off-by: bowenlan-amzn * add test Signed-off-by: bowenlan-amzn * revert use_real_memory change in OpenSearchNode Signed-off-by: bowenlan-amzn * add change log Signed-off-by: bowenlan-amzn --------- Signed-off-by: bowenlan-amzn Signed-off-by: Kaushal Kumar * [Remote State] Create interface RemoteEntitiesManager (#14671) * Create interface RemoteEntitiesManager Signed-off-by: Shivansh Arora Signed-off-by: Kaushal Kumar * Optimise TransportNodesAction to not send DiscoveryNodes for NodeStat… (#14749) * Optimize TransportNodesAction to not send DiscoveryNodes for NodeStats, NodesInfo and ClusterStats call Signed-off-by: Pranshu Shukla Signed-off-by: Kaushal Kumar * Enabling term version check on local state for all ClusterManager Read Transport Actions (#14273) * enabling term version check on local state for all admin read actions Signed-off-by: Rajiv Kumar Vaidyanathan Signed-off-by: Kaushal Kumar * Reduce logging in DEBUG for MasterService:run (#14795) * Reduce logging in DEBUG for MasterService:run by introducing short and long summary in Taskbatcher Signed-off-by: Sumit Bansal Signed-off-by: Kaushal Kumar * Add SplitResponseProcessor to Search Pipelines (#14800) * Add SplitResponseProcessor for search pipelines Signed-off-by: Daniel Widdis * Register the split processor factory Signed-off-by: Daniel Widdis * Address code review comments Signed-off-by: Daniel Widdis * Avoid list copy by casting array Signed-off-by: Daniel Widdis --------- Signed-off-by: Daniel Widdis Signed-off-by: Kaushal Kumar * Add integration tests for RemoteRoutingTable Service.
(#14631) Signed-off-by: Shailendra Singh Signed-off-by: Kaushal Kumar * Add SortResponseProcessor to Search Pipelines (#14785) * Add SortResponseProcessor for search pipelines Signed-off-by: Daniel Widdis * Add stupid and unnecessary javadocs to satisfy overly strict CI Signed-off-by: Daniel Widdis * Split casting and sorting methods for readability Signed-off-by: Daniel Widdis * Register the sort processor factory Signed-off-by: Daniel Widdis * Address code review comments Signed-off-by: Daniel Widdis * Cast individual list elements to avoid creating two lists Signed-off-by: Daniel Widdis * Add yamlRestTests Signed-off-by: Daniel Widdis * Clarify why there's unusual sorting Signed-off-by: Daniel Widdis * Use instanceof instead of isAssignableFrom Signed-off-by: Daniel Widdis --------- Signed-off-by: Daniel Widdis Signed-off-by: Kaushal Kumar * Fix allowUnmappedFields, mapUnmappedFieldAsString settings to be applied when parsing query string query (#13957) * Modify to invoke QueryShardContext.fieldMapper() method to apply allowUnmappedFields and mapUnmappedFieldAsString settings Signed-off-by: imyp92 * Add test cases to verify returning 400 responses if unmapped fields are included for some types of query Signed-off-by: imyp92 * Add changelog Signed-off-by: imyp92 --------- Signed-off-by: imyp92 Signed-off-by: gaobinlong Co-authored-by: gaobinlong Signed-off-by: Kaushal Kumar * Bump com.microsoft.azure:msal4j from 1.16.0 to 1.16.1 in /plugins/repository-azure (#14857) * Bump com.microsoft.azure:msal4j in /plugins/repository-azure Bumps [com.microsoft.azure:msal4j](https://github.com/AzureAD/microsoft-authentication-library-for-java) from 1.16.0 to 1.16.1. - [Release notes](https://github.com/AzureAD/microsoft-authentication-library-for-java/releases) - [Changelog](https://github.com/AzureAD/microsoft-authentication-library-for-java/blob/dev/changelog.txt) - [Commits](https://github.com/AzureAD/microsoft-authentication-library-for-java/compare/v1.16.0...v1.16.1) --- updated-dependencies: - dependency-name: com.microsoft.azure:msal4j dependency-type: direct:production update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] * Updating SHAs Signed-off-by: dependabot[bot] * Update changelog Signed-off-by: dependabot[bot] --------- Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: dependabot[bot] Signed-off-by: Kaushal Kumar * Bump com.gradle.develocity from 3.17.5 to 3.17.6 (#14856) * Bump com.gradle.develocity from 3.17.5 to 3.17.6 Bumps com.gradle.develocity from 3.17.5 to 3.17.6. --- updated-dependencies: - dependency-name: com.gradle.develocity dependency-type: direct:production update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] * Update changelog Signed-off-by: dependabot[bot] --------- Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: dependabot[bot] Signed-off-by: Kaushal Kumar * Bump org.jline:jline in /test/fixtures/hdfs-fixture (#14859) Bumps [org.jline:jline](https://github.com/jline/jline3) from 3.26.2 to 3.26.3. 
- [Release notes](https://github.com/jline/jline3/releases) - [Changelog](https://github.com/jline/jline3/blob/master/changelog.md) - [Commits](https://github.com/jline/jline3/compare/jline-parent-3.26.2...jline-parent-3.26.3) --- updated-dependencies: - dependency-name: org.jline:jline dependency-type: direct:production update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Signed-off-by: Kaushal Kumar * Use Lucene provided Persian stem (#14847) Lucene provided Persian stem apparently isn't hooked yet and this change is doing that based on what is done for Arabic stem support. Signed-off-by: Ebrahim Byagowi Signed-off-by: Daniel (dB.) Doubrovkine Co-authored-by: Daniel (dB.) Doubrovkine Signed-off-by: Kaushal Kumar * Bump actions/checkout from 2 to 4 (#14858) * Bump actions/checkout from 2 to 4 Bumps [actions/checkout](https://github.com/actions/checkout) from 2 to 4. - [Release notes](https://github.com/actions/checkout/releases) - [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md) - [Commits](https://github.com/actions/checkout/compare/v2...v4) --- updated-dependencies: - dependency-name: actions/checkout dependency-type: direct:production update-type: version-update:semver-major ... Signed-off-by: dependabot[bot] * Update changelog Signed-off-by: dependabot[bot] --------- Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: dependabot[bot] Signed-off-by: Kaushal Kumar * Deprecate batch_size parameter on bulk API (#14725) By default the full _bulk payload will be passed to ingest processors as a batch, with any sub batching logic to be implemented by each processor if necessary. 
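A minimal sketch of what that per-processor sub-batching could look like, assuming only plain Java collections; the SubBatcher helper below is hypothetical and is not the actual ingest Processor API:

import java.util.ArrayList;
import java.util.List;

// Illustrative helper only: sub-batching is now each processor's own concern.
// A processor that prefers smaller batches can slice the full bulk payload it
// receives into fixed-size chunks before doing its work (assumes maxBatchSize > 0).
final class SubBatcher {
    static <T> List<List<T>> partition(List<T> docs, int maxBatchSize) {
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < docs.size(); i += maxBatchSize) {
            batches.add(docs.subList(i, Math.min(i + maxBatchSize, docs.size())));
        }
        return batches;
    }
}

Whether a processor sub-batches at all is now its own decision; the bulk API no longer imposes a batch_size on its behalf.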
Signed-off-by: Liyun Xiu Signed-off-by: Kaushal Kumar * Add perms for remote snapshot cache eviction on scripted query (#14411) Signed-off-by: Finn Carroll Signed-off-by: Kaushal Kumar * add transport interceptor to populate queryGroupId in task headers Signed-off-by: Kaushal Kumar * Add rest, transport layer changes for Hot to warm tiering - dedicated setup (#13980) Signed-off-by: Neetika Singhal Signed-off-by: Kaushal Kumar * Create listener to refresh search thread resource usage (#14832) * [bug fix] fix incorrect coordinator node search resource usages Signed-off-by: Chenyang Ji * fix bug on serialization when passing task resource usage to coordinator Signed-off-by: Chenyang Ji * add more unit tests Signed-off-by: Chenyang Ji * remove query insights plugin related code Signed-off-by: Chenyang Ji * create per request listener to refresh task resource usage Signed-off-by: Chenyang Ji * Make new listener API public Signed-off-by: Siddhant Deshmukh * Add changelog Signed-off-by: Siddhant Deshmukh * Remove wrong files added Signed-off-by: Siddhant Deshmukh * Address review comments Signed-off-by: Siddhant Deshmukh * Build fix Signed-off-by: Siddhant Deshmukh * Make singleton Signed-off-by: Siddhant Deshmukh * Address review comments Signed-off-by: Siddhant Deshmukh * Make sure listener runs before plugin listeners Signed-off-by: Siddhant Deshmukh * Spotless Signed-off-by: Siddhant Deshmukh * Minor fix Signed-off-by: Siddhant Deshmukh --------- Signed-off-by: Chenyang Ji Signed-off-by: Siddhant Deshmukh Signed-off-by: Jay Deng Co-authored-by: Chenyang Ji Co-authored-by: Jay Deng Signed-off-by: Kaushal Kumar * Caching avg total bytes and avg free bytes inside ClusterInfo (#14851) Signed-off-by: RS146BIJAY Signed-off-by: Kaushal Kumar * Use default value when index.number_of_replicas is null (#14812) * Use default value when index.number_of_replicas is null Signed-off-by: Liyun Xiu * Add integration test Signed-off-by: Liyun Xiu * Add changelog Signed-off-by: Liyun Xiu --------- Signed-off-by: Liyun Xiu Signed-off-by: Kaushal Kumar * [Remote Routing Table] Implement write and read flow for shard diff file. (#14684) * Implement write and read flow to upload/download shard diff file. Signed-off-by: Shailendra Singh Signed-off-by: Kaushal Kumar * Optimized ClusterStatsIndices to precompute shard stats (#14426) * Optimize Cluster Stats Indices to precompute node level stats Signed-off-by: Pranshu Shukla Signed-off-by: Kaushal Kumar * Fix constraint bug which allows more primary shards than average primary shards per index (#14908) Signed-off-by: Gaurav Bafna Signed-off-by: Kaushal Kumar * Optimising AwarenessAllocationDecider for hashmap.get call (#14761) Signed-off-by: RS146BIJAY Signed-off-by: Kaushal Kumar * update comment Signed-off-by: Kaushal Kumar * Fix IngestServiceTests.testBulkRequestExecutionWithFailures (#14918) The test would previously fail if the randomness led to only a single indexing request being included in the bulk payload. This change guarantees multiple indexing requests in order to ensure the batch logic kicks in. Also replace some unneeded mocks with real classes.
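A rough sketch of that guarantee as it might appear inside a test method; the bounds are hypothetical and the actual IngestServiceTests code is not reproduced here, but forcing at least two requests means the batch path can never be skipped by an unlucky random draw:

import org.opensearch.action.bulk.BulkRequest;
import org.opensearch.action.index.IndexRequest;

// Hedged sketch: draw the request count with a lower bound of 2 so the batch
// execution logic always runs regardless of the random seed.
// randomIntBetween(min, max) is the usual OpenSearchTestCase helper.
int numRequests = randomIntBetween(2, 32);
BulkRequest bulkRequest = new BulkRequest();
for (int i = 0; i < numRequests; i++) {
    bulkRequest.add(new IndexRequest("index").id(Integer.toString(i)).source("field", i));
}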
Signed-off-by: Andrew Ross Signed-off-by: Kaushal Kumar * add queryGroupTask Signed-off-by: Kaushal Kumar * remove unnecessary imports Signed-off-by: Kaushal Kumar * add QueryGroupTask tests Signed-off-by: Kaushal Kumar * rename WLM transport request handler Signed-off-by: Kaushal Kumar * add CHANGELOG entry Signed-off-by: Kaushal Kumar * fix ut Signed-off-by: Kaushal Kumar * address comments Signed-off-by: Kaushal Kumar * fix UT to remove the verify for final method Signed-off-by: Kaushal Kumar * apply spotless Signed-off-by: Kaushal Kumar --------- Signed-off-by: Kaushal Kumar Signed-off-by: Shivansh Arora Signed-off-by: Gao Binlong Signed-off-by: RS146BIJAY Signed-off-by: Chenyang Ji Signed-off-by: Craig Perkins Signed-off-by: Sandesh Kumar Signed-off-by: Andriy Redko Signed-off-by: Zelin Hao Signed-off-by: Lukáš Vlček Signed-off-by: kkewwei Signed-off-by: Swetha Guptha Signed-off-by: Sooraj Sinha Signed-off-by: ahmedsobeh Signed-off-by: dependabot[bot] Signed-off-by: github-actions[bot] Signed-off-by: Siddhant Deshmukh Signed-off-by: Peter Alfonsi Signed-off-by: vatsal Signed-off-by: Sachin Kale Signed-off-by: Shourya Dutta Biswas <114977491+shourya035@users.noreply.github.com> Signed-off-by: Craig Perkins Signed-off-by: mgodwan Signed-off-by: Mohit Godwani Signed-off-by: Sagar Upadhyaya Signed-off-by: Sandeep Kumawat Signed-off-by: Daniil Roman Signed-off-by: Daniil Roman Signed-off-by: Rishabh Singh Signed-off-by: Rishabh Singh Signed-off-by: Daniel (dB.) Doubrovkine Signed-off-by: Ashish Singh Signed-off-by: bowenlan-amzn Signed-off-by: Pranshu Shukla Signed-off-by: Rajiv Kumar Vaidyanathan Signed-off-by: Sumit Bansal Signed-off-by: Daniel Widdis Signed-off-by: Shailendra Singh Signed-off-by: imyp92 Signed-off-by: gaobinlong Signed-off-by: Ebrahim Byagowi Signed-off-by: Liyun Xiu Signed-off-by: Finn Carroll Signed-off-by: Neetika Singhal Signed-off-by: Jay Deng Signed-off-by: Gaurav Bafna Signed-off-by: Andrew Ross Co-authored-by: Shivansh Arora Co-authored-by: Arpit-Bandejiya Co-authored-by: gaobinlong Co-authored-by: rishavz_sagar Co-authored-by: Chenyang Ji Co-authored-by: Craig Perkins Co-authored-by: Sandesh Kumar Co-authored-by: Andriy Redko Co-authored-by: Zelin Hao Co-authored-by: Lukáš Vlček Co-authored-by: kkewwei Co-authored-by: SwethaGuptha <156877431+SwethaGuptha@users.noreply.github.com> Co-authored-by: Sooraj Sinha <81695996+soosinha@users.noreply.github.com> Co-authored-by: Ahmed Sobeh Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: dependabot[bot] Co-authored-by: opensearch-trigger-bot[bot] <98922864+opensearch-trigger-bot[bot]@users.noreply.github.com> Co-authored-by: github-actions[bot] Co-authored-by: Siddhant Deshmukh Co-authored-by: Peter Alfonsi Co-authored-by: Peter Alfonsi Co-authored-by: Vatsal <36672090+imvtsl@users.noreply.github.com> Co-authored-by: Robin Friedmann Co-authored-by: Sachin Kale Co-authored-by: Bharathwaj G Co-authored-by: Shourya Dutta Biswas <114977491+shourya035@users.noreply.github.com> Co-authored-by: Craig Perkins Co-authored-by: Andriy Redko Co-authored-by: Mohit Godwani <81609427+mgodwan@users.noreply.github.com> Co-authored-by: Sarthak Aggarwal Co-authored-by: Sagar <99425694+sgup432@users.noreply.github.com> Co-authored-by: Sandeep Kumawat <2025sandeepkumawat@gmail.com> Co-authored-by: Sandeep Kumawat Co-authored-by: Daniil Roman Co-authored-by: Rishabh Singh Co-authored-by: kkewwei Co-authored-by: Daniel (dB.) 
Doubrovkine Co-authored-by: Ashish Singh Co-authored-by: bowenlan-amzn Co-authored-by: Pranshu Shukla <55992439+Pranshu-S@users.noreply.github.com> Co-authored-by: rajiv-kv <157019998+rajiv-kv@users.noreply.github.com> Co-authored-by: Sumit Bansal Co-authored-by: Daniel Widdis Co-authored-by: shailendra0811 <167273922+shailendra0811@users.noreply.github.com> Co-authored-by: Park, Yeongwu Co-authored-by: ebraminio Co-authored-by: Liyun Xiu Co-authored-by: Finn Co-authored-by: Neetika Singhal Co-authored-by: Jay Deng Co-authored-by: Gaurav Bafna <85113518+gbbafna@users.noreply.github.com> Co-authored-by: Andrew Ross --- CHANGELOG.md | 1 + .../action/search/SearchShardTask.java | 4 +- .../opensearch/action/search/SearchTask.java | 4 +- .../action/search/TransportSearchAction.java | 7 ++ .../main/java/org/opensearch/node/Node.java | 10 ++- .../org/opensearch/wlm/QueryGroupTask.java | 76 +++++++++++++++++++ ...orkloadManagementTransportInterceptor.java | 64 ++++++++++++++++ .../opensearch/wlm/QueryGroupTaskTests.java | 44 +++++++++++ ...adManagementTransportInterceptorTests.java | 40 ++++++++++ ...anagementTransportRequestHandlerTests.java | 75 ++++++++++++++++++ 10 files changed, 320 insertions(+), 5 deletions(-) create mode 100644 server/src/main/java/org/opensearch/wlm/QueryGroupTask.java create mode 100644 server/src/main/java/org/opensearch/wlm/WorkloadManagementTransportInterceptor.java create mode 100644 server/src/test/java/org/opensearch/wlm/QueryGroupTaskTests.java create mode 100644 server/src/test/java/org/opensearch/wlm/WorkloadManagementTransportInterceptorTests.java create mode 100644 server/src/test/java/org/opensearch/wlm/WorkloadManagementTransportRequestHandlerTests.java diff --git a/CHANGELOG.md b/CHANGELOG.md index a5a3e9c60b664..a5355f010a99f 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -6,6 +6,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), ## [Unreleased 2.x] ### Added - Fix for hasInitiatedFetching to fix allocation explain and manual reroute APIs (([#14972](https://github.com/opensearch-project/OpenSearch/pull/14972)) +- [Workload Management] Add queryGroupId to Task ([14708](https://github.com/opensearch-project/OpenSearch/pull/14708)) - Add basic aggregation support for derived fields ([#14618](https://github.com/opensearch-project/OpenSearch/pull/14618)) ### Dependencies diff --git a/server/src/main/java/org/opensearch/action/search/SearchShardTask.java b/server/src/main/java/org/opensearch/action/search/SearchShardTask.java index dfecf4f462c4d..ed2943db94420 100644 --- a/server/src/main/java/org/opensearch/action/search/SearchShardTask.java +++ b/server/src/main/java/org/opensearch/action/search/SearchShardTask.java @@ -37,8 +37,8 @@ import org.opensearch.core.tasks.TaskId; import org.opensearch.search.fetch.ShardFetchSearchRequest; import org.opensearch.search.internal.ShardSearchRequest; -import org.opensearch.tasks.CancellableTask; import org.opensearch.tasks.SearchBackpressureTask; +import org.opensearch.wlm.QueryGroupTask; import java.util.Map; import java.util.function.Supplier; @@ -50,7 +50,7 @@ * @opensearch.api */ @PublicApi(since = "1.0.0") -public class SearchShardTask extends CancellableTask implements SearchBackpressureTask { +public class SearchShardTask extends QueryGroupTask implements SearchBackpressureTask { // generating metadata in a lazy way since source can be quite big private final MemoizedSupplier metadataSupplier; diff --git 
a/server/src/main/java/org/opensearch/action/search/SearchTask.java b/server/src/main/java/org/opensearch/action/search/SearchTask.java index d3c1043c50cce..2a1a961e7607b 100644 --- a/server/src/main/java/org/opensearch/action/search/SearchTask.java +++ b/server/src/main/java/org/opensearch/action/search/SearchTask.java @@ -35,8 +35,8 @@ import org.opensearch.common.annotation.PublicApi; import org.opensearch.common.unit.TimeValue; import org.opensearch.core.tasks.TaskId; -import org.opensearch.tasks.CancellableTask; import org.opensearch.tasks.SearchBackpressureTask; +import org.opensearch.wlm.QueryGroupTask; import java.util.Map; import java.util.function.Supplier; @@ -49,7 +49,7 @@ * @opensearch.api */ @PublicApi(since = "1.0.0") -public class SearchTask extends CancellableTask implements SearchBackpressureTask { +public class SearchTask extends QueryGroupTask implements SearchBackpressureTask { // generating description in a lazy way since source can be quite big private final Supplier descriptionSupplier; private SearchProgressListener progressListener = SearchProgressListener.NOOP; diff --git a/server/src/main/java/org/opensearch/action/search/TransportSearchAction.java b/server/src/main/java/org/opensearch/action/search/TransportSearchAction.java index 7d3237d43cd5c..88bf7ebea8e52 100644 --- a/server/src/main/java/org/opensearch/action/search/TransportSearchAction.java +++ b/server/src/main/java/org/opensearch/action/search/TransportSearchAction.java @@ -101,6 +101,7 @@ import org.opensearch.transport.RemoteTransportException; import org.opensearch.transport.Transport; import org.opensearch.transport.TransportService; +import org.opensearch.wlm.QueryGroupTask; import java.util.ArrayList; import java.util.Arrays; @@ -442,6 +443,12 @@ private void executeRequest( ); searchRequestContext.getSearchRequestOperationsListener().onRequestStart(searchRequestContext); + // At this point either the QUERY_GROUP_ID header will be present in ThreadContext either via ActionFilter + // or HTTP header (HTTP header will be deprecated once ActionFilter is implemented) + if (task instanceof QueryGroupTask) { + ((QueryGroupTask) task).setQueryGroupId(threadPool.getThreadContext()); + } + PipelinedRequest searchRequest; ActionListener listener; try { diff --git a/server/src/main/java/org/opensearch/node/Node.java b/server/src/main/java/org/opensearch/node/Node.java index 448cb3627651c..8684b1b383cab 100644 --- a/server/src/main/java/org/opensearch/node/Node.java +++ b/server/src/main/java/org/opensearch/node/Node.java @@ -263,6 +263,7 @@ import org.opensearch.transport.TransportService; import org.opensearch.usage.UsageService; import org.opensearch.watcher.ResourceWatcherService; +import org.opensearch.wlm.WorkloadManagementTransportInterceptor; import javax.net.ssl.SNIHostName; @@ -1047,6 +1048,10 @@ protected Node( admissionControlService ); + WorkloadManagementTransportInterceptor workloadManagementTransportInterceptor = new WorkloadManagementTransportInterceptor( + threadPool + ); + final Collection secureSettingsFactories = pluginsService.filterPlugins(Plugin.class) .stream() .map(p -> p.getSecureSettingFactory(settings)) @@ -1054,7 +1059,10 @@ protected Node( .map(Optional::get) .collect(Collectors.toList()); - List transportInterceptors = List.of(admissionControlTransportInterceptor); + List transportInterceptors = List.of( + admissionControlTransportInterceptor, + workloadManagementTransportInterceptor + ); final NetworkModule networkModule = new NetworkModule( settings, 
pluginsService.filterPlugins(NetworkPlugin.class), diff --git a/server/src/main/java/org/opensearch/wlm/QueryGroupTask.java b/server/src/main/java/org/opensearch/wlm/QueryGroupTask.java new file mode 100644 index 0000000000000..4eb413be61b72 --- /dev/null +++ b/server/src/main/java/org/opensearch/wlm/QueryGroupTask.java @@ -0,0 +1,76 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +package org.opensearch.wlm; + +import org.apache.logging.log4j.LogManager; +import org.apache.logging.log4j.Logger; +import org.opensearch.common.unit.TimeValue; +import org.opensearch.common.util.concurrent.ThreadContext; +import org.opensearch.core.tasks.TaskId; +import org.opensearch.tasks.CancellableTask; + +import java.util.Map; +import java.util.Optional; +import java.util.function.Supplier; + +import static org.opensearch.search.SearchService.NO_TIMEOUT; + +/** + * Base class to define QueryGroup tasks + */ +public class QueryGroupTask extends CancellableTask { + + private static final Logger logger = LogManager.getLogger(QueryGroupTask.class); + public static final String QUERY_GROUP_ID_HEADER = "queryGroupId"; + public static final Supplier DEFAULT_QUERY_GROUP_ID_SUPPLIER = () -> "DEFAULT_QUERY_GROUP"; + private String queryGroupId; + + public QueryGroupTask(long id, String type, String action, String description, TaskId parentTaskId, Map headers) { + this(id, type, action, description, parentTaskId, headers, NO_TIMEOUT); + } + + public QueryGroupTask( + long id, + String type, + String action, + String description, + TaskId parentTaskId, + Map headers, + TimeValue cancelAfterTimeInterval + ) { + super(id, type, action, description, parentTaskId, headers, cancelAfterTimeInterval); + } + + /** + * This method should always be called after calling setQueryGroupId at least once on this object + * @return task queryGroupId + */ + public final String getQueryGroupId() { + if (queryGroupId == null) { + logger.warn("QueryGroup _id can't be null, It should be set before accessing it. This is abnormal behaviour "); + } + return queryGroupId; + } + + /** + * sets the queryGroupId from threadContext into the task itself, + * This method was defined since the queryGroupId can only be evaluated after task creation + * @param threadContext current threadContext + */ + public final void setQueryGroupId(final ThreadContext threadContext) { + this.queryGroupId = Optional.ofNullable(threadContext) + .map(threadContext1 -> threadContext1.getHeader(QUERY_GROUP_ID_HEADER)) + .orElse(DEFAULT_QUERY_GROUP_ID_SUPPLIER.get()); + } + + @Override + public boolean shouldCancelChildrenOnCancellation() { + return false; + } +} diff --git a/server/src/main/java/org/opensearch/wlm/WorkloadManagementTransportInterceptor.java b/server/src/main/java/org/opensearch/wlm/WorkloadManagementTransportInterceptor.java new file mode 100644 index 0000000000000..848df8712549a --- /dev/null +++ b/server/src/main/java/org/opensearch/wlm/WorkloadManagementTransportInterceptor.java @@ -0,0 +1,64 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. 
+ */ + +package org.opensearch.wlm; + +import org.opensearch.tasks.Task; +import org.opensearch.threadpool.ThreadPool; +import org.opensearch.transport.TransportChannel; +import org.opensearch.transport.TransportInterceptor; +import org.opensearch.transport.TransportRequest; +import org.opensearch.transport.TransportRequestHandler; + +/** + * This class is used to intercept search traffic requests and populate the queryGroupId header in task headers + */ +public class WorkloadManagementTransportInterceptor implements TransportInterceptor { + private final ThreadPool threadPool; + + public WorkloadManagementTransportInterceptor(ThreadPool threadPool) { + this.threadPool = threadPool; + } + + @Override + public TransportRequestHandler interceptHandler( + String action, + String executor, + boolean forceExecution, + TransportRequestHandler actualHandler + ) { + return new RequestHandler(threadPool, actualHandler); + } + + /** + * This class is mainly used to populate the queryGroupId header + * @param T is Search related request + */ + public static class RequestHandler implements TransportRequestHandler { + + private final ThreadPool threadPool; + TransportRequestHandler actualHandler; + + public RequestHandler(ThreadPool threadPool, TransportRequestHandler actualHandler) { + this.threadPool = threadPool; + this.actualHandler = actualHandler; + } + + @Override + public void messageReceived(T request, TransportChannel channel, Task task) throws Exception { + if (isSearchWorkloadRequest(task)) { + ((QueryGroupTask) task).setQueryGroupId(threadPool.getThreadContext()); + } + actualHandler.messageReceived(request, channel, task); + } + + boolean isSearchWorkloadRequest(Task task) { + return task instanceof QueryGroupTask; + } + } +} diff --git a/server/src/test/java/org/opensearch/wlm/QueryGroupTaskTests.java b/server/src/test/java/org/opensearch/wlm/QueryGroupTaskTests.java new file mode 100644 index 0000000000000..d292809c30124 --- /dev/null +++ b/server/src/test/java/org/opensearch/wlm/QueryGroupTaskTests.java @@ -0,0 +1,44 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. 
+ */ + +package org.opensearch.wlm; + +import org.opensearch.test.OpenSearchTestCase; +import org.opensearch.threadpool.TestThreadPool; +import org.opensearch.threadpool.ThreadPool; + +import java.util.Collections; + +import static org.opensearch.wlm.QueryGroupTask.DEFAULT_QUERY_GROUP_ID_SUPPLIER; +import static org.opensearch.wlm.QueryGroupTask.QUERY_GROUP_ID_HEADER; + +public class QueryGroupTaskTests extends OpenSearchTestCase { + private ThreadPool threadPool; + private QueryGroupTask sut; + + public void setUp() throws Exception { + super.setUp(); + threadPool = new TestThreadPool(getTestName()); + sut = new QueryGroupTask(123, "transport", "Search", "test task", null, Collections.emptyMap()); + } + + public void tearDown() throws Exception { + super.tearDown(); + threadPool.shutdown(); + } + + public void testSuccessfulSetQueryGroupId() { + sut.setQueryGroupId(threadPool.getThreadContext()); + assertEquals(DEFAULT_QUERY_GROUP_ID_SUPPLIER.get(), sut.getQueryGroupId()); + + threadPool.getThreadContext().putHeader(QUERY_GROUP_ID_HEADER, "akfanglkaglknag2332"); + + sut.setQueryGroupId(threadPool.getThreadContext()); + assertEquals("akfanglkaglknag2332", sut.getQueryGroupId()); + } +} diff --git a/server/src/test/java/org/opensearch/wlm/WorkloadManagementTransportInterceptorTests.java b/server/src/test/java/org/opensearch/wlm/WorkloadManagementTransportInterceptorTests.java new file mode 100644 index 0000000000000..db4e5e45d49ed --- /dev/null +++ b/server/src/test/java/org/opensearch/wlm/WorkloadManagementTransportInterceptorTests.java @@ -0,0 +1,40 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +package org.opensearch.wlm; + +import org.opensearch.test.OpenSearchTestCase; +import org.opensearch.threadpool.TestThreadPool; +import org.opensearch.threadpool.ThreadPool; +import org.opensearch.transport.TransportRequest; +import org.opensearch.transport.TransportRequestHandler; +import org.opensearch.wlm.WorkloadManagementTransportInterceptor.RequestHandler; + +import static org.opensearch.threadpool.ThreadPool.Names.SAME; + +public class WorkloadManagementTransportInterceptorTests extends OpenSearchTestCase { + + private ThreadPool threadPool; + private WorkloadManagementTransportInterceptor sut; + + public void setUp() throws Exception { + super.setUp(); + threadPool = new TestThreadPool(getTestName()); + sut = new WorkloadManagementTransportInterceptor(threadPool); + } + + public void tearDown() throws Exception { + super.tearDown(); + threadPool.shutdown(); + } + + public void testInterceptHandler() { + TransportRequestHandler requestHandler = sut.interceptHandler("Search", SAME, false, null); + assertTrue(requestHandler instanceof RequestHandler); + } +} diff --git a/server/src/test/java/org/opensearch/wlm/WorkloadManagementTransportRequestHandlerTests.java b/server/src/test/java/org/opensearch/wlm/WorkloadManagementTransportRequestHandlerTests.java new file mode 100644 index 0000000000000..789c02345e774 --- /dev/null +++ b/server/src/test/java/org/opensearch/wlm/WorkloadManagementTransportRequestHandlerTests.java @@ -0,0 +1,75 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. 
+ */ + +package org.opensearch.wlm; + +import org.opensearch.action.index.IndexRequest; +import org.opensearch.search.internal.ShardSearchRequest; +import org.opensearch.tasks.Task; +import org.opensearch.test.OpenSearchTestCase; +import org.opensearch.threadpool.TestThreadPool; +import org.opensearch.threadpool.ThreadPool; +import org.opensearch.transport.TransportChannel; +import org.opensearch.transport.TransportRequest; +import org.opensearch.transport.TransportRequestHandler; + +import java.util.Collections; + +import static org.mockito.Mockito.mock; +import static org.mockito.Mockito.spy; + +public class WorkloadManagementTransportRequestHandlerTests extends OpenSearchTestCase { + private WorkloadManagementTransportInterceptor.RequestHandler sut; + private ThreadPool threadPool; + + private TestTransportRequestHandler actualHandler; + + public void setUp() throws Exception { + super.setUp(); + threadPool = new TestThreadPool(getTestName()); + actualHandler = new TestTransportRequestHandler<>(); + + sut = new WorkloadManagementTransportInterceptor.RequestHandler<>(threadPool, actualHandler); + } + + public void tearDown() throws Exception { + super.tearDown(); + threadPool.shutdown(); + } + + public void testMessageReceivedForSearchWorkload() throws Exception { + ShardSearchRequest request = mock(ShardSearchRequest.class); + QueryGroupTask spyTask = getSpyTask(); + + sut.messageReceived(request, mock(TransportChannel.class), spyTask); + assertTrue(sut.isSearchWorkloadRequest(spyTask)); + } + + public void testMessageReceivedForNonSearchWorkload() throws Exception { + IndexRequest indexRequest = mock(IndexRequest.class); + Task task = mock(Task.class); + sut.messageReceived(indexRequest, mock(TransportChannel.class), task); + assertFalse(sut.isSearchWorkloadRequest(task)); + assertEquals(1, actualHandler.invokeCount); + } + + private static QueryGroupTask getSpyTask() { + final QueryGroupTask task = new QueryGroupTask(123, "transport", "Search", "test task", null, Collections.emptyMap()); + + return spy(task); + } + + private static class TestTransportRequestHandler implements TransportRequestHandler { + int invokeCount = 0; + + @Override + public void messageReceived(TransportRequest request, TransportChannel channel, Task task) throws Exception { + invokeCount += 1; + } + }; +} From 5c19809ec05d0a2cf03a5105c5333303bc21cb0d Mon Sep 17 00:00:00 2001 From: Gaurav Bafna <85113518+gbbafna@users.noreply.github.com> Date: Wed, 31 Jul 2024 09:50:18 +0530 Subject: [PATCH 138/167] Add setting to ignore throttling nodes for allocation of unassigned remote primaries (#14991) Signed-off-by: Gaurav Bafna --- CHANGELOG.md | 2 + .../allocator/BalancedShardsAllocator.java | 23 ++- .../allocator/LocalShardsBalancer.java | 17 +- .../common/settings/ClusterSettings.java | 1 + .../allocation/BalancedSingleShardTests.java | 15 -- .../DecideAllocateUnassignedTests.java | 154 ++++++++++++++++++ .../cluster/OpenSearchAllocationTestCase.java | 15 ++ .../cluster/routing/TestShardRouting.java | 26 +++ 8 files changed, 233 insertions(+), 20 deletions(-) create mode 100644 server/src/test/java/org/opensearch/cluster/routing/allocation/DecideAllocateUnassignedTests.java diff --git a/CHANGELOG.md b/CHANGELOG.md index a5355f010a99f..9689e391c6df3 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -7,6 +7,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), ### Added - Fix for hasInitiatedFetching to fix allocation explain and manual reroute APIs 
(([#14972](https://github.com/opensearch-project/OpenSearch/pull/14972)) - [Workload Management] Add queryGroupId to Task ([14708](https://github.com/opensearch-project/OpenSearch/pull/14708)) +- Add setting to ignore throttling nodes for allocation of unassigned primaries in remote restore ([#14991](https://github.com/opensearch-project/OpenSearch/pull/14991)) - Add basic aggregation support for derived fields ([#14618](https://github.com/opensearch-project/OpenSearch/pull/14618)) ### Dependencies @@ -23,6 +24,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), ### Removed ### Fixed +- Fix constraint bug which allows more primary shards than average primary shards per index ([#14908](https://github.com/opensearch-project/OpenSearch/pull/14908)) - Fix missing value of FieldSort for unsigned_long ([#14963](https://github.com/opensearch-project/OpenSearch/pull/14963)) ### Security diff --git a/server/src/main/java/org/opensearch/cluster/routing/allocation/allocator/BalancedShardsAllocator.java b/server/src/main/java/org/opensearch/cluster/routing/allocation/allocator/BalancedShardsAllocator.java index b2443490dd973..ae173bbf06c4f 100644 --- a/server/src/main/java/org/opensearch/cluster/routing/allocation/allocator/BalancedShardsAllocator.java +++ b/server/src/main/java/org/opensearch/cluster/routing/allocation/allocator/BalancedShardsAllocator.java @@ -154,6 +154,13 @@ public class BalancedShardsAllocator implements ShardsAllocator { Property.NodeScope ); + public static final Setting IGNORE_THROTTLE_FOR_REMOTE_RESTORE = Setting.boolSetting( + "cluster.routing.allocation.remote_primary.ignore_throttle", + true, + Property.Dynamic, + Property.NodeScope + ); + public static final Setting PRIMARY_SHARD_REBALANCE_BUFFER = Setting.floatSetting( "cluster.routing.allocation.rebalance.primary.buffer", 0.10f, @@ -173,6 +180,8 @@ public class BalancedShardsAllocator implements ShardsAllocator { private volatile WeightFunction weightFunction; private volatile float threshold; + private volatile boolean ignoreThrottleInRestore; + public BalancedShardsAllocator(Settings settings) { this(settings, new ClusterSettings(settings, ClusterSettings.BUILT_IN_CLUSTER_SETTINGS)); } @@ -182,6 +191,7 @@ public BalancedShardsAllocator(Settings settings, ClusterSettings clusterSetting setShardBalanceFactor(SHARD_BALANCE_FACTOR_SETTING.get(settings)); setIndexBalanceFactor(INDEX_BALANCE_FACTOR_SETTING.get(settings)); setPreferPrimaryShardRebalanceBuffer(PRIMARY_SHARD_REBALANCE_BUFFER.get(settings)); + setIgnoreThrottleInRestore(IGNORE_THROTTLE_FOR_REMOTE_RESTORE.get(settings)); updateWeightFunction(); setThreshold(THRESHOLD_SETTING.get(settings)); setPreferPrimaryShardBalance(PREFER_PRIMARY_SHARD_BALANCE.get(settings)); @@ -195,6 +205,7 @@ public BalancedShardsAllocator(Settings settings, ClusterSettings clusterSetting clusterSettings.addSettingsUpdateConsumer(PRIMARY_SHARD_REBALANCE_BUFFER, this::updatePreferPrimaryShardBalanceBuffer); clusterSettings.addSettingsUpdateConsumer(PREFER_PRIMARY_SHARD_REBALANCE, this::setPreferPrimaryShardRebalance); clusterSettings.addSettingsUpdateConsumer(THRESHOLD_SETTING, this::setThreshold); + clusterSettings.addSettingsUpdateConsumer(IGNORE_THROTTLE_FOR_REMOTE_RESTORE, this::setIgnoreThrottleInRestore); } /** @@ -205,6 +216,10 @@ private void setMovePrimaryFirst(boolean movePrimaryFirst) { setShardMovementStrategy(this.shardMovementStrategy); } + private void setIgnoreThrottleInRestore(boolean 
ignoreThrottleInRestore) { + this.ignoreThrottleInRestore = ignoreThrottleInRestore; + } + /** * Sets the correct Shard movement strategy to use. * If users are still using deprecated setting `move_primary_first`, we want behavior to remain unchanged. @@ -282,7 +297,8 @@ public void allocate(RoutingAllocation allocation) { weightFunction, threshold, preferPrimaryShardBalance, - preferPrimaryShardRebalance + preferPrimaryShardRebalance, + ignoreThrottleInRestore ); localShardsBalancer.allocateUnassigned(); localShardsBalancer.moveShards(); @@ -304,7 +320,8 @@ public ShardAllocationDecision decideShardAllocation(final ShardRouting shard, f weightFunction, threshold, preferPrimaryShardBalance, - preferPrimaryShardRebalance + preferPrimaryShardRebalance, + ignoreThrottleInRestore ); AllocateUnassignedDecision allocateUnassignedDecision = AllocateUnassignedDecision.NOT_TAKEN; MoveDecision moveDecision = MoveDecision.NOT_TAKEN; @@ -558,7 +575,7 @@ public Balancer( float threshold, boolean preferPrimaryBalance ) { - super(logger, allocation, shardMovementStrategy, weight, threshold, preferPrimaryBalance, false); + super(logger, allocation, shardMovementStrategy, weight, threshold, preferPrimaryBalance, false, false); } } diff --git a/server/src/main/java/org/opensearch/cluster/routing/allocation/allocator/LocalShardsBalancer.java b/server/src/main/java/org/opensearch/cluster/routing/allocation/allocator/LocalShardsBalancer.java index 00eb79add9f1d..7e4ae58548c55 100644 --- a/server/src/main/java/org/opensearch/cluster/routing/allocation/allocator/LocalShardsBalancer.java +++ b/server/src/main/java/org/opensearch/cluster/routing/allocation/allocator/LocalShardsBalancer.java @@ -13,6 +13,7 @@ import org.apache.lucene.util.IntroSorter; import org.opensearch.cluster.metadata.IndexMetadata; import org.opensearch.cluster.metadata.Metadata; +import org.opensearch.cluster.routing.RecoverySource; import org.opensearch.cluster.routing.RoutingNode; import org.opensearch.cluster.routing.RoutingNodes; import org.opensearch.cluster.routing.RoutingPool; @@ -60,6 +61,8 @@ public class LocalShardsBalancer extends ShardsBalancer { private final boolean preferPrimaryBalance; private final boolean preferPrimaryRebalance; + + private final boolean ignoreThrottleInRestore; private final BalancedShardsAllocator.WeightFunction weight; private final float threshold; @@ -77,7 +80,8 @@ public LocalShardsBalancer( BalancedShardsAllocator.WeightFunction weight, float threshold, boolean preferPrimaryBalance, - boolean preferPrimaryRebalance + boolean preferPrimaryRebalance, + boolean ignoreThrottleInRestore ) { this.logger = logger; this.allocation = allocation; @@ -94,6 +98,7 @@ public LocalShardsBalancer( this.preferPrimaryBalance = preferPrimaryBalance; this.preferPrimaryRebalance = preferPrimaryRebalance; this.shardMovementStrategy = shardMovementStrategy; + this.ignoreThrottleInRestore = ignoreThrottleInRestore; } /** @@ -918,7 +923,15 @@ AllocateUnassignedDecision decideAllocateUnassigned(final ShardRouting shard) { nodeExplanationMap.put(node.getNodeId(), new NodeAllocationResult(node.getRoutingNode().node(), currentDecision, 0)); nodeWeights.add(Tuple.tuple(node.getNodeId(), currentWeight)); } - if (currentDecision.type() == Decision.Type.YES || currentDecision.type() == Decision.Type.THROTTLE) { + + // For REMOTE_STORE recoveries, THROTTLE is as good as NO as we want faster recoveries + // The side effect of this are increased relocations post these allocations. 
+ boolean considerThrottleAsNo = ignoreThrottleInRestore + && shard.recoverySource().getType() == RecoverySource.Type.REMOTE_STORE + && shard.primary(); + + if (currentDecision.type() == Decision.Type.YES + || (currentDecision.type() == Decision.Type.THROTTLE && considerThrottleAsNo == false)) { final boolean updateMinNode; if (currentWeight == minWeight) { /* we have an equal weight tie breaking: diff --git a/server/src/main/java/org/opensearch/common/settings/ClusterSettings.java b/server/src/main/java/org/opensearch/common/settings/ClusterSettings.java index 2f60c731bc554..a73e5d44b7e02 100644 --- a/server/src/main/java/org/opensearch/common/settings/ClusterSettings.java +++ b/server/src/main/java/org/opensearch/common/settings/ClusterSettings.java @@ -268,6 +268,7 @@ public void apply(Settings value, Settings current, Settings previous) { BalancedShardsAllocator.SHARD_MOVE_PRIMARY_FIRST_SETTING, BalancedShardsAllocator.SHARD_MOVEMENT_STRATEGY_SETTING, BalancedShardsAllocator.THRESHOLD_SETTING, + BalancedShardsAllocator.IGNORE_THROTTLE_FOR_REMOTE_RESTORE, BreakerSettings.CIRCUIT_BREAKER_LIMIT_SETTING, BreakerSettings.CIRCUIT_BREAKER_OVERHEAD_SETTING, BreakerSettings.CIRCUIT_BREAKER_TYPE, diff --git a/server/src/test/java/org/opensearch/cluster/routing/allocation/BalancedSingleShardTests.java b/server/src/test/java/org/opensearch/cluster/routing/allocation/BalancedSingleShardTests.java index d29249cef0818..11a43019f648e 100644 --- a/server/src/test/java/org/opensearch/cluster/routing/allocation/BalancedSingleShardTests.java +++ b/server/src/test/java/org/opensearch/cluster/routing/allocation/BalancedSingleShardTests.java @@ -33,7 +33,6 @@ package org.opensearch.cluster.routing.allocation; import org.opensearch.action.support.replication.ClusterStateCreationUtils; -import org.opensearch.cluster.ClusterInfo; import org.opensearch.cluster.ClusterState; import org.opensearch.cluster.OpenSearchAllocationTestCase; import org.opensearch.cluster.node.DiscoveryNode; @@ -50,7 +49,6 @@ import org.opensearch.cluster.routing.allocation.decider.Decision.Type; import org.opensearch.common.collect.Tuple; import org.opensearch.common.settings.Settings; -import org.opensearch.snapshots.SnapshotShardSizeInfo; import java.util.Arrays; import java.util.Collections; @@ -398,19 +396,6 @@ public Decision canRebalance(ShardRouting shardRouting, RoutingAllocation alloca return Tuple.tuple(clusterState, rebalanceDecision); } - private RoutingAllocation newRoutingAllocation(AllocationDeciders deciders, ClusterState state) { - RoutingAllocation allocation = new RoutingAllocation( - deciders, - new RoutingNodes(state, false), - state, - ClusterInfo.EMPTY, - SnapshotShardSizeInfo.EMPTY, - System.nanoTime() - ); - allocation.debugDecision(true); - return allocation; - } - private void assertAssignedNodeRemainsSame( BalancedShardsAllocator allocator, RoutingAllocation routingAllocation, diff --git a/server/src/test/java/org/opensearch/cluster/routing/allocation/DecideAllocateUnassignedTests.java b/server/src/test/java/org/opensearch/cluster/routing/allocation/DecideAllocateUnassignedTests.java new file mode 100644 index 0000000000000..6df2ffc6149d5 --- /dev/null +++ b/server/src/test/java/org/opensearch/cluster/routing/allocation/DecideAllocateUnassignedTests.java @@ -0,0 +1,154 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. 
+ */ + +package org.opensearch.cluster.routing.allocation; + +import org.opensearch.Version; +import org.opensearch.action.support.replication.ClusterStateCreationUtils; +import org.opensearch.cluster.ClusterState; +import org.opensearch.cluster.OpenSearchAllocationTestCase; +import org.opensearch.cluster.metadata.IndexMetadata; +import org.opensearch.cluster.metadata.Metadata; +import org.opensearch.cluster.node.DiscoveryNode; +import org.opensearch.cluster.node.DiscoveryNodes; +import org.opensearch.cluster.routing.AllocationId; +import org.opensearch.cluster.routing.IndexRoutingTable; +import org.opensearch.cluster.routing.IndexShardRoutingTable; +import org.opensearch.cluster.routing.RoutingNode; +import org.opensearch.cluster.routing.RoutingTable; +import org.opensearch.cluster.routing.ShardRouting; +import org.opensearch.cluster.routing.ShardRoutingState; +import org.opensearch.cluster.routing.TestShardRouting; +import org.opensearch.cluster.routing.UnassignedInfo; +import org.opensearch.cluster.routing.allocation.allocator.BalancedShardsAllocator; +import org.opensearch.cluster.routing.allocation.decider.AllocationDecider; +import org.opensearch.cluster.routing.allocation.decider.AllocationDeciders; +import org.opensearch.cluster.routing.allocation.decider.Decision; +import org.opensearch.common.settings.Settings; +import org.opensearch.core.index.shard.ShardId; + +import java.util.Arrays; +import java.util.HashSet; +import java.util.List; +import java.util.Set; +import java.util.stream.Collectors; + +import static org.opensearch.cluster.metadata.IndexMetadata.SETTING_CREATION_DATE; +import static org.opensearch.cluster.metadata.IndexMetadata.SETTING_NUMBER_OF_REPLICAS; +import static org.opensearch.cluster.metadata.IndexMetadata.SETTING_NUMBER_OF_SHARDS; +import static org.opensearch.cluster.metadata.IndexMetadata.SETTING_VERSION_CREATED; +import static org.opensearch.cluster.routing.allocation.allocator.BalancedShardsAllocator.IGNORE_THROTTLE_FOR_REMOTE_RESTORE; + +public class DecideAllocateUnassignedTests extends OpenSearchAllocationTestCase { + public void testAllocateUnassignedRemoteRestore_IgnoreThrottle() { + final String[] indices = { "idx1" }; + // Create a cluster state with 1 indices, each with 1 started primary shard, and only + // one node initially so that all primary shards get allocated to the same node. 
+ // + // When we add 1 more 1 index with 1 started primary shard and 1 more node , if the new node throttles the recovery + // shard should get assigned on the older node if IgnoreThrottle is set to true + ClusterState clusterState = ClusterStateCreationUtils.state(1, indices, 1); + clusterState = addNodesToClusterState(clusterState, 1); + clusterState = addRestoringIndexToClusterState(clusterState, "idx2"); + List allocationDeciders = getAllocationDecidersThrottleOnNode1(); + RoutingAllocation routingAllocation = newRoutingAllocation(new AllocationDeciders(allocationDeciders), clusterState); + // allocate and get the node that is now relocating + Settings build = Settings.builder().put(IGNORE_THROTTLE_FOR_REMOTE_RESTORE.getKey(), true).build(); + BalancedShardsAllocator allocator = new BalancedShardsAllocator(build); + allocator.allocate(routingAllocation); + assertEquals(routingAllocation.routingNodes().shardsWithState(ShardRoutingState.INITIALIZING).get(0).currentNodeId(), "node_0"); + assertEquals(routingAllocation.routingNodes().shardsWithState(ShardRoutingState.INITIALIZING).get(0).getIndexName(), "idx2"); + assertFalse(routingAllocation.routingNodes().hasUnassignedPrimaries()); + } + + public void testAllocateUnassignedRemoteRestore() { + final String[] indices = { "idx1" }; + // Create a cluster state with 1 indices, each with 1 started primary shard, and only + // one node initially so that all primary shards get allocated to the same node. + // + // When we add 1 more 1 index with 1 started primary shard and 1 more node , if the new node throttles the recovery + // shard should remain unassigned if IgnoreThrottle is set to false + ClusterState clusterState = ClusterStateCreationUtils.state(1, indices, 1); + clusterState = addNodesToClusterState(clusterState, 1); + clusterState = addRestoringIndexToClusterState(clusterState, "idx2"); + List allocationDeciders = getAllocationDecidersThrottleOnNode1(); + RoutingAllocation routingAllocation = newRoutingAllocation(new AllocationDeciders(allocationDeciders), clusterState); + // allocate and get the node that is now relocating + Settings build = Settings.builder().put(IGNORE_THROTTLE_FOR_REMOTE_RESTORE.getKey(), false).build(); + BalancedShardsAllocator allocator = new BalancedShardsAllocator(build); + allocator.allocate(routingAllocation); + assertEquals(routingAllocation.routingNodes().shardsWithState(ShardRoutingState.INITIALIZING).size(), 0); + assertTrue(routingAllocation.routingNodes().hasUnassignedPrimaries()); + } + + private static List getAllocationDecidersThrottleOnNode1() { + // Allocation Deciders to throttle on `node_1` + final Set throttleNodes = new HashSet<>(); + throttleNodes.add("node_1"); + AllocationDecider allocationDecider = new AllocationDecider() { + @Override + public Decision canAllocate(ShardRouting shardRouting, RoutingNode node, RoutingAllocation allocation) { + if (throttleNodes.contains(node.nodeId())) { + return Decision.THROTTLE; + } + return Decision.YES; + } + }; + List allocationDeciders = Arrays.asList(allocationDecider); + return allocationDeciders; + } + + private ClusterState addNodesToClusterState(ClusterState clusterState, int nodeId) { + DiscoveryNodes.Builder nodesBuilder = DiscoveryNodes.builder(clusterState.nodes()); + DiscoveryNode discoveryNode = newNode("node_" + nodeId); + nodesBuilder.add(discoveryNode); + return ClusterState.builder(clusterState).nodes(nodesBuilder).build(); + } + + private ClusterState addRestoringIndexToClusterState(ClusterState clusterState, String index) { + 
final int primaryTerm = 1 + randomInt(200); + final ShardId shardId = new ShardId(index, "_na_", 0); + + IndexMetadata indexMetadata = IndexMetadata.builder(index) + .settings( + Settings.builder() + .put(SETTING_VERSION_CREATED, Version.CURRENT) + .put(SETTING_NUMBER_OF_SHARDS, 1) + .put(SETTING_NUMBER_OF_REPLICAS, 0) + .put(SETTING_CREATION_DATE, System.currentTimeMillis()) + ) + .primaryTerm(0, primaryTerm) + .build(); + + IndexShardRoutingTable.Builder indexShardRoutingBuilder = new IndexShardRoutingTable.Builder(shardId); + UnassignedInfo unassignedInfo = new UnassignedInfo(UnassignedInfo.Reason.EXISTING_INDEX_RESTORED, null); + indexShardRoutingBuilder.addShard( + TestShardRouting.newShardRoutingRemoteRestore(index, shardId, null, null, true, ShardRoutingState.UNASSIGNED, unassignedInfo) + ); + final IndexShardRoutingTable indexShardRoutingTable = indexShardRoutingBuilder.build(); + + IndexMetadata.Builder indexMetadataBuilder = new IndexMetadata.Builder(indexMetadata); + indexMetadataBuilder.putInSyncAllocationIds( + 0, + indexShardRoutingTable.activeShards() + .stream() + .map(ShardRouting::allocationId) + .map(AllocationId::getId) + .collect(Collectors.toSet()) + ); + ClusterState.Builder state = ClusterState.builder(clusterState); + state.metadata(Metadata.builder(clusterState.metadata()).put(indexMetadataBuilder.build(), false).generateClusterUuidIfNeeded()); + state.routingTable( + RoutingTable.builder(clusterState.routingTable()) + .add(IndexRoutingTable.builder(indexMetadata.getIndex()).addIndexShard(indexShardRoutingTable)) + .build() + ); + return state.build(); + } + +} diff --git a/test/framework/src/main/java/org/opensearch/cluster/OpenSearchAllocationTestCase.java b/test/framework/src/main/java/org/opensearch/cluster/OpenSearchAllocationTestCase.java index f6113860e3907..34b8c58a9c5b2 100644 --- a/test/framework/src/main/java/org/opensearch/cluster/OpenSearchAllocationTestCase.java +++ b/test/framework/src/main/java/org/opensearch/cluster/OpenSearchAllocationTestCase.java @@ -37,6 +37,7 @@ import org.opensearch.cluster.node.DiscoveryNodeRole; import org.opensearch.cluster.routing.RecoverySource; import org.opensearch.cluster.routing.RoutingNode; +import org.opensearch.cluster.routing.RoutingNodes; import org.opensearch.cluster.routing.ShardRouting; import org.opensearch.cluster.routing.UnassignedInfo; import org.opensearch.cluster.routing.allocation.AllocationService; @@ -287,6 +288,19 @@ public static ClusterState startShardsAndReroute( return allocationService.reroute(allocationService.applyStartedShards(clusterState, initializingShards), "reroute after starting"); } + protected RoutingAllocation newRoutingAllocation(AllocationDeciders deciders, ClusterState state) { + RoutingAllocation allocation = new RoutingAllocation( + deciders, + new RoutingNodes(state, false), + state, + ClusterInfo.EMPTY, + SnapshotShardSizeInfo.EMPTY, + System.nanoTime() + ); + allocation.debugDecision(true); + return allocation; + } + public static class TestAllocateDecision extends AllocationDecider { private final Decision decision; @@ -465,5 +479,6 @@ public void allocateUnassigned( unassignedAllocationHandler.removeAndIgnore(UnassignedInfo.AllocationStatus.DELAYED_ALLOCATION, allocation.changes()); } } + } } diff --git a/test/framework/src/main/java/org/opensearch/cluster/routing/TestShardRouting.java b/test/framework/src/main/java/org/opensearch/cluster/routing/TestShardRouting.java index f67108345550f..c7c71f0f569e5 100644 --- 
a/test/framework/src/main/java/org/opensearch/cluster/routing/TestShardRouting.java +++ b/test/framework/src/main/java/org/opensearch/cluster/routing/TestShardRouting.java @@ -205,6 +205,32 @@ public static ShardRouting newShardRouting( ); } + public static ShardRouting newShardRoutingRemoteRestore( + String index, + ShardId shardId, + String currentNodeId, + String relocatingNodeId, + boolean primary, + ShardRoutingState state, + UnassignedInfo unassignedInfo + ) { + return new ShardRouting( + shardId, + currentNodeId, + relocatingNodeId, + primary, + state, + new RecoverySource.RemoteStoreRecoverySource( + UUIDs.randomBase64UUID(), + Version.V_EMPTY, + new IndexId(shardId.getIndexName(), shardId.getIndexName()) + ), + unassignedInfo, + buildAllocationId(state), + -1 + ); + } + public static ShardRouting newShardRouting( ShardId shardId, String currentNodeId, From 597747dcbf7c14513dd07887048976620164f4e0 Mon Sep 17 00:00:00 2001 From: Craig Perkins Date: Wed, 31 Jul 2024 07:56:11 -0400 Subject: [PATCH 139/167] Add ThreadContextPermission for markAsSystemContext and allow core to perform the method (#15016) * Add RuntimePermission for markAsSystemContext and allow core to perform the method Signed-off-by: Craig Perkins * private Signed-off-by: Craig Perkins * Surround with doPrivileged Signed-off-by: Craig Perkins * Create ThreadContextAccess Signed-off-by: Craig Perkins * Create notion of ThreadContextPermission Signed-off-by: Craig Perkins * Add to CHANGELOG Signed-off-by: Craig Perkins * Add javadoc Signed-off-by: Craig Perkins * Add to test-framework.policy file Signed-off-by: Craig Perkins * Mark as internal Signed-off-by: Craig Perkins --------- Signed-off-by: Craig Perkins --- CHANGELOG.md | 1 + .../secure_sm/ThreadContextPermission.java | 40 ++++++++++++++++++ .../service/ClusterApplierService.java | 3 +- .../cluster/service/MasterService.java | 3 +- .../common/util/concurrent/ThreadContext.java | 17 ++++++++ .../util/concurrent/ThreadContextAccess.java | 41 +++++++++++++++++++ .../seqno/GlobalCheckpointSyncAction.java | 3 +- .../RetentionLeaseBackgroundSyncAction.java | 3 +- .../index/seqno/RetentionLeaseSyncAction.java | 3 +- .../checkpoint/PublishCheckpointAction.java | 3 +- .../transport/RemoteClusterConnection.java | 3 +- .../transport/SniffConnectionStrategy.java | 3 +- .../org/opensearch/bootstrap/security.policy | 1 + .../bootstrap/test-framework.policy | 1 + .../metadata/TemplateUpgradeServiceTests.java | 3 +- .../util/concurrent/ThreadContextTests.java | 8 ++-- ...ContextBasedTracerContextStorageTests.java | 3 +- .../org/opensearch/bootstrap/test.policy | 2 +- .../FakeThreadPoolClusterManagerService.java | 3 +- 19 files changed, 128 insertions(+), 16 deletions(-) create mode 100644 libs/secure-sm/src/main/java/org/opensearch/secure_sm/ThreadContextPermission.java create mode 100644 server/src/main/java/org/opensearch/common/util/concurrent/ThreadContextAccess.java diff --git a/CHANGELOG.md b/CHANGELOG.md index 9689e391c6df3..7b49298192800 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -9,6 +9,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), - [Workload Management] Add queryGroupId to Task ([14708](https://github.com/opensearch-project/OpenSearch/pull/14708)) - Add setting to ignore throttling nodes for allocation of unassigned primaries in remote restore ([#14991](https://github.com/opensearch-project/OpenSearch/pull/14991)) - Add basic aggregation support for derived fields 
([#14618](https://github.com/opensearch-project/OpenSearch/pull/14618))
+- Add ThreadContextPermission for markAsSystemContext and allow core to perform the method ([#15016](https://github.com/opensearch-project/OpenSearch/pull/15016))
 
 ### Dependencies
 - Bump `org.apache.commons:commons-lang3` from 3.14.0 to 3.15.0 ([#14861](https://github.com/opensearch-project/OpenSearch/pull/14861))
diff --git a/libs/secure-sm/src/main/java/org/opensearch/secure_sm/ThreadContextPermission.java b/libs/secure-sm/src/main/java/org/opensearch/secure_sm/ThreadContextPermission.java
new file mode 100644
index 0000000000000..2f33eb513c165
--- /dev/null
+++ b/libs/secure-sm/src/main/java/org/opensearch/secure_sm/ThreadContextPermission.java
@@ -0,0 +1,40 @@
+/*
+ * SPDX-License-Identifier: Apache-2.0
+ *
+ * The OpenSearch Contributors require contributions made to
+ * this file be licensed under the Apache-2.0 license or a
+ * compatible open source license.
+ */
+
+package org.opensearch.secure_sm;
+
+import java.security.BasicPermission;
+
+/**
+ * Permission to utilize methods in the ThreadContext class that are normally not accessible
+ *
+ * @see ThreadGroup
+ * @see SecureSM
+ */
+public final class ThreadContextPermission extends BasicPermission {
+
+    /**
+     * Creates a new ThreadContextPermission object.
+     *
+     * @param name target name
+     */
+    public ThreadContextPermission(String name) {
+        super(name);
+    }
+
+    /**
+     * Creates a new ThreadContextPermission object.
+     * This constructor exists for use by the {@code Policy} object to instantiate new Permission objects.
+     *
+     * @param name target name
+     * @param actions ignored
+     */
+    public ThreadContextPermission(String name, String actions) {
+        super(name, actions);
+    }
+}
diff --git a/server/src/main/java/org/opensearch/cluster/service/ClusterApplierService.java b/server/src/main/java/org/opensearch/cluster/service/ClusterApplierService.java
index 6234427445754..b2548a8976c73 100644
--- a/server/src/main/java/org/opensearch/cluster/service/ClusterApplierService.java
+++ b/server/src/main/java/org/opensearch/cluster/service/ClusterApplierService.java
@@ -61,6 +61,7 @@
 import org.opensearch.common.util.concurrent.OpenSearchExecutors;
 import org.opensearch.common.util.concurrent.PrioritizedOpenSearchThreadPoolExecutor;
 import org.opensearch.common.util.concurrent.ThreadContext;
+import org.opensearch.common.util.concurrent.ThreadContextAccess;
 import org.opensearch.core.concurrency.OpenSearchRejectedExecutionException;
 import org.opensearch.telemetry.metrics.noop.NoopMetricsRegistry;
 import org.opensearch.telemetry.metrics.tags.Tags;
@@ -396,7 +397,7 @@ private void submitStateUpdateTask(
         final ThreadContext threadContext = threadPool.getThreadContext();
         final Supplier<ThreadContext.StoredContext> supplier = threadContext.newRestorableContext(true);
         try (ThreadContext.StoredContext ignore = threadContext.stashContext()) {
-            threadContext.markAsSystemContext();
+            ThreadContextAccess.doPrivilegedVoid(threadContext::markAsSystemContext);
             final UpdateTask updateTask = new UpdateTask(
                 config.priority(),
                 source,
diff --git a/server/src/main/java/org/opensearch/cluster/service/MasterService.java b/server/src/main/java/org/opensearch/cluster/service/MasterService.java
index 4ab8255df7658..713de8cdd0fda 100644
--- a/server/src/main/java/org/opensearch/cluster/service/MasterService.java
+++ b/server/src/main/java/org/opensearch/cluster/service/MasterService.java
@@ -66,6 +66,7 @@
 import org.opensearch.common.util.concurrent.OpenSearchExecutors;
 import org.opensearch.common.util.concurrent.PrioritizedOpenSearchThreadPoolExecutor;
 import org.opensearch.common.util.concurrent.ThreadContext;
+import org.opensearch.common.util.concurrent.ThreadContextAccess;
 import org.opensearch.core.Assertions;
 import org.opensearch.core.common.text.Text;
 import org.opensearch.core.concurrency.OpenSearchRejectedExecutionException;
@@ -1022,7 +1023,7 @@ public void submitStateUpdateTasks(
         final ThreadContext threadContext = threadPool.getThreadContext();
         final Supplier<ThreadContext.StoredContext> supplier = threadContext.newRestorableContext(true);
         try (ThreadContext.StoredContext ignore = threadContext.stashContext()) {
-            threadContext.markAsSystemContext();
+            ThreadContextAccess.doPrivilegedVoid(threadContext::markAsSystemContext);
 
             List<Batcher.UpdateTask> safeTasks = tasks.entrySet()
                 .stream()
diff --git a/server/src/main/java/org/opensearch/common/util/concurrent/ThreadContext.java b/server/src/main/java/org/opensearch/common/util/concurrent/ThreadContext.java
index 906a27e9f398c..b955934c4f547 100644
--- a/server/src/main/java/org/opensearch/common/util/concurrent/ThreadContext.java
+++ b/server/src/main/java/org/opensearch/common/util/concurrent/ThreadContext.java
@@ -45,11 +45,13 @@
 import org.opensearch.core.common.io.stream.StreamOutput;
 import org.opensearch.core.common.io.stream.Writeable;
 import org.opensearch.http.HttpTransportSettings;
+import org.opensearch.secure_sm.ThreadContextPermission;
 import org.opensearch.tasks.Task;
 import org.opensearch.tasks.TaskThreadContextStatePropagator;
 
 import java.io.IOException;
 import java.nio.charset.StandardCharsets;
+import java.security.Permission;
 import java.util.ArrayList;
 import java.util.Collection;
 import java.util.Collections;
@@ -111,6 +113,10 @@ public final class ThreadContext implements Writeable {
      */
     public static final String ACTION_ORIGIN_TRANSIENT_NAME = "action.origin";
 
+    // thread context permissions
+
+    private static final Permission ACCESS_SYSTEM_THREAD_CONTEXT_PERMISSION = new ThreadContextPermission("markAsSystemContext");
+
     private static final Logger logger = LogManager.getLogger(ThreadContext.class);
     private static final ThreadContextStruct DEFAULT_CONTEXT = new ThreadContextStruct();
     private final Map<String, String> defaultHeader;
@@ -554,8 +560,19 @@ boolean isDefaultContext() {
     /**
      * Marks this thread context as an internal system context. This signals that actions in this context are issued
      * by the system itself rather than by a user action.
+     *
+     * Usage of markAsSystemContext is guarded by a ThreadContextPermission. In order to use
+     * markAsSystemContext, the codebase needs to explicitly be granted permission in the JSM policy file.
+     *
+     * Add an entry in the grant portion of the policy file like this:
+     *
+     * permission org.opensearch.secure_sm.ThreadContextPermission "markAsSystemContext";
      */
     public void markAsSystemContext() {
+        SecurityManager sm = System.getSecurityManager();
+        if (sm != null) {
+            sm.checkPermission(ACCESS_SYSTEM_THREAD_CONTEXT_PERMISSION);
+        }
         threadLocal.set(threadLocal.get().setSystemContext(propagators));
     }
 
diff --git a/server/src/main/java/org/opensearch/common/util/concurrent/ThreadContextAccess.java b/server/src/main/java/org/opensearch/common/util/concurrent/ThreadContextAccess.java
new file mode 100644
index 0000000000000..14f8b8d79bf4d
--- /dev/null
+++ b/server/src/main/java/org/opensearch/common/util/concurrent/ThreadContextAccess.java
@@ -0,0 +1,41 @@
+/*
+ * SPDX-License-Identifier: Apache-2.0
+ *
+ * The OpenSearch Contributors require contributions made to
+ * this file be licensed under the Apache-2.0 license or a
+ * compatible open source license.
+ */
+
+package org.opensearch.common.util.concurrent;
+
+import org.opensearch.SpecialPermission;
+import org.opensearch.common.annotation.InternalApi;
+
+import java.security.AccessController;
+import java.security.PrivilegedAction;
+
+/**
+ * This class wraps the {@link ThreadContext} operations requiring access in
+ * {@link AccessController#doPrivileged(PrivilegedAction)} blocks.
+ *
+ * @opensearch.internal
+ */
+@SuppressWarnings("removal")
+@InternalApi
+public final class ThreadContextAccess {
+
+    private ThreadContextAccess() {}
+
+    public static <T> T doPrivileged(PrivilegedAction<T> operation) {
+        SpecialPermission.check();
+        return AccessController.doPrivileged(operation);
+    }
+
+    public static void doPrivilegedVoid(Runnable action) {
+        SpecialPermission.check();
+        AccessController.doPrivileged((PrivilegedAction<Void>) () -> {
+            action.run();
+            return null;
+        });
+    }
+}
diff --git a/server/src/main/java/org/opensearch/index/seqno/GlobalCheckpointSyncAction.java b/server/src/main/java/org/opensearch/index/seqno/GlobalCheckpointSyncAction.java
index c6a1f5f27a875..fedf239871368 100644
--- a/server/src/main/java/org/opensearch/index/seqno/GlobalCheckpointSyncAction.java
+++ b/server/src/main/java/org/opensearch/index/seqno/GlobalCheckpointSyncAction.java
@@ -44,6 +44,7 @@
 import org.opensearch.common.inject.Inject;
 import org.opensearch.common.settings.Settings;
 import org.opensearch.common.util.concurrent.ThreadContext;
+import org.opensearch.common.util.concurrent.ThreadContextAccess;
 import org.opensearch.core.action.ActionListener;
 import org.opensearch.core.common.io.stream.StreamInput;
 import org.opensearch.core.index.shard.ShardId;
@@ -98,7 +99,7 @@ public GlobalCheckpointSyncAction(
     public void updateGlobalCheckpointForShard(final ShardId shardId) {
         final ThreadContext threadContext = threadPool.getThreadContext();
         try (ThreadContext.StoredContext ignore = threadContext.stashContext()) {
-            threadContext.markAsSystemContext();
+            ThreadContextAccess.doPrivilegedVoid(threadContext::markAsSystemContext);
             execute(new Request(shardId), ActionListener.wrap(r -> {}, e -> {
                 if (ExceptionsHelper.unwrap(e, AlreadyClosedException.class, IndexShardClosedException.class) == null) {
                     logger.info(new ParameterizedMessage("{} global checkpoint sync failed", shardId), e);
diff --git a/server/src/main/java/org/opensearch/index/seqno/RetentionLeaseBackgroundSyncAction.java b/server/src/main/java/org/opensearch/index/seqno/RetentionLeaseBackgroundSyncAction.java
index 5fa0a1a6459e7..e8ebf11ef0e5c 100644
---
a/server/src/main/java/org/opensearch/index/seqno/RetentionLeaseBackgroundSyncAction.java +++ b/server/src/main/java/org/opensearch/index/seqno/RetentionLeaseBackgroundSyncAction.java @@ -48,6 +48,7 @@ import org.opensearch.common.inject.Inject; import org.opensearch.common.settings.Settings; import org.opensearch.common.util.concurrent.ThreadContext; +import org.opensearch.common.util.concurrent.ThreadContextAccess; import org.opensearch.core.action.ActionListener; import org.opensearch.core.common.io.stream.StreamInput; import org.opensearch.core.common.io.stream.StreamOutput; @@ -122,7 +123,7 @@ final void backgroundSync(ShardId shardId, String primaryAllocationId, long prim final ThreadContext threadContext = threadPool.getThreadContext(); try (ThreadContext.StoredContext ignore = threadContext.stashContext()) { // we have to execute under the system context so that if security is enabled the sync is authorized - threadContext.markAsSystemContext(); + ThreadContextAccess.doPrivilegedVoid(threadContext::markAsSystemContext); final Request request = new Request(shardId, retentionLeases); final ReplicationTask task = (ReplicationTask) taskManager.register("transport", "retention_lease_background_sync", request); transportService.sendChildRequest( diff --git a/server/src/main/java/org/opensearch/index/seqno/RetentionLeaseSyncAction.java b/server/src/main/java/org/opensearch/index/seqno/RetentionLeaseSyncAction.java index ca3c7e1d49700..9e8437ca78879 100644 --- a/server/src/main/java/org/opensearch/index/seqno/RetentionLeaseSyncAction.java +++ b/server/src/main/java/org/opensearch/index/seqno/RetentionLeaseSyncAction.java @@ -50,6 +50,7 @@ import org.opensearch.common.inject.Inject; import org.opensearch.common.settings.Settings; import org.opensearch.common.util.concurrent.ThreadContext; +import org.opensearch.common.util.concurrent.ThreadContextAccess; import org.opensearch.core.action.ActionListener; import org.opensearch.core.common.io.stream.StreamInput; import org.opensearch.core.common.io.stream.StreamOutput; @@ -137,7 +138,7 @@ final void sync( final ThreadContext threadContext = threadPool.getThreadContext(); try (ThreadContext.StoredContext ignore = threadContext.stashContext()) { // we have to execute under the system context so that if security is enabled the sync is authorized - threadContext.markAsSystemContext(); + ThreadContextAccess.doPrivilegedVoid(threadContext::markAsSystemContext); final Request request = new Request(shardId, retentionLeases); final ReplicationTask task = (ReplicationTask) taskManager.register("transport", "retention_lease_sync", request); transportService.sendChildRequest( diff --git a/server/src/main/java/org/opensearch/indices/replication/checkpoint/PublishCheckpointAction.java b/server/src/main/java/org/opensearch/indices/replication/checkpoint/PublishCheckpointAction.java index 8f39aa194b06c..d1e2884956f5c 100644 --- a/server/src/main/java/org/opensearch/indices/replication/checkpoint/PublishCheckpointAction.java +++ b/server/src/main/java/org/opensearch/indices/replication/checkpoint/PublishCheckpointAction.java @@ -24,6 +24,7 @@ import org.opensearch.common.inject.Inject; import org.opensearch.common.settings.Settings; import org.opensearch.common.util.concurrent.ThreadContext; +import org.opensearch.common.util.concurrent.ThreadContextAccess; import org.opensearch.core.action.ActionListener; import org.opensearch.core.common.io.stream.StreamInput; import org.opensearch.index.IndexNotFoundException; @@ -113,7 +114,7 @@ final void 
publish(IndexShard indexShard, ReplicationCheckpoint checkpoint) { final ThreadContext threadContext = threadPool.getThreadContext(); try (ThreadContext.StoredContext ignore = threadContext.stashContext()) { // we have to execute under the system context so that if security is enabled the sync is authorized - threadContext.markAsSystemContext(); + ThreadContextAccess.doPrivilegedVoid(threadContext::markAsSystemContext); PublishCheckpointRequest request = new PublishCheckpointRequest(checkpoint); final ReplicationTask task = (ReplicationTask) taskManager.register("transport", "segrep_publish_checkpoint", request); final ReplicationTimer timer = new ReplicationTimer(); diff --git a/server/src/main/java/org/opensearch/transport/RemoteClusterConnection.java b/server/src/main/java/org/opensearch/transport/RemoteClusterConnection.java index 8a5f6dfffb036..8f0ee52ac3acd 100644 --- a/server/src/main/java/org/opensearch/transport/RemoteClusterConnection.java +++ b/server/src/main/java/org/opensearch/transport/RemoteClusterConnection.java @@ -40,6 +40,7 @@ import org.opensearch.common.settings.Settings; import org.opensearch.common.unit.TimeValue; import org.opensearch.common.util.concurrent.ThreadContext; +import org.opensearch.common.util.concurrent.ThreadContextAccess; import org.opensearch.common.util.io.IOUtils; import org.opensearch.core.action.ActionListener; import org.opensearch.core.common.io.stream.StreamInput; @@ -136,7 +137,7 @@ void collectNodes(ActionListener> listener) { new ContextPreservingActionListener<>(threadContext.newRestorableContext(false), listener); try (ThreadContext.StoredContext ignore = threadContext.stashContext()) { // we stash any context here since this is an internal execution and should not leak any existing context information - threadContext.markAsSystemContext(); + ThreadContextAccess.doPrivilegedVoid(threadContext::markAsSystemContext); final ClusterStateRequest request = new ClusterStateRequest(); request.clear(); diff --git a/server/src/main/java/org/opensearch/transport/SniffConnectionStrategy.java b/server/src/main/java/org/opensearch/transport/SniffConnectionStrategy.java index 07ba96b135189..1d94228218fd0 100644 --- a/server/src/main/java/org/opensearch/transport/SniffConnectionStrategy.java +++ b/server/src/main/java/org/opensearch/transport/SniffConnectionStrategy.java @@ -47,6 +47,7 @@ import org.opensearch.common.settings.Setting; import org.opensearch.common.settings.Settings; import org.opensearch.common.util.concurrent.ThreadContext; +import org.opensearch.common.util.concurrent.ThreadContextAccess; import org.opensearch.common.util.io.IOUtils; import org.opensearch.core.action.ActionListener; import org.opensearch.core.common.Strings; @@ -349,7 +350,7 @@ private void collectRemoteNodes(Iterator> seedNodes, Act try (ThreadContext.StoredContext ignore = threadContext.stashContext()) { // we stash any context here since this is an internal execution and should not leak any // existing context information. 
- threadContext.markAsSystemContext(); + ThreadContextAccess.doPrivilegedVoid(threadContext::markAsSystemContext); transportService.sendRequest( connection, ClusterStateAction.NAME, diff --git a/server/src/main/resources/org/opensearch/bootstrap/security.policy b/server/src/main/resources/org/opensearch/bootstrap/security.policy index 55e8db0d9c6a3..b7aaa2e3eec48 100644 --- a/server/src/main/resources/org/opensearch/bootstrap/security.policy +++ b/server/src/main/resources/org/opensearch/bootstrap/security.policy @@ -48,6 +48,7 @@ grant codeBase "${codebase.opensearch}" { permission java.lang.RuntimePermission "setContextClassLoader"; // needed for SPI class loading permission java.lang.RuntimePermission "accessDeclaredMembers"; + permission org.opensearch.secure_sm.ThreadContextPermission "markAsSystemContext"; }; //// Very special jar permissions: diff --git a/server/src/main/resources/org/opensearch/bootstrap/test-framework.policy b/server/src/main/resources/org/opensearch/bootstrap/test-framework.policy index 0abfd7ef22ae7..f674c90c45a0e 100644 --- a/server/src/main/resources/org/opensearch/bootstrap/test-framework.policy +++ b/server/src/main/resources/org/opensearch/bootstrap/test-framework.policy @@ -157,4 +157,5 @@ grant { permission java.lang.RuntimePermission "reflectionFactoryAccess"; permission java.lang.RuntimePermission "accessClassInPackage.sun.reflect"; permission java.lang.reflect.ReflectPermission "suppressAccessChecks"; + permission org.opensearch.secure_sm.ThreadContextPermission "markAsSystemContext"; }; diff --git a/server/src/test/java/org/opensearch/cluster/metadata/TemplateUpgradeServiceTests.java b/server/src/test/java/org/opensearch/cluster/metadata/TemplateUpgradeServiceTests.java index 36d984b7eb99b..562e293083633 100644 --- a/server/src/test/java/org/opensearch/cluster/metadata/TemplateUpgradeServiceTests.java +++ b/server/src/test/java/org/opensearch/cluster/metadata/TemplateUpgradeServiceTests.java @@ -47,6 +47,7 @@ import org.opensearch.cluster.service.ClusterService; import org.opensearch.common.collect.Tuple; import org.opensearch.common.util.concurrent.ThreadContext; +import org.opensearch.common.util.concurrent.ThreadContextAccess; import org.opensearch.core.action.ActionListener; import org.opensearch.core.common.bytes.BytesArray; import org.opensearch.core.common.bytes.BytesReference; @@ -225,7 +226,7 @@ public void testUpdateTemplates() { service.upgradesInProgress.set(additionsCount + deletionsCount + 2); // +2 to skip tryFinishUpgrade final ThreadContext threadContext = threadPool.getThreadContext(); try (ThreadContext.StoredContext ignore = threadContext.stashContext()) { - threadContext.markAsSystemContext(); + ThreadContextAccess.doPrivilegedVoid(threadContext::markAsSystemContext); service.upgradeTemplates(additions, deletions); } diff --git a/server/src/test/java/org/opensearch/common/util/concurrent/ThreadContextTests.java b/server/src/test/java/org/opensearch/common/util/concurrent/ThreadContextTests.java index 4e66575711046..4c7cd4513412d 100644 --- a/server/src/test/java/org/opensearch/common/util/concurrent/ThreadContextTests.java +++ b/server/src/test/java/org/opensearch/common/util/concurrent/ThreadContextTests.java @@ -565,7 +565,7 @@ public void testPreservesThreadsOriginalContextOnRunException() throws IOExcepti threadContext.putHeader("foo", "bar"); boolean systemContext = randomBoolean(); if (systemContext) { - threadContext.markAsSystemContext(); + ThreadContextAccess.doPrivilegedVoid(threadContext::markAsSystemContext); } 
threadContext.putTransient("foo", "bar_transient"); withContext = threadContext.preserveContext(new AbstractRunnable() { @@ -736,7 +736,7 @@ public void testMarkAsSystemContext() throws IOException { assertFalse(threadContext.isSystemContext()); try (ThreadContext.StoredContext context = threadContext.stashContext()) { assertFalse(threadContext.isSystemContext()); - threadContext.markAsSystemContext(); + ThreadContextAccess.doPrivilegedVoid(threadContext::markAsSystemContext); assertTrue(threadContext.isSystemContext()); } assertFalse(threadContext.isSystemContext()); @@ -761,7 +761,7 @@ public void testSystemContextWithPropagator() { assertEquals(Integer.valueOf(1), threadContext.getTransient("test_transient_propagation_key")); assertEquals("bar", threadContext.getHeader("foo")); try (ThreadContext.StoredContext ctx = threadContext.stashContext()) { - threadContext.markAsSystemContext(); + ThreadContextAccess.doPrivilegedVoid(threadContext::markAsSystemContext); assertNull(threadContext.getHeader("foo")); assertNull(threadContext.getTransient("test_transient_propagation_key")); assertEquals("1", threadContext.getHeader("default")); @@ -793,7 +793,7 @@ public void testSerializeSystemContext() throws IOException { threadContext.writeTo(out); try (ThreadContext.StoredContext ctx = threadContext.stashContext()) { assertEquals("test", threadContext.getTransient("test_transient_propagation_key")); - threadContext.markAsSystemContext(); + ThreadContextAccess.doPrivilegedVoid(threadContext::markAsSystemContext); threadContext.writeTo(outFromSystemContext); assertNull(threadContext.getHeader("foo")); assertNull(threadContext.getTransient("test_transient_propagation_key")); diff --git a/server/src/test/java/org/opensearch/telemetry/tracing/ThreadContextBasedTracerContextStorageTests.java b/server/src/test/java/org/opensearch/telemetry/tracing/ThreadContextBasedTracerContextStorageTests.java index bf11bcaf39a96..98dfc367c20f5 100644 --- a/server/src/test/java/org/opensearch/telemetry/tracing/ThreadContextBasedTracerContextStorageTests.java +++ b/server/src/test/java/org/opensearch/telemetry/tracing/ThreadContextBasedTracerContextStorageTests.java @@ -12,6 +12,7 @@ import org.opensearch.common.settings.Settings; import org.opensearch.common.util.concurrent.ThreadContext; import org.opensearch.common.util.concurrent.ThreadContext.StoredContext; +import org.opensearch.common.util.concurrent.ThreadContextAccess; import org.opensearch.telemetry.Telemetry; import org.opensearch.telemetry.TelemetrySettings; import org.opensearch.telemetry.metrics.MetricsTelemetry; @@ -260,7 +261,7 @@ public void testSpanNotPropagatedToChildSystemThreadContext() { try (StoredContext ignored = threadContext.stashContext()) { assertThat(threadContext.getTransient(ThreadContextBasedTracerContextStorage.CURRENT_SPAN), is(not(nullValue()))); assertThat(threadContextStorage.get(ThreadContextBasedTracerContextStorage.CURRENT_SPAN), is(span)); - threadContext.markAsSystemContext(); + ThreadContextAccess.doPrivilegedVoid(threadContext::markAsSystemContext); assertThat(threadContext.getTransient(ThreadContextBasedTracerContextStorage.CURRENT_SPAN), is(nullValue())); } } diff --git a/server/src/test/resources/org/opensearch/bootstrap/test.policy b/server/src/test/resources/org/opensearch/bootstrap/test.policy index 7b0a9b3d5d709..c2b5a8e9c0a4e 100644 --- a/server/src/test/resources/org/opensearch/bootstrap/test.policy +++ b/server/src/test/resources/org/opensearch/bootstrap/test.policy @@ -7,7 +7,7 @@ */ grant { - // allow to test 
Security policy and codebases + // allow to test Security policy and codebases permission java.util.PropertyPermission "*", "read,write"; permission java.security.SecurityPermission "createPolicy.JavaPolicy"; }; diff --git a/test/framework/src/main/java/org/opensearch/cluster/service/FakeThreadPoolClusterManagerService.java b/test/framework/src/main/java/org/opensearch/cluster/service/FakeThreadPoolClusterManagerService.java index 53ef595c7931e..64f3dbc4fd967 100644 --- a/test/framework/src/main/java/org/opensearch/cluster/service/FakeThreadPoolClusterManagerService.java +++ b/test/framework/src/main/java/org/opensearch/cluster/service/FakeThreadPoolClusterManagerService.java @@ -44,6 +44,7 @@ import org.opensearch.common.util.concurrent.OpenSearchExecutors; import org.opensearch.common.util.concurrent.PrioritizedOpenSearchThreadPoolExecutor; import org.opensearch.common.util.concurrent.ThreadContext; +import org.opensearch.common.util.concurrent.ThreadContextAccess; import org.opensearch.core.action.ActionListener; import org.opensearch.node.Node; import org.opensearch.telemetry.metrics.noop.NoopMetricsRegistry; @@ -134,7 +135,7 @@ public void run() { scheduledNextTask = false; final ThreadContext threadContext = threadPool.getThreadContext(); try (ThreadContext.StoredContext ignored = threadContext.stashContext()) { - threadContext.markAsSystemContext(); + ThreadContextAccess.doPrivilegedVoid(threadContext::markAsSystemContext); task.run(); } if (waitForPublish == false) { From d158ec6d431615b16192c34490b46752f3eb3e0f Mon Sep 17 00:00:00 2001 From: Andriy Redko Date: Wed, 31 Jul 2024 11:11:28 -0400 Subject: [PATCH 140/167] Fix MacOS Mx (arm64) and Linux (arm64, ppc64le, s390x) checks (#15036) Signed-off-by: Andriy Redko --- .../internal/InternalDistributionBwcSetupPlugin.java | 12 +++++++++++- 1 file changed, 11 insertions(+), 1 deletion(-) diff --git a/buildSrc/src/main/java/org/opensearch/gradle/internal/InternalDistributionBwcSetupPlugin.java b/buildSrc/src/main/java/org/opensearch/gradle/internal/InternalDistributionBwcSetupPlugin.java index 6892af1b17f97..0502280cb69ad 100644 --- a/buildSrc/src/main/java/org/opensearch/gradle/internal/InternalDistributionBwcSetupPlugin.java +++ b/buildSrc/src/main/java/org/opensearch/gradle/internal/InternalDistributionBwcSetupPlugin.java @@ -158,7 +158,17 @@ private static List resolveArchiveProjects(File checkoutDir projects.addAll(asList("deb", "rpm")); if (bwcVersion.onOrAfter("7.0.0")) { // starting with 7.0 we bundle a jdk which means we have platform-specific archives - projects.addAll(asList("darwin-tar", "linux-tar", "windows-zip")); + projects.addAll( + asList( + "darwin-tar", + "darwin-arm64-tar", + "linux-tar", + "linux-arm64-tar", + "linux-ppc64le-tar", + "linux-s390x-tar", + "windows-zip" + ) + ); } else { // prior to 7.0 we published only a single zip and tar archives projects.addAll(asList("zip", "tar")); } From 79f45be4a544dd3519521294b63bd1630c3dfd54 Mon Sep 17 00:00:00 2001 From: Andriy Redko Date: Wed, 31 Jul 2024 13:49:41 -0400 Subject: [PATCH 141/167] [Streaming Indexing] Enhance RestClient with a new streaming API support (#14437) Signed-off-by: Andriy Redko --- CHANGELOG.md | 1 + buildSrc/version.properties | 4 +- client/rest/build.gradle | 67 ++- .../rest/licenses/httpclient5-5.2.1.jar.sha1 | 1 - .../rest/licenses/httpclient5-5.2.3.jar.sha1 | 1 + client/rest/licenses/httpcore5-5.2.2.jar.sha1 | 1 - client/rest/licenses/httpcore5-5.2.5.jar.sha1 | 1 + .../rest/licenses/httpcore5-h2-5.2.2.jar.sha1 | 1 - 
.../rest/licenses/httpcore5-h2-5.2.5.jar.sha1 | 1 + .../httpcore5-reactive-5.2.5.jar.sha1 | 1 + .../licenses/httpcore5-reactive-LICENSE.txt | 558 ++++++++++++++++++ .../licenses/httpcore5-reactive-NOTICE.txt | 8 + .../licenses/reactive-streams-1.0.4.jar.sha1 | 1 + .../licenses/reactive-streams-LICENSE.txt | 21 + .../rest/licenses/reactive-streams-NOTICE.txt | 0 .../licenses/reactor-core-3.5.19.jar.sha1 | 1 + client/rest/licenses/reactor-core-LICENSE.txt | 201 +++++++ client/rest/licenses/reactor-core-NOTICE.txt | 0 .../org/opensearch/client/Cancellable.java | 29 +- .../java/org/opensearch/client/Response.java | 73 +-- .../client/ResponseWarningsExtractor.java | 99 ++++ .../org/opensearch/client/RestClient.java | 291 ++++++++- .../opensearch/client/StreamingRequest.java | 114 ++++ .../opensearch/client/StreamingResponse.java | 96 +++ .../http/ReactiveHttpUriRequestProducer.java | 75 +++ .../opensearch/client/RestClientTests.java | 13 + .../licenses/httpclient5-5.2.1.jar.sha1 | 1 - .../licenses/httpclient5-5.2.3.jar.sha1 | 1 + .../sniffer/licenses/httpcore5-5.2.2.jar.sha1 | 1 - .../sniffer/licenses/httpcore5-5.2.5.jar.sha1 | 1 + plugins/transport-reactor-netty4/build.gradle | 6 +- .../rest/ReactorNetty4BadRequestIT.java | 115 ++++ .../rest/ReactorNetty4HeadBodyIsEmptyIT.java | 204 +++++++ .../rest/ReactorNetty4StreamingIT.java | 139 +++++ .../rest/ReactorNetty4StreamingStressIT.java | 95 +++ .../ReactorNetty4HttpServerTransport.java | 5 +- .../ReactorNetty4NonStreamingHttpChannel.java | 11 +- .../ReactorNetty4StreamingHttpChannel.java | 2 + ...ReactorNetty4StreamingRequestConsumer.java | 2 +- ...eactorNetty4StreamingResponseProducer.java | 6 +- qa/smoke-test-http/build.gradle | 1 + .../opensearch/http/HttpSmokeTestCase.java | 7 +- .../http/IdentityAuthenticationIT.java | 4 +- .../WEB-INF/jboss-deployment-structure.xml | 3 + .../org/opensearch/rest/RestController.java | 3 +- .../document/RestBulkStreamingAction.java | 48 +- 46 files changed, 2180 insertions(+), 134 deletions(-) delete mode 100644 client/rest/licenses/httpclient5-5.2.1.jar.sha1 create mode 100644 client/rest/licenses/httpclient5-5.2.3.jar.sha1 delete mode 100644 client/rest/licenses/httpcore5-5.2.2.jar.sha1 create mode 100644 client/rest/licenses/httpcore5-5.2.5.jar.sha1 delete mode 100644 client/rest/licenses/httpcore5-h2-5.2.2.jar.sha1 create mode 100644 client/rest/licenses/httpcore5-h2-5.2.5.jar.sha1 create mode 100644 client/rest/licenses/httpcore5-reactive-5.2.5.jar.sha1 create mode 100644 client/rest/licenses/httpcore5-reactive-LICENSE.txt create mode 100644 client/rest/licenses/httpcore5-reactive-NOTICE.txt create mode 100644 client/rest/licenses/reactive-streams-1.0.4.jar.sha1 create mode 100644 client/rest/licenses/reactive-streams-LICENSE.txt create mode 100644 client/rest/licenses/reactive-streams-NOTICE.txt create mode 100644 client/rest/licenses/reactor-core-3.5.19.jar.sha1 create mode 100644 client/rest/licenses/reactor-core-LICENSE.txt create mode 100644 client/rest/licenses/reactor-core-NOTICE.txt create mode 100644 client/rest/src/main/java/org/opensearch/client/ResponseWarningsExtractor.java create mode 100644 client/rest/src/main/java/org/opensearch/client/StreamingRequest.java create mode 100644 client/rest/src/main/java/org/opensearch/client/StreamingResponse.java create mode 100644 client/rest/src/main/java/org/opensearch/client/http/ReactiveHttpUriRequestProducer.java delete mode 100644 client/sniffer/licenses/httpclient5-5.2.1.jar.sha1 create mode 100644 
client/sniffer/licenses/httpclient5-5.2.3.jar.sha1 delete mode 100644 client/sniffer/licenses/httpcore5-5.2.2.jar.sha1 create mode 100644 client/sniffer/licenses/httpcore5-5.2.5.jar.sha1 create mode 100644 plugins/transport-reactor-netty4/src/javaRestTest/java/org/opensearch/rest/ReactorNetty4BadRequestIT.java create mode 100644 plugins/transport-reactor-netty4/src/javaRestTest/java/org/opensearch/rest/ReactorNetty4HeadBodyIsEmptyIT.java create mode 100644 plugins/transport-reactor-netty4/src/javaRestTest/java/org/opensearch/rest/ReactorNetty4StreamingIT.java create mode 100644 plugins/transport-reactor-netty4/src/javaRestTest/java/org/opensearch/rest/ReactorNetty4StreamingStressIT.java diff --git a/CHANGELOG.md b/CHANGELOG.md index 7b49298192800..f63c7c5524d86 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -8,6 +8,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), - Fix for hasInitiatedFetching to fix allocation explain and manual reroute APIs (([#14972](https://github.com/opensearch-project/OpenSearch/pull/14972)) - [Workload Management] Add queryGroupId to Task ([14708](https://github.com/opensearch-project/OpenSearch/pull/14708)) - Add setting to ignore throttling nodes for allocation of unassigned primaries in remote restore ([#14991](https://github.com/opensearch-project/OpenSearch/pull/14991)) +- [Streaming Indexing] Enhance RestClient with a new streaming API support ([#14437](https://github.com/opensearch-project/OpenSearch/pull/14437)) - Add basic aggregation support for derived fields ([#14618](https://github.com/opensearch-project/OpenSearch/pull/14618)) - Add ThreadContextPermission for markAsSystemContext and allow core to perform the method ([#15016](https://github.com/opensearch-project/OpenSearch/pull/15016)) diff --git a/buildSrc/version.properties b/buildSrc/version.properties index 7d32ed3df7b76..eb67af909bccf 100644 --- a/buildSrc/version.properties +++ b/buildSrc/version.properties @@ -37,8 +37,8 @@ reactor_netty = 1.1.21 reactor = 3.5.19 # client dependencies -httpclient5 = 5.2.1 -httpcore5 = 5.2.2 +httpclient5 = 5.2.3 +httpcore5 = 5.2.5 httpclient = 4.5.14 httpcore = 4.4.16 httpasyncclient = 4.1.5 diff --git a/client/rest/build.gradle b/client/rest/build.gradle index f18df65dfddfa..93faf0024b51e 100644 --- a/client/rest/build.gradle +++ b/client/rest/build.gradle @@ -47,10 +47,15 @@ dependencies { api "org.apache.httpcomponents.client5:httpclient5:${versions.httpclient5}" api "org.apache.httpcomponents.core5:httpcore5:${versions.httpcore5}" api "org.apache.httpcomponents.core5:httpcore5-h2:${versions.httpcore5}" + api "org.apache.httpcomponents.core5:httpcore5-reactive:${versions.httpcore5}" api "commons-codec:commons-codec:${versions.commonscodec}" api "commons-logging:commons-logging:${versions.commonslogging}" api "org.slf4j:slf4j-api:${versions.slf4j}" + // reactor + api "io.projectreactor:reactor-core:${versions.reactor}" + api "org.reactivestreams:reactive-streams:${versions.reactivestreams}" + testImplementation project(":client:test") testImplementation "com.carrotsearch.randomizedtesting:randomizedtesting-runner:${versions.randomizedrunner}" testImplementation "junit:junit:${versions.junit}" @@ -93,22 +98,52 @@ testingConventions { } } -thirdPartyAudit.ignoreMissingClasses( - 'org.conscrypt.Conscrypt', - 'org.slf4j.impl.StaticLoggerBinder', - 'org.slf4j.impl.StaticMDCBinder', - 'org.slf4j.impl.StaticMarkerBinder', - //commons-logging optional dependencies - 
'org.apache.avalon.framework.logger.Logger', - 'org.apache.log.Hierarchy', - 'org.apache.log.Logger', - 'org.apache.log4j.Level', - 'org.apache.log4j.Logger', - 'org.apache.log4j.Priority', - //commons-logging provided dependencies - 'javax.servlet.ServletContextEvent', - 'javax.servlet.ServletContextListener' -) +thirdPartyAudit { + ignoreMissingClasses( + 'org.conscrypt.Conscrypt', + 'org.slf4j.impl.StaticLoggerBinder', + 'org.slf4j.impl.StaticMDCBinder', + 'org.slf4j.impl.StaticMarkerBinder', + //commons-logging optional dependencies + 'org.apache.avalon.framework.logger.Logger', + 'org.apache.log.Hierarchy', + 'org.apache.log.Logger', + 'org.apache.log4j.Level', + 'org.apache.log4j.Logger', + 'org.apache.log4j.Priority', + //commons-logging provided dependencies + 'javax.servlet.ServletContextEvent', + 'javax.servlet.ServletContextListener', + 'io.micrometer.context.ContextAccessor', + 'io.micrometer.context.ContextRegistry', + 'io.micrometer.context.ContextSnapshot', + 'io.micrometer.context.ContextSnapshot$Scope', + 'io.micrometer.context.ContextSnapshotFactory', + 'io.micrometer.context.ContextSnapshotFactory$Builder', + 'io.micrometer.context.ThreadLocalAccessor', + 'io.micrometer.core.instrument.Clock', + 'io.micrometer.core.instrument.Counter', + 'io.micrometer.core.instrument.Counter$Builder', + 'io.micrometer.core.instrument.DistributionSummary', + 'io.micrometer.core.instrument.DistributionSummary$Builder', + 'io.micrometer.core.instrument.Meter', + 'io.micrometer.core.instrument.MeterRegistry', + 'io.micrometer.core.instrument.Metrics', + 'io.micrometer.core.instrument.Tag', + 'io.micrometer.core.instrument.Tags', + 'io.micrometer.core.instrument.Timer', + 'io.micrometer.core.instrument.Timer$Builder', + 'io.micrometer.core.instrument.Timer$Sample', + 'io.micrometer.core.instrument.binder.jvm.ExecutorServiceMetrics', + 'io.micrometer.core.instrument.composite.CompositeMeterRegistry', + 'io.micrometer.core.instrument.search.Search', + 'reactor.blockhound.BlockHound$Builder', + 'reactor.blockhound.integration.BlockHoundIntegration' + ) + ignoreViolations( + 'reactor.core.publisher.Traces$SharedSecretsCallSiteSupplierFactory$TracingException' + ) +} tasks.withType(JavaCompile) { // Suppressing '[options] target value 8 is obsolete and will be removed in a future release' diff --git a/client/rest/licenses/httpclient5-5.2.1.jar.sha1 b/client/rest/licenses/httpclient5-5.2.1.jar.sha1 deleted file mode 100644 index 3555fe22f8e12..0000000000000 --- a/client/rest/licenses/httpclient5-5.2.1.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -0c900514d3446d9ce5d9dbd90c21192048125440 \ No newline at end of file diff --git a/client/rest/licenses/httpclient5-5.2.3.jar.sha1 b/client/rest/licenses/httpclient5-5.2.3.jar.sha1 new file mode 100644 index 0000000000000..43e233e72001a --- /dev/null +++ b/client/rest/licenses/httpclient5-5.2.3.jar.sha1 @@ -0,0 +1 @@ +5d753a99d299756998a08c488f2efdf9cf26198e \ No newline at end of file diff --git a/client/rest/licenses/httpcore5-5.2.2.jar.sha1 b/client/rest/licenses/httpcore5-5.2.2.jar.sha1 deleted file mode 100644 index b641256c7d4a4..0000000000000 --- a/client/rest/licenses/httpcore5-5.2.2.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -6da28f5aa6c2b129ef49632e041a5203ce7507b2 \ No newline at end of file diff --git a/client/rest/licenses/httpcore5-5.2.5.jar.sha1 b/client/rest/licenses/httpcore5-5.2.5.jar.sha1 new file mode 100644 index 0000000000000..ca97e8612ea39 --- /dev/null +++ b/client/rest/licenses/httpcore5-5.2.5.jar.sha1 @@ -0,0 +1 @@ 
+dab1e18842971a45ca8942491ce005ab86a028d7 \ No newline at end of file diff --git a/client/rest/licenses/httpcore5-h2-5.2.2.jar.sha1 b/client/rest/licenses/httpcore5-h2-5.2.2.jar.sha1 deleted file mode 100644 index 94bc0fa49bdb0..0000000000000 --- a/client/rest/licenses/httpcore5-h2-5.2.2.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -54ee1ed58fe8ac40be1083ea9873a6c734939ab9 \ No newline at end of file diff --git a/client/rest/licenses/httpcore5-h2-5.2.5.jar.sha1 b/client/rest/licenses/httpcore5-h2-5.2.5.jar.sha1 new file mode 100644 index 0000000000000..bb40fe65854f6 --- /dev/null +++ b/client/rest/licenses/httpcore5-h2-5.2.5.jar.sha1 @@ -0,0 +1 @@ +09425df4d1365cee86a8e031a036bdca4343da4b \ No newline at end of file diff --git a/client/rest/licenses/httpcore5-reactive-5.2.5.jar.sha1 b/client/rest/licenses/httpcore5-reactive-5.2.5.jar.sha1 new file mode 100644 index 0000000000000..ab9241fc93d45 --- /dev/null +++ b/client/rest/licenses/httpcore5-reactive-5.2.5.jar.sha1 @@ -0,0 +1 @@ +f68949965075b957c12b4c1ef89fd4bab2a0fdb1 \ No newline at end of file diff --git a/client/rest/licenses/httpcore5-reactive-LICENSE.txt b/client/rest/licenses/httpcore5-reactive-LICENSE.txt new file mode 100644 index 0000000000000..32f01eda18fe9 --- /dev/null +++ b/client/rest/licenses/httpcore5-reactive-LICENSE.txt @@ -0,0 +1,558 @@ + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. 
+ + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. 
You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. 
In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + +========================================================================= + +This project includes Public Suffix List copied from + +licensed under the terms of the Mozilla Public License, v. 2.0 + +Full license text: + +Mozilla Public License Version 2.0 +================================== + +1. Definitions +-------------- + +1.1. "Contributor" + means each individual or legal entity that creates, contributes to + the creation of, or owns Covered Software. + +1.2. "Contributor Version" + means the combination of the Contributions of others (if any) used + by a Contributor and that particular Contributor's Contribution. + +1.3. "Contribution" + means Covered Software of a particular Contributor. + +1.4. "Covered Software" + means Source Code Form to which the initial Contributor has attached + the notice in Exhibit A, the Executable Form of such Source Code + Form, and Modifications of such Source Code Form, in each case + including portions thereof. + +1.5. "Incompatible With Secondary Licenses" + means + + (a) that the initial Contributor has attached the notice described + in Exhibit B to the Covered Software; or + + (b) that the Covered Software was made available under the terms of + version 1.1 or earlier of the License, but not also under the + terms of a Secondary License. + +1.6. "Executable Form" + means any form of the work other than Source Code Form. + +1.7. "Larger Work" + means a work that combines Covered Software with other material, in + a separate file or files, that is not Covered Software. + +1.8. "License" + means this document. + +1.9. "Licensable" + means having the right to grant, to the maximum extent possible, + whether at the time of the initial grant or subsequently, any and + all of the rights conveyed by this License. + +1.10. "Modifications" + means any of the following: + + (a) any file in Source Code Form that results from an addition to, + deletion from, or modification of the contents of Covered + Software; or + + (b) any new file in Source Code Form that contains any Covered + Software. + +1.11. 
"Patent Claims" of a Contributor + means any patent claim(s), including without limitation, method, + process, and apparatus claims, in any patent Licensable by such + Contributor that would be infringed, but for the grant of the + License, by the making, using, selling, offering for sale, having + made, import, or transfer of either its Contributions or its + Contributor Version. + +1.12. "Secondary License" + means either the GNU General Public License, Version 2.0, the GNU + Lesser General Public License, Version 2.1, the GNU Affero General + Public License, Version 3.0, or any later versions of those + licenses. + +1.13. "Source Code Form" + means the form of the work preferred for making modifications. + +1.14. "You" (or "Your") + means an individual or a legal entity exercising rights under this + License. For legal entities, "You" includes any entity that + controls, is controlled by, or is under common control with You. For + purposes of this definition, "control" means (a) the power, direct + or indirect, to cause the direction or management of such entity, + whether by contract or otherwise, or (b) ownership of more than + fifty percent (50%) of the outstanding shares or beneficial + ownership of such entity. + +2. License Grants and Conditions +-------------------------------- + +2.1. Grants + +Each Contributor hereby grants You a world-wide, royalty-free, +non-exclusive license: + +(a) under intellectual property rights (other than patent or trademark) + Licensable by such Contributor to use, reproduce, make available, + modify, display, perform, distribute, and otherwise exploit its + Contributions, either on an unmodified basis, with Modifications, or + as part of a Larger Work; and + +(b) under Patent Claims of such Contributor to make, use, sell, offer + for sale, have made, import, and otherwise transfer either its + Contributions or its Contributor Version. + +2.2. Effective Date + +The licenses granted in Section 2.1 with respect to any Contribution +become effective for each Contribution on the date the Contributor first +distributes such Contribution. + +2.3. Limitations on Grant Scope + +The licenses granted in this Section 2 are the only rights granted under +this License. No additional rights or licenses will be implied from the +distribution or licensing of Covered Software under this License. +Notwithstanding Section 2.1(b) above, no patent license is granted by a +Contributor: + +(a) for any code that a Contributor has removed from Covered Software; + or + +(b) for infringements caused by: (i) Your and any other third party's + modifications of Covered Software, or (ii) the combination of its + Contributions with other software (except as part of its Contributor + Version); or + +(c) under Patent Claims infringed by Covered Software in the absence of + its Contributions. + +This License does not grant any rights in the trademarks, service marks, +or logos of any Contributor (except as may be necessary to comply with +the notice requirements in Section 3.4). + +2.4. Subsequent Licenses + +No Contributor makes additional grants as a result of Your choice to +distribute the Covered Software under a subsequent version of this +License (see Section 10.2) or under the terms of a Secondary License (if +permitted under the terms of Section 3.3). + +2.5. Representation + +Each Contributor represents that the Contributor believes its +Contributions are its original creation(s) or it has sufficient rights +to grant the rights to its Contributions conveyed by this License. 
+ +2.6. Fair Use + +This License is not intended to limit any rights You have under +applicable copyright doctrines of fair use, fair dealing, or other +equivalents. + +2.7. Conditions + +Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted +in Section 2.1. + +3. Responsibilities +------------------- + +3.1. Distribution of Source Form + +All distribution of Covered Software in Source Code Form, including any +Modifications that You create or to which You contribute, must be under +the terms of this License. You must inform recipients that the Source +Code Form of the Covered Software is governed by the terms of this +License, and how they can obtain a copy of this License. You may not +attempt to alter or restrict the recipients' rights in the Source Code +Form. + +3.2. Distribution of Executable Form + +If You distribute Covered Software in Executable Form then: + +(a) such Covered Software must also be made available in Source Code + Form, as described in Section 3.1, and You must inform recipients of + the Executable Form how they can obtain a copy of such Source Code + Form by reasonable means in a timely manner, at a charge no more + than the cost of distribution to the recipient; and + +(b) You may distribute such Executable Form under the terms of this + License, or sublicense it under different terms, provided that the + license for the Executable Form does not attempt to limit or alter + the recipients' rights in the Source Code Form under this License. + +3.3. Distribution of a Larger Work + +You may create and distribute a Larger Work under terms of Your choice, +provided that You also comply with the requirements of this License for +the Covered Software. If the Larger Work is a combination of Covered +Software with a work governed by one or more Secondary Licenses, and the +Covered Software is not Incompatible With Secondary Licenses, this +License permits You to additionally distribute such Covered Software +under the terms of such Secondary License(s), so that the recipient of +the Larger Work may, at their option, further distribute the Covered +Software under the terms of either this License or such Secondary +License(s). + +3.4. Notices + +You may not remove or alter the substance of any license notices +(including copyright notices, patent notices, disclaimers of warranty, +or limitations of liability) contained within the Source Code Form of +the Covered Software, except that You may alter any license notices to +the extent required to remedy known factual inaccuracies. + +3.5. Application of Additional Terms + +You may choose to offer, and to charge a fee for, warranty, support, +indemnity or liability obligations to one or more recipients of Covered +Software. However, You may do so only on Your own behalf, and not on +behalf of any Contributor. You must make it absolutely clear that any +such warranty, support, indemnity, or liability obligation is offered by +You alone, and You hereby agree to indemnify every Contributor for any +liability incurred by such Contributor as a result of warranty, support, +indemnity or liability terms You offer. You may include additional +disclaimers of warranty and limitations of liability specific to any +jurisdiction. + +4. 
Inability to Comply Due to Statute or Regulation +--------------------------------------------------- + +If it is impossible for You to comply with any of the terms of this +License with respect to some or all of the Covered Software due to +statute, judicial order, or regulation then You must: (a) comply with +the terms of this License to the maximum extent possible; and (b) +describe the limitations and the code they affect. Such description must +be placed in a text file included with all distributions of the Covered +Software under this License. Except to the extent prohibited by statute +or regulation, such description must be sufficiently detailed for a +recipient of ordinary skill to be able to understand it. + +5. Termination +-------------- + +5.1. The rights granted under this License will terminate automatically +if You fail to comply with any of its terms. However, if You become +compliant, then the rights granted under this License from a particular +Contributor are reinstated (a) provisionally, unless and until such +Contributor explicitly and finally terminates Your grants, and (b) on an +ongoing basis, if such Contributor fails to notify You of the +non-compliance by some reasonable means prior to 60 days after You have +come back into compliance. Moreover, Your grants from a particular +Contributor are reinstated on an ongoing basis if such Contributor +notifies You of the non-compliance by some reasonable means, this is the +first time You have received notice of non-compliance with this License +from such Contributor, and You become compliant prior to 30 days after +Your receipt of the notice. + +5.2. If You initiate litigation against any entity by asserting a patent +infringement claim (excluding declaratory judgment actions, +counter-claims, and cross-claims) alleging that a Contributor Version +directly or indirectly infringes any patent, then the rights granted to +You by any and all Contributors for the Covered Software under Section +2.1 of this License shall terminate. + +5.3. In the event of termination under Sections 5.1 or 5.2 above, all +end user license agreements (excluding distributors and resellers) which +have been validly granted by You or Your distributors under this License +prior to termination shall survive termination. + +************************************************************************ +* * +* 6. Disclaimer of Warranty * +* ------------------------- * +* * +* Covered Software is provided under this License on an "as is" * +* basis, without warranty of any kind, either expressed, implied, or * +* statutory, including, without limitation, warranties that the * +* Covered Software is free of defects, merchantable, fit for a * +* particular purpose or non-infringing. The entire risk as to the * +* quality and performance of the Covered Software is with You. * +* Should any Covered Software prove defective in any respect, You * +* (not any Contributor) assume the cost of any necessary servicing, * +* repair, or correction. This disclaimer of warranty constitutes an * +* essential part of this License. No use of any Covered Software is * +* authorized under this License except under this disclaimer. * +* * +************************************************************************ + +************************************************************************ +* * +* 7. 
Limitation of Liability * +* -------------------------- * +* * +* Under no circumstances and under no legal theory, whether tort * +* (including negligence), contract, or otherwise, shall any * +* Contributor, or anyone who distributes Covered Software as * +* permitted above, be liable to You for any direct, indirect, * +* special, incidental, or consequential damages of any character * +* including, without limitation, damages for lost profits, loss of * +* goodwill, work stoppage, computer failure or malfunction, or any * +* and all other commercial damages or losses, even if such party * +* shall have been informed of the possibility of such damages. This * +* limitation of liability shall not apply to liability for death or * +* personal injury resulting from such party's negligence to the * +* extent applicable law prohibits such limitation. Some * +* jurisdictions do not allow the exclusion or limitation of * +* incidental or consequential damages, so this exclusion and * +* limitation may not apply to You. * +* * +************************************************************************ + +8. Litigation +------------- + +Any litigation relating to this License may be brought only in the +courts of a jurisdiction where the defendant maintains its principal +place of business and such litigation shall be governed by laws of that +jurisdiction, without reference to its conflict-of-law provisions. +Nothing in this Section shall prevent a party's ability to bring +cross-claims or counter-claims. + +9. Miscellaneous +---------------- + +This License represents the complete agreement concerning the subject +matter hereof. If any provision of this License is held to be +unenforceable, such provision shall be reformed only to the extent +necessary to make it enforceable. Any law or regulation which provides +that the language of a contract shall be construed against the drafter +shall not be used to construe this License against a Contributor. + +10. Versions of the License +--------------------------- + +10.1. New Versions + +Mozilla Foundation is the license steward. Except as provided in Section +10.3, no one other than the license steward has the right to modify or +publish new versions of this License. Each version will be given a +distinguishing version number. + +10.2. Effect of New Versions + +You may distribute the Covered Software under the terms of the version +of the License under which You originally received the Covered Software, +or under the terms of any subsequent version published by the license +steward. + +10.3. Modified Versions + +If you create software not governed by this License, and you want to +create a new license for such software, you may create and use a +modified version of this License if you rename the license and remove +any references to the name of the license steward (except to note that +such modified license differs from this License). + +10.4. Distributing Source Code Form that is Incompatible With Secondary +Licenses + +If You choose to distribute Source Code Form that is Incompatible With +Secondary Licenses under the terms of this version of the License, the +notice described in Exhibit B of this License must be attached. + +Exhibit A - Source Code Form License Notice +------------------------------------------- + + This Source Code Form is subject to the terms of the Mozilla Public + License, v. 2.0. If a copy of the MPL was not distributed with this + file, You can obtain one at http://mozilla.org/MPL/2.0/. 
+ +If it is not possible or desirable to put the notice in a particular +file, then You may include the notice in a location (such as a LICENSE +file in a relevant directory) where a recipient would be likely to look +for such a notice. + +You may add additional accurate notices of copyright ownership. + +Exhibit B - "Incompatible With Secondary Licenses" Notice +--------------------------------------------------------- + + This Source Code Form is "Incompatible With Secondary Licenses", as + defined by the Mozilla Public License, v. 2.0. diff --git a/client/rest/licenses/httpcore5-reactive-NOTICE.txt b/client/rest/licenses/httpcore5-reactive-NOTICE.txt new file mode 100644 index 0000000000000..fcf14beb5c1ec --- /dev/null +++ b/client/rest/licenses/httpcore5-reactive-NOTICE.txt @@ -0,0 +1,8 @@ + +Apache HttpComponents Core Reactive Extensions +Copyright 2005-2021 The Apache Software Foundation + +This product includes software developed at +The Apache Software Foundation (http://www.apache.org/). + + diff --git a/client/rest/licenses/reactive-streams-1.0.4.jar.sha1 b/client/rest/licenses/reactive-streams-1.0.4.jar.sha1 new file mode 100644 index 0000000000000..45a80e3f7e361 --- /dev/null +++ b/client/rest/licenses/reactive-streams-1.0.4.jar.sha1 @@ -0,0 +1 @@ +3864a1320d97d7b045f729a326e1e077661f31b7 \ No newline at end of file diff --git a/client/rest/licenses/reactive-streams-LICENSE.txt b/client/rest/licenses/reactive-streams-LICENSE.txt new file mode 100644 index 0000000000000..1e3c7e7c77495 --- /dev/null +++ b/client/rest/licenses/reactive-streams-LICENSE.txt @@ -0,0 +1,21 @@ +MIT No Attribution + +Copyright 2014 Reactive Streams + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. 
\ No newline at end of file diff --git a/client/rest/licenses/reactive-streams-NOTICE.txt b/client/rest/licenses/reactive-streams-NOTICE.txt new file mode 100644 index 0000000000000..e69de29bb2d1d diff --git a/client/rest/licenses/reactor-core-3.5.19.jar.sha1 b/client/rest/licenses/reactor-core-3.5.19.jar.sha1 new file mode 100644 index 0000000000000..04b59d2faae04 --- /dev/null +++ b/client/rest/licenses/reactor-core-3.5.19.jar.sha1 @@ -0,0 +1 @@ +1d49ce1d0df79f28d3927da5f4c46a895b94335f \ No newline at end of file diff --git a/client/rest/licenses/reactor-core-LICENSE.txt b/client/rest/licenses/reactor-core-LICENSE.txt new file mode 100644 index 0000000000000..e5583c184e67a --- /dev/null +++ b/client/rest/licenses/reactor-core-LICENSE.txt @@ -0,0 +1,201 @@ +Apache License + Version 2.0, January 2004 + https://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. 
For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. 
The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. 
+ + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "{}" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright {yyyy} {name of copyright owner} + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + https://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. diff --git a/client/rest/licenses/reactor-core-NOTICE.txt b/client/rest/licenses/reactor-core-NOTICE.txt new file mode 100644 index 0000000000000..e69de29bb2d1d diff --git a/client/rest/src/main/java/org/opensearch/client/Cancellable.java b/client/rest/src/main/java/org/opensearch/client/Cancellable.java index 56e31a3742f35..d087c60927e3e 100644 --- a/client/rest/src/main/java/org/opensearch/client/Cancellable.java +++ b/client/rest/src/main/java/org/opensearch/client/Cancellable.java @@ -34,6 +34,8 @@ import org.apache.hc.client5.http.classic.methods.HttpUriRequestBase; import org.apache.hc.core5.concurrent.CancellableDependency; +import java.io.IOException; +import java.util.concurrent.Callable; import java.util.concurrent.CancellationException; /** @@ -77,7 +79,7 @@ public synchronized boolean cancel() { } /** - * Executes some arbitrary code iff the on-going request has not been cancelled, otherwise throws {@link CancellationException}. + * Executes some arbitrary code if the on-going request has not been cancelled, otherwise throws {@link CancellationException}. * This is needed to guarantee that cancelling a request works correctly even in case {@link #cancel()} is called between different * attempts of the same request. The low-level client reuses the same instance of the {@link CancellableDependency} by calling * {@link HttpUriRequestBase#reset()} between subsequent retries. The {@link #cancel()} method can be called at anytime, @@ -95,6 +97,31 @@ synchronized void runIfNotCancelled(Runnable runnable) { runnable.run(); } + /** + * Executes some arbitrary code if the on-going request has not been cancelled, otherwise throws {@link CancellationException}. + * This is needed to guarantee that cancelling a request works correctly even in case {@link #cancel()} is called between different + * attempts of the same request. The low-level client reuses the same instance of the {@link CancellableDependency} by calling + * {@link HttpUriRequestBase#reset()} between subsequent retries. The {@link #cancel()} method can be called at anytime, + * and we need to handle the case where it gets called while there is no request being executed as one attempt may have failed and + * the subsequent attempt has not been started yet. 
+     * If the request has already been cancelled we don't go ahead with the next attempt, and artificially raise the
+     * {@link CancellationException}, otherwise we run the provided {@link Callable} which will reset the request and send the next attempt.
+     * Note that this method must be synchronized as well as the {@link #cancel()} method, to prevent a request from being cancelled
+     * when there is no future to cancel, which would make cancelling the request a no-op.
+     */
+    synchronized <T> T callIfNotCancelled(Callable<T> callable) throws IOException {
+        if (this.httpRequest.isCancelled()) {
+            throw newCancellationException();
+        }
+        try {
+            return callable.call();
+        } catch (final IOException ex) {
+            throw ex;
+        } catch (final Exception ex) {
+            throw new IOException(ex);
+        }
+    }
+
     static CancellationException newCancellationException() {
         return new CancellationException("request was cancelled");
     }
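For reference, the contract of callIfNotCancelled boils down to the following standalone sketch (illustrative only, not part of the patch; the real method is package-private on Cancellable and guards the HttpUriRequestBase rather than a boolean):

    import java.io.IOException;
    import java.util.concurrent.Callable;
    import java.util.concurrent.CancellationException;

    // Mirrors the contract: run the callable only if not yet cancelled;
    // let IOException pass through and wrap any other checked exception.
    final class CancellationGuard {
        private volatile boolean cancelled;

        synchronized void cancel() {
            cancelled = true;
        }

        synchronized <T> T callIfNotCancelled(Callable<T> callable) throws IOException {
            if (cancelled) {
                throw new CancellationException("request was cancelled");
            }
            try {
                return callable.call();
            } catch (IOException ex) {
                throw ex;
            } catch (Exception ex) {
                throw new IOException(ex);
            }
        }
    }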
diff --git a/client/rest/src/main/java/org/opensearch/client/Response.java b/client/rest/src/main/java/org/opensearch/client/Response.java
index b062d937ed630..cb92e33e49156 100644
--- a/client/rest/src/main/java/org/opensearch/client/Response.java
+++ b/client/rest/src/main/java/org/opensearch/client/Response.java
@@ -40,11 +40,8 @@
 import org.apache.hc.core5.http.message.RequestLine;
 import org.apache.hc.core5.http.message.StatusLine;
 
-import java.util.ArrayList;
 import java.util.List;
 import java.util.Objects;
-import java.util.regex.Matcher;
-import java.util.regex.Pattern;
 
 /**
  * Holds an opensearch response. It wraps the {@link HttpResponse} returned and associates it with
@@ -116,79 +113,11 @@ public HttpEntity getEntity() {
         return response.getEntity();
     }
 
-    /**
-     * Optimized regular expression to test if a string matches the RFC 1123 date
-     * format (with quotes and leading space). Start/end of line characters and
-     * atomic groups are used to prevent backtracking.
-     */
-    private static final Pattern WARNING_HEADER_DATE_PATTERN = Pattern.compile("^ " + // start of line, leading space
-    // quoted RFC 1123 date format
-        "\"" + // opening quote
-        "(?>Mon|Tue|Wed|Thu|Fri|Sat|Sun), " + // day of week, atomic group to prevent backtracking
-        "\\d{2} " + // 2-digit day
-        "(?>Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec) " + // month, atomic group to prevent backtracking
-        "\\d{4} " + // 4-digit year
-        "\\d{2}:\\d{2}:\\d{2} " + // (two-digit hour):(two-digit minute):(two-digit second)
-        "GMT" + // GMT
-        "\"$"); // closing quote (optional, since an older version can still send a warn-date), end of line
-
-    /**
-     * Length of RFC 1123 format (with quotes and leading space), used in
-     * matchWarningHeaderPatternByPrefix(String).
-     */
-    private static final int WARNING_HEADER_DATE_LENGTH = 0 + 1 + 1 + 3 + 1 + 1 + 2 + 1 + 3 + 1 + 4 + 1 + 2 + 1 + 2 + 1 + 2 + 1 + 3 + 1;
-
-    /**
-     * Tests if a string matches the RFC 7234 specification for warning headers.
-     * This assumes that the warn code is always 299 and the warn agent is always
-     * OpenSearch.
-     *
-     * @param s the value of a warning header formatted according to RFC 7234
-     * @return {@code true} if the input string matches the specification
-     */
-    private static boolean matchWarningHeaderPatternByPrefix(final String s) {
-        return s.startsWith("299 OpenSearch-");
-    }
-
-    /**
-     * Refer to org.opensearch.common.logging.DeprecationLogger
-     */
-    private static String extractWarningValueFromWarningHeader(final String s) {
-        String warningHeader = s;
-
-        /*
-         * The following block tests for the existence of a RFC 1123 date in the warning header. If the date exists, it is removed for
-         * extractWarningValueFromWarningHeader(String) to work properly (as it does not handle dates).
-         */
-        if (s.length() > WARNING_HEADER_DATE_LENGTH) {
-            final String possibleDateString = s.substring(s.length() - WARNING_HEADER_DATE_LENGTH);
-            final Matcher matcher = WARNING_HEADER_DATE_PATTERN.matcher(possibleDateString);
-
-            if (matcher.matches()) {
-                warningHeader = warningHeader.substring(0, s.length() - WARNING_HEADER_DATE_LENGTH);
-            }
-        }
-
-        final int firstQuote = warningHeader.indexOf('\"');
-        final int lastQuote = warningHeader.length() - 1;
-        final String warningValue = warningHeader.substring(firstQuote + 1, lastQuote);
-        return warningValue;
-    }
-
     /**
      * Returns a list of all warning headers returned in the response.
      */
     public List<String> getWarnings() {
-        List<String> warnings = new ArrayList<>();
-        for (Header header : response.getHeaders("Warning")) {
-            String warning = header.getValue();
-            if (matchWarningHeaderPatternByPrefix(warning)) {
-                warnings.add(extractWarningValueFromWarningHeader(warning));
-            } else {
-                warnings.add(warning);
-            }
-        }
-        return warnings;
+        return ResponseWarningsExtractor.getWarnings(response);
     }
 
     /**
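To make the warning plumbing concrete: a deprecation warning arrives as an RFC 7234 header value, and the extractor returns just the quoted message. A self-contained sketch of the same trimming steps (the agent version and date below are invented):

    public class WarningHeaderDemo {
        public static void main(String[] args) {
            String s = "299 OpenSearch-2.15.0 \"[deprecated_setting] is removed\" \"Mon, 24 Jun 2024 21:27:39 GMT\"";
            // 32 == WARNING_HEADER_DATE_LENGTH: a leading space plus the quoted RFC 1123 date
            String withoutDate = s.substring(0, s.length() - 32);
            // take the text between the first and last remaining quotes
            String message = withoutDate.substring(withoutDate.indexOf('"') + 1, withoutDate.length() - 1);
            System.out.println(message); // prints: [deprecated_setting] is removed
        }
    }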
diff --git a/client/rest/src/main/java/org/opensearch/client/ResponseWarningsExtractor.java b/client/rest/src/main/java/org/opensearch/client/ResponseWarningsExtractor.java
new file mode 100644
index 0000000000000..441daff4f3af4
--- /dev/null
+++ b/client/rest/src/main/java/org/opensearch/client/ResponseWarningsExtractor.java
@@ -0,0 +1,99 @@
+/*
+ * SPDX-License-Identifier: Apache-2.0
+ *
+ * The OpenSearch Contributors require contributions made to
+ * this file be licensed under the Apache-2.0 license or a
+ * compatible open source license.
+ */
+
+package org.opensearch.client;
+
+import org.apache.hc.core5.http.Header;
+import org.apache.hc.core5.http.HttpResponse;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.regex.Matcher;
+import java.util.regex.Pattern;
+
+final class ResponseWarningsExtractor {
+
+    /**
+     * Optimized regular expression to test if a string matches the RFC 1123 date
+     * format (with quotes and leading space). Start/end of line characters and
+     * atomic groups are used to prevent backtracking.
+     */
+    private static final Pattern WARNING_HEADER_DATE_PATTERN = Pattern.compile("^ " + // start of line, leading space
+    // quoted RFC 1123 date format
+        "\"" + // opening quote
+        "(?>Mon|Tue|Wed|Thu|Fri|Sat|Sun), " + // day of week, atomic group to prevent backtracking
+        "\\d{2} " + // 2-digit day
+        "(?>Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec) " + // month, atomic group to prevent backtracking
+        "\\d{4} " + // 4-digit year
+        "\\d{2}:\\d{2}:\\d{2} " + // (two-digit hour):(two-digit minute):(two-digit second)
+        "GMT" + // GMT
+        "\"$"); // closing quote (optional, since an older version can still send a warn-date), end of line
+
+    /**
+     * Length of RFC 1123 format (with quotes and leading space), used in
+     * matchWarningHeaderPatternByPrefix(String).
+     */
+    private static final int WARNING_HEADER_DATE_LENGTH = 0 + 1 + 1 + 3 + 1 + 1 + 2 + 1 + 3 + 1 + 4 + 1 + 2 + 1 + 2 + 1 + 2 + 1 + 3 + 1;
+
+    private ResponseWarningsExtractor() {}
+
+    /**
+     * Returns a list of all warning headers returned in the response.
+     * @param response HTTP response
+     */
+    static List<String> getWarnings(final HttpResponse response) {
+        List<String> warnings = new ArrayList<>();
+        for (Header header : response.getHeaders("Warning")) {
+            String warning = header.getValue();
+            if (matchWarningHeaderPatternByPrefix(warning)) {
+                warnings.add(extractWarningValueFromWarningHeader(warning));
+            } else {
+                warnings.add(warning);
+            }
+        }
+        return warnings;
+    }
+
+    /**
+     * Tests if a string matches the RFC 7234 specification for warning headers.
+     * This assumes that the warn code is always 299 and the warn agent is always
+     * OpenSearch.
+     *
+     * @param s the value of a warning header formatted according to RFC 7234
+     * @return {@code true} if the input string matches the specification
+     */
+    private static boolean matchWarningHeaderPatternByPrefix(final String s) {
+        return s.startsWith("299 OpenSearch-");
+    }
+
+    /**
+     * Refer to org.opensearch.common.logging.DeprecationLogger
+     */
+    private static String extractWarningValueFromWarningHeader(final String s) {
+        String warningHeader = s;
+
+        /*
+         * The following block tests for the existence of a RFC 1123 date in the warning header. If the date exists, it is removed for
+         * extractWarningValueFromWarningHeader(String) to work properly (as it does not handle dates).
+         */
+        if (s.length() > WARNING_HEADER_DATE_LENGTH) {
+            final String possibleDateString = s.substring(s.length() - WARNING_HEADER_DATE_LENGTH);
+            final Matcher matcher = WARNING_HEADER_DATE_PATTERN.matcher(possibleDateString);
+
+            if (matcher.matches()) {
+                warningHeader = warningHeader.substring(0, s.length() - WARNING_HEADER_DATE_LENGTH);
+            }
+        }
+
+        final int firstQuote = warningHeader.indexOf('\"');
+        final int lastQuote = warningHeader.length() - 1;
+        final String warningValue = warningHeader.substring(firstQuote + 1, lastQuote);
+        return warningValue;
+    }
+
+}
diff --git a/client/rest/src/main/java/org/opensearch/client/RestClient.java b/client/rest/src/main/java/org/opensearch/client/RestClient.java
index 15905add76c4f..5c87e3fda5701 100644
--- a/client/rest/src/main/java/org/opensearch/client/RestClient.java
+++ b/client/rest/src/main/java/org/opensearch/client/RestClient.java
@@ -62,14 +62,19 @@
 import org.apache.hc.core5.http.HttpEntity;
 import org.apache.hc.core5.http.HttpHost;
 import org.apache.hc.core5.http.HttpRequest;
+import org.apache.hc.core5.http.HttpResponse;
+import org.apache.hc.core5.http.Message;
 import org.apache.hc.core5.http.io.entity.HttpEntityWrapper;
+import org.apache.hc.core5.http.message.BasicClassicHttpResponse;
 import org.apache.hc.core5.http.message.RequestLine;
 import org.apache.hc.core5.http.nio.AsyncRequestProducer;
 import org.apache.hc.core5.http.nio.AsyncResponseConsumer;
 import org.apache.hc.core5.net.URIBuilder;
+import org.apache.hc.core5.reactive.ReactiveResponseConsumer;
 import org.apache.hc.core5.reactor.IOReactorStatus;
 import org.apache.hc.core5.util.Args;
 import org.opensearch.client.http.HttpUriRequestProducer;
+import org.opensearch.client.http.ReactiveHttpUriRequestProducer;
 
 import javax.net.ssl.SSLHandshakeException;
 
@@ -83,6 +88,7 @@
 import java.net.SocketTimeoutException;
 import java.net.URI;
 import java.net.URISyntaxException;
+import java.nio.ByteBuffer;
 import java.util.ArrayList;
 import java.util.Arrays;
 import java.util.Base64;
@@ -98,6 +104,7 @@
 import java.util.Objects;
 import java.util.Optional;
 import java.util.Set;
+import java.util.concurrent.CancellationException;
 import java.util.concurrent.ConcurrentHashMap;
 import java.util.concurrent.ConcurrentMap;
 import java.util.concurrent.ExecutionException;
@@ -106,6 +113,10 @@
 import java.util.stream.Collectors;
 import java.util.zip.GZIPOutputStream;
 
+import org.reactivestreams.Publisher;
+import reactor.core.publisher.Mono;
+import reactor.core.publisher.MonoSink;
+
 import static java.nio.charset.StandardCharsets.UTF_8;
 import static java.util.Collections.singletonList;
 
@@ -300,6 +311,23 @@ public boolean isRunning() {
         return client.getStatus() == IOReactorStatus.ACTIVE;
     }
 
+    /**
+     * Sends a streaming request to the OpenSearch cluster that the client points to and returns the streaming response. This is an experimental API.
+     * @param request streaming request
+     * @return streaming response
+     * @throws IOException IOException
+     */
+    public StreamingResponse<ByteBuffer> streamRequest(StreamingRequest<ByteBuffer> request) throws IOException {
+        final InternalStreamingRequest internalRequest = new InternalStreamingRequest(request);
+
+        final StreamingResponse<ByteBuffer> response = new StreamingResponse<>(
+            new RequestLine(internalRequest.httpRequest),
+            streamRequest(nextNodes(), internalRequest)
+        );
+
+        return response;
+    }
+
     /**
      * Sends a request to the OpenSearch cluster that the client points to.
      * Blocks until the request is completed and returns its response or fails
@@ -332,13 +360,13 @@ public Response performRequest(Request request) throws IOException {
 
     private Response performRequest(final NodeTuple<Iterator<Node>> nodeTuple, final InternalRequest request, Exception previousException)
         throws IOException {
-        RequestContext context = request.createContextForNextAttempt(nodeTuple.nodes.next(), nodeTuple.authCache);
+        RequestContext<ClassicHttpResponse> context = request.createContextForNextAttempt(nodeTuple.nodes.next(), nodeTuple.authCache);
         ClassicHttpResponse httpResponse;
         try {
-            httpResponse = client.execute(context.requestProducer, context.asyncResponseConsumer, context.context, null).get();
+            httpResponse = client.execute(context.requestProducer(), context.asyncResponseConsumer(), context.context(), null).get();
         } catch (Exception e) {
-            RequestLogger.logFailedRequest(logger, request.httpRequest, context.node, e);
-            onFailure(context.node);
+            RequestLogger.logFailedRequest(logger, request.httpRequest, context.node(), e);
+            onFailure(context.node());
             Exception cause = extractAndWrapCause(e);
             addSuppressedException(previousException, cause);
             if (nodeTuple.nodes.hasNext()) {
@@ -352,7 +380,7 @@ private Response performRequest(final NodeTuple<Iterator<Node>> nodeTuple, final
             }
             throw new IllegalStateException("unexpected exception type: must be either RuntimeException or IOException", cause);
         }
-        ResponseOrResponseException responseOrResponseException = convertResponse(request, context.node, httpResponse);
+        ResponseOrResponseException responseOrResponseException = convertResponse(request, context.node(), httpResponse);
         if (responseOrResponseException.responseException == null) {
             return responseOrResponseException.response;
         }
@@ -363,6 +391,46 @@ private Response performRequest(final NodeTuple<Iterator<Node>> nodeTuple, final
         throw responseOrResponseException.responseException;
     }
 
+    private Publisher<Message<HttpResponse, Publisher<ByteBuffer>>> streamRequest(
+        final NodeTuple<Iterator<Node>> nodeTuple,
+        final InternalStreamingRequest request
+    ) throws IOException {
+        return request.cancellable.callIfNotCancelled(() -> {
+            final Node node = nodeTuple.nodes.next();
+
+            final Mono<Message<HttpResponse, Publisher<ByteBuffer>>> publisher = Mono.create(emitter -> {
+                final RequestContext<Void> context = request.createContextForNextAttempt(node, nodeTuple.authCache, emitter);
+                final Future<Void> future = client.execute(
+                    context.requestProducer(),
+                    context.asyncResponseConsumer(),
+                    context.context(),
+                    null
+                );
+
+                if (future instanceof org.apache.hc.core5.concurrent.Cancellable) {
+                    request.httpRequest.setDependency((org.apache.hc.core5.concurrent.Cancellable) future);
+                }
+            });
+
+            return publisher.flatMap(message -> {
+                try {
+                    final ResponseOrResponseException responseOrResponseException = convertResponse(request, node, message);
+                    if (responseOrResponseException.responseException == null) {
+                        return Mono.just(message);
+                    } else {
+                        if (nodeTuple.nodes.hasNext()) {
+                            return Mono.from(streamRequest(nodeTuple, request));
+                        } else {
+                            return Mono.error(responseOrResponseException.responseException);
+                        }
+                    }
+                } catch (final Exception ex) {
+                    return Mono.error(ex);
+                }
+            });
+        });
+    }
+
     private ResponseOrResponseException convertResponse(InternalRequest request, Node node, ClassicHttpResponse httpResponse)
         throws IOException {
         RequestLogger.logResponse(logger, request.httpRequest, node.getHost(), httpResponse);
@@ -393,6 +461,40 @@ private ResponseOrResponseException convertResponse(InternalRequest request, Nod
         throw responseException;
     }
 
+    private ResponseOrResponseException convertResponse(
+        InternalStreamingRequest request,
+        Node node,
+        Message<HttpResponse, Publisher<ByteBuffer>> message
+    ) throws IOException {
+
+        // Streaming Response could accumulate a lot of data so we may not be able to fully consume it.
+        final ClassicHttpResponse httpResponse = new BasicClassicHttpResponse(
+            message.getHead().getCode(),
+            message.getHead().getReasonPhrase()
+        );
+        final Response response = new Response(new RequestLine(request.httpRequest), node.getHost(), httpResponse);
+
+        RequestLogger.logResponse(logger, request.httpRequest, node.getHost(), httpResponse);
+        int statusCode = httpResponse.getCode();
+
+        if (isSuccessfulResponse(statusCode) || request.ignoreErrorCodes.contains(response.getStatusLine().getStatusCode())) {
+            onResponse(node);
+            if (request.warningsHandler.warningsShouldFailRequest(response.getWarnings())) {
+                throw new WarningFailureException(response);
+            }
+            return new ResponseOrResponseException(response);
+        }
+        ResponseException responseException = new ResponseException(response);
+        if (isRetryStatus(statusCode)) {
+            // mark host dead and retry against next one
+            onFailure(node);
+            return new ResponseOrResponseException(responseException);
+        }
+        // mark host alive and don't retry, as the error should be a request problem
+        onResponse(node);
+        throw responseException;
+    }
+
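Putting the new plumbing together, a hypothetical caller of the experimental API would look like the sketch below. The host, endpoint, and payload are invented for illustration; the streaming endpoint itself must be provided server-side and is not part of this patch:

    import java.nio.ByteBuffer;
    import java.nio.charset.StandardCharsets;

    import org.apache.hc.core5.http.HttpHost;
    import org.opensearch.client.RestClient;
    import org.opensearch.client.StreamingRequest;
    import org.opensearch.client.StreamingResponse;

    import reactor.core.publisher.Flux;

    public class StreamingDemo {
        public static void main(String[] args) throws Exception {
            try (RestClient client = RestClient.builder(new HttpHost("http", "localhost", 9200)).build()) {
                StreamingRequest<ByteBuffer> request = new StreamingRequest<>(
                    "POST",
                    "/_bulk/stream", // illustrative endpoint
                    Flux.just(ByteBuffer.wrap("{\"index\":{}}\n{\"f\":1}\n".getBytes(StandardCharsets.UTF_8)))
                );
                StreamingResponse<ByteBuffer> response = client.streamRequest(request);
                // body chunks arrive as a reactive stream
                Flux.from(response.getBody())
                    .map(buf -> StandardCharsets.UTF_8.decode(buf).toString())
                    .doOnNext(System.out::print)
                    .blockLast();
            }
        }
    }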
     /**
      * Sends a request to the OpenSearch cluster that the client points to.
      * The request is executed asynchronously and the provided
@@ -427,16 +529,23 @@ private void performRequestAsync(
         final FailureTrackingResponseListener listener
     ) {
         request.cancellable.runIfNotCancelled(() -> {
-            final RequestContext context = request.createContextForNextAttempt(nodeTuple.nodes.next(), nodeTuple.authCache);
+            final RequestContext<ClassicHttpResponse> context = request.createContextForNextAttempt(
+                nodeTuple.nodes.next(),
+                nodeTuple.authCache
+            );
             Future<ClassicHttpResponse> future = client.execute(
-                context.requestProducer,
-                context.asyncResponseConsumer,
-                context.context,
+                context.requestProducer(),
+                context.asyncResponseConsumer(),
+                context.context(),
                 new FutureCallback<ClassicHttpResponse>() {
                     @Override
                     public void completed(ClassicHttpResponse httpResponse) {
                         try {
-                            ResponseOrResponseException responseOrResponseException = convertResponse(request, context.node, httpResponse);
+                            ResponseOrResponseException responseOrResponseException = convertResponse(
+                                request,
+                                context.node(),
+                                httpResponse
+                            );
                             if (responseOrResponseException.responseException == null) {
                                 listener.onSuccess(responseOrResponseException.response);
                             } else {
@@ -455,8 +564,8 @@ public void completed(ClassicHttpResponse httpResponse) {
                     @Override
                     public void failed(Exception failure) {
                         try {
-                            RequestLogger.logFailedRequest(logger, request.httpRequest, context.node, failure);
-                            onFailure(context.node);
+                            RequestLogger.logFailedRequest(logger, request.httpRequest, context.node(), failure);
+                            onFailure(context.node());
                             if (nodeTuple.nodes.hasNext()) {
                                 listener.trackFailure(failure);
                                 performRequestAsync(nodeTuple, request, listener);
@@ -822,6 +931,66 @@ public void remove() {
         }
     }
 
+    private class InternalStreamingRequest {
+        private final StreamingRequest<ByteBuffer> request;
+        private final Set<Integer> ignoreErrorCodes;
+        private final HttpUriRequestBase httpRequest;
+        private final Cancellable cancellable;
+        private final WarningsHandler warningsHandler;
+
+        InternalStreamingRequest(StreamingRequest<ByteBuffer> request) {
+            this.request = request;
+            Map<String, String> params = new HashMap<>(request.getParameters());
+            // ignore is a special parameter supported by the clients, shouldn't be sent to es
+            String ignoreString = params.remove("ignore");
+            this.ignoreErrorCodes = getIgnoreErrorCodes(ignoreString, request.getMethod());
+            URI uri = buildUri(pathPrefix, request.getEndpoint(), params);
+            this.httpRequest = createHttpRequest(request.getMethod(), uri, null);
+            this.cancellable = Cancellable.fromRequest(httpRequest);
+            setHeaders(httpRequest, request.getOptions().getHeaders());
+            setRequestConfig(httpRequest, request.getOptions().getRequestConfig());
+            this.warningsHandler = request.getOptions().getWarningsHandler() == null
+                ? RestClient.this.warningsHandler
+                : request.getOptions().getWarningsHandler();
+        }
+
+        private void setHeaders(HttpRequest httpRequest, Collection<Header> requestHeaders) {
+            // request headers override default headers, so we don't add default headers if they exist as request headers
+            final Set<String> requestNames = new HashSet<>(requestHeaders.size());
+            for (Header requestHeader : requestHeaders) {
+                httpRequest.addHeader(requestHeader);
+                requestNames.add(requestHeader.getName());
+            }
+            for (Header defaultHeader : defaultHeaders) {
+                if (requestNames.contains(defaultHeader.getName()) == false) {
+                    httpRequest.addHeader(defaultHeader);
+                }
+            }
+            if (compressionEnabled) {
+                httpRequest.addHeader("Accept-Encoding", "gzip");
+            }
+        }
+
+        private void setRequestConfig(HttpUriRequestBase httpRequest, RequestConfig requestConfig) {
+            if (requestConfig != null) {
+                httpRequest.setConfig(requestConfig);
+            }
+        }
+
+        public Publisher<ByteBuffer> getPublisher() {
+            return request.getBody();
+        }
+
+        RequestContext<Void> createContextForNextAttempt(
+            Node node,
+            AuthCache authCache,
+            MonoSink<Message<HttpResponse, Publisher<ByteBuffer>>> emitter
+        ) {
+            this.httpRequest.reset();
+            return new ReactiveRequestContext(this, node, authCache, emitter);
+        }
+    }
+
     private class InternalRequest {
         private final Request request;
         private final Set<Integer> ignoreErrorCodes;
@@ -868,12 +1037,22 @@ private void setRequestConfig(HttpUriRequestBase httpRequest, RequestConfig requ
             }
         }
 
-        RequestContext createContextForNextAttempt(Node node, AuthCache authCache) {
+        RequestContext<ClassicHttpResponse> createContextForNextAttempt(Node node, AuthCache authCache) {
             this.httpRequest.reset();
-            return new RequestContext(this, node, authCache);
+            return new AsyncRequestContext(this, node, authCache);
         }
     }
 
+    private interface RequestContext<T> {
+        Node node();
+
+        AsyncRequestProducer requestProducer();
+
+        AsyncResponseConsumer<T> asyncResponseConsumer();
+
+        HttpClientContext context();
+    }
+
     /**
      * The Apache HttpClient 5 adds "Authorization" header even if the credentials for basic authentication are not provided
      * (effectively, username and password are 'null'). To work around that, wrapping the AuthCache around current HttpClientContext
@@ -934,13 +1113,73 @@ public void clear() {
 
     }
 
-    private static class RequestContext {
+    private static class ReactiveRequestContext implements RequestContext<Void> {
+        private final Node node;
+        private final AsyncRequestProducer requestProducer;
+        private final AsyncResponseConsumer<Void> asyncResponseConsumer;
+        private final HttpClientContext context;
+
+        ReactiveRequestContext(
+            InternalStreamingRequest request,
+            Node node,
+            AuthCache authCache,
+            MonoSink<Message<HttpResponse, Publisher<ByteBuffer>>> emitter
+        ) {
+            this.node = node;
+            // we stream the request body if the entity allows for it
+            this.requestProducer = ReactiveHttpUriRequestProducer.create(request.httpRequest, node.getHost(), request.getPublisher());
+            this.asyncResponseConsumer = new ReactiveResponseConsumer(new FutureCallback<Message<HttpResponse, Publisher<ByteBuffer>>>() {
+                @Override
+                public void failed(Exception ex) {
+                    emitter.error(ex);
+                }
+
+                @Override
+                public void completed(Message<HttpResponse, Publisher<ByteBuffer>> result) {
+                    if (result == null) {
+                        emitter.success();
+                    } else {
+                        emitter.success(result);
+                    }
+                }
+
+                @Override
+                public void cancelled() {
+                    failed(new CancellationException("Future cancelled"));
+                }
+            });
+            this.context = HttpClientContext.create();
+            context.setAuthCache(new WrappingAuthCache(context, authCache));
+        }
+
+        @Override
+        public AsyncResponseConsumer<Void> asyncResponseConsumer() {
+            return asyncResponseConsumer;
+        }
+
+        @Override
+        public HttpClientContext context() {
+            return context;
+        }
+
+        @Override
+        public Node node() {
+            return node;
+        }
+
+        @Override
+        public AsyncRequestProducer requestProducer() {
+            return requestProducer;
+        }
+    }
+
+    private static class AsyncRequestContext implements RequestContext<ClassicHttpResponse> {
         private final Node node;
         private final AsyncRequestProducer requestProducer;
         private final AsyncResponseConsumer<ClassicHttpResponse> asyncResponseConsumer;
         private final HttpClientContext context;
 
-        RequestContext(InternalRequest request, Node node, AuthCache authCache) {
+        AsyncRequestContext(InternalRequest request, Node node, AuthCache authCache) {
             this.node = node;
             // we stream the request body if the entity allows for it
             this.requestProducer = HttpUriRequestProducer.create(request.httpRequest, node.getHost());
@@ -950,6 +1189,26 @@ private static class RequestContext {
             this.context = HttpClientContext.create();
             context.setAuthCache(new WrappingAuthCache(context, authCache));
         }
+
+        @Override
+        public AsyncResponseConsumer<ClassicHttpResponse> asyncResponseConsumer() {
+            return asyncResponseConsumer;
+        }
+
+        @Override
+        public HttpClientContext context() {
+            return context;
+        }
+
+        @Override
+        public Node node() {
+            return node;
+        }
+
+        @Override
+        public AsyncRequestProducer requestProducer() {
+            return requestProducer;
+        }
     }
 
     private static Set<Integer> getIgnoreErrorCodes(String ignoreString, String requestMethod) {
diff --git a/client/rest/src/main/java/org/opensearch/client/StreamingRequest.java b/client/rest/src/main/java/org/opensearch/client/StreamingRequest.java
new file mode 100644
index 0000000000000..e1767407b1353
--- /dev/null
+++ b/client/rest/src/main/java/org/opensearch/client/StreamingRequest.java
@@ -0,0 +1,114 @@
+/*
+ * SPDX-License-Identifier: Apache-2.0
+ *
+ * The OpenSearch Contributors require contributions made to
+ * this file be licensed under the Apache-2.0 license or a
+ * compatible open source license.
+ */
+
+package org.opensearch.client;
+
+import java.util.HashMap;
+import java.util.Map;
+import java.util.Objects;
+
+import org.reactivestreams.Publisher;
+
+import static java.util.Collections.unmodifiableMap;
+
+/**
+ * HTTP Streaming Request to OpenSearch. This is an experimental API.
+ */
+public class StreamingRequest<T> {
+    private final String method;
+    private final String endpoint;
+    private final Map<String, String> parameters = new HashMap<>();
+
+    private RequestOptions options = RequestOptions.DEFAULT;
+    private final Publisher<T> publisher;
+
+    /**
+     * Constructor
+     * @param method method
+     * @param endpoint endpoint
+     * @param publisher publisher
+     */
+    public StreamingRequest(String method, String endpoint, Publisher<T> publisher) {
+        this.method = method;
+        this.endpoint = endpoint;
+        this.publisher = publisher;
+    }
+
+    /**
+     * Get endpoint
+     * @return endpoint
+     */
+    public String getEndpoint() {
+        return endpoint;
+    }
+
+    /**
+     * Get method
+     * @return method
+     */
+    public String getMethod() {
+        return method;
+    }
+
+    /**
+     * Get options
+     * @return options
+     */
+    public RequestOptions getOptions() {
+        return options;
+    }
+
+    /**
+     * Get parameters
+     * @return parameters
+     */
+    public Map<String, String> getParameters() {
+        if (options.getParameters().isEmpty()) {
+            return unmodifiableMap(parameters);
+        } else {
+            Map<String, String> combinedParameters = new HashMap<>(parameters);
+            combinedParameters.putAll(options.getParameters());
+            return unmodifiableMap(combinedParameters);
+        }
+    }
+
+    /**
+     * Add a query string parameter.
+     * @param name the name of the url parameter. Must not be null.
+     * @param value the value of the url parameter. If {@code null} then
+     *      the parameter is sent as {@code name} rather than {@code name=value}
+     * @throws IllegalArgumentException if a parameter with that name has
+     *      already been set
+     */
+    public void addParameter(String name, String value) {
+        Objects.requireNonNull(name, "url parameter name cannot be null");
+        if (parameters.containsKey(name)) {
+            throw new IllegalArgumentException("url parameter [" + name + "] has already been set to [" + parameters.get(name) + "]");
+        } else {
+            parameters.put(name, value);
+        }
+    }
+
+    /**
+     * Add query parameters using the provided map of key value pairs.
+     *
+     * @param paramSource a map of key value pairs where the key is the url parameter.
+     * @throws IllegalArgumentException if a parameter with that name has already been set.
+     */
+    public void addParameters(Map<String, String> paramSource) {
+        paramSource.forEach(this::addParameter);
+    }
+
+    /**
+     * Body publisher
+     * @return body publisher
+     */
+    public Publisher<T> getBody() {
+        return publisher;
+    }
+}
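As with Request, a query parameter may only be set once per StreamingRequest; a second attempt with the same name fails fast. A small sketch (the endpoint and parameter values are made up):

    import java.nio.ByteBuffer;

    import org.opensearch.client.StreamingRequest;

    import reactor.core.publisher.Flux;

    public class StreamingRequestParams {
        public static void main(String[] args) {
            StreamingRequest<ByteBuffer> r = new StreamingRequest<>("POST", "/_bulk/stream", Flux.empty());
            r.addParameter("refresh", "true");
            try {
                r.addParameter("refresh", "wait_for"); // same name twice is rejected
            } catch (IllegalArgumentException e) {
                System.out.println(e.getMessage());
            }
        }
    }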
+ */
+
+package org.opensearch.client;
+
+import org.apache.hc.core5.http.HttpHost;
+import org.apache.hc.core5.http.HttpResponse;
+import org.apache.hc.core5.http.Message;
+import org.apache.hc.core5.http.message.RequestLine;
+import org.apache.hc.core5.http.message.StatusLine;
+
+import java.util.List;
+
+import org.reactivestreams.Publisher;
+import reactor.core.publisher.Flux;
+import reactor.core.publisher.Mono;
+
+/**
+ * HTTP Streaming Response from OpenSearch. This is an experimental API.
+ */
+public class StreamingResponse<T> {
+    private final RequestLine requestLine;
+    private final Mono<Message<HttpResponse, Publisher<T>>> publisher;
+    private volatile HttpHost host;
+
+    /**
+     * Constructor
+     * @param requestLine request line
+     * @param publisher message publisher (response with a body)
+     */
+    public StreamingResponse(RequestLine requestLine, Publisher<Message<HttpResponse, Publisher<T>>> publisher) {
+        this.requestLine = requestLine;
+        // We cache the publisher here so the body and/or HttpResponse could
+        // be consumed independently and/or more than once.
+        this.publisher = Mono.from(publisher).cache();
+    }
+
+    /**
+     * Set host
+     * @param host host
+     */
+    public void setHost(HttpHost host) {
+        this.host = host;
+    }
+
+    /**
+     * Get request line
+     * @return request line
+     */
+    public RequestLine getRequestLine() {
+        return requestLine;
+    }
+
+    /**
+     * Get host
+     * @return host
+     */
+    public HttpHost getHost() {
+        return host;
+    }
+
+    /**
+     * Get response body {@link Publisher}
+     * @return response body {@link Publisher}
+     */
+    public Publisher<T> getBody() {
+        return publisher.flatMapMany(m -> Flux.from(m.getBody()));
+    }
+
+    /**
+     * Returns the status line of the current response
+     */
+    public StatusLine getStatusLine() {
+        return new StatusLine(
+            publisher.map(Message::getHead)
+                .onErrorResume(ResponseException.class, e -> Mono.just(e.getResponse().getHttpResponse()))
+                .block()
+        );
+    }
+
+    /**
+     * Returns a list of all warning headers returned in the response.
+     */
+    public List<String> getWarnings() {
+        return ResponseWarningsExtractor.getWarnings(
+            publisher.map(Message::getHead)
+                .onErrorResume(ResponseException.class, e -> Mono.just(e.getResponse().getHttpResponse()))
+                .block()
+        );
+    }
+}
diff --git a/client/rest/src/main/java/org/opensearch/client/http/ReactiveHttpUriRequestProducer.java b/client/rest/src/main/java/org/opensearch/client/http/ReactiveHttpUriRequestProducer.java
new file mode 100644
index 0000000000000..63a71e29b8b31
--- /dev/null
+++ b/client/rest/src/main/java/org/opensearch/client/http/ReactiveHttpUriRequestProducer.java
@@ -0,0 +1,75 @@
+/*
+ * SPDX-License-Identifier: Apache-2.0
+ *
+ * The OpenSearch Contributors require contributions made to
+ * this file be licensed under the Apache-2.0 license or a
+ * compatible open source license.
+ */ + +package org.opensearch.client.http; + +import org.apache.hc.client5.http.classic.methods.HttpUriRequestBase; +import org.apache.hc.core5.http.ContentType; +import org.apache.hc.core5.http.Header; +import org.apache.hc.core5.http.HttpHost; +import org.apache.hc.core5.http.nio.AsyncEntityProducer; +import org.apache.hc.core5.http.nio.support.BasicRequestProducer; +import org.apache.hc.core5.net.URIAuthority; +import org.apache.hc.core5.reactive.ReactiveEntityProducer; +import org.apache.hc.core5.util.Args; + +import java.nio.ByteBuffer; + +import org.reactivestreams.Publisher; + +/** + * The reactive producer of the {@link HttpUriRequestBase} instances associated with a particular {@link HttpHost} + */ +public class ReactiveHttpUriRequestProducer extends BasicRequestProducer { + private final HttpUriRequestBase request; + + ReactiveHttpUriRequestProducer(final HttpUriRequestBase request, final AsyncEntityProducer entityProducer) { + super(request, entityProducer); + this.request = request; + } + + /** + * Get the produced {@link HttpUriRequestBase} instance + * @return produced {@link HttpUriRequestBase} instance + */ + public HttpUriRequestBase getRequest() { + return request; + } + + /** + * Create new request producer for {@link HttpUriRequestBase} instance and {@link HttpHost} + * @param request {@link HttpUriRequestBase} instance + * @param host {@link HttpHost} instance + * @param publisher publisher + * @return new request producer + */ + public static ReactiveHttpUriRequestProducer create( + final HttpUriRequestBase request, + final HttpHost host, + Publisher publisher + ) { + Args.notNull(request, "Request"); + Args.notNull(host, "HttpHost"); + + // TODO: Should we copy request here instead of modifying in place? + request.setAuthority(new URIAuthority(host)); + request.setScheme(host.getSchemeName()); + + final Header contentTypeHeader = request.getFirstHeader("Content-Type"); + final ContentType contentType = (contentTypeHeader == null) + ? ContentType.APPLICATION_JSON + : ContentType.parse(contentTypeHeader.getValue()); + + final Header contentEncodingHeader = request.getFirstHeader("Content-Encoding"); + final String contentEncoding = (contentEncodingHeader == null) ? 
null : contentEncodingHeader.getValue(); + + final AsyncEntityProducer entityProducer = new ReactiveEntityProducer(publisher, -1, contentType, contentEncoding); + return new ReactiveHttpUriRequestProducer(request, entityProducer); + } + +} diff --git a/client/rest/src/test/java/org/opensearch/client/RestClientTests.java b/client/rest/src/test/java/org/opensearch/client/RestClientTests.java index dd51da3a30d8c..f4f1c57cdd588 100644 --- a/client/rest/src/test/java/org/opensearch/client/RestClientTests.java +++ b/client/rest/src/test/java/org/opensearch/client/RestClientTests.java @@ -56,12 +56,15 @@ import java.util.concurrent.atomic.AtomicLong; import java.util.function.Supplier; +import reactor.core.publisher.Mono; + import static java.util.Collections.singletonList; import static org.hamcrest.Matchers.instanceOf; import static org.junit.Assert.assertEquals; import static org.junit.Assert.assertFalse; import static org.junit.Assert.assertSame; import static org.junit.Assert.assertThat; +import static org.junit.Assert.assertThrows; import static org.junit.Assert.assertTrue; import static org.junit.Assert.fail; import static org.mockito.Mockito.mock; @@ -418,6 +421,16 @@ public void testIsRunning() { assertFalse(restClient.isRunning()); } + public void testStreamWithUnsupportedMethod() throws Exception { + try (RestClient restClient = createRestClient()) { + final UnsupportedOperationException ex = assertThrows( + UnsupportedOperationException.class, + () -> restClient.streamRequest(new StreamingRequest<>("unsupported", randomAsciiLettersOfLength(5), Mono.empty())) + ); + assertEquals("http method not supported: unsupported", ex.getMessage()); + } + } + private static void assertNodes(NodeTuple> nodeTuple, AtomicInteger lastNodeIndex, int runs) throws IOException { int distance = lastNodeIndex.get() % nodeTuple.nodes.size(); /* diff --git a/client/sniffer/licenses/httpclient5-5.2.1.jar.sha1 b/client/sniffer/licenses/httpclient5-5.2.1.jar.sha1 deleted file mode 100644 index 3555fe22f8e12..0000000000000 --- a/client/sniffer/licenses/httpclient5-5.2.1.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -0c900514d3446d9ce5d9dbd90c21192048125440 \ No newline at end of file diff --git a/client/sniffer/licenses/httpclient5-5.2.3.jar.sha1 b/client/sniffer/licenses/httpclient5-5.2.3.jar.sha1 new file mode 100644 index 0000000000000..43e233e72001a --- /dev/null +++ b/client/sniffer/licenses/httpclient5-5.2.3.jar.sha1 @@ -0,0 +1 @@ +5d753a99d299756998a08c488f2efdf9cf26198e \ No newline at end of file diff --git a/client/sniffer/licenses/httpcore5-5.2.2.jar.sha1 b/client/sniffer/licenses/httpcore5-5.2.2.jar.sha1 deleted file mode 100644 index b641256c7d4a4..0000000000000 --- a/client/sniffer/licenses/httpcore5-5.2.2.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -6da28f5aa6c2b129ef49632e041a5203ce7507b2 \ No newline at end of file diff --git a/client/sniffer/licenses/httpcore5-5.2.5.jar.sha1 b/client/sniffer/licenses/httpcore5-5.2.5.jar.sha1 new file mode 100644 index 0000000000000..ca97e8612ea39 --- /dev/null +++ b/client/sniffer/licenses/httpcore5-5.2.5.jar.sha1 @@ -0,0 +1 @@ +dab1e18842971a45ca8942491ce005ab86a028d7 \ No newline at end of file diff --git a/plugins/transport-reactor-netty4/build.gradle b/plugins/transport-reactor-netty4/build.gradle index 1a94def3fdff1..089e57f062a9f 100644 --- a/plugins/transport-reactor-netty4/build.gradle +++ b/plugins/transport-reactor-netty4/build.gradle @@ -46,7 +46,7 @@ dependencies { api "io.projectreactor.netty:reactor-netty-core:${versions.reactor_netty}" testImplementation 
"org.apache.logging.log4j:log4j-slf4j-impl:${versions.log4j}" - testImplementation "io.projectreactor:reactor-test:${versions.reactor}" + javaRestTestImplementation "io.projectreactor:reactor-test:${versions.reactor}" testImplementation project(":modules:transport-netty4") } @@ -80,6 +80,10 @@ javaRestTest { systemProperty 'opensearch.set.netty.runtime.available.processors', 'false' } +testClusters.javaRestTest { + setting 'http.type', 'reactor-netty4' +} + thirdPartyAudit { ignoreMissingClasses( 'com.aayushatharva.brotli4j.Brotli4jLoader', diff --git a/plugins/transport-reactor-netty4/src/javaRestTest/java/org/opensearch/rest/ReactorNetty4BadRequestIT.java b/plugins/transport-reactor-netty4/src/javaRestTest/java/org/opensearch/rest/ReactorNetty4BadRequestIT.java new file mode 100644 index 0000000000000..62834483b5e9b --- /dev/null +++ b/plugins/transport-reactor-netty4/src/javaRestTest/java/org/opensearch/rest/ReactorNetty4BadRequestIT.java @@ -0,0 +1,115 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +/* + * Modifications Copyright OpenSearch Contributors. See + * GitHub history for details. 
+ */
+
+package org.opensearch.rest;
+
+import org.opensearch.client.Request;
+import org.opensearch.client.RequestOptions;
+import org.opensearch.client.Response;
+import org.opensearch.client.ResponseException;
+import org.opensearch.common.settings.Setting;
+import org.opensearch.common.settings.Settings;
+import org.opensearch.core.common.unit.ByteSizeValue;
+import org.opensearch.http.HttpTransportSettings;
+import org.opensearch.test.rest.OpenSearchRestTestCase;
+import org.opensearch.test.rest.yaml.ObjectPath;
+
+import java.io.IOException;
+import java.nio.charset.Charset;
+import java.util.Map;
+
+import static org.opensearch.core.rest.RestStatus.REQUEST_URI_TOO_LONG;
+import static org.hamcrest.Matchers.equalTo;
+
+public class ReactorNetty4BadRequestIT extends OpenSearchRestTestCase {
+
+    public void testBadRequest() throws IOException {
+        final Response response = client().performRequest(new Request("GET", "/_nodes/settings"));
+        final ObjectPath objectPath = ObjectPath.createFromResponse(response);
+        final Map<String, Object> map = objectPath.evaluate("nodes");
+        int maxMaxInitialLineLength = Integer.MIN_VALUE;
+        final Setting<ByteSizeValue> httpMaxInitialLineLength = HttpTransportSettings.SETTING_HTTP_MAX_INITIAL_LINE_LENGTH;
+        final String key = httpMaxInitialLineLength.getKey().substring("http.".length());
+        for (Map.Entry<String, Object> entry : map.entrySet()) {
+            @SuppressWarnings("unchecked")
+            final Map<String, Object> settings = (Map<String, Object>) ((Map<String, Object>) entry.getValue()).get("settings");
+            final int maxInitialLineLength;
+            if (settings.containsKey("http")) {
+                @SuppressWarnings("unchecked")
+                final Map<String, Object> httpSettings = (Map<String, Object>) settings.get("http");
+                if (httpSettings.containsKey(key)) {
+                    maxInitialLineLength = ByteSizeValue.parseBytesSizeValue((String) httpSettings.get(key), key).bytesAsInt();
+                } else {
+                    maxInitialLineLength = httpMaxInitialLineLength.getDefault(Settings.EMPTY).bytesAsInt();
+                }
+            } else {
+                maxInitialLineLength = httpMaxInitialLineLength.getDefault(Settings.EMPTY).bytesAsInt();
+            }
+            maxMaxInitialLineLength = Math.max(maxMaxInitialLineLength, maxInitialLineLength);
+        }
+
+        final String path = "/" + new String(new byte[maxMaxInitialLineLength], Charset.forName("UTF-8")).replace('\0', 'a');
+        final ResponseException e = expectThrows(
+            ResponseException.class,
+            () -> client().performRequest(new Request(randomFrom("GET", "POST", "PUT"), path))
+        );
+        // The reactor-netty implementation does not provide a hook to customize or intercept request decoder errors at the moment
+        // (see https://github.com/reactor/reactor-netty/issues/3327).
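+        // Until such a hook exists, the assertion below can only check the raw status code returned by the decoder.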
+ assertThat(e.getResponse().getStatusLine().getStatusCode(), equalTo(REQUEST_URI_TOO_LONG.getStatus())); + } + + public void testInvalidParameterValue() throws IOException { + final Request request = new Request("GET", "/_cluster/settings"); + request.addParameter("pretty", "neither-true-nor-false"); + final ResponseException e = expectThrows(ResponseException.class, () -> client().performRequest(request)); + final Response response = e.getResponse(); + assertThat(response.getStatusLine().getStatusCode(), equalTo(400)); + final ObjectPath objectPath = ObjectPath.createFromResponse(response); + final Map map = objectPath.evaluate("error"); + assertThat(map.get("type"), equalTo("illegal_argument_exception")); + assertThat(map.get("reason"), equalTo("Failed to parse value [neither-true-nor-false] as only [true] or [false] are allowed.")); + } + + public void testInvalidHeaderValue() throws IOException { + final Request request = new Request("GET", "/_cluster/settings"); + final RequestOptions.Builder options = request.getOptions().toBuilder(); + options.addHeader("Content-Type", "\t"); + request.setOptions(options); + final ResponseException e = expectThrows(ResponseException.class, () -> client().performRequest(request)); + final Response response = e.getResponse(); + assertThat(response.getStatusLine().getStatusCode(), equalTo(400)); + final ObjectPath objectPath = ObjectPath.createFromResponse(response); + final Map map = objectPath.evaluate("error"); + assertThat(map.get("type"), equalTo("content_type_header_exception")); + assertThat(map.get("reason"), equalTo("java.lang.IllegalArgumentException: invalid Content-Type header []")); + } +} diff --git a/plugins/transport-reactor-netty4/src/javaRestTest/java/org/opensearch/rest/ReactorNetty4HeadBodyIsEmptyIT.java b/plugins/transport-reactor-netty4/src/javaRestTest/java/org/opensearch/rest/ReactorNetty4HeadBodyIsEmptyIT.java new file mode 100644 index 0000000000000..663eb9ef6e946 --- /dev/null +++ b/plugins/transport-reactor-netty4/src/javaRestTest/java/org/opensearch/rest/ReactorNetty4HeadBodyIsEmptyIT.java @@ -0,0 +1,204 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +/* + * Modifications Copyright OpenSearch Contributors. See + * GitHub history for details. 
+ */ + +package org.opensearch.rest; + +import org.opensearch.client.Request; +import org.opensearch.client.Response; +import org.opensearch.core.xcontent.XContentBuilder; +import org.opensearch.test.rest.OpenSearchRestTestCase; +import org.hamcrest.Matcher; + +import java.io.IOException; +import java.util.Map; + +import static java.util.Collections.emptyMap; +import static java.util.Collections.singletonMap; +import static org.opensearch.common.xcontent.XContentFactory.jsonBuilder; +import static org.opensearch.core.rest.RestStatus.NOT_FOUND; +import static org.opensearch.core.rest.RestStatus.OK; +import static org.hamcrest.Matchers.greaterThan; + +public class ReactorNetty4HeadBodyIsEmptyIT extends OpenSearchRestTestCase { + public void testHeadRoot() throws IOException { + headTestCase("/", emptyMap(), greaterThan(0)); + headTestCase("/", singletonMap("pretty", ""), greaterThan(0)); + headTestCase("/", singletonMap("pretty", "true"), greaterThan(0)); + } + + private void createTestDoc() throws IOException { + createTestDoc("test"); + } + + private void createTestDoc(final String indexName) throws IOException { + try (XContentBuilder builder = jsonBuilder()) { + builder.startObject(); + { + builder.field("test", "test"); + } + builder.endObject(); + Request request = new Request("PUT", "/" + indexName + "/_doc/" + "1"); + request.setJsonEntity(builder.toString()); + client().performRequest(request); + } + } + + public void testDocumentExists() throws IOException { + createTestDoc(); + headTestCase("/test/_doc/1", emptyMap(), greaterThan(0)); + headTestCase("/test/_doc/1", singletonMap("pretty", "true"), greaterThan(0)); + headTestCase("/test/_doc/2", emptyMap(), NOT_FOUND.getStatus(), greaterThan(0)); + } + + public void testIndexExists() throws IOException { + createTestDoc(); + headTestCase("/test", emptyMap(), greaterThan(0)); + headTestCase("/test", singletonMap("pretty", "true"), greaterThan(0)); + } + + public void testAliasExists() throws IOException { + createTestDoc(); + try (XContentBuilder builder = jsonBuilder()) { + builder.startObject(); + { + builder.startArray("actions"); + { + builder.startObject(); + { + builder.startObject("add"); + { + builder.field("index", "test"); + builder.field("alias", "test_alias"); + } + builder.endObject(); + } + builder.endObject(); + } + builder.endArray(); + } + builder.endObject(); + + Request request = new Request("POST", "/_aliases"); + request.setJsonEntity(builder.toString()); + client().performRequest(request); + headTestCase("/_alias/test_alias", emptyMap(), greaterThan(0)); + headTestCase("/test/_alias/test_alias", emptyMap(), greaterThan(0)); + } + } + + public void testAliasDoesNotExist() throws IOException { + createTestDoc(); + headTestCase("/_alias/test_alias", emptyMap(), NOT_FOUND.getStatus(), greaterThan(0)); + headTestCase("/test/_alias/test_alias", emptyMap(), NOT_FOUND.getStatus(), greaterThan(0)); + } + + public void testTemplateExists() throws IOException { + try (XContentBuilder builder = jsonBuilder()) { + builder.startObject(); + { + builder.array("index_patterns", "*"); + builder.startObject("settings"); + { + builder.field("number_of_replicas", 0); + } + builder.endObject(); + } + builder.endObject(); + + Request request = new Request("PUT", "/_template/template"); + request.setJsonEntity(builder.toString()); + client().performRequest(request); + headTestCase("/_template/template", emptyMap(), greaterThan(0)); + } + } + + public void testGetSourceAction() throws IOException { + createTestDoc(); + 
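+        // HEAD for an existing document's _source should return a positive Content-Length; a missing document should return 404.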
headTestCase("/test/_source/1", emptyMap(), greaterThan(0)); + headTestCase("/test/_source/2", emptyMap(), NOT_FOUND.getStatus(), greaterThan(0)); + + try (XContentBuilder builder = jsonBuilder()) { + builder.startObject(); + { + builder.startObject("mappings"); + { + builder.startObject("_source"); + { + builder.field("enabled", false); + } + builder.endObject(); + } + builder.endObject(); + } + builder.endObject(); + + Request request = new Request("PUT", "/test-no-source"); + request.setJsonEntity(builder.toString()); + client().performRequest(request); + createTestDoc("test-no-source"); + headTestCase("/test-no-source/_source/1", emptyMap(), NOT_FOUND.getStatus(), greaterThan(0)); + } + } + + public void testException() throws IOException { + /* + * This will throw an index not found exception which will be sent on the channel; previously when handling HEAD requests that would + * throw an exception, the content was swallowed and a content length header of zero was returned. Instead of swallowing the content + * we now let it rise up to the upstream channel so that it can compute the content length that would be returned. This test case is + * a test for this situation. + */ + headTestCase("/index-not-found-exception", emptyMap(), NOT_FOUND.getStatus(), greaterThan(0)); + } + + private void headTestCase(final String url, final Map params, final Matcher matcher) throws IOException { + headTestCase(url, params, OK.getStatus(), matcher); + } + + private void headTestCase( + final String url, + final Map params, + final int expectedStatusCode, + final Matcher matcher, + final String... expectedWarnings + ) throws IOException { + Request request = new Request("HEAD", url); + for (Map.Entry param : params.entrySet()) { + request.addParameter(param.getKey(), param.getValue()); + } + request.setOptions(expectWarnings(expectedWarnings)); + Response response = client().performRequest(request); + assertEquals(expectedStatusCode, response.getStatusLine().getStatusCode()); + assertThat(Integer.valueOf(response.getHeader("Content-Length")), matcher); + assertNull("HEAD requests shouldn't have a response body but " + url + " did", response.getEntity()); + } + +} diff --git a/plugins/transport-reactor-netty4/src/javaRestTest/java/org/opensearch/rest/ReactorNetty4StreamingIT.java b/plugins/transport-reactor-netty4/src/javaRestTest/java/org/opensearch/rest/ReactorNetty4StreamingIT.java new file mode 100644 index 0000000000000..c564e289e3f88 --- /dev/null +++ b/plugins/transport-reactor-netty4/src/javaRestTest/java/org/opensearch/rest/ReactorNetty4StreamingIT.java @@ -0,0 +1,139 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. 
+ */ + +package org.opensearch.rest; + +import org.opensearch.client.Request; +import org.opensearch.client.Response; +import org.opensearch.client.ResponseException; +import org.opensearch.client.StreamingRequest; +import org.opensearch.client.StreamingResponse; +import org.opensearch.test.rest.OpenSearchRestTestCase; +import org.opensearch.test.rest.yaml.ObjectPath; +import org.junit.After; + +import java.io.IOException; +import java.nio.ByteBuffer; +import java.nio.charset.StandardCharsets; +import java.time.Duration; +import java.util.stream.IntStream; +import java.util.stream.Stream; + +import reactor.core.publisher.Flux; +import reactor.test.StepVerifier; +import reactor.test.scheduler.VirtualTimeScheduler; + +import static org.hamcrest.CoreMatchers.equalTo; +import static org.hamcrest.collection.IsEmptyCollection.empty; + +public class ReactorNetty4StreamingIT extends OpenSearchRestTestCase { + @After + @Override + public void tearDown() throws Exception { + final Request request = new Request("DELETE", "/test-streaming"); + request.addParameter("ignore_unavailable", "true"); + + final Response response = client().performRequest(request); + assertThat(response.getStatusLine().getStatusCode(), equalTo(200)); + + super.tearDown(); + } + + public void testStreamingRequest() throws IOException { + final VirtualTimeScheduler scheduler = VirtualTimeScheduler.create(true); + + final Stream stream = IntStream.range(1, 6) + .mapToObj(id -> "{ \"index\": { \"_index\": \"test-streaming\", \"_id\": \"" + id + "\" } }\n" + "{ \"name\": \"josh\" }\n"); + + final Duration delay = Duration.ofMillis(1); + final StreamingRequest streamingRequest = new StreamingRequest<>( + "POST", + "/_bulk/stream", + Flux.fromStream(stream).delayElements(delay, scheduler).map(s -> ByteBuffer.wrap(s.getBytes(StandardCharsets.UTF_8))) + ); + streamingRequest.addParameter("refresh", "true"); + + final StreamingResponse streamingResponse = client().streamRequest(streamingRequest); + scheduler.advanceTimeBy(delay); /* emit first element */ + + StepVerifier.create(Flux.from(streamingResponse.getBody()).map(b -> new String(b.array(), StandardCharsets.UTF_8))) + .expectNextMatches(s -> s.contains("\"result\":\"created\"") && s.contains("\"_id\":\"1\"")) + .then(() -> scheduler.advanceTimeBy(delay)) + .expectNextMatches(s -> s.contains("\"result\":\"created\"") && s.contains("\"_id\":\"2\"")) + .then(() -> scheduler.advanceTimeBy(delay)) + .expectNextMatches(s -> s.contains("\"result\":\"created\"") && s.contains("\"_id\":\"3\"")) + .then(() -> scheduler.advanceTimeBy(delay)) + .expectNextMatches(s -> s.contains("\"result\":\"created\"") && s.contains("\"_id\":\"4\"")) + .then(() -> scheduler.advanceTimeBy(delay)) + .expectNextMatches(s -> s.contains("\"result\":\"created\"") && s.contains("\"_id\":\"5\"")) + .then(() -> scheduler.advanceTimeBy(delay)) + .expectComplete() + .verify(); + + assertThat(streamingResponse.getStatusLine().getStatusCode(), equalTo(200)); + assertThat(streamingResponse.getWarnings(), empty()); + + final Request request = new Request("GET", "/test-streaming/_count"); + final Response response = client().performRequest(request); + final ObjectPath objectPath = ObjectPath.createFromResponse(response); + final Integer count = objectPath.evaluate("count"); + assertThat(count, equalTo(5)); + } + + public void testStreamingBadRequest() throws IOException { + final Stream stream = Stream.of( + "{ \"index\": { \"_index\": \"test-streaming\", \"_id\": \"1\" } }\n" + "{ \"name\": \"josh\" }\n" + ); + + final 
StreamingRequest streamingRequest = new StreamingRequest<>( + "POST", + "/_bulk/stream", + Flux.fromStream(stream).map(s -> ByteBuffer.wrap(s.getBytes(StandardCharsets.UTF_8))) + ); + streamingRequest.addParameter("refresh", "not-supported-policy"); + + final StreamingResponse streamingResponse = client().streamRequest(streamingRequest); + StepVerifier.create(Flux.from(streamingResponse.getBody()).map(b -> new String(b.array(), StandardCharsets.UTF_8))) + .expectErrorMatches( + ex -> ex instanceof ResponseException && ((ResponseException) ex).getResponse().getStatusLine().getStatusCode() == 400 + ) + .verify(Duration.ofSeconds(10)); + assertThat(streamingResponse.getStatusLine().getStatusCode(), equalTo(400)); + assertThat(streamingResponse.getWarnings(), empty()); + } + + public void testStreamingBadStream() throws IOException { + final VirtualTimeScheduler scheduler = VirtualTimeScheduler.create(true); + + final Stream stream = Stream.of( + "{ \"index\": { \"_index\": \"test-streaming\", \"_id\": \"1\" } }\n" + "{ \"name\": \"josh\" }\n", + "{ \"name\": \"josh\" }\n" + ); + + final Duration delay = Duration.ofMillis(1); + final StreamingRequest streamingRequest = new StreamingRequest<>( + "POST", + "/_bulk/stream", + Flux.fromStream(stream).delayElements(delay, scheduler).map(s -> ByteBuffer.wrap(s.getBytes(StandardCharsets.UTF_8))) + ); + + final StreamingResponse streamingResponse = client().streamRequest(streamingRequest); + scheduler.advanceTimeBy(delay); /* emit first element */ + + StepVerifier.create(Flux.from(streamingResponse.getBody()).map(b -> new String(b.array(), StandardCharsets.UTF_8))) + .expectNextMatches(s -> s.contains("\"result\":\"created\"") && s.contains("\"_id\":\"1\"")) + .then(() -> scheduler.advanceTimeBy(delay)) + .expectNextMatches(s -> s.contains("\"type\":\"illegal_argument_exception\"")) + .then(() -> scheduler.advanceTimeBy(delay)) + .expectComplete() + .verify(); + + assertThat(streamingResponse.getStatusLine().getStatusCode(), equalTo(200)); + assertThat(streamingResponse.getWarnings(), empty()); + } +} diff --git a/plugins/transport-reactor-netty4/src/javaRestTest/java/org/opensearch/rest/ReactorNetty4StreamingStressIT.java b/plugins/transport-reactor-netty4/src/javaRestTest/java/org/opensearch/rest/ReactorNetty4StreamingStressIT.java new file mode 100644 index 0000000000000..a978af1b11db4 --- /dev/null +++ b/plugins/transport-reactor-netty4/src/javaRestTest/java/org/opensearch/rest/ReactorNetty4StreamingStressIT.java @@ -0,0 +1,95 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. 
+ */
+
+package org.opensearch.rest;
+
+import org.apache.hc.core5.http.ConnectionClosedException;
+import org.opensearch.client.Request;
+import org.opensearch.client.Response;
+import org.opensearch.client.StreamingRequest;
+import org.opensearch.client.StreamingResponse;
+import org.opensearch.test.rest.OpenSearchRestTestCase;
+import org.junit.After;
+
+import java.io.InterruptedIOException;
+import java.nio.ByteBuffer;
+import java.nio.charset.StandardCharsets;
+import java.time.Duration;
+import java.util.concurrent.Executors;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicInteger;
+import java.util.stream.Stream;
+
+import reactor.core.publisher.Flux;
+import reactor.test.subscriber.TestSubscriber;
+
+import static org.hamcrest.CoreMatchers.anyOf;
+import static org.hamcrest.CoreMatchers.equalTo;
+import static org.hamcrest.CoreMatchers.instanceOf;
+import static org.hamcrest.CoreMatchers.not;
+import static org.hamcrest.collection.IsEmptyCollection.empty;
+
+public class ReactorNetty4StreamingStressIT extends OpenSearchRestTestCase {
+    @After
+    @Override
+    public void tearDown() throws Exception {
+        final Request request = new Request("DELETE", "/test-stress-streaming");
+        request.addParameter("ignore_unavailable", "true");
+
+        final Response response = adminClient().performRequest(request);
+        assertThat(response.getStatusLine().getStatusCode(), equalTo(200));
+
+        super.tearDown();
+    }
+
+    public void testCloseClientStreamingRequest() throws Exception {
+        final AtomicInteger id = new AtomicInteger(0);
+        final Stream<String> stream = Stream.generate(
+            () -> "{ \"index\": { \"_index\": \"test-stress-streaming\", \"_id\": \""
+                + id.incrementAndGet()
+                + "\" } }\n"
+                + "{ \"name\": \"josh\" }\n"
+        );
+
+        final StreamingRequest<ByteBuffer> streamingRequest = new StreamingRequest<>(
+            "POST",
+            "/_bulk/stream",
+            Flux.fromStream(stream).delayElements(Duration.ofMillis(500)).map(s -> ByteBuffer.wrap(s.getBytes(StandardCharsets.UTF_8)))
+        );
+        streamingRequest.addParameter("refresh", "true");
+
+        final StreamingResponse<ByteBuffer> streamingResponse = client().streamRequest(streamingRequest);
+        TestSubscriber<ByteBuffer> subscriber = TestSubscriber.create();
+        streamingResponse.getBody().subscribe(subscriber);
+
+        final ScheduledExecutorService executor = Executors.newSingleThreadScheduledExecutor();
+        try {
+            // Wait for the subscriber to receive at least one chunk
+            assertBusy(() -> assertThat(subscriber.getReceivedOnNext(), not(empty())));
+
+            // Close the client forcibly
+            executor.schedule(() -> {
+                client().close();
+                return null;
+            }, 2, TimeUnit.SECONDS);
+
+            // Wait for the subscriber to terminate
+            subscriber.block(Duration.ofSeconds(10));
+            assertThat(
+                subscriber.expectTerminalError(),
+                anyOf(instanceOf(InterruptedIOException.class), instanceOf(ConnectionClosedException.class))
+            );
+        } finally {
+            executor.shutdown();
+            if (executor.awaitTermination(1, TimeUnit.SECONDS) == false) {
+                executor.shutdownNow();
+            }
+        }
+    }
+}
diff --git a/plugins/transport-reactor-netty4/src/main/java/org/opensearch/http/reactor/netty4/ReactorNetty4HttpServerTransport.java b/plugins/transport-reactor-netty4/src/main/java/org/opensearch/http/reactor/netty4/ReactorNetty4HttpServerTransport.java
index 906bbfd072da8..7f4a8f6cdef02 100644
--- a/plugins/transport-reactor-netty4/src/main/java/org/opensearch/http/reactor/netty4/ReactorNetty4HttpServerTransport.java
+++
b/plugins/transport-reactor-netty4/src/main/java/org/opensearch/http/reactor/netty4/ReactorNetty4HttpServerTransport.java @@ -44,6 +44,7 @@ import java.util.List; import java.util.Optional; +import io.netty.buffer.ByteBuf; import io.netty.buffer.ByteBufAllocator; import io.netty.channel.ChannelOption; import io.netty.channel.socket.nio.NioChannelOption; @@ -390,7 +391,9 @@ protected Publisher incomingRequest(HttpServerRequest request, HttpServerR response.chunkedTransfer(false); response.compression(true); r.headers().forEach(h -> response.addHeader(h.getKey(), h.getValue())); - return Mono.from(response.sendObject(r.content())); + + final ByteBuf content = r.content().copy(); + return Mono.from(response.sendObject(content)); }); } } diff --git a/plugins/transport-reactor-netty4/src/main/java/org/opensearch/http/reactor/netty4/ReactorNetty4NonStreamingHttpChannel.java b/plugins/transport-reactor-netty4/src/main/java/org/opensearch/http/reactor/netty4/ReactorNetty4NonStreamingHttpChannel.java index 7df0b3c0c35fe..3dae2d57cf6a6 100644 --- a/plugins/transport-reactor-netty4/src/main/java/org/opensearch/http/reactor/netty4/ReactorNetty4NonStreamingHttpChannel.java +++ b/plugins/transport-reactor-netty4/src/main/java/org/opensearch/http/reactor/netty4/ReactorNetty4NonStreamingHttpChannel.java @@ -55,9 +55,14 @@ public void addCloseListener(ActionListener listener) { @Override public void sendResponse(HttpResponse response, ActionListener listener) { - emitter.next(createResponse(response)); - listener.onResponse(null); - emitter.complete(); + try { + emitter.next(createResponse(response)); + listener.onResponse(null); + emitter.complete(); + } catch (final Exception ex) { + emitter.error(ex); + listener.onFailure(ex); + } } @Override diff --git a/plugins/transport-reactor-netty4/src/main/java/org/opensearch/http/reactor/netty4/ReactorNetty4StreamingHttpChannel.java b/plugins/transport-reactor-netty4/src/main/java/org/opensearch/http/reactor/netty4/ReactorNetty4StreamingHttpChannel.java index 56dadea0477c5..1aa03aa9967e2 100644 --- a/plugins/transport-reactor-netty4/src/main/java/org/opensearch/http/reactor/netty4/ReactorNetty4StreamingHttpChannel.java +++ b/plugins/transport-reactor-netty4/src/main/java/org/opensearch/http/reactor/netty4/ReactorNetty4StreamingHttpChannel.java @@ -101,6 +101,8 @@ public void receiveChunk(HttpChunk message) { lastChunkReceived = true; producer.complete(); } + } catch (final Exception ex) { + producer.error(ex); } finally { message.close(); } diff --git a/plugins/transport-reactor-netty4/src/main/java/org/opensearch/http/reactor/netty4/ReactorNetty4StreamingRequestConsumer.java b/plugins/transport-reactor-netty4/src/main/java/org/opensearch/http/reactor/netty4/ReactorNetty4StreamingRequestConsumer.java index f34f54e561021..8ed6710c8a1e3 100644 --- a/plugins/transport-reactor-netty4/src/main/java/org/opensearch/http/reactor/netty4/ReactorNetty4StreamingRequestConsumer.java +++ b/plugins/transport-reactor-netty4/src/main/java/org/opensearch/http/reactor/netty4/ReactorNetty4StreamingRequestConsumer.java @@ -44,7 +44,7 @@ public void subscribe(Subscriber s) { } HttpChunk createChunk(HttpContent chunk, boolean last) { - return new ReactorNetty4HttpChunk(chunk.content().retain(), last); + return new ReactorNetty4HttpChunk(chunk.copy().content(), last); } StreamingHttpChannel httpChannel() { diff --git a/plugins/transport-reactor-netty4/src/main/java/org/opensearch/http/reactor/netty4/ReactorNetty4StreamingResponseProducer.java 
b/plugins/transport-reactor-netty4/src/main/java/org/opensearch/http/reactor/netty4/ReactorNetty4StreamingResponseProducer.java index 616edccdfc396..6aaccc500072b 100644 --- a/plugins/transport-reactor-netty4/src/main/java/org/opensearch/http/reactor/netty4/ReactorNetty4StreamingResponseProducer.java +++ b/plugins/transport-reactor-netty4/src/main/java/org/opensearch/http/reactor/netty4/ReactorNetty4StreamingResponseProducer.java @@ -21,7 +21,11 @@ class ReactorNetty4StreamingResponseProducer implements StreamingHttpContentSend private volatile FluxSink emitter; ReactorNetty4StreamingResponseProducer() { - this.sender = Flux.create(emitter -> this.emitter = emitter); + this.sender = Flux.create(emitter -> register(emitter)); + } + + private void register(FluxSink emitter) { + this.emitter = emitter; } @Override diff --git a/qa/smoke-test-http/build.gradle b/qa/smoke-test-http/build.gradle index f48ddc26d929b..496fda6bb717d 100644 --- a/qa/smoke-test-http/build.gradle +++ b/qa/smoke-test-http/build.gradle @@ -35,6 +35,7 @@ apply plugin: 'opensearch.test-with-dependencies' dependencies { testImplementation project(path: ':modules:transport-netty4') // for http + testImplementation project(path: ':plugins:transport-reactor-netty4') // for http testImplementation project(path: ':plugins:transport-nio') testImplementation project(path: ':plugins:identity-shiro') // for http } diff --git a/qa/smoke-test-http/src/test/java/org/opensearch/http/HttpSmokeTestCase.java b/qa/smoke-test-http/src/test/java/org/opensearch/http/HttpSmokeTestCase.java index 08974b902c418..6d8e80a0a63ea 100644 --- a/qa/smoke-test-http/src/test/java/org/opensearch/http/HttpSmokeTestCase.java +++ b/qa/smoke-test-http/src/test/java/org/opensearch/http/HttpSmokeTestCase.java @@ -38,6 +38,7 @@ import org.opensearch.transport.Netty4ModulePlugin; import org.opensearch.transport.nio.MockNioTransportPlugin; import org.opensearch.transport.nio.NioTransportPlugin; +import org.opensearch.transport.reactor.ReactorNetty4Plugin; import org.junit.BeforeClass; import java.util.Arrays; @@ -53,7 +54,7 @@ public abstract class HttpSmokeTestCase extends OpenSearchIntegTestCase { @BeforeClass public static void setUpTransport() { nodeTransportTypeKey = getTypeKey(randomFrom(getTestTransportPlugin(), Netty4ModulePlugin.class, NioTransportPlugin.class)); - nodeHttpTypeKey = getHttpTypeKey(randomFrom(Netty4ModulePlugin.class, NioTransportPlugin.class)); + nodeHttpTypeKey = getHttpTypeKey(randomFrom(Netty4ModulePlugin.class, NioTransportPlugin.class, ReactorNetty4Plugin.class)); clientTypeKey = getTypeKey(randomFrom(getTestTransportPlugin(), Netty4ModulePlugin.class, NioTransportPlugin.class)); } @@ -71,6 +72,8 @@ private static String getTypeKey(Class clazz) { private static String getHttpTypeKey(Class clazz) { if (clazz.equals(NioTransportPlugin.class)) { return NioTransportPlugin.NIO_HTTP_TRANSPORT_NAME; + } else if (clazz.equals(ReactorNetty4Plugin.class)) { + return ReactorNetty4Plugin.REACTOR_NETTY_HTTP_TRANSPORT_NAME; } else { assert clazz.equals(Netty4ModulePlugin.class); return Netty4ModulePlugin.NETTY_HTTP_TRANSPORT_NAME; @@ -92,7 +95,7 @@ protected Settings nodeSettings(int nodeOrdinal) { @Override protected Collection> nodePlugins() { - return Arrays.asList(getTestTransportPlugin(), Netty4ModulePlugin.class, NioTransportPlugin.class); + return Arrays.asList(getTestTransportPlugin(), Netty4ModulePlugin.class, NioTransportPlugin.class, ReactorNetty4Plugin.class); } @Override diff --git 
a/qa/smoke-test-http/src/test/java/org/opensearch/http/IdentityAuthenticationIT.java b/qa/smoke-test-http/src/test/java/org/opensearch/http/IdentityAuthenticationIT.java index 78398e10b9ce8..1a806b033eb8a 100644 --- a/qa/smoke-test-http/src/test/java/org/opensearch/http/IdentityAuthenticationIT.java +++ b/qa/smoke-test-http/src/test/java/org/opensearch/http/IdentityAuthenticationIT.java @@ -26,6 +26,8 @@ import org.opensearch.test.OpenSearchTestCase; import org.opensearch.transport.Netty4ModulePlugin; import org.opensearch.transport.nio.NioTransportPlugin; +import org.opensearch.transport.reactor.ReactorNetty4Plugin; + import static org.hamcrest.Matchers.equalTo; import static org.hamcrest.core.StringContains.containsString; @@ -42,7 +44,7 @@ protected Settings nodeSettings(int nodeOrdinal) { @Override protected Collection> nodePlugins() { - return Arrays.asList(OpenSearchTestCase.getTestTransportPlugin(), Netty4ModulePlugin.class, NioTransportPlugin.class, ShiroIdentityPlugin.class); + return Arrays.asList(OpenSearchTestCase.getTestTransportPlugin(), Netty4ModulePlugin.class, NioTransportPlugin.class, ReactorNetty4Plugin.class, ShiroIdentityPlugin.class); } diff --git a/qa/wildfly/src/main/webapp/WEB-INF/jboss-deployment-structure.xml b/qa/wildfly/src/main/webapp/WEB-INF/jboss-deployment-structure.xml index a08090100989a..4fabd038cf915 100644 --- a/qa/wildfly/src/main/webapp/WEB-INF/jboss-deployment-structure.xml +++ b/qa/wildfly/src/main/webapp/WEB-INF/jboss-deployment-structure.xml @@ -3,5 +3,8 @@ + + + diff --git a/server/src/main/java/org/opensearch/rest/RestController.java b/server/src/main/java/org/opensearch/rest/RestController.java index 0c173523fa7cd..7d0c1e2260de1 100644 --- a/server/src/main/java/org/opensearch/rest/RestController.java +++ b/server/src/main/java/org/opensearch/rest/RestController.java @@ -748,8 +748,9 @@ public void sendResponse(RestResponse response) { // over so we need to populate those **before** that, if possible. 
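            // prepareResponse must only run before any chunks go out, but the terminal response below is now always delivered.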
if (subscribed.get() == false) { prepareResponse(response.status(), Map.of("Content-Type", List.of(response.contentType()))); - Mono.ignoreElements(this).then(Mono.just(response)).subscribe(delegate::sendResponse); } + + Mono.ignoreElements(this).then(Mono.just(response)).subscribe(delegate::sendResponse); } @Override diff --git a/server/src/main/java/org/opensearch/rest/action/document/RestBulkStreamingAction.java b/server/src/main/java/org/opensearch/rest/action/document/RestBulkStreamingAction.java index ce6e32a7824c9..a38244fe9ff20 100644 --- a/server/src/main/java/org/opensearch/rest/action/document/RestBulkStreamingAction.java +++ b/server/src/main/java/org/opensearch/rest/action/document/RestBulkStreamingAction.java @@ -8,6 +8,7 @@ package org.opensearch.rest.action.document; +import com.google.protobuf.ExperimentalApi; import org.opensearch.action.ActionRequestValidationException; import org.opensearch.action.DocWriteRequest; import org.opensearch.action.bulk.BulkItemResponse; @@ -26,6 +27,7 @@ import org.opensearch.core.xcontent.MediaType; import org.opensearch.core.xcontent.ToXContent; import org.opensearch.core.xcontent.XContentBuilder; +import org.opensearch.http.HttpChunk; import org.opensearch.rest.BaseRestHandler; import org.opensearch.rest.BytesRestResponse; import org.opensearch.rest.RestRequest; @@ -37,6 +39,7 @@ import java.util.List; import java.util.Map; import java.util.concurrent.CompletableFuture; +import java.util.stream.Stream; import reactor.core.publisher.Flux; import reactor.core.publisher.Mono; @@ -57,6 +60,7 @@ * * @opensearch.api */ +@ExperimentalApi public class RestBulkStreamingAction extends BaseRestHandler { private static final BulkResponse EMPTY = new BulkResponse(new BulkItemResponse[0], 0L); private final boolean allowExplicitIndex; @@ -95,6 +99,18 @@ public RestChannelConsumer prepareRequest(final RestRequest request, final NodeC final StreamingRestChannelConsumer consumer = (channel) -> { final MediaType mediaType = request.getMediaType(); + // We prepare (and more importantly, validate) the templated BulkRequest instance: in case the parameters + // are incorrect, we are going to fail the request immediately, instead of producing a possibly large amount + // of failed chunks. 
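+            // The per-chunk BulkRequest instances created below copy their settings from this validated template.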
+ FetchSourceContext defaultFetchSourceContext = FetchSourceContext.parseFromRestRequest(request); + BulkRequest prepareBulkRequest = Requests.bulkRequest(); + if (waitForActiveShards != null) { + prepareBulkRequest.waitForActiveShards(ActiveShardCount.parseString(waitForActiveShards)); + } + + prepareBulkRequest.timeout(timeout); + prepareBulkRequest.setRefreshPolicy(refresh); + // Set the content type and the status code before sending the response stream over channel.prepareResponse(RestStatus.OK, Map.of("Content-Type", List.of(mediaType.mediaTypeWithoutParameters()))); @@ -105,17 +121,17 @@ public RestChannelConsumer prepareRequest(final RestRequest request, final NodeC // TODOs: // - add batching (by interval and/or count) // - eliminate serialization inefficiencies - Flux.from(channel).map(chunk -> { - FetchSourceContext defaultFetchSourceContext = FetchSourceContext.parseFromRestRequest(request); + Flux.from(channel).zipWith(Flux.fromStream(Stream.generate(() -> { BulkRequest bulkRequest = Requests.bulkRequest(); - if (waitForActiveShards != null) { - bulkRequest.waitForActiveShards(ActiveShardCount.parseString(waitForActiveShards)); - } - - bulkRequest.timeout(timeout); - bulkRequest.setRefreshPolicy(refresh); - - try { + bulkRequest.waitForActiveShards(prepareBulkRequest.waitForActiveShards()); + bulkRequest.timeout(prepareBulkRequest.timeout()); + bulkRequest.setRefreshPolicy(prepareBulkRequest.getRefreshPolicy()); + return bulkRequest; + }))).map(t -> { + final HttpChunk chunk = t.getT1(); + final BulkRequest bulkRequest = t.getT2(); + + try (chunk) { bulkRequest.add( chunk.content(), defaultIndex, @@ -168,7 +184,17 @@ public void onFailure(Exception ex) { } catch (IOException ex) { throw new UncheckedIOException(ex); } - })).subscribe(); + })).onErrorComplete(ex -> { + if (ex instanceof Error) { + return false; + } + try { + channel.sendResponse(new BytesRestResponse(channel, (Exception) ex)); + return true; + } catch (final IOException e) { + throw new UncheckedIOException(e); + } + }).subscribe(); }; return channel -> { From 47078850355562c5cf7ab3540866b4958ec196be Mon Sep 17 00:00:00 2001 From: rishavz_sagar Date: Wed, 31 Jul 2024 23:58:30 +0530 Subject: [PATCH 142/167] Caching number of primary shards per node for evaluating constraints on avg primary shards across all indices per node (#14992) Signed-off-by: RS146BIJAY --- .../allocator/BalancedShardsAllocator.java | 12 +++++++++++- 1 file changed, 11 insertions(+), 1 deletion(-) diff --git a/server/src/main/java/org/opensearch/cluster/routing/allocation/allocator/BalancedShardsAllocator.java b/server/src/main/java/org/opensearch/cluster/routing/allocation/allocator/BalancedShardsAllocator.java index ae173bbf06c4f..212583d1fb14f 100644 --- a/server/src/main/java/org/opensearch/cluster/routing/allocation/allocator/BalancedShardsAllocator.java +++ b/server/src/main/java/org/opensearch/cluster/routing/allocation/allocator/BalancedShardsAllocator.java @@ -476,6 +476,7 @@ void updateRebalanceConstraint(String constraint, boolean add) { public static class ModelNode implements Iterable { private final Map indices = new HashMap<>(); private int numShards = 0; + private int numPrimaryShards = 0; private final RoutingNode routingNode; ModelNode(RoutingNode routingNode) { @@ -509,7 +510,7 @@ public int numPrimaryShards(String idx) { } public int numPrimaryShards() { - return indices.values().stream().mapToInt(index -> index.numPrimaryShards()).sum(); + return numPrimaryShards; } public int highestPrimary(String index) { @@ -527,6 
+528,10 @@ public void addShard(ShardRouting shard) { indices.put(index.getIndexId(), index); } index.addShard(shard); + if (shard.primary()) { + numPrimaryShards++; + } + numShards++; } @@ -538,6 +543,11 @@ public void removeShard(ShardRouting shard) { indices.remove(shard.getIndexName()); } } + + if (shard.primary()) { + numPrimaryShards--; + } + numShards--; } From e7ee950992911739eb1b079491731073ddd52e4f Mon Sep 17 00:00:00 2001 From: Craig Perkins Date: Wed, 31 Jul 2024 14:34:54 -0400 Subject: [PATCH 143/167] Add ThreadContextPermission for stashAndMergeHeaders and stashWithOrigin (#15039) * Add ThreadContextPermission for stashAndMergeHeaders and stashWithOrigin Signed-off-by: Craig Perkins * Add to CHANGELOG Signed-off-by: Craig Perkins * Use ThreadContextAccess Signed-off-by: Craig Perkins --------- Signed-off-by: Craig Perkins --- CHANGELOG.md | 1 + .../client/OriginSettingClient.java | 7 ++++++- .../client/support/AbstractClient.java | 5 ++++- .../common/util/concurrent/ThreadContext.java | 10 ++++++++++ .../org/opensearch/bootstrap/security.policy | 1 + .../bootstrap/test-framework.policy | 2 ++ .../util/concurrent/ThreadContextTests.java | 20 +++++++++++++++---- 7 files changed, 40 insertions(+), 6 deletions(-) diff --git a/CHANGELOG.md b/CHANGELOG.md index f63c7c5524d86..c1846bd5e7cfd 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -11,6 +11,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), - [Streaming Indexing] Enhance RestClient with a new streaming API support ([#14437](https://github.com/opensearch-project/OpenSearch/pull/14437)) - Add basic aggregation support for derived fields ([#14618](https://github.com/opensearch-project/OpenSearch/pull/14618)) - Add ThreadContextPermission for markAsSystemContext and allow core to perform the method ([#15016](https://github.com/opensearch-project/OpenSearch/pull/15016)) +- Add ThreadContextPermission for stashAndMergeHeaders and stashWithOrigin ([#15039](https://github.com/opensearch-project/OpenSearch/pull/15039)) ### Dependencies - Bump `org.apache.commons:commons-lang3` from 3.14.0 to 3.15.0 ([#14861](https://github.com/opensearch-project/OpenSearch/pull/14861)) diff --git a/server/src/main/java/org/opensearch/client/OriginSettingClient.java b/server/src/main/java/org/opensearch/client/OriginSettingClient.java index 1b0e08cc489c4..27d87227df7bc 100644 --- a/server/src/main/java/org/opensearch/client/OriginSettingClient.java +++ b/server/src/main/java/org/opensearch/client/OriginSettingClient.java @@ -36,6 +36,7 @@ import org.opensearch.action.ActionType; import org.opensearch.action.support.ContextPreservingActionListener; import org.opensearch.common.util.concurrent.ThreadContext; +import org.opensearch.common.util.concurrent.ThreadContextAccess; import org.opensearch.core.action.ActionListener; import org.opensearch.core.action.ActionResponse; @@ -65,7 +66,11 @@ protected void ActionListener listener ) { final Supplier supplier = in().threadPool().getThreadContext().newRestorableContext(false); - try (ThreadContext.StoredContext ignore = in().threadPool().getThreadContext().stashWithOrigin(origin)) { + try ( + ThreadContext.StoredContext ignore = ThreadContextAccess.doPrivileged( + () -> in().threadPool().getThreadContext().stashWithOrigin(origin) + ) + ) { super.doExecute(action, request, new ContextPreservingActionListener<>(supplier, listener)); } } diff --git a/server/src/main/java/org/opensearch/client/support/AbstractClient.java 
b/server/src/main/java/org/opensearch/client/support/AbstractClient.java index 6c6049f04231b..509cd732357d6 100644 --- a/server/src/main/java/org/opensearch/client/support/AbstractClient.java +++ b/server/src/main/java/org/opensearch/client/support/AbstractClient.java @@ -416,6 +416,7 @@ import org.opensearch.common.action.ActionFuture; import org.opensearch.common.settings.Settings; import org.opensearch.common.util.concurrent.ThreadContext; +import org.opensearch.common.util.concurrent.ThreadContextAccess; import org.opensearch.core.action.ActionListener; import org.opensearch.core.action.ActionResponse; import org.opensearch.core.common.bytes.BytesReference; @@ -2148,7 +2149,9 @@ protected void ActionListener listener ) { ThreadContext threadContext = threadPool().getThreadContext(); - try (ThreadContext.StoredContext ctx = threadContext.stashAndMergeHeaders(headers)) { + try ( + ThreadContext.StoredContext ctx = ThreadContextAccess.doPrivileged(() -> threadContext.stashAndMergeHeaders(headers)) + ) { super.doExecute(action, request, listener); } } diff --git a/server/src/main/java/org/opensearch/common/util/concurrent/ThreadContext.java b/server/src/main/java/org/opensearch/common/util/concurrent/ThreadContext.java index b955934c4f547..3e02a26aab488 100644 --- a/server/src/main/java/org/opensearch/common/util/concurrent/ThreadContext.java +++ b/server/src/main/java/org/opensearch/common/util/concurrent/ThreadContext.java @@ -116,6 +116,8 @@ public final class ThreadContext implements Writeable { // thread context permissions private static final Permission ACCESS_SYSTEM_THREAD_CONTEXT_PERMISSION = new ThreadContextPermission("markAsSystemContext"); + private static final Permission STASH_AND_MERGE_THREAD_CONTEXT_PERMISSION = new ThreadContextPermission("stashAndMergeHeaders"); + private static final Permission STASH_WITH_ORIGIN_THREAD_CONTEXT_PERMISSION = new ThreadContextPermission("stashWithOrigin"); private static final Logger logger = LogManager.getLogger(ThreadContext.class); private static final ThreadContextStruct DEFAULT_CONTEXT = new ThreadContextStruct(); @@ -213,6 +215,10 @@ public Writeable captureAsWriteable() { * if it can't find the task in memory. */ public StoredContext stashWithOrigin(String origin) { + SecurityManager sm = System.getSecurityManager(); + if (sm != null) { + sm.checkPermission(STASH_WITH_ORIGIN_THREAD_CONTEXT_PERMISSION); + } final ThreadContext.StoredContext storedContext = stashContext(); putTransient(ACTION_ORIGIN_TRANSIENT_NAME, origin); return storedContext; @@ -224,6 +230,10 @@ public StoredContext stashWithOrigin(String origin) { * that are already existing are preserved unless they are defaults. 
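     * <p>
     * A minimal usage sketch (the header name here is hypothetical):
     * <pre>{@code
     * try (ThreadContext.StoredContext ctx = threadContext.stashAndMergeHeaders(Map.of("X-Example", "fallback"))) {
     *     // "X-Example" keeps the current request's value if one was set, otherwise it becomes "fallback"
     * }
     * }</pre>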
*/ public StoredContext stashAndMergeHeaders(Map headers) { + SecurityManager sm = System.getSecurityManager(); + if (sm != null) { + sm.checkPermission(STASH_AND_MERGE_THREAD_CONTEXT_PERMISSION); + } final ThreadContextStruct context = threadLocal.get(); Map newHeader = new HashMap<>(headers); newHeader.putAll(context.requestHeaders); diff --git a/server/src/main/resources/org/opensearch/bootstrap/security.policy b/server/src/main/resources/org/opensearch/bootstrap/security.policy index b7aaa2e3eec48..22e445f7d9022 100644 --- a/server/src/main/resources/org/opensearch/bootstrap/security.policy +++ b/server/src/main/resources/org/opensearch/bootstrap/security.policy @@ -49,6 +49,7 @@ grant codeBase "${codebase.opensearch}" { // needed for SPI class loading permission java.lang.RuntimePermission "accessDeclaredMembers"; permission org.opensearch.secure_sm.ThreadContextPermission "markAsSystemContext"; + permission org.opensearch.secure_sm.ThreadContextPermission "stashWithOrigin"; }; //// Very special jar permissions: diff --git a/server/src/main/resources/org/opensearch/bootstrap/test-framework.policy b/server/src/main/resources/org/opensearch/bootstrap/test-framework.policy index f674c90c45a0e..19f8adbe003ca 100644 --- a/server/src/main/resources/org/opensearch/bootstrap/test-framework.policy +++ b/server/src/main/resources/org/opensearch/bootstrap/test-framework.policy @@ -158,4 +158,6 @@ grant { permission java.lang.RuntimePermission "accessClassInPackage.sun.reflect"; permission java.lang.reflect.ReflectPermission "suppressAccessChecks"; permission org.opensearch.secure_sm.ThreadContextPermission "markAsSystemContext"; + permission org.opensearch.secure_sm.ThreadContextPermission "stashAndMergeHeaders"; + permission org.opensearch.secure_sm.ThreadContextPermission "stashWithOrigin"; }; diff --git a/server/src/test/java/org/opensearch/common/util/concurrent/ThreadContextTests.java b/server/src/test/java/org/opensearch/common/util/concurrent/ThreadContextTests.java index 4c7cd4513412d..5992ffa1465b4 100644 --- a/server/src/test/java/org/opensearch/common/util/concurrent/ThreadContextTests.java +++ b/server/src/test/java/org/opensearch/common/util/concurrent/ThreadContextTests.java @@ -206,7 +206,7 @@ public void testStashWithOrigin() { } assertNull(threadContext.getTransient(ThreadContext.ACTION_ORIGIN_TRANSIENT_NAME)); - try (ThreadContext.StoredContext storedContext = threadContext.stashWithOrigin(origin)) { + try (ThreadContext.StoredContext storedContext = ThreadContextAccess.doPrivileged(() -> threadContext.stashWithOrigin(origin))) { assertEquals(origin, threadContext.getTransient(ThreadContext.ACTION_ORIGIN_TRANSIENT_NAME)); assertNull(threadContext.getTransient("foo")); assertNull(threadContext.getTransient("bar")); @@ -231,7 +231,7 @@ public void testStashAndMerge() { HashMap toMerge = new HashMap<>(); toMerge.put("foo", "baz"); toMerge.put("simon", "says"); - try (ThreadContext.StoredContext ctx = threadContext.stashAndMergeHeaders(toMerge)) { + try (ThreadContext.StoredContext ctx = ThreadContextAccess.doPrivileged(() -> threadContext.stashAndMergeHeaders(toMerge))) { assertEquals("bar", threadContext.getHeader("foo")); assertEquals("says", threadContext.getHeader("simon")); assertNull(threadContext.getTransient("ctx.foo")); @@ -493,7 +493,13 @@ public void testStashAndMergeWithModifiedDefaults() { ThreadContext threadContext = new ThreadContext(build); HashMap toMerge = new HashMap<>(); toMerge.put("default", "2"); - try (ThreadContext.StoredContext ctx = 
threadContext.stashAndMergeHeaders(toMerge)) { + ThreadContext finalThreadContext1 = threadContext; + HashMap<String, String> finalToMerge1 = toMerge; + try ( + ThreadContext.StoredContext ctx = ThreadContextAccess.doPrivileged( + () -> finalThreadContext1.stashAndMergeHeaders(finalToMerge1) + ) + ) { assertEquals("2", threadContext.getHeader("default")); } @@ -502,7 +508,13 @@ public void testStashAndMergeWithModifiedDefaults() { threadContext.putHeader("default", "4"); toMerge = new HashMap<>(); toMerge.put("default", "2"); - try (ThreadContext.StoredContext ctx = threadContext.stashAndMergeHeaders(toMerge)) { + ThreadContext finalThreadContext2 = threadContext; + HashMap<String, String> finalToMerge2 = toMerge; + try ( + ThreadContext.StoredContext ctx = ThreadContextAccess.doPrivileged( + () -> finalThreadContext2.stashAndMergeHeaders(finalToMerge2) + ) + ) { assertEquals("4", threadContext.getHeader("default")); } } From 0324edda286a2ab8a795d7f86b3402d229cd2255 Mon Sep 17 00:00:00 2001 From: Neetika Singhal Date: Wed, 31 Jul 2024 12:09:54 -0700 Subject: [PATCH 144/167] Route search traffic to _primary_first for warm index (#14934) Signed-off-by: Neetika Singhal --- .../cluster/routing/OperationRouting.java | 9 +++ .../routing/OperationRoutingTests.java | 64 +++++++++++++++++++ 2 files changed, 73 insertions(+) diff --git a/server/src/main/java/org/opensearch/cluster/routing/OperationRouting.java b/server/src/main/java/org/opensearch/cluster/routing/OperationRouting.java index 6158461c7d4e9..6242247f34a93 100644 --- a/server/src/main/java/org/opensearch/cluster/routing/OperationRouting.java +++ b/server/src/main/java/org/opensearch/cluster/routing/OperationRouting.java @@ -42,8 +42,10 @@ import org.opensearch.common.settings.ClusterSettings; import org.opensearch.common.settings.Setting; import org.opensearch.common.settings.Settings; +import org.opensearch.common.util.FeatureFlags; import org.opensearch.core.common.Strings; import org.opensearch.core.index.shard.ShardId; +import org.opensearch.index.IndexModule; import org.opensearch.index.IndexNotFoundException; import org.opensearch.node.ResponseCollectorService; @@ -245,6 +247,13 @@ public GroupShardsIterator<ShardIterator> searchShards( preference = Preference.PRIMARY.type(); } + if (FeatureFlags.isEnabled(FeatureFlags.TIERED_REMOTE_INDEX) + && IndexModule.DataLocalityType.PARTIAL.name() + .equals(indexMetadataForShard.getSettings().get(IndexModule.INDEX_STORE_LOCALITY_SETTING.getKey())) + && (preference == null || preference.isEmpty())) { + preference = Preference.PRIMARY_FIRST.type(); + } + ShardIterator iterator = preferenceActiveShardIterator( shard, clusterState.nodes().getLocalNodeId(), diff --git a/server/src/test/java/org/opensearch/cluster/routing/OperationRoutingTests.java b/server/src/test/java/org/opensearch/cluster/routing/OperationRoutingTests.java index 4f3e50eebb9c6..ad8b48d56c417 100644 --- a/server/src/test/java/org/opensearch/cluster/routing/OperationRoutingTests.java +++ b/server/src/test/java/org/opensearch/cluster/routing/OperationRoutingTests.java @@ -41,9 +41,11 @@ import org.opensearch.cluster.node.DiscoveryNodeRole; import org.opensearch.cluster.routing.allocation.decider.AwarenessAllocationDecider; import org.opensearch.cluster.service.ClusterService; +import org.opensearch.common.SuppressForbidden; import org.opensearch.common.settings.ClusterSettings; import org.opensearch.common.settings.Settings; import org.opensearch.common.unit.TimeValue; +import org.opensearch.common.util.FeatureFlags; import org.opensearch.common.util.io.IOUtils; import 
org.opensearch.core.index.Index; import org.opensearch.core.index.shard.ShardId; @@ -1054,6 +1056,68 @@ public void testSearchableSnapshotPrimaryDefault() throws Exception { } } + @SuppressForbidden(reason = "feature flag overrides") + public void testPartialIndexPrimaryDefault() throws Exception { + System.setProperty(FeatureFlags.TIERED_REMOTE_INDEX, "true"); + final int numIndices = 1; + final int numShards = 2; + final int numReplicas = 2; + final String[] indexNames = new String[numIndices]; + for (int i = 0; i < numIndices; i++) { + indexNames[i] = "test" + i; + } + // The first index is a partial index + final String indexName = indexNames[0]; + ClusterService clusterService = null; + ThreadPool threadPool = null; + + try { + OperationRouting opRouting = new OperationRouting( + Settings.EMPTY, + new ClusterSettings(Settings.EMPTY, ClusterSettings.BUILT_IN_CLUSTER_SETTINGS) + ); + + ClusterState state = ClusterStateCreationUtils.stateWithAssignedPrimariesAndReplicas(indexNames, numShards, numReplicas); + threadPool = new TestThreadPool("testPartialIndexPrimaryDefault"); + clusterService = ClusterServiceUtils.createClusterService(threadPool); + + // Update the index config within the cluster state to modify the index to a partial index + IndexMetadata partialIndexMetadata = IndexMetadata.builder(indexName) + .settings( + Settings.builder() + .put(state.metadata().index(indexName).getSettings()) + .put(IndexModule.INDEX_STORE_LOCALITY_SETTING.getKey(), IndexModule.DataLocalityType.PARTIAL) + .build() + ) + .build(); + Metadata.Builder metadataBuilder = Metadata.builder(state.metadata()) + .put(partialIndexMetadata, false) + .generateClusterUuidIfNeeded(); + state = ClusterState.builder(state).metadata(metadataBuilder.build()).build(); + + // Verify default preference is primary only + GroupShardsIterator<ShardIterator> groupIterator = opRouting.searchShards(state, indexNames, null, null); + assertThat("One group per index shard", groupIterator.size(), equalTo(numIndices * numShards)); + + for (ShardIterator shardIterator : groupIterator) { + assertTrue("Only primary should exist with no preference", shardIterator.nextOrNull().primary()); + } + + // Verify alternative preference can be applied to a partial index + groupIterator = opRouting.searchShards(state, indexNames, null, "_replica"); + assertThat("One group per index shard", groupIterator.size(), equalTo(numIndices * numShards)); + + for (ShardIterator shardIterator : groupIterator) { + assertThat("Replica shards will be returned", shardIterator.size(), equalTo(numReplicas)); + assertFalse("Returned shard should be a replica", shardIterator.nextOrNull().primary()); + } + } finally { + IOUtils.close(clusterService); + terminate(threadPool); + System.setProperty(FeatureFlags.TIERED_REMOTE_INDEX, "false"); + } + } + private DiscoveryNode[] setupNodes() { // Sets up two data nodes in zone-a and one data node in zone-b List<String> zones = Arrays.asList("a", "a", "b"); From 67a2e4c7275afa93ce2c6fc2107ca0f7a8c461bd Mon Sep 17 00:00:00 2001 From: Craig Perkins Date: Wed, 31 Jul 2024 20:23:30 -0400 Subject: [PATCH 145/167] Add javadoc about ThreadContextPermission for stashWithOrigin and stashAndMergeHeaders (#15051) Signed-off-by: Craig Perkins --- .../common/util/concurrent/ThreadContext.java | 14 ++++++++++++++ 1 file changed, 14 insertions(+) diff --git a/server/src/main/java/org/opensearch/common/util/concurrent/ThreadContext.java b/server/src/main/java/org/opensearch/common/util/concurrent/ThreadContext.java index 3e02a26aab488..070e18481f2a3 100644 --- 
a/server/src/main/java/org/opensearch/common/util/concurrent/ThreadContext.java +++ b/server/src/main/java/org/opensearch/common/util/concurrent/ThreadContext.java @@ -213,6 +213,13 @@ public Writeable captureAsWriteable() { * For example, a user might not have permission to GET from the tasks index * but the tasks API will perform a get on their behalf using this method * if it can't find the task in memory. + * + * Usage of stashWithOrigin is guarded by a ThreadContextPermission. In order to use + * stashWithOrigin, the codebase needs to explicitly be granted permission in the JSM policy file. + * + * Add an entry in the grant portion of the policy file like this: + * + * permission org.opensearch.secure_sm.ThreadContextPermission "stashWithOrigin"; */ public StoredContext stashWithOrigin(String origin) { SecurityManager sm = System.getSecurityManager(); @@ -228,6 +235,13 @@ public StoredContext stashWithOrigin(String origin) { * Removes the current context and resets a new context that contains a merge of the current headers and the given headers. * The removed context can be restored when closing the returned {@link StoredContext}. The merge strategy is that headers * that are already existing are preserved unless they are defaults. + * + * Usage of stashAndMergeHeaders is guarded by a ThreadContextPermission. In order to use + * stashAndMergeHeaders, the codebase needs to explicitly be granted permission in the JSM policy file. + * + * Add an entry in the grant portion of the policy file like this: + * + * permission org.opensearch.secure_sm.ThreadContextPermission "stashAndMergeHeaders"; */ public StoredContext stashAndMergeHeaders(Map<String, String> headers) { SecurityManager sm = System.getSecurityManager(); From d4e7766a90f45fc54ecd5658a5fae472ed9b7030 Mon Sep 17 00:00:00 2001 From: bowenlan-amzn Date: Fri, 2 Aug 2024 04:51:09 -0700 Subject: [PATCH 146/167] Add 2.17.0 in main branch (#15053) Signed-off-by: bowenlan-amzn --- .ci/bwcVersions | 1 + libs/core/src/main/java/org/opensearch/Version.java | 1 + 2 files changed, 2 insertions(+) diff --git a/.ci/bwcVersions b/.ci/bwcVersions index a738eb54e17f6..771bfe694b698 100644 --- a/.ci/bwcVersions +++ b/.ci/bwcVersions @@ -36,3 +36,4 @@ BWC_VERSION: - "2.15.0" - "2.15.1" - "2.16.0" + - "2.17.0" diff --git a/libs/core/src/main/java/org/opensearch/Version.java b/libs/core/src/main/java/org/opensearch/Version.java index b647a92d6708a..c2d8ce9be29dd 100644 --- a/libs/core/src/main/java/org/opensearch/Version.java +++ b/libs/core/src/main/java/org/opensearch/Version.java @@ -107,6 +107,7 @@ public class Version implements Comparable<Version>, ToXContentFragment { public static final Version V_2_15_0 = new Version(2150099, org.apache.lucene.util.Version.LUCENE_9_10_0); public static final Version V_2_15_1 = new Version(2150199, org.apache.lucene.util.Version.LUCENE_9_10_0); public static final Version V_2_16_0 = new Version(2160099, org.apache.lucene.util.Version.LUCENE_9_11_1); + public static final Version V_2_17_0 = new Version(2170099, org.apache.lucene.util.Version.LUCENE_9_11_1); public static final Version V_3_0_0 = new Version(3000099, org.apache.lucene.util.Version.LUCENE_9_12_0); public static final Version CURRENT = V_3_0_0; From 7c471a0f02bfded2e987608f45254bbce1bd4734 Mon Sep 17 00:00:00 2001 From: Andriy Redko Date: Fri, 2 Aug 2024 13:34:16 -0400 Subject: [PATCH 147/167] Add MacOS aarch64 to precommit since we rolled out the support for such distribution (#15082) Signed-off-by: Andriy Redko --- .github/workflows/precommit.yml | 2 +- 1 file changed, 1 
insertion(+), 1 deletion(-) diff --git a/.github/workflows/precommit.yml b/.github/workflows/precommit.yml index 95ca49ac9cb43..793fdae5df4da 100644 --- a/.github/workflows/precommit.yml +++ b/.github/workflows/precommit.yml @@ -8,7 +8,7 @@ jobs: strategy: matrix: java: [ 11, 17, 21 ] - os: [ubuntu-latest, windows-latest, macos-13] + os: [ubuntu-latest, windows-latest, macos-latest, macos-13] steps: - uses: actions/checkout@v4 - name: Set up JDK ${{ matrix.java }} From 48634bdc277d44ff3027ca702b0f08b111fcc88a Mon Sep 17 00:00:00 2001 From: Andriy Redko Date: Fri, 2 Aug 2024 13:35:57 -0400 Subject: [PATCH 148/167] Bump Netty to 4.1.112.Final (#15081) Signed-off-by: Andriy Redko --- CHANGELOG.md | 1 + buildSrc/version.properties | 2 +- .../licenses/netty-buffer-4.1.111.Final.jar.sha1 | 1 - .../licenses/netty-buffer-4.1.112.Final.jar.sha1 | 1 + .../licenses/netty-codec-4.1.111.Final.jar.sha1 | 1 - .../licenses/netty-codec-4.1.112.Final.jar.sha1 | 1 + .../licenses/netty-codec-http-4.1.111.Final.jar.sha1 | 1 - .../licenses/netty-codec-http-4.1.112.Final.jar.sha1 | 1 + .../licenses/netty-codec-http2-4.1.111.Final.jar.sha1 | 1 - .../licenses/netty-codec-http2-4.1.112.Final.jar.sha1 | 1 + .../licenses/netty-common-4.1.111.Final.jar.sha1 | 1 - .../licenses/netty-common-4.1.112.Final.jar.sha1 | 1 + .../licenses/netty-handler-4.1.111.Final.jar.sha1 | 1 - .../licenses/netty-handler-4.1.112.Final.jar.sha1 | 1 + .../licenses/netty-resolver-4.1.111.Final.jar.sha1 | 1 - .../licenses/netty-resolver-4.1.112.Final.jar.sha1 | 1 + .../licenses/netty-transport-4.1.111.Final.jar.sha1 | 1 - .../licenses/netty-transport-4.1.112.Final.jar.sha1 | 1 + .../netty-transport-native-unix-common-4.1.111.Final.jar.sha1 | 1 - .../netty-transport-native-unix-common-4.1.112.Final.jar.sha1 | 1 + .../licenses/netty-codec-dns-4.1.111.Final.jar.sha1 | 1 - .../licenses/netty-codec-dns-4.1.112.Final.jar.sha1 | 1 + .../licenses/netty-codec-http2-4.1.111.Final.jar.sha1 | 1 - .../licenses/netty-codec-http2-4.1.112.Final.jar.sha1 | 1 + .../licenses/netty-codec-socks-4.1.111.Final.jar.sha1 | 1 - .../licenses/netty-codec-socks-4.1.112.Final.jar.sha1 | 1 + .../licenses/netty-handler-proxy-4.1.111.Final.jar.sha1 | 1 - .../licenses/netty-handler-proxy-4.1.112.Final.jar.sha1 | 1 + .../licenses/netty-resolver-dns-4.1.111.Final.jar.sha1 | 1 - .../licenses/netty-resolver-dns-4.1.112.Final.jar.sha1 | 1 + .../netty-transport-native-unix-common-4.1.111.Final.jar.sha1 | 1 - .../netty-transport-native-unix-common-4.1.112.Final.jar.sha1 | 1 + .../repository-hdfs/licenses/netty-all-4.1.111.Final.jar.sha1 | 1 - .../repository-hdfs/licenses/netty-all-4.1.112.Final.jar.sha1 | 1 + .../repository-s3/licenses/netty-buffer-4.1.111.Final.jar.sha1 | 1 - .../repository-s3/licenses/netty-buffer-4.1.112.Final.jar.sha1 | 1 + .../repository-s3/licenses/netty-codec-4.1.111.Final.jar.sha1 | 1 - .../repository-s3/licenses/netty-codec-4.1.112.Final.jar.sha1 | 1 + .../licenses/netty-codec-http-4.1.111.Final.jar.sha1 | 1 - .../licenses/netty-codec-http-4.1.112.Final.jar.sha1 | 1 + .../licenses/netty-codec-http2-4.1.111.Final.jar.sha1 | 1 - .../licenses/netty-codec-http2-4.1.112.Final.jar.sha1 | 1 + .../repository-s3/licenses/netty-common-4.1.111.Final.jar.sha1 | 1 - .../repository-s3/licenses/netty-common-4.1.112.Final.jar.sha1 | 1 + .../repository-s3/licenses/netty-handler-4.1.111.Final.jar.sha1 | 1 - .../repository-s3/licenses/netty-handler-4.1.112.Final.jar.sha1 | 1 + .../licenses/netty-resolver-4.1.111.Final.jar.sha1 | 1 - 
.../licenses/netty-resolver-4.1.112.Final.jar.sha1 | 1 + .../licenses/netty-transport-4.1.111.Final.jar.sha1 | 1 - .../licenses/netty-transport-4.1.112.Final.jar.sha1 | 1 + .../netty-transport-classes-epoll-4.1.111.Final.jar.sha1 | 1 - .../netty-transport-classes-epoll-4.1.112.Final.jar.sha1 | 1 + .../netty-transport-native-unix-common-4.1.111.Final.jar.sha1 | 1 - .../netty-transport-native-unix-common-4.1.112.Final.jar.sha1 | 1 + .../transport-nio/licenses/netty-buffer-4.1.111.Final.jar.sha1 | 1 - .../transport-nio/licenses/netty-buffer-4.1.112.Final.jar.sha1 | 1 + .../transport-nio/licenses/netty-codec-4.1.111.Final.jar.sha1 | 1 - .../transport-nio/licenses/netty-codec-4.1.112.Final.jar.sha1 | 1 + .../licenses/netty-codec-http-4.1.111.Final.jar.sha1 | 1 - .../licenses/netty-codec-http-4.1.112.Final.jar.sha1 | 1 + .../transport-nio/licenses/netty-common-4.1.111.Final.jar.sha1 | 1 - .../transport-nio/licenses/netty-common-4.1.112.Final.jar.sha1 | 1 + .../transport-nio/licenses/netty-handler-4.1.111.Final.jar.sha1 | 1 - .../transport-nio/licenses/netty-handler-4.1.112.Final.jar.sha1 | 1 + .../licenses/netty-resolver-4.1.111.Final.jar.sha1 | 1 - .../licenses/netty-resolver-4.1.112.Final.jar.sha1 | 1 + .../licenses/netty-transport-4.1.111.Final.jar.sha1 | 1 - .../licenses/netty-transport-4.1.112.Final.jar.sha1 | 1 + .../licenses/netty-buffer-4.1.111.Final.jar.sha1 | 1 - .../licenses/netty-buffer-4.1.112.Final.jar.sha1 | 1 + .../licenses/netty-codec-4.1.111.Final.jar.sha1 | 1 - .../licenses/netty-codec-4.1.112.Final.jar.sha1 | 1 + .../licenses/netty-codec-dns-4.1.111.Final.jar.sha1 | 1 - .../licenses/netty-codec-dns-4.1.112.Final.jar.sha1 | 1 + .../licenses/netty-codec-http-4.1.111.Final.jar.sha1 | 1 - .../licenses/netty-codec-http-4.1.112.Final.jar.sha1 | 1 + .../licenses/netty-codec-http2-4.1.111.Final.jar.sha1 | 1 - .../licenses/netty-codec-http2-4.1.112.Final.jar.sha1 | 1 + .../licenses/netty-common-4.1.111.Final.jar.sha1 | 1 - .../licenses/netty-common-4.1.112.Final.jar.sha1 | 1 + .../licenses/netty-handler-4.1.111.Final.jar.sha1 | 1 - .../licenses/netty-handler-4.1.112.Final.jar.sha1 | 1 + .../licenses/netty-resolver-4.1.111.Final.jar.sha1 | 1 - .../licenses/netty-resolver-4.1.112.Final.jar.sha1 | 1 + .../licenses/netty-resolver-dns-4.1.111.Final.jar.sha1 | 1 - .../licenses/netty-resolver-dns-4.1.112.Final.jar.sha1 | 1 + .../licenses/netty-transport-4.1.111.Final.jar.sha1 | 1 - .../licenses/netty-transport-4.1.112.Final.jar.sha1 | 1 + .../netty-transport-native-unix-common-4.1.111.Final.jar.sha1 | 1 - .../netty-transport-native-unix-common-4.1.112.Final.jar.sha1 | 1 + 90 files changed, 46 insertions(+), 45 deletions(-) delete mode 100644 modules/transport-netty4/licenses/netty-buffer-4.1.111.Final.jar.sha1 create mode 100644 modules/transport-netty4/licenses/netty-buffer-4.1.112.Final.jar.sha1 delete mode 100644 modules/transport-netty4/licenses/netty-codec-4.1.111.Final.jar.sha1 create mode 100644 modules/transport-netty4/licenses/netty-codec-4.1.112.Final.jar.sha1 delete mode 100644 modules/transport-netty4/licenses/netty-codec-http-4.1.111.Final.jar.sha1 create mode 100644 modules/transport-netty4/licenses/netty-codec-http-4.1.112.Final.jar.sha1 delete mode 100644 modules/transport-netty4/licenses/netty-codec-http2-4.1.111.Final.jar.sha1 create mode 100644 modules/transport-netty4/licenses/netty-codec-http2-4.1.112.Final.jar.sha1 delete mode 100644 modules/transport-netty4/licenses/netty-common-4.1.111.Final.jar.sha1 create mode 100644 
modules/transport-netty4/licenses/netty-common-4.1.112.Final.jar.sha1 delete mode 100644 modules/transport-netty4/licenses/netty-handler-4.1.111.Final.jar.sha1 create mode 100644 modules/transport-netty4/licenses/netty-handler-4.1.112.Final.jar.sha1 delete mode 100644 modules/transport-netty4/licenses/netty-resolver-4.1.111.Final.jar.sha1 create mode 100644 modules/transport-netty4/licenses/netty-resolver-4.1.112.Final.jar.sha1 delete mode 100644 modules/transport-netty4/licenses/netty-transport-4.1.111.Final.jar.sha1 create mode 100644 modules/transport-netty4/licenses/netty-transport-4.1.112.Final.jar.sha1 delete mode 100644 modules/transport-netty4/licenses/netty-transport-native-unix-common-4.1.111.Final.jar.sha1 create mode 100644 modules/transport-netty4/licenses/netty-transport-native-unix-common-4.1.112.Final.jar.sha1 delete mode 100644 plugins/repository-azure/licenses/netty-codec-dns-4.1.111.Final.jar.sha1 create mode 100644 plugins/repository-azure/licenses/netty-codec-dns-4.1.112.Final.jar.sha1 delete mode 100644 plugins/repository-azure/licenses/netty-codec-http2-4.1.111.Final.jar.sha1 create mode 100644 plugins/repository-azure/licenses/netty-codec-http2-4.1.112.Final.jar.sha1 delete mode 100644 plugins/repository-azure/licenses/netty-codec-socks-4.1.111.Final.jar.sha1 create mode 100644 plugins/repository-azure/licenses/netty-codec-socks-4.1.112.Final.jar.sha1 delete mode 100644 plugins/repository-azure/licenses/netty-handler-proxy-4.1.111.Final.jar.sha1 create mode 100644 plugins/repository-azure/licenses/netty-handler-proxy-4.1.112.Final.jar.sha1 delete mode 100644 plugins/repository-azure/licenses/netty-resolver-dns-4.1.111.Final.jar.sha1 create mode 100644 plugins/repository-azure/licenses/netty-resolver-dns-4.1.112.Final.jar.sha1 delete mode 100644 plugins/repository-azure/licenses/netty-transport-native-unix-common-4.1.111.Final.jar.sha1 create mode 100644 plugins/repository-azure/licenses/netty-transport-native-unix-common-4.1.112.Final.jar.sha1 delete mode 100644 plugins/repository-hdfs/licenses/netty-all-4.1.111.Final.jar.sha1 create mode 100644 plugins/repository-hdfs/licenses/netty-all-4.1.112.Final.jar.sha1 delete mode 100644 plugins/repository-s3/licenses/netty-buffer-4.1.111.Final.jar.sha1 create mode 100644 plugins/repository-s3/licenses/netty-buffer-4.1.112.Final.jar.sha1 delete mode 100644 plugins/repository-s3/licenses/netty-codec-4.1.111.Final.jar.sha1 create mode 100644 plugins/repository-s3/licenses/netty-codec-4.1.112.Final.jar.sha1 delete mode 100644 plugins/repository-s3/licenses/netty-codec-http-4.1.111.Final.jar.sha1 create mode 100644 plugins/repository-s3/licenses/netty-codec-http-4.1.112.Final.jar.sha1 delete mode 100644 plugins/repository-s3/licenses/netty-codec-http2-4.1.111.Final.jar.sha1 create mode 100644 plugins/repository-s3/licenses/netty-codec-http2-4.1.112.Final.jar.sha1 delete mode 100644 plugins/repository-s3/licenses/netty-common-4.1.111.Final.jar.sha1 create mode 100644 plugins/repository-s3/licenses/netty-common-4.1.112.Final.jar.sha1 delete mode 100644 plugins/repository-s3/licenses/netty-handler-4.1.111.Final.jar.sha1 create mode 100644 plugins/repository-s3/licenses/netty-handler-4.1.112.Final.jar.sha1 delete mode 100644 plugins/repository-s3/licenses/netty-resolver-4.1.111.Final.jar.sha1 create mode 100644 plugins/repository-s3/licenses/netty-resolver-4.1.112.Final.jar.sha1 delete mode 100644 plugins/repository-s3/licenses/netty-transport-4.1.111.Final.jar.sha1 create mode 100644 
plugins/repository-s3/licenses/netty-transport-4.1.112.Final.jar.sha1 delete mode 100644 plugins/repository-s3/licenses/netty-transport-classes-epoll-4.1.111.Final.jar.sha1 create mode 100644 plugins/repository-s3/licenses/netty-transport-classes-epoll-4.1.112.Final.jar.sha1 delete mode 100644 plugins/repository-s3/licenses/netty-transport-native-unix-common-4.1.111.Final.jar.sha1 create mode 100644 plugins/repository-s3/licenses/netty-transport-native-unix-common-4.1.112.Final.jar.sha1 delete mode 100644 plugins/transport-nio/licenses/netty-buffer-4.1.111.Final.jar.sha1 create mode 100644 plugins/transport-nio/licenses/netty-buffer-4.1.112.Final.jar.sha1 delete mode 100644 plugins/transport-nio/licenses/netty-codec-4.1.111.Final.jar.sha1 create mode 100644 plugins/transport-nio/licenses/netty-codec-4.1.112.Final.jar.sha1 delete mode 100644 plugins/transport-nio/licenses/netty-codec-http-4.1.111.Final.jar.sha1 create mode 100644 plugins/transport-nio/licenses/netty-codec-http-4.1.112.Final.jar.sha1 delete mode 100644 plugins/transport-nio/licenses/netty-common-4.1.111.Final.jar.sha1 create mode 100644 plugins/transport-nio/licenses/netty-common-4.1.112.Final.jar.sha1 delete mode 100644 plugins/transport-nio/licenses/netty-handler-4.1.111.Final.jar.sha1 create mode 100644 plugins/transport-nio/licenses/netty-handler-4.1.112.Final.jar.sha1 delete mode 100644 plugins/transport-nio/licenses/netty-resolver-4.1.111.Final.jar.sha1 create mode 100644 plugins/transport-nio/licenses/netty-resolver-4.1.112.Final.jar.sha1 delete mode 100644 plugins/transport-nio/licenses/netty-transport-4.1.111.Final.jar.sha1 create mode 100644 plugins/transport-nio/licenses/netty-transport-4.1.112.Final.jar.sha1 delete mode 100644 plugins/transport-reactor-netty4/licenses/netty-buffer-4.1.111.Final.jar.sha1 create mode 100644 plugins/transport-reactor-netty4/licenses/netty-buffer-4.1.112.Final.jar.sha1 delete mode 100644 plugins/transport-reactor-netty4/licenses/netty-codec-4.1.111.Final.jar.sha1 create mode 100644 plugins/transport-reactor-netty4/licenses/netty-codec-4.1.112.Final.jar.sha1 delete mode 100644 plugins/transport-reactor-netty4/licenses/netty-codec-dns-4.1.111.Final.jar.sha1 create mode 100644 plugins/transport-reactor-netty4/licenses/netty-codec-dns-4.1.112.Final.jar.sha1 delete mode 100644 plugins/transport-reactor-netty4/licenses/netty-codec-http-4.1.111.Final.jar.sha1 create mode 100644 plugins/transport-reactor-netty4/licenses/netty-codec-http-4.1.112.Final.jar.sha1 delete mode 100644 plugins/transport-reactor-netty4/licenses/netty-codec-http2-4.1.111.Final.jar.sha1 create mode 100644 plugins/transport-reactor-netty4/licenses/netty-codec-http2-4.1.112.Final.jar.sha1 delete mode 100644 plugins/transport-reactor-netty4/licenses/netty-common-4.1.111.Final.jar.sha1 create mode 100644 plugins/transport-reactor-netty4/licenses/netty-common-4.1.112.Final.jar.sha1 delete mode 100644 plugins/transport-reactor-netty4/licenses/netty-handler-4.1.111.Final.jar.sha1 create mode 100644 plugins/transport-reactor-netty4/licenses/netty-handler-4.1.112.Final.jar.sha1 delete mode 100644 plugins/transport-reactor-netty4/licenses/netty-resolver-4.1.111.Final.jar.sha1 create mode 100644 plugins/transport-reactor-netty4/licenses/netty-resolver-4.1.112.Final.jar.sha1 delete mode 100644 plugins/transport-reactor-netty4/licenses/netty-resolver-dns-4.1.111.Final.jar.sha1 create mode 100644 plugins/transport-reactor-netty4/licenses/netty-resolver-dns-4.1.112.Final.jar.sha1 delete mode 100644 
plugins/transport-reactor-netty4/licenses/netty-transport-4.1.111.Final.jar.sha1 create mode 100644 plugins/transport-reactor-netty4/licenses/netty-transport-4.1.112.Final.jar.sha1 delete mode 100644 plugins/transport-reactor-netty4/licenses/netty-transport-native-unix-common-4.1.111.Final.jar.sha1 create mode 100644 plugins/transport-reactor-netty4/licenses/netty-transport-native-unix-common-4.1.112.Final.jar.sha1 diff --git a/CHANGELOG.md b/CHANGELOG.md index c1846bd5e7cfd..c240cf26627cd 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -14,6 +14,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), - Add ThreadContextPermission for stashAndMergeHeaders and stashWithOrigin ([#15039](https://github.com/opensearch-project/OpenSearch/pull/15039)) ### Dependencies +- Bump `netty` from 4.1.111.Final to 4.1.112.Final ([#15081](https://github.com/opensearch-project/OpenSearch/pull/15081)) - Bump `org.apache.commons:commons-lang3` from 3.14.0 to 3.15.0 ([#14861](https://github.com/opensearch-project/OpenSearch/pull/14861)) - OpenJDK Update (July 2024 Patch releases) ([#14998](https://github.com/opensearch-project/OpenSearch/pull/14998)) - Bump `com.microsoft.azure:msal4j` from 1.16.1 to 1.16.2 ([#14995](https://github.com/opensearch-project/OpenSearch/pull/14995)) diff --git a/buildSrc/version.properties b/buildSrc/version.properties index eb67af909bccf..08c45ef058716 100644 --- a/buildSrc/version.properties +++ b/buildSrc/version.properties @@ -29,7 +29,7 @@ hdrhistogram = 2.2.2 # when updating the JNA version, also update the version in buildSrc/build.gradle jna = 5.13.0 -netty = 4.1.111.Final +netty = 4.1.112.Final joda = 2.12.7 # project reactor diff --git a/modules/transport-netty4/licenses/netty-buffer-4.1.111.Final.jar.sha1 b/modules/transport-netty4/licenses/netty-buffer-4.1.111.Final.jar.sha1 deleted file mode 100644 index 6784ac6c3b64f..0000000000000 --- a/modules/transport-netty4/licenses/netty-buffer-4.1.111.Final.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -b54863f578939e135d3b3aea610284ae57c188cf \ No newline at end of file diff --git a/modules/transport-netty4/licenses/netty-buffer-4.1.112.Final.jar.sha1 b/modules/transport-netty4/licenses/netty-buffer-4.1.112.Final.jar.sha1 new file mode 100644 index 0000000000000..5c26883046fed --- /dev/null +++ b/modules/transport-netty4/licenses/netty-buffer-4.1.112.Final.jar.sha1 @@ -0,0 +1 @@ +bdc12df04bb6858890b8aa108060b5b365a26102 \ No newline at end of file diff --git a/modules/transport-netty4/licenses/netty-codec-4.1.111.Final.jar.sha1 b/modules/transport-netty4/licenses/netty-codec-4.1.111.Final.jar.sha1 deleted file mode 100644 index 3d86194de9213..0000000000000 --- a/modules/transport-netty4/licenses/netty-codec-4.1.111.Final.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -a6762ec00a6d268f9980741f5b755838bcd658bf \ No newline at end of file diff --git a/modules/transport-netty4/licenses/netty-codec-4.1.112.Final.jar.sha1 b/modules/transport-netty4/licenses/netty-codec-4.1.112.Final.jar.sha1 new file mode 100644 index 0000000000000..1fd224fdd0b44 --- /dev/null +++ b/modules/transport-netty4/licenses/netty-codec-4.1.112.Final.jar.sha1 @@ -0,0 +1 @@ +c87f2ec3d9a97bd2b793d16817abb2bab93a7fc3 \ No newline at end of file diff --git a/modules/transport-netty4/licenses/netty-codec-http-4.1.111.Final.jar.sha1 b/modules/transport-netty4/licenses/netty-codec-http-4.1.111.Final.jar.sha1 deleted file mode 100644 index 4ef1adb818300..0000000000000 --- 
a/modules/transport-netty4/licenses/netty-codec-http-4.1.111.Final.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -c6ecbc452321e632bf3cea0f9758839b650455c7 \ No newline at end of file diff --git a/modules/transport-netty4/licenses/netty-codec-http-4.1.112.Final.jar.sha1 b/modules/transport-netty4/licenses/netty-codec-http-4.1.112.Final.jar.sha1 new file mode 100644 index 0000000000000..22d35128c3ad5 --- /dev/null +++ b/modules/transport-netty4/licenses/netty-codec-http-4.1.112.Final.jar.sha1 @@ -0,0 +1 @@ +81af1040bfa977f98dd0e1bd9639513ea862ca04 \ No newline at end of file diff --git a/modules/transport-netty4/licenses/netty-codec-http2-4.1.111.Final.jar.sha1 b/modules/transport-netty4/licenses/netty-codec-http2-4.1.111.Final.jar.sha1 deleted file mode 100644 index 06c86b8fda557..0000000000000 --- a/modules/transport-netty4/licenses/netty-codec-http2-4.1.111.Final.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -f0cca5df75bfb4f858d0435f601d8b1cae1de054 \ No newline at end of file diff --git a/modules/transport-netty4/licenses/netty-codec-http2-4.1.112.Final.jar.sha1 b/modules/transport-netty4/licenses/netty-codec-http2-4.1.112.Final.jar.sha1 new file mode 100644 index 0000000000000..d4767d06b22bf --- /dev/null +++ b/modules/transport-netty4/licenses/netty-codec-http2-4.1.112.Final.jar.sha1 @@ -0,0 +1 @@ +7fa28b510f0f16f4d5d7188b86bef59e048f62f9 \ No newline at end of file diff --git a/modules/transport-netty4/licenses/netty-common-4.1.111.Final.jar.sha1 b/modules/transport-netty4/licenses/netty-common-4.1.111.Final.jar.sha1 deleted file mode 100644 index 16cb1cce7f504..0000000000000 --- a/modules/transport-netty4/licenses/netty-common-4.1.111.Final.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -58210befcb31adbcadd5724966a061444db91863 \ No newline at end of file diff --git a/modules/transport-netty4/licenses/netty-common-4.1.112.Final.jar.sha1 b/modules/transport-netty4/licenses/netty-common-4.1.112.Final.jar.sha1 new file mode 100644 index 0000000000000..47af3100f0f2d --- /dev/null +++ b/modules/transport-netty4/licenses/netty-common-4.1.112.Final.jar.sha1 @@ -0,0 +1 @@ +b2798069092a981a832b7510d0462ee9efb7a80e \ No newline at end of file diff --git a/modules/transport-netty4/licenses/netty-handler-4.1.111.Final.jar.sha1 b/modules/transport-netty4/licenses/netty-handler-4.1.111.Final.jar.sha1 deleted file mode 100644 index 2f70f791f65ed..0000000000000 --- a/modules/transport-netty4/licenses/netty-handler-4.1.111.Final.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -2bc6a58ad2e9e279634b6e55022e8dcd3c175cc4 \ No newline at end of file diff --git a/modules/transport-netty4/licenses/netty-handler-4.1.112.Final.jar.sha1 b/modules/transport-netty4/licenses/netty-handler-4.1.112.Final.jar.sha1 new file mode 100644 index 0000000000000..8b30272861770 --- /dev/null +++ b/modules/transport-netty4/licenses/netty-handler-4.1.112.Final.jar.sha1 @@ -0,0 +1 @@ +3d5e2d5bcc6baeeb8c13a230980c6132a778e036 \ No newline at end of file diff --git a/modules/transport-netty4/licenses/netty-resolver-4.1.111.Final.jar.sha1 b/modules/transport-netty4/licenses/netty-resolver-4.1.111.Final.jar.sha1 deleted file mode 100644 index 621cbf58f3133..0000000000000 --- a/modules/transport-netty4/licenses/netty-resolver-4.1.111.Final.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -3493179999f211dc49714319f81da2be86523a3b \ No newline at end of file diff --git a/modules/transport-netty4/licenses/netty-resolver-4.1.112.Final.jar.sha1 b/modules/transport-netty4/licenses/netty-resolver-4.1.112.Final.jar.sha1 new file mode 100644 index 0000000000000..1a094fa19a623 --- /dev/null 
+++ b/modules/transport-netty4/licenses/netty-resolver-4.1.112.Final.jar.sha1 @@ -0,0 +1 @@ +58a631d9d44c4ed7cc0dcc9cffa6641da9374d72 \ No newline at end of file diff --git a/modules/transport-netty4/licenses/netty-transport-4.1.111.Final.jar.sha1 b/modules/transport-netty4/licenses/netty-transport-4.1.111.Final.jar.sha1 deleted file mode 100644 index ac96e7545ed58..0000000000000 --- a/modules/transport-netty4/licenses/netty-transport-4.1.111.Final.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -24e97cf14ea9d80afe4c5ab69066b587fccc154a \ No newline at end of file diff --git a/modules/transport-netty4/licenses/netty-transport-4.1.112.Final.jar.sha1 b/modules/transport-netty4/licenses/netty-transport-4.1.112.Final.jar.sha1 new file mode 100644 index 0000000000000..5fbfde0836e0c --- /dev/null +++ b/modules/transport-netty4/licenses/netty-transport-4.1.112.Final.jar.sha1 @@ -0,0 +1 @@ +77cd136dd3843f5e7cbcf68c824975d745c49ddb \ No newline at end of file diff --git a/modules/transport-netty4/licenses/netty-transport-native-unix-common-4.1.111.Final.jar.sha1 b/modules/transport-netty4/licenses/netty-transport-native-unix-common-4.1.111.Final.jar.sha1 deleted file mode 100644 index 0847ac3034db7..0000000000000 --- a/modules/transport-netty4/licenses/netty-transport-native-unix-common-4.1.111.Final.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -acafc128cddafa021bc0b48b0788eb0e118add5e \ No newline at end of file diff --git a/modules/transport-netty4/licenses/netty-transport-native-unix-common-4.1.112.Final.jar.sha1 b/modules/transport-netty4/licenses/netty-transport-native-unix-common-4.1.112.Final.jar.sha1 new file mode 100644 index 0000000000000..8dad0e3104dc8 --- /dev/null +++ b/modules/transport-netty4/licenses/netty-transport-native-unix-common-4.1.112.Final.jar.sha1 @@ -0,0 +1 @@ +b50ff619cdcdc48e748cba3405c9988529f28f60 \ No newline at end of file diff --git a/plugins/repository-azure/licenses/netty-codec-dns-4.1.111.Final.jar.sha1 b/plugins/repository-azure/licenses/netty-codec-dns-4.1.111.Final.jar.sha1 deleted file mode 100644 index 5e3f819012811..0000000000000 --- a/plugins/repository-azure/licenses/netty-codec-dns-4.1.111.Final.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -f988dbb527efb0e7cf7d444cc50b0fc3f5f380ec \ No newline at end of file diff --git a/plugins/repository-azure/licenses/netty-codec-dns-4.1.112.Final.jar.sha1 b/plugins/repository-azure/licenses/netty-codec-dns-4.1.112.Final.jar.sha1 new file mode 100644 index 0000000000000..a42a41b6387c8 --- /dev/null +++ b/plugins/repository-azure/licenses/netty-codec-dns-4.1.112.Final.jar.sha1 @@ -0,0 +1 @@ +06724b184ee870ecc4d8fc36931beeb3c387b0ee \ No newline at end of file diff --git a/plugins/repository-azure/licenses/netty-codec-http2-4.1.111.Final.jar.sha1 b/plugins/repository-azure/licenses/netty-codec-http2-4.1.111.Final.jar.sha1 deleted file mode 100644 index 06c86b8fda557..0000000000000 --- a/plugins/repository-azure/licenses/netty-codec-http2-4.1.111.Final.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -f0cca5df75bfb4f858d0435f601d8b1cae1de054 \ No newline at end of file diff --git a/plugins/repository-azure/licenses/netty-codec-http2-4.1.112.Final.jar.sha1 b/plugins/repository-azure/licenses/netty-codec-http2-4.1.112.Final.jar.sha1 new file mode 100644 index 0000000000000..d4767d06b22bf --- /dev/null +++ b/plugins/repository-azure/licenses/netty-codec-http2-4.1.112.Final.jar.sha1 @@ -0,0 +1 @@ +7fa28b510f0f16f4d5d7188b86bef59e048f62f9 \ No newline at end of file diff --git a/plugins/repository-azure/licenses/netty-codec-socks-4.1.111.Final.jar.sha1 
b/plugins/repository-azure/licenses/netty-codec-socks-4.1.111.Final.jar.sha1 deleted file mode 100644 index 226ee06d39d6c..0000000000000 --- a/plugins/repository-azure/licenses/netty-codec-socks-4.1.111.Final.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -ea52ef6617a9b69b0baaebb7f0b80373527f9607 \ No newline at end of file diff --git a/plugins/repository-azure/licenses/netty-codec-socks-4.1.112.Final.jar.sha1 b/plugins/repository-azure/licenses/netty-codec-socks-4.1.112.Final.jar.sha1 new file mode 100644 index 0000000000000..5291a16c10448 --- /dev/null +++ b/plugins/repository-azure/licenses/netty-codec-socks-4.1.112.Final.jar.sha1 @@ -0,0 +1 @@ +9aed7e78c467d06a47a45b5b27466380a6427e2f \ No newline at end of file diff --git a/plugins/repository-azure/licenses/netty-handler-proxy-4.1.111.Final.jar.sha1 b/plugins/repository-azure/licenses/netty-handler-proxy-4.1.111.Final.jar.sha1 deleted file mode 100644 index dcc2b0c7ca923..0000000000000 --- a/plugins/repository-azure/licenses/netty-handler-proxy-4.1.111.Final.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -1e459c8630bb7c942b79a97e62dd728798de6a8c \ No newline at end of file diff --git a/plugins/repository-azure/licenses/netty-handler-proxy-4.1.112.Final.jar.sha1 b/plugins/repository-azure/licenses/netty-handler-proxy-4.1.112.Final.jar.sha1 new file mode 100644 index 0000000000000..cf50574b87da0 --- /dev/null +++ b/plugins/repository-azure/licenses/netty-handler-proxy-4.1.112.Final.jar.sha1 @@ -0,0 +1 @@ +b23c87a85451b3b0e7c3e8e89698cea6831a8418 \ No newline at end of file diff --git a/plugins/repository-azure/licenses/netty-resolver-dns-4.1.111.Final.jar.sha1 b/plugins/repository-azure/licenses/netty-resolver-dns-4.1.111.Final.jar.sha1 deleted file mode 100644 index b22ad6784809b..0000000000000 --- a/plugins/repository-azure/licenses/netty-resolver-dns-4.1.111.Final.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -5ac6a3d96935129ba45ea768ad30e31cad0d8c4d \ No newline at end of file diff --git a/plugins/repository-azure/licenses/netty-resolver-dns-4.1.112.Final.jar.sha1 b/plugins/repository-azure/licenses/netty-resolver-dns-4.1.112.Final.jar.sha1 new file mode 100644 index 0000000000000..24e8177190e04 --- /dev/null +++ b/plugins/repository-azure/licenses/netty-resolver-dns-4.1.112.Final.jar.sha1 @@ -0,0 +1 @@ +375872f1c16bb51aac016ff6ee4f5d28b1288d4d \ No newline at end of file diff --git a/plugins/repository-azure/licenses/netty-transport-native-unix-common-4.1.111.Final.jar.sha1 b/plugins/repository-azure/licenses/netty-transport-native-unix-common-4.1.111.Final.jar.sha1 deleted file mode 100644 index 0847ac3034db7..0000000000000 --- a/plugins/repository-azure/licenses/netty-transport-native-unix-common-4.1.111.Final.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -acafc128cddafa021bc0b48b0788eb0e118add5e \ No newline at end of file diff --git a/plugins/repository-azure/licenses/netty-transport-native-unix-common-4.1.112.Final.jar.sha1 b/plugins/repository-azure/licenses/netty-transport-native-unix-common-4.1.112.Final.jar.sha1 new file mode 100644 index 0000000000000..8dad0e3104dc8 --- /dev/null +++ b/plugins/repository-azure/licenses/netty-transport-native-unix-common-4.1.112.Final.jar.sha1 @@ -0,0 +1 @@ +b50ff619cdcdc48e748cba3405c9988529f28f60 \ No newline at end of file diff --git a/plugins/repository-hdfs/licenses/netty-all-4.1.111.Final.jar.sha1 b/plugins/repository-hdfs/licenses/netty-all-4.1.111.Final.jar.sha1 deleted file mode 100644 index 076124a7d1f89..0000000000000 --- a/plugins/repository-hdfs/licenses/netty-all-4.1.111.Final.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ 
-8fba10bb4911517eb1bdcc05ef392499dda4d5ac \ No newline at end of file diff --git a/plugins/repository-hdfs/licenses/netty-all-4.1.112.Final.jar.sha1 b/plugins/repository-hdfs/licenses/netty-all-4.1.112.Final.jar.sha1 new file mode 100644 index 0000000000000..7c36b789e839c --- /dev/null +++ b/plugins/repository-hdfs/licenses/netty-all-4.1.112.Final.jar.sha1 @@ -0,0 +1 @@ +d6b2e543749a86957777a46cf68aaa337cc558cb \ No newline at end of file diff --git a/plugins/repository-s3/licenses/netty-buffer-4.1.111.Final.jar.sha1 b/plugins/repository-s3/licenses/netty-buffer-4.1.111.Final.jar.sha1 deleted file mode 100644 index 6784ac6c3b64f..0000000000000 --- a/plugins/repository-s3/licenses/netty-buffer-4.1.111.Final.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -b54863f578939e135d3b3aea610284ae57c188cf \ No newline at end of file diff --git a/plugins/repository-s3/licenses/netty-buffer-4.1.112.Final.jar.sha1 b/plugins/repository-s3/licenses/netty-buffer-4.1.112.Final.jar.sha1 new file mode 100644 index 0000000000000..5c26883046fed --- /dev/null +++ b/plugins/repository-s3/licenses/netty-buffer-4.1.112.Final.jar.sha1 @@ -0,0 +1 @@ +bdc12df04bb6858890b8aa108060b5b365a26102 \ No newline at end of file diff --git a/plugins/repository-s3/licenses/netty-codec-4.1.111.Final.jar.sha1 b/plugins/repository-s3/licenses/netty-codec-4.1.111.Final.jar.sha1 deleted file mode 100644 index 3d86194de9213..0000000000000 --- a/plugins/repository-s3/licenses/netty-codec-4.1.111.Final.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -a6762ec00a6d268f9980741f5b755838bcd658bf \ No newline at end of file diff --git a/plugins/repository-s3/licenses/netty-codec-4.1.112.Final.jar.sha1 b/plugins/repository-s3/licenses/netty-codec-4.1.112.Final.jar.sha1 new file mode 100644 index 0000000000000..1fd224fdd0b44 --- /dev/null +++ b/plugins/repository-s3/licenses/netty-codec-4.1.112.Final.jar.sha1 @@ -0,0 +1 @@ +c87f2ec3d9a97bd2b793d16817abb2bab93a7fc3 \ No newline at end of file diff --git a/plugins/repository-s3/licenses/netty-codec-http-4.1.111.Final.jar.sha1 b/plugins/repository-s3/licenses/netty-codec-http-4.1.111.Final.jar.sha1 deleted file mode 100644 index 4ef1adb818300..0000000000000 --- a/plugins/repository-s3/licenses/netty-codec-http-4.1.111.Final.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -c6ecbc452321e632bf3cea0f9758839b650455c7 \ No newline at end of file diff --git a/plugins/repository-s3/licenses/netty-codec-http-4.1.112.Final.jar.sha1 b/plugins/repository-s3/licenses/netty-codec-http-4.1.112.Final.jar.sha1 new file mode 100644 index 0000000000000..22d35128c3ad5 --- /dev/null +++ b/plugins/repository-s3/licenses/netty-codec-http-4.1.112.Final.jar.sha1 @@ -0,0 +1 @@ +81af1040bfa977f98dd0e1bd9639513ea862ca04 \ No newline at end of file diff --git a/plugins/repository-s3/licenses/netty-codec-http2-4.1.111.Final.jar.sha1 b/plugins/repository-s3/licenses/netty-codec-http2-4.1.111.Final.jar.sha1 deleted file mode 100644 index 06c86b8fda557..0000000000000 --- a/plugins/repository-s3/licenses/netty-codec-http2-4.1.111.Final.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -f0cca5df75bfb4f858d0435f601d8b1cae1de054 \ No newline at end of file diff --git a/plugins/repository-s3/licenses/netty-codec-http2-4.1.112.Final.jar.sha1 b/plugins/repository-s3/licenses/netty-codec-http2-4.1.112.Final.jar.sha1 new file mode 100644 index 0000000000000..d4767d06b22bf --- /dev/null +++ b/plugins/repository-s3/licenses/netty-codec-http2-4.1.112.Final.jar.sha1 @@ -0,0 +1 @@ +7fa28b510f0f16f4d5d7188b86bef59e048f62f9 \ No newline at end of file diff --git 
a/plugins/repository-s3/licenses/netty-common-4.1.111.Final.jar.sha1 b/plugins/repository-s3/licenses/netty-common-4.1.111.Final.jar.sha1 deleted file mode 100644 index 16cb1cce7f504..0000000000000 --- a/plugins/repository-s3/licenses/netty-common-4.1.111.Final.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -58210befcb31adbcadd5724966a061444db91863 \ No newline at end of file diff --git a/plugins/repository-s3/licenses/netty-common-4.1.112.Final.jar.sha1 b/plugins/repository-s3/licenses/netty-common-4.1.112.Final.jar.sha1 new file mode 100644 index 0000000000000..47af3100f0f2d --- /dev/null +++ b/plugins/repository-s3/licenses/netty-common-4.1.112.Final.jar.sha1 @@ -0,0 +1 @@ +b2798069092a981a832b7510d0462ee9efb7a80e \ No newline at end of file diff --git a/plugins/repository-s3/licenses/netty-handler-4.1.111.Final.jar.sha1 b/plugins/repository-s3/licenses/netty-handler-4.1.111.Final.jar.sha1 deleted file mode 100644 index 2f70f791f65ed..0000000000000 --- a/plugins/repository-s3/licenses/netty-handler-4.1.111.Final.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -2bc6a58ad2e9e279634b6e55022e8dcd3c175cc4 \ No newline at end of file diff --git a/plugins/repository-s3/licenses/netty-handler-4.1.112.Final.jar.sha1 b/plugins/repository-s3/licenses/netty-handler-4.1.112.Final.jar.sha1 new file mode 100644 index 0000000000000..8b30272861770 --- /dev/null +++ b/plugins/repository-s3/licenses/netty-handler-4.1.112.Final.jar.sha1 @@ -0,0 +1 @@ +3d5e2d5bcc6baeeb8c13a230980c6132a778e036 \ No newline at end of file diff --git a/plugins/repository-s3/licenses/netty-resolver-4.1.111.Final.jar.sha1 b/plugins/repository-s3/licenses/netty-resolver-4.1.111.Final.jar.sha1 deleted file mode 100644 index 621cbf58f3133..0000000000000 --- a/plugins/repository-s3/licenses/netty-resolver-4.1.111.Final.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -3493179999f211dc49714319f81da2be86523a3b \ No newline at end of file diff --git a/plugins/repository-s3/licenses/netty-resolver-4.1.112.Final.jar.sha1 b/plugins/repository-s3/licenses/netty-resolver-4.1.112.Final.jar.sha1 new file mode 100644 index 0000000000000..1a094fa19a623 --- /dev/null +++ b/plugins/repository-s3/licenses/netty-resolver-4.1.112.Final.jar.sha1 @@ -0,0 +1 @@ +58a631d9d44c4ed7cc0dcc9cffa6641da9374d72 \ No newline at end of file diff --git a/plugins/repository-s3/licenses/netty-transport-4.1.111.Final.jar.sha1 b/plugins/repository-s3/licenses/netty-transport-4.1.111.Final.jar.sha1 deleted file mode 100644 index ac96e7545ed58..0000000000000 --- a/plugins/repository-s3/licenses/netty-transport-4.1.111.Final.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -24e97cf14ea9d80afe4c5ab69066b587fccc154a \ No newline at end of file diff --git a/plugins/repository-s3/licenses/netty-transport-4.1.112.Final.jar.sha1 b/plugins/repository-s3/licenses/netty-transport-4.1.112.Final.jar.sha1 new file mode 100644 index 0000000000000..5fbfde0836e0c --- /dev/null +++ b/plugins/repository-s3/licenses/netty-transport-4.1.112.Final.jar.sha1 @@ -0,0 +1 @@ +77cd136dd3843f5e7cbcf68c824975d745c49ddb \ No newline at end of file diff --git a/plugins/repository-s3/licenses/netty-transport-classes-epoll-4.1.111.Final.jar.sha1 b/plugins/repository-s3/licenses/netty-transport-classes-epoll-4.1.111.Final.jar.sha1 deleted file mode 100644 index 97001777eadf5..0000000000000 --- a/plugins/repository-s3/licenses/netty-transport-classes-epoll-4.1.111.Final.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -8b97d32eb1489043e478deea99bd93ce487b82f6 \ No newline at end of file diff --git 
a/plugins/repository-s3/licenses/netty-transport-classes-epoll-4.1.112.Final.jar.sha1 b/plugins/repository-s3/licenses/netty-transport-classes-epoll-4.1.112.Final.jar.sha1 new file mode 100644 index 0000000000000..0196dacfe92ba --- /dev/null +++ b/plugins/repository-s3/licenses/netty-transport-classes-epoll-4.1.112.Final.jar.sha1 @@ -0,0 +1 @@ +67e590356eb53c20aaabd67f61ae66f628e62e3d \ No newline at end of file diff --git a/plugins/repository-s3/licenses/netty-transport-native-unix-common-4.1.111.Final.jar.sha1 b/plugins/repository-s3/licenses/netty-transport-native-unix-common-4.1.111.Final.jar.sha1 deleted file mode 100644 index 0847ac3034db7..0000000000000 --- a/plugins/repository-s3/licenses/netty-transport-native-unix-common-4.1.111.Final.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -acafc128cddafa021bc0b48b0788eb0e118add5e \ No newline at end of file diff --git a/plugins/repository-s3/licenses/netty-transport-native-unix-common-4.1.112.Final.jar.sha1 b/plugins/repository-s3/licenses/netty-transport-native-unix-common-4.1.112.Final.jar.sha1 new file mode 100644 index 0000000000000..8dad0e3104dc8 --- /dev/null +++ b/plugins/repository-s3/licenses/netty-transport-native-unix-common-4.1.112.Final.jar.sha1 @@ -0,0 +1 @@ +b50ff619cdcdc48e748cba3405c9988529f28f60 \ No newline at end of file diff --git a/plugins/transport-nio/licenses/netty-buffer-4.1.111.Final.jar.sha1 b/plugins/transport-nio/licenses/netty-buffer-4.1.111.Final.jar.sha1 deleted file mode 100644 index 6784ac6c3b64f..0000000000000 --- a/plugins/transport-nio/licenses/netty-buffer-4.1.111.Final.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -b54863f578939e135d3b3aea610284ae57c188cf \ No newline at end of file diff --git a/plugins/transport-nio/licenses/netty-buffer-4.1.112.Final.jar.sha1 b/plugins/transport-nio/licenses/netty-buffer-4.1.112.Final.jar.sha1 new file mode 100644 index 0000000000000..5c26883046fed --- /dev/null +++ b/plugins/transport-nio/licenses/netty-buffer-4.1.112.Final.jar.sha1 @@ -0,0 +1 @@ +bdc12df04bb6858890b8aa108060b5b365a26102 \ No newline at end of file diff --git a/plugins/transport-nio/licenses/netty-codec-4.1.111.Final.jar.sha1 b/plugins/transport-nio/licenses/netty-codec-4.1.111.Final.jar.sha1 deleted file mode 100644 index 3d86194de9213..0000000000000 --- a/plugins/transport-nio/licenses/netty-codec-4.1.111.Final.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -a6762ec00a6d268f9980741f5b755838bcd658bf \ No newline at end of file diff --git a/plugins/transport-nio/licenses/netty-codec-4.1.112.Final.jar.sha1 b/plugins/transport-nio/licenses/netty-codec-4.1.112.Final.jar.sha1 new file mode 100644 index 0000000000000..1fd224fdd0b44 --- /dev/null +++ b/plugins/transport-nio/licenses/netty-codec-4.1.112.Final.jar.sha1 @@ -0,0 +1 @@ +c87f2ec3d9a97bd2b793d16817abb2bab93a7fc3 \ No newline at end of file diff --git a/plugins/transport-nio/licenses/netty-codec-http-4.1.111.Final.jar.sha1 b/plugins/transport-nio/licenses/netty-codec-http-4.1.111.Final.jar.sha1 deleted file mode 100644 index 4ef1adb818300..0000000000000 --- a/plugins/transport-nio/licenses/netty-codec-http-4.1.111.Final.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -c6ecbc452321e632bf3cea0f9758839b650455c7 \ No newline at end of file diff --git a/plugins/transport-nio/licenses/netty-codec-http-4.1.112.Final.jar.sha1 b/plugins/transport-nio/licenses/netty-codec-http-4.1.112.Final.jar.sha1 new file mode 100644 index 0000000000000..22d35128c3ad5 --- /dev/null +++ b/plugins/transport-nio/licenses/netty-codec-http-4.1.112.Final.jar.sha1 @@ -0,0 +1 @@ 
+81af1040bfa977f98dd0e1bd9639513ea862ca04 \ No newline at end of file diff --git a/plugins/transport-nio/licenses/netty-common-4.1.111.Final.jar.sha1 b/plugins/transport-nio/licenses/netty-common-4.1.111.Final.jar.sha1 deleted file mode 100644 index 16cb1cce7f504..0000000000000 --- a/plugins/transport-nio/licenses/netty-common-4.1.111.Final.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -58210befcb31adbcadd5724966a061444db91863 \ No newline at end of file diff --git a/plugins/transport-nio/licenses/netty-common-4.1.112.Final.jar.sha1 b/plugins/transport-nio/licenses/netty-common-4.1.112.Final.jar.sha1 new file mode 100644 index 0000000000000..47af3100f0f2d --- /dev/null +++ b/plugins/transport-nio/licenses/netty-common-4.1.112.Final.jar.sha1 @@ -0,0 +1 @@ +b2798069092a981a832b7510d0462ee9efb7a80e \ No newline at end of file diff --git a/plugins/transport-nio/licenses/netty-handler-4.1.111.Final.jar.sha1 b/plugins/transport-nio/licenses/netty-handler-4.1.111.Final.jar.sha1 deleted file mode 100644 index 2f70f791f65ed..0000000000000 --- a/plugins/transport-nio/licenses/netty-handler-4.1.111.Final.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -2bc6a58ad2e9e279634b6e55022e8dcd3c175cc4 \ No newline at end of file diff --git a/plugins/transport-nio/licenses/netty-handler-4.1.112.Final.jar.sha1 b/plugins/transport-nio/licenses/netty-handler-4.1.112.Final.jar.sha1 new file mode 100644 index 0000000000000..8b30272861770 --- /dev/null +++ b/plugins/transport-nio/licenses/netty-handler-4.1.112.Final.jar.sha1 @@ -0,0 +1 @@ +3d5e2d5bcc6baeeb8c13a230980c6132a778e036 \ No newline at end of file diff --git a/plugins/transport-nio/licenses/netty-resolver-4.1.111.Final.jar.sha1 b/plugins/transport-nio/licenses/netty-resolver-4.1.111.Final.jar.sha1 deleted file mode 100644 index 621cbf58f3133..0000000000000 --- a/plugins/transport-nio/licenses/netty-resolver-4.1.111.Final.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -3493179999f211dc49714319f81da2be86523a3b \ No newline at end of file diff --git a/plugins/transport-nio/licenses/netty-resolver-4.1.112.Final.jar.sha1 b/plugins/transport-nio/licenses/netty-resolver-4.1.112.Final.jar.sha1 new file mode 100644 index 0000000000000..1a094fa19a623 --- /dev/null +++ b/plugins/transport-nio/licenses/netty-resolver-4.1.112.Final.jar.sha1 @@ -0,0 +1 @@ +58a631d9d44c4ed7cc0dcc9cffa6641da9374d72 \ No newline at end of file diff --git a/plugins/transport-nio/licenses/netty-transport-4.1.111.Final.jar.sha1 b/plugins/transport-nio/licenses/netty-transport-4.1.111.Final.jar.sha1 deleted file mode 100644 index ac96e7545ed58..0000000000000 --- a/plugins/transport-nio/licenses/netty-transport-4.1.111.Final.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -24e97cf14ea9d80afe4c5ab69066b587fccc154a \ No newline at end of file diff --git a/plugins/transport-nio/licenses/netty-transport-4.1.112.Final.jar.sha1 b/plugins/transport-nio/licenses/netty-transport-4.1.112.Final.jar.sha1 new file mode 100644 index 0000000000000..5fbfde0836e0c --- /dev/null +++ b/plugins/transport-nio/licenses/netty-transport-4.1.112.Final.jar.sha1 @@ -0,0 +1 @@ +77cd136dd3843f5e7cbcf68c824975d745c49ddb \ No newline at end of file diff --git a/plugins/transport-reactor-netty4/licenses/netty-buffer-4.1.111.Final.jar.sha1 b/plugins/transport-reactor-netty4/licenses/netty-buffer-4.1.111.Final.jar.sha1 deleted file mode 100644 index 6784ac6c3b64f..0000000000000 --- a/plugins/transport-reactor-netty4/licenses/netty-buffer-4.1.111.Final.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -b54863f578939e135d3b3aea610284ae57c188cf \ No newline at end of file diff --git 
a/plugins/transport-reactor-netty4/licenses/netty-buffer-4.1.112.Final.jar.sha1 b/plugins/transport-reactor-netty4/licenses/netty-buffer-4.1.112.Final.jar.sha1 new file mode 100644 index 0000000000000..5c26883046fed --- /dev/null +++ b/plugins/transport-reactor-netty4/licenses/netty-buffer-4.1.112.Final.jar.sha1 @@ -0,0 +1 @@ +bdc12df04bb6858890b8aa108060b5b365a26102 \ No newline at end of file diff --git a/plugins/transport-reactor-netty4/licenses/netty-codec-4.1.111.Final.jar.sha1 b/plugins/transport-reactor-netty4/licenses/netty-codec-4.1.111.Final.jar.sha1 deleted file mode 100644 index 3d86194de9213..0000000000000 --- a/plugins/transport-reactor-netty4/licenses/netty-codec-4.1.111.Final.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -a6762ec00a6d268f9980741f5b755838bcd658bf \ No newline at end of file diff --git a/plugins/transport-reactor-netty4/licenses/netty-codec-4.1.112.Final.jar.sha1 b/plugins/transport-reactor-netty4/licenses/netty-codec-4.1.112.Final.jar.sha1 new file mode 100644 index 0000000000000..1fd224fdd0b44 --- /dev/null +++ b/plugins/transport-reactor-netty4/licenses/netty-codec-4.1.112.Final.jar.sha1 @@ -0,0 +1 @@ +c87f2ec3d9a97bd2b793d16817abb2bab93a7fc3 \ No newline at end of file diff --git a/plugins/transport-reactor-netty4/licenses/netty-codec-dns-4.1.111.Final.jar.sha1 b/plugins/transport-reactor-netty4/licenses/netty-codec-dns-4.1.111.Final.jar.sha1 deleted file mode 100644 index 5e3f819012811..0000000000000 --- a/plugins/transport-reactor-netty4/licenses/netty-codec-dns-4.1.111.Final.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -f988dbb527efb0e7cf7d444cc50b0fc3f5f380ec \ No newline at end of file diff --git a/plugins/transport-reactor-netty4/licenses/netty-codec-dns-4.1.112.Final.jar.sha1 b/plugins/transport-reactor-netty4/licenses/netty-codec-dns-4.1.112.Final.jar.sha1 new file mode 100644 index 0000000000000..a42a41b6387c8 --- /dev/null +++ b/plugins/transport-reactor-netty4/licenses/netty-codec-dns-4.1.112.Final.jar.sha1 @@ -0,0 +1 @@ +06724b184ee870ecc4d8fc36931beeb3c387b0ee \ No newline at end of file diff --git a/plugins/transport-reactor-netty4/licenses/netty-codec-http-4.1.111.Final.jar.sha1 b/plugins/transport-reactor-netty4/licenses/netty-codec-http-4.1.111.Final.jar.sha1 deleted file mode 100644 index 4ef1adb818300..0000000000000 --- a/plugins/transport-reactor-netty4/licenses/netty-codec-http-4.1.111.Final.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -c6ecbc452321e632bf3cea0f9758839b650455c7 \ No newline at end of file diff --git a/plugins/transport-reactor-netty4/licenses/netty-codec-http-4.1.112.Final.jar.sha1 b/plugins/transport-reactor-netty4/licenses/netty-codec-http-4.1.112.Final.jar.sha1 new file mode 100644 index 0000000000000..22d35128c3ad5 --- /dev/null +++ b/plugins/transport-reactor-netty4/licenses/netty-codec-http-4.1.112.Final.jar.sha1 @@ -0,0 +1 @@ +81af1040bfa977f98dd0e1bd9639513ea862ca04 \ No newline at end of file diff --git a/plugins/transport-reactor-netty4/licenses/netty-codec-http2-4.1.111.Final.jar.sha1 b/plugins/transport-reactor-netty4/licenses/netty-codec-http2-4.1.111.Final.jar.sha1 deleted file mode 100644 index 06c86b8fda557..0000000000000 --- a/plugins/transport-reactor-netty4/licenses/netty-codec-http2-4.1.111.Final.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -f0cca5df75bfb4f858d0435f601d8b1cae1de054 \ No newline at end of file diff --git a/plugins/transport-reactor-netty4/licenses/netty-codec-http2-4.1.112.Final.jar.sha1 b/plugins/transport-reactor-netty4/licenses/netty-codec-http2-4.1.112.Final.jar.sha1 new file mode 100644 index 
0000000000000..d4767d06b22bf --- /dev/null +++ b/plugins/transport-reactor-netty4/licenses/netty-codec-http2-4.1.112.Final.jar.sha1 @@ -0,0 +1 @@ +7fa28b510f0f16f4d5d7188b86bef59e048f62f9 \ No newline at end of file diff --git a/plugins/transport-reactor-netty4/licenses/netty-common-4.1.111.Final.jar.sha1 b/plugins/transport-reactor-netty4/licenses/netty-common-4.1.111.Final.jar.sha1 deleted file mode 100644 index 16cb1cce7f504..0000000000000 --- a/plugins/transport-reactor-netty4/licenses/netty-common-4.1.111.Final.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -58210befcb31adbcadd5724966a061444db91863 \ No newline at end of file diff --git a/plugins/transport-reactor-netty4/licenses/netty-common-4.1.112.Final.jar.sha1 b/plugins/transport-reactor-netty4/licenses/netty-common-4.1.112.Final.jar.sha1 new file mode 100644 index 0000000000000..47af3100f0f2d --- /dev/null +++ b/plugins/transport-reactor-netty4/licenses/netty-common-4.1.112.Final.jar.sha1 @@ -0,0 +1 @@ +b2798069092a981a832b7510d0462ee9efb7a80e \ No newline at end of file diff --git a/plugins/transport-reactor-netty4/licenses/netty-handler-4.1.111.Final.jar.sha1 b/plugins/transport-reactor-netty4/licenses/netty-handler-4.1.111.Final.jar.sha1 deleted file mode 100644 index 2f70f791f65ed..0000000000000 --- a/plugins/transport-reactor-netty4/licenses/netty-handler-4.1.111.Final.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -2bc6a58ad2e9e279634b6e55022e8dcd3c175cc4 \ No newline at end of file diff --git a/plugins/transport-reactor-netty4/licenses/netty-handler-4.1.112.Final.jar.sha1 b/plugins/transport-reactor-netty4/licenses/netty-handler-4.1.112.Final.jar.sha1 new file mode 100644 index 0000000000000..8b30272861770 --- /dev/null +++ b/plugins/transport-reactor-netty4/licenses/netty-handler-4.1.112.Final.jar.sha1 @@ -0,0 +1 @@ +3d5e2d5bcc6baeeb8c13a230980c6132a778e036 \ No newline at end of file diff --git a/plugins/transport-reactor-netty4/licenses/netty-resolver-4.1.111.Final.jar.sha1 b/plugins/transport-reactor-netty4/licenses/netty-resolver-4.1.111.Final.jar.sha1 deleted file mode 100644 index 621cbf58f3133..0000000000000 --- a/plugins/transport-reactor-netty4/licenses/netty-resolver-4.1.111.Final.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -3493179999f211dc49714319f81da2be86523a3b \ No newline at end of file diff --git a/plugins/transport-reactor-netty4/licenses/netty-resolver-4.1.112.Final.jar.sha1 b/plugins/transport-reactor-netty4/licenses/netty-resolver-4.1.112.Final.jar.sha1 new file mode 100644 index 0000000000000..1a094fa19a623 --- /dev/null +++ b/plugins/transport-reactor-netty4/licenses/netty-resolver-4.1.112.Final.jar.sha1 @@ -0,0 +1 @@ +58a631d9d44c4ed7cc0dcc9cffa6641da9374d72 \ No newline at end of file diff --git a/plugins/transport-reactor-netty4/licenses/netty-resolver-dns-4.1.111.Final.jar.sha1 b/plugins/transport-reactor-netty4/licenses/netty-resolver-dns-4.1.111.Final.jar.sha1 deleted file mode 100644 index b22ad6784809b..0000000000000 --- a/plugins/transport-reactor-netty4/licenses/netty-resolver-dns-4.1.111.Final.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -5ac6a3d96935129ba45ea768ad30e31cad0d8c4d \ No newline at end of file diff --git a/plugins/transport-reactor-netty4/licenses/netty-resolver-dns-4.1.112.Final.jar.sha1 b/plugins/transport-reactor-netty4/licenses/netty-resolver-dns-4.1.112.Final.jar.sha1 new file mode 100644 index 0000000000000..24e8177190e04 --- /dev/null +++ b/plugins/transport-reactor-netty4/licenses/netty-resolver-dns-4.1.112.Final.jar.sha1 @@ -0,0 +1 @@ +375872f1c16bb51aac016ff6ee4f5d28b1288d4d \ No newline at end of 
file diff --git a/plugins/transport-reactor-netty4/licenses/netty-transport-4.1.111.Final.jar.sha1 b/plugins/transport-reactor-netty4/licenses/netty-transport-4.1.111.Final.jar.sha1 deleted file mode 100644 index ac96e7545ed58..0000000000000 --- a/plugins/transport-reactor-netty4/licenses/netty-transport-4.1.111.Final.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -24e97cf14ea9d80afe4c5ab69066b587fccc154a \ No newline at end of file diff --git a/plugins/transport-reactor-netty4/licenses/netty-transport-4.1.112.Final.jar.sha1 b/plugins/transport-reactor-netty4/licenses/netty-transport-4.1.112.Final.jar.sha1 new file mode 100644 index 0000000000000..5fbfde0836e0c --- /dev/null +++ b/plugins/transport-reactor-netty4/licenses/netty-transport-4.1.112.Final.jar.sha1 @@ -0,0 +1 @@ +77cd136dd3843f5e7cbcf68c824975d745c49ddb \ No newline at end of file diff --git a/plugins/transport-reactor-netty4/licenses/netty-transport-native-unix-common-4.1.111.Final.jar.sha1 b/plugins/transport-reactor-netty4/licenses/netty-transport-native-unix-common-4.1.111.Final.jar.sha1 deleted file mode 100644 index 0847ac3034db7..0000000000000 --- a/plugins/transport-reactor-netty4/licenses/netty-transport-native-unix-common-4.1.111.Final.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -acafc128cddafa021bc0b48b0788eb0e118add5e \ No newline at end of file diff --git a/plugins/transport-reactor-netty4/licenses/netty-transport-native-unix-common-4.1.112.Final.jar.sha1 b/plugins/transport-reactor-netty4/licenses/netty-transport-native-unix-common-4.1.112.Final.jar.sha1 new file mode 100644 index 0000000000000..8dad0e3104dc8 --- /dev/null +++ b/plugins/transport-reactor-netty4/licenses/netty-transport-native-unix-common-4.1.112.Final.jar.sha1 @@ -0,0 +1 @@ +b50ff619cdcdc48e748cba3405c9988529f28f60 \ No newline at end of file From f829a9f2a59aa2d864c197b58d8b20095d0081fb Mon Sep 17 00:00:00 2001 From: Peter Nied Date: Fri, 2 Aug 2024 12:38:15 -0500 Subject: [PATCH 149/167] Decommission the Core Triage meeting (#15085) Resolves: https://github.com/opensearch-project/OpenSearch/issues/14706 Signed-off-by: Peter Nied --- TRIAGING.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/TRIAGING.md b/TRIAGING.md index c7c07a8ce30bd..6791d5944ee6f 100644 --- a/TRIAGING.md +++ b/TRIAGING.md @@ -1,6 +1,6 @@ -The maintainers of the OpenSearch Repo seek to promote an inclusive and engaged community of contributors. In order to facilitate this, weekly triage meetings are open-to-all and attendance is encouraged for anyone who hopes to contribute, discuss an issue, or learn more about the project. There are several weekly triage meetings scoped to the following component areas: Search, Storage, Cluster Manager, and finally "Core" as a catch-all for all other issues. To learn more about contributing to the OpenSearch Repo visit the [Contributing](./CONTRIBUTING.md) documentation. +The maintainers of the OpenSearch Repo seek to promote an inclusive and engaged community of contributors. In order to facilitate this, weekly triage meetings are open-to-all and attendance is encouraged for anyone who hopes to contribute, discuss an issue, or learn more about the project. There are several weekly triage meetings scoped to the following component areas: Search, Storage, and Cluster Manager. To learn more about contributing to the OpenSearch Repo visit the [Contributing](./CONTRIBUTING.md) documentation. ### Do I need to attend for my issue to be addressed/triaged? @@ -14,7 +14,7 @@ Each meeting we seek to address all new issues. 
However, should we run out of ti ### How do I join a Triage meeting? - Check the [OpenSearch Meetup Group](https://www.meetup.com/opensearch/) for the latest schedule and details for joining each meeting. Each component area has its own meetup series: [Search](https://www.meetup.com/opensearch/events/300929493/), [Storage](https://www.meetup.com/opensearch/events/299907409/), [Cluster Manager](https://www.meetup.com/opensearch/events/301082218/), [Indexing](https://www.meetup.com/opensearch/events/301734024/), and [Core](https://www.meetup.com/opensearch/events/301061009/). + Check the [OpenSearch Meetup Group](https://www.meetup.com/opensearch/) for the latest schedule and details for joining each meeting. Each component area has its own meetup series: [Search](https://www.meetup.com/opensearch/events/300929493/), [Storage](https://www.meetup.com/opensearch/events/299907409/), [Cluster Manager](https://www.meetup.com/opensearch/events/301082218/), and [Indexing](https://www.meetup.com/opensearch/events/301734024/). After joining the virtual meeting, you can enable your video / voice to join the discussion. If you do not have a webcam or microphone available, you can still join in via the text chat. From bd226c215000866ea83f5aa872d7771d26effece Mon Sep 17 00:00:00 2001 From: kkewwei Date: Sat, 3 Aug 2024 02:51:55 +0800 Subject: [PATCH 150/167] support rangeQuery and regexpQuery in constant_keyword field type (#14711) --------- Signed-off-by: kkewwei --- CHANGELOG.md | 1 + .../test/index/110_constant_keyword.yml | 282 +++++++++++++++++- .../index/mapper/ConstantFieldType.java | 2 +- .../mapper/ConstantKeywordFieldMapper.java | 66 ++++ .../mapper/ConstantKeywordFieldTypeTests.java | 54 ++++ 5 files changed, 394 insertions(+), 11 deletions(-) diff --git a/CHANGELOG.md b/CHANGELOG.md index c240cf26627cd..dfc330cfdaed2 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -12,6 +12,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), - Add basic aggregation support for derived fields ([#14618](https://github.com/opensearch-project/OpenSearch/pull/14618)) - Add ThreadContextPermission for markAsSystemContext and allow core to perform the method ([#15016](https://github.com/opensearch-project/OpenSearch/pull/15016)) - Add ThreadContextPermission for stashAndMergeHeaders and stashWithOrigin ([#15039](https://github.com/opensearch-project/OpenSearch/pull/15039)) +- Add `rangeQuery` and `regexpQuery` for `constant_keyword` field type ([#14711](https://github.com/opensearch-project/OpenSearch/pull/14711)) ### Dependencies - Bump `netty` from 4.1.111.Final to 4.1.112.Final ([#15081](https://github.com/opensearch-project/OpenSearch/pull/15081)) diff --git a/rest-api-spec/src/main/resources/rest-api-spec/test/index/110_constant_keyword.yml b/rest-api-spec/src/main/resources/rest-api-spec/test/index/110_constant_keyword.yml index f4f8b3752bec8..1c50187534026 100644 --- a/rest-api-spec/src/main/resources/rest-api-spec/test/index/110_constant_keyword.yml +++ b/rest-api-spec/src/main/resources/rest-api-spec/test/index/110_constant_keyword.yml @@ -1,17 +1,13 @@ +# The test setup includes two parts: +# part1: test mapping and indexing +# part2: test query --- -# The test setup includes: -# - Create index with constant_keyword field type -# - Check mapping -# - Index two example documents -# - Search -# - Delete Index when connection is teardown - -"Mappings and Supported queries": +"Mappings and Indexing": - skip: version: " - 2.15.99" reason: 
"fixed in 2.16.0" - # Create index with constant_keyword field type + # Create indices with constant_keyword field type - do: indices.create: index: test @@ -22,7 +18,7 @@ type: "constant_keyword" value: "1" - # Index document + # Index documents to test integer and string are both ok. - do: index: index: test @@ -39,6 +35,7 @@ "genre": 1 } + # Refresh - do: indices.refresh: index: test @@ -54,6 +51,7 @@ # Verify Document Count - do: search: + index: test body: { query: { match_all: {} @@ -68,3 +66,267 @@ - do: indices.delete: index: test + +--- +"Queries": + - skip: + version: " - 2.99.99" + reason: "rangeQuery and regexpQuery are supported in 3.0.0 in main branch" + + - do: + indices.create: + index: test1 + body: + mappings: + properties: + genre: + type: "constant_keyword" + value: "d3efault" + + # Index documents to test query. + - do: + index: + index: test1 + id: 1 + body: { + "genre": "d3efault" + } + + # Refresh + - do: + indices.refresh: + index: test1 + + # Test rangeQuery + - do: + search: + index: test1 + body: { + query: { + range: { + genre: { + gte: "d3efault" + } + } + } + } + + - length: { hits.hits: 1 } + + - do: + search: + index: test1 + body: { + query: { + range: { + genre: { + from: "d3efault", + "include_lower": "false" + } + } + } + } + + - length: { hits.hits: 0 } + + - do: + search: + index: test1 + body: { + query: { + range: { + genre: { + lte: "d3efault" + } + } + } + } + + - length: { hits.hits: 1 } + + - do: + search: + index: test1 + body: { + query: { + range: { + genre: { + to: "d3efault", + include_upper: "false" + } + } + } + } + + - length: { hits.hits: 0 } + + - do: + search: + index: test1 + body: { + query: { + range: { + genre: { + from: "d3efault", + to: "d3efault", + include_lower: "false", + include_upper: "true" + } + } + } + } + + - length: { hits.hits: 0 } + + - do: + search: + index: test1 + body: { + query: { + range: { + genre: { + from: "d3efault", + to: "d3efault", + include_lower: "true", + include_upper: "false" + } + } + } + } + + - length: { hits.hits: 0 } + + - do: + search: + index: test1 + body: { + query: { + range: { + genre: { + from: null, + to: null + } + } + } + } + + - length: { hits.hits: 1 } + + - do: + search: + index: test1 + body: { + query: { + range: { + genre: { + from: "d3efault", + to: "d3efault", + include_lower: "true", + include_upper: "true" + } + } + } + } + + - length: { hits.hits: 1 } + + - do: + search: + index: test1 + body: { + query: { + range: { + genre: { + from: "d3efaul", + to: "d3efault1", + include_lower: "true", + include_upper: "true" + } + } + } + } + + - length: { hits.hits: 1 } + + # Test regexpQuery + - do: + search: + index: test1 + body: { + query: { + regexp: { + "genre":"d.*" + } + } + } + + - length: { hits.hits: 1 } + + - do: + search: + index: test1 + body: { + query: { + regexp: { + "genre":"d\\defau[a-z]?t" + } + } + } + + - length: { hits.hits: 1 } + + - do: + search: + index: test1 + body: { + query: { + regexp: { + "genre":"d\\defa[a-z]?t" + } + } + } + + - length: { hits.hits: 0 } + + - do: + search: + index: test1 + body: { + query: { + regexp: { + "genre":"d3efa[a-z]{3,3}" + } + } + } + + - length: { hits.hits: 1 } + + - do: + search: + index: test1 + body: { + query: { + regexp: { + "genre":"d3efa[a-z]{4,4}" + } + } + } + + - length: { hits.hits: 0 } + + - do: + search: + index: test1 + body: { + query: { + match_all: {} + } + } + + - length: { hits.hits: 1 } + - match: { hits.hits.0._source.genre: "d3efault" } + + # Delete Index when connection is teardown + - do: + 
indices.delete: + index: test1 diff --git a/server/src/main/java/org/opensearch/index/mapper/ConstantFieldType.java b/server/src/main/java/org/opensearch/index/mapper/ConstantFieldType.java index a28a6369b1aa4..cc581651e5295 100644 --- a/server/src/main/java/org/opensearch/index/mapper/ConstantFieldType.java +++ b/server/src/main/java/org/opensearch/index/mapper/ConstantFieldType.java @@ -76,7 +76,7 @@ public final boolean isAggregatable() { */ protected abstract boolean matches(String pattern, boolean caseInsensitive, QueryShardContext context); - private static String valueToString(Object value) { + static String valueToString(Object value) { return value instanceof BytesRef ? ((BytesRef) value).utf8ToString() : value.toString(); } diff --git a/server/src/main/java/org/opensearch/index/mapper/ConstantKeywordFieldMapper.java b/server/src/main/java/org/opensearch/index/mapper/ConstantKeywordFieldMapper.java index 2edd817f61f61..02c2214c18e72 100644 --- a/server/src/main/java/org/opensearch/index/mapper/ConstantKeywordFieldMapper.java +++ b/server/src/main/java/org/opensearch/index/mapper/ConstantKeywordFieldMapper.java @@ -9,10 +9,21 @@ package org.opensearch.index.mapper; import org.apache.lucene.search.MatchAllDocsQuery; +import org.apache.lucene.search.MatchNoDocsQuery; +import org.apache.lucene.search.MultiTermQuery; import org.apache.lucene.search.Query; +import org.apache.lucene.search.RegexpQuery; +import org.apache.lucene.util.BytesRef; +import org.apache.lucene.util.automaton.Automaton; +import org.apache.lucene.util.automaton.ByteRunAutomaton; +import org.apache.lucene.util.automaton.RegExp; import org.opensearch.OpenSearchParseException; +import org.opensearch.common.Nullable; import org.opensearch.common.annotation.PublicApi; +import org.opensearch.common.geo.ShapeRelation; +import org.opensearch.common.lucene.BytesRefs; import org.opensearch.common.regex.Regex; +import org.opensearch.common.time.DateMathParser; import org.opensearch.index.fielddata.IndexFieldData; import org.opensearch.index.fielddata.plain.ConstantIndexFieldData; import org.opensearch.index.query.QueryShardContext; @@ -20,6 +31,7 @@ import org.opensearch.search.lookup.SearchLookup; import java.io.IOException; +import java.time.ZoneId; import java.util.Arrays; import java.util.Collections; import java.util.List; @@ -122,6 +134,60 @@ public Query existsQuery(QueryShardContext context) { return new MatchAllDocsQuery(); } + @Override + public Query rangeQuery( + Object lowerTerm, + Object upperTerm, + boolean includeLower, + boolean includeUpper, + ShapeRelation relation, + ZoneId timeZone, + DateMathParser parser, + QueryShardContext context + ) { + if (lowerTerm != null) { + lowerTerm = valueToString(lowerTerm); + } + if (upperTerm != null) { + upperTerm = valueToString(upperTerm); + } + + if (lowerTerm != null && upperTerm != null && ((String) lowerTerm).compareTo((String) upperTerm) > 0) { + return new MatchNoDocsQuery(); + } + + if (lowerTerm != null && ((String) lowerTerm).compareTo(value) > (includeLower ? 0 : -1)) { + return new MatchNoDocsQuery(); + } + + if (upperTerm != null && ((String) upperTerm).compareTo(value) < (includeUpper ? 
0 : 1)) { + return new MatchNoDocsQuery(); + } + return new MatchAllDocsQuery(); + } + + @Override + public Query regexpQuery( + String value, + int syntaxFlags, + int matchFlags, + int maxDeterminizedStates, + @Nullable MultiTermQuery.RewriteMethod method, + QueryShardContext context + ) { + Automaton automaton = new RegExp(value, syntaxFlags, matchFlags).toAutomaton( + RegexpQuery.DEFAULT_PROVIDER, + maxDeterminizedStates + ); + ByteRunAutomaton byteRunAutomaton = new ByteRunAutomaton(automaton); + BytesRef valueBytes = BytesRefs.toBytesRef(this.value); + if (byteRunAutomaton.run(valueBytes.bytes, valueBytes.offset, valueBytes.length)) { + return new MatchAllDocsQuery(); + } else { + return new MatchNoDocsQuery(); + } + } + @Override public IndexFieldData.Builder fielddataBuilder(String fullyQualifiedIndexName, Supplier searchLookup) { return new ConstantIndexFieldData.Builder(fullyQualifiedIndexName, name(), CoreValuesSourceType.BYTES); diff --git a/server/src/test/java/org/opensearch/index/mapper/ConstantKeywordFieldTypeTests.java b/server/src/test/java/org/opensearch/index/mapper/ConstantKeywordFieldTypeTests.java index 235811539a299..266d79fb8e8b8 100644 --- a/server/src/test/java/org/opensearch/index/mapper/ConstantKeywordFieldTypeTests.java +++ b/server/src/test/java/org/opensearch/index/mapper/ConstantKeywordFieldTypeTests.java @@ -10,6 +10,8 @@ import org.apache.lucene.search.MatchAllDocsQuery; import org.apache.lucene.search.MatchNoDocsQuery; +import org.apache.lucene.search.MultiTermQuery; +import org.apache.lucene.search.Query; import org.opensearch.Version; import org.opensearch.cluster.metadata.IndexMetadata; import org.opensearch.common.regex.Regex; @@ -61,6 +63,58 @@ public void testExistsQuery() { assertEquals(new MatchAllDocsQuery(), ft.existsQuery(createContext())); } + public void testRangeQuery() { + Query actual = ft.rangeQuery("default", null, true, false, null, null, null, MOCK_QSC); + assertEquals(new MatchAllDocsQuery(), actual); + + actual = ft.rangeQuery("default", null, false, false, null, null, null, MOCK_QSC); + assertEquals(new MatchNoDocsQuery(), actual); + + actual = ft.rangeQuery(null, "default", true, true, null, null, null, MOCK_QSC); + assertEquals(new MatchAllDocsQuery(), actual); + + actual = ft.rangeQuery(null, "default", false, false, null, null, null, MOCK_QSC); + assertEquals(new MatchNoDocsQuery(), actual); + + actual = ft.rangeQuery("default", "default", false, true, null, null, null, MOCK_QSC); + assertEquals(new MatchNoDocsQuery(), actual); + + actual = ft.rangeQuery("default", "default", true, false, null, null, null, MOCK_QSC); + assertEquals(new MatchNoDocsQuery(), actual); + + actual = ft.rangeQuery(null, null, false, false, null, null, null, MOCK_QSC); + assertEquals(new MatchAllDocsQuery(), actual); + + actual = ft.rangeQuery("default", "default", true, true, null, null, null, MOCK_QSC); + assertEquals(new MatchAllDocsQuery(), actual); + + actual = ft.rangeQuery("defaul", "default1", true, true, null, null, null, MOCK_QSC); + assertEquals(new MatchAllDocsQuery(), actual); + } + + public void testRegexpQuery() { + final ConstantKeywordFieldMapper.ConstantKeywordFieldType ft = new ConstantKeywordFieldMapper.ConstantKeywordFieldType( + "field", + "d3efault" + ); + // test .* + Query query = ft.regexpQuery("d.*", 0, 0, 10, MultiTermQuery.CONSTANT_SCORE_BLENDED_REWRITE, MOCK_QSC); + assertEquals(new MatchAllDocsQuery(), query); + // test \d and ? 
+ query = ft.regexpQuery("d\\defau[a-z]?t", 0, 0, 10, MultiTermQuery.CONSTANT_SCORE_BLENDED_REWRITE, MOCK_QSC); + assertEquals(new MatchAllDocsQuery(), query); + + // test \d and ? + query = ft.regexpQuery("d\\defa[a-z]?t", 0, 0, 10, MultiTermQuery.CONSTANT_SCORE_BLENDED_REWRITE, MOCK_QSC); + assertEquals(new MatchNoDocsQuery(), query); + // \w{m,n} + query = ft.regexpQuery("d3efa[a-z]{3,3}", 0, 0, 10, MultiTermQuery.CONSTANT_SCORE_BLENDED_REWRITE, MOCK_QSC); + assertEquals(new MatchAllDocsQuery(), query); + // \w{m,n} + query = ft.regexpQuery("d3efa[a-z]{4,4}", 0, 0, 10, MultiTermQuery.CONSTANT_SCORE_BLENDED_REWRITE, MOCK_QSC); + assertEquals(new MatchNoDocsQuery(), query); + } + private QueryShardContext createContext() { IndexMetadata indexMetadata = IndexMetadata.builder("index") .settings(Settings.builder().put(IndexMetadata.SETTING_VERSION_CREATED, Version.CURRENT)) From a785073e5e7925ef8e5605427cae943822100f4a Mon Sep 17 00:00:00 2001 From: Jay Deng Date: Fri, 2 Aug 2024 12:57:00 -0700 Subject: [PATCH 151/167] Support scripting for composite aggs in concurrent segment search (#15072) Signed-off-by: Jay Deng --- CHANGELOG.md | 1 + modules/lang-painless/build.gradle | 1 + .../opensearch/painless/SimplePainlessIT.java | 231 ++++++++++++++++++ .../CompositeAggregationFactory.java | 4 +- .../search/lookup/SearchLookup.java | 13 +- .../search/lookup/SourceLookup.java | 2 +- 6 files changed, 247 insertions(+), 5 deletions(-) create mode 100644 modules/lang-painless/src/internalClusterTest/java/org/opensearch/painless/SimplePainlessIT.java diff --git a/CHANGELOG.md b/CHANGELOG.md index dfc330cfdaed2..708c9831236b2 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -12,6 +12,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), - Add basic aggregation support for derived fields ([#14618](https://github.com/opensearch-project/OpenSearch/pull/14618)) - Add ThreadContextPermission for markAsSystemContext and allow core to perform the method ([#15016](https://github.com/opensearch-project/OpenSearch/pull/15016)) - Add ThreadContextPermission for stashAndMergeHeaders and stashWithOrigin ([#15039](https://github.com/opensearch-project/OpenSearch/pull/15039)) +- [Concurrent Segment Search] Support composite aggregations with scripting ([#15072](https://github.com/opensearch-project/OpenSearch/pull/15072)) - Add `rangeQuery` and `regexpQuery` for `constant_keyword` field type ([#14711](https://github.com/opensearch-project/OpenSearch/pull/14711)) ### Dependencies diff --git a/modules/lang-painless/build.gradle b/modules/lang-painless/build.gradle index 7b828109139c8..7075901979e3b 100644 --- a/modules/lang-painless/build.gradle +++ b/modules/lang-painless/build.gradle @@ -33,6 +33,7 @@ import com.github.jengelman.gradle.plugins.shadow.ShadowBasePlugin apply plugin: 'opensearch.validate-rest-spec' apply plugin: 'opensearch.yaml-rest-test' +apply plugin: 'opensearch.internal-cluster-test' opensearchplugin { description 'An easy, safe and fast scripting language for OpenSearch' diff --git a/modules/lang-painless/src/internalClusterTest/java/org/opensearch/painless/SimplePainlessIT.java b/modules/lang-painless/src/internalClusterTest/java/org/opensearch/painless/SimplePainlessIT.java new file mode 100644 index 0000000000000..df327bf4871c6 --- /dev/null +++ b/modules/lang-painless/src/internalClusterTest/java/org/opensearch/painless/SimplePainlessIT.java @@ -0,0 +1,231 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch 
Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +package org.opensearch.painless; + +import com.carrotsearch.randomizedtesting.annotations.ParametersFactory; + +import org.opensearch.action.search.SearchRequest; +import org.opensearch.action.search.SearchResponse; +import org.opensearch.action.support.WriteRequest; +import org.opensearch.cluster.metadata.IndexMetadata; +import org.opensearch.common.settings.Settings; +import org.opensearch.common.xcontent.XContentFactory; +import org.opensearch.core.xcontent.MediaTypeRegistry; +import org.opensearch.core.xcontent.XContentBuilder; +import org.opensearch.index.query.TermsQueryBuilder; +import org.opensearch.plugins.Plugin; +import org.opensearch.script.Script; +import org.opensearch.script.ScriptType; +import org.opensearch.search.aggregations.AggregationBuilder; +import org.opensearch.search.aggregations.AggregationBuilders; +import org.opensearch.search.aggregations.bucket.composite.InternalComposite; +import org.opensearch.search.aggregations.bucket.composite.TermsValuesSourceBuilder; +import org.opensearch.search.aggregations.bucket.terms.Terms; +import org.opensearch.search.aggregations.bucket.terms.TermsAggregationBuilder; +import org.opensearch.search.builder.SearchSourceBuilder; +import org.opensearch.test.OpenSearchIntegTestCase; +import org.opensearch.test.ParameterizedStaticSettingsOpenSearchIntegTestCase; + +import java.util.Arrays; +import java.util.Collection; +import java.util.Collections; +import java.util.List; +import java.util.Objects; + +import static org.opensearch.index.query.QueryBuilders.matchAllQuery; +import static org.opensearch.search.SearchService.CLUSTER_CONCURRENT_SEGMENT_SEARCH_SETTING; +import static org.opensearch.search.SearchService.CONCURRENT_SEGMENT_SEARCH_TARGET_MAX_SLICE_COUNT_SETTING; +import static org.opensearch.test.hamcrest.OpenSearchAssertions.assertAcked; +import static org.opensearch.test.hamcrest.OpenSearchAssertions.assertSearchResponse; + +@OpenSearchIntegTestCase.SuiteScopeTestCase +public class SimplePainlessIT extends ParameterizedStaticSettingsOpenSearchIntegTestCase { + + public SimplePainlessIT(Settings nodeSettings) { + super(nodeSettings); + } + + @ParametersFactory + public static Collection parameters() { + return Arrays.asList( + new Object[] { Settings.builder().put(CLUSTER_CONCURRENT_SEGMENT_SEARCH_SETTING.getKey(), true).build() }, + new Object[] { Settings.builder().put(CLUSTER_CONCURRENT_SEGMENT_SEARCH_SETTING.getKey(), false).build() } + ); + } + + @Override + protected Collection> nodePlugins() { + return List.of(PainlessModulePlugin.class); + } + + @Override + protected Settings nodeSettings(int nodeOrdinal) { + return Settings.builder() + .put(super.nodeSettings(nodeOrdinal)) + .put(CONCURRENT_SEGMENT_SEARCH_TARGET_MAX_SLICE_COUNT_SETTING.getKey(), "4") + .build(); + } + + @Override + public void setupSuiteScopeCluster() throws Exception { + XContentBuilder xContentBuilder = XContentFactory.jsonBuilder() + .startObject() + .field("dynamic", "false") + .startObject("_meta") + .field("schema_version", 5) + .endObject() + .startObject("properties") + .startObject("entity") + .field("type", "nested") + .endObject() + .endObject() + .endObject(); + + assertAcked( + prepareCreate("test").setMapping(xContentBuilder) + .setSettings( + Settings.builder().put(IndexMetadata.SETTING_NUMBER_OF_SHARDS, 1).put(IndexMetadata.SETTING_NUMBER_OF_REPLICAS, 0) + ) + ); + + assertAcked( + 
prepareCreate("test-df").setSettings( + Settings.builder().put(IndexMetadata.SETTING_NUMBER_OF_SHARDS, 1).put(IndexMetadata.SETTING_NUMBER_OF_REPLICAS, 0) + ) + ); + + client().prepareIndex("test") + .setId("a") + .setSource( + "{\"entity\":[{\"name\":\"ip-field\",\"value\":\"1.2.3.4\"},{\"name\":\"keyword-field\",\"value\":\"field-1\"}]}", + MediaTypeRegistry.JSON + ) + .setRefreshPolicy(WriteRequest.RefreshPolicy.IMMEDIATE) + .get(); + client().prepareIndex("test") + .setId("b") + .setSource( + "{\"entity\":[{\"name\":\"ip-field\",\"value\":\"5.6.7.8\"},{\"name\":\"keyword-field\",\"value\":\"field-2\"}]}", + MediaTypeRegistry.JSON + ) + .setRefreshPolicy(WriteRequest.RefreshPolicy.IMMEDIATE) + .get(); + client().prepareIndex("test") + .setId("c") + .setSource( + "{\"entity\":[{\"name\":\"ip-field\",\"value\":\"1.6.3.8\"},{\"name\":\"keyword-field\",\"value\":\"field-2\"}]}", + MediaTypeRegistry.JSON + ) + .setRefreshPolicy(WriteRequest.RefreshPolicy.IMMEDIATE) + .get(); + client().prepareIndex("test") + .setId("d") + .setSource( + "{\"entity\":[{\"name\":\"ip-field\",\"value\":\"2.6.4.8\"},{\"name\":\"keyword-field\",\"value\":\"field-2\"}]}", + MediaTypeRegistry.JSON + ) + .setRefreshPolicy(WriteRequest.RefreshPolicy.IMMEDIATE) + .get(); + ensureSearchable("test"); + + client().prepareIndex("test-df") + .setId("a") + .setSource("{\"field\":\"value1\"}", MediaTypeRegistry.JSON) + .setRefreshPolicy(WriteRequest.RefreshPolicy.IMMEDIATE) + .get(); + client().prepareIndex("test-df") + .setId("b") + .setSource("{\"field\":\"value2\"}", MediaTypeRegistry.JSON) + .setRefreshPolicy(WriteRequest.RefreshPolicy.IMMEDIATE) + .get(); + client().prepareIndex("test-df") + .setId("c") + .setSource("{\"field\":\"value3\"}", MediaTypeRegistry.JSON) + .setRefreshPolicy(WriteRequest.RefreshPolicy.IMMEDIATE) + .get(); + client().prepareIndex("test-df") + .setId("d") + .setSource("{\"field\":\"value1\"}", MediaTypeRegistry.JSON) + .setRefreshPolicy(WriteRequest.RefreshPolicy.IMMEDIATE) + .get(); + ensureSearchable("test"); + } + + public void testTermsValuesSource() throws Exception { + AggregationBuilder agg = AggregationBuilders.composite( + "multi_buckets", + Collections.singletonList( + new TermsValuesSourceBuilder("keyword-field").script( + new Script( + ScriptType.INLINE, + "painless", + "String value = null; if (params == null || params._source == null || params._source.entity == null) { return \"\"; } for (item in params._source.entity) { if (item[\"name\"] == \"keyword-field\") { value = item['value']; break; } } return value;", + Collections.emptyMap() + ) + ) + ) + ); + SearchResponse response = client().prepareSearch("test").setQuery(matchAllQuery()).addAggregation(agg).get(); + + assertSearchResponse(response); + assertEquals(2, ((InternalComposite) response.getAggregations().get("multi_buckets")).getBuckets().size()); + assertEquals( + "field-1", + ((InternalComposite) response.getAggregations().get("multi_buckets")).getBuckets().get(0).getKey().get("keyword-field") + ); + assertEquals(1, ((InternalComposite) response.getAggregations().get("multi_buckets")).getBuckets().get(0).getDocCount()); + assertEquals( + "field-2", + ((InternalComposite) response.getAggregations().get("multi_buckets")).getBuckets().get(1).getKey().get("keyword-field") + ); + assertEquals(3, ((InternalComposite) response.getAggregations().get("multi_buckets")).getBuckets().get(1).getDocCount()); + } + + public void testSimpleDerivedFieldsQuery() { + assumeFalse( + "Derived fields do not support concurrent search 
https://github.com/opensearch-project/OpenSearch/issues/15007", + internalCluster().clusterService().getClusterSettings().get(CLUSTER_CONCURRENT_SEGMENT_SEARCH_SETTING) + ); + SearchRequest searchRequest = new SearchRequest("test-df").source( + SearchSourceBuilder.searchSource() + .derivedField("result", "keyword", new Script("emit(params._source[\"field\"])")) + .fetchField("result") + .query(new TermsQueryBuilder("result", "value1")) + ); + SearchResponse response = client().search(searchRequest).actionGet(); + assertSearchResponse(response); + assertEquals(2, Objects.requireNonNull(response.getHits().getTotalHits()).value); + } + + public void testSimpleDerivedFieldsAgg() { + assumeFalse( + "Derived fields do not support concurrent search https://github.com/opensearch-project/OpenSearch/issues/15007", + internalCluster().clusterService().getClusterSettings().get(CLUSTER_CONCURRENT_SEGMENT_SEARCH_SETTING) + ); + SearchRequest searchRequest = new SearchRequest("test-df").source( + SearchSourceBuilder.searchSource() + .derivedField("result", "keyword", new Script("emit(params._source[\"field\"])")) + .fetchField("result") + .aggregation(new TermsAggregationBuilder("derived-agg").field("result")) + ); + SearchResponse response = client().search(searchRequest).actionGet(); + assertSearchResponse(response); + Terms aggResponse = response.getAggregations().get("derived-agg"); + assertEquals(3, aggResponse.getBuckets().size()); + Terms.Bucket bucket = aggResponse.getBuckets().get(0); + assertEquals("value1", bucket.getKey()); + assertEquals(2, bucket.getDocCount()); + bucket = aggResponse.getBuckets().get(1); + assertEquals("value2", bucket.getKey()); + assertEquals(1, bucket.getDocCount()); + bucket = aggResponse.getBuckets().get(2); + assertEquals("value3", bucket.getKey()); + assertEquals(1, bucket.getDocCount()); + } +} diff --git a/server/src/main/java/org/opensearch/search/aggregations/bucket/composite/CompositeAggregationFactory.java b/server/src/main/java/org/opensearch/search/aggregations/bucket/composite/CompositeAggregationFactory.java index 6c5619a843fae..2ff79fb623def 100644 --- a/server/src/main/java/org/opensearch/search/aggregations/bucket/composite/CompositeAggregationFactory.java +++ b/server/src/main/java/org/opensearch/search/aggregations/bucket/composite/CompositeAggregationFactory.java @@ -40,7 +40,6 @@ import org.opensearch.search.internal.SearchContext; import java.io.IOException; -import java.util.Arrays; import java.util.Map; /** @@ -81,7 +80,6 @@ protected Aggregator createInternal( @Override protected boolean supportsConcurrentSegmentSearch() { - // Disable concurrent search if any scripting is used. 
See https://github.com/opensearch-project/OpenSearch/issues/12331 for details - return Arrays.stream(sources).noneMatch(CompositeValuesSourceConfig::hasScript); + return true; } } diff --git a/server/src/main/java/org/opensearch/search/lookup/SearchLookup.java b/server/src/main/java/org/opensearch/search/lookup/SearchLookup.java index 906616eb9ba5f..dff8fae1a9ad1 100644 --- a/server/src/main/java/org/opensearch/search/lookup/SearchLookup.java +++ b/server/src/main/java/org/opensearch/search/lookup/SearchLookup.java @@ -153,14 +153,25 @@ public final SearchLookup forkAndTrackFieldReferences(String field) { return new SearchLookup(this, newFieldChain); } + /** + * SourceLookup is not thread safe, so we create a new instance for each leaf to support concurrent segment search + */ public LeafSearchLookup getLeafSearchLookup(LeafReaderContext context) { - return new LeafSearchLookup(context, docMap.getLeafDocLookup(context), sourceLookup, fieldsLookup.getLeafFieldsLookup(context)); + return new LeafSearchLookup( + context, + docMap.getLeafDocLookup(context), + new SourceLookup(), + fieldsLookup.getLeafFieldsLookup(context) + ); } public DocLookup doc() { return docMap; } + /** + * Returned SourceLookup will be unrelated to any created LeafSearchLookups. Instead, use {@link LeafSearchLookup#source()} to access the related {@link SourceLookup}. + */ public SourceLookup source() { return sourceLookup; } diff --git a/server/src/main/java/org/opensearch/search/lookup/SourceLookup.java b/server/src/main/java/org/opensearch/search/lookup/SourceLookup.java index cbac29fde7932..4644bcb3d9b92 100644 --- a/server/src/main/java/org/opensearch/search/lookup/SourceLookup.java +++ b/server/src/main/java/org/opensearch/search/lookup/SourceLookup.java @@ -57,7 +57,7 @@ import static java.util.Collections.emptyMap; /** - * Orchestrator class for source lookups + * Orchestrator class for source lookups. Not thread safe. 
* * @opensearch.api */ From 47171f8badbc185e79b175a17376bf3f294516e3 Mon Sep 17 00:00:00 2001 From: Gaurav Bafna <85113518+gbbafna@users.noreply.github.com> Date: Sat, 3 Aug 2024 14:03:17 +0530 Subject: [PATCH 152/167] Fix RemoteCloneIndex flaky test by using sync FS repo (#15037) Signed-off-by: Gaurav Bafna --- .../action/admin/indices/create/RemoteCloneIndexIT.java | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/server/src/internalClusterTest/java/org/opensearch/action/admin/indices/create/RemoteCloneIndexIT.java b/server/src/internalClusterTest/java/org/opensearch/action/admin/indices/create/RemoteCloneIndexIT.java index acbd68fff6dd0..009f5111078de 100644 --- a/server/src/internalClusterTest/java/org/opensearch/action/admin/indices/create/RemoteCloneIndexIT.java +++ b/server/src/internalClusterTest/java/org/opensearch/action/admin/indices/create/RemoteCloneIndexIT.java @@ -79,7 +79,7 @@ protected boolean forbidPrivateIndexSettings() { @Before public void setup() { - asyncUploadMockFsRepo = true; + asyncUploadMockFsRepo = false; } public void testCreateCloneIndex() { @@ -153,6 +153,7 @@ public void testCreateCloneIndex() { } + @AwaitsFix(bugUrl = "https://github.com/opensearch-project/OpenSearch/issues/15056") public void testCreateCloneIndexLowPriorityRateLimit() { Version version = VersionUtils.randomIndexCompatibleVersion(random()); int numPrimaryShards = 1; @@ -280,7 +281,7 @@ public void testCreateCloneIndexFailure() throws ExecutionException, Interrupted throw new RuntimeException(e); } finally { setFailRate(REPOSITORY_NAME, 0); - ensureGreen(); + ensureGreen(TimeValue.timeValueSeconds(40)); // clean up client().admin() .cluster() From a9d09aa533c8c949e79c760569d875ab5391bd84 Mon Sep 17 00:00:00 2001 From: gaobinlong Date: Sat, 3 Aug 2024 20:38:11 +0800 Subject: [PATCH 153/167] Fix delete index template failed when the index template matches a data stream but is unused (#15080) * Fix delete not-using index template failed when the index pattern matches a data stream Signed-off-by: Gao Binlong * modify change log Signed-off-by: Gao Binlong * Fix version check Signed-off-by: Gao Binlong --------- Signed-off-by: Gao Binlong --- CHANGELOG.md | 1 + .../10_basic.yml | 52 +++++++++++++++++ .../MetadataIndexTemplateService.java | 2 +- .../MetadataIndexTemplateServiceTests.java | 58 +++++++++++++++++++ 4 files changed, 112 insertions(+), 1 deletion(-) create mode 100644 rest-api-spec/src/main/resources/rest-api-spec/test/indices.delete_index_template/10_basic.yml diff --git a/CHANGELOG.md b/CHANGELOG.md index 708c9831236b2..97464c7659f75 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -32,6 +32,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), ### Fixed - Fix constraint bug which allows more primary shards than average primary shards per index ([#14908](https://github.com/opensearch-project/OpenSearch/pull/14908)) - Fix missing value of FieldSort for unsigned_long ([#14963](https://github.com/opensearch-project/OpenSearch/pull/14963)) +- Fix delete index template failed when the index template matches a data stream but is unused ([#15080](https://github.com/opensearch-project/OpenSearch/pull/15080)) ### Security diff --git a/rest-api-spec/src/main/resources/rest-api-spec/test/indices.delete_index_template/10_basic.yml b/rest-api-spec/src/main/resources/rest-api-spec/test/indices.delete_index_template/10_basic.yml new file mode 100644 index 0000000000000..c90e83ab59859 --- /dev/null +++ 
b/rest-api-spec/src/main/resources/rest-api-spec/test/indices.delete_index_template/10_basic.yml @@ -0,0 +1,52 @@ +setup: + - do: + indices.put_index_template: + name: test_template_1 + body: + index_patterns: test-* + template: + settings: + number_of_shards: 1 + number_of_replicas: 0 + "priority": 50 + + - do: + indices.put_index_template: + name: test_template_2 + body: + index_patterns: test-* + data_stream: {} + template: + settings: + number_of_shards: 1 + number_of_replicas: 0 + "priority": 51 + +--- +teardown: + - do: + indices.delete_data_stream: + name: test-1 + ignore: 404 + - do: + indices.delete_index_template: + name: test_template_1 + ignore: 404 + - do: + indices.delete_index_template: + name: test_template_2 + ignore: 404 + +--- +"Delete index template which is not used by data stream but index pattern matches": + - skip: + version: " - 2.99.99" + reason: "fixed in 3.0.0" + + - do: + indices.create_data_stream: + name: test-1 + + - do: + indices.delete_index_template: + name: test_template_1 diff --git a/server/src/main/java/org/opensearch/cluster/metadata/MetadataIndexTemplateService.java b/server/src/main/java/org/opensearch/cluster/metadata/MetadataIndexTemplateService.java index 7bc3d279513cd..6b638c9920c27 100644 --- a/server/src/main/java/org/opensearch/cluster/metadata/MetadataIndexTemplateService.java +++ b/server/src/main/java/org/opensearch/cluster/metadata/MetadataIndexTemplateService.java @@ -944,7 +944,7 @@ static ClusterState innerRemoveIndexTemplateV2(ClusterState currentState, String static Set dataStreamsUsingTemplate(final ClusterState state, final String templateName) { final ComposableIndexTemplate template = state.metadata().templatesV2().get(templateName); - if (template == null) { + if (template == null || template.getDataStreamTemplate() == null) { return Collections.emptySet(); } final Set dataStreams = state.metadata().dataStreams().keySet(); diff --git a/server/src/test/java/org/opensearch/cluster/metadata/MetadataIndexTemplateServiceTests.java b/server/src/test/java/org/opensearch/cluster/metadata/MetadataIndexTemplateServiceTests.java index f26f45b69d133..cb98c34988cbe 100644 --- a/server/src/test/java/org/opensearch/cluster/metadata/MetadataIndexTemplateServiceTests.java +++ b/server/src/test/java/org/opensearch/cluster/metadata/MetadataIndexTemplateServiceTests.java @@ -560,6 +560,64 @@ public void testRemoveIndexTemplateV2() throws Exception { ClusterState updatedState = MetadataIndexTemplateService.innerRemoveIndexTemplateV2(state, "foo"); assertNull(updatedState.metadata().templatesV2().get("foo")); + + // test remove a template which is not used by a data stream but index patterns can match + Settings settings = Settings.builder() + .put(IndexMetadata.SETTING_BLOCKS_READ, randomBoolean()) + .put(IndexMetadata.SETTING_BLOCKS_WRITE, randomBoolean()) + .put(IndexMetadata.SETTING_NUMBER_OF_SHARDS, randomIntBetween(1, 10)) + .put(IndexMetadata.SETTING_NUMBER_OF_REPLICAS, randomIntBetween(0, 5)) + .put(IndexMetadata.SETTING_BLOCKS_WRITE, randomBoolean()) + .put(IndexMetadata.SETTING_PRIORITY, randomIntBetween(0, 100000)) + .build(); + CompressedXContent mappings = new CompressedXContent( + "{\"properties\":{\"" + randomAlphaOfLength(5) + "\":{\"type\":\"keyword\"}}}" + ); + + Map meta = Collections.singletonMap(randomAlphaOfLength(4), randomAlphaOfLength(4)); + List indexPatterns = List.of("foo*"); + List componentTemplates = randomList(0, 10, () -> randomAlphaOfLength(5)); + ComposableIndexTemplate templateToRemove = new 
ComposableIndexTemplate( + indexPatterns, + new Template(settings, mappings, null), + componentTemplates, + randomBoolean() ? null : randomNonNegativeLong(), + randomBoolean() ? null : randomNonNegativeLong(), + meta, + null + ); + + ClusterState stateWithDS = ClusterState.builder(state) + .metadata( + Metadata.builder(state.metadata()) + .put( + new DataStream( + "foo", + new DataStream.TimestampField("@timestamp"), + Collections.singletonList(new Index(".ds-foo-000001", "uuid2")) + ) + ) + .put( + IndexMetadata.builder(".ds-foo-000001") + .settings( + Settings.builder() + .put(IndexMetadata.SETTING_INDEX_UUID, "uuid2") + .put(IndexMetadata.SETTING_NUMBER_OF_SHARDS, 1) + .put(IndexMetadata.SETTING_NUMBER_OF_REPLICAS, 0) + .put(IndexMetadata.SETTING_VERSION_CREATED, Version.CURRENT) + .build() + ) + ) + .build() + ) + .build(); + + final ClusterState clusterState = metadataIndexTemplateService.addIndexTemplateV2(stateWithDS, false, "foo", templateToRemove); + assertNotNull(clusterState.metadata().templatesV2().get("foo")); + assertTemplatesEqual(clusterState.metadata().templatesV2().get("foo"), templateToRemove); + + updatedState = MetadataIndexTemplateService.innerRemoveIndexTemplateV2(clusterState, "foo"); + assertNull(updatedState.metadata().templatesV2().get("foo")); } /** From e8c6f0f1b15acdfbeeba84a551a2b42024f8502b Mon Sep 17 00:00:00 2001 From: Craig Perkins Date: Sat, 3 Aug 2024 21:14:39 -0400 Subject: [PATCH 154/167] Update README to 2.17.0 (#15099) Signed-off-by: Craig Perkins --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index 17af2911b9221..5d4a9a671c013 100644 --- a/README.md +++ b/README.md @@ -7,7 +7,7 @@ [![Security Vulnerabilities](https://img.shields.io/github/issues/opensearch-project/OpenSearch/security%20vulnerability?labelColor=red)](https://github.com/opensearch-project/OpenSearch/issues?q=is%3Aissue+is%3Aopen+label%3A"security%20vulnerability") [![Open Issues](https://img.shields.io/github/issues/opensearch-project/OpenSearch)](https://github.com/opensearch-project/OpenSearch/issues) [![Open Pull Requests](https://img.shields.io/github/issues-pr/opensearch-project/OpenSearch)](https://github.com/opensearch-project/OpenSearch/pulls) -[![2.14.1 Open Issues](https://img.shields.io/github/issues/opensearch-project/OpenSearch/v2.14.1)](https://github.com/opensearch-project/OpenSearch/issues?q=is%3Aissue+is%3Aopen+label%3A"v2.14.1") +[![2.17.0 Open Issues](https://img.shields.io/github/issues/opensearch-project/OpenSearch/v2.17.0)](https://github.com/opensearch-project/OpenSearch/issues?q=is%3Aissue+is%3Aopen+label%3A"v2.17.0") [![3.0.0 Open Issues](https://img.shields.io/github/issues/opensearch-project/OpenSearch/v3.0.0)](https://github.com/opensearch-project/OpenSearch/issues?q=is%3Aissue+is%3Aopen+label%3A"v3.0.0") [![GHA gradle check](https://github.com/opensearch-project/OpenSearch/actions/workflows/gradle-check.yml/badge.svg)](https://github.com/opensearch-project/OpenSearch/actions/workflows/gradle-check.yml) [![GHA validate pull request](https://github.com/opensearch-project/OpenSearch/actions/workflows/wrapper.yml/badge.svg)](https://github.com/opensearch-project/OpenSearch/actions/workflows/wrapper.yml) From b911b6f204a0b7b2acc652f6524026f40eee9bea Mon Sep 17 00:00:00 2001 From: gaobinlong Date: Mon, 5 Aug 2024 21:49:19 +0800 Subject: [PATCH 155/167] Fix version check in yml test for the bug fix of delete index template failed (#15101) 
Signed-off-by: Gao Binlong --- .../test/indices.delete_index_template/10_basic.yml | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/rest-api-spec/src/main/resources/rest-api-spec/test/indices.delete_index_template/10_basic.yml b/rest-api-spec/src/main/resources/rest-api-spec/test/indices.delete_index_template/10_basic.yml index c90e83ab59859..c8c08a2d088ac 100644 --- a/rest-api-spec/src/main/resources/rest-api-spec/test/indices.delete_index_template/10_basic.yml +++ b/rest-api-spec/src/main/resources/rest-api-spec/test/indices.delete_index_template/10_basic.yml @@ -40,8 +40,8 @@ teardown: --- "Delete index template which is not used by data stream but index pattern matches": - skip: - version: " - 2.99.99" - reason: "fixed in 3.0.0" + version: " - 2.16.99" + reason: "fixed in 2.17.0" - do: indices.create_data_stream: From 77750066b4dd5b4b0e5b41429a60c369d87ccbf3 Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Mon, 5 Aug 2024 09:28:11 -0700 Subject: [PATCH 156/167] Bump org.tukaani:xz from 1.9 to 1.10 in /plugins/ingest-attachment (#15110) * Bump org.tukaani:xz from 1.9 to 1.10 in /plugins/ingest-attachment Bumps [org.tukaani:xz](https://github.com/tukaani-project/xz-java) from 1.9 to 1.10. - [Release notes](https://github.com/tukaani-project/xz-java/releases) - [Changelog](https://github.com/tukaani-project/xz-java/blob/master/NEWS.md) - [Commits](https://github.com/tukaani-project/xz-java/compare/v1.9...v1.10) --- updated-dependencies: - dependency-name: org.tukaani:xz dependency-type: direct:production update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] * Updating SHAs Signed-off-by: dependabot[bot] * Update changelog Signed-off-by: dependabot[bot] --------- Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: dependabot[bot] --- CHANGELOG.md | 1 + plugins/ingest-attachment/build.gradle | 2 +- plugins/ingest-attachment/licenses/xz-1.10.jar.sha1 | 1 + plugins/ingest-attachment/licenses/xz-1.9.jar.sha1 | 1 - 4 files changed, 3 insertions(+), 2 deletions(-) create mode 100644 plugins/ingest-attachment/licenses/xz-1.10.jar.sha1 delete mode 100644 plugins/ingest-attachment/licenses/xz-1.9.jar.sha1 diff --git a/CHANGELOG.md b/CHANGELOG.md index 97464c7659f75..f4db5c3ecb5cc 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -21,6 +21,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), - OpenJDK Update (July 2024 Patch releases) ([#14998](https://github.com/opensearch-project/OpenSearch/pull/14998)) - Bump `com.microsoft.azure:msal4j` from 1.16.1 to 1.16.2 ([#14995](https://github.com/opensearch-project/OpenSearch/pull/14995)) - Bump `actions/github-script` from 6 to 7 ([#14997](https://github.com/opensearch-project/OpenSearch/pull/14997)) +- Bump `org.tukaani:xz` from 1.9 to 1.10 ([#15110](https://github.com/opensearch-project/OpenSearch/pull/15110)) ### Changed - Add lower limit for primary and replica batch allocators timeout ([#14979](https://github.com/opensearch-project/OpenSearch/pull/14979)) diff --git a/plugins/ingest-attachment/build.gradle b/plugins/ingest-attachment/build.gradle index d631855013527..81ac52b97cefa 100644 --- a/plugins/ingest-attachment/build.gradle +++ b/plugins/ingest-attachment/build.gradle @@ -66,7 +66,7 @@ dependencies { runtimeOnly "com.optimaize.languagedetector:language-detector:0.6" 
runtimeOnly "com.google.guava:guava:${versions.guava}" // Other dependencies - api 'org.tukaani:xz:1.9' + api 'org.tukaani:xz:1.10' api "commons-io:commons-io:${versions.commonsio}" api "org.slf4j:slf4j-api:${versions.slf4j}" diff --git a/plugins/ingest-attachment/licenses/xz-1.10.jar.sha1 b/plugins/ingest-attachment/licenses/xz-1.10.jar.sha1 new file mode 100644 index 0000000000000..e3757c19ce5ab --- /dev/null +++ b/plugins/ingest-attachment/licenses/xz-1.10.jar.sha1 @@ -0,0 +1 @@ +1be8166f89e035a56c6bfc67dbc423996fe577e2 \ No newline at end of file diff --git a/plugins/ingest-attachment/licenses/xz-1.9.jar.sha1 b/plugins/ingest-attachment/licenses/xz-1.9.jar.sha1 deleted file mode 100644 index c3e22d167212f..0000000000000 --- a/plugins/ingest-attachment/licenses/xz-1.9.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -1ea4bec1a921180164852c65006d928617bd2caf \ No newline at end of file From f0ef14d279c6446abfcb38616771da5e344e0de7 Mon Sep 17 00:00:00 2001 From: Neetika Singhal Date: Mon, 5 Aug 2024 10:07:04 -0700 Subject: [PATCH 157/167] Fix NODE_SEARCH_CACHE_SIZE_SETTING initialization for TIERED_REMOTE_INDEX_SETTING feature (#15076) Signed-off-by: Neetika Singhal --- .../opensearch/remotestore/WritableWarmIT.java | 15 +++++++++++++-- .../snapshots/SearchableSnapshotIT.java | 12 ++++++------ .../src/main/java/org/opensearch/node/Node.java | 5 ++--- 3 files changed, 21 insertions(+), 11 deletions(-) diff --git a/server/src/internalClusterTest/java/org/opensearch/remotestore/WritableWarmIT.java b/server/src/internalClusterTest/java/org/opensearch/remotestore/WritableWarmIT.java index a51bd6b20fff0..88c9ae436e85f 100644 --- a/server/src/internalClusterTest/java/org/opensearch/remotestore/WritableWarmIT.java +++ b/server/src/internalClusterTest/java/org/opensearch/remotestore/WritableWarmIT.java @@ -20,6 +20,8 @@ import org.opensearch.common.settings.Settings; import org.opensearch.common.settings.SettingsException; import org.opensearch.common.util.FeatureFlags; +import org.opensearch.core.common.unit.ByteSizeUnit; +import org.opensearch.core.common.unit.ByteSizeValue; import org.opensearch.index.IndexModule; import org.opensearch.index.query.QueryBuilders; import org.opensearch.index.shard.IndexShard; @@ -65,11 +67,20 @@ protected Settings featureFlagSettings() { return featureSettings.build(); } + @Override + protected Settings nodeSettings(int nodeOrdinal) { + ByteSizeValue cacheSize = new ByteSizeValue(16, ByteSizeUnit.GB); + return Settings.builder() + .put(super.nodeSettings(nodeOrdinal)) + .put(Node.NODE_SEARCH_CACHE_SIZE_SETTING.getKey(), cacheSize.toString()) + .build(); + } + public void testWritableWarmFeatureFlagDisabled() { Settings clusterSettings = Settings.builder().put(super.nodeSettings(0)).put(FeatureFlags.TIERED_REMOTE_INDEX, false).build(); InternalTestCluster internalTestCluster = internalCluster(); internalTestCluster.startClusterManagerOnlyNode(clusterSettings); - internalTestCluster.startDataOnlyNode(clusterSettings); + internalTestCluster.startDataAndSearchNodes(1); Settings indexSettings = Settings.builder() .put(IndexMetadata.SETTING_NUMBER_OF_SHARDS, 1) @@ -94,7 +105,7 @@ public void testWritableWarmFeatureFlagDisabled() { public void testWritableWarmBasic() throws Exception { InternalTestCluster internalTestCluster = internalCluster(); internalTestCluster.startClusterManagerOnlyNode(); - internalTestCluster.startDataOnlyNode(); + internalTestCluster.startDataAndSearchNodes(1); Settings settings = Settings.builder() .put(IndexMetadata.SETTING_NUMBER_OF_SHARDS, 1) 
.put(IndexMetadata.SETTING_NUMBER_OF_REPLICAS, 0) diff --git a/server/src/internalClusterTest/java/org/opensearch/snapshots/SearchableSnapshotIT.java b/server/src/internalClusterTest/java/org/opensearch/snapshots/SearchableSnapshotIT.java index 1c199df4d548e..a19bbe49ad340 100644 --- a/server/src/internalClusterTest/java/org/opensearch/snapshots/SearchableSnapshotIT.java +++ b/server/src/internalClusterTest/java/org/opensearch/snapshots/SearchableSnapshotIT.java @@ -67,7 +67,6 @@ import java.util.stream.StreamSupport; import static org.opensearch.action.admin.cluster.node.stats.NodesStatsRequest.Metric.FS; -import static org.opensearch.common.util.FeatureFlags.TIERED_REMOTE_INDEX; import static org.opensearch.core.common.util.CollectionUtils.iterableAsArrayList; import static org.opensearch.index.store.remote.filecache.FileCacheSettings.DATA_TO_FILE_CACHE_SIZE_RATIO_SETTING; import static org.opensearch.test.NodeRoles.clusterManagerOnlyNode; @@ -1019,11 +1018,12 @@ public void testStartSearchNode() throws Exception { internalCluster().startNode(Settings.builder().put(onlyRole(DiscoveryNodeRole.SEARCH_ROLE))); // test start node without search role internalCluster().startNode(Settings.builder().put(onlyRole(DiscoveryNodeRole.DATA_ROLE))); - // test start non-dedicated search node with TIERED_REMOTE_INDEX feature enabled - internalCluster().startNode( - Settings.builder() - .put(onlyRoles(Set.of(DiscoveryNodeRole.SEARCH_ROLE, DiscoveryNodeRole.DATA_ROLE))) - .put(TIERED_REMOTE_INDEX, true) + // test start non-dedicated search node, if the user doesn't configure the cache size, it fails + assertThrows( + SettingsException.class, + () -> internalCluster().startNode( + Settings.builder().put(onlyRoles(Set.of(DiscoveryNodeRole.SEARCH_ROLE, DiscoveryNodeRole.DATA_ROLE))) + ) ); // test start non-dedicated search node assertThrows( diff --git a/server/src/main/java/org/opensearch/node/Node.java b/server/src/main/java/org/opensearch/node/Node.java index 8684b1b383cab..cbed8dfea8cc4 100644 --- a/server/src/main/java/org/opensearch/node/Node.java +++ b/server/src/main/java/org/opensearch/node/Node.java @@ -382,7 +382,7 @@ public class Node implements Closeable { public static final Setting NODE_SEARCH_CACHE_SIZE_SETTING = new Setting<>( "node.search.cache.size", - s -> (FeatureFlags.isEnabled(FeatureFlags.TIERED_REMOTE_INDEX_SETTING) || DiscoveryNode.isDedicatedSearchNode(s)) ? "80%" : ZERO, + s -> (DiscoveryNode.isDedicatedSearchNode(s)) ? "80%" : ZERO, Node::validateFileCacheSize, Property.NodeScope ); @@ -2037,8 +2037,7 @@ DiscoveryNode getNode() { * Else it configures the size to 80% of total capacity for a dedicated search node, if not explicitly defined. */ private void initializeFileCache(Settings settings, CircuitBreaker circuitBreaker) throws IOException { - boolean isWritableRemoteIndexEnabled = FeatureFlags.isEnabled(FeatureFlags.TIERED_REMOTE_INDEX_SETTING); - if (DiscoveryNode.isSearchNode(settings) == false && isWritableRemoteIndexEnabled == false) { + if (DiscoveryNode.isSearchNode(settings) == false) { return; } From 7cbff4f9fd8ae53b1672aa8e2582e23bb2c16def Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Mon, 5 Aug 2024 12:57:25 -0500 Subject: [PATCH 158/167] Bump actions/setup-java from 1 to 4 (#15104) * Bump actions/setup-java from 1 to 4 Bumps [actions/setup-java](https://github.com/actions/setup-java) from 1 to 4. 
- [Release notes](https://github.com/actions/setup-java/releases) - [Commits](https://github.com/actions/setup-java/compare/v1...v4) --- updated-dependencies: - dependency-name: actions/setup-java dependency-type: direct:production update-type: version-update:semver-major ... Signed-off-by: dependabot[bot] * Update changelog Signed-off-by: dependabot[bot] --------- Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: dependabot[bot] --- .github/workflows/benchmark-pull-request.yml | 2 +- CHANGELOG.md | 1 + 2 files changed, 2 insertions(+), 1 deletion(-) diff --git a/.github/workflows/benchmark-pull-request.yml b/.github/workflows/benchmark-pull-request.yml index 98dd39b1dad54..2a54c2072de59 100644 --- a/.github/workflows/benchmark-pull-request.yml +++ b/.github/workflows/benchmark-pull-request.yml @@ -123,7 +123,7 @@ jobs: ref: ${{ env.prHeadRefSha }} token: ${{ secrets.GITHUB_TOKEN }} - name: Setup Java - uses: actions/setup-java@v1 + uses: actions/setup-java@v4 with: java-version: 21 - name: Build and Assemble OpenSearch from PR diff --git a/CHANGELOG.md b/CHANGELOG.md index f4db5c3ecb5cc..2190f45fc9b09 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -22,6 +22,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), - Bump `com.microsoft.azure:msal4j` from 1.16.1 to 1.16.2 ([#14995](https://github.com/opensearch-project/OpenSearch/pull/14995)) - Bump `actions/github-script` from 6 to 7 ([#14997](https://github.com/opensearch-project/OpenSearch/pull/14997)) - Bump `org.tukaani:xz` from 1.9 to 1.10 ([#15110](https://github.com/opensearch-project/OpenSearch/pull/15110)) +- Bump `actions/setup-java` from 1 to 4 ([#15104](https://github.com/opensearch-project/OpenSearch/pull/15104)) ### Changed - Add lower limit for primary and replica batch allocators timeout ([#14979](https://github.com/opensearch-project/OpenSearch/pull/14979)) From caa0a2e05b85003116f5a3788f767370217e490e Mon Sep 17 00:00:00 2001 From: Peter Nied Date: Mon, 5 Aug 2024 14:09:58 -0500 Subject: [PATCH 159/167] Update old untriaged workflow to better issue url (#15086) GitHub doesn't support dynamic days since created/modified, so I've created a simple redirect on my website that will support this use case. See https://peternied.github.io/redirect/issue_search.html for full context on what is available. 
Source is available on https://github.com/peternied/peternied.github.io Signed-off-by: Peter Nied --- TRIAGING.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/TRIAGING.md b/TRIAGING.md index 6791d5944ee6f..53ef77de49159 100644 --- a/TRIAGING.md +++ b/TRIAGING.md @@ -35,7 +35,7 @@ Meeting structure may vary slightly, but the general structure is as follows: - [Core](https://github.com/opensearch-project/OpenSearch/issues?q=is%3Aissue+is%3Aopen+label%3Auntriaged+-label%3A%22Search%22%2C%22Search%3ARemote+Search%22%2C%22Search%3AResiliency%22%2C%22Search%3APerformance%22%2C%22Search%3ARelevance%22%2C%22Search%3AAggregations%22%2C%22Search%3AQuery+Capabilities%22%2C%22Search%3AQuery+Insights%22%2C%22Search%3ASearchable+Snapshots%22%2C%22Search%3AUser+Behavior+Insights%22%2C%22Storage%22%2C%22Storage%3AResiliency%22%2C%22Storage%3APerformance%22%2C%22Storage%3ASnapshots%22%2C%22Storage%3ARemote%22%2C%22Storage%3ADurability%22%2C%22Cluster+Manager%22%2C%22ClusterManager%3ARemoteState%22%2C%22Indexing%3AReplication%22%2C%22Indexing%22%2C%22Indexing%3APerformance%22%2C%22Indexing+%26+Search%22) 5. **Attendee Requests:** An opportunity for any meeting member to request consideration of an issue or pull request. 6. **Open Discussion:** Attendees can bring up any topics not already covered by filed issues or pull requests. -7. **Review of Old Untriaged Issues:** Time permitting, each meeting will look at all [untriaged issues older than 14 days](https://github.com/opensearch-project/OpenSearch/issues?q=is%3Aissue+is%3Aopen+label%3Auntriaged+created%3A%3C2024-05-20) to prevent issues from falling through the cracks (note the GitHub API does not allow for relative times, so the date in this search must be updated every meeting). +7. **Review of Old Untriaged Issues:** Look at all [untriaged issues older than 14 days](https://peternied.github.io/redirect/issue_search.html?owner=opensearch-project&repo=OpenSearch&tag=untriaged&created-since-days=14) to prevent issues from falling through the cracks. ### What is the role of the facilitator? 
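For context on the patch above: GitHub's issue search only accepts absolute dates (e.g. `created:<2024-05-20`), so a rolling window such as "untriaged issues older than 14 days" has to be turned into a concrete cutoff date at the moment the link is opened, which is what the redirect's `created-since-days=14` parameter implies. The redirect's internals are not part of this patch series, so the following is only a minimal, hypothetical Java sketch of the equivalent computation; the class name `UntriagedIssueSearch` and the hardcoded owner/repo/label values are illustrative assumptions, not code taken from the redirect.

import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.time.LocalDate;
import java.time.ZoneOffset;

public class UntriagedIssueSearch {
    public static void main(String[] args) {
        // Resolve the relative window into an absolute cutoff date at request time,
        // since the GitHub search syntax has no notion of "N days ago".
        int createdSinceDays = 14; // assumed to mirror created-since-days in the redirect URL
        LocalDate cutoff = LocalDate.now(ZoneOffset.UTC).minusDays(createdSinceDays);
        // Same filters as the old hardcoded TRIAGING.md search, with the date computed.
        String query = "is:issue is:open label:untriaged created:<" + cutoff; // LocalDate prints ISO yyyy-MM-dd
        String url = "https://github.com/opensearch-project/OpenSearch/issues?q="
            + URLEncoder.encode(query, StandardCharsets.UTF_8);
        System.out.println(url);
    }
}

Run at any time, this prints a search URL whose cutoff is "today minus 14 days", which is the property the previous pinned-date search could not provide without a manual update before every meeting.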
From 49b7cd47b0f0112ca21d1ea3952106f28f8253bb Mon Sep 17 00:00:00 2001 From: Craig Perkins Date: Mon, 5 Aug 2024 18:06:35 -0400 Subject: [PATCH 160/167] Bump org.apache.avro:avro from 1.11.3 to 1.12.0 in /plugins/repository-hdfs (#15119) * Bump org.apache.avro:avro from 1.11.3 to 1.12.0 in /plugins/repository-hdfs Signed-off-by: Craig Perkins * Add to CHANGELOG Signed-off-by: Craig Perkins --------- Signed-off-by: Craig Perkins --- CHANGELOG.md | 1 + plugins/repository-hdfs/build.gradle | 15 +-------------- .../repository-hdfs/licenses/avro-1.11.3.jar.sha1 | 1 - .../repository-hdfs/licenses/avro-1.12.0.jar.sha1 | 1 + 4 files changed, 3 insertions(+), 15 deletions(-) delete mode 100644 plugins/repository-hdfs/licenses/avro-1.11.3.jar.sha1 create mode 100644 plugins/repository-hdfs/licenses/avro-1.12.0.jar.sha1 diff --git a/CHANGELOG.md b/CHANGELOG.md index 2190f45fc9b09..5c7d7aa9bf780 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -23,6 +23,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), - Bump `actions/github-script` from 6 to 7 ([#14997](https://github.com/opensearch-project/OpenSearch/pull/14997)) - Bump `org.tukaani:xz` from 1.9 to 1.10 ([#15110](https://github.com/opensearch-project/OpenSearch/pull/15110)) - Bump `actions/setup-java` from 1 to 4 ([#15104](https://github.com/opensearch-project/OpenSearch/pull/15104)) +- Bump `org.apache.avro:avro` from 1.11.3 to 1.12.0 in /plugins/repository-hdfs ([#15119](https://github.com/opensearch-project/OpenSearch/pull/15119)) ### Changed - Add lower limit for primary and replica batch allocators timeout ([#14979](https://github.com/opensearch-project/OpenSearch/pull/14979)) diff --git a/plugins/repository-hdfs/build.gradle b/plugins/repository-hdfs/build.gradle index 884fb1333404a..f117bae658abe 100644 --- a/plugins/repository-hdfs/build.gradle +++ b/plugins/repository-hdfs/build.gradle @@ -66,7 +66,7 @@ dependencies { } api 'org.apache.htrace:htrace-core4:4.2.0-incubating' api "org.apache.logging.log4j:log4j-core:${versions.log4j}" - api 'org.apache.avro:avro:1.11.3' + api 'org.apache.avro:avro:1.12.0' api 'com.google.code.gson:gson:2.11.0' runtimeOnly "com.google.guava:guava:${versions.guava}" api "commons-logging:commons-logging:${versions.commonslogging}" @@ -425,19 +425,6 @@ thirdPartyAudit { 'org.apache.hadoop.shaded.org.apache.curator.shaded.com.google.common.util.concurrent.AbstractFuture$UnsafeAtomicHelper', 'org.apache.hadoop.shaded.org.apache.curator.shaded.com.google.common.util.concurrent.AbstractFuture$UnsafeAtomicHelper$1', 'org.apache.hadoop.shaded.org.xbill.DNS.spi.DNSJavaNameServiceDescriptor', - - 'org.apache.avro.reflect.FieldAccessUnsafe', - 'org.apache.avro.reflect.FieldAccessUnsafe$UnsafeBooleanField', - 'org.apache.avro.reflect.FieldAccessUnsafe$UnsafeByteField', - 'org.apache.avro.reflect.FieldAccessUnsafe$UnsafeCachedField', - 'org.apache.avro.reflect.FieldAccessUnsafe$UnsafeCharField', - 'org.apache.avro.reflect.FieldAccessUnsafe$UnsafeCustomEncodedField', - 'org.apache.avro.reflect.FieldAccessUnsafe$UnsafeDoubleField', - 'org.apache.avro.reflect.FieldAccessUnsafe$UnsafeFloatField', - 'org.apache.avro.reflect.FieldAccessUnsafe$UnsafeIntField', - 'org.apache.avro.reflect.FieldAccessUnsafe$UnsafeLongField', - 'org.apache.avro.reflect.FieldAccessUnsafe$UnsafeObjectField', - 'org.apache.avro.reflect.FieldAccessUnsafe$UnsafeShortField', ) } diff --git a/plugins/repository-hdfs/licenses/avro-1.11.3.jar.sha1 
b/plugins/repository-hdfs/licenses/avro-1.11.3.jar.sha1 deleted file mode 100644 index fb43ecbcf22c9..0000000000000 --- a/plugins/repository-hdfs/licenses/avro-1.11.3.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -02b463409b373bff9ece09f54a43d42da5cea55a \ No newline at end of file diff --git a/plugins/repository-hdfs/licenses/avro-1.12.0.jar.sha1 b/plugins/repository-hdfs/licenses/avro-1.12.0.jar.sha1 new file mode 100644 index 0000000000000..83f7bb3677159 --- /dev/null +++ b/plugins/repository-hdfs/licenses/avro-1.12.0.jar.sha1 @@ -0,0 +1 @@ +6e692a464b213f6df49f8e3e7fcf42df0dbb7639 \ No newline at end of file From 7769ce5a9310e56169b7310cca71f1994ee4f8cc Mon Sep 17 00:00:00 2001 From: Craig Perkins Date: Tue, 6 Aug 2024 11:30:41 -0400 Subject: [PATCH 161/167] Bump org.bouncycastle:bcpg-fips from 1.0.7.1 to 2.0.8 and org.bouncycastle:bc-fips from 1.0.2.5 to 2.0.0 (#15122) * Bump org.bouncycastle:bcpg-fips from 1.0.7.1 to 2.0.8 and org.bouncycastle:bc-fips from 1.0.2.5 to 2.0.0 in /distribution/tools/plugin-cli Signed-off-by: Craig Perkins * Add to CHANGELOG Signed-off-by: Craig Perkins --------- Signed-off-by: Craig Perkins --- CHANGELOG.md | 1 + distribution/tools/plugin-cli/build.gradle | 31 ++----------------- .../licenses/bc-fips-1.0.2.5.jar.sha1 | 1 - .../licenses/bc-fips-2.0.0.jar.sha1 | 1 + .../licenses/bcpg-fips-1.0.7.1.jar.sha1 | 1 - .../licenses/bcpg-fips-2.0.8.jar.sha1 | 1 + 6 files changed, 5 insertions(+), 31 deletions(-) delete mode 100644 distribution/tools/plugin-cli/licenses/bc-fips-1.0.2.5.jar.sha1 create mode 100644 distribution/tools/plugin-cli/licenses/bc-fips-2.0.0.jar.sha1 delete mode 100644 distribution/tools/plugin-cli/licenses/bcpg-fips-1.0.7.1.jar.sha1 create mode 100644 distribution/tools/plugin-cli/licenses/bcpg-fips-2.0.8.jar.sha1 diff --git a/CHANGELOG.md b/CHANGELOG.md index 5c7d7aa9bf780..061e3280852e8 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -24,6 +24,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), - Bump `org.tukaani:xz` from 1.9 to 1.10 ([#15110](https://github.com/opensearch-project/OpenSearch/pull/15110)) - Bump `actions/setup-java` from 1 to 4 ([#15104](https://github.com/opensearch-project/OpenSearch/pull/15104)) - Bump `org.apache.avro:avro` from 1.11.3 to 1.12.0 in /plugins/repository-hdfs ([#15119](https://github.com/opensearch-project/OpenSearch/pull/15119)) +- Bump `org.bouncycastle:bcpg-fips` from 1.0.7.1 to 2.0.8 and `org.bouncycastle:bc-fips` from 1.0.2.5 to 2.0.0 in /distribution/tools/plugin-cli ([#15103](https://github.com/opensearch-project/OpenSearch/pull/15103)) ### Changed - Add lower limit for primary and replica batch allocators timeout ([#14979](https://github.com/opensearch-project/OpenSearch/pull/14979)) diff --git a/distribution/tools/plugin-cli/build.gradle b/distribution/tools/plugin-cli/build.gradle index 3083ad4375460..a619ba1acf6a7 100644 --- a/distribution/tools/plugin-cli/build.gradle +++ b/distribution/tools/plugin-cli/build.gradle @@ -37,8 +37,8 @@ base { dependencies { compileOnly project(":server") compileOnly project(":libs:opensearch-cli") - api "org.bouncycastle:bcpg-fips:1.0.7.1" - api "org.bouncycastle:bc-fips:1.0.2.5" + api "org.bouncycastle:bcpg-fips:2.0.8" + api "org.bouncycastle:bc-fips:2.0.0" testImplementation project(":test:framework") testImplementation 'com.google.jimfs:jimfs:1.3.0' testRuntimeOnly("com.google.guava:guava:${versions.guava}") { @@ -58,33 +58,6 @@ test { jvmArgs += [ "-Djava.security.egd=file:/dev/urandom" ] } -/* - * 
these two classes intentionally use the following JDK internal APIs in order to offer the necessary - * functionality - * - * sun.security.internal.spec.TlsKeyMaterialParameterSpec - * sun.security.internal.spec.TlsKeyMaterialSpec - * sun.security.internal.spec.TlsMasterSecretParameterSpec - * sun.security.internal.spec.TlsPrfParameterSpec - * sun.security.internal.spec.TlsRsaPremasterSecretParameterSpec - * sun.security.provider.SecureRandom - * - */ -thirdPartyAudit.ignoreViolations( - 'org.bouncycastle.jcajce.provider.BouncyCastleFipsProvider$CoreSecureRandom', - 'org.bouncycastle.jcajce.provider.ProvSunTLSKDF', - 'org.bouncycastle.jcajce.provider.ProvSunTLSKDF$BaseTLSKeyGeneratorSpi', - 'org.bouncycastle.jcajce.provider.ProvSunTLSKDF$TLSKeyMaterialGenerator', - 'org.bouncycastle.jcajce.provider.ProvSunTLSKDF$TLSKeyMaterialGenerator$2', - 'org.bouncycastle.jcajce.provider.ProvSunTLSKDF$TLSMasterSecretGenerator', - 'org.bouncycastle.jcajce.provider.ProvSunTLSKDF$TLSMasterSecretGenerator$2', - 'org.bouncycastle.jcajce.provider.ProvSunTLSKDF$TLSPRFKeyGenerator', - 'org.bouncycastle.jcajce.provider.ProvSunTLSKDF$TLSRsaPreMasterSecretGenerator', - 'org.bouncycastle.jcajce.provider.ProvSunTLSKDF$TLSRsaPreMasterSecretGenerator$2', - 'org.bouncycastle.jcajce.provider.ProvSunTLSKDF$TLSExtendedMasterSecretGenerator', - 'org.bouncycastle.jcajce.provider.ProvSunTLSKDF$TLSExtendedMasterSecretGenerator$2' -) - thirdPartyAudit.ignoreMissingClasses( 'org.brotli.dec.BrotliInputStream', 'org.objectweb.asm.AnnotationVisitor', diff --git a/distribution/tools/plugin-cli/licenses/bc-fips-1.0.2.5.jar.sha1 b/distribution/tools/plugin-cli/licenses/bc-fips-1.0.2.5.jar.sha1 deleted file mode 100644 index 1b44c77dd4ee1..0000000000000 --- a/distribution/tools/plugin-cli/licenses/bc-fips-1.0.2.5.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -704e65f7e4fe679e5ab2aa8a840f27f8ced4c522 \ No newline at end of file diff --git a/distribution/tools/plugin-cli/licenses/bc-fips-2.0.0.jar.sha1 b/distribution/tools/plugin-cli/licenses/bc-fips-2.0.0.jar.sha1 new file mode 100644 index 0000000000000..79f0e3e9930bb --- /dev/null +++ b/distribution/tools/plugin-cli/licenses/bc-fips-2.0.0.jar.sha1 @@ -0,0 +1 @@ +ee9ac432cf08f9a9ebee35d7cf8a45f94959a7ab \ No newline at end of file diff --git a/distribution/tools/plugin-cli/licenses/bcpg-fips-1.0.7.1.jar.sha1 b/distribution/tools/plugin-cli/licenses/bcpg-fips-1.0.7.1.jar.sha1 deleted file mode 100644 index 44cebc7c92d87..0000000000000 --- a/distribution/tools/plugin-cli/licenses/bcpg-fips-1.0.7.1.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -5e1952428655ea822066f86df2e3ecda8fa0ba2b \ No newline at end of file diff --git a/distribution/tools/plugin-cli/licenses/bcpg-fips-2.0.8.jar.sha1 b/distribution/tools/plugin-cli/licenses/bcpg-fips-2.0.8.jar.sha1 new file mode 100644 index 0000000000000..758ee2fdf9de6 --- /dev/null +++ b/distribution/tools/plugin-cli/licenses/bcpg-fips-2.0.8.jar.sha1 @@ -0,0 +1 @@ +51c2f633e0c32d10de1ebab4c86f93310ff820f8 \ No newline at end of file From 76b9931299b4a45308b7ede4659e750eacd5006a Mon Sep 17 00:00:00 2001 From: David Zane <38449481+dzane17@users.noreply.github.com> Date: Tue, 6 Aug 2024 09:33:34 -0700 Subject: [PATCH 162/167] Add took time to request nodes stats (#15054) Signed-off-by: David Zane --- CHANGELOG-3.0.md | 1 + .../action/search/SearchRequestStats.java | 31 ++++++++++++++++ .../index/search/stats/SearchStats.java | 24 ++++++++++++- .../search/SearchRequestStatsTests.java | 35 +++++++++++++++++++ 4 files changed, 90 insertions(+), 1 deletion(-) diff --git 
a/CHANGELOG-3.0.md b/CHANGELOG-3.0.md index 48d978bede420..78e93eed0158a 100644 --- a/CHANGELOG-3.0.md +++ b/CHANGELOG-3.0.md @@ -13,6 +13,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), - GHA to verify checklist items completion in PR descriptions ([#10800](https://github.com/opensearch-project/OpenSearch/pull/10800)) - Allow to pass the list settings through environment variables (like [], ["a", "b", "c"], ...) ([#10625](https://github.com/opensearch-project/OpenSearch/pull/10625)) - Views, simplify data access and manipulation by providing a virtual layer over one or more indices ([#11957](https://github.com/opensearch-project/OpenSearch/pull/11957)) +- Add took time to request nodes stats ([#15054](https://github.com/opensearch-project/OpenSearch/pull/15054)) ### Dependencies diff --git a/server/src/main/java/org/opensearch/action/search/SearchRequestStats.java b/server/src/main/java/org/opensearch/action/search/SearchRequestStats.java index 97ef94055faf7..d1d5f568fc09d 100644 --- a/server/src/main/java/org/opensearch/action/search/SearchRequestStats.java +++ b/server/src/main/java/org/opensearch/action/search/SearchRequestStats.java @@ -27,6 +27,7 @@ @PublicApi(since = "2.11.0") public final class SearchRequestStats extends SearchRequestOperationsListener { Map phaseStatsMap = new EnumMap<>(SearchPhaseName.class); + StatsHolder tookStatsHolder; public static final String SEARCH_REQUEST_STATS_ENABLED_KEY = "search.request_stats_enabled"; public static final Setting SEARCH_REQUEST_STATS_ENABLED = Setting.boolSetting( @@ -40,6 +41,7 @@ public final class SearchRequestStats extends SearchRequestOperationsListener { public SearchRequestStats(ClusterSettings clusterSettings) { this.setEnabled(clusterSettings.get(SEARCH_REQUEST_STATS_ENABLED)); clusterSettings.addSettingsUpdateConsumer(SEARCH_REQUEST_STATS_ENABLED, this::setEnabled); + tookStatsHolder = new StatsHolder(); for (SearchPhaseName searchPhaseName : SearchPhaseName.values()) { phaseStatsMap.put(searchPhaseName, new StatsHolder()); } @@ -57,6 +59,18 @@ public long getPhaseMetric(SearchPhaseName searchPhaseName) { return phaseStatsMap.get(searchPhaseName).timing.sum(); } + public long getTookCurrent() { + return tookStatsHolder.current.count(); + } + + public long getTookTotal() { + return tookStatsHolder.total.count(); + } + + public long getTookMetric() { + return tookStatsHolder.timing.sum(); + } + @Override protected void onPhaseStart(SearchPhaseContext context) { phaseStatsMap.get(context.getCurrentPhase().getSearchPhaseName()).current.inc(); @@ -75,6 +89,23 @@ protected void onPhaseFailure(SearchPhaseContext context, Throwable cause) { phaseStatsMap.get(context.getCurrentPhase().getSearchPhaseName()).current.dec(); } + @Override + protected void onRequestStart(SearchRequestContext searchRequestContext) { + tookStatsHolder.current.inc(); + } + + @Override + protected void onRequestEnd(SearchPhaseContext context, SearchRequestContext searchRequestContext) { + tookStatsHolder.current.dec(); + tookStatsHolder.total.inc(); + tookStatsHolder.timing.inc(TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - searchRequestContext.getAbsoluteStartNanos())); + } + + @Override + protected void onRequestFailure(SearchPhaseContext context, SearchRequestContext searchRequestContext) { + tookStatsHolder.current.dec(); + } + /** * Holder of statistics values * diff --git a/server/src/main/java/org/opensearch/index/search/stats/SearchStats.java 
b/server/src/main/java/org/opensearch/index/search/stats/SearchStats.java index bb61e1afa05f4..d6ea803c9ee13 100644 --- a/server/src/main/java/org/opensearch/index/search/stats/SearchStats.java +++ b/server/src/main/java/org/opensearch/index/search/stats/SearchStats.java @@ -110,7 +110,7 @@ public void writeTo(StreamOutput out) throws IOException { } /** - * Holds requests stats for different phases. + * Holds all requests stats. * * @opensearch.api */ @@ -124,6 +124,7 @@ public Map getRequestStatsHolder() { } RequestStatsLongHolder() { + requestStatsHolder.put(Fields.TOOK, new PhaseStatsLongHolder()); for (SearchPhaseName searchPhaseName : SearchPhaseName.values()) { requestStatsHolder.put(searchPhaseName.getName(), new PhaseStatsLongHolder()); } @@ -512,6 +513,15 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws if (requestStatsLongHolder != null) { builder.startObject(Fields.REQUEST); + PhaseStatsLongHolder tookStatsLongHolder = requestStatsLongHolder.requestStatsHolder.get(Fields.TOOK); + if (tookStatsLongHolder != null) { + builder.startObject(Fields.TOOK); + builder.humanReadableField(Fields.TIME_IN_MILLIS, Fields.TIME, new TimeValue(tookStatsLongHolder.timeInMillis)); + builder.field(Fields.CURRENT, tookStatsLongHolder.current); + builder.field(Fields.TOTAL, tookStatsLongHolder.total); + builder.endObject(); + } + for (SearchPhaseName searchPhaseName : SearchPhaseName.values()) { PhaseStatsLongHolder statsLongHolder = requestStatsLongHolder.requestStatsHolder.get(searchPhaseName.getName()); if (statsLongHolder == null) { @@ -545,6 +555,17 @@ public void setSearchRequestStats(SearchRequestStats searchRequestStats) { totalStats.requestStatsLongHolder = new RequestStatsLongHolder(); } + // Set took stats + totalStats.requestStatsLongHolder.requestStatsHolder.put( + Fields.TOOK, + new PhaseStatsLongHolder( + searchRequestStats.getTookCurrent(), + searchRequestStats.getTookTotal(), + searchRequestStats.getTookMetric() + ) + ); + + // Set phase stats for (SearchPhaseName searchPhaseName : SearchPhaseName.values()) { totalStats.requestStatsLongHolder.requestStatsHolder.put( searchPhaseName.getName(), @@ -678,6 +699,7 @@ static final class Fields { static final String CURRENT = "current"; static final String TOTAL = "total"; static final String SEARCH_IDLE_REACTIVATE_COUNT_TOTAL = "search_idle_reactivate_count_total"; + static final String TOOK = "took"; } diff --git a/server/src/test/java/org/opensearch/action/search/SearchRequestStatsTests.java b/server/src/test/java/org/opensearch/action/search/SearchRequestStatsTests.java index 1af3eb2738a58..3bad3ec3e7d21 100644 --- a/server/src/test/java/org/opensearch/action/search/SearchRequestStatsTests.java +++ b/server/src/test/java/org/opensearch/action/search/SearchRequestStatsTests.java @@ -25,6 +25,41 @@ import static org.mockito.Mockito.when; public class SearchRequestStatsTests extends OpenSearchTestCase { + public void testSearchRequestStats_OnRequestFailure() { + ClusterSettings clusterSettings = new ClusterSettings(Settings.EMPTY, ClusterSettings.BUILT_IN_CLUSTER_SETTINGS); + SearchRequestStats testRequestStats = new SearchRequestStats(clusterSettings); + SearchPhaseContext mockSearchPhaseContext = mock(SearchPhaseContext.class); + SearchRequestContext mockSearchRequestContext = mock(SearchRequestContext.class); + + testRequestStats.onRequestStart(mockSearchRequestContext); + assertEquals(1, testRequestStats.getTookCurrent()); + testRequestStats.onRequestFailure(mockSearchPhaseContext, 
mockSearchRequestContext); + assertEquals(0, testRequestStats.getTookCurrent()); + assertEquals(0, testRequestStats.getTookTotal()); + } + + public void testSearchRequestStats_OnRequestEnd() { + ClusterSettings clusterSettings = new ClusterSettings(Settings.EMPTY, ClusterSettings.BUILT_IN_CLUSTER_SETTINGS); + SearchRequestStats testRequestStats = new SearchRequestStats(clusterSettings); + SearchPhaseContext mockSearchPhaseContext = mock(SearchPhaseContext.class); + SearchRequestContext mockSearchRequestContext = mock(SearchRequestContext.class); + + // Start request + testRequestStats.onRequestStart(mockSearchRequestContext); + assertEquals(1, testRequestStats.getTookCurrent()); + + // Mock start time + long tookTimeInMillis = randomIntBetween(1, 10); + long startTimeInNanos = System.nanoTime() - TimeUnit.MILLISECONDS.toNanos(tookTimeInMillis); + when(mockSearchRequestContext.getAbsoluteStartNanos()).thenReturn(startTimeInNanos); + + // End request + testRequestStats.onRequestEnd(mockSearchPhaseContext, mockSearchRequestContext); + assertEquals(0, testRequestStats.getTookCurrent()); + assertEquals(1, testRequestStats.getTookTotal()); + assertThat(testRequestStats.getTookMetric(), greaterThanOrEqualTo(tookTimeInMillis)); + } + public void testSearchRequestPhaseFailure() { ClusterSettings clusterSettings = new ClusterSettings(Settings.EMPTY, ClusterSettings.BUILT_IN_CLUSTER_SETTINGS); SearchRequestStats testRequestStats = new SearchRequestStats(clusterSettings); From c7254315b7d801951547b5f4bd3f2c89ac484c71 Mon Sep 17 00:00:00 2001 From: gaobinlong Date: Wed, 7 Aug 2024 01:17:01 +0800 Subject: [PATCH 163/167] Fix version check for adding rangeQuery and regexpQuery support for constant_keyword field type (#15127) Signed-off-by: Gao Binlong --- .../{110_constant_keyword.yml => 115_constant_keyword.yml} | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) rename rest-api-spec/src/main/resources/rest-api-spec/test/index/{110_constant_keyword.yml => 115_constant_keyword.yml} (98%) diff --git a/rest-api-spec/src/main/resources/rest-api-spec/test/index/110_constant_keyword.yml b/rest-api-spec/src/main/resources/rest-api-spec/test/index/115_constant_keyword.yml similarity index 98% rename from rest-api-spec/src/main/resources/rest-api-spec/test/index/110_constant_keyword.yml rename to rest-api-spec/src/main/resources/rest-api-spec/test/index/115_constant_keyword.yml index 1c50187534026..e60981dbbf50c 100644 --- a/rest-api-spec/src/main/resources/rest-api-spec/test/index/110_constant_keyword.yml +++ b/rest-api-spec/src/main/resources/rest-api-spec/test/index/115_constant_keyword.yml @@ -70,8 +70,8 @@ --- "Queries": - skip: - version: " - 2.99.99" - reason: "rangeQuery and regexpQuery are supported in 3.0.0 in main branch" + version: " - 2.16.99" + reason: "rangeQuery and regexpQuery are introduced in 2.17.0" - do: indices.create: From 2829a89f1484cc92e74e56a2695f691e375948c6 Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Tue, 6 Aug 2024 15:16:59 -0400 Subject: [PATCH 164/167] Bump com.azure:azure-core from 1.49.1 to 1.51.0 in /plugins/repository-azure (#15111) * Bump com.azure:azure-core in /plugins/repository-azure Bumps [com.azure:azure-core](https://github.com/Azure/azure-sdk-for-java) from 1.49.1 to 1.51.0. 
- [Release notes](https://github.com/Azure/azure-sdk-for-java/releases) - [Commits](https://github.com/Azure/azure-sdk-for-java/compare/azure-core_1.49.1...azure-core_1.51.0) --- updated-dependencies: - dependency-name: com.azure:azure-core dependency-type: direct:production update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] * Updating SHAs Signed-off-by: dependabot[bot] * Update changelog Signed-off-by: dependabot[bot] --------- Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: dependabot[bot] --- CHANGELOG.md | 1 + plugins/repository-azure/build.gradle | 2 +- plugins/repository-azure/licenses/azure-core-1.49.1.jar.sha1 | 1 - plugins/repository-azure/licenses/azure-core-1.51.0.jar.sha1 | 1 + 4 files changed, 3 insertions(+), 2 deletions(-) delete mode 100644 plugins/repository-azure/licenses/azure-core-1.49.1.jar.sha1 create mode 100644 plugins/repository-azure/licenses/azure-core-1.51.0.jar.sha1 diff --git a/CHANGELOG.md b/CHANGELOG.md index 061e3280852e8..5d1650c8341a7 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -25,6 +25,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), - Bump `actions/setup-java` from 1 to 4 ([#15104](https://github.com/opensearch-project/OpenSearch/pull/15104)) - Bump `org.apache.avro:avro` from 1.11.3 to 1.12.0 in /plugins/repository-hdfs ([#15119](https://github.com/opensearch-project/OpenSearch/pull/15119)) - Bump `org.bouncycastle:bcpg-fips` from 1.0.7.1 to 2.0.8 and `org.bouncycastle:bc-fips` from 1.0.2.5 to 2.0.0 in /distribution/tools/plugin-cli ([#15103](https://github.com/opensearch-project/OpenSearch/pull/15103)) +- Bump `com.azure:azure-core` from 1.49.1 to 1.51.0 ([#15111](https://github.com/opensearch-project/OpenSearch/pull/15111)) ### Changed - Add lower limit for primary and replica batch allocators timeout ([#14979](https://github.com/opensearch-project/OpenSearch/pull/14979)) diff --git a/plugins/repository-azure/build.gradle b/plugins/repository-azure/build.gradle index 15e3158f2dbc4..80809e067f65a 100644 --- a/plugins/repository-azure/build.gradle +++ b/plugins/repository-azure/build.gradle @@ -44,7 +44,7 @@ opensearchplugin { } dependencies { - api 'com.azure:azure-core:1.49.1' + api 'com.azure:azure-core:1.51.0' api 'com.azure:azure-json:1.1.0' api 'com.azure:azure-xml:1.0.0' api 'com.azure:azure-storage-common:12.25.1' diff --git a/plugins/repository-azure/licenses/azure-core-1.49.1.jar.sha1 b/plugins/repository-azure/licenses/azure-core-1.49.1.jar.sha1 deleted file mode 100644 index d487c08c26e94..0000000000000 --- a/plugins/repository-azure/licenses/azure-core-1.49.1.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -a7c44282eaa0f5a3be4b920d6a057509adfe8674 \ No newline at end of file diff --git a/plugins/repository-azure/licenses/azure-core-1.51.0.jar.sha1 b/plugins/repository-azure/licenses/azure-core-1.51.0.jar.sha1 new file mode 100644 index 0000000000000..7200f59af2f9a --- /dev/null +++ b/plugins/repository-azure/licenses/azure-core-1.51.0.jar.sha1 @@ -0,0 +1 @@ +ff5d0aedf75ca45ec0ace24673f790d2f7a57096 \ No newline at end of file From f980924136e4d689581c2346d3def8580c178087 Mon Sep 17 00:00:00 2001 From: Rishabh Singh Date: Tue, 6 Aug 2024 13:29:15 -0700 Subject: [PATCH 165/167] Add baseline-cluster-config key to benchmark config (#15134) Signed-off-by: Rishabh Singh --- .github/benchmark-configs.json | 34 +++++++++++++------- 
.github/workflows/benchmark-pull-request.yml | 2 ++ 2 files changed, 24 insertions(+), 12 deletions(-) diff --git a/.github/benchmark-configs.json b/.github/benchmark-configs.json index 5b44198cd3b8e..8f4bad040fe44 100644 --- a/.github/benchmark-configs.json +++ b/.github/benchmark-configs.json @@ -14,7 +14,8 @@ "cluster_configuration": { "size": "Single-Node", "data_instance_config": "4vCPU, 32G Mem, 16G Heap" - } + }, + "baseline_cluster_config": "x64-r5.xlarge-single-node-1-shard-0-replica-baseline" }, "id_2": { "description": "Indexing only configuration for HTTP_LOGS workload", @@ -30,7 +31,8 @@ "cluster_configuration": { "size": "Single-Node", "data_instance_config": "4vCPU, 32G Mem, 16G Heap" - } + }, + "baseline_cluster_config": "x64-r5.xlarge-single-node-1-shard-0-replica-baseline" }, "id_3": { "description": "Search only test-procedure for NYC_TAXIS, uses snapshot to restore the data for OS-3.0.0", @@ -46,7 +48,8 @@ "cluster_configuration": { "size": "Single-Node", "data_instance_config": "4vCPU, 32G Mem, 16G Heap" - } + }, + "baseline_cluster_config": "x64-r5.xlarge-1-shard-0-replica-snapshot-baseline" }, "id_4": { "description": "Search only test-procedure for HTTP_LOGS, uses snapshot to restore the data for OS-3.0.0", @@ -62,10 +65,11 @@ "cluster_configuration": { "size": "Single-Node", "data_instance_config": "4vCPU, 32G Mem, 16G Heap" - } + }, + "baseline_cluster_config": "x64-r5.xlarge-1-shard-0-replica-snapshot-baseline" }, "id_5": { - "description": "Search only test-procedure for HTTP_LOGS, uses snapshot to restore the data for OS-3.0.0", + "description": "Search only test-procedure for big5, uses snapshot to restore the data for OS-3.0.0", "supported_major_versions": ["3"], "cluster-benchmark-configs": { "SINGLE_NODE_CLUSTER": "true", @@ -78,7 +82,8 @@ "cluster_configuration": { "size": "Single-Node", "data_instance_config": "4vCPU, 32G Mem, 16G Heap" - } + }, + "baseline_cluster_config": "x64-r5.xlarge-1-shard-0-replica-snapshot-baseline" }, "id_6": { "description": "Search only test-procedure for NYC_TAXIS, uses snapshot to restore the data for OS-2.x", @@ -94,7 +99,8 @@ "cluster_configuration": { "size": "Single-Node", "data_instance_config": "4vCPU, 32G Mem, 16G Heap" - } + }, + "baseline_cluster_config": "x64-r5.xlarge-1-shard-0-replica-snapshot-baseline" }, "id_7": { "description": "Search only test-procedure for HTTP_LOGS, uses snapshot to restore the data for OS-2.x", @@ -110,10 +116,11 @@ "cluster_configuration": { "size": "Single-Node", "data_instance_config": "4vCPU, 32G Mem, 16G Heap" - } + }, + "baseline_cluster_config": "x64-r5.xlarge-1-shard-0-replica-snapshot-baseline" }, "id_8": { - "description": "Search only test-procedure for HTTP_LOGS, uses snapshot to restore the data for OS-2.x", + "description": "Search only test-procedure for big5, uses snapshot to restore the data for OS-2.x", "supported_major_versions": ["2"], "cluster-benchmark-configs": { "SINGLE_NODE_CLUSTER": "true", @@ -126,7 +133,8 @@ "cluster_configuration": { "size": "Single-Node", "data_instance_config": "4vCPU, 32G Mem, 16G Heap" - } + }, + "baseline_cluster_config": "x64-r5.xlarge-1-shard-0-replica-snapshot-baseline" }, "id_9": { "description": "Indexing and search configuration for pmc workload", @@ -141,7 +149,8 @@ "cluster_configuration": { "size": "Single-Node", "data_instance_config": "4vCPU, 32G Mem, 16G Heap" - } + }, + "baseline_cluster_config": "x64-r5.xlarge-single-node-1-shard-0-replica-baseline" }, "id_10": { "description": "Indexing only configuration for stack-overflow 
workload", @@ -156,6 +165,7 @@ "cluster_configuration": { "size": "Single-Node", "data_instance_config": "4vCPU, 32G Mem, 16G Heap" - } + }, + "baseline_cluster_config": "x64-r5.xlarge-single-node-1-shard-0-replica-baseline" } } diff --git a/.github/workflows/benchmark-pull-request.yml b/.github/workflows/benchmark-pull-request.yml index 2a54c2072de59..1096014e4a291 100644 --- a/.github/workflows/benchmark-pull-request.yml +++ b/.github/workflows/benchmark-pull-request.yml @@ -60,6 +60,8 @@ jobs: for (const [key, value] of Object.entries(clusterBenchmarkConfigs)) { core.exportVariable(key, value); } + if (benchmarkConfigs[configId].hasOwnProperty('baseline_cluster_config')) { + core.exportVariable('BASELINE_CLUSTER_CONFIG', benchmarkConfigs[configId]['baseline_cluster_config']); - name: Post invalid format comment if: steps.check_comment.outputs.invalid == 'true' uses: actions/github-script@v7 From b47b401b5a3c15304fc07b2bad9621c9dea122da Mon Sep 17 00:00:00 2001 From: Liyun Xiu Date: Wed, 7 Aug 2024 05:31:12 +0800 Subject: [PATCH 166/167] Fix bulk ingest NPE with empty pipeline (#15033) Signed-off-by: Liyun Xiu --- CHANGELOG.md | 1 + .../org/opensearch/ingest/IngestService.java | 10 +++++- .../opensearch/ingest/IngestServiceTests.java | 36 +++++++++++++++++++ 3 files changed, 46 insertions(+), 1 deletion(-) diff --git a/CHANGELOG.md b/CHANGELOG.md index 5d1650c8341a7..be5e5598b09c2 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -36,6 +36,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), ### Fixed - Fix constraint bug which allows more primary shards than average primary shards per index ([#14908](https://github.com/opensearch-project/OpenSearch/pull/14908)) +- Fix NPE when bulk ingest with empty pipeline ([#15033](https://github.com/opensearch-project/OpenSearch/pull/15033)) - Fix missing value of FieldSort for unsigned_long ([#14963](https://github.com/opensearch-project/OpenSearch/pull/14963)) - Fix delete index template failed when the index template matches a data stream but is unused ([#15080](https://github.com/opensearch-project/OpenSearch/pull/15080)) diff --git a/server/src/main/java/org/opensearch/ingest/IngestService.java b/server/src/main/java/org/opensearch/ingest/IngestService.java index 17eb23422e68b..938ca7493926e 100644 --- a/server/src/main/java/org/opensearch/ingest/IngestService.java +++ b/server/src/main/java/org/opensearch/ingest/IngestService.java @@ -997,7 +997,7 @@ private void innerBatchExecute( Consumer> handler ) { if (pipeline.getProcessors().isEmpty()) { - handler.accept(null); + handler.accept(toIngestDocumentWrappers(slots, indexRequests)); return; } @@ -1271,6 +1271,14 @@ private static IngestDocumentWrapper toIngestDocumentWrapper(int slot, IndexRequ return new IngestDocumentWrapper(slot, toIngestDocument(indexRequest), null); } + private static List toIngestDocumentWrappers(List slots, List indexRequests) { + List ingestDocumentWrappers = new ArrayList<>(); + for (int i = 0; i < slots.size(); ++i) { + ingestDocumentWrappers.add(toIngestDocumentWrapper(slots.get(i), indexRequests.get(i))); + } + return ingestDocumentWrappers; + } + private static Map createSlotIndexRequestMap(List slots, List indexRequests) { Map slotIndexRequestMap = new HashMap<>(); for (int i = 0; i < slots.size(); ++i) { diff --git a/server/src/test/java/org/opensearch/ingest/IngestServiceTests.java b/server/src/test/java/org/opensearch/ingest/IngestServiceTests.java index 166b94966196c..1f4b1d635d438 100644 --- 
a/server/src/test/java/org/opensearch/ingest/IngestServiceTests.java +++ b/server/src/test/java/org/opensearch/ingest/IngestServiceTests.java @@ -1995,6 +1995,42 @@ public void testExecuteBulkRequestInBatchWithDefaultBatchSize() { verify(mockCompoundProcessor, never()).execute(any(), any()); } + public void testExecuteEmptyPipelineInBatch() throws Exception { + IngestService ingestService = createWithProcessors(emptyMap()); + PutPipelineRequest putRequest = new PutPipelineRequest( + "_id", + new BytesArray("{\"processors\": [], \"description\": \"_description\"}"), + MediaTypeRegistry.JSON + ); + ClusterState clusterState = ClusterState.builder(new ClusterName("_name")).build(); // Start empty + ClusterState previousClusterState = clusterState; + clusterState = IngestService.innerPut(putRequest, clusterState); + ingestService.applyClusterState(new ClusterChangedEvent("", clusterState, previousClusterState)); + BulkRequest bulkRequest = new BulkRequest(); + IndexRequest indexRequest1 = new IndexRequest("_index").id("_id1").source(emptyMap()).setPipeline("_id").setFinalPipeline("_none"); + bulkRequest.add(indexRequest1); + IndexRequest indexRequest2 = new IndexRequest("_index").id("_id2").source(emptyMap()).setPipeline("_id").setFinalPipeline("_none"); + bulkRequest.add(indexRequest2); + IndexRequest indexRequest3 = new IndexRequest("_index").id("_id3").source(emptyMap()).setPipeline("_id").setFinalPipeline("_none"); + bulkRequest.add(indexRequest3); + IndexRequest indexRequest4 = new IndexRequest("_index").id("_id4").source(emptyMap()).setPipeline("_id").setFinalPipeline("_none"); + bulkRequest.add(indexRequest4); + bulkRequest.batchSize(4); + final Map failureHandler = new HashMap<>(); + final Map completionHandler = new HashMap<>(); + ingestService.executeBulkRequest( + 4, + bulkRequest.requests(), + failureHandler::put, + completionHandler::put, + indexReq -> {}, + Names.WRITE, + bulkRequest + ); + assertTrue(failureHandler.isEmpty()); + assertEquals(Set.of(Thread.currentThread()), completionHandler.keySet()); + } + public void testPrepareBatches_same_index_pipeline() { IngestService.IndexRequestWrapper wrapper1 = createIndexRequestWrapper("index1", Collections.singletonList("p1")); IngestService.IndexRequestWrapper wrapper2 = createIndexRequestWrapper("index1", Collections.singletonList("p1")); From 212597e41717d1474b143721a29fd6b6e9f8f2fd Mon Sep 17 00:00:00 2001 From: Jay Deng Date: Tue, 6 Aug 2024 15:26:12 -0700 Subject: [PATCH 167/167] CODEOWNERS personalizations for jed326 (#15137) Signed-off-by: Jay Deng --- .github/CODEOWNERS | 14 ++++++++++++-- 1 file changed, 12 insertions(+), 2 deletions(-) diff --git a/.github/CODEOWNERS b/.github/CODEOWNERS index 1aefeee710f47..fb7d73f599670 100644 --- a/.github/CODEOWNERS +++ b/.github/CODEOWNERS @@ -13,15 +13,25 @@ # Default ownership for all repo files * @anasalkouz @andrross @ashking94 @Bukhtawar @CEHENKLE @dblock @dbwiddis @gbbafna @kotwanikunal @mch2 @msfroh @nknize @owaiskazi19 @reta @Rishikesh1159 @sachinpkale @saratvemulapalli @shwetathareja @sohami @VachaShah +/modules/lang-painless/ @anasalkouz @andrross @ashking94 @Bukhtawar @CEHENKLE @dblock @dbwiddis @gbbafna @jed326 @kotwanikunal @mch2 @msfroh @nknize @owaiskazi19 @reta @Rishikesh1159 @sachinpkale @saratvemulapalli @shwetathareja @sohami @VachaShah +/modules/parent-join/ @anasalkouz @andrross @ashking94 @Bukhtawar @CEHENKLE @dblock @dbwiddis @gbbafna @jed326 @kotwanikunal @mch2 @msfroh @nknize @owaiskazi19 @reta @Rishikesh1159 @sachinpkale @saratvemulapalli @shwetathareja 
@sohami @VachaShah /modules/transport-netty4/ @peternied /plugins/identity-shiro/ @peternied +/server/src/internalClusterTest/java/org/opensearch/index/ @anasalkouz @andrross @ashking94 @Bukhtawar @CEHENKLE @dblock @dbwiddis @gbbafna @jed326 @kotwanikunal @mch2 @msfroh @nknize @owaiskazi19 @reta @Rishikesh1159 @sachinpkale @saratvemulapalli @shwetathareja @sohami @VachaShah +/server/src/internalClusterTest/java/org/opensearch/search/ @anasalkouz @andrross @ashking94 @Bukhtawar @CEHENKLE @dblock @dbwiddis @gbbafna @jed326 @kotwanikunal @mch2 @msfroh @nknize @owaiskazi19 @reta @Rishikesh1159 @sachinpkale @saratvemulapalli @shwetathareja @sohami @VachaShah + /server/src/main/java/org/opensearch/extensions/ @peternied /server/src/main/java/org/opensearch/identity/ @peternied -/server/src/main/java/org/opensearch/threadpool/ @peternied +/server/src/main/java/org/opensearch/index/ @anasalkouz @andrross @ashking94 @Bukhtawar @CEHENKLE @dblock @dbwiddis @gbbafna @jed326 @kotwanikunal @mch2 @msfroh @nknize @owaiskazi19 @reta @Rishikesh1159 @sachinpkale @saratvemulapalli @shwetathareja @sohami @VachaShah +/server/src/main/java/org/opensearch/search/ @anasalkouz @andrross @ashking94 @Bukhtawar @CEHENKLE @dblock @dbwiddis @gbbafna @jed326 @kotwanikunal @mch2 @msfroh @nknize @owaiskazi19 @reta @Rishikesh1159 @sachinpkale @saratvemulapalli @shwetathareja @sohami @VachaShah +/server/src/main/java/org/opensearch/threadpool/ @jed326 @peternied /server/src/main/java/org/opensearch/transport/ @peternied -/.github/ @peternied +/server/src/test/java/org/opensearch/index/ @anasalkouz @andrross @ashking94 @Bukhtawar @CEHENKLE @dblock @dbwiddis @gbbafna @jed326 @kotwanikunal @mch2 @msfroh @nknize @owaiskazi19 @reta @Rishikesh1159 @sachinpkale @saratvemulapalli @shwetathareja @sohami @VachaShah +/server/src/test/java/org/opensearch/search/ @anasalkouz @andrross @ashking94 @Bukhtawar @CEHENKLE @dblock @dbwiddis @gbbafna @jed326 @kotwanikunal @mch2 @msfroh @nknize @owaiskazi19 @reta @Rishikesh1159 @sachinpkale @saratvemulapalli @shwetathareja @sohami @VachaShah + +/.github/ @jed326 @peternied /MAINTAINERS.md @anasalkouz @andrross @ashking94 @Bukhtawar @CEHENKLE @dblock @dbwiddis @gaobinlong @gbbafna @jed326 @kotwanikunal @mch2 @msfroh @nknize @owaiskazi19 @peternied @reta @Rishikesh1159 @sachinpkale @saratvemulapalli @shwetathareja @sohami @VachaShah
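Looking back across this series, the took-time accounting from PATCH 162 is spread over several hunks; condensed, the pattern is: increment an in-flight counter on request start, then on request end convert the delta from the recorded absolute start nanos into milliseconds. Below is a self-contained sketch of that lifecycle, assuming simplified AtomicLong counters in place of OpenSearch's internal metric classes; the class name is illustrative.

import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

/** Simplified stand-in for the took-time tracking PATCH 162 adds to SearchRequestStats. */
public final class TookStatsSketch {
    private final AtomicLong current = new AtomicLong();    // requests currently in flight
    private final AtomicLong total = new AtomicLong();      // requests completed successfully
    private final AtomicLong timeMillis = new AtomicLong(); // cumulative took time in millis

    public void onRequestStart() {
        current.incrementAndGet();
    }

    /** @param absoluteStartNanos the System.nanoTime() captured when the request began */
    public void onRequestEnd(long absoluteStartNanos) {
        current.decrementAndGet();
        total.incrementAndGet();
        // Same conversion the patch performs: nanosecond delta down to milliseconds.
        timeMillis.addAndGet(TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - absoluteStartNanos));
    }

    public void onRequestFailure() {
        current.decrementAndGet(); // a failed request never counts toward total or timeMillis
    }
}

As the new toXContent section in SearchStats.java suggests, these three values then surface under a "took" object (time_in_millis, current, total) in the node stats response.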