Merge branch 'upstream/master' into tsdb-deal-timestreamp
* upstream/master: (521 commits)
  Migrate custom role providers to licensed feature (elastic#79127)
  Remove stale AwaitsFix in InternalEngineTests (elastic#79323)
  Fix errors in RefreshListenersTests (elastic#79324)
  Reenable BwC Tests after elastic#79318 (elastic#79320)
  Mute BwC Tests for elastic#79318 (elastic#79319)
  Reenable BwC Tests after elastic#79308 (elastic#79313)
  Disable BwC Tests for elastic#79308 (elastic#79310)
  Adjust BWC for node-level field cap requests (elastic#79301)
  Allow total memory to be overridden (elastic#78750)
  Fix SnapshotBasedIndexRecoveryIT#testRecoveryIsCancelledAfterDeletingTheIndex (elastic#79269)
  Disable BWC tests
  Mute GeoIpDownloaderCliIT.testStartWithNoDatabases (elastic#79299)
  Add alias support to fleet search API (elastic#79285)
  Create a coordinating node level reader for tsdb (elastic#79197)
  Route documents to the correct shards in tsdb (elastic#77731)
  Inject migrate action regardless of allocate action (elastic#79090)
  Migrate to data tiers should always ensure a TIER_PREFERENCE is set (elastic#79100)
  Skip building of BWC distributions when building release artifacts (elastic#79180)
  Default ENFORCE_DEFAULT_TIER_PREFERENCE to true (elastic#79275)
  Deprecation of transient cluster settings (elastic#78794)
  ...

# Conflicts:
#	rest-api-spec/src/yamlRestTest/resources/rest-api-spec/test/tsdb/10_settings.yml
#	server/src/main/java/org/elasticsearch/common/settings/IndexScopedSettings.java
#	server/src/main/java/org/elasticsearch/common/settings/Setting.java
#	server/src/main/java/org/elasticsearch/index/IndexMode.java
#	server/src/test/java/org/elasticsearch/index/TimeSeriesModeTests.java
weizijun committed Oct 18, 2021
2 parents 0e67726 + fceacfe commit e3b7256
Showing 7,220 changed files with 96,176 additions and 52,750 deletions.
The diff you're trying to view is too large. We only load the first 3000 changed files.
3 changes: 2 additions & 1 deletion .ci/bwcVersions
@@ -41,7 +41,8 @@ BWC_VERSION:
- "7.14.0"
- "7.14.1"
- "7.14.2"
- "7.14.3"
- "7.15.0"
- "7.15.1"
- "7.15.2"
- "7.16.0"
- "8.0.0"
2 changes: 1 addition & 1 deletion .ci/jobs.t/defaults.yml
@@ -15,7 +15,7 @@
concurrent: true
logrotate:
daysToKeep: 30
numToKeep: 90
numToKeep: 500
artifactDaysToKeep: 7
parameters:
- string:
4 changes: 2 additions & 2 deletions .idea/eclipseCodeFormatter.xml

Some generated files are not rendered by default.

4 changes: 2 additions & 2 deletions .idea/runConfigurations/Debug_Elasticsearch.xml

Some generated files are not rendered by default.

113 changes: 113 additions & 0 deletions BUILDING.md
@@ -78,6 +78,23 @@ The major difference between these two syntaxes is that the configuration block

By doing less at Gradle configuration time, creating only the tasks that are requested as part of the build and running only the configuration blocks of those requested tasks, the task avoidance API plays a major part in keeping our build fast.

#### Registering test clusters

When using the Elasticsearch test cluster plugin we want to use (similar to the task avoidance API) a Gradle API that creates domain objects lazily, only if required by the build.
Therefore we register test clusters using the following syntax:

```
def someClusterProvider = testClusters.register('someCluster') { ... }
```

This registers a potential test cluster named `someCluster` and returns a provider instance, but does not yet create or configure the cluster. This makes the Gradle
configuration phase more efficient by doing less.

To wire this registered cluster into a `TestClusterAware` task (e.g. `RestIntegTest`) you can resolve the actual cluster from the provider instance:

```
tasks.register('someClusterTest', RestIntegTestTask) {
  useCluster someClusterProvider
  nonInputProperties.systemProperty 'tests.leader_host', "${-> someClusterProvider.get().getAllHttpSocketURI().get(0)}"
}
```

#### Adding additional integration tests

Additional integration tests for a certain Elasticsearch module that are specific to a certain cluster configuration can be declared in a separate, so-called `qa` subproject of your module.
@@ -118,3 +135,99 @@ dependencies {
}
}
```

## FAQ

### How do I test a development version of a third party dependency?

To test an unreleased development version of a third party dependency you have several options.

#### How to use a Maven-based third party dependency via mavenLocal?

1. Clone the third party repository locally
2. Run `mvn install` to install a copy into your `~/.m2/repository` folder.
3. Add this to the root build script:

```
allprojects {
repositories {
mavenLocal()
}
}
```
4. Update the version in your dependency declaration accordingly (likely a snapshot version)
5. Run the gradle build as needed
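
The dependency declaration in step 4 might then look like the following. This is a minimal sketch with hypothetical coordinates; use whatever group, artifact name, and snapshot version `mvn install` actually published to `~/.m2/repository`:

```
dependencies {
    // hypothetical coordinates: substitute the real group, artifact, and snapshot version
    implementation 'com.example:some-library:1.2.3-SNAPSHOT'
}
```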

#### How to use a Maven-built third party dependency via the JitPack repository?

https://jitpack.io is an ad hoc repository that transparently builds Maven projects in the background when
resolving unreleased snapshots from a GitHub repository. This approach also works as a temporary solution
and is compliant with our CI builds.

1. Add the JitPack repository to the root build file:

```
allprojects {
repositories {
maven { url "https://jitpack.io" }
}
}
```
2. Add the dependency in the following format:
```
dependencies {
implementation 'com.github.User:Repo:Tag'
}
```

As the version you can also use a short commit hash or `master-SNAPSHOT`.
In addition to snapshot builds, JitPack supports building pull requests. Simply use `PR<NR>-SNAPSHOT` as the version.

3. Run the Gradle build as needed. Keep in mind that the initial resolution might take a bit longer, as the dependency
needs to be built by JitPack in the background before we can resolve it.
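
As a concrete sketch, resolving an unmerged pull request via JitPack might look like this (hypothetical coordinates; `PR42-SNAPSHOT` assumes pull request number 42 of the repository):

```
dependencies {
    // hypothetical: asks JitPack to build pull request 42 of github.com/User/Repo on demand
    implementation 'com.github.User:Repo:PR42-SNAPSHOT'
}
```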

---

**NOTE**

You should only use this approach locally or on a developer branch for production dependencies, as we do
not want to ship unreleased libraries in our releases.
---

#### How to use a custom third party artifact?

For third party libraries that are not built with Maven (e.g. Ant) or that are provided as a plain jar artifact, we can leverage
a flat directory repository that resolves artifacts from a flat directory on your filesystem.

1. Put the jar artifact with the format `artifactName-version.jar` into a directory named `localRepo` (you have to create this manually)
2. Declare a `flatDir` repository in your root `build.gradle` file:

```
allprojects {
repositories {
flatDir {
dirs 'localRepo'
}
}
}
```

3. Update the dependency declaration of the artifact in question to match the custom build version. For a file named e.g. `jmxri-1.2.1.jar` the
dependency definition would be `:jmxri:1.2.1` as it comes with no group information:

```
dependencies {
implementation ':jmxri:1.2.1'
}
```
4. Run the Gradle build as needed.

---
**NOTE**

As Gradle prefers to use modules whose descriptor has been created from real meta-data rather than being generated,
flat directory repositories cannot be used to override artifacts with real meta-data from other repositories declared in the build.
For example, if Gradle finds only `jmxri-1.2.1.jar` in a flat directory repository, but `jmxri-1.2.1.pom` in another repository
that supports meta-data, it will use the second repository to provide the module.
Therefore it is recommended to declare a version that is not resolvable from the public repositories we use (e.g. Maven Central).
---
20 changes: 10 additions & 10 deletions CONTRIBUTING.md
@@ -193,14 +193,14 @@ need them.
2. Click "Use the Eclipse Code Formatter"
3. Under "Eclipse formatter config", select "Eclipse workspace/project
folder or config file"
4. Click "Browse", and navigate to the file `build-tools-internal/formatterConfig.xml`
4. Click "Browse", and navigate to the file `build-conventions/formatterConfig.xml`
5. **IMPORTANT** - make sure "Optimize Imports" is **NOT** selected.
6. Click "OK"

Note that only some sub-projects in the Elasticsearch project are currently
fully-formatted. You can see a list of project that **are not**
automatically formatted in
[build-tools-internal/src/main/groovy/elasticsearch.formatting.gradle](build-tools-internal/src/main/groovy/elasticsearch.formatting.gradle).
[FormattingPrecommitPlugin.java](build-conventions/src/main/java/org/elasticsearch/gradle/internal/conventions/precommit/FormattingPrecommitPlugin.java).

### Importing the project into Eclipse

@@ -234,15 +234,15 @@ Next you'll want to import our auto-formatter:
- Select **Window > Preferences**
- Select **Java > Code Style > Formatter**
- Click **Import**
- Import the file at **build-tools-internal/formatterConfig.xml**
- Import the file at **build-conventions/formatterConfig.xml**
- Make sure it is the **Active profile**

Finally, set up import order:

- Select **Window > Preferences**
- Select **Java > Code Style > Organize Imports**
- Click **Import...**
- Import the file at **build-tools-internal/elastic.importorder**
- Import the file at **build-conventions/elastic.importorder**
- Set the **Number of imports needed for `.*`** to ***9999***
- Set the **Number of static imports needed for `.*`** to ***9999*** as well
- Apply that
@@ -279,11 +279,12 @@ form.
Java files in the Elasticsearch codebase are automatically formatted using
the [Spotless Gradle] plugin. All new projects are automatically formatted,
while existing projects are gradually being opted-in. The formatting check
can be run explicitly with:
is run automatically via the `precommit` task, but it can be run explicitly with:

./gradlew spotlessJavaCheck

The code can be formatted with:
It is usually more useful, and just as fast, to simply reformat the project. You
can do this with:

./gradlew spotlessApply

@@ -304,10 +305,9 @@ Please follow these formatting guidelines:
* Wildcard imports (`import foo.bar.baz.*`) are forbidden and will cause
the build to fail.
* If *absolutely* necessary, you can disable formatting for regions of code
with the `// tag::NAME` and `// end::NAME` directives, but note that
these are intended for use in documentation, so please make it clear what
you have done, and only do this where the benefit clearly outweighs the
decrease in consistency.
with the `// @formatter:off` and `// @formatter:on` directives, but
only do this where the benefit clearly outweighs the decrease in formatting
consistency.
* Note that Javadoc and block comments, i.e. `/* ... */`, are not formatted,
but line comments, i.e. `// ...`, are.
* Negative boolean expressions must use the form `foo == false` instead of
3 changes: 0 additions & 3 deletions NOTICE.txt
@@ -3,6 +3,3 @@ Copyright 2009-2021 Elasticsearch

This product includes software developed by The Apache Software
Foundation (http://www.apache.org/).

This product includes software developed by
Joda.org (http://www.joda.org/).
32 changes: 31 additions & 1 deletion TESTING.asciidoc
@@ -283,6 +283,36 @@ memory or some of the containers will fail to start. You can tell that you
are short of memory if containers are exiting quickly after starting with
code 137 (128 + 9, where 9 means SIGKILL).

== Debugging tests

If you would like to debug your tests themselves, simply pass the `--debug-jvm`
flag to the testing task and connect a debugger on the default port of `5005`.

---------------------------------------------------------------------------
./gradlew :server:test --debug-jvm
---------------------------------------------------------------------------

For REST tests, if you'd like to debug the Elasticsearch server itself, and
not your test code, use the `--debug-server-jvm` flag and use the
"Debug Elasticsearch" run configuration in IntelliJ to listen on the default
port of `5007`.

---------------------------------------------------------------------------
./gradlew :rest-api-spec:yamlRestTest --debug-server-jvm
---------------------------------------------------------------------------

NOTE: For test clusters using multiple nodes, multiple debuggers will need to
be attached, on incrementing ports. For example, a 3 node cluster will attempt
to attach to listening debuggers on ports `5007`, `5008`, and `5009`.

You can also use both flags in combination to debug both the tests and the server.
This is only applicable to Java REST tests.

---------------------------------------------------------------------------
./gradlew :modules:kibana:javaRestTest --debug-jvm --debug-server-jvm
---------------------------------------------------------------------------

== Testing the REST layer

The REST layer is tested through specific tests that are executed against
@@ -324,7 +354,7 @@ A specific test case can be run with the following command:
---------------------------------------------------------------------------
./gradlew ':rest-api-spec:yamlRestTest' \
--tests "org.elasticsearch.test.rest.ClientYamlTestSuiteIT" \
-Dtests.method="test {p0=cat.segments/10_basic/Help}"
-Dtests.method="test {yaml=cat.segments/10_basic/Help}"
---------------------------------------------------------------------------

The YAML REST tests support all the options provided by the randomized runner, plus the following:
@@ -44,12 +44,13 @@ public class AvailableIndexFoldersBenchmark {
@Setup
public void setup() throws IOException {
Path path = Files.createTempDirectory("test");
String[] paths = new String[] { path.toString() };
nodePath = new NodeEnvironment.NodePath(path);

LogConfigurator.setNodeName("test");
Settings settings = Settings.builder()
.put(Environment.PATH_HOME_SETTING.getKey(), path)
.put(Environment.PATH_DATA_SETTING.getKey(), path.resolve("data"))
.putList(Environment.PATH_DATA_SETTING.getKey(), paths)
.build();
nodeEnv = new NodeEnvironment(settings, new Environment(settings, null));

@@ -14,7 +14,7 @@
import org.elasticsearch.cluster.metadata.Metadata;
import org.elasticsearch.cluster.node.DiscoveryNodes;
import org.elasticsearch.cluster.routing.RoutingTable;
import org.elasticsearch.cluster.routing.ShardRoutingState;
import org.elasticsearch.cluster.routing.ShardRouting;
import org.elasticsearch.cluster.routing.allocation.AllocationService;
import org.elasticsearch.common.settings.Settings;
import org.openjdk.jmh.annotations.Benchmark;
@@ -31,6 +31,8 @@

import java.util.Collections;
import java.util.concurrent.TimeUnit;
import java.util.stream.Collectors;
import java.util.stream.StreamSupport;

@Fork(3)
@Warmup(iterations = 10)
@@ -154,7 +156,10 @@ public ClusterState measureAllocation() {
while (clusterState.getRoutingNodes().hasUnassignedShards()) {
clusterState = strategy.applyStartedShards(
clusterState,
clusterState.getRoutingNodes().shardsWithState(ShardRoutingState.INITIALIZING)
StreamSupport.stream(clusterState.getRoutingNodes().spliterator(), false)
.flatMap(shardRoutings -> StreamSupport.stream(shardRoutings.spliterator(), false))
.filter(ShardRouting::initializing)
.collect(Collectors.toList())
);
clusterState = strategy.reroute(clusterState, "reroute");
}
@@ -22,6 +22,7 @@
import org.elasticsearch.core.Releasables;
import org.elasticsearch.index.Index;
import org.elasticsearch.index.IndexSettings;
import org.elasticsearch.index.analysis.NameOrDefinition;
import org.elasticsearch.index.analysis.NamedAnalyzer;
import org.elasticsearch.index.cache.bitset.BitsetFilterCache;
import org.elasticsearch.index.fielddata.IndexFieldData;
@@ -197,6 +198,22 @@ public long nowInMillis() {
return 0;
}

@Override
public Analyzer getNamedAnalyzer(String analyzer) {
return null;
}

@Override
public Analyzer buildCustomAnalyzer(
IndexSettings indexSettings,
boolean normalizer,
NameOrDefinition tokenizer,
List<NameOrDefinition> charFilters,
List<NameOrDefinition> tokenFilters
) {
return null;
}

@Override
protected IndexFieldData<?> buildFieldData(MappedFieldType ft) {
IndexFieldDataCache indexFieldDataCache = indicesFieldDataCache.buildIndexFieldDataCache(new IndexFieldDataCache.Listener() {
