Merge branch 'master' into ingest_pipelines/functional_test
elasticmachine authored May 11, 2020
2 parents 5054e44 + faaa127 commit fa97e33
Showing 138 changed files with 1,998 additions and 584 deletions.
2 changes: 1 addition & 1 deletion .backportrc.json
@@ -25,7 +25,7 @@
],
"targetPRLabels": ["backport"],
"branchLabelMapping": {
"^v7.8.0$": "7.x",
"^v7.9.0$": "7.x",
"^v(\\d+).(\\d+).\\d+$": "$1.$2"
}
}
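The regex keys in `branchLabelMapping` map a PR's version label to a backport target branch, with `$1`/`$2` substituted from capture groups. A minimal sketch of how such a mapping could be resolved (hypothetical helper, not part of the backport tooling):

```ts
// Hypothetical resolution of a backport label against the mapping above.
const branchLabelMapping: Record<string, string> = {
  '^v7.9.0$': '7.x',
  '^v(\\d+).(\\d+).\\d+$': '$1.$2',
};

function resolveTargetBranch(label: string): string | undefined {
  for (const [pattern, replacement] of Object.entries(branchLabelMapping)) {
    const re = new RegExp(pattern);
    if (re.test(label)) return label.replace(re, replacement);
  }
  return undefined;
}

// resolveTargetBranch('v7.9.0')  -> '7.x'  (the next minor, after this bump)
// resolveTargetBranch('v7.10.2') -> '7.10'
```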
Binary file added docs/apm/images/apm-service-map-anomaly.png
Binary file added docs/apm/images/green-service.png
Binary file added docs/apm/images/red-service.png
Binary file modified docs/apm/images/service-maps.png
Binary file added docs/apm/images/yellow-service.png
19 changes: 14 additions & 5 deletions docs/apm/machine-learning.asciidoc
@@ -6,22 +6,31 @@
<titleabbrev>Integrate with machine learning</titleabbrev>
++++

The Machine Learning integration will initiate a new job predefined to calculate anomaly scores on transaction response times.
The response time graph will show the expected bounds and add an annotation when the anomaly score is 75 or above.
Jobs can be created per transaction type, and based on the average response time.
Manage jobs in the *Machine Learning jobs management*.
The Machine Learning integration initiates a new job predefined to calculate anomaly scores on APM transaction durations.
Jobs can be created per transaction type, and are based on the service's average response time.

After a machine learning job is created, results are shown in two places:

The transaction duration graph will show the expected bounds and add an annotation when the anomaly score is 75 or above.

[role="screenshot"]
image::apm/images/apm-ml-integration.png[Example view of anomaly scores on response times in the APM app]

Service maps will display a color-coded anomaly indicator based on the detected anomaly score.

[role="screenshot"]
image::apm/images/apm-ml-integration.png[Example view of anomaly scores on response times in APM app in Kibana]
image::apm/images/apm-service-map-anomaly.png[Example view of anomaly scores on service maps in the APM app]

[float]
[[create-ml-integration]]
=== Create a new machine learning job

To enable machine learning anomaly detection, first choose a service to monitor.
Then, select **Integrations** > **Enable ML anomaly detection** and click **Create job**.

That's it! After a few minutes, the job will begin calculating results;
it might take additional time for results to appear on your graph.
Jobs can be managed in *Machine Learning jobs management*.

APM-specific anomaly detection wizards are also available for certain agents.
See the machine learning {ml-docs}/ootb-ml-jobs-apm.html[APM anomaly detection configurations] for more information.
24 changes: 23 additions & 1 deletion docs/apm/service-maps.asciidoc
@@ -9,7 +9,9 @@ Please use Chrome or Firefox if available.

A service map is a real-time visual representation of the instrumented services in your application's architecture.
It shows you how these services are connected, along with high-level metrics like average transaction duration,
requests per minute, and errors per minute, that allow you to quickly assess the status of your services.
requests per minute, and errors per minute.
If enabled, service maps also integrate with machine learning, providing real-time health indicators based on anomaly detection scores.
All of these features can help you to quickly and visually assess the status and health of your services.

We currently surface two types of service maps:

@@ -52,6 +54,26 @@ Additional filters are not currently available for service maps.
[role="screenshot"]
image::apm/images/service-maps-java.png[Example view of service maps with Java highlighted in the APM app in Kibana]

[float]
[[service-map-anomaly-detection]]
=== Anomaly detection with machine learning

Machine learning jobs can be created to calculate anomaly scores on APM transaction durations within the selected service.
When these jobs are active, service maps will display a color-coded anomaly indicator based on the detected anomaly score:

[horizontal]
image:apm/images/green-service.png[APM green service]:: Max anomaly score **<=25**. Service is healthy.
image:apm/images/yellow-service.png[APM yellow service]:: Max anomaly score **26-74**. Anomalous activity detected. Service may be degraded.
image:apm/images/red-service.png[APM red service]:: Max anomaly score **>=75**. Anomalous activity detected. Service is unhealthy.
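
A minimal sketch of the thresholds listed above (hypothetical TypeScript, not the APM app's actual source):

```ts
// Hypothetical mapping of a service's max anomaly score to the
// color-coded indicator described in the list above.
type ServiceHealthColor = 'green' | 'yellow' | 'red';

function serviceHealthColor(maxAnomalyScore: number): ServiceHealthColor {
  if (maxAnomalyScore >= 75) return 'red'; // anomalous activity; unhealthy
  if (maxAnomalyScore >= 26) return 'yellow'; // anomalous activity; may be degraded
  return 'green'; // <= 25: healthy
}
```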

[role="screenshot"]
image::apm/images/apm-service-map-anomaly.png[Example view of anomaly scores on service maps in the APM app]

If an anomaly has been detected, click *view anomalies* to open the anomaly detection metric viewer in the Machine Learning app.
This time series analysis will display additional details on the severity and time of the detected anomalies.

To learn how to create a machine learning job, see <<machine-learning-integration,machine learning integration>>.

[float]
[[service-maps-legend]]
=== Legend
14 changes: 14 additions & 0 deletions docs/visualize/tsvb.asciidoc
@@ -122,3 +122,17 @@ Edit the source for the Markdown visualization.
. To insert the mustache template variable into the editor, click the variable name.
+
The http://mustache.github.io/mustache.5.html[mustache syntax] uses the Handlebars.js processor, which is an extended version of the Mustache template language.

[float]
[[tsvb-style-markdown]]
==== Style Markdown text

Style your Markdown visualization using http://lesscss.org/features/[less syntax].

. Select *Markdown*.

. Select *Panel options*.

. Enter styling rules in the *Custom CSS* section.
+
Less in TSVB does not support custom plugins or inline JavaScript.
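
For example, a hypothetical set of Less rules that could be entered in *Custom CSS* (the selector and values are illustrative only):

```less
// Hypothetical styling: Less nests rules and resolves variables at build time
@accent: #006bb4;

.markdown-body {
  color: @accent;
  h1 {
    font-size: 18px;
  }
}
```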
2 changes: 1 addition & 1 deletion package.json
@@ -210,7 +210,7 @@
"leaflet-responsive-popup": "0.6.4",
"leaflet-vega": "^0.8.6",
"leaflet.heat": "0.2.0",
"less": "^2.7.3",
"less": "npm:@elastic/less@2.7.3-kibana",
"less-loader": "5.0.0",
"lodash": "npm:@elastic/lodash@3.10.1-kibana4",
"lodash.clonedeep": "^4.5.0",
6 changes: 3 additions & 3 deletions packages/kbn-optimizer/src/worker/webpack.config.ts
@@ -137,9 +137,9 @@ export function getWebpackConfig(bundle: Bundle, worker: WorkerConfig) {
// or which have require() statements that should be ignored because the file is
already bundled with all its necessary dependencies
noParse: [
/[\///]node_modules[\///]elasticsearch-browser[\///]/,
/[\///]node_modules[\///]lodash[\///]index\.js$/,
/[\///]node_modules[\///]vega-lib[\///]build[\///]vega\.js$/,
/[\/\\]node_modules[\/\\]elasticsearch-browser[\/\\]/,
/[\/\\]node_modules[\/\\]lodash[\/\\]index\.js$/,
/[\/\\]node_modules[\/\\]vega-lib[\/\\]build[\/\\]vega\.js$/,
],

rules: [
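The change above replaces the character class `[\///]`, which matched only forward slashes, with `[\/\\]`, which also matches the backslash separators in Windows paths. A quick sanity check (hypothetical, not part of the webpack config):

```ts
// Hypothetical check: the fixed pattern matches both path-separator styles.
const lodashIndex = /[\/\\]node_modules[\/\\]lodash[\/\\]index\.js$/;

console.log(lodashIndex.test('/repo/node_modules/lodash/index.js')); // true
console.log(lodashIndex.test('C:\\repo\\node_modules\\lodash\\index.js')); // true on Windows paths
```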
26 changes: 13 additions & 13 deletions src/core/server/logging/README.md
@@ -167,7 +167,7 @@ logging:
- context: plugins
appenders: [custom]
level: warn
- context: plugins.pid
- context: plugins.myPlugin
level: info
- context: server
level: fatal
@@ -180,14 +180,14 @@
Here is what we get with the config above:
| Context | Appenders | Level |
| ------------- |:------------------------:| -----:|
| root | console, file | error |
| plugins | custom | warn |
| plugins.pid | custom | info |
| server | console, file | fatal |
| optimize | console | error |
| telemetry | json-file-appender | all |
| Context | Appenders | Level |
| ---------------- |:------------------------:| -----:|
| root | console, file | error |
| plugins | custom | warn |
| plugins.myPlugin | custom | info |
| server | console, file | fatal |
| optimize | console | error |
| telemetry | json-file-appender | all |
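
A toy sketch of the inheritance rule behind this table, assuming a most-specific-ancestor-wins lookup (hypothetical, not Kibana's implementation):

```ts
// Hypothetical resolver: walk dot-separated contexts toward the root,
// returning the first configured level found.
const levels = new Map<string, string>([
  ['root', 'error'],
  ['plugins', 'warn'],
  ['plugins.myPlugin', 'info'],
]);

function effectiveLevel(context: string): string {
  for (let c = context; c !== ''; c = c.slice(0, Math.max(c.lastIndexOf('.'), 0))) {
    const level = levels.get(c);
    if (level !== undefined) return level;
  }
  return levels.get('root')!; // nothing more specific: fall back to root
}

// effectiveLevel('plugins.myPlugin.sub') -> 'info'  (inherited from its parent)
// effectiveLevel('plugins.otherPlugin')  -> 'warn'  (inherited from 'plugins')
// effectiveLevel('metrics')              -> 'error' (falls back to 'root')
```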
The `root` logger has a dedicated configuration node since this context is special and should always exist. By
@@ -259,7 +259,7 @@ define a custom one.
```yaml
logging:
loggers:
- context: your-plugin
- context: plugins.myPlugin
appenders: [console]
```
Logs to a *file* if a file path is given. You should define a custom appender with `kind: file`
@@ -273,7 +273,7 @@ logging:
layout:
kind: pattern
loggers:
- context: your-plugin
- context: plugins.myPlugin
appenders: [file]
```
#### logging.json
@@ -282,10 +282,10 @@ the output format with [layouts](#layouts).

#### logging.quiet
Suppresses all logging output other than error messages. With the new logging config, this can be achieved
by adjusting the minimum required [logging level](#log-level)
by adjusting the minimum required [logging level](#log-level).
```yaml
loggers:
- context: my-plugin
- context: plugins.myPlugin
appenders: [console]
level: error
# or for all output
59 changes: 53 additions & 6 deletions src/core/server/saved_objects/service/lib/repository.test.js
@@ -23,6 +23,7 @@ import { SavedObjectsErrorHelpers } from './errors';
import { SavedObjectsSerializer } from '../../serialization';
import { encodeHitVersion } from '../../version';
import { SavedObjectTypeRegistry } from '../../saved_objects_type_registry';
import { DocumentMigrator } from '../../migrations/core/document_migrator';

jest.mock('./search_dsl/search_dsl', () => ({ getSearchDsl: jest.fn() }));

@@ -115,6 +116,7 @@ describe('SavedObjectsRepository', () => {
const createType = type => ({
name: type,
mappings: { properties: mappings.properties[type].properties },
migrations: { '1.1.1': doc => doc },
});

const registry = new SavedObjectTypeRegistry();
@@ -144,6 +146,13 @@ describe('SavedObjectsRepository', () => {
namespaceType: 'agnostic',
});

const documentMigrator = new DocumentMigrator({
typeRegistry: registry,
kibanaVersion: '2.0.0',
log: {},
validateDoc: jest.fn(),
});

const getMockGetResponse = ({ type, id, references, namespace }) => ({
// NOTE: Elasticsearch returns more fields (_index, _type) but the SavedObjectsRepository method ignores these
found: true,
@@ -207,7 +216,7 @@ describe('SavedObjectsRepository', () => {
beforeEach(() => {
callAdminCluster = jest.fn();
migrator = {
migrateDocument: jest.fn(doc => doc),
migrateDocument: jest.fn().mockImplementation(documentMigrator.migrate),
runMigrations: async () => ({ status: 'skipped' }),
};

@@ -424,9 +433,17 @@

const getMockBulkCreateResponse = (objects, namespace) => {
return {
items: objects.map(({ type, id }) => ({
items: objects.map(({ type, id, attributes, references, migrationVersion }) => ({
create: {
_id: `${namespace ? `${namespace}:` : ''}${type}:${id}`,
_source: {
[type]: attributes,
type,
namespace,
references,
...mockTimestampFields,
migrationVersion: migrationVersion || { [type]: '1.1.1' },
},
...mockVersionProps,
},
})),
@@ -474,7 +491,7 @@

const expectSuccessResult = obj => ({
...obj,
migrationVersion: undefined,
migrationVersion: { [obj.type]: '1.1.1' },
version: mockVersion,
...mockTimestampFields,
});
@@ -619,13 +636,16 @@
};

const bulkCreateError = async (obj, esError, expectedError) => {
const objects = [obj1, obj, obj2];
const response = getMockBulkCreateResponse(objects);
let response;
if (esError) {
response = getMockBulkCreateResponse([obj1, obj, obj2]);
response.items[1].create = { error: esError };
} else {
response = getMockBulkCreateResponse([obj1, obj2]);
}
callAdminCluster.mockResolvedValue(response); // this._writeToCluster('bulk', ...)

const objects = [obj1, obj, obj2];
const result = await savedObjectsRepository.bulkCreate(objects);
expectClusterCalls('bulk');
const objCall = esError ? expectObjArgs(obj) : [];
@@ -781,14 +801,40 @@
id: 'three',
};
const objects = [obj1, obj, obj2];
const response = getMockBulkCreateResponse(objects);
const response = getMockBulkCreateResponse([obj1, obj2]);
callAdminCluster.mockResolvedValue(response); // this._writeToCluster('bulk', ...)
const result = await savedObjectsRepository.bulkCreate(objects);
expect(callAdminCluster).toHaveBeenCalledTimes(1);
expect(result).toEqual({
saved_objects: [expectSuccessResult(obj1), expectError(obj), expectSuccessResult(obj2)],
});
});

it(`a deserialized saved object`, async () => {
// Test for fix to https://github.com/elastic/kibana/issues/65088 where
// we returned raw IDs when an object without an id was created.
const namespace = 'myspace';
const response = getMockBulkCreateResponse([obj1, obj2], namespace);
callAdminCluster.mockResolvedValueOnce(response); // this._writeToCluster('bulk', ...)

// Bulk create one object with id unspecified, and one with id specified
const result = await savedObjectsRepository.bulkCreate([{ ...obj1, id: undefined }, obj2], {
namespace,
});

// Assert that both raw docs from the ES response are deserialized
expect(serializer.rawToSavedObject).toHaveBeenNthCalledWith(1, {
...response.items[0].create,
_id: expect.stringMatching(/^myspace:config:[0-9a-f]{8}-([0-9a-f]{4}-){3}[0-9a-f]{12}$/),
});
expect(serializer.rawToSavedObject).toHaveBeenNthCalledWith(2, response.items[1].create);

// Assert that IDs are deserialized to remove the type and namespace
expect(result.saved_objects[0].id).toEqual(
expect.stringMatching(/^[0-9a-f]{8}-([0-9a-f]{4}-){3}[0-9a-f]{12}$/)
);
expect(result.saved_objects[1].id).toEqual(obj2.id);
});
});
});

@@ -1604,6 +1650,7 @@
version: mockVersion,
attributes,
references,
migrationVersion: { [type]: '1.1.1' },
});
});
});
43 changes: 18 additions & 25 deletions src/core/server/saved_objects/service/lib/repository.ts
@@ -18,6 +18,7 @@
*/

import { omit } from 'lodash';
import uuid from 'uuid';
import { retryCallCluster } from '../../../elasticsearch/retry_call_cluster';
import { APICaller } from '../../../elasticsearch/';

@@ -299,6 +300,8 @@ export class SavedObjectsRepository {
const requiresNamespacesCheck =
method === 'index' && this._registry.isMultiNamespace(object.type);

if (object.id == null) object.id = uuid.v1();

return {
tag: 'Right' as 'Right',
value: {
@@ -404,35 +407,25 @@
}

const { requestedId, rawMigratedDoc, esRequestIndex } = expectedResult.value;
const response = bulkResponse.items[esRequestIndex];
const {
error,
_id: responseId,
_seq_no: seqNo,
_primary_term: primaryTerm,
} = Object.values(response)[0] as any;

const {
_source: { type, [type]: attributes, references = [], namespaces },
} = rawMigratedDoc;

const id = requestedId || responseId;
const { error, ...rawResponse } = Object.values(
bulkResponse.items[esRequestIndex]
)[0] as any;

if (error) {
return {
id,
type,
error: getBulkOperationError(error, type, id),
id: requestedId,
type: rawMigratedDoc._source.type,
error: getBulkOperationError(error, rawMigratedDoc._source.type, requestedId),
};
}
return {
id,
type,
...(namespaces && { namespaces }),
updated_at: time,
version: encodeVersion(seqNo, primaryTerm),
attributes,
references,
};

// When method == 'index' the bulkResponse doesn't include the indexed
// _source so we return rawMigratedDoc but have to spread the latest
// _seq_no and _primary_term values from the rawResponse.
return this._serializer.rawToSavedObject({
...rawMigratedDoc,
...{ _seq_no: rawResponse._seq_no, _primary_term: rawResponse._primary_term },
});
}),
};
}
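The new return path hands the raw migrated document back through `rawToSavedObject`, so callers receive a fully deserialized saved object even when the ID was generated server-side via the `uuid.v1()` fallback added above. A hypothetical sketch of the ID trimming the tests assert, assuming the raw `_id` format `namespace:type:id` used in the mocks:

```ts
// Hypothetical sketch (not the SavedObjectsSerializer itself): strip the
// `namespace:type:` prefix from a raw Elasticsearch _id to recover the id.
function trimIdPrefix(rawId: string, type: string, namespace?: string): string {
  const prefix = namespace ? `${namespace}:${type}:` : `${type}:`;
  return rawId.startsWith(prefix) ? rawId.slice(prefix.length) : rawId;
}

// trimIdPrefix('myspace:config:6c1d9a10-9347-11ea-a466-f1d9f3e4d121', 'config', 'myspace')
//   -> '6c1d9a10-9347-11ea-a466-f1d9f3e4d121' (what the test regexes expect)
```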