diff --git a/output/openapi/elasticsearch-openapi.json b/output/openapi/elasticsearch-openapi.json index 348471e7f5..3303ee8d9d 100644 --- a/output/openapi/elasticsearch-openapi.json +++ b/output/openapi/elasticsearch-openapi.json @@ -33105,6 +33105,260 @@ "x-state": "Added in 0.0.0" } }, + "/_snapshot/{repository}/_analyze": { + "post": { + "tags": [ + "snapshot" + ], + "summary": "Analyze a snapshot repository", + "description": "Analyze the performance characteristics and any incorrect behaviour found in a repository.\n\nThe response exposes implementation details of the analysis which may change from version to version.\nThe response body format is therefore not considered stable and may be different in newer versions.\n\nThere are a large number of third-party storage systems available, not all of which are suitable for use as a snapshot repository by Elasticsearch.\nSome storage systems behave incorrectly, or perform poorly, especially when accessed concurrently by multiple clients as the nodes of an Elasticsearch cluster do. This API performs a collection of read and write operations on your repository which are designed to detect incorrect behaviour and to measure the performance characteristics of your storage system.\n\nThe default values for the parameters are deliberately low to reduce the impact of running an analysis inadvertently and to provide a sensible starting point for your investigations.\nRun your first analysis with the default parameter values to check for simple problems.\nIf successful, run a sequence of increasingly large analyses until you encounter a failure or you reach a `blob_count` of at least `2000`, a `max_blob_size` of at least `2gb`, a `max_total_data_size` of at least `1tb`, and a `register_operation_count` of at least `100`.\nAlways specify a generous timeout, possibly `1h` or longer, to allow time for each analysis to run to completion.\nPerform the analyses using a multi-node cluster of a similar size to your production cluster so that it can detect any problems that only arise when the repository is accessed by many nodes at once.\n\nIf the analysis fails, Elasticsearch detected that your repository behaved unexpectedly.\nThis usually means you are using a third-party storage system with an incorrect or incompatible implementation of the API it claims to support.\nIf so, this storage system is not suitable for use as a snapshot repository.\nYou will need to work with the supplier of your storage system to address the incompatibilities that Elasticsearch detects.\n\nIf the analysis is successful, the API returns details of the testing process, optionally including how long each operation took.\nYou can use this information to determine the performance of your storage system.\nIf any operation fails or returns an incorrect result, the API returns an error.\nIf the API returns an error, it may not have removed all the data it wrote to the repository.\nThe error will indicate the location of any leftover data and this path is also recorded in the Elasticsearch logs.\nYou should verify that this location has been cleaned up correctly.\nIf there is still leftover data at the specified location, you should manually remove it.\n\nIf the connection from your client to Elasticsearch is closed while the client is waiting for the result of the analysis, the test is cancelled.\nSome clients are configured to close their connection if no response is received within a certain timeout.\nAn analysis takes a long time to complete so you might need to relax any such 
client-side timeouts.\nOn cancellation the analysis attempts to clean up the data it was writing, but it may not be able to remove it all.\nThe path to the leftover data is recorded in the Elasticsearch logs.\nYou should verify that this location has been cleaned up correctly.\nIf there is still leftover data at the specified location, you should manually remove it.\n\nIf the analysis is successful then it detected no incorrect behaviour, but this does not mean that correct behaviour is guaranteed.\nThe analysis attempts to detect common bugs but it does not offer 100% coverage.\nAdditionally, it does not test the following:\n\n* Your repository must perform durable writes. Once a blob has been written it must remain in place until it is deleted, even after a power loss or similar disaster.\n* Your repository must not suffer from silent data corruption. Once a blob has been written, its contents must remain unchanged until it is deliberately modified or deleted.\n* Your repository must behave correctly even if connectivity from the cluster is disrupted. Reads and writes may fail in this case, but they must not return incorrect results.\n\nIMPORTANT: An analysis writes a substantial amount of data to your repository and then reads it back again.\nThis consumes bandwidth on the network between the cluster and the repository, and storage space and I/O bandwidth on the repository itself.\nYou must ensure this load does not affect other users of these systems.\nAnalyses respect the repository settings `max_snapshot_bytes_per_sec` and `max_restore_bytes_per_sec` if available and the cluster setting `indices.recovery.max_bytes_per_sec` which you can use to limit the bandwidth they consume.\n\nNOTE: This API is intended for exploratory use by humans. You should expect the request parameters and the response format to vary in future versions.\n\nNOTE: Different versions of Elasticsearch may perform different checks for repository compatibility, with newer versions typically being stricter than older ones.\nA storage system that passes repository analysis with one version of Elasticsearch may fail with a different version.\nThis indicates it behaves incorrectly in ways that the former version did not detect.\nYou must work with the supplier of your storage system to address the incompatibilities detected by the repository analysis API in any version of Elasticsearch.\n\nNOTE: This API may not work correctly in a mixed-version cluster.\n\n*Implementation details*\n\nNOTE: This section of documentation describes how the repository analysis API works in this version of Elasticsearch, but you should expect the implementation to vary between versions. 
The request parameters and response format depend on details of the implementation so may also be different in newer versions.\n\nThe analysis comprises a number of blob-level tasks, as set by the `blob_count` parameter, and a number of compare-and-exchange operations on linearizable registers, as set by the `register_operation_count` parameter.\nThese tasks are distributed over the data and master-eligible nodes in the cluster for execution.\n\nFor most blob-level tasks, the executing node first writes a blob to the repository and then instructs some of the other nodes in the cluster to attempt to read the data it just wrote.\nThe size of the blob is chosen randomly, according to the `max_blob_size` and `max_total_data_size` parameters.\nIf any of these reads fails then the repository does not implement the necessary read-after-write semantics that Elasticsearch requires.\n\nFor some blob-level tasks, the executing node will instruct some of its peers to attempt to read the data before the writing process completes.\nThese reads are permitted to fail, but must not return partial data.\nIf any read returns partial data then the repository does not implement the necessary atomicity semantics that Elasticsearch requires.\n\nFor some blob-level tasks, the executing node will overwrite the blob while its peers are reading it.\nIn this case the data read may come from either the original or the overwritten blob, but the read operation must not return partial data or a mix of data from the two blobs.\nIf any of these reads returns partial data or a mix of the two blobs then the repository does not implement the necessary atomicity semantics that Elasticsearch requires for overwrites.\n\nThe executing node will use a variety of different methods to write the blob.\nFor instance, where applicable, it will use both single-part and multi-part uploads.\nSimilarly, the reading nodes will use a variety of different methods to read the data back again.\nFor instance, they may read the entire blob from start to end or may read only a subset of the data.\n\nFor some blob-level tasks, the executing node will cancel the write before it is complete.\nIn this case, it still instructs some of the other nodes in the cluster to attempt to read the blob but all of these reads must fail to find the blob.\n\nLinearizable registers are special blobs that Elasticsearch manipulates using an atomic compare-and-exchange operation.\nThis operation ensures correct and strongly-consistent behaviour even when the blob is accessed by multiple nodes at the same time.\nThe detailed implementation of the compare-and-exchange operation on linearizable registers varies by repository type.\nRepository analysis verifies that uncontended compare-and-exchange operations on a linearizable register blob always succeed.\nRepository analysis also verifies that contended operations either succeed or report the contention but do not return incorrect results.\nIf an operation fails due to contention, Elasticsearch retries the operation until it succeeds.\nMost of the compare-and-exchange operations performed by repository analysis atomically increment a counter which is represented as an 8-byte blob.\nSome operations also verify the behaviour on small blobs with sizes other than 8 bytes.", + "operationId": "snapshot-repository-analyze", + "parameters": [ + { + "in": "path", + "name": "repository", + "description": "The name of the repository.", + "required": true, + "deprecated": false, + "schema": { + "$ref":
"#/components/schemas/_types:Name" + }, + "style": "simple" + }, + { + "in": "query", + "name": "blob_count", + "description": "The total number of blobs to write to the repository during the test.\nFor realistic experiments, you should set it to at least `2000`.", + "deprecated": false, + "schema": { + "type": "number" + }, + "style": "form" + }, + { + "in": "query", + "name": "concurrency", + "description": "The number of operations to run concurrently during the test.", + "deprecated": false, + "schema": { + "type": "number" + }, + "style": "form" + }, + { + "in": "query", + "name": "detailed", + "description": "Indicates whether to return detailed results, including timing information for every operation performed during the analysis.\nIf false, it returns only a summary of the analysis.", + "deprecated": false, + "schema": { + "type": "boolean" + }, + "style": "form" + }, + { + "in": "query", + "name": "early_read_node_count", + "description": "The number of nodes on which to perform an early read operation while writing each blob.\nEarly read operations are only rarely performed.", + "deprecated": false, + "schema": { + "type": "number" + }, + "style": "form" + }, + { + "in": "query", + "name": "max_blob_size", + "description": "The maximum size of a blob to be written during the test.\nFor realistic experiments, you should set it to at least `2gb`.", + "deprecated": false, + "schema": { + "$ref": "#/components/schemas/_types:ByteSize" + }, + "style": "form" + }, + { + "in": "query", + "name": "max_total_data_size", + "description": "An upper limit on the total size of all the blobs written during the test.\nFor realistic experiments, you should set it to at least `1tb`.", + "deprecated": false, + "schema": { + "$ref": "#/components/schemas/_types:ByteSize" + }, + "style": "form" + }, + { + "in": "query", + "name": "rare_action_probability", + "description": "The probability of performing a rare action such as an early read, an overwrite, or an aborted write on each blob.", + "deprecated": false, + "schema": { + "type": "number" + }, + "style": "form" + }, + { + "in": "query", + "name": "rarely_abort_writes", + "description": "Indicates whether to rarely cancel writes before they complete.", + "deprecated": false, + "schema": { + "type": "boolean" + }, + "style": "form" + }, + { + "in": "query", + "name": "read_node_count", + "description": "The number of nodes on which to read a blob after writing.", + "deprecated": false, + "schema": { + "type": "number" + }, + "style": "form" + }, + { + "in": "query", + "name": "register_operation_count", + "description": "The minimum number of linearizable register operations to perform in total.\nFor realistic experiments, you should set it to at least `100`.", + "deprecated": false, + "schema": { + "type": "number" + }, + "style": "form" + }, + { + "in": "query", + "name": "seed", + "description": "The seed for the pseudo-random number generator used to generate the list of operations performed during the test.\nTo repeat the same set of operations in multiple experiments, use the same seed in each experiment.\nNote that the operations are performed concurrently so might not always happen in the same order on each run.", + "deprecated": false, + "schema": { + "type": "number" + }, + "style": "form" + }, + { + "in": "query", + "name": "timeout", + "description": "The period of time to wait for the test to complete.\nIf no response is received before the timeout expires, the test is cancelled and returns an error.", + "deprecated": false, + 
"schema": { + "$ref": "#/components/schemas/_types:Duration" + }, + "style": "form" + } + ], + "responses": { + "200": { + "description": "", + "content": { + "application/json": { + "schema": { + "type": "object", + "properties": { + "blob_count": { + "description": "The number of blobs written to the repository during the test.", + "type": "number" + }, + "blob_path": { + "description": "The path in the repository under which all the blobs were written during the test.", + "type": "string" + }, + "concurrency": { + "description": "The number of write operations performed concurrently during the test.", + "type": "number" + }, + "coordinating_node": { + "$ref": "#/components/schemas/snapshot.repository_analyze:NodeInfo" + }, + "delete_elapsed": { + "$ref": "#/components/schemas/_types:Duration" + }, + "delete_elapsed_nanos": { + "$ref": "#/components/schemas/_types:DurationValueUnitNanos" + }, + "details": { + "$ref": "#/components/schemas/snapshot.repository_analyze:DetailsInfo" + }, + "early_read_node_count": { + "description": "The limit on the number of nodes on which early read operations were performed after writing each blob.", + "type": "number" + }, + "issues_detected": { + "description": "A list of correctness issues detected, which is empty if the API succeeded.\nIt is included to emphasize that a successful response does not guarantee correct behaviour in future.", + "type": "array", + "items": { + "type": "string" + } + }, + "listing_elapsed": { + "$ref": "#/components/schemas/_types:Duration" + }, + "listing_elapsed_nanos": { + "$ref": "#/components/schemas/_types:DurationValueUnitNanos" + }, + "max_blob_size": { + "$ref": "#/components/schemas/_types:ByteSize" + }, + "max_blob_size_bytes": { + "description": "The limit, in bytes, on the size of a blob written during the test.", + "type": "number" + }, + "max_total_data_size": { + "$ref": "#/components/schemas/_types:ByteSize" + }, + "max_total_data_size_bytes": { + "description": "The limit, in bytes, on the total size of all blob written during the test.", + "type": "number" + }, + "rare_action_probability": { + "description": "The probability of performing rare actions during the test.", + "type": "number" + }, + "read_node_count": { + "description": "The limit on the number of nodes on which read operations were performed after writing each blob.", + "type": "number" + }, + "repository": { + "description": "The name of the repository that was the subject of the analysis.", + "type": "string" + }, + "seed": { + "description": "The seed for the pseudo-random number generator used to generate the operations used during the test.", + "type": "number" + }, + "summary": { + "$ref": "#/components/schemas/snapshot.repository_analyze:SummaryInfo" + } + }, + "required": [ + "blob_count", + "blob_path", + "concurrency", + "coordinating_node", + "delete_elapsed", + "delete_elapsed_nanos", + "details", + "early_read_node_count", + "issues_detected", + "listing_elapsed", + "listing_elapsed_nanos", + "max_blob_size", + "max_blob_size_bytes", + "max_total_data_size", + "max_total_data_size_bytes", + "rare_action_probability", + "read_node_count", + "repository", + "seed", + "summary" + ] + } + } + } + } + }, + "x-state": "Added in 7.12.0" + } + }, "/_snapshot/{repository}/_verify_integrity": { "post": { "tags": [ @@ -86686,6 +86940,247 @@ "repository" ] }, + "snapshot.repository_analyze:NodeInfo": { + "type": "object", + "properties": { + "id": { + "$ref": "#/components/schemas/_types:Id" + }, + "name": { + "$ref": 
"#/components/schemas/_types:Name" + } + }, + "required": [ + "id", + "name" + ] + }, + "snapshot.repository_analyze:DetailsInfo": { + "type": "object", + "properties": { + "blob": { + "$ref": "#/components/schemas/snapshot.repository_analyze:BlobDetails" + }, + "overwrite_elapsed": { + "$ref": "#/components/schemas/_types:Duration" + }, + "overwrite_elapsed_nanos": { + "$ref": "#/components/schemas/_types:DurationValueUnitNanos" + }, + "write_elapsed": { + "$ref": "#/components/schemas/_types:Duration" + }, + "write_elapsed_nanos": { + "$ref": "#/components/schemas/_types:DurationValueUnitNanos" + }, + "write_throttled": { + "$ref": "#/components/schemas/_types:Duration" + }, + "write_throttled_nanos": { + "$ref": "#/components/schemas/_types:DurationValueUnitNanos" + }, + "writer_node": { + "$ref": "#/components/schemas/snapshot.repository_analyze:NodeInfo" + } + }, + "required": [ + "blob", + "write_elapsed", + "write_elapsed_nanos", + "write_throttled", + "write_throttled_nanos", + "writer_node" + ] + }, + "snapshot.repository_analyze:BlobDetails": { + "type": "object", + "properties": { + "name": { + "description": "The name of the blob.", + "type": "string" + }, + "overwritten": { + "description": "Indicates whether the blob was overwritten while the read operations were ongoing.\n /**", + "type": "boolean" + }, + "read_early": { + "type": "boolean" + }, + "read_end": { + "description": "The position, in bytes, at which read operations completed.", + "type": "number" + }, + "read_start": { + "description": "The position, in bytes, at which read operations started.", + "type": "number" + }, + "reads": { + "$ref": "#/components/schemas/snapshot.repository_analyze:ReadBlobDetails" + }, + "size": { + "$ref": "#/components/schemas/_types:ByteSize" + }, + "size_bytes": { + "description": "The size of the blob in bytes.", + "type": "number" + } + }, + "required": [ + "name", + "overwritten", + "read_early", + "read_end", + "read_start", + "reads", + "size", + "size_bytes" + ] + }, + "snapshot.repository_analyze:ReadBlobDetails": { + "type": "object", + "properties": { + "before_write_complete": { + "description": "Indicates whether the read operation may have started before the write operation was complete.", + "type": "boolean" + }, + "elapsed": { + "$ref": "#/components/schemas/_types:Duration" + }, + "elapsed_nanos": { + "$ref": "#/components/schemas/_types:DurationValueUnitNanos" + }, + "first_byte_time": { + "$ref": "#/components/schemas/_types:Duration" + }, + "first_byte_time_nanos": { + "$ref": "#/components/schemas/_types:DurationValueUnitNanos" + }, + "found": { + "description": "Indicates whether the blob was found by the read operation.\nIf the read was started before the write completed or the write was ended before completion, it might be false.", + "type": "boolean" + }, + "node": { + "$ref": "#/components/schemas/snapshot.repository_analyze:NodeInfo" + }, + "throttled": { + "$ref": "#/components/schemas/_types:Duration" + }, + "throttled_nanos": { + "$ref": "#/components/schemas/_types:DurationValueUnitNanos" + } + }, + "required": [ + "first_byte_time_nanos", + "found", + "node" + ] + }, + "snapshot.repository_analyze:SummaryInfo": { + "type": "object", + "properties": { + "read": { + "$ref": "#/components/schemas/snapshot.repository_analyze:ReadSummaryInfo" + }, + "write": { + "$ref": "#/components/schemas/snapshot.repository_analyze:WriteSummaryInfo" + } + }, + "required": [ + "read", + "write" + ] + }, + "snapshot.repository_analyze:ReadSummaryInfo": { + "type": 
"object", + "properties": { + "count": { + "description": "The number of read operations performed in the test.", + "type": "number" + }, + "max_wait": { + "$ref": "#/components/schemas/_types:Duration" + }, + "max_wait_nanos": { + "$ref": "#/components/schemas/_types:DurationValueUnitNanos" + }, + "total_elapsed": { + "$ref": "#/components/schemas/_types:Duration" + }, + "total_elapsed_nanos": { + "$ref": "#/components/schemas/_types:DurationValueUnitNanos" + }, + "total_size": { + "$ref": "#/components/schemas/_types:ByteSize" + }, + "total_size_bytes": { + "description": "The total size of all the blobs or partial blobs read in the test, in bytes.", + "type": "number" + }, + "total_throttled": { + "$ref": "#/components/schemas/_types:Duration" + }, + "total_throttled_nanos": { + "$ref": "#/components/schemas/_types:DurationValueUnitNanos" + }, + "total_wait": { + "$ref": "#/components/schemas/_types:Duration" + }, + "total_wait_nanos": { + "$ref": "#/components/schemas/_types:DurationValueUnitNanos" + } + }, + "required": [ + "count", + "max_wait", + "max_wait_nanos", + "total_elapsed", + "total_elapsed_nanos", + "total_size", + "total_size_bytes", + "total_throttled", + "total_throttled_nanos", + "total_wait", + "total_wait_nanos" + ] + }, + "snapshot.repository_analyze:WriteSummaryInfo": { + "type": "object", + "properties": { + "count": { + "description": "The number of write operations performed in the test.", + "type": "number" + }, + "total_elapsed": { + "$ref": "#/components/schemas/_types:Duration" + }, + "total_elapsed_nanos": { + "$ref": "#/components/schemas/_types:DurationValueUnitNanos" + }, + "total_size": { + "$ref": "#/components/schemas/_types:ByteSize" + }, + "total_size_bytes": { + "description": "The total size of all the blobs written in the test, in bytes.", + "type": "number" + }, + "total_throttled": { + "$ref": "#/components/schemas/_types:Duration" + }, + "total_throttled_nanos": { + "description": "The total time spent waiting due to the `max_snapshot_bytes_per_sec` throttle, in nanoseconds.", + "type": "number" + } + }, + "required": [ + "count", + "total_elapsed", + "total_elapsed_nanos", + "total_size", + "total_size_bytes", + "total_throttled", + "total_throttled_nanos" + ] + }, "snapshot.restore:SnapshotRestore": { "type": "object", "properties": { diff --git a/output/schema/schema.json b/output/schema/schema.json index ba9b24d03c..32f93b5b14 100644 --- a/output/schema/schema.json +++ b/output/schema/schema.json @@ -18761,16 +18761,29 @@ { "availability": { "stack": { + "since": "7.12.0", "stability": "stable", "visibility": "public" } }, - "description": "Analyzes a repository for correctness and performance", - "docUrl": "https://www.elastic.co/guide/en/elasticsearch/reference/master/modules-snapshots.html", + "description": "Analyze a snapshot repository.\nAnalyze the performance characteristics and any incorrect behaviour found in a repository.\n\nThe response exposes implementation details of the analysis which may change from version to version.\nThe response body format is therefore not considered stable and may be different in newer versions.\n\nThere are a large number of third-party storage systems available, not all of which are suitable for use as a snapshot repository by Elasticsearch.\nSome storage systems behave incorrectly, or perform poorly, especially when accessed concurrently by multiple clients as the nodes of an Elasticsearch cluster do. 
This API performs a collection of read and write operations on your repository which are designed to detect incorrect behaviour and to measure the performance characteristics of your storage system.\n\nThe default values for the parameters are deliberately low to reduce the impact of running an analysis inadvertently and to provide a sensible starting point for your investigations.\nRun your first analysis with the default parameter values to check for simple problems.\nIf successful, run a sequence of increasingly large analyses until you encounter a failure or you reach a `blob_count` of at least `2000`, a `max_blob_size` of at least `2gb`, a `max_total_data_size` of at least `1tb`, and a `register_operation_count` of at least `100`.\nAlways specify a generous timeout, possibly `1h` or longer, to allow time for each analysis to run to completion.\nPerform the analyses using a multi-node cluster of a similar size to your production cluster so that it can detect any problems that only arise when the repository is accessed by many nodes at once.\n\nIf the analysis fails, Elasticsearch detected that your repository behaved unexpectedly.\nThis usually means you are using a third-party storage system with an incorrect or incompatible implementation of the API it claims to support.\nIf so, this storage system is not suitable for use as a snapshot repository.\nYou will need to work with the supplier of your storage system to address the incompatibilities that Elasticsearch detects.\n\nIf the analysis is successful, the API returns details of the testing process, optionally including how long each operation took.\nYou can use this information to determine the performance of your storage system.\nIf any operation fails or returns an incorrect result, the API returns an error.\nIf the API returns an error, it may not have removed all the data it wrote to the repository.\nThe error will indicate the location of any leftover data and this path is also recorded in the Elasticsearch logs.\nYou should verify that this location has been cleaned up correctly.\nIf there is still leftover data at the specified location, you should manually remove it.\n\nIf the connection from your client to Elasticsearch is closed while the client is waiting for the result of the analysis, the test is cancelled.\nSome clients are configured to close their connection if no response is received within a certain timeout.\nAn analysis takes a long time to complete so you might need to relax any such client-side timeouts.\nOn cancellation the analysis attempts to clean up the data it was writing, but it may not be able to remove it all.\nThe path to the leftover data is recorded in the Elasticsearch logs.\nYou should verify that this location has been cleaned up correctly.\nIf there is still leftover data at the specified location, you should manually remove it.\n\nIf the analysis is successful then it detected no incorrect behaviour, but this does not mean that correct behaviour is guaranteed.\nThe analysis attempts to detect common bugs but it does not offer 100% coverage.\nAdditionally, it does not test the following:\n\n* Your repository must perform durable writes. Once a blob has been written it must remain in place until it is deleted, even after a power loss or similar disaster.\n* Your repository must not suffer from silent data corruption. 
Once a blob has been written, its contents must remain unchanged until it is deliberately modified or deleted.\n* Your repository must behave correctly even if connectivity from the cluster is disrupted. Reads and writes may fail in this case, but they must not return incorrect results.\n\nIMPORTANT: An analysis writes a substantial amount of data to your repository and then reads it back again.\nThis consumes bandwidth on the network between the cluster and the repository, and storage space and I/O bandwidth on the repository itself.\nYou must ensure this load does not affect other users of these systems.\nAnalyses respect the repository settings `max_snapshot_bytes_per_sec` and `max_restore_bytes_per_sec` if available and the cluster setting `indices.recovery.max_bytes_per_sec` which you can use to limit the bandwidth they consume.\n\nNOTE: This API is intended for exploratory use by humans. You should expect the request parameters and the response format to vary in future versions.\n\nNOTE: Different versions of Elasticsearch may perform different checks for repository compatibility, with newer versions typically being stricter than older ones.\nA storage system that passes repository analysis with one version of Elasticsearch may fail with a different version.\nThis indicates it behaves incorrectly in ways that the former version did not detect.\nYou must work with the supplier of your storage system to address the incompatibilities detected by the repository analysis API in any version of Elasticsearch.\n\nNOTE: This API may not work correctly in a mixed-version cluster.\n\n*Implementation details*\n\nNOTE: This section of documentation describes how the repository analysis API works in this version of Elasticsearch, but you should expect the implementation to vary between versions. 
The request parameters and response format depend on details of the implementation so may also be different in newer versions.\n\nThe analysis comprises a number of blob-level tasks, as set by the `blob_count` parameter, and a number of compare-and-exchange operations on linearizable registers, as set by the `register_operation_count` parameter.\nThese tasks are distributed over the data and master-eligible nodes in the cluster for execution.\n\nFor most blob-level tasks, the executing node first writes a blob to the repository and then instructs some of the other nodes in the cluster to attempt to read the data it just wrote.\nThe size of the blob is chosen randomly, according to the `max_blob_size` and `max_total_data_size` parameters.\nIf any of these reads fails then the repository does not implement the necessary read-after-write semantics that Elasticsearch requires.\n\nFor some blob-level tasks, the executing node will instruct some of its peers to attempt to read the data before the writing process completes.\nThese reads are permitted to fail, but must not return partial data.\nIf any read returns partial data then the repository does not implement the necessary atomicity semantics that Elasticsearch requires.\n\nFor some blob-level tasks, the executing node will overwrite the blob while its peers are reading it.\nIn this case the data read may come from either the original or the overwritten blob, but the read operation must not return partial data or a mix of data from the two blobs.\nIf any of these reads returns partial data or a mix of the two blobs then the repository does not implement the necessary atomicity semantics that Elasticsearch requires for overwrites.\n\nThe executing node will use a variety of different methods to write the blob.\nFor instance, where applicable, it will use both single-part and multi-part uploads.\nSimilarly, the reading nodes will use a variety of different methods to read the data back again.\nFor instance, they may read the entire blob from start to end or may read only a subset of the data.\n\nFor some blob-level tasks, the executing node will cancel the write before it is complete.\nIn this case, it still instructs some of the other nodes in the cluster to attempt to read the blob but all of these reads must fail to find the blob.\n\nLinearizable registers are special blobs that Elasticsearch manipulates using an atomic compare-and-exchange operation.\nThis operation ensures correct and strongly-consistent behaviour even when the blob is accessed by multiple nodes at the same time.\nThe detailed implementation of the compare-and-exchange operation on linearizable registers varies by repository type.\nRepository analysis verifies that uncontended compare-and-exchange operations on a linearizable register blob always succeed.\nRepository analysis also verifies that contended operations either succeed or report the contention but do not return incorrect results.\nIf an operation fails due to contention, Elasticsearch retries the operation until it succeeds.\nMost of the compare-and-exchange operations performed by repository analysis atomically increment a counter which is represented as an 8-byte blob.\nSome operations also verify the behaviour on small blobs with sizes other than 8 bytes.", + "docId": "analyze-repository", + "docUrl": "https://www.elastic.co/guide/en/elasticsearch/reference/{branch}/repo-analysis-api.html", "name": "snapshot.repository_analyze", - "request": null, + "privileges": { + "cluster": [ + "manage" + ] + }, +
"request": { + "name": "Request", + "namespace": "snapshot.repository_analyze" + }, "requestBodyRequired": false, - "response": null, + "response": { + "name": "Response", + "namespace": "snapshot.repository_analyze" + }, "responseMediaType": [ "application/json" ], @@ -206801,6 +206814,1203 @@ }, "specLocation": "snapshot/get_repository/SnapshotGetRepositoryResponse.ts#L23-L25" }, + { + "kind": "interface", + "name": { + "name": "BlobDetails", + "namespace": "snapshot.repository_analyze" + }, + "properties": [ + { + "description": "The name of the blob.", + "name": "name", + "required": true, + "type": { + "kind": "instance_of", + "type": { + "name": "string", + "namespace": "_builtins" + } + } + }, + { + "description": "Indicates whether the blob was overwritten while the read operations were ongoing.\n /**", + "name": "overwritten", + "required": true, + "type": { + "kind": "instance_of", + "type": { + "name": "boolean", + "namespace": "_builtins" + } + } + }, + { + "name": "read_early", + "required": true, + "type": { + "kind": "instance_of", + "type": { + "name": "boolean", + "namespace": "_builtins" + } + } + }, + { + "description": "The position, in bytes, at which read operations completed.", + "name": "read_end", + "required": true, + "type": { + "kind": "instance_of", + "type": { + "name": "long", + "namespace": "_types" + } + } + }, + { + "description": "The position, in bytes, at which read operations started.", + "name": "read_start", + "required": true, + "type": { + "kind": "instance_of", + "type": { + "name": "long", + "namespace": "_types" + } + } + }, + { + "description": "A description of every read operation performed on the blob.", + "name": "reads", + "required": true, + "type": { + "kind": "instance_of", + "type": { + "name": "ReadBlobDetails", + "namespace": "snapshot.repository_analyze" + } + } + }, + { + "description": "The size of the blob.", + "name": "size", + "required": true, + "type": { + "kind": "instance_of", + "type": { + "name": "ByteSize", + "namespace": "_types" + } + } + }, + { + "description": "The size of the blob in bytes.", + "name": "size_bytes", + "required": true, + "type": { + "kind": "instance_of", + "type": { + "name": "long", + "namespace": "_types" + } + } + } + ], + "specLocation": "snapshot/repository_analyze/SnapshotAnalyzeRepositoryResponse.ts#L250-L284" + }, + { + "kind": "interface", + "name": { + "name": "DetailsInfo", + "namespace": "snapshot.repository_analyze" + }, + "properties": [ + { + "description": "A description of the blob that was written and read.", + "name": "blob", + "required": true, + "type": { + "kind": "instance_of", + "type": { + "name": "BlobDetails", + "namespace": "snapshot.repository_analyze" + } + } + }, + { + "description": "The elapsed time spent overwriting the blob.\nIf the blob was not overwritten, this information is omitted.", + "name": "overwrite_elapsed", + "required": false, + "type": { + "kind": "instance_of", + "type": { + "name": "Duration", + "namespace": "_types" + } + } + }, + { + "description": "The elapsed time spent overwriting the blob, in nanoseconds.\nIf the blob was not overwritten, this information is omitted.", + "name": "overwrite_elapsed_nanos", + "required": false, + "type": { + "kind": "instance_of", + "generics": [ + { + "kind": "instance_of", + "type": { + "name": "UnitNanos", + "namespace": "_types" + } + } + ], + "type": { + "name": "DurationValue", + "namespace": "_types" + } + } + }, + { + "description": "The elapsed time spent writing the blob.", + "name": "write_elapsed", 
+ "required": true, + "type": { + "kind": "instance_of", + "type": { + "name": "Duration", + "namespace": "_types" + } + } + }, + { + "description": "The elapsed time spent writing the blob, in nanoseconds.", + "name": "write_elapsed_nanos", + "required": true, + "type": { + "kind": "instance_of", + "generics": [ + { + "kind": "instance_of", + "type": { + "name": "UnitNanos", + "namespace": "_types" + } + } + ], + "type": { + "name": "DurationValue", + "namespace": "_types" + } + } + }, + { + "description": "The length of time spent waiting for the `max_snapshot_bytes_per_sec` (or `indices.recovery.max_bytes_per_sec` if the recovery settings for managed services are set) throttle while writing the blob.", + "name": "write_throttled", + "required": true, + "type": { + "kind": "instance_of", + "type": { + "name": "Duration", + "namespace": "_types" + } + } + }, + { + "description": "The length of time spent waiting for the `max_snapshot_bytes_per_sec` (or `indices.recovery.max_bytes_per_sec` if the recovery settings for managed services are set) throttle while writing the blob, in nanoseconds.", + "name": "write_throttled_nanos", + "required": true, + "type": { + "kind": "instance_of", + "generics": [ + { + "kind": "instance_of", + "type": { + "name": "UnitNanos", + "namespace": "_types" + } + } + ], + "type": { + "name": "DurationValue", + "namespace": "_types" + } + } + }, + { + "description": "The node which wrote the blob and coordinated the read operations.", + "name": "writer_node", + "required": true, + "type": { + "kind": "instance_of", + "type": { + "name": "NodeInfo", + "namespace": "snapshot.repository_analyze" + } + } + } + ], + "specLocation": "snapshot/repository_analyze/SnapshotAnalyzeRepositoryResponse.ts#L286-L321" + }, + { + "kind": "interface", + "name": { + "name": "NodeInfo", + "namespace": "snapshot.repository_analyze" + }, + "properties": [ + { + "name": "id", + "required": true, + "type": { + "kind": "instance_of", + "type": { + "name": "Id", + "namespace": "_types" + } + } + }, + { + "name": "name", + "required": true, + "type": { + "kind": "instance_of", + "type": { + "name": "Name", + "namespace": "_types" + } + } + } + ], + "specLocation": "snapshot/repository_analyze/SnapshotAnalyzeRepositoryResponse.ts#L110-L113" + }, + { + "kind": "interface", + "name": { + "name": "ReadBlobDetails", + "namespace": "snapshot.repository_analyze" + }, + "properties": [ + { + "description": "Indicates whether the read operation may have started before the write operation was complete.", + "name": "before_write_complete", + "required": false, + "type": { + "kind": "instance_of", + "type": { + "name": "boolean", + "namespace": "_builtins" + } + } + }, + { + "description": "The length of time spent reading the blob.\nIf the blob was not found, this detail is omitted.", + "name": "elapsed", + "required": false, + "type": { + "kind": "instance_of", + "type": { + "name": "Duration", + "namespace": "_types" + } + } + }, + { + "description": "The length of time spent reading the blob, in nanoseconds.\nIf the blob was not found, this detail is omitted.", + "name": "elapsed_nanos", + "required": false, + "type": { + "kind": "instance_of", + "generics": [ + { + "kind": "instance_of", + "type": { + "name": "UnitNanos", + "namespace": "_types" + } + } + ], + "type": { + "name": "DurationValue", + "namespace": "_types" + } + } + }, + { + "description": "The length of time waiting for the first byte of the read operation to be received.\nIf the blob was not found, this detail is omitted.", + 
"name": "first_byte_time", + "required": false, + "type": { + "kind": "instance_of", + "type": { + "name": "Duration", + "namespace": "_types" + } + } + }, + { + "description": "The length of time waiting for the first byte of the read operation to be received, in nanoseconds.\nIf the blob was not found, this detail is omitted.", + "name": "first_byte_time_nanos", + "required": true, + "type": { + "kind": "instance_of", + "generics": [ + { + "kind": "instance_of", + "type": { + "name": "UnitNanos", + "namespace": "_types" + } + } + ], + "type": { + "name": "DurationValue", + "namespace": "_types" + } + } + }, + { + "description": "Indicates whether the blob was found by the read operation.\nIf the read was started before the write completed or the write was ended before completion, it might be false.", + "name": "found", + "required": true, + "type": { + "kind": "instance_of", + "type": { + "name": "boolean", + "namespace": "_builtins" + } + } + }, + { + "description": "The node that performed the read operation.", + "name": "node", + "required": true, + "type": { + "kind": "instance_of", + "type": { + "name": "NodeInfo", + "namespace": "snapshot.repository_analyze" + } + } + }, + { + "description": "The length of time spent waiting due to the `max_restore_bytes_per_sec` or `indices.recovery.max_bytes_per_sec` throttles during the read of the blob.\nIf the blob was not found, this detail is omitted.", + "name": "throttled", + "required": false, + "type": { + "kind": "instance_of", + "type": { + "name": "Duration", + "namespace": "_types" + } + } + }, + { + "description": "The length of time spent waiting due to the `max_restore_bytes_per_sec` or `indices.recovery.max_bytes_per_sec` throttles during the read of the blob, in nanoseconds.\nIf the blob was not found, this detail is omitted.", + "name": "throttled_nanos", + "required": false, + "type": { + "kind": "instance_of", + "generics": [ + { + "kind": "instance_of", + "type": { + "name": "UnitNanos", + "namespace": "_types" + } + } + ], + "type": { + "name": "DurationValue", + "namespace": "_types" + } + } + } + ], + "specLocation": "snapshot/repository_analyze/SnapshotAnalyzeRepositoryResponse.ts#L204-L248" + }, + { + "kind": "interface", + "name": { + "name": "ReadSummaryInfo", + "namespace": "snapshot.repository_analyze" + }, + "properties": [ + { + "description": "The number of read operations performed in the test.", + "name": "count", + "required": true, + "type": { + "kind": "instance_of", + "type": { + "name": "integer", + "namespace": "_types" + } + } + }, + { + "description": "The maximum time spent waiting for the first byte of any read request to be received.", + "name": "max_wait", + "required": true, + "type": { + "kind": "instance_of", + "type": { + "name": "Duration", + "namespace": "_types" + } + } + }, + { + "description": "The maximum time spent waiting for the first byte of any read request to be received, in nanoseconds.", + "name": "max_wait_nanos", + "required": true, + "type": { + "kind": "instance_of", + "generics": [ + { + "kind": "instance_of", + "type": { + "name": "UnitNanos", + "namespace": "_types" + } + } + ], + "type": { + "name": "DurationValue", + "namespace": "_types" + } + } + }, + { + "description": "The total elapsed time spent on reading blobs in the test.", + "name": "total_elapsed", + "required": true, + "type": { + "kind": "instance_of", + "type": { + "name": "Duration", + "namespace": "_types" + } + } + }, + { + "description": "The total elapsed time spent on reading blobs in the test, in 
nanoseconds.", + "name": "total_elapsed_nanos", + "required": true, + "type": { + "kind": "instance_of", + "generics": [ + { + "kind": "instance_of", + "type": { + "name": "UnitNanos", + "namespace": "_types" + } + } + ], + "type": { + "name": "DurationValue", + "namespace": "_types" + } + } + }, + { + "description": "The total size of all the blobs or partial blobs read in the test.", + "name": "total_size", + "required": true, + "type": { + "kind": "instance_of", + "type": { + "name": "ByteSize", + "namespace": "_types" + } + } + }, + { + "description": "The total size of all the blobs or partial blobs read in the test, in bytes.", + "name": "total_size_bytes", + "required": true, + "type": { + "kind": "instance_of", + "type": { + "name": "long", + "namespace": "_types" + } + } + }, + { + "description": "The total time spent waiting due to the `max_restore_bytes_per_sec` or `indices.recovery.max_bytes_per_sec` throttles.", + "name": "total_throttled", + "required": true, + "type": { + "kind": "instance_of", + "type": { + "name": "Duration", + "namespace": "_types" + } + } + }, + { + "description": "The total time spent waiting due to the `max_restore_bytes_per_sec` or `indices.recovery.max_bytes_per_sec` throttles, in nanoseconds.", + "name": "total_throttled_nanos", + "required": true, + "type": { + "kind": "instance_of", + "generics": [ + { + "kind": "instance_of", + "type": { + "name": "UnitNanos", + "namespace": "_types" + } + } + ], + "type": { + "name": "DurationValue", + "namespace": "_types" + } + } + }, + { + "description": "The total time spent waiting for the first byte of each read request to be received.", + "name": "total_wait", + "required": true, + "type": { + "kind": "instance_of", + "type": { + "name": "Duration", + "namespace": "_types" + } + } + }, + { + "description": "The total time spent waiting for the first byte of each read request to be received, in nanoseconds.", + "name": "total_wait_nanos", + "required": true, + "type": { + "kind": "instance_of", + "generics": [ + { + "kind": "instance_of", + "type": { + "name": "UnitNanos", + "namespace": "_types" + } + } + ], + "type": { + "name": "DurationValue", + "namespace": "_types" + } + } + } + ], + "specLocation": "snapshot/repository_analyze/SnapshotAnalyzeRepositoryResponse.ts#L115-L160" + }, + { + "kind": "request", + "attachedBehaviors": [ + "CommonQueryParameters" + ], + "body": { + "kind": "no_body" + }, + "description": "Analyze a snapshot repository.\nAnalyze the performance characteristics and any incorrect behaviour found in a repository.\n\nThe response exposes implementation details of the analysis which may change from version to version.\nThe response body format is therefore not considered stable and may be different in newer versions.\n\nThere are a large number of third-party storage systems available, not all of which are suitable for use as a snapshot repository by Elasticsearch.\nSome storage systems behave incorrectly, or perform poorly, especially when accessed concurrently by multiple clients as the nodes of an Elasticsearch cluster do. 
This API performs a collection of read and write operations on your repository which are designed to detect incorrect behaviour and to measure the performance characteristics of your storage system.\n\nThe default values for the parameters are deliberately low to reduce the impact of running an analysis inadvertently and to provide a sensible starting point for your investigations.\nRun your first analysis with the default parameter values to check for simple problems.\nIf successful, run a sequence of increasingly large analyses until you encounter a failure or you reach a `blob_count` of at least `2000`, a `max_blob_size` of at least `2gb`, a `max_total_data_size` of at least `1tb`, and a `register_operation_count` of at least `100`.\nAlways specify a generous timeout, possibly `1h` or longer, to allow time for each analysis to run to completion.\nPerform the analyses using a multi-node cluster of a similar size to your production cluster so that it can detect any problems that only arise when the repository is accessed by many nodes at once.\n\nIf the analysis fails, Elasticsearch detected that your repository behaved unexpectedly.\nThis usually means you are using a third-party storage system with an incorrect or incompatible implementation of the API it claims to support.\nIf so, this storage system is not suitable for use as a snapshot repository.\nYou will need to work with the supplier of your storage system to address the incompatibilities that Elasticsearch detects.\n\nIf the analysis is successful, the API returns details of the testing process, optionally including how long each operation took.\nYou can use this information to determine the performance of your storage system.\nIf any operation fails or returns an incorrect result, the API returns an error.\nIf the API returns an error, it may not have removed all the data it wrote to the repository.\nThe error will indicate the location of any leftover data and this path is also recorded in the Elasticsearch logs.\nYou should verify that this location has been cleaned up correctly.\nIf there is still leftover data at the specified location, you should manually remove it.\n\nIf the connection from your client to Elasticsearch is closed while the client is waiting for the result of the analysis, the test is cancelled.\nSome clients are configured to close their connection if no response is received within a certain timeout.\nAn analysis takes a long time to complete so you might need to relax any such client-side timeouts.\nOn cancellation the analysis attempts to clean up the data it was writing, but it may not be able to remove it all.\nThe path to the leftover data is recorded in the Elasticsearch logs.\nYou should verify that this location has been cleaned up correctly.\nIf there is still leftover data at the specified location, you should manually remove it.\n\nIf the analysis is successful then it detected no incorrect behaviour, but this does not mean that correct behaviour is guaranteed.\nThe analysis attempts to detect common bugs but it does not offer 100% coverage.\nAdditionally, it does not test the following:\n\n* Your repository must perform durable writes. Once a blob has been written it must remain in place until it is deleted, even after a power loss or similar disaster.\n* Your repository must not suffer from silent data corruption. 
Once a blob has been written, its contents must remain unchanged until it is deliberately modified or deleted.\n* Your repository must behave correctly even if connectivity from the cluster is disrupted. Reads and writes may fail in this case, but they must not return incorrect results.\n\nIMPORTANT: An analysis writes a substantial amount of data to your repository and then reads it back again.\nThis consumes bandwidth on the network between the cluster and the repository, and storage space and I/O bandwidth on the repository itself.\nYou must ensure this load does not affect other users of these systems.\nAnalyses respect the repository settings `max_snapshot_bytes_per_sec` and `max_restore_bytes_per_sec` if available and the cluster setting `indices.recovery.max_bytes_per_sec` which you can use to limit the bandwidth they consume.\n\nNOTE: This API is intended for exploratory use by humans. You should expect the request parameters and the response format to vary in future versions.\n\nNOTE: Different versions of Elasticsearch may perform different checks for repository compatibility, with newer versions typically being stricter than older ones.\nA storage system that passes repository analysis with one version of Elasticsearch may fail with a different version.\nThis indicates it behaves incorrectly in ways that the former version did not detect.\nYou must work with the supplier of your storage system to address the incompatibilities detected by the repository analysis API in any version of Elasticsearch.\n\nNOTE: This API may not work correctly in a mixed-version cluster.\n\n*Implementation details*\n\nNOTE: This section of documentation describes how the repository analysis API works in this version of Elasticsearch, but you should expect the implementation to vary between versions. 
The request parameters and response format depend on details of the implementation so may also be different in newer versions.\n\nThe analysis comprises a number of blob-level tasks, as set by the `blob_count` parameter, and a number of compare-and-exchange operations on linearizable registers, as set by the `register_operation_count` parameter.\nThese tasks are distributed over the data and master-eligible nodes in the cluster for execution.\n\nFor most blob-level tasks, the executing node first writes a blob to the repository and then instructs some of the other nodes in the cluster to attempt to read the data it just wrote.\nThe size of the blob is chosen randomly, according to the `max_blob_size` and `max_total_data_size` parameters.\nIf any of these reads fails then the repository does not implement the necessary read-after-write semantics that Elasticsearch requires.\n\nFor some blob-level tasks, the executing node will instruct some of its peers to attempt to read the data before the writing process completes.\nThese reads are permitted to fail, but must not return partial data.\nIf any read returns partial data then the repository does not implement the necessary atomicity semantics that Elasticsearch requires.\n\nFor some blob-level tasks, the executing node will overwrite the blob while its peers are reading it.\nIn this case the data read may come from either the original or the overwritten blob, but the read operation must not return partial data or a mix of data from the two blobs.\nIf any of these reads returns partial data or a mix of the two blobs then the repository does not implement the necessary atomicity semantics that Elasticsearch requires for overwrites.\n\nThe executing node will use a variety of different methods to write the blob.\nFor instance, where applicable, it will use both single-part and multi-part uploads.\nSimilarly, the reading nodes will use a variety of different methods to read the data back again.\nFor instance, they may read the entire blob from start to end or may read only a subset of the data.\n\nFor some blob-level tasks, the executing node will cancel the write before it is complete.\nIn this case, it still instructs some of the other nodes in the cluster to attempt to read the blob but all of these reads must fail to find the blob.\n\nLinearizable registers are special blobs that Elasticsearch manipulates using an atomic compare-and-exchange operation.\nThis operation ensures correct and strongly-consistent behaviour even when the blob is accessed by multiple nodes at the same time.\nThe detailed implementation of the compare-and-exchange operation on linearizable registers varies by repository type.\nRepository analysis verifies that uncontended compare-and-exchange operations on a linearizable register blob always succeed.\nRepository analysis also verifies that contended operations either succeed or report the contention but do not return incorrect results.\nIf an operation fails due to contention, Elasticsearch retries the operation until it succeeds.\nMost of the compare-and-exchange operations performed by repository analysis atomically increment a counter which is represented as an 8-byte blob.\nSome operations also verify the behaviour on small blobs with sizes other than 8 bytes.", + "inherits": { + "type": { + "name": "RequestBase", + "namespace": "_types" + } + }, + "name": { + "name": "Request", + "namespace": "snapshot.repository_analyze" + }, + "path": [ + { + "codegenName": "name", + "description": "The name of the
repository.", + "name": "repository", + "required": true, + "type": { + "kind": "instance_of", + "type": { + "name": "Name", + "namespace": "_types" + } + } + } + ], + "query": [ + { + "description": "The total number of blobs to write to the repository during the test.\nFor realistic experiments, you should set it to at least `2000`.", + "name": "blob_count", + "required": false, + "serverDefault": 100, + "type": { + "kind": "instance_of", + "type": { + "name": "integer", + "namespace": "_types" + } + } + }, + { + "description": "The number of operations to run concurrently during the test.", + "name": "concurrency", + "required": false, + "serverDefault": 10, + "type": { + "kind": "instance_of", + "type": { + "name": "integer", + "namespace": "_types" + } + } + }, + { + "description": "Indicates whether to return detailed results, including timing information for every operation performed during the analysis.\nIf false, it returns only a summary of the analysis.", + "name": "detailed", + "required": false, + "serverDefault": false, + "type": { + "kind": "instance_of", + "type": { + "name": "boolean", + "namespace": "_builtins" + } + } + }, + { + "description": "The number of nodes on which to perform an early read operation while writing each blob.\nEarly read operations are only rarely performed.", + "name": "early_read_node_count", + "required": false, + "serverDefault": 2, + "type": { + "kind": "instance_of", + "type": { + "name": "integer", + "namespace": "_types" + } + } + }, + { + "description": "The maximum size of a blob to be written during the test.\nFor realistic experiments, you should set it to at least `2gb`.", + "name": "max_blob_size", + "required": false, + "serverDefault": "10mb", + "type": { + "kind": "instance_of", + "type": { + "name": "ByteSize", + "namespace": "_types" + } + } + }, + { + "description": "An upper limit on the total size of all the blobs written during the test.\nFor realistic experiments, you should set it to at least `1tb`.", + "name": "max_total_data_size", + "required": false, + "serverDefault": "1gb", + "type": { + "kind": "instance_of", + "type": { + "name": "ByteSize", + "namespace": "_types" + } + } + }, + { + "description": "The probability of performing a rare action such as an early read, an overwrite, or an aborted write on each blob.", + "name": "rare_action_probability", + "required": false, + "serverDefault": 0.02, + "type": { + "kind": "instance_of", + "type": { + "name": "double", + "namespace": "_types" + } + } + }, + { + "description": "Indicates whether to rarely cancel writes before they complete.", + "name": "rarely_abort_writes", + "required": false, + "serverDefault": true, + "type": { + "kind": "instance_of", + "type": { + "name": "boolean", + "namespace": "_builtins" + } + } + }, + { + "description": "The number of nodes on which to read a blob after writing.", + "name": "read_node_count", + "required": false, + "serverDefault": 10, + "type": { + "kind": "instance_of", + "type": { + "name": "integer", + "namespace": "_types" + } + } + }, + { + "description": "The minimum number of linearizable register operations to perform in total.\nFor realistic experiments, you should set it to at least `100`.", + "name": "register_operation_count", + "required": false, + "serverDefault": 10, + "type": { + "kind": "instance_of", + "type": { + "name": "integer", + "namespace": "_types" + } + } + }, + { + "description": "The seed for the pseudo-random number generator used to generate the list of operations performed during the test.\nTo 
repeat the same set of operations in multiple experiments, use the same seed in each experiment.\nNote that the operations are performed concurrently so might not always happen in the same order on each run.", + "name": "seed", + "required": false, + "type": { + "kind": "instance_of", + "type": { + "name": "integer", + "namespace": "_types" + } + } + }, + { + "description": "The period of time to wait for the test to complete.\nIf no response is received before the timeout expires, the test is cancelled and returns an error.", + "name": "timeout", + "required": false, + "serverDefault": "30s", + "type": { + "kind": "instance_of", + "type": { + "name": "Duration", + "namespace": "_types" + } + } + } + ], + "specLocation": "snapshot/repository_analyze/SnapshotAnalyzeRepositoryRequest.ts#L25-L202" + }, + { + "kind": "response", + "body": { + "kind": "properties", + "properties": [ + { + "description": "The number of blobs written to the repository during the test.", + "name": "blob_count", + "required": true, + "type": { + "kind": "instance_of", + "type": { + "name": "integer", + "namespace": "_types" + } + } + }, + { + "description": "The path in the repository under which all the blobs were written during the test.", + "name": "blob_path", + "required": true, + "type": { + "kind": "instance_of", + "type": { + "name": "string", + "namespace": "_builtins" + } + } + }, + { + "description": "The number of write operations performed concurrently during the test.", + "name": "concurrency", + "required": true, + "type": { + "kind": "instance_of", + "type": { + "name": "integer", + "namespace": "_types" + } + } + }, + { + "description": "The node that coordinated the analysis and performed the final cleanup.", + "name": "coordinating_node", + "required": true, + "type": { + "kind": "instance_of", + "type": { + "name": "NodeInfo", + "namespace": "snapshot.repository_analyze" + } + } + }, + { + "description": "The time it took to delete all the blobs in the container.", + "name": "delete_elapsed", + "required": true, + "type": { + "kind": "instance_of", + "type": { + "name": "Duration", + "namespace": "_types" + } + } + }, + { + "description": "The time it took to delete all the blobs in the container, in nanoseconds.", + "name": "delete_elapsed_nanos", + "required": true, + "type": { + "kind": "instance_of", + "generics": [ + { + "kind": "instance_of", + "type": { + "name": "UnitNanos", + "namespace": "_types" + } + } + ], + "type": { + "name": "DurationValue", + "namespace": "_types" + } + } + }, + { + "description": "A description of every read and write operation performed during the test.", + "name": "details", + "required": true, + "type": { + "kind": "instance_of", + "type": { + "name": "DetailsInfo", + "namespace": "snapshot.repository_analyze" + } + } + }, + { + "description": "The limit on the number of nodes on which early read operations were performed after writing each blob.", + "name": "early_read_node_count", + "required": true, + "type": { + "kind": "instance_of", + "type": { + "name": "integer", + "namespace": "_types" + } + } + }, + { + "description": "A list of correctness issues detected, which is empty if the API succeeded.\nIt is included to emphasize that a successful response does not guarantee correct behaviour in future.", + "name": "issues_detected", + "required": true, + "type": { + "kind": "array_of", + "value": { + "kind": "instance_of", + "type": { + "name": "string", + "namespace": "_builtins" + } + } + } + }, + { + "description": "The time it took to retrieve a list 
of all the blobs in the container.", + "name": "listing_elapsed", + "required": true, + "type": { + "kind": "instance_of", + "type": { + "name": "Duration", + "namespace": "_types" + } + } + }, + { + "description": "The time it took to retrieve a list of all the blobs in the container, in nanoseconds.", + "name": "listing_elapsed_nanos", + "required": true, + "type": { + "kind": "instance_of", + "generics": [ + { + "kind": "instance_of", + "type": { + "name": "UnitNanos", + "namespace": "_types" + } + } + ], + "type": { + "name": "DurationValue", + "namespace": "_types" + } + } + }, + { + "description": "The limit on the size of a blob written during the test.", + "name": "max_blob_size", + "required": true, + "type": { + "kind": "instance_of", + "type": { + "name": "ByteSize", + "namespace": "_types" + } + } + }, + { + "description": "The limit, in bytes, on the size of a blob written during the test.", + "name": "max_blob_size_bytes", + "required": true, + "type": { + "kind": "instance_of", + "type": { + "name": "long", + "namespace": "_types" + } + } + }, + { + "description": "The limit on the total size of all blobs written during the test.", + "name": "max_total_data_size", + "required": true, + "type": { + "kind": "instance_of", + "type": { + "name": "ByteSize", + "namespace": "_types" + } + } + }, + { + "description": "The limit, in bytes, on the total size of all blobs written during the test.", + "name": "max_total_data_size_bytes", + "required": true, + "type": { + "kind": "instance_of", + "type": { + "name": "long", + "namespace": "_types" + } + } + }, + { + "description": "The probability of performing rare actions during the test.", + "name": "rare_action_probability", + "required": true, + "type": { + "kind": "instance_of", + "type": { + "name": "double", + "namespace": "_types" + } + } + }, + { + "description": "The limit on the number of nodes on which read operations were performed after writing each blob.", + "name": "read_node_count", + "required": true, + "type": { + "kind": "instance_of", + "type": { + "name": "integer", + "namespace": "_types" + } + } + }, + { + "description": "The name of the repository that was the subject of the analysis.", + "name": "repository", + "required": true, + "type": { + "kind": "instance_of", + "type": { + "name": "string", + "namespace": "_builtins" + } + } + }, + { + "description": "The seed for the pseudo-random number generator used to generate the operations used during the test.", + "name": "seed", + "required": true, + "type": { + "kind": "instance_of", + "type": { + "name": "long", + "namespace": "_types" + } + } + }, + { + "description": "A collection of statistics that summarize the results of the test.", + "name": "summary", + "required": true, + "type": { + "kind": "instance_of", + "type": { + "name": "SummaryInfo", + "namespace": "snapshot.repository_analyze" + } + } + } + ] + }, + "name": { + "name": "Response", + "namespace": "snapshot.repository_analyze" + }, + "specLocation": "snapshot/repository_analyze/SnapshotAnalyzeRepositoryResponse.ts#L24-L108" + }, + { + "kind": "interface", + "name": { + "name": "SummaryInfo", + "namespace": "snapshot.repository_analyze" + }, + "properties": [ + { + "description": "A collection of statistics that summarise the results of the read operations in the test.", + "name": "read", + "required": true, + "type": { + "kind": "instance_of", + "type": { + "name": "ReadSummaryInfo", + "namespace": "snapshot.repository_analyze" + } + } + }, + { + "description": "A collection of statistics that 
summarise the results of the write operations in the test.", + "name": "write", + "required": true, + "type": { + "kind": "instance_of", + "type": { + "name": "WriteSummaryInfo", + "namespace": "snapshot.repository_analyze" + } + } + } + ], + "specLocation": "snapshot/repository_analyze/SnapshotAnalyzeRepositoryResponse.ts#L193-L202" + }, + { + "kind": "interface", + "name": { + "name": "WriteSummaryInfo", + "namespace": "snapshot.repository_analyze" + }, + "properties": [ + { + "description": "The number of write operations performed in the test.", + "name": "count", + "required": true, + "type": { + "kind": "instance_of", + "type": { + "name": "integer", + "namespace": "_types" + } + } + }, + { + "description": "The total elapsed time spent on writing blobs in the test.", + "name": "total_elapsed", + "required": true, + "type": { + "kind": "instance_of", + "type": { + "name": "Duration", + "namespace": "_types" + } + } + }, + { + "description": "The total elapsed time spent on writing blobs in the test, in nanoseconds.", + "name": "total_elapsed_nanos", + "required": true, + "type": { + "kind": "instance_of", + "generics": [ + { + "kind": "instance_of", + "type": { + "name": "UnitNanos", + "namespace": "_types" + } + } + ], + "type": { + "name": "DurationValue", + "namespace": "_types" + } + } + }, + { + "description": "The total size of all the blobs written in the test.", + "name": "total_size", + "required": true, + "type": { + "kind": "instance_of", + "type": { + "name": "ByteSize", + "namespace": "_types" + } + } + }, + { + "description": "The total size of all the blobs written in the test, in bytes.", + "name": "total_size_bytes", + "required": true, + "type": { + "kind": "instance_of", + "type": { + "name": "long", + "namespace": "_types" + } + } + }, + { + "description": "The total time spent waiting due to the `max_snapshot_bytes_per_sec` throttle.", + "name": "total_throttled", + "required": true, + "type": { + "kind": "instance_of", + "type": { + "name": "Duration", + "namespace": "_types" + } + } + }, + { + "description": "The total time spent waiting due to the `max_snapshot_bytes_per_sec` throttle, in nanoseconds.", + "name": "total_throttled_nanos", + "required": true, + "type": { + "kind": "instance_of", + "type": { + "name": "long", + "namespace": "_types" + } + } + } + ], + "specLocation": "snapshot/repository_analyze/SnapshotAnalyzeRepositoryResponse.ts#L162-L191" + }, { "kind": "request", "attachedBehaviors": [ diff --git a/output/schema/validation-errors.json b/output/schema/validation-errors.json index 3b2613f018..a6a79cf731 100644 --- a/output/schema/validation-errors.json +++ b/output/schema/validation-errors.json @@ -733,7 +733,7 @@ }, "snapshot.repository_analyze": { "request": [ - "Missing request & response" + "Request: query parameter 'register_operation_count' does not exist in the json spec" ], "response": [] }, diff --git a/output/typescript/types.ts b/output/typescript/types.ts index e67511c3e7..1a76ff7bea 100644 --- a/output/typescript/types.ts +++ b/output/typescript/types.ts @@ -19559,6 +19559,113 @@ export interface SnapshotGetRepositoryRequest extends RequestBase { export type SnapshotGetRepositoryResponse = Record +export interface SnapshotRepositoryAnalyzeBlobDetails { + name: string + overwritten: boolean + read_early: boolean + read_end: long + read_start: long + reads: SnapshotRepositoryAnalyzeReadBlobDetails + size: ByteSize + size_bytes: long +} + +export interface SnapshotRepositoryAnalyzeDetailsInfo { + blob: 
SnapshotRepositoryAnalyzeBlobDetails + overwrite_elapsed?: Duration + overwrite_elapsed_nanos?: DurationValue<UnitNanos> + write_elapsed: Duration + write_elapsed_nanos: DurationValue<UnitNanos> + write_throttled: Duration + write_throttled_nanos: DurationValue<UnitNanos> + writer_node: SnapshotRepositoryAnalyzeNodeInfo +} + +export interface SnapshotRepositoryAnalyzeNodeInfo { + id: Id + name: Name +} + +export interface SnapshotRepositoryAnalyzeReadBlobDetails { + before_write_complete?: boolean + elapsed?: Duration + elapsed_nanos?: DurationValue<UnitNanos> + first_byte_time?: Duration + first_byte_time_nanos: DurationValue<UnitNanos> + found: boolean + node: SnapshotRepositoryAnalyzeNodeInfo + throttled?: Duration + throttled_nanos?: DurationValue<UnitNanos> +} + +export interface SnapshotRepositoryAnalyzeReadSummaryInfo { + count: integer + max_wait: Duration + max_wait_nanos: DurationValue<UnitNanos> + total_elapsed: Duration + total_elapsed_nanos: DurationValue<UnitNanos> + total_size: ByteSize + total_size_bytes: long + total_throttled: Duration + total_throttled_nanos: DurationValue<UnitNanos> + total_wait: Duration + total_wait_nanos: DurationValue<UnitNanos> +} + +export interface SnapshotRepositoryAnalyzeRequest extends RequestBase { + name: Name + blob_count?: integer + concurrency?: integer + detailed?: boolean + early_read_node_count?: integer + max_blob_size?: ByteSize + max_total_data_size?: ByteSize + rare_action_probability?: double + rarely_abort_writes?: boolean + read_node_count?: integer + register_operation_count?: integer + seed?: integer + timeout?: Duration +} + +export interface SnapshotRepositoryAnalyzeResponse { + blob_count: integer + blob_path: string + concurrency: integer + coordinating_node: SnapshotRepositoryAnalyzeNodeInfo + delete_elapsed: Duration + delete_elapsed_nanos: DurationValue<UnitNanos> + details: SnapshotRepositoryAnalyzeDetailsInfo + early_read_node_count: integer + issues_detected: string[] + listing_elapsed: Duration + listing_elapsed_nanos: DurationValue<UnitNanos> + max_blob_size: ByteSize + max_blob_size_bytes: long + max_total_data_size: ByteSize + max_total_data_size_bytes: long + rare_action_probability: double + read_node_count: integer + repository: string + seed: long + summary: SnapshotRepositoryAnalyzeSummaryInfo +} + +export interface SnapshotRepositoryAnalyzeSummaryInfo { + read: SnapshotRepositoryAnalyzeReadSummaryInfo + write: SnapshotRepositoryAnalyzeWriteSummaryInfo +} + +export interface SnapshotRepositoryAnalyzeWriteSummaryInfo { + count: integer + total_elapsed: Duration + total_elapsed_nanos: DurationValue<UnitNanos> + total_size: ByteSize + total_size_bytes: long + total_throttled: Duration + total_throttled_nanos: long +} + export interface SnapshotRepositoryVerifyIntegrityRequest extends RequestBase { name: Names meta_thread_pool_concurrency?: integer diff --git a/specification/_doc_ids/table.csv b/specification/_doc_ids/table.csv index af03e9641a..fdf5bca391 100644 --- a/specification/_doc_ids/table.csv +++ b/specification/_doc_ids/table.csv @@ -6,6 +6,7 @@ analysis-standard-analyzer,https://www.elastic.co/guide/en/elasticsearch/referen analysis-tokenfilters,https://www.elastic.co/guide/en/elasticsearch/reference/{branch}/analysis-tokenfilters.html analysis-tokenizers,https://www.elastic.co/guide/en/elasticsearch/reference/{branch}/analysis-tokenizers.html analysis,https://www.elastic.co/guide/en/elasticsearch/reference/{branch}/analysis.html +analyze-repository,https://www.elastic.co/guide/en/elasticsearch/reference/{branch}/repo-analysis-api.html analyzer-anatomy,https://www.elastic.co/guide/en/elasticsearch/reference/{branch}/analyzer-anatomy.html 
api-date-math-index-names,https://www.elastic.co/guide/en/elasticsearch/reference/{branch}/api-conventions.html#api-date-math-index-names append-processor,https://www.elastic.co/guide/en/elasticsearch/reference/{branch}/append-processor.html diff --git a/specification/snapshot/repository_analyze/SnapshotAnalyzeRepositoryRequest.ts b/specification/snapshot/repository_analyze/SnapshotAnalyzeRepositoryRequest.ts new file mode 100644 index 0000000000..993be44e35 --- /dev/null +++ b/specification/snapshot/repository_analyze/SnapshotAnalyzeRepositoryRequest.ts @@ -0,0 +1,202 @@ +/* + * Licensed to Elasticsearch B.V. under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch B.V. licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +import { RequestBase } from '@_types/Base' +import { ByteSize, Name } from '@_types/common' +import { double, integer } from '@_types/Numeric' +import { Duration } from '@_types/Time' + +/** + * Analyze a snapshot repository. + * Analyze the performance characteristics and any incorrect behaviour found in a repository. + * + * The response exposes implementation details of the analysis which may change from version to version. + * The response body format is therefore not considered stable and may be different in newer versions. + * + * There are a large number of third-party storage systems available, not all of which are suitable for use as a snapshot repository by Elasticsearch. + * Some storage systems behave incorrectly, or perform poorly, especially when accessed concurrently by multiple clients as the nodes of an Elasticsearch cluster do. This API performs a collection of read and write operations on your repository which are designed to detect incorrect behaviour and to measure the performance characteristics of your storage system. + * + * The default values for the parameters are deliberately low to reduce the impact of running an analysis inadvertently and to provide a sensible starting point for your investigations. + * Run your first analysis with the default parameter values to check for simple problems. + * If successful, run a sequence of increasingly large analyses until you encounter a failure or you reach a `blob_count` of at least `2000`, a `max_blob_size` of at least `2gb`, a `max_total_data_size` of at least `1tb`, and a `register_operation_count` of at least `100`. + * Always specify a generous timeout, possibly `1h` or longer, to allow time for each analysis to run to completion. + * Perform the analyses using a multi-node cluster of a similar size to your production cluster so that it can detect any problems that only arise when the repository is accessed by many nodes at once. + * + * If the analysis fails, Elasticsearch detected that your repository behaved unexpectedly. 
+ * This usually means you are using a third-party storage system with an incorrect or incompatible implementation of the API it claims to support. + * If so, this storage system is not suitable for use as a snapshot repository. + * You will need to work with the supplier of your storage system to address the incompatibilities that Elasticsearch detects. + * + * If the analysis is successful, the API returns details of the testing process, optionally including how long each operation took. + * You can use this information to determine the performance of your storage system. + * If any operation fails or returns an incorrect result, the API returns an error. + * If the API returns an error, it may not have removed all the data it wrote to the repository. + * The error will indicate the location of any leftover data and this path is also recorded in the Elasticsearch logs. + * You should verify that this location has been cleaned up correctly. + * If there is still leftover data at the specified location, you should manually remove it. + * + * If the connection from your client to Elasticsearch is closed while the client is waiting for the result of the analysis, the test is cancelled. + * Some clients are configured to close their connection if no response is received within a certain timeout. + * An analysis takes a long time to complete so you might need to relax any such client-side timeouts. + * On cancellation the analysis attempts to clean up the data it was writing, but it may not be able to remove it all. + * The path to the leftover data is recorded in the Elasticsearch logs. + * You should verify that this location has been cleaned up correctly. + * If there is still leftover data at the specified location, you should manually remove it. + * + * If the analysis is successful then it detected no incorrect behaviour, but this does not mean that correct behaviour is guaranteed. + * The analysis attempts to detect common bugs but it does not offer 100% coverage. + * Additionally, it does not test the following: + * + * * Your repository must perform durable writes. Once a blob has been written it must remain in place until it is deleted, even after a power loss or similar disaster. + * * Your repository must not suffer from silent data corruption. Once a blob has been written, its contents must remain unchanged until it is deliberately modified or deleted. + * * Your repository must behave correctly even if connectivity from the cluster is disrupted. Reads and writes may fail in this case, but they must not return incorrect results. + * + * IMPORTANT: An analysis writes a substantial amount of data to your repository and then reads it back again. + * This consumes bandwidth on the network between the cluster and the repository, and storage space and I/O bandwidth on the repository itself. + * You must ensure this load does not affect other users of these systems. + * Analyses respect the repository settings `max_snapshot_bytes_per_sec` and `max_restore_bytes_per_sec` if available and the cluster setting `indices.recovery.max_bytes_per_sec` which you can use to limit the bandwidth they consume. + * + * NOTE: This API is intended for exploratory use by humans. You should expect the request parameters and the response format to vary in future versions. + * + * NOTE: Different versions of Elasticsearch may perform different checks for repository compatibility, with newer versions typically being stricter than older ones. 
+ * A storage system that passes repository analysis with one version of Elasticsearch may fail with a different version. + * This indicates it behaves incorrectly in ways that the former version did not detect. + * You must work with the supplier of your storage system to address the incompatibilities detected by the repository analysis API in any version of Elasticsearch. + * + * NOTE: This API may not work correctly in a mixed-version cluster. + * + * *Implementation details* + * + * NOTE: This section of documentation describes how the repository analysis API works in this version of Elasticsearch, but you should expect the implementation to vary between versions. The request parameters and response format depend on details of the implementation so may also be different in newer versions. + * + * The analysis comprises a number of blob-level tasks, as set by the `blob_count` parameter, and a number of compare-and-exchange operations on linearizable registers, as set by the `register_operation_count` parameter. + * These tasks are distributed over the data and master-eligible nodes in the cluster for execution. + * + * For most blob-level tasks, the executing node first writes a blob to the repository and then instructs some of the other nodes in the cluster to attempt to read the data it just wrote. + * The size of the blob is chosen randomly, according to the `max_blob_size` and `max_total_data_size` parameters. + * If any of these reads fails then the repository does not implement the necessary read-after-write semantics that Elasticsearch requires. + * + * For some blob-level tasks, the executing node will instruct some of its peers to attempt to read the data before the writing process completes. + * These reads are permitted to fail, but must not return partial data. + * If any read returns partial data then the repository does not implement the necessary atomicity semantics that Elasticsearch requires. + * + * For some blob-level tasks, the executing node will overwrite the blob while its peers are reading it. + * In this case the data read may come from either the original or the overwritten blob, but the read operation must not return partial data or a mix of data from the two blobs. + * If any of these reads returns partial data or a mix of the two blobs then the repository does not implement the necessary atomicity semantics that Elasticsearch requires for overwrites. + * + * The executing node will use a variety of different methods to write the blob. + * For instance, where applicable, it will use both single-part and multi-part uploads. + * Similarly, the reading nodes will use a variety of different methods to read the data back again. + * For instance they may read the entire blob from start to end or may read only a subset of the data. + * + * For some blob-level tasks, the executing node will cancel the write before it is complete. + * In this case, it still instructs some of the other nodes in the cluster to attempt to read the blob but all of these reads must fail to find the blob. + * + * Linearizable registers are special blobs that Elasticsearch manipulates using an atomic compare-and-exchange operation. + * This operation ensures correct and strongly-consistent behavior even when the blob is accessed by multiple nodes at the same time. + * The detailed implementation of the compare-and-exchange operation on linearizable registers varies by repository type. 
+ * Repository analysis verifies that uncontended compare-and-exchange operations on a linearizable register blob always succeed. + * Repository analysis also verifies that contended operations either succeed or report the contention but do not return incorrect results. + * If an operation fails due to contention, Elasticsearch retries the operation until it succeeds. + * Most of the compare-and-exchange operations performed by repository analysis atomically increment a counter which is represented as an 8-byte blob. + * Some operations also verify the behavior on small blobs with sizes other than 8 bytes. + * @rest_spec_name snapshot.repository_analyze + * @availability stack since=7.12.0 stability=stable visibility=public + * @cluster_privileges manage + * @doc_id analyze-repository + */ +export interface Request extends RequestBase { + path_parts: { + /** + * The name of the repository. + * @codegen_name name + */ + repository: Name + } + query_parameters: { + /** + * The total number of blobs to write to the repository during the test. + * For realistic experiments, you should set it to at least `2000`. + * @server_default 100 + */ + blob_count?: integer + /** + * The number of operations to run concurrently during the test. + * @server_default 10 + */ + concurrency?: integer + /** + * Indicates whether to return detailed results, including timing information for every operation performed during the analysis. + * If false, it returns only a summary of the analysis. + * @server_default false + */ + detailed?: boolean + /** + * The number of nodes on which to perform an early read operation while writing each blob. + * Early read operations are only rarely performed. + * @server_default 2 + */ + early_read_node_count?: integer + /** + * The maximum size of a blob to be written during the test. + * For realistic experiments, you should set it to at least `2gb`. + * @server_default 10mb + */ + max_blob_size?: ByteSize + /** + * An upper limit on the total size of all the blobs written during the test. + * For realistic experiments, you should set it to at least `1tb`. + * @server_default 1gb + */ + max_total_data_size?: ByteSize + /** + * The probability of performing a rare action such as an early read, an overwrite, or an aborted write on each blob. + * @server_default 0.02 + */ + rare_action_probability?: double + /** + * Indicates whether to rarely cancel writes before they complete. + * @server_default true + */ + rarely_abort_writes?: boolean + /** + * The number of nodes on which to read a blob after writing. + * @server_default 10 + */ + read_node_count?: integer + /** + * The minimum number of linearizable register operations to perform in total. + * For realistic experiments, you should set it to at least `100`. + * @server_default 10 + */ + register_operation_count?: integer + /** + * The seed for the pseudo-random number generator used to generate the list of operations performed during the test. + * To repeat the same set of operations in multiple experiments, use the same seed in each experiment. + * Note that the operations are performed concurrently so might not always happen in the same order on each run. + */ + seed?: integer + /** + * The period of time to wait for the test to complete. + * If no response is received before the timeout expires, the test is cancelled and returns an error. 
+ * @server_default 30s + */ + timeout?: Duration + } +} diff --git a/specification/snapshot/repository_analyze/SnapshotAnalyzeRepositoryResponse.ts b/specification/snapshot/repository_analyze/SnapshotAnalyzeRepositoryResponse.ts new file mode 100644 index 0000000000..fe0cd07231 --- /dev/null +++ b/specification/snapshot/repository_analyze/SnapshotAnalyzeRepositoryResponse.ts @@ -0,0 +1,321 @@ +/* + * Licensed to Elasticsearch B.V. under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch B.V. licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +import { ByteSize, Id, Name } from '@_types/common' +import { double, integer, long } from '@_types/Numeric' +import { Duration, DurationValue, UnitNanos } from '@_types/Time' + +export class Response { + body: { + /** + * The number of blobs written to the repository during the test. + */ + blob_count: integer + /** + * The path in the repository under which all the blobs were written during the test. + */ + blob_path: string + /** + * The number of write operations performed concurrently during the test. + */ + concurrency: integer + /** + * The node that coordinated the analysis and performed the final cleanup. + */ + coordinating_node: NodeInfo + /** + * The time it took to delete all the blobs in the container. + */ + delete_elapsed: Duration + /** + * The time it took to delete all the blobs in the container, in nanoseconds. + */ + delete_elapsed_nanos: DurationValue<UnitNanos> + /** + * A description of every read and write operation performed during the test. + */ + details: DetailsInfo + /** + * The limit on the number of nodes on which early read operations were performed after writing each blob. + */ + early_read_node_count: integer + /** + * A list of correctness issues detected, which is empty if the API succeeded. + * It is included to emphasize that a successful response does not guarantee correct behaviour in future. + */ + issues_detected: Array<string> + /** + * The time it took to retrieve a list of all the blobs in the container. + */ + listing_elapsed: Duration + /** + * The time it took to retrieve a list of all the blobs in the container, in nanoseconds. + */ + listing_elapsed_nanos: DurationValue<UnitNanos> + /** + * The limit on the size of a blob written during the test. + */ + max_blob_size: ByteSize + /** + * The limit, in bytes, on the size of a blob written during the test. + */ + max_blob_size_bytes: long + /** + * The limit on the total size of all blobs written during the test. + */ + max_total_data_size: ByteSize + /** + * The limit, in bytes, on the total size of all blobs written during the test. + */ + max_total_data_size_bytes: long + /** + * The probability of performing rare actions during the test. + */ + rare_action_probability: double + /** + * The limit on the number of nodes on which read operations were performed after writing each blob. 
+ */ + read_node_count: integer + /** + * The name of the repository that was the subject of the analysis. + */ + repository: string + /** + * The seed for the pseudo-random number generator used to generate the operations used during the test. + */ + seed: long + /** + * A collection of statistics that summarize the results of the test. + */ + summary: SummaryInfo + } +} + +export class NodeInfo { + id: Id + name: Name +} + +export class ReadSummaryInfo { + /** + * The number of read operations performed in the test. + */ + count: integer + /** + * The maximum time spent waiting for the first byte of any read request to be received. + */ + max_wait: Duration + /** + * The maximum time spent waiting for the first byte of any read request to be received, in nanoseconds. + */ + max_wait_nanos: DurationValue<UnitNanos> + /** + * The total elapsed time spent on reading blobs in the test. + */ + total_elapsed: Duration + /** + * The total elapsed time spent on reading blobs in the test, in nanoseconds. + */ + total_elapsed_nanos: DurationValue<UnitNanos> + /** + * The total size of all the blobs or partial blobs read in the test. + */ + total_size: ByteSize + /** + * The total size of all the blobs or partial blobs read in the test, in bytes. + */ + total_size_bytes: long + /** + * The total time spent waiting due to the `max_restore_bytes_per_sec` or `indices.recovery.max_bytes_per_sec` throttles. + */ + total_throttled: Duration + /** + * The total time spent waiting due to the `max_restore_bytes_per_sec` or `indices.recovery.max_bytes_per_sec` throttles, in nanoseconds. + */ + total_throttled_nanos: DurationValue<UnitNanos> + /** + * The total time spent waiting for the first byte of each read request to be received. + */ + total_wait: Duration + /** + * The total time spent waiting for the first byte of each read request to be received, in nanoseconds. + */ + total_wait_nanos: DurationValue<UnitNanos> +} + +export class WriteSummaryInfo { + /** + * The number of write operations performed in the test. + */ + count: integer + /** + * The total elapsed time spent on writing blobs in the test. + */ + total_elapsed: Duration + /** + * The total elapsed time spent on writing blobs in the test, in nanoseconds. + */ + total_elapsed_nanos: DurationValue<UnitNanos> + /** + * The total size of all the blobs written in the test. + */ + total_size: ByteSize + /** + * The total size of all the blobs written in the test, in bytes. + */ + total_size_bytes: long + /** + * The total time spent waiting due to the `max_snapshot_bytes_per_sec` throttle. + */ + total_throttled: Duration + /** + * The total time spent waiting due to the `max_snapshot_bytes_per_sec` throttle, in nanoseconds. + */ + total_throttled_nanos: long +} + +export class SummaryInfo { + /** + * A collection of statistics that summarise the results of the read operations in the test. + */ + read: ReadSummaryInfo + /** + * A collection of statistics that summarise the results of the write operations in the test. + */ + write: WriteSummaryInfo +} + +export class ReadBlobDetails { + /** + * Indicates whether the read operation may have started before the write operation was complete. + */ + before_write_complete?: boolean + /** + * The length of time spent reading the blob. + * If the blob was not found, this detail is omitted. + */ + elapsed?: Duration + /** + * The length of time spent reading the blob, in nanoseconds. + * If the blob was not found, this detail is omitted. 
+ */ + elapsed_nanos?: DurationValue<UnitNanos> + /** + * The length of time waiting for the first byte of the read operation to be received. + * If the blob was not found, this detail is omitted. + */ + first_byte_time?: Duration + /** + * The length of time waiting for the first byte of the read operation to be received, in nanoseconds. + * If the blob was not found, this detail is omitted. + */ + first_byte_time_nanos: DurationValue<UnitNanos> + /** + * Indicates whether the blob was found by the read operation. + * If the read was started before the write completed or the write was ended before completion, it might be false. + */ + found: boolean + /** + * The node that performed the read operation. + */ + node: NodeInfo + /** + * The length of time spent waiting due to the `max_restore_bytes_per_sec` or `indices.recovery.max_bytes_per_sec` throttles during the read of the blob. + * If the blob was not found, this detail is omitted. + */ + throttled?: Duration + /** + * The length of time spent waiting due to the `max_restore_bytes_per_sec` or `indices.recovery.max_bytes_per_sec` throttles during the read of the blob, in nanoseconds. + * If the blob was not found, this detail is omitted. + */ + throttled_nanos?: DurationValue<UnitNanos> +} + +export class BlobDetails { + /** + * The name of the blob. + */ + name: string + /** + * Indicates whether the blob was overwritten while the read operations were ongoing. + */ + overwritten: boolean + /** + * Indicates whether any read operations were started before the write operation completed. + */ + read_early: boolean + /** + * The position, in bytes, at which read operations completed. + */ + read_end: long + /** + * The position, in bytes, at which read operations started. + */ + read_start: long + /** + * A description of every read operation performed on the blob. + */ + reads: ReadBlobDetails + /** + * The size of the blob. + */ + size: ByteSize + /** + * The size of the blob in bytes. + */ + size_bytes: long +} + +export class DetailsInfo { + /** + * A description of the blob that was written and read. + */ + blob: BlobDetails + /** + * The elapsed time spent overwriting the blob. + * If the blob was not overwritten, this information is omitted. + */ + overwrite_elapsed?: Duration + /** + * The elapsed time spent overwriting the blob, in nanoseconds. + * If the blob was not overwritten, this information is omitted. + */ + overwrite_elapsed_nanos?: DurationValue<UnitNanos> + /** + * The elapsed time spent writing the blob. + */ + write_elapsed: Duration + /** + * The elapsed time spent writing the blob, in nanoseconds. + */ + write_elapsed_nanos: DurationValue<UnitNanos> + /** + * The length of time spent waiting for the `max_snapshot_bytes_per_sec` (or `indices.recovery.max_bytes_per_sec` if the recovery settings for managed services are set) throttle while writing the blob. + */ + write_throttled: Duration + /** + * The length of time spent waiting for the `max_snapshot_bytes_per_sec` (or `indices.recovery.max_bytes_per_sec` if the recovery settings for managed services are set) throttle while writing the blob, in nanoseconds. + */ + write_throttled_nanos: DurationValue<UnitNanos> + /** + * The node which wrote the blob and coordinated the read operations. + */ + writer_node: NodeInfo +}
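For reviewers, a minimal sketch of how the endpoint defined by this spec might be called from TypeScript, following the scaling guidance in the request documentation (a `blob_count` of at least `2000`, `max_blob_size` of at least `2gb`, `max_total_data_size` of at least `1tb`, `register_operation_count` of at least `100`, and a generous timeout). It assumes a version of the `@elastic/elasticsearch` client in which this spec has been generated as `snapshot.repositoryAnalyze`, and a registered repository named `my_repository`; both names are illustrative and not part of this diff.

import { Client } from '@elastic/elasticsearch'

const client = new Client({ node: 'http://localhost:9200' })

async function analyzeRepository(): Promise<void> {
  // A full-scale analysis per the documented guidance. Run with the default
  // parameter values first and only scale up once the small run succeeds.
  const response = await client.snapshot.repositoryAnalyze(
    {
      name: 'my_repository', // hypothetical repository name (path parameter `repository`)
      blob_count: 2000,
      max_blob_size: '2gb',
      max_total_data_size: '1tb',
      register_operation_count: 100,
      detailed: true,
      timeout: '1h',
    },
    // The server-side timeout does not stop the HTTP client from hanging up
    // early, so relax the client-side request timeout as well (here: 2 hours).
    { requestTimeout: 2 * 60 * 60 * 1000 }
  )

  // An empty issues_detected list means no incorrect behaviour was observed,
  // although, as the response documentation notes, that is not a guarantee.
  if (response.issues_detected.length === 0) {
    console.log(
      `wrote ${response.summary.write.total_size} in ${response.summary.write.total_elapsed}, ` +
        `read back ${response.summary.read.total_size} in ${response.summary.read.total_elapsed}`
    )
  }
}

analyzeRepository().catch(console.error)

Passing the same `seed` on each run reproduces the same pseudo-random operation list across experiments, which is useful when comparing repositories, though the concurrent execution means the operations may still interleave differently.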