diff --git a/codegen/sdk-codegen/aws-models/batch.json b/codegen/sdk-codegen/aws-models/batch.json index 7319b98ad97..af5c3091d69 100644 --- a/codegen/sdk-codegen/aws-models/batch.json +++ b/codegen/sdk-codegen/aws-models/batch.json @@ -4907,7 +4907,7 @@ "shareDecaySeconds": { "target": "com.amazonaws.batch#Integer", "traits": { - "smithy.api#documentation": "
The amount of time (in seconds) to use to calculate a fair share percentage for each fair\n share identifier in use. A value of zero (0) indicates that only current usage is measured. The\n decay allows for more recently run jobs to have more weight than jobs that ran earlier. The\n maximum supported value is 604800 (1 week).
" + "smithy.api#documentation": "The amount of time (in seconds) to use to calculate a fair share percentage for each fair\n share identifier in use. A value of zero (0) indicates the default minimum time window (600 seconds).\n The maximum supported value is 604800 (1 week).
\nThe decay allows for more recently run jobs to have more weight than jobs that ran earlier. \n Consider adjusting this number if you have jobs that (on average) run longer than ten minutes, \n or if there is a large difference in job count or job run times between share identifiers, and the allocation\n of resources doesn’t meet your needs.
" } }, "computeReservation": { @@ -5585,7 +5585,7 @@ "target": "com.amazonaws.batch#Integer", "traits": { "smithy.api#clientOptional": {}, - "smithy.api#documentation": "The priority of the job queue. Job queues with a higher priority (or a higher integer value for the priority
parameter) are evaluated first when associated with the same compute environment. Priority is determined in descending order. For example, a job queue with a priority value of 10
is given scheduling preference over a job queue with a priority value of 1
. All of the compute environments must be either Amazon EC2 (EC2
or SPOT
) or Fargate (FARGATE
or FARGATE_SPOT
). Amazon EC2 and Fargate compute environments can't be mixed.
The priority of the job queue. Job queue priority determines the order in \n which job queues are evaluated when multiple queues dispatch jobs within a \n shared compute environment. A higher value for priority
indicates\n a higher priority. Queues are evaluated in cycles, in descending order by\n priority. For example, a job queue with a priority value of 10
is \n evaluated before a queue with a priority value of 1
. All of the \n compute environments must be either Amazon EC2 (EC2
or SPOT
)\n or Fargate (FARGATE
or FARGATE_SPOT
). Amazon EC2 and \n Fargate compute environments can't be mixed.
Job queue priority doesn't guarantee that a particular job executes before \n a job in a lower priority queue. Jobs added to higher priority queues during the \n queue evaluation cycle might not be evaluated until the next cycle. A job is \n dispatched from a queue only if resources are available when the queue is evaluated. \n If there are insufficient resources available at that time, the cycle proceeds to the \n next queue. This means that jobs added to higher priority queues might have to wait \n for jobs in multiple lower priority queues to complete before they are dispatched. \n You can use job dependencies to control the order for jobs from queues with different \n priorities. For more information, see Job Dependencies\n in the Batch User Guide.
\nThe instance type or family that this this override launch template should be applied to.
\nThis parameter is required when defining a launch template override.
\nInformation included in this parameter must meet the following requirements:
\nMust be a valid Amazon EC2 instance type or family.
\n\n optimal
isn't allowed.
\n targetInstanceTypes
can target only instance types and families that are included within the \n ComputeResource.instanceTypes
\n set. targetInstanceTypes
doesn't need to include all of the instances from the instanceType
set, but at least a subset. For example, if ComputeResource.instanceTypes
includes [m5, g5]
, targetInstanceTypes
can include [m5.2xlarge]
and [m5.large]
but not [c5.large]
.
\n targetInstanceTypes
included within the same launch template override or across launch template overrides can't overlap for the same compute environment. For example, you can't define one launch template override to target an instance family and another define an instance type within this same family.
The instance type or family that this override launch template should be applied to.
\nThis parameter is required when defining a launch template override.
\nInformation included in this parameter must meet the following requirements:
\nMust be a valid Amazon EC2 instance type or family.
\n\n optimal
isn't allowed.
\n targetInstanceTypes
can target only instance types and families that are included within the \n ComputeResource.instanceTypes
\n set. targetInstanceTypes
doesn't need to include all of the instances from the instanceType
set, but it must include at least a subset. For example, if ComputeResource.instanceTypes
includes [m5, g5]
, targetInstanceTypes
can include [m5.2xlarge]
and [m5.large]
but not [c5.large]
.
\n targetInstanceTypes
included within the same launch template override or across launch template overrides can't overlap for the same compute environment. For example, you can't define one launch template override to target an instance family and another to target an instance type within this same family.
The environment variables to pass to a container. This parameter maps to Env inthe Create a container\n section of the Docker Remote API\n and the --env
parameter to docker run.
We don't recommend using plaintext environment variables for sensitive information, such as\n credential data.
\nEnvironment variables cannot start with AWS_BATCH
. This naming convention is\n reserved for variables that Batch sets.
The environment variables to pass to a container. This parameter maps to Env in the Create a container\n section of the Docker Remote API\n and the --env
parameter to docker run.
We don't recommend using plaintext environment variables for sensitive information, such as\n credential data.
\nEnvironment variables cannot start with AWS_BATCH
. This naming convention is\n reserved for variables that Batch sets.
Associates the specified KMS key with either one log group in the account, or with all stored\n CloudWatch Logs query insights results in the account.
\nWhen you use AssociateKmsKey
, you specify either the logGroupName
parameter\n or the resourceIdentifier
parameter. You can't specify both of those parameters in the same operation.
Specify the logGroupName
parameter to cause all log events stored in the log group to\n be encrypted with that key. Only the log events ingested after the key is associated are encrypted with that key.
Associating a KMS key with a log group overrides any existing\n associations between the log group and a KMS key. After a KMS key is associated with a log group, all newly ingested data for the log group is encrypted\n using the KMS key. This association is stored as long as the data encrypted\n with the KMS key is still within CloudWatch Logs. This enables CloudWatch Logs to decrypt this data whenever it is requested.
\nAssociating\n a key with a log group does not cause the results of queries of that log group to be encrypted with that key. To have query\n results encrypted with a KMS key, you must use an AssociateKmsKey
operation with the resourceIdentifier
\n parameter that specifies a query-result
resource.
Specify the resourceIdentifier
parameter with a query-result
resource, \n to use that key to encrypt the stored results of all future \n StartQuery\n operations in the account. The response from a \n GetQueryResults\n operation will still return\n the query results in plain text.
Even if you have not associated a key with your query results, the query results are encrypted when stored,\n using the default CloudWatch Logs method.
\nIf you run a query from a monitoring account that queries logs in a source account, the\n query results key from the monitoring account, if any, is used.
\nIf you delete the key that is used to encrypt log events or log group query results,\n then all the associated stored log events or query results that were encrypted with that key \n will be unencryptable and unusable.
\nCloudWatch Logs supports only symmetric KMS keys. Do not use an associate\n an asymmetric KMS key with your log group or query results. For more information, see Using\n Symmetric and Asymmetric Keys.
\nIt can take up to 5 minutes for this operation to take effect.
\nIf you attempt to associate a KMS key with a log group but the KMS key does not exist or the KMS key is disabled, you receive an\n InvalidParameterException
error.
Associates the specified KMS key with either one log group in the account, or with all stored\n CloudWatch Logs query insights results in the account.
\nWhen you use AssociateKmsKey
, you specify either the logGroupName
parameter\n or the resourceIdentifier
parameter. You can't specify both of those parameters in the same operation.
Specify the logGroupName
parameter to cause log events ingested into that log group to\n be encrypted with that key. Only the log events ingested after the key is associated are encrypted with that key.
Associating a KMS key with a log group overrides any existing\n associations between the log group and a KMS key. After a KMS key is associated with a log group, all newly ingested data for the log group is encrypted\n using the KMS key. This association is stored as long as the data encrypted\n with the KMS key is still within CloudWatch Logs. This enables CloudWatch Logs to decrypt this data whenever it is requested.
\nAssociating\n a key with a log group does not cause the results of queries of that log group to be encrypted with that key. To have query\n results encrypted with a KMS key, you must use an AssociateKmsKey
operation with the resourceIdentifier
\n parameter that specifies a query-result
resource.
Specify the resourceIdentifier
parameter with a query-result
resource, \n to use that key to encrypt the stored results of all future \n StartQuery\n operations in the account. The response from a \n GetQueryResults\n operation will still return\n the query results in plain text.
Even if you have not associated a key with your query results, the query results are encrypted when stored,\n using the default CloudWatch Logs method.
\nIf you run a query from a monitoring account that queries logs in a source account, the\n query results key from the monitoring account, if any, is used.
\nIf you delete the key that is used to encrypt log events or log group query results,\n then all the associated stored log events or query results that were encrypted with that key \n will be unencryptable and unusable.
\nCloudWatch Logs supports only symmetric KMS keys. Do not associate\n an asymmetric KMS key with your log group or query results. For more information, see Using\n Symmetric and Asymmetric Keys.
\nIt can take up to 5 minutes for this operation to take effect.
\nIf you attempt to associate a KMS key with a log group but the KMS key does not exist or the KMS key is disabled, you receive an\n InvalidParameterException
error.
Creates an export task so that you can efficiently export data from a log group to an\n Amazon S3 bucket. When you perform a CreateExportTask
operation, you must use\n credentials that have permission to write to the S3 bucket that you specify as the\n destination.
Exporting log data to S3 buckets that are encrypted by KMS is supported.\n Exporting log data to Amazon S3 buckets that have S3 Object Lock enabled with a\n retention period is also supported.
\nExporting to S3 buckets that are encrypted with AES-256 is supported.
\nThis is an asynchronous call. If all the required information is provided, this \n operation initiates an export task and responds with the ID of the task. After the task has started,\n you can use DescribeExportTasks to get the status of the export task. Each account can\n only have one active (RUNNING
or PENDING
) export task at a time.\n To cancel an export task, use CancelExportTask.
You can export logs from multiple log groups or multiple time ranges to the same S3\n bucket. To separate log data for each export task, specify a prefix to be used as the Amazon\n S3 key prefix for all exported objects.
\nTime-based sorting on chunks of log data inside an exported file is not guaranteed. You can\n sort the exported log field data by using Linux utilities.
\nCreates an export task so that you can efficiently export data from a log group to an\n Amazon S3 bucket. When you perform a CreateExportTask
operation, you must use\n credentials that have permission to write to the S3 bucket that you specify as the\n destination.
Exporting log data to S3 buckets that are encrypted by KMS is supported.\n Exporting log data to Amazon S3 buckets that have S3 Object Lock enabled with a\n retention period is also supported.
\nExporting to S3 buckets that are encrypted with AES-256 is supported.
\nThis is an asynchronous call. If all the required information is provided, this \n operation initiates an export task and responds with the ID of the task. After the task has started,\n you can use DescribeExportTasks to get the status of the export task. Each account can\n only have one active (RUNNING
or PENDING
) export task at a time.\n To cancel an export task, use CancelExportTask.
You can export logs from multiple log groups or multiple time ranges to the same S3\n bucket. To separate log data for each export task, specify a prefix to be used as the Amazon\n S3 key prefix for all exported objects.
\nWe recommend that you don't regularly export to Amazon S3 as a way to continuously archive your logs. For that use case, we instead recommend that \n you use subscriptions. For more information about subscriptions, see \n Real-time processing of log data with subscriptions.
\nTime-based sorting on chunks of log data inside an exported file is not guaranteed. You can\n sort the exported log field data by using Linux utilities.
\nDeletes s delivery. A delivery is a connection between a logical delivery source and a logical\n delivery destination. Deleting a delivery only deletes the connection between the delivery source and delivery destination. It does\n not delete the delivery destination or the delivery source.
" + "smithy.api#documentation": "Deletes a delivery. A delivery is a connection between a logical delivery source and a logical\n delivery destination. Deleting a delivery only deletes the connection between the delivery source and delivery destination. It does\n not delete the delivery destination or the delivery source.
" } }, "com.amazonaws.cloudwatchlogs#DeleteDeliveryDestination": { @@ -2522,7 +2522,7 @@ } ], "traits": { - "smithy.api#documentation": "Returns a list of all CloudWatch Logs account policies in the account.
" + "smithy.api#documentation": "Returns a list of all CloudWatch Logs account policies in the account.
\nTo use this operation, you must be signed on with the correct permissions depending on the type of policy that you are retrieving information for.
\nTo see data protection policies, you must have the logs:GetDataProtectionPolicy
and \n logs:DescribeAccountPolicies
permissions.
To see subscription filter policies, you must have the logs:DescribeSubscriptionFilters
and \n logs:DescribeAccountPolicies
permissions.
To see transformer policies, you must have the logs:GetTransformer
and logs:DescribeAccountPolicies
permissions.
To see field index policies, you must have the logs:DescribeIndexPolicies
and \n logs:DescribeAccountPolicies
permissions.
Lists the log streams for the specified log group. \n You can list all the log streams or filter the results by prefix.\n You can also control how the results are ordered.
\nYou can specify the log group to search by using either logGroupIdentifier
or logGroupName
.\n You must include one of these two parameters, but you can't include both.\n
This operation has a limit of five transactions per second, after which transactions are throttled.
\nIf you are using CloudWatch cross-account observability, you can use this operation in a monitoring account and \n view data from the linked source accounts. For more information, see \n CloudWatch cross-account observability.
", + "smithy.api#documentation": "Lists the log streams for the specified log group. \n You can list all the log streams or filter the results by prefix.\n You can also control how the results are ordered.
\nYou can specify the log group to search by using either logGroupIdentifier
or logGroupName
.\n You must include one of these two parameters, but you can't include both.\n
This operation has a limit of 25 transactions per second, after which transactions are throttled.
\nIf you are using CloudWatch cross-account observability, you can use this operation in a monitoring account and \n view data from the linked source accounts. For more information, see \n CloudWatch cross-account observability.
", "smithy.api#paginated": { "inputToken": "nextToken", "outputToken": "nextToken", @@ -8752,14 +8752,14 @@ "dataSourceRoleArn": { "target": "com.amazonaws.cloudwatchlogs#Arn", "traits": { - "smithy.api#documentation": "Specify the ARN of an IAM role that CloudWatch Logs will use to create the integration. This role must have the permissions necessary to access the OpenSearch Service\n collection to be able to create the dashboards. For more information about the permissions needed, see Create an IAM role to access the OpenSearch Service collection in the CloudWatch Logs User Guide.
", + "smithy.api#documentation": "Specify the ARN of an IAM role that CloudWatch Logs will use to create the integration. This role must have the permissions necessary to access the OpenSearch Service\n collection to be able to create the dashboards. For more information about the permissions needed, see Permissions that the integration needs in the CloudWatch Logs User Guide.
", "smithy.api#required": {} } }, "dashboardViewerPrincipals": { "target": "com.amazonaws.cloudwatchlogs#DashboardViewerPrincipals", "traits": { - "smithy.api#documentation": "Specify the ARNs of IAM roles and IAM users who you want to grant permission to for viewing the dashboards.
\nIn addition to specifying these users here, you must also grant them the CloudWatchOpenSearchDashboardsAccess \n IAM policy. For more information, see
\nSpecify the ARNs of IAM roles and IAM users who you want to grant permission to for viewing the dashboards.
\nIn addition to specifying these users here, you must also grant them the CloudWatchOpenSearchDashboardAccess \n IAM policy. For more information, see IAM policies for users.
\nCreates an account-level data protection policy, subscription filter policy, or field index policy\n that applies to all log groups \n or a subset of log groups in the account.
\n\n Data protection policy\n
\nA data protection policy can help safeguard sensitive \n data that's ingested by your log groups by auditing and masking the sensitive log data. Each account can have only\n one account-level data protection policy.
\nSensitive data is detected and masked when it is ingested into a log group. When you set a \n data protection policy, log events ingested into the log groups before that time are not masked.
\nIf you use PutAccountPolicy
to create a data protection policy for your whole account, it applies to both existing log groups\n and all log groups that are created later in this account. The account-level policy is applied to existing log groups\n with eventual consistency. It might take up to 5 minutes before sensitive data in existing log groups begins to be masked.
By default, when a user views a log event that includes masked data, the sensitive data is replaced by asterisks.\n A user who has the logs:Unmask
permission can use a \n GetLogEvents or \n FilterLogEvents\n operation with the unmask
parameter set to true
to view the unmasked \n log events. Users with the logs:Unmask
can also view unmasked data in the CloudWatch Logs\n console by running a CloudWatch Logs Insights query with the unmask
query command.
For more information, including a list of types of data that can be audited and masked, see\n Protect sensitive log data with masking.
\nTo use the PutAccountPolicy
operation for a data protection policy, you must be signed on with \n the logs:PutDataProtectionPolicy
\n and logs:PutAccountPolicy
permissions.
The PutAccountPolicy
operation applies to all log groups in the account. You can use \n PutDataProtectionPolicy\n to create a data protection policy that applies to just one log group. \n If a log group has its own data protection policy and \n the account also has an account-level data protection policy, then the two policies are cumulative. Any sensitive term\n specified in either policy is masked.
\n Subscription filter policy\n
\nA subscription filter policy sets up a real-time feed of log events from CloudWatch Logs to other Amazon Web Services services.\n Account-level subscription filter policies apply to both existing log groups and log groups that are created later in \n this account. Supported destinations are Kinesis Data Streams, Firehose, and \n Lambda. When log events are sent to the receiving service, they are Base64 encoded and \n compressed with the GZIP format.
\nThe following destinations are supported for subscription filters:
\nAn Kinesis Data Streams data stream in the same account as the subscription policy, for same-account delivery.
\nAn Firehose data stream in the same account as the subscription policy, for same-account delivery.
\nA Lambda function in the same account as the subscription policy, for same-account delivery.
\nA logical destination in a different account created with PutDestination, for cross-account\n delivery. Kinesis Data Streams and Firehose are supported as logical destinations.
\nEach account can have one account-level subscription filter policy per Region. \n If you are updating an existing filter, you must specify the correct name in PolicyName
.\n To perform a PutAccountPolicy
subscription filter operation for any destination except a Lambda \n function, you must also have the iam:PassRole
permission.
\n Transformer policy\n
\nCreates or updates a log transformer policy for your account. You use log transformers to transform log events into\n a different format, making them easier for you to process and analyze. You can also transform logs from different sources into standardized formats that \n contain\n relevant, source-specific information. After you have created a transformer, \n CloudWatch Logs performs this transformation at the time of log ingestion. You can then refer to the transformed versions of the logs during\n operations such as querying with CloudWatch Logs Insights or creating metric filters or subscription filters.
\nYou can also use a transformer to copy metadata from metadata keys into the log events themselves. This metadata can include log group name, \n log stream name, account ID and Region.
\nA transformer for a log group is a series of processors, where each processor applies one type of transformation to the log events\n ingested into this log group. For more information about the available processors to use in a transformer, see \n Processors that you can use.
\nHaving log events in standardized format enables visibility across your applications for your log analysis, reporting, and alarming needs. \n CloudWatch Logs provides transformation for common log types with out-of-the-box transformation templates for major Amazon Web Services log sources such \n as VPC flow logs, Lambda, and Amazon RDS. You can use pre-built transformation templates or create custom transformation policies.
\nYou can create transformers only for the log groups in the Standard log class.
\nYou can have one account-level transformer policy that applies to all log groups in the account. \n Or you can create as many as 20 account-level transformer policies that are each scoped to a subset of log groups with \n the selectionCriteria
parameter. If you have multiple\n account-level transformer policies with selection criteria, no two of them can use the same or overlapping log group name prefixes.\n For example, if you have one policy filtered to log groups that start with my-log
, you can't have another field index\n policy filtered to my-logpprod
or my-logging
.
You can also set up a transformer at the log-group level. For more information, see \n PutTransformer. If there is both a \n log-group level transformer created with PutTransformer
and an account-level transformer that could apply to the same log \n group, the log group uses only the log-group level transformer. It ignores the account-level transformer.
\n Field index policy\n
\nYou can use field index policies to create indexes on fields found in \n log events in the log group. Creating field indexes can help lower the scan volume for CloudWatch Logs Insights queries that reference\n those fields, because these queries attempt to skip the processing of log events that are known to not match the indexed field.\n Good fields to index are fields that you often need to query for and fields or values that match only a small fraction of the total log events.\n Common examples of indexes\n include request ID, session ID, user IDs, or instance IDs. For more information, see \n Create field indexes to improve query performance and reduce costs\n
\nTo find the fields that are in your log group events, use the \n GetLogGroupFields\n operation.
\nFor example, suppose you have created a field index for requestId
. Then, any \n CloudWatch Logs Insights query on that log group that includes requestId = value\n
\n or requestId in [value, value, ...]
will attempt to process only the log events where\n the indexed field matches the specified value.
Matches of log events to the names of indexed fields are case-sensitive. For example, an indexed field\n of RequestId
won't match a log event containing requestId
.
You can have one account-level field index policy that applies to all log groups in the account. \n Or you can create as many as 20 account-level field index policies that are each scoped to a subset of log groups with \n the selectionCriteria
parameter. If you have multiple\n account-level index policies with selection criteria, no two of them can use the same or overlapping log group name prefixes.\n For example, if you have one policy filtered to log groups that start with my-log
, you can't have another field index\n policy filtered to my-logpprod
or my-logging
.
If you create an account-level field index policy in a monitoring account in cross-account observability, the policy is applied only\n to the monitoring account and not to any source accounts.
\nIf you want to create a field index policy for a single log group, you can use PutIndexPolicy instead of \n PutAccountPolicy
. If you do so, that log group will use only that log-group level policy, and will ignore the account-level policy\n that you create with PutAccountPolicy.
Creates an account-level data protection policy, subscription filter policy, or field index policy\n that applies to all log groups \n or a subset of log groups in the account.
\nTo use this operation, you must be signed on with the correct permissions depending on the type of policy that you are creating.
\nTo create a data protection policy, you must have the logs:PutDataProtectionPolicy
and \n logs:PutAccountPolicy
permissions.
To create a subscription filter policy, you must have the logs:PutSubscriptionFilter
and \n logs:PutccountPolicy
permissions.
To create a transformer policy, you must have the logs:PutTransformer
and logs:PutAccountPolicy
permissions.
To create a field index policy, you must have the logs:PutIndexPolicy
and \n logs:PutAccountPolicy
permissions.
\n Data protection policy\n
\nA data protection policy can help safeguard sensitive \n data that's ingested by your log groups by auditing and masking the sensitive log data. Each account can have only\n one account-level data protection policy.
\nSensitive data is detected and masked when it is ingested into a log group. When you set a \n data protection policy, log events ingested into the log groups before that time are not masked.
\nIf you use PutAccountPolicy
to create a data protection policy for your whole account, it applies to both existing log groups\n and all log groups that are created later in this account. The account-level policy is applied to existing log groups\n with eventual consistency. It might take up to 5 minutes before sensitive data in existing log groups begins to be masked.
By default, when a user views a log event that includes masked data, the sensitive data is replaced by asterisks.\n A user who has the logs:Unmask
permission can use a \n GetLogEvents or \n FilterLogEvents\n operation with the unmask
parameter set to true
to view the unmasked \n log events. Users with the logs:Unmask
can also view unmasked data in the CloudWatch Logs\n console by running a CloudWatch Logs Insights query with the unmask
query command.
For more information, including a list of types of data that can be audited and masked, see\n Protect sensitive log data with masking.
\nTo use the PutAccountPolicy
operation for a data protection policy, you must be signed on with \n the logs:PutDataProtectionPolicy
\n and logs:PutAccountPolicy
permissions.
The PutAccountPolicy
operation applies to all log groups in the account. You can use \n PutDataProtectionPolicy\n to create a data protection policy that applies to just one log group. \n If a log group has its own data protection policy and \n the account also has an account-level data protection policy, then the two policies are cumulative. Any sensitive term\n specified in either policy is masked.
\n Subscription filter policy\n
\nA subscription filter policy sets up a real-time feed of log events from CloudWatch Logs to other Amazon Web Services services.\n Account-level subscription filter policies apply to both existing log groups and log groups that are created later in \n this account. Supported destinations are Kinesis Data Streams, Firehose, and \n Lambda. When log events are sent to the receiving service, they are Base64 encoded and \n compressed with the GZIP format.
\nThe following destinations are supported for subscription filters:
\nAn Kinesis Data Streams data stream in the same account as the subscription policy, for same-account delivery.
\nAn Firehose data stream in the same account as the subscription policy, for same-account delivery.
\nA Lambda function in the same account as the subscription policy, for same-account delivery.
\nA logical destination in a different account created with PutDestination, for cross-account\n delivery. Kinesis Data Streams and Firehose are supported as logical destinations.
\nEach account can have one account-level subscription filter policy per Region. \n If you are updating an existing filter, you must specify the correct name in PolicyName
.\n To perform a PutAccountPolicy
subscription filter operation for any destination except a Lambda \n function, you must also have the iam:PassRole
permission.
\n Transformer policy\n
\nCreates or updates a log transformer policy for your account. You use log transformers to transform log events into\n a different format, making them easier for you to process and analyze. You can also transform logs from different sources into standardized formats that \n contain\n relevant, source-specific information. After you have created a transformer, \n CloudWatch Logs performs this transformation at the time of log ingestion. You can then refer to the transformed versions of the logs during\n operations such as querying with CloudWatch Logs Insights or creating metric filters or subscription filters.
\nYou can also use a transformer to copy metadata from metadata keys into the log events themselves. This metadata can include log group name, \n log stream name, account ID and Region.
\nA transformer for a log group is a series of processors, where each processor applies one type of transformation to the log events\n ingested into this log group. For more information about the available processors to use in a transformer, see \n Processors that you can use.
\nHaving log events in standardized format enables visibility across your applications for your log analysis, reporting, and alarming needs. \n CloudWatch Logs provides transformation for common log types with out-of-the-box transformation templates for major Amazon Web Services log sources such \n as VPC flow logs, Lambda, and Amazon RDS. You can use pre-built transformation templates or create custom transformation policies.
\nYou can create transformers only for the log groups in the Standard log class.
\nYou can have one account-level transformer policy that applies to all log groups in the account. \n Or you can create as many as 20 account-level transformer policies that are each scoped to a subset of log groups with \n the selectionCriteria
parameter. If you have multiple\n account-level transformer policies with selection criteria, no two of them can use the same or overlapping log group name prefixes.\n For example, if you have one policy filtered to log groups that start with my-log
, you can't have another field index\n policy filtered to my-logpprod
or my-logging
.
You can also set up a transformer at the log-group level. For more information, see \n PutTransformer. If there is both a \n log-group level transformer created with PutTransformer
and an account-level transformer that could apply to the same log \n group, the log group uses only the log-group level transformer. It ignores the account-level transformer.
\n Field index policy\n
\nYou can use field index policies to create indexes on fields found in \n log events in the log group. Creating field indexes can help lower the scan volume for CloudWatch Logs Insights queries that reference\n those fields, because these queries attempt to skip the processing of log events that are known to not match the indexed field.\n Good fields to index are fields that you often need to query for and fields or values that match only a small fraction of the total log events.\n Common examples of indexes\n include request ID, session ID, user IDs, or instance IDs. For more information, see \n Create field indexes to improve query performance and reduce costs\n
\nTo find the fields that are in your log group events, use the \n GetLogGroupFields\n operation.
\nFor example, suppose you have created a field index for requestId
. Then, any \n CloudWatch Logs Insights query on that log group that includes requestId = value\n
\n or requestId in [value, value, ...]
will attempt to process only the log events where\n the indexed field matches the specified value.
Matches of log events to the names of indexed fields are case-sensitive. For example, an indexed field\n of RequestId
won't match a log event containing requestId
.
You can have one account-level field index policy that applies to all log groups in the account. \n Or you can create as many as 20 account-level field index policies that are each scoped to a subset of log groups with \n the selectionCriteria
parameter. If you have multiple\n account-level index policies with selection criteria, no two of them can use the same or overlapping log group name prefixes.\n For example, if you have one policy filtered to log groups that start with my-log
, you can't have another field index\n policy filtered to my-logpprod
or my-logging
.
If you create an account-level field index policy in a monitoring account in cross-account observability, the policy is applied only\n to the monitoring account and not to any source accounts.
\nIf you want to create a field index policy for a single log group, you can use PutIndexPolicy instead of \n PutAccountPolicy
. If you do so, that log group will use only that log-group level policy, and will ignore the account-level policy\n that you create with PutAccountPolicy.
Creates or updates a logical delivery destination. A delivery destination is an Amazon Web Services resource that represents an \n Amazon Web Services service that logs can be sent to. CloudWatch Logs, Amazon S3, and\n Firehose are supported as logs delivery destinations.
\nTo configure logs delivery between a supported Amazon Web Services service and a destination, you must do the following:
\nCreate a delivery source, which is a logical object that represents the resource that is actually\n sending the logs. For more \n information, see PutDeliverySource.
\nUse PutDeliveryDestination
to create a delivery destination, which is a logical object that represents the actual\n delivery destination.
If you are delivering logs cross-account, you must use \n PutDeliveryDestinationPolicy\n in the destination account to assign an IAM policy to the \n destination. This policy allows delivery to that destination.\n
\nUse CreateDelivery
to create a delivery by pairing exactly \n one delivery source and one delivery destination. For more \n information, see CreateDelivery.\n
You can configure a single delivery source to send logs to multiple destinations by creating multiple deliveries. You \n can also create multiple deliveries to configure multiple delivery sources to send logs to the same delivery destination.
\nOnly some Amazon Web Services services support being configured as a delivery source. These services are listed\n as Supported [V2 Permissions] in the table at \n Enabling \n logging from Amazon Web Services services.\n
\nIf you use this operation to update an existing delivery destination, all the current delivery destination parameters are overwritten\n with the new parameter values that you specify.
" + "smithy.api#documentation": "Creates or updates a logical delivery destination. A delivery destination is an Amazon Web Services resource that represents an \n Amazon Web Services service that logs can be sent to. CloudWatch Logs, Amazon S3, and\n Firehose are supported as logs delivery destinations.
\nTo configure logs delivery between a supported Amazon Web Services service and a destination, you must do the following:
\nCreate a delivery source, which is a logical object that represents the resource that is actually\n sending the logs. For more \n information, see PutDeliverySource.
\nUse PutDeliveryDestination
to create a delivery destination in the same account of the actual delivery destination. \n The delivery destination that you create is a logical object that represents the actual\n delivery destination.
If you are delivering logs cross-account, you must use \n PutDeliveryDestinationPolicy\n in the destination account to assign an IAM policy to the \n destination. This policy allows delivery to that destination.\n
\nUse CreateDelivery
to create a delivery by pairing exactly \n one delivery source and one delivery destination. For more \n information, see CreateDelivery.\n
You can configure a single delivery source to send logs to multiple destinations by creating multiple deliveries. You \n can also create multiple deliveries to configure multiple delivery sources to send logs to the same delivery destination.
\nOnly some Amazon Web Services services support being configured as a delivery source. These services are listed\n as Supported [V2 Permissions] in the table at \n Enabling \n logging from Amazon Web Services services.\n
\nIf you use this operation to update an existing delivery destination, all the current delivery destination parameters are overwritten\n with the new parameter values that you specify.
" } }, "com.amazonaws.cloudwatchlogs#PutDeliveryDestinationPolicy": { @@ -9759,7 +9759,7 @@ "logType": { "target": "com.amazonaws.cloudwatchlogs#LogType", "traits": { - "smithy.api#documentation": "Defines the type of log that the source is sending.
\nFor Amazon Bedrock, the valid value is \n APPLICATION_LOGS
.
For Amazon CodeWhisperer, the valid value is \n EVENT_LOGS
.
For IAM Identity Center, the valid value is \n ERROR_LOGS
.
For Amazon WorkMail, the valid values are \n ACCESS_CONTROL_LOGS
, AUTHENTICATION_LOGS
, WORKMAIL_AVAILABILITY_PROVIDER_LOGS
, and WORKMAIL_MAILBOX_ACCESS_LOGS
.
Defines the type of log that the source is sending.
\nFor Amazon Bedrock, the valid value is \n APPLICATION_LOGS
.
For CloudFront, the valid value is \n ACCESS_LOGS
.
For Amazon CodeWhisperer, the valid value is \n EVENT_LOGS
.
For Elemental MediaPackage, the valid values are \n EGRESS_ACCESS_LOGS
and INGRESS_ACCESS_LOGS
.
For Elemental MediaTailor, the valid values are \n AD_DECISION_SERVER_LOGS
, MANIFEST_SERVICE_LOGS
, and TRANSCODE_LOGS
.
For IAM Identity Center, the valid value is \n ERROR_LOGS
.
For Amazon Q, the valid value is \n EVENT_LOGS
.
For Amazon SES mail manager, the valid value is \n APPLICATION_LOG
.
For Amazon WorkMail, the valid values are \n ACCESS_CONTROL_LOGS
, AUTHENTICATION_LOGS
, WORKMAIL_AVAILABILITY_PROVIDER_LOGS
, WORKMAIL_MAILBOX_ACCESS_LOGS
, \n and WORKMAIL_PERSONAL_ACCESS_TOKEN_LOGS
.
Creates or updates a metric filter and associates it with the specified log group. With\n metric filters, you can configure rules to extract metric data from log events ingested\n through PutLogEvents.
\nThe maximum number of metric filters that can be associated with a log group is\n 100.
\nUsing regular expressions to create metric filters is supported. For these filters, \n there is a quota of two regular expression patterns within a single filter pattern. There\n is also a quota of five regular expression patterns per log group.\n For more information about using regular expressions in metric filters, \n see \n Filter pattern syntax for metric filters, subscription filters, filter log events, and Live Tail.
\nWhen you create a metric filter, you can also optionally assign a unit and dimensions\n to the metric that is created.
\nMetrics extracted from log events are charged as custom metrics.\n To prevent unexpected high charges, do not specify high-cardinality fields such as \n IPAddress
or requestID
as dimensions. Each different value \n found for \n a dimension is treated as a separate metric and accrues charges as a separate custom metric.\n
CloudWatch Logs might disable a metric filter if it generates 1,000 different name/value pairs for\n your specified dimensions within one hour.
\nYou can also set up a billing alarm to alert you if your charges are higher than \n expected. For more information, \n see \n Creating a Billing Alarm to Monitor Your Estimated Amazon Web Services Charges.\n
\nCreates or updates a metric filter and associates it with the specified log group. With\n metric filters, you can configure rules to extract metric data from log events ingested\n through PutLogEvents.
\nThe maximum number of metric filters that can be associated with a log group is\n 100.
\nUsing regular expressions in filter patterns is supported. For these filters, \n there is a quota of two regular expression patterns within a single filter pattern. There\n is also a quota of five regular expression patterns per log group.\n For more information about using regular expressions in filter patterns, \n see \n Filter pattern syntax for metric filters, subscription filters, filter log events, and Live Tail.
\nWhen you create a metric filter, you can also optionally assign a unit and dimensions\n to the metric that is created.
\nMetrics extracted from log events are charged as custom metrics.\n To prevent unexpected high charges, do not specify high-cardinality fields such as \n IPAddress
or requestID
as dimensions. Each different value \n found for \n a dimension is treated as a separate metric and accrues charges as a separate custom metric.\n
CloudWatch Logs might disable a metric filter if it generates 1,000 different name/value pairs for\n your specified dimensions within one hour.
\nYou can also set up a billing alarm to alert you if your charges are higher than \n expected. For more information, \n see \n Creating a Billing Alarm to Monitor Your Estimated Amazon Web Services Charges.\n
\nCreates or updates a subscription filter and associates it with the specified log\n group. With subscription filters, you can subscribe to a real-time stream of log events\n ingested through PutLogEvents\n and have them delivered to a specific destination. When log events are sent to the receiving\n service, they are Base64 encoded and compressed with the GZIP format.
\nThe following destinations are supported for subscription filters:
\nAn Amazon Kinesis data stream belonging to the same account as the subscription\n filter, for same-account delivery.
\nA logical destination created with PutDestination that belongs to a different account, for cross-account delivery.\n We currently support Kinesis Data Streams and Firehose as logical destinations.
\nAn Amazon Kinesis Data Firehose delivery stream that belongs to the same account as\n the subscription filter, for same-account delivery.
\nAn Lambda function that belongs to the same account as the\n subscription filter, for same-account delivery.
\nEach log group can have up to two subscription filters associated with it. If you are\n updating an existing filter, you must specify the correct name in filterName
.\n
Using regular expressions to create subscription filters is supported. For these filters, \n there is a quotas of quota of two regular expression patterns within a single filter pattern. There\n is also a quota of five regular expression patterns per log group.\n For more information about using regular expressions in subscription filters, \n see \n Filter pattern syntax for metric filters, subscription filters, filter log events, and Live Tail.
\nTo perform a PutSubscriptionFilter
operation for any destination except a Lambda function, \n you must also have the \n iam:PassRole
permission.
Creates or updates a subscription filter and associates it with the specified log\n group. With subscription filters, you can subscribe to a real-time stream of log events\n ingested through PutLogEvents\n and have them delivered to a specific destination. When log events are sent to the receiving\n service, they are Base64 encoded and compressed with the GZIP format.
\nThe following destinations are supported for subscription filters:
\nAn Amazon Kinesis data stream belonging to the same account as the subscription\n filter, for same-account delivery.
\nA logical destination created with PutDestination that belongs to a different account, for cross-account delivery.\n We currently support Kinesis Data Streams and Firehose as logical destinations.
\nAn Amazon Kinesis Data Firehose delivery stream that belongs to the same account as\n the subscription filter, for same-account delivery.
\nAn Lambda function that belongs to the same account as the\n subscription filter, for same-account delivery.
\nEach log group can have up to two subscription filters associated with it. If you are\n updating an existing filter, you must specify the correct name in filterName
.\n
Using regular expressions in filter patterns is supported. For these filters, \n there is a quota of two regular expression patterns within a single filter pattern. There\n is also a quota of five regular expression patterns per log group.\n For more information about using regular expressions in filter patterns, \n see \n Filter pattern syntax for metric filters, subscription filters, filter log events, and Live Tail.
\nTo perform a PutSubscriptionFilter
operation for any destination except a Lambda function, \n you must also have the \n iam:PassRole
permission.
This string allows re-configuring the S3 object prefix to contain either static or variable sections. The valid variables \n to use in the suffix path will vary by each log source. See ConfigurationTemplate$allowedSuffixPathFields for \n more info on what values are supported in the suffix path for each log source.
" + "smithy.api#documentation": "This string allows re-configuring the S3 object prefix to contain either static or variable sections. The valid variables \n to use in the suffix path will vary by each log source. To find the values supported for the suffix path for each log source, \n use the DescribeConfigurationTemplates operation and check the \n allowedSuffixPathFields
field in the response.
his exception is returned if an unknown error occurs during a Live Tail session.
", + "smithy.api#documentation": "This exception is returned if an unknown error occurs during a Live Tail session.
", "smithy.api#error": "client" } }, diff --git a/codegen/sdk-codegen/aws-models/cognito-identity-provider.json b/codegen/sdk-codegen/aws-models/cognito-identity-provider.json index 172de5b1d84..d623cbe4cd2 100644 --- a/codegen/sdk-codegen/aws-models/cognito-identity-provider.json +++ b/codegen/sdk-codegen/aws-models/cognito-identity-provider.json @@ -569,6 +569,82 @@ } ], "rules": [ + { + "conditions": [ + { + "fn": "stringEquals", + "argv": [ + { + "ref": "Region" + }, + "us-east-1" + ] + } + ], + "endpoint": { + "url": "https://cognito-idp-fips.us-east-1.amazonaws.com", + "properties": {}, + "headers": {} + }, + "type": "endpoint" + }, + { + "conditions": [ + { + "fn": "stringEquals", + "argv": [ + { + "ref": "Region" + }, + "us-east-2" + ] + } + ], + "endpoint": { + "url": "https://cognito-idp-fips.us-east-2.amazonaws.com", + "properties": {}, + "headers": {} + }, + "type": "endpoint" + }, + { + "conditions": [ + { + "fn": "stringEquals", + "argv": [ + { + "ref": "Region" + }, + "us-west-1" + ] + } + ], + "endpoint": { + "url": "https://cognito-idp-fips.us-west-1.amazonaws.com", + "properties": {}, + "headers": {} + }, + "type": "endpoint" + }, + { + "conditions": [ + { + "fn": "stringEquals", + "argv": [ + { + "ref": "Region" + }, + "us-west-2" + ] + } + ], + "endpoint": { + "url": "https://cognito-idp-fips.us-west-2.amazonaws.com", + "properties": {}, + "headers": {} + }, + "type": "endpoint" + }, { "conditions": [], "endpoint": { @@ -673,6 +749,31 @@ } ], "rules": [ + { + "conditions": [ + { + "fn": "stringEquals", + "argv": [ + "aws", + { + "fn": "getAttr", + "argv": [ + { + "ref": "PartitionResult" + }, + "name" + ] + } + ] + } + ], + "endpoint": { + "url": "https://cognito-idp.{Region}.amazonaws.com", + "properties": {}, + "headers": {} + }, + "type": "endpoint" + }, { "conditions": [], "endpoint": { @@ -717,6 +818,32 @@ }, "smithy.rules#endpointTests": { "testCases": [ + { + "documentation": "For region af-south-1 with FIPS disabled and DualStack disabled", + "expect": { + "endpoint": { + "url": "https://cognito-idp.af-south-1.amazonaws.com" + } + }, + "params": { + "Region": "af-south-1", + "UseFIPS": false, + "UseDualStack": false + } + }, + { + "documentation": "For region ap-east-1 with FIPS disabled and DualStack disabled", + "expect": { + "endpoint": { + "url": "https://cognito-idp.ap-east-1.amazonaws.com" + } + }, + "params": { + "Region": "ap-east-1", + "UseFIPS": false, + "UseDualStack": false + } + }, { "documentation": "For region ap-northeast-1 with FIPS disabled and DualStack disabled", "expect": { @@ -743,6 +870,19 @@ "UseDualStack": false } }, + { + "documentation": "For region ap-northeast-3 with FIPS disabled and DualStack disabled", + "expect": { + "endpoint": { + "url": "https://cognito-idp.ap-northeast-3.amazonaws.com" + } + }, + "params": { + "Region": "ap-northeast-3", + "UseFIPS": false, + "UseDualStack": false + } + }, { "documentation": "For region ap-south-1 with FIPS disabled and DualStack disabled", "expect": { @@ -756,6 +896,19 @@ "UseDualStack": false } }, + { + "documentation": "For region ap-south-2 with FIPS disabled and DualStack disabled", + "expect": { + "endpoint": { + "url": "https://cognito-idp.ap-south-2.amazonaws.com" + } + }, + "params": { + "Region": "ap-south-2", + "UseFIPS": false, + "UseDualStack": false + } + }, { "documentation": "For region ap-southeast-1 with FIPS disabled and DualStack disabled", "expect": { @@ -782,6 +935,32 @@ "UseDualStack": false } }, + { + "documentation": "For region ap-southeast-3 with FIPS disabled and 
DualStack disabled", + "expect": { + "endpoint": { + "url": "https://cognito-idp.ap-southeast-3.amazonaws.com" + } + }, + "params": { + "Region": "ap-southeast-3", + "UseFIPS": false, + "UseDualStack": false + } + }, + { + "documentation": "For region ap-southeast-4 with FIPS disabled and DualStack disabled", + "expect": { + "endpoint": { + "url": "https://cognito-idp.ap-southeast-4.amazonaws.com" + } + }, + "params": { + "Region": "ap-southeast-4", + "UseFIPS": false, + "UseDualStack": false + } + }, { "documentation": "For region ca-central-1 with FIPS disabled and DualStack disabled", "expect": { @@ -795,6 +974,19 @@ "UseDualStack": false } }, + { + "documentation": "For region ca-west-1 with FIPS disabled and DualStack disabled", + "expect": { + "endpoint": { + "url": "https://cognito-idp.ca-west-1.amazonaws.com" + } + }, + "params": { + "Region": "ca-west-1", + "UseFIPS": false, + "UseDualStack": false + } + }, { "documentation": "For region eu-central-1 with FIPS disabled and DualStack disabled", "expect": { @@ -808,6 +1000,19 @@ "UseDualStack": false } }, + { + "documentation": "For region eu-central-2 with FIPS disabled and DualStack disabled", + "expect": { + "endpoint": { + "url": "https://cognito-idp.eu-central-2.amazonaws.com" + } + }, + "params": { + "Region": "eu-central-2", + "UseFIPS": false, + "UseDualStack": false + } + }, { "documentation": "For region eu-north-1 with FIPS disabled and DualStack disabled", "expect": { @@ -821,6 +1026,32 @@ "UseDualStack": false } }, + { + "documentation": "For region eu-south-1 with FIPS disabled and DualStack disabled", + "expect": { + "endpoint": { + "url": "https://cognito-idp.eu-south-1.amazonaws.com" + } + }, + "params": { + "Region": "eu-south-1", + "UseFIPS": false, + "UseDualStack": false + } + }, + { + "documentation": "For region eu-south-2 with FIPS disabled and DualStack disabled", + "expect": { + "endpoint": { + "url": "https://cognito-idp.eu-south-2.amazonaws.com" + } + }, + "params": { + "Region": "eu-south-2", + "UseFIPS": false, + "UseDualStack": false + } + }, { "documentation": "For region eu-west-1 with FIPS disabled and DualStack disabled", "expect": { @@ -860,6 +1091,32 @@ "UseDualStack": false } }, + { + "documentation": "For region il-central-1 with FIPS disabled and DualStack disabled", + "expect": { + "endpoint": { + "url": "https://cognito-idp.il-central-1.amazonaws.com" + } + }, + "params": { + "Region": "il-central-1", + "UseFIPS": false, + "UseDualStack": false + } + }, + { + "documentation": "For region me-central-1 with FIPS disabled and DualStack disabled", + "expect": { + "endpoint": { + "url": "https://cognito-idp.me-central-1.amazonaws.com" + } + }, + "params": { + "Region": "me-central-1", + "UseFIPS": false, + "UseDualStack": false + } + }, { "documentation": "For region me-south-1 with FIPS disabled and DualStack disabled", "expect": { @@ -912,6 +1169,19 @@ "UseDualStack": false } }, + { + "documentation": "For region us-east-1 with FIPS enabled and DualStack enabled", + "expect": { + "endpoint": { + "url": "https://cognito-idp-fips.us-east-1.amazonaws.com" + } + }, + "params": { + "Region": "us-east-1", + "UseFIPS": true, + "UseDualStack": true + } + }, { "documentation": "For region us-east-2 with FIPS disabled and DualStack disabled", "expect": { @@ -938,6 +1208,19 @@ "UseDualStack": false } }, + { + "documentation": "For region us-east-2 with FIPS enabled and DualStack enabled", + "expect": { + "endpoint": { + "url": "https://cognito-idp-fips.us-east-2.amazonaws.com" + } + }, + "params": { + 
"Region": "us-east-2", + "UseFIPS": true, + "UseDualStack": true + } + }, { "documentation": "For region us-west-1 with FIPS disabled and DualStack disabled", "expect": { @@ -964,6 +1247,19 @@ "UseDualStack": false } }, + { + "documentation": "For region us-west-1 with FIPS enabled and DualStack enabled", + "expect": { + "endpoint": { + "url": "https://cognito-idp-fips.us-west-1.amazonaws.com" + } + }, + "params": { + "Region": "us-west-1", + "UseFIPS": true, + "UseDualStack": true + } + }, { "documentation": "For region us-west-2 with FIPS disabled and DualStack disabled", "expect": { @@ -991,14 +1287,14 @@ } }, { - "documentation": "For region us-east-1 with FIPS enabled and DualStack enabled", + "documentation": "For region us-west-2 with FIPS enabled and DualStack enabled", "expect": { "endpoint": { - "url": "https://cognito-idp-fips.us-east-1.api.aws" + "url": "https://cognito-idp-fips.us-west-2.amazonaws.com" } }, "params": { - "Region": "us-east-1", + "Region": "us-west-2", "UseFIPS": true, "UseDualStack": true } @@ -1007,7 +1303,7 @@ "documentation": "For region us-east-1 with FIPS disabled and DualStack enabled", "expect": { "endpoint": { - "url": "https://cognito-idp.us-east-1.api.aws" + "url": "https://cognito-idp.us-east-1.amazonaws.com" } }, "params": { diff --git a/codegen/sdk-codegen/aws-models/connect.json b/codegen/sdk-codegen/aws-models/connect.json index 0dd364d082b..7aeb9c9d136 100644 --- a/codegen/sdk-codegen/aws-models/connect.json +++ b/codegen/sdk-codegen/aws-models/connect.json @@ -1057,6 +1057,9 @@ { "target": "com.amazonaws.connect#DeleteContactFlowModule" }, + { + "target": "com.amazonaws.connect#DeleteContactFlowVersion" + }, { "target": "com.amazonaws.connect#DeleteEmailAddress" }, @@ -3734,7 +3737,7 @@ } ], "traits": { - "smithy.api#documentation": ">Associates a set of proficiencies with a user.
", + "smithy.api#documentation": "Associates a set of proficiencies with a user.
", "smithy.api#http": { "method": "POST", "uri": "/users/{InstanceId}/{UserId}/associate-proficiencies", @@ -6463,6 +6466,12 @@ "traits": { "smithy.api#enumValue": "QUEUE_TRANSFER" } + }, + "CAMPAIGN": { + "target": "smithy.api#Unit", + "traits": { + "smithy.api#enumValue": "CAMPAIGN" + } } } }, @@ -7417,7 +7426,7 @@ "FlowContentSha256": { "target": "com.amazonaws.connect#FlowContentSha256", "traits": { - "smithy.api#documentation": "Indicates the checksum value of the flow content.
" + "smithy.api#documentation": "Indicates the checksum value of the latest published flow content.
" } } }, @@ -7457,7 +7466,7 @@ } ], "traits": { - "smithy.api#documentation": "Publishes a new version of the flow provided. Versions are immutable and monotonically\n increasing. If a version of the same flow content already exists, no new version is created and\n instead the existing version number is returned. If the FlowContentSha256
provided\n is different from the FlowContentSha256
of the $LATEST
published flow\n content, then an error is returned. This API only supports creating versions for flows of type\n Campaign
.
Publishes a new version of the flow provided. Versions are immutable and monotonically\n increasing. If the FlowContentSha256
provided is different from the\n FlowContentSha256
of the $LATEST
published flow content, then an error\n is returned. This API only supports creating versions for flows of type\n Campaign
.
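The checksum handshake described here can be driven from the DescribeContactFlow response, which (per this same change) now reports the checksum of the latest published content. A minimal boto3 sketch, with placeholder IDs:

    import boto3

    connect = boto3.client("connect")
    instance_id = "11111111-1111-1111-1111-111111111111"  # placeholder
    flow_id = "22222222-2222-2222-2222-222222222222"      # placeholder

    # Read the checksum of the latest published content, then echo it back so
    # the publish fails instead of forking if $LATEST changed in the meantime.
    flow = connect.describe_contact_flow(
        InstanceId=instance_id, ContactFlowId=flow_id
    )["ContactFlow"]

    version = connect.create_contact_flow_version(
        InstanceId=instance_id,
        ContactFlowId=flow_id,
        FlowContentSha256=flow["FlowContentSha256"],
    )["Version"]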
Indicates the checksum value of the flow content.
" } }, + "ContactFlowVersion": { + "target": "com.amazonaws.connect#ResourceVersion", + "traits": { + "smithy.api#documentation": "The identifier of the flow version.
" + } + }, "LastModifiedTime": { "target": "com.amazonaws.connect#Timestamp", "traits": { @@ -11112,6 +11127,82 @@ "smithy.api#output": {} } }, + "com.amazonaws.connect#DeleteContactFlowVersion": { + "type": "operation", + "input": { + "target": "com.amazonaws.connect#DeleteContactFlowVersionRequest" + }, + "output": { + "target": "com.amazonaws.connect#DeleteContactFlowVersionResponse" + }, + "errors": [ + { + "target": "com.amazonaws.connect#AccessDeniedException" + }, + { + "target": "com.amazonaws.connect#InternalServiceException" + }, + { + "target": "com.amazonaws.connect#InvalidParameterException" + }, + { + "target": "com.amazonaws.connect#InvalidRequestException" + }, + { + "target": "com.amazonaws.connect#ResourceNotFoundException" + }, + { + "target": "com.amazonaws.connect#ThrottlingException" + } + ], + "traits": { + "smithy.api#documentation": "Deletes the particular version specified in flow version identifier.
", + "smithy.api#http": { + "method": "DELETE", + "uri": "/contact-flows/{InstanceId}/{ContactFlowId}/version/{ContactFlowVersion}", + "code": 200 + } + } + }, + "com.amazonaws.connect#DeleteContactFlowVersionRequest": { + "type": "structure", + "members": { + "InstanceId": { + "target": "com.amazonaws.connect#InstanceId", + "traits": { + "smithy.api#documentation": "The identifier of the Amazon Connect instance. You can find the instance ID in the Amazon Resource Name (ARN) of the instance.
", + "smithy.api#httpLabel": {}, + "smithy.api#required": {} + } + }, + "ContactFlowId": { + "target": "com.amazonaws.connect#ARN", + "traits": { + "smithy.api#documentation": "The identifier of the flow.
", + "smithy.api#httpLabel": {}, + "smithy.api#required": {} + } + }, + "ContactFlowVersion": { + "target": "com.amazonaws.connect#ResourceVersion", + "traits": { + "smithy.api#documentation": "The identifier of the flow version.
", + "smithy.api#httpLabel": {}, + "smithy.api#required": {} + } + } + }, + "traits": { + "smithy.api#input": {} + } + }, + "com.amazonaws.connect#DeleteContactFlowVersionResponse": { + "type": "structure", + "members": {}, + "traits": { + "smithy.api#output": {} + } + }, "com.amazonaws.connect#DeleteEmailAddress": { "type": "operation", "input": { @@ -11695,7 +11786,7 @@ } ], "traits": { - "smithy.api#documentation": "Deletes a queue. It isn't possible to delete a queue by using the Amazon Connect admin website.
", + "smithy.api#documentation": "Deletes a queue.
", "smithy.api#http": { "method": "DELETE", "uri": "/queues/{InstanceId}/{QueueId}", @@ -12797,7 +12888,7 @@ } ], "traits": { - "smithy.api#documentation": "Describes the specified flow.
\nYou can also create and update flows using the Amazon Connect\n Flow language.
\nUse the $SAVED
alias in the request to describe the SAVED
content\n of a Flow. For example, arn:aws:.../contact-flow/{id}:$SAVED
. After a flow is\n published, $SAVED
needs to be supplied to view saved content that has not been\n published.
In the response, Status indicates the flow status as either\n SAVED
or PUBLISHED
. The PUBLISHED
status will initiate\n validation on the content. SAVED
does not initiate validation of the content.\n SAVED
| PUBLISHED
\n
Describes the specified flow.
\nYou can also create and update flows using the Amazon Connect\n Flow language.
\nUse the $SAVED
alias in the request to describe the SAVED
content\n of a Flow. For example, arn:aws:.../contact-flow/{id}:$SAVED
. After a flow is\n published, $SAVED
needs to be supplied to view saved content that has not been\n published.
Use arn:aws:.../contact-flow/{id}:{version}
to retrieve the content of a\n specific flow version.
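A short boto3 sketch of the version-qualified lookup the new paragraph describes (the ARN and instance ID are placeholders):

    import boto3

    connect = boto3.client("connect")

    flow_arn = (
        "arn:aws:connect:us-west-2:123456789012:instance/"
        "11111111-1111-1111-1111-111111111111/contact-flow/"
        "22222222-2222-2222-2222-222222222222"
    )

    # Append :{version} to read one published version, or :$SAVED to read
    # unpublished saved content.
    resp = connect.describe_contact_flow(
        InstanceId="11111111-1111-1111-1111-111111111111",
        ContactFlowId=f"{flow_arn}:2",
    )
    print(resp["ContactFlow"]["Status"])  # SAVED or PUBLISHED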
In the response, Status indicates the flow status as either\n SAVED
or PUBLISHED
. The PUBLISHED
status will initiate\n validation on the content. SAVED
does not initiate validation of the content.\n SAVED
| PUBLISHED
\n
This setting enables partial ingestion at entry-level. If set to true
, all TQVs (time, quality, value data points) that don't result in an error are ingested. If set to \n false
, an invalid TQV fails ingestion of the entire entry that contains it.
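A hedged boto3 sketch of what entry-level partial ingestion means for ingestion calls such as BatchPutAssetPropertyValue: with the setting enabled, only the offending TQVs are dropped, and per-entry failures surface in errorEntries rather than failing the request (asset and property IDs are placeholders):

    import math
    import time

    import boto3

    iotsitewise = boto3.client("iotsitewise")

    resp = iotsitewise.batch_put_asset_property_value(
        entries=[
            {
                "entryId": "entry-1",
                "assetId": "11111111-1111-1111-1111-111111111111",
                "propertyId": "22222222-2222-2222-2222-222222222222",
                "propertyValues": [
                    {
                        # Double.NaN is accepted per the updated doubleValue doc.
                        "value": {"doubleValue": math.nan},
                        "timestamp": {"timeInSeconds": int(time.time())},
                        "quality": "GOOD",
                    }
                ],
            }
        ]
    )
    print(resp["errorEntries"])  # per-entry failures, not a whole-call error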
Set this period to specify how long your data is stored in the warm tier before it is deleted. You can set this only if cold tier is enabled.
" } + }, + "disallowIngestNullNaN": { + "target": "com.amazonaws.iotsitewise#DisallowIngestNullNaN", + "traits": { + "smithy.api#documentation": "Describes the configuration for ingesting NULL and NaN data. \n By default the feature is allowed. The feature is disallowed if the value is true
.
The type of null asset property data.
", + "smithy.api#required": {} + } + } + }, + "traits": { + "smithy.api#documentation": "The value type of null asset property data with BAD and UNCERTAIN qualities.
" + } + }, "com.amazonaws.iotsitewise#PropertyValueStringValue": { "type": "string" }, @@ -14282,6 +14310,12 @@ "traits": { "smithy.api#documentation": "Set this period to specify how long your data is stored in the warm tier before it is deleted. You can set this only if cold tier is enabled.
" } + }, + "disallowIngestNullNaN": { + "target": "com.amazonaws.iotsitewise#DisallowIngestNullNaN", + "traits": { + "smithy.api#documentation": "Describes the configuration for ingesting NULL and NaN data. \n By default the feature is allowed. The feature is disallowed if the value is true
.
Set this period to specify how long your data is stored in the warm tier before it is deleted. You can set this only if cold tier is enabled.
" } + }, + "disallowIngestNullNaN": { + "target": "com.amazonaws.iotsitewise#DisallowIngestNullNaN", + "traits": { + "smithy.api#documentation": "Describes the configuration for ingesting NULL and NaN data. \n By default the feature is allowed. The feature is disallowed if the value is true
.
Asset property data of type string (sequence of characters).
" + "smithy.api#documentation": "\n Asset property data of type string (sequence of characters).\n The allowed pattern: \"^$|[^\\u0000-\\u001F\\u007F]+\". The max length is 1024.\n
" } }, "integerValue": { @@ -16467,7 +16542,7 @@ "doubleValue": { "target": "com.amazonaws.iotsitewise#PropertyValueDoubleValue", "traits": { - "smithy.api#documentation": "Asset property data of type double (floating point number).
" + "smithy.api#documentation": "\n Asset property data of type double (floating point number). The min value is -10^10. \n The max value is 10^10. Double.NaN is allowed.\n
" } }, "booleanValue": { @@ -16475,6 +16550,12 @@ "traits": { "smithy.api#documentation": "Asset property data of type Boolean (true or false).
" } + }, + "nullValue": { + "target": "com.amazonaws.iotsitewise#PropertyValueNullValue", + "traits": { + "smithy.api#documentation": "The type of null asset property data with BAD and UNCERTAIN qualities.
" + } } }, "traits": { diff --git a/codegen/sdk-codegen/aws-models/quicksight.json b/codegen/sdk-codegen/aws-models/quicksight.json index 8f68cab4baa..f8962fea7dc 100644 --- a/codegen/sdk-codegen/aws-models/quicksight.json +++ b/codegen/sdk-codegen/aws-models/quicksight.json @@ -24181,6 +24181,23 @@ "smithy.api#documentation": "The configuration of destination parameter values.
\nThis is a union type structure. For this structure to be valid, only one of the attributes can be defined.
" } }, + "com.amazonaws.quicksight#DigitGroupingStyle": { + "type": "enum", + "members": { + "DEFAULT": { + "target": "smithy.api#Unit", + "traits": { + "smithy.api#enumValue": "DEFAULT" + } + }, + "LAKHS": { + "target": "smithy.api#Unit", + "traits": { + "smithy.api#enumValue": "LAKHS" + } + } + } + }, "com.amazonaws.quicksight#DimensionField": { "type": "structure", "members": { @@ -39032,6 +39049,18 @@ "traits": { "smithy.api#enumValue": "TRILLIONS" } + }, + "LAKHS": { + "target": "smithy.api#Unit", + "traits": { + "smithy.api#enumValue": "LAKHS" + } + }, + "CRORES": { + "target": "smithy.api#Unit", + "traits": { + "smithy.api#enumValue": "CRORES" + } } } }, @@ -51628,7 +51657,7 @@ "traits": { "smithy.api#length": { "min": 0, - "max": 100 + "max": 201 } } }, @@ -51971,11 +52000,23 @@ } } }, + "com.amazonaws.quicksight#TableUnaggregatedFieldList": { + "type": "list", + "member": { + "target": "com.amazonaws.quicksight#UnaggregatedField" + }, + "traits": { + "smithy.api#length": { + "min": 0, + "max": 201 + } + } + }, "com.amazonaws.quicksight#TableUnaggregatedFieldWells": { "type": "structure", "members": { "Values": { - "target": "com.amazonaws.quicksight#UnaggregatedFieldList", + "target": "com.amazonaws.quicksight#TableUnaggregatedFieldList", "traits": { "smithy.api#documentation": "The values field well for a pivot table. Values are unaggregated for an unaggregated table.
" } @@ -53200,6 +53241,12 @@ "traits": { "smithy.api#documentation": "Determines the visibility of the thousands separator.
" } + }, + "GroupingStyle": { + "target": "com.amazonaws.quicksight#DigitGroupingStyle", + "traits": { + "smithy.api#documentation": "Determines the way numbers are styled to accommodate different readability standards. The DEFAULT
value uses the standard international grouping system and groups numbers by the thousands. The LAKHS
value uses the Indian numbering system and groups numbers by lakhs and crores.
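A hedged fragment showing where the new enum would sit in a numeric format configuration; the surrounding key names are assumptions for illustration, not taken from this diff:

    # Hypothetical thousands-separator options for an analysis or dashboard
    # definition; GroupingStyle is the member this hunk adds.
    thousands_separator = {
        "Visibility": "VISIBLE",
        "GroupingStyle": "LAKHS",  # DEFAULT groups 1,000,000; LAKHS groups 10,00,000
    }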
A map of attributes with their corresponding values.
\nThe following lists names, descriptions, and values of the special request parameters\n that the CreateTopic
action uses:
\n DeliveryPolicy
– The policy that defines how Amazon SNS retries\n failed deliveries to HTTP/S endpoints.
\n DisplayName
– The display name to use for a topic with SMS\n subscriptions.
\n FifoTopic
– Set to true to create a FIFO topic.
\n Policy
– The policy that defines who can access your\n topic. By default, only the topic owner can publish or subscribe to the\n topic.
\n SignatureVersion
– The signature version corresponds to\n the hashing algorithm used while creating the signature of the notifications,\n subscription confirmations, or unsubscribe confirmation messages sent by Amazon SNS.\n By default, SignatureVersion
is set to 1
.
\n TracingConfig
– Tracing mode of an Amazon SNS topic. By default\n TracingConfig
is set to PassThrough
, and the topic\n passes through the tracing header it receives from an Amazon SNS publisher to its\n subscriptions. If set to Active
, Amazon SNS will vend X-Ray segment data\n to topic owner account if the sampled flag in the tracing header is true. This\n is only supported on standard topics.
The following attribute applies only to server-side\n encryption:
\n\n KmsMasterKeyId
– The ID of an Amazon Web Services managed customer master\n key (CMK) for Amazon SNS or a custom CMK. For more information, see Key\n Terms. For more examples, see KeyId in the Key Management Service API Reference.
The following attributes apply only to FIFO topics:
\n\n ArchivePolicy
– The policy that sets the retention period\n for messages stored in the message archive of an Amazon SNS FIFO\n topic.
\n ContentBasedDeduplication
– Enables content-based\n deduplication for FIFO topics.
By default, ContentBasedDeduplication
is set to\n false
. If you create a FIFO topic and this attribute is\n false
, you must specify a value for the\n MessageDeduplicationId
parameter for the Publish\n action.
When you set ContentBasedDeduplication
to true
,\n Amazon SNS uses a SHA-256 hash to generate the\n MessageDeduplicationId
using the body of the message (but not\n the attributes of the message).
(Optional) To override the generated value, you can specify a value for the\n MessageDeduplicationId
parameter for the Publish
\n action.
A map of attributes with their corresponding values.
\nThe following lists names, descriptions, and values of the special request parameters\n that the CreateTopic
action uses:
\n DeliveryPolicy
– The policy that defines how Amazon SNS retries\n failed deliveries to HTTP/S endpoints.
\n DisplayName
– The display name to use for a topic with SMS\n subscriptions.
\n FifoTopic
– Set to true to create a FIFO topic.
\n Policy
– The policy that defines who can access your\n topic. By default, only the topic owner can publish or subscribe to the\n topic.
\n SignatureVersion
– The signature version corresponds to\n the hashing algorithm used while creating the signature of the notifications,\n subscription confirmations, or unsubscribe confirmation messages sent by Amazon SNS.\n By default, SignatureVersion
is set to 1
.
\n TracingConfig
– Tracing mode of an Amazon SNS topic. By default\n TracingConfig
is set to PassThrough
, and the topic\n passes through the tracing header it receives from an Amazon SNS publisher to its\n subscriptions. If set to Active
, Amazon SNS will vend X-Ray segment data\n to topic owner account if the sampled flag in the tracing header is true. This\n is only supported on standard topics.
The following attribute applies only to server-side\n encryption:
\n\n KmsMasterKeyId
– The ID of an Amazon Web Services managed customer master\n key (CMK) for Amazon SNS or a custom CMK. For more information, see Key\n Terms. For more examples, see KeyId in the Key Management Service API Reference.
The following attributes apply only to FIFO topics:
\n\n ArchivePolicy
– The policy that sets the retention period\n for messages stored in the message archive of an Amazon SNS FIFO\n topic.
\n ContentBasedDeduplication
– Enables content-based\n deduplication for FIFO topics.
By default, ContentBasedDeduplication
is set to\n false
. If you create a FIFO topic and this attribute is\n false
, you must specify a value for the\n MessageDeduplicationId
parameter for the Publish action.
When you set ContentBasedDeduplication
to\n true
, Amazon SNS uses a SHA-256 hash to\n generate the MessageDeduplicationId
using the body of the\n message (but not the attributes of the message).
(Optional) To override the generated value, you can specify a value\n for the MessageDeduplicationId
parameter for the\n Publish
action.
\n FifoThroughputScope
– Enables higher throughput for your FIFO topic by adjusting the scope of deduplication. This attribute has two possible values:
\n Topic
– The scope of message deduplication is across the entire topic. This is the default value and maintains existing behavior, with a maximum throughput of 3000 messages per second or 20MB per second, whichever comes first.
\n MessageGroup
– The scope of deduplication is within each individual message group, which enables higher throughput per topic subject to regional quotas. For more information on quotas or to request an increase, see Amazon SNS service quotas in the Amazon Web Services General Reference.
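A minimal boto3 sketch of creating a FIFO topic with the new attribute (the topic name is a placeholder; SNS attribute values are passed as strings):

    import boto3

    sns = boto3.client("sns")

    topic_arn = sns.create_topic(
        Name="orders.fifo",
        Attributes={
            "FifoTopic": "true",
            "ContentBasedDeduplication": "true",
            "FifoThroughputScope": "MessageGroup",  # default scope is "Topic"
        },
    )["TopicArn"]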
This parameter applies only to FIFO (first-in-first-out) topics.
\nThe token used for deduplication of messages within a 5-minute minimum deduplication\n interval. If a message with a particular MessageDeduplicationId
is sent\n successfully, subsequent messages with the same MessageDeduplicationId
are\n accepted successfully but aren't delivered.
Every message must have a unique MessageDeduplicationId
.
You may provide a MessageDeduplicationId
\n explicitly.
If you aren't able to provide a MessageDeduplicationId
\n and you enable ContentBasedDeduplication
for your topic,\n Amazon SNS uses a SHA-256 hash to generate the\n MessageDeduplicationId
using the body of the message\n (but not the attributes of the message).
If you don't provide a MessageDeduplicationId
and the\n topic doesn't have ContentBasedDeduplication
set, the\n action fails with an error.
If the topic has a ContentBasedDeduplication
set, your\n MessageDeduplicationId
overrides the generated one.\n
When ContentBasedDeduplication
is in effect, messages with\n identical content sent within the deduplication interval are treated as\n duplicates and only one copy of the message is delivered.
If you send one message with ContentBasedDeduplication
enabled,\n and then another message with a MessageDeduplicationId
that is the\n same as the one generated for the first MessageDeduplicationId
, the\n two messages are treated as duplicates and only one copy of the message is\n delivered.
The MessageDeduplicationId
is available to the consumer of the\n message (this can be useful for troubleshooting delivery issues).
If a message is sent successfully but the acknowledgement is lost and the message\n is resent with the same MessageDeduplicationId
after the deduplication\n interval, Amazon SNS can't detect duplicate messages.
Amazon SNS continues to keep track of the message deduplication ID even after the\n message is received and deleted.
\nThe length of MessageDeduplicationId
is 128 characters.
\n MessageDeduplicationId
can contain alphanumeric characters (a-z,\n A-Z, 0-9)
and punctuation\n (!\"#$%&'()*+,-./:;<=>?@[\\]^_`{|}~)
.
This parameter applies only to FIFO (first-in-first-out) topics.
\nThis parameter applies only to FIFO (first-in-first-out) topics. The\n MessageDeduplicationId
can contain up to 128 alphanumeric\n characters (a-z, A-Z, 0-9)
and punctuation\n (!\"#$%&'()*+,-./:;<=>?@[\\]^_`{|}~)
.
Every message must have a unique MessageDeduplicationId
, which is\n a token used for deduplication of sent messages within the 5 minute minimum\n deduplication interval.
The scope of deduplication depends on the FifoThroughputScope
\n attribute, when set to Topic
the message deduplication scope is\n across the entire topic, when set to MessageGroup
the message\n deduplication scope is within each individual message group.
If a message with a particular MessageDeduplicationId
is sent\n successfully, subsequent messages within the deduplication scope and interval,\n with the same MessageDeduplicationId
, are accepted successfully but\n aren't delivered.
Every message must have a unique MessageDeduplicationId
.
You may provide a MessageDeduplicationId
\n explicitly.
If you aren't able to provide a MessageDeduplicationId
\n and you enable ContentBasedDeduplication
for your topic,\n Amazon SNS uses a SHA-256 hash to generate the\n MessageDeduplicationId
using the body of the message\n (but not the attributes of the message).
If you don't provide a MessageDeduplicationId
and the\n topic doesn't have ContentBasedDeduplication
set, the\n action fails with an error.
If the topic has a ContentBasedDeduplication
set, your\n MessageDeduplicationId
overrides the generated one.\n
When ContentBasedDeduplication
is in effect, messages with\n identical content sent within the deduplication scope and interval are treated\n as duplicates and only one copy of the message is delivered.
If you send one message with ContentBasedDeduplication
enabled,\n and then another message with a MessageDeduplicationId
that is the\n same as the one generated for the first MessageDeduplicationId
, the\n two messages are treated as duplicates, within the deduplication scope and\n interval, and only one copy of the message is delivered.
The MessageDeduplicationId
is available to the consumer of the\n message (this can be useful for troubleshooting delivery issues).
If a message is sent successfully but the acknowledgement is lost and the message\n is resent with the same MessageDeduplicationId
after the deduplication\n interval, Amazon SNS can't detect duplicate messages.
Amazon SNS continues to keep track of the message deduplication ID even after the\n message is received and deleted.
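A minimal boto3 sketch of the deduplication contract described above (the topic ARN and IDs are placeholders):

    import boto3

    sns = boto3.client("sns")

    # With ContentBasedDeduplication enabled the explicit ID is optional, and
    # passing one overrides the content-based hash.
    sns.publish(
        TopicArn="arn:aws:sns:us-east-1:123456789012:orders.fifo",
        Message="order 5678 shipped",
        MessageGroupId="customer-1234",               # required on FIFO topics
        MessageDeduplicationId="order-5678-shipped",
    )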
\nThis parameter applies only to FIFO (first-in-first-out) topics. The\n MessageDeduplicationId
can contain up to 128 alphanumeric characters\n (a-z, A-Z, 0-9)
and punctuation\n (!\"#$%&'()*+,-./:;<=>?@[\\]^_`{|}~)
.
Every message must have a unique MessageDeduplicationId
, which is a token\n used for deduplication of sent messages. If a message with a particular\n MessageDeduplicationId
is sent successfully, any message sent with the\n same MessageDeduplicationId
during the 5-minute deduplication interval is\n treated as a duplicate.
If the topic has ContentBasedDeduplication
set, the system generates a\n MessageDeduplicationId
based on the contents of the message. Your\n MessageDeduplicationId
overrides the generated one.
This parameter applies only to FIFO (first-in-first-out) topics. The\n MessageDeduplicationId
can contain up to 128 alphanumeric\n characters (a-z, A-Z, 0-9)
and punctuation\n (!\"#$%&'()*+,-./:;<=>?@[\\]^_`{|}~)
.
Every message must have a unique MessageDeduplicationId
, which is\n a token used for deduplication of sent messages within the 5 minute minimum\n deduplication interval.
The scope of deduplication depends on the FifoThroughputScope
\n attribute, when set to Topic
the message deduplication scope is\n across the entire topic, when set to MessageGroup
the message\n deduplication scope is within each individual message group.
If a message with a particular MessageDeduplicationId
is sent\n successfully, subsequent messages within the deduplication scope and interval,\n with the same MessageDeduplicationId
, are accepted successfully but\n aren't delivered.
Every message must have a unique MessageDeduplicationId
:
You may provide a MessageDeduplicationId
\n explicitly.
If you aren't able to provide a MessageDeduplicationId
\n and you enable ContentBasedDeduplication
for your topic,\n Amazon SNS uses a SHA-256 hash to generate the\n MessageDeduplicationId
using the body of the message\n (but not the attributes of the message).
If you don't provide a MessageDeduplicationId
and the\n topic doesn't have ContentBasedDeduplication
set, the\n action fails with an error.
If the topic has a ContentBasedDeduplication
set, your\n MessageDeduplicationId
overrides the generated one.\n
When ContentBasedDeduplication
is in effect, messages with\n identical content sent within the deduplication scope and interval are treated\n as duplicates and only one copy of the message is delivered.
If you send one message with ContentBasedDeduplication
enabled,\n and then another message with a MessageDeduplicationId
that is the\n same as the one generated for the first MessageDeduplicationId
, the\n two messages are treated as duplicates, within the deduplication scope and\n interval, and only one copy of the message is delivered.
A map of attributes with their corresponding values.
\nThe following lists the names, descriptions, and values of the special request\n parameters that the SetTopicAttributes
action uses:
\n ApplicationSuccessFeedbackRoleArn
– Indicates successful\n message delivery status for an Amazon SNS topic that is subscribed to a platform\n application endpoint.
\n DeliveryPolicy
– The policy that defines how Amazon SNS retries\n failed deliveries to HTTP/S endpoints.
\n DisplayName
– The display name to use for a topic with SMS\n subscriptions.
\n Policy
– The policy that defines who can access your\n topic. By default, only the topic owner can publish or subscribe to the\n topic.
\n TracingConfig
– Tracing mode of an Amazon SNS topic. By default\n TracingConfig
is set to PassThrough
, and the topic\n passes through the tracing header it receives from an Amazon SNS publisher to its\n subscriptions. If set to Active
, Amazon SNS will vend X-Ray segment data\n to topic owner account if the sampled flag in the tracing header is true. This\n is only supported on standard topics.
HTTP
\n\n HTTPSuccessFeedbackRoleArn
– Indicates successful\n message delivery status for an Amazon SNS topic that is subscribed to an HTTP\n endpoint.
\n HTTPSuccessFeedbackSampleRate
– Indicates\n percentage of successful messages to sample for an Amazon SNS topic that is\n subscribed to an HTTP endpoint.
\n HTTPFailureFeedbackRoleArn
– Indicates failed\n message delivery status for an Amazon SNS topic that is subscribed to an HTTP\n endpoint.
Amazon Kinesis Data Firehose
\n\n FirehoseSuccessFeedbackRoleArn
– Indicates\n successful message delivery status for an Amazon SNS topic that is subscribed\n to an Amazon Kinesis Data Firehose endpoint.
\n FirehoseSuccessFeedbackSampleRate
– Indicates\n percentage of successful messages to sample for an Amazon SNS topic that is\n subscribed to an Amazon Kinesis Data Firehose endpoint.
\n FirehoseFailureFeedbackRoleArn
– Indicates failed\n message delivery status for an Amazon SNS topic that is subscribed to an\n Amazon Kinesis Data Firehose endpoint.
Lambda
\n\n LambdaSuccessFeedbackRoleArn
– Indicates\n successful message delivery status for an Amazon SNS topic that is subscribed\n to an Lambda endpoint.
\n LambdaSuccessFeedbackSampleRate
– Indicates\n percentage of successful messages to sample for an Amazon SNS topic that is\n subscribed to an Lambda endpoint.
\n LambdaFailureFeedbackRoleArn
– Indicates failed\n message delivery status for an Amazon SNS topic that is subscribed to an\n Lambda endpoint.
Platform application endpoint
\n\n ApplicationSuccessFeedbackRoleArn
– Indicates\n successful message delivery status for an Amazon SNS topic that is subscribed\n to an Amazon Web Services application endpoint.
\n ApplicationSuccessFeedbackSampleRate
– Indicates\n percentage of successful messages to sample for an Amazon SNS topic that is\n subscribed to an Amazon Web Services application endpoint.
\n ApplicationFailureFeedbackRoleArn
– Indicates\n failed message delivery status for an Amazon SNS topic that is subscribed to\n an Amazon Web Services application endpoint.
In addition to being able to configure topic attributes for message\n delivery status of notification messages sent to Amazon SNS application\n endpoints, you can also configure application attributes for the delivery\n status of push notification messages sent to push notification\n services.
\nFor example, For more information, see Using Amazon SNS Application\n Attributes for Message Delivery Status.
\nAmazon SQS
\n\n SQSSuccessFeedbackRoleArn
– Indicates successful\n message delivery status for an Amazon SNS topic that is subscribed to an\n Amazon SQS endpoint.
\n SQSSuccessFeedbackSampleRate
– Indicates\n percentage of successful messages to sample for an Amazon SNS topic that is\n subscribed to an Amazon SQS endpoint.
\n SQSFailureFeedbackRoleArn
– Indicates failed\n message delivery status for an Amazon SNS topic that is subscribed to an\n Amazon SQS endpoint.
The
The following attribute applies only to server-side-encryption:
\n\n KmsMasterKeyId
– The ID of an Amazon Web Services managed customer master\n key (CMK) for Amazon SNS or a custom CMK. For more information, see Key\n Terms. For more examples, see KeyId in the Key Management Service API Reference.
\n SignatureVersion
– The signature version corresponds to the\n hashing algorithm used while creating the signature of the notifications,\n subscription confirmations, or unsubscribe confirmation messages sent by Amazon SNS.\n By default, SignatureVersion
is set to 1
.
The following attribute applies only to FIFO topics:
\n\n ArchivePolicy
– The policy that sets the retention period\n for messages stored in the message archive of an Amazon SNS FIFO\n topic.
\n ContentBasedDeduplication
– Enables content-based\n deduplication for FIFO topics.
By default, ContentBasedDeduplication
is set to\n false
. If you create a FIFO topic and this attribute is\n false
, you must specify a value for the\n MessageDeduplicationId
parameter for the Publish\n action.
When you set ContentBasedDeduplication
to true
,\n Amazon SNS uses a SHA-256 hash to generate the\n MessageDeduplicationId
using the body of the message (but not\n the attributes of the message).
(Optional) To override the generated value, you can specify a value for the\n MessageDeduplicationId
parameter for the Publish
\n action.
A map of attributes with their corresponding values.
\nThe following lists the names, descriptions, and values of the special request\n parameters that the SetTopicAttributes
action uses:
\n ApplicationSuccessFeedbackRoleArn
– Indicates successful\n message delivery status for an Amazon SNS topic that is subscribed to a platform\n application endpoint.
\n DeliveryPolicy
– The policy that defines how Amazon SNS retries\n failed deliveries to HTTP/S endpoints.
\n DisplayName
– The display name to use for a topic with SMS\n subscriptions.
\n Policy
– The policy that defines who can access your\n topic. By default, only the topic owner can publish or subscribe to the\n topic.
\n TracingConfig
– Tracing mode of an Amazon SNS topic. By default\n TracingConfig
is set to PassThrough
, and the topic\n passes through the tracing header it receives from an Amazon SNS publisher to its\n subscriptions. If set to Active
, Amazon SNS will vend X-Ray segment data\n to topic owner account if the sampled flag in the tracing header is true. This\n is only supported on standard topics.
HTTP
\n\n HTTPSuccessFeedbackRoleArn
– Indicates successful\n message delivery status for an Amazon SNS topic that is subscribed to an HTTP\n endpoint.
\n HTTPSuccessFeedbackSampleRate
– Indicates\n percentage of successful messages to sample for an Amazon SNS topic that is\n subscribed to an HTTP endpoint.
\n HTTPFailureFeedbackRoleArn
– Indicates failed\n message delivery status for an Amazon SNS topic that is subscribed to an HTTP\n endpoint.
Amazon Kinesis Data Firehose
\n\n FirehoseSuccessFeedbackRoleArn
– Indicates\n successful message delivery status for an Amazon SNS topic that is subscribed\n to an Amazon Kinesis Data Firehose endpoint.
\n FirehoseSuccessFeedbackSampleRate
– Indicates\n percentage of successful messages to sample for an Amazon SNS topic that is\n subscribed to an Amazon Kinesis Data Firehose endpoint.
\n FirehoseFailureFeedbackRoleArn
– Indicates failed\n message delivery status for an Amazon SNS topic that is subscribed to an\n Amazon Kinesis Data Firehose endpoint.
Lambda
\n\n LambdaSuccessFeedbackRoleArn
– Indicates\n successful message delivery status for an Amazon SNS topic that is subscribed\n to an Lambda endpoint.
\n LambdaSuccessFeedbackSampleRate
– Indicates\n percentage of successful messages to sample for an Amazon SNS topic that is\n subscribed to an Lambda endpoint.
\n LambdaFailureFeedbackRoleArn
– Indicates failed\n message delivery status for an Amazon SNS topic that is subscribed to an\n Lambda endpoint.
Platform application endpoint
\n\n ApplicationSuccessFeedbackRoleArn
– Indicates\n successful message delivery status for an Amazon SNS topic that is subscribed\n to an Amazon Web Services application endpoint.
\n ApplicationSuccessFeedbackSampleRate
– Indicates\n percentage of successful messages to sample for an Amazon SNS topic that is\n subscribed to an Amazon Web Services application endpoint.
\n ApplicationFailureFeedbackRoleArn
– Indicates\n failed message delivery status for an Amazon SNS topic that is subscribed to\n an Amazon Web Services application endpoint.
In addition to being able to configure topic attributes for message\n delivery status of notification messages sent to Amazon SNS application\n endpoints, you can also configure application attributes for the delivery\n status of push notification messages sent to push notification\n services.
\nFor example, For more information, see Using Amazon SNS Application\n Attributes for Message Delivery Status.
\nAmazon SQS
\n\n SQSSuccessFeedbackRoleArn
– Indicates successful\n message delivery status for an Amazon SNS topic that is subscribed to an\n Amazon SQS endpoint.
\n SQSSuccessFeedbackSampleRate
– Indicates\n percentage of successful messages to sample for an Amazon SNS topic that is\n subscribed to an Amazon SQS endpoint.
\n SQSFailureFeedbackRoleArn
– Indicates failed\n message delivery status for an Amazon SNS topic that is subscribed to an\n Amazon SQS endpoint.
The
The following attribute applies only to server-side-encryption:
\n\n KmsMasterKeyId
– The ID of an Amazon Web Services managed customer master\n key (CMK) for Amazon SNS or a custom CMK. For more information, see Key\n Terms. For more examples, see KeyId in the Key Management Service API Reference.
\n SignatureVersion
– The signature version corresponds to the\n hashing algorithm used while creating the signature of the notifications,\n subscription confirmations, or unsubscribe confirmation messages sent by Amazon SNS.\n By default, SignatureVersion
is set to 1
.
The following attribute applies only to FIFO topics:
\n\n ArchivePolicy
– The policy that sets the retention period\n for messages stored in the message archive of an Amazon SNS FIFO\n topic.
\n ContentBasedDeduplication
– Enables content-based\n deduplication for FIFO topics.
By default, ContentBasedDeduplication
is set to\n false
. If you create a FIFO topic and this attribute is\n false
, you must specify a value for the\n MessageDeduplicationId
parameter for the Publish action.
When you set ContentBasedDeduplication
to\n true
, Amazon SNS uses a SHA-256 hash to\n generate the MessageDeduplicationId
using the body of the\n message (but not the attributes of the message).
(Optional) To override the generated value, you can specify a value\n for the MessageDeduplicationId
parameter for the\n Publish
action.
\n FifoThroughputScope
– Enables higher throughput for your FIFO topic by adjusting the scope of deduplication. This attribute has two possible values:
\n Topic
– The scope of message deduplication is across the entire topic. This is the default value and maintains existing behavior, with a maximum throughput of 3000 messages per second or 20MB per second, whichever comes first.
\n MessageGroup
– The scope of deduplication is within each individual message group, which enables higher throughput per topic subject to regional quotas. For more information on quotas or to request an increase, see Amazon SNS service quotas in the Amazon Web Services General Reference.
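A closing boto3 sketch for this hunk: switching an existing FIFO topic to the per-message-group deduplication scope via SetTopicAttributes (the topic ARN is a placeholder):

    import boto3

    sns = boto3.client("sns")

    sns.set_topic_attributes(
        TopicArn="arn:aws:sns:us-east-1:123456789012:orders.fifo",
        AttributeName="FifoThroughputScope",
        AttributeValue="MessageGroup",
    )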