[manual backport stable-5] rds_cluster - update list of supported engines (#1191) #1491

Merged
3 changes: 3 additions & 0 deletions changelogs/fragments/1191-rds_cluster-new_options.yml
@@ -0,0 +1,3 @@
minor_changes:
- rds_cluster - update list of supported engines with ``mysql`` and ``postgres`` (https://github.com/ansible-collections/amazon.aws/pull/1191).
- rds_cluster - add new options (e.g., ``db_cluster_instance_class``, ``allocated_storage``, ``storage_type``, ``iops``) (https://github.com/ansible-collections/amazon.aws/pull/1191).
17 changes: 10 additions & 7 deletions plugins/module_utils/rds.py
@@ -180,13 +180,16 @@ def handle_errors(module, exception, method_name, parameters):
if 'DB Cluster that is not a read replica' in to_text(exception):
changed = False
else:
module.fail_json_aws(exception, msg='Unable to {0}'.format(get_rds_method_attribute(method_name, module).operation_description))
elif method_name == 'create_db_cluster' and error_code == 'InvalidParameterValue':
accepted_engines = [
'aurora', 'aurora-mysql', 'aurora-postgresql'
]
if parameters.get('Engine') not in accepted_engines:
module.fail_json_aws(exception, msg='DB engine {0} should be one of {1}'.format(parameters.get('Engine'), accepted_engines))
module.fail_json_aws(
exception,
msg="Unable to {0}".format(get_rds_method_attribute(method_name, module).operation_description),
)
elif method_name == "create_db_cluster" and error_code == "InvalidParameterValue":
accepted_engines = ["aurora", "aurora-mysql", "aurora-postgresql", "mysql", "postgres"]
if parameters.get("Engine") not in accepted_engines:
module.fail_json_aws(
exception, msg="DB engine {0} should be one of {1}".format(parameters.get("Engine"), accepted_engines)
)
else:
module.fail_json_aws(exception, msg='Unable to {0}'.format(get_rds_method_attribute(method_name, module).operation_description))
else:
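The widened engine check in `handle_errors()` can be sketched standalone. This is an illustrative simplification, not the module's code: the hypothetical `check_engine()` below raises a plain `ValueError` where the module calls `fail_json_aws`, but the branch logic is the same — on an `InvalidParameterValue` error from `create_db_cluster`, the "should be one of" hint fires only for engines outside the accepted list, which this PR extends with `mysql` and `postgres`.

```python
# Accepted engines after this backport (stable-5 previously lacked the last two).
ACCEPTED_ENGINES = ["aurora", "aurora-mysql", "aurora-postgresql", "mysql", "postgres"]


def check_engine(parameters):
    # Mirrors the InvalidParameterValue branch: unknown engines raise,
    # known engines pass through untouched.
    engine = parameters.get("Engine")
    if engine not in ACCEPTED_ENGINES:
        raise ValueError("DB engine {0} should be one of {1}".format(engine, ACCEPTED_ENGINES))
    return engine


assert check_engine({"Engine": "mysql"}) == "mysql"
```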
116 changes: 103 additions & 13 deletions plugins/modules/rds_cluster.py
@@ -162,18 +162,62 @@
By default, write operations are not allowed on Aurora DB clusters that are secondary clusters in an Aurora global database.
- This value can be only set on Aurora DB clusters that are members of an Aurora global database.
type: bool
db_cluster_instance_class:
description:
- The compute and memory capacity of each DB instance in the Multi-AZ DB cluster, for example C(db.m6gd.xlarge).
- Not all DB instance classes are available in all Amazon Web Services Regions, or for all database engines.
- For the full list of DB instance classes and availability for your engine visit
U(https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.DBInstanceClass.html).
- This setting is required to create a Multi-AZ DB cluster.
- I(db_cluster_instance_class) requires botocore >= 1.23.44.
type: str
version_added: 5.4.0
enable_iam_database_authentication:
description:
- Enable mapping of AWS Identity and Access Management (IAM) accounts to database accounts.
If this option is omitted when creating the cluster, Amazon RDS sets this to C(false).
type: bool
allocated_storage:
description:
- The amount of storage in gibibytes (GiB) to allocate to each DB instance in the Multi-AZ DB cluster.
- This setting is required to create a Multi-AZ DB cluster.
- I(allocated_storage) requires botocore >= 1.23.44.
type: int
version_added: 5.4.0
storage_type:
description:
- Specifies the storage type to be associated with the DB cluster.
- This setting is required to create a Multi-AZ DB cluster.
- When specified, a value for the I(iops) parameter is required.
- I(storage_type) requires botocore >= 1.23.44.
- Defaults to C(io1).
type: str
choices:
- io1
version_added: 5.4.0
iops:
description:
- The amount of Provisioned IOPS (input/output operations per second) to be initially allocated for each DB instance in the Multi-AZ DB cluster.
- This setting is required to create a Multi-AZ DB cluster.
- Must be a multiple between .5 and 50 of the storage amount for the DB cluster.
- I(iops) requires botocore >= 1.23.44.
type: int
version_added: 5.4.0
engine:
description:
- The name of the database engine to be used for this DB cluster. This is required to create a cluster.
- The combination of I(engine) and I(engine_mode) may not be supported.
- "See AWS documentation for details:
L(Amazon RDS Documentation,https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_CreateDBCluster.html)."
- When I(engine=mysql) or I(engine=postgres), I(allocated_storage), I(iops) and I(db_cluster_instance_class) must also be specified.
- Support for C(postgres) and C(mysql) was added in amazon.aws 5.4.0.
choices:
- aurora
- aurora-mysql
- aurora-postgresql
- mysql
- postgres
type: str
engine_version:
description:
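The I(iops) ratio constraint documented above (between .5 and 50 times the storage amount) can be sketched as a hypothetical pre-flight helper. This function is not part of the module; it only illustrates the documented relationship between I(iops) and I(allocated_storage):

```python
def iops_in_range(iops, allocated_storage):
    # Documented constraint: iops must be between 0.5x and 50x
    # the allocated storage (in GiB) for the Multi-AZ DB cluster.
    return 0.5 * allocated_storage <= iops <= 50 * allocated_storage


# The integration test below uses 100 GiB with 5000 IOPS: exactly the 50x ceiling.
assert iops_in_range(5000, 100)
```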
@@ -665,14 +709,39 @@ def get_backtrack_options(params_dict):

def get_create_options(params_dict):
options = [
'AvailabilityZones', 'BacktrackWindow', 'BackupRetentionPeriod', 'PreferredBackupWindow',
'CharacterSetName', 'DBClusterIdentifier', 'DBClusterParameterGroupName', 'DBSubnetGroupName',
'DatabaseName', 'EnableCloudwatchLogsExports', 'EnableIAMDatabaseAuthentication', 'KmsKeyId',
'Engine', 'EngineVersion', 'PreferredMaintenanceWindow', 'MasterUserPassword', 'MasterUsername',
'OptionGroupName', 'Port', 'ReplicationSourceIdentifier', 'SourceRegion', 'StorageEncrypted',
'Tags', 'VpcSecurityGroupIds', 'EngineMode', 'ScalingConfiguration', 'DeletionProtection',
'EnableHttpEndpoint', 'CopyTagsToSnapshot', 'Domain', 'DomainIAMRoleName',
'EnableGlobalWriteForwarding',
"AvailabilityZones",
"BacktrackWindow",
"BackupRetentionPeriod",
"PreferredBackupWindow",
"CharacterSetName",
"DBClusterIdentifier",
"DBClusterParameterGroupName",
"DBSubnetGroupName",
"DatabaseName",
"EnableCloudwatchLogsExports",
"EnableIAMDatabaseAuthentication",
"KmsKeyId",
"Engine",
"EngineMode",
"EngineVersion",
"PreferredMaintenanceWindow",
"MasterUserPassword",
"MasterUsername",
"OptionGroupName",
"Port",
"ReplicationSourceIdentifier",
"SourceRegion",
"StorageEncrypted",
"Tags",
"VpcSecurityGroupIds",
"EngineMode",
"ScalingConfiguration",
"DeletionProtection",
"EnableHttpEndpoint",
"CopyTagsToSnapshot",
"Domain",
"DomainIAMRoleName",
"EnableGlobalWriteForwarding",
]

return dict((k, v) for k, v in params_dict.items() if k in options and v is not None)
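The filtering pattern in `get_create_options()` is worth isolating: only keys that are valid CreateDBCluster options survive, and `None` values are dropped so they are never sent to the API. A minimal sketch with a shortened, illustrative option list:

```python
# Illustrative subset of the CreateDBCluster option names used above.
OPTIONS = ["Engine", "EngineVersion", "Iops", "AllocatedStorage", "DBClusterInstanceClass"]


def filter_options(params_dict, options):
    # Same generator expression as get_create_options(): keep known,
    # non-None parameters only.
    return dict((k, v) for k, v in params_dict.items() if k in options and v is not None)


params = {"Engine": "mysql", "Iops": 5000, "EngineVersion": None, "Unknown": 1}
assert filter_options(params, OPTIONS) == {"Engine": "mysql", "Iops": 5000}
```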
@@ -779,7 +848,7 @@ def backtrack_cluster(params):
try:
client.backtrack_db_cluster(**params)
except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
module.fail_json_aws(e, msg=F"Unable to backtrack cluster {params['DBClusterIdentifier']}")
module.fail_json_aws(e, msg=f"Unable to backtrack cluster {params['DBClusterIdentifier']}")
wait_for_cluster_status(client, module, params['DBClusterIdentifier'], 'cluster_available')


@@ -920,10 +989,15 @@ def main():
copy_tags_to_snapshot=dict(type='bool'),
domain=dict(),
domain_iam_role_name=dict(),
enable_global_write_forwarding=dict(type='bool'),
enable_iam_database_authentication=dict(type='bool'),
engine=dict(choices=["aurora", "aurora-mysql", "aurora-postgresql"]),
enable_global_write_forwarding=dict(type="bool"),
db_cluster_instance_class=dict(type="str"),
enable_iam_database_authentication=dict(type="bool"),
engine=dict(choices=["aurora", "aurora-mysql", "aurora-postgresql", "mysql", "postgres"]),
engine_mode=dict(choices=["provisioned", "serverless", "parallelquery", "global", "multimaster"]),
engine_version=dict(),
allocated_storage=dict(type="int"),
storage_type=dict(type="str", choices=["io1"]),
iops=dict(type="int"),
final_snapshot_identifier=dict(),
force_backtrack=dict(type='bool'),
kms_key_id=dict(),
@@ -967,7 +1041,7 @@ def main():
('s3_bucket_name', 'source_db_cluster_identifier', 'snapshot_identifier'),
('use_latest_restorable_time', 'restore_to_time'),
],
supports_check_mode=True
supports_check_mode=True,
)

retry_decorator = AWSRetry.jittered_backoff(retries=10)
@@ -977,6 +1051,22 @@
except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
module.fail_json_aws(e, msg='Failed to connect to AWS.')

if module.params.get("engine") and module.params["engine"] in ("mysql", "postgres"):
module.require_botocore_at_least("1.23.44", reason="to use mysql and postgres engines")
if module.params["state"] == "present":
if not (
module.params.get("allocated_storage")
and module.params.get("iops")
and module.params.get("db_cluster_instance_class")
):
module.fail_json(
f"When engine={module.params['engine']}, allocated_storage, iops and db_cluster_instance_class must be specified"
)
else:
# Fall back to the default value
if not module.params.get("storage_type"):
module.params["storage_type"] = "io1"

module.params['db_cluster_identifier'] = module.params['db_cluster_identifier'].lower()
cluster = get_cluster(module.params['db_cluster_identifier'])

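The new guard in `main()` can be exercised outside Ansible as a standalone function. This is a hedged sketch, not the module's code: the hypothetical `validate_multi_az_params()` below replaces `module.fail_json` with a `ValueError` and the botocore check is omitted, but it reproduces the same rules — for the `mysql`/`postgres` engines added in 5.4.0, creating a cluster requires `allocated_storage`, `iops` and `db_cluster_instance_class`, and `storage_type` falls back to `io1`:

```python
def validate_multi_az_params(params):
    # Only the new Multi-AZ engines are affected; Aurora engines pass through.
    if params.get("engine") in ("mysql", "postgres") and params.get("state") == "present":
        required = ("allocated_storage", "iops", "db_cluster_instance_class")
        missing = [name for name in required if not params.get(name)]
        if missing:
            raise ValueError(
                "When engine={0}, allocated_storage, iops and "
                "db_cluster_instance_class must be specified".format(params["engine"])
            )
        if not params.get("storage_type"):
            params["storage_type"] = "io1"  # documented default
    return params


params = validate_multi_az_params(
    {"engine": "mysql", "state": "present", "allocated_storage": 100,
     "iops": 5000, "db_cluster_instance_class": "db.r6gd.xlarge"}
)
assert params["storage_type"] == "io1"
```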
7 changes: 7 additions & 0 deletions tests/integration/targets/rds_cluster_multi_az/aliases
@@ -0,0 +1,7 @@
cloud/aws

# It takes >20min to spawn the multi-AZ cluster
disabled

rds_cluster
rds_cluster_info
@@ -0,0 +1,7 @@
# Create cluster
cluster_id: ansible-test-{{ tiny_prefix }}
username: testrdsusername
password: "{{ lookup('password', 'dev/null length=12 chars=ascii_letters,digits') }}"
tags_create:
Name: ansible-test-cluster-{{ tiny_prefix }}
Created_By: Ansible_rds_cluster_integration_test
5 changes: 5 additions & 0 deletions tests/integration/targets/rds_cluster_multi_az/meta/main.yml
@@ -0,0 +1,5 @@
---
dependencies:
- role: setup_botocore_pip
vars:
botocore_version: "1.23.44"
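The `botocore_version: "1.23.44"` pin above corresponds to the `module.require_botocore_at_least("1.23.44")` call in the module. A hypothetical gate with the same semantics, comparing dotted versions as integer tuples (the real implementation lives in Ansible's module utils; this is only an illustration):

```python
def at_least(installed, required):
    # Compare "major.minor.patch" strings numerically, so that
    # "1.9.0" < "1.23.44" despite "9" > "2" lexicographically.
    to_tuple = lambda version: tuple(int(part) for part in version.split("."))
    return to_tuple(installed) >= to_tuple(required)


assert at_least("1.23.44", "1.23.44")
assert not at_least("1.22.9", "1.23.44")
```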
79 changes: 79 additions & 0 deletions tests/integration/targets/rds_cluster_multi_az/tasks/main.yml
@@ -0,0 +1,79 @@
---
- module_defaults:
group/aws:
region: "{{ aws_region }}"
aws_access_key: "{{ aws_access_key }}"
aws_secret_key: "{{ aws_secret_key }}"
security_token: "{{ security_token | default(omit) }}"
collections:
- amazon.aws

block:
- name: Ensure the resource doesn't exist
rds_cluster:
id: '{{ cluster_id }}'
state: absent
engine: 'mysql'
username: '{{ username }}'
password: '{{ password }}'
skip_final_snapshot: true
register: _result_delete_db_cluster

- assert:
that:
- not _result_delete_db_cluster.changed
ignore_errors: true

- name: Create a source DB cluster (CHECK_MODE)
rds_cluster:
id: '{{ cluster_id }}'
state: present
engine: 'mysql'
engine_version: 8.0.28
allocated_storage: 100
iops: 5000
db_cluster_instance_class: db.r6gd.xlarge
username: '{{ username }}'
password: '{{ password }}'
wait: true
tags: '{{ tags_create }}'
register: _result_create_source_db_cluster
check_mode: True
vars:
ansible_python_interpreter: "{{ botocore_virtualenv_interpreter }}"

- assert:
that:
- _result_create_source_db_cluster.changed

- name: Create a source DB cluster
rds_cluster:
id: '{{ cluster_id }}'
state: present
engine: 'mysql'
engine_version: 8.0.28
allocated_storage: 100
iops: 5000
db_cluster_instance_class: db.r6gd.xlarge
username: '{{ username }}'
password: '{{ password }}'
wait: true
tags: '{{ tags_create }}'
register: _result_create_source_db_cluster
vars:
ansible_python_interpreter: "{{ botocore_virtualenv_interpreter }}"

- assert:
that:
- _result_create_source_db_cluster.changed

always:

- name: Delete DB cluster without creating a final snapshot
rds_cluster:
state: absent
cluster_id: '{{ item }}'
skip_final_snapshot: true
ignore_errors: true
loop:
- '{{ cluster_id }}'