Use EFS DNS name instead of Mount Target DNS name. Fix README.md #9

Merged (6 commits) on Sep 30, 2017
README.md (+80 -23)
@@ -3,12 +3,13 @@
Terraform module designed to easily backup EFS filesystems to S3 using DataPipeline.

The workflow is simple:

* Periodically launch resource (EC2 instance) based on schedule
* Execute the shell command defined in the activity on the instance
* Execute sync data from Production EFS to S3 Bucket by aws-cli
* The execution log of the activity is stored in S3
* Publish to the SNS topic that defined the success or failure of the activity
* Automatic backup rotation using `S3 lifecycle rule`
* Sync data from Production EFS to S3 Bucket by using `aws-cli`
* The execution log of the activity is stored in `S3`
* Publish the success or failure of the activity to an `SNS` topic
* Automatically rotate the backups using `S3 lifecycle rule` (see the sketch below)
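
The rotation in the last step works by keeping the bucket versioned and expiring noncurrent object versions. A minimal sketch of the equivalent standalone resource, assuming a hypothetical bucket name (the module provisions its own `backups` bucket in `s3.tf`):

```hcl
resource "aws_s3_bucket" "backups" {
  bucket = "example-efs-backups" # hypothetical name

  # Each sync produces new object versions; older versions are the rotated backups
  versioning {
    enabled = true
  }

  # Expire noncurrent versions after N days
  # (the module exposes this as `noncurrent_version_expiration_days`)
  lifecycle_rule {
    prefix  = ""
    enabled = true

    noncurrent_version_expiration {
      days = 35
    }
  }
}
```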


## Usage
@@ -22,10 +23,9 @@ module "efs_backup" {
name = "${var.name}"
stage = "${var.stage}"
namespace = "${var.namespace}"
region = "${var.region}"
vpc_id = "${var.vpc_id}"
efs_mount_target_id = "${var.efs_mount_target_id}"
use_ip_address = "true"
use_ip_address = "false"
noncurrent_version_expiration_days = "${var.noncurrent_version_expiration_days}"
ssh_key_pair = "${var.ssh_key_pair}"
datapipeline_config = "${var.datapipeline_config}"
@@ -40,36 +40,93 @@ output "efs_backup_security_group" {

## Variables

| Name | Default | Description | Required |
|:-----------------------------------|:--------------:|:------------------------------------------------------------------------------------|:--------:|
| namespace | `` | Namespace (e.g. `cp` or `cloudposse`) | Yes |
| stage | `` | Stage (e.g. `prod`, `dev`, `staging`) | Yes |
| name | `` | Name (e.g. `efs-backup`) | Yes |
| region | `us-east-1` | AWS Region where module should operate (e.g. `us-east-1`) | Yes |
| vpc_id | `` | AWS VPC ID where module should operate (e.g. `vpc-a22222ee`) | Yes |
| efs_mount_target_id | `` | Elastic File System Mount Target ID (e.g. `fsmt-279bfc62`) | Yes |
| use_ip_address | `false` | If set to `true` will be used IP address instead DNS name of Elastic File System | Yes |
| modify_security_group | `false` | Should the module modify EFS security groups (if set to `false` backups will fail) | Yes |
| noncurrent_version_expiration_days | `3` | S3 object versions expiration period (days) | Yes |
| ssh_key_pair | `` | A ssh key that will be deployed on DataPipeline's instance | Yes |
| datapipeline_config | `${map("instance_type", "t2.micro", "email", "", "period", "24 hours")}"`| Essential Datapipeline configuration options | Yes |
| Name | Default | Description | Required |
|:-----------------------------------|:--------------:|:----------------------------------------------------------------------------------------------|:--------:|
| namespace | `` | Namespace (e.g. `cp` or `cloudposse`) | Yes |
| stage | `` | Stage (e.g. `prod`, `dev`, `staging`) | Yes |
| name | `` | Name (e.g. `app` or `wordpress`) | Yes |
| region | `us-east-1` | (Optional) AWS Region. If not specified, will be derived from 'aws_region' data source | No |
| vpc_id | `` | AWS VPC ID where module should operate (_e.g._ `vpc-a22222ee`) | Yes |
| efs_mount_target_id | `` | Elastic File System Mount Target ID (_e.g._ `fsmt-279bfc62`) | Yes |
| use_ip_address | `false` | If set to `true`, will use IP address instead of DNS name to connect to the `EFS` | Yes |
| modify_security_group | `false` | Should the module modify the `EFS` security group | No |
| noncurrent_version_expiration_days | `35` | S3 object versions expiration period (days) | Yes |
| ssh_key_pair | `` | `SSH` key that will be deployed on DataPipeline's instance | No |
| datapipeline_config                | `${map("instance_type", "t2.micro", "email", "", "period", "24 hours", "timeout", "60 Minutes")}` | DataPipeline configuration options                                                              | Yes      |
| attributes | `[]` | Additional attributes (_e.g._ `efs-backup`) | No |
| tags                               | `{}`           | Additional tags (e.g. `map("BusinessUnit","XYZ")`)                                              | No       |
| delimiter | `-` | Delimiter to be used between `name`, `namespace`, `stage` and `attributes` | No |


### `datapipeline_config` variables

| Name | Default | Description | Required |
|:-----------------------------------|:--------------:|:------------------------------------------------------------|:--------:|
| instance_type | `t2.micro` | Instance type to use | Yes |
| email | `` | Email to use in SNS | Yes |
| email | `` | Email to use in `SNS` | Yes |
| period | `24 hours` | Frequency of pipeline execution (frequency of backups) | Yes |
| timeout | `60 Minutes` | Pipeline execution timeout | Yes |
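
For example, overriding every key of the map in the calling module (the values here are hypothetical):

```hcl
module "efs_backup" {
  source = "git::https://github.com/cloudposse/terraform-aws-efs-backup.git?ref=master"

  # ... other required variables ...

  # All four keys should be present when overriding the default map
  datapipeline_config = "${map("instance_type", "t2.small", "email", "ops@example.com", "period", "12 hours", "timeout", "90 Minutes")}"
}
```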



## Integration with `EFS`

To enable connectivity between the `DataPipeline` instances and the `EFS`, use one of the following methods to configure Security Groups:

1. Explicitly add the `DataPipeline` SG (the `security_group_id` output of this module) to the list of `ingress` rules of the `EFS` SG. For example:

```hcl
module "elastic_beanstalk_environment" {
source = "git::https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment.git?ref=master"
namespace = "${var.namespace}"
name = "${var.name}"
stage = "${var.stage}"
delimiter = "${var.delimiter}"
attributes = ["${compact(concat(var.attributes, list("eb-env")))}"]
tags = "${var.tags}"

# ..............................
}

module "efs" {
source = "git::https://github.com/cloudposse/terraform-aws-efs.git?ref=master"
namespace = "${var.namespace}"
name = "${var.name}"
stage = "${var.stage}"
delimiter = "${var.delimiter}"
attributes = ["${compact(concat(var.attributes, list("efs")))}"]
tags = "${var.tags}"

# Allow EB/EC2 instances and DataPipeline instances to connect to the EFS
security_groups = ["${module.elastic_beanstalk_environment.security_group_id}", "${module.efs_backup.security_group_id}"]
}

module "efs_backup" {
source = "git::https://github.com/cloudposse/terraform-aws-efs-backup.git?ref=master"
name = "${var.name}"
stage = "${var.stage}"
namespace = "${var.namespace}"
delimiter = "${var.delimiter}"
attributes = ["${compact(concat(var.attributes, list("efs-backup")))}"]
tags = "${var.tags}"

# Important to set it to `false` since we added the `DataPipeline` SG (output of the `efs_backup` module) to the `security_groups` of the `efs` module
# See NOTE below for more information
modify_security_group = "false"

# ..............................
}
```

2. Set the `modify_security_group` attribute to `true` so the module will modify the `EFS` SG to allow the `DataPipeline` instances to connect to the `EFS`.

**NOTE:** Do not mix these two methods together. `Terraform` does not support using a Security Group with in-line rules in conjunction with any Security Group Rule resources. See https://www.terraform.io/docs/providers/aws/r/security_group_rule.html:

> **NOTE on Security Groups and Security Group Rules:** Terraform currently provides both a standalone Security Group Rule resource (a single ingress or egress rule), and a Security Group resource with ingress and egress rules defined in-line. At this time you cannot use a Security Group with in-line rules in conjunction with any Security Group Rule resources. Doing so will cause a conflict of rule settings and will overwrite rules.
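
For the first method, if the `EFS` SG is managed outside the `efs` module shown above (and, per the note, has no in-line rules), the same ingress can be granted with a standalone rule resource. A minimal sketch, where `aws_security_group.efs` is an assumed user-managed SG attached to the mount targets:

```hcl
# Allow NFS (TCP 2049) from the DataPipeline instances to the EFS mount targets
resource "aws_security_group_rule" "efs_ingress_from_datapipeline" {
  type                     = "ingress"
  from_port                = 2049
  to_port                  = 2049
  protocol                 = "tcp"
  security_group_id        = "${aws_security_group.efs.id}"           # assumed EFS SG
  source_security_group_id = "${module.efs_backup.security_group_id}" # this module's output
}
```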


## References
cloudformation.tf (+12 -4)
@@ -3,7 +3,9 @@ module "sns_label" {
namespace = "${var.namespace}"
stage = "${var.stage}"
name = "${var.name}"
attributes = ["sns"]
delimiter = "${var.delimiter}"
attributes = ["${compact(concat(var.attributes, list("sns")))}"]
tags = "${var.tags}"
}

resource "aws_cloudformation_stack" "sns" {
@@ -13,14 +15,18 @@ resource "aws_cloudformation_stack" "sns" {
parameters {
Email = "${var.datapipeline_config["email"]}"
}

tags = "${module.sns_label.tags}"
}

module "datapipeline_label" {
source = "git::https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.2.1"
namespace = "${var.namespace}"
stage = "${var.stage}"
name = "${var.name}"
attributes = ["datapipeline"]
delimiter = "${var.delimiter}"
attributes = ["${compact(concat(var.attributes, list("datapipeline")))}"]
tags = "${var.tags}"
}

resource "aws_cloudformation_stack" "datapipeline" {
@@ -31,9 +37,9 @@ resource "aws_cloudformation_stack" "datapipeline" {
myInstanceType = "${var.datapipeline_config["instance_type"]}"
mySubnetId = "${data.aws_subnet_ids.default.ids[0]}"
mySecurityGroupId = "${aws_security_group.datapipeline.id}"
myEFSHost = "${var.use_ip_address ? data.aws_efs_mount_target.default.ip_address : data.aws_efs_mount_target.default.dns_name }"
myEFSHost = "${var.use_ip_address ? data.aws_efs_mount_target.default.ip_address : format("%s.efs.%s.amazonaws.com", data.aws_efs_mount_target.default.file_system_id, (signum(length(var.region)) == 1 ? var.region : data.aws_region.default.name))}"
myS3BackupsBucket = "${aws_s3_bucket.backups.id}"
myRegion = "${var.region}"
myRegion = "${signum(length(var.region)) == 1 ? var.region : data.aws_region.default.name}"
myImageId = "${data.aws_ami.amazon_linux.id}"
myTopicArn = "${aws_cloudformation_stack.sns.outputs["TopicArn"]}"
myS3LogBucket = "${aws_s3_bucket.logs.id}"
@@ -44,4 +50,6 @@
Tag = "${module.label.id}"
myExecutionTimeout = "${var.datapipeline_config["timeout"]}"
}

tags = "${module.datapipeline_label.tags}"
}
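
For reference, the new `myEFSHost` value switches from the per-mount-target DNS name to the filesystem-wide name, which has the form `<file-system-id>.efs.<region>.amazonaws.com`. A standalone sketch of the same derivation (the mount target ID and region are hypothetical):

```hcl
data "aws_efs_mount_target" "default" {
  mount_target_id = "fsmt-279bfc62" # hypothetical ID
}

# For a filesystem `fs-47a2c22e` in `us-east-1` this yields
# `fs-47a2c22e.efs.us-east-1.amazonaws.com`
output "efs_host" {
  value = "${format("%s.efs.%s.amazonaws.com", data.aws_efs_mount_target.default.file_system_id, "us-east-1")}"
}
```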
efs.tf (+1 -1)
@@ -1,4 +1,4 @@
# Get Elastic File System Mount Target (EFS)
# Get Elastic File System Mount Target
data "aws_efs_mount_target" "default" {
mount_target_id = "${var.efs_mount_target_id}"
}
iam.tf (+6 -2)
@@ -16,7 +16,9 @@ module "resource_role_label" {
namespace = "${var.namespace}"
stage = "${var.stage}"
name = "${var.name}"
attributes = ["resource-role"]
delimiter = "${var.delimiter}"
attributes = ["${compact(concat(var.attributes, list("resource-role")))}"]
tags = "${var.tags}"
}

resource "aws_iam_role" "resource_role" {
@@ -56,7 +58,9 @@ module "role_label" {
namespace = "${var.namespace}"
stage = "${var.stage}"
name = "${var.name}"
attributes = ["role"]
delimiter = "${var.delimiter}"
attributes = ["${compact(concat(var.attributes, list("role")))}"]
tags = "${var.tags}"
}

resource "aws_iam_role" "role" {
main.tf (+12 -5)
@@ -3,14 +3,21 @@ terraform {
}

provider "aws" {
region = "${var.region}"
region = "${signum(length(var.region)) == 1 ? var.region : data.aws_region.default.name}"
}
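
# Region fallback: `signum(length(var.region))` is `1` only when `var.region`
# is a non-empty string, so the supplied region wins; otherwise the current
# region is resolved from the `aws_region` data source below.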

data "aws_region" "default" {
current = true
}

module "label" {
source = "git::https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.2.1"
namespace = "${var.namespace}"
stage = "${var.stage}"
name = "${var.name}"
source = "git::https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.2.1"
namespace = "${var.namespace}"
stage = "${var.stage}"
name = "${var.name}"
delimiter = "${var.delimiter}"
attributes = "${var.attributes}"
tags = "${var.tags}"
}

data "aws_ami" "amazon_linux" {
network.tf (+1 -1)
@@ -3,7 +3,7 @@ data "aws_vpc" "default" {
id = "${var.vpc_id}"
}

# Get all subnets from the necessary vpc
# Get all subnets from the VPC
data "aws_subnet_ids" "default" {
vpc_id = "${data.aws_vpc.default.id}"
}
s3.tf (+6 -2)
@@ -3,7 +3,9 @@ module "logs_label" {
namespace = "${var.namespace}"
stage = "${var.stage}"
name = "${var.name}"
attributes = ["logs"]
delimiter = "${var.delimiter}"
attributes = ["${compact(concat(var.attributes, list("logs")))}"]
tags = "${var.tags}"
}

resource "aws_s3_bucket" "logs" {
@@ -17,7 +19,9 @@ module "backups_label" {
namespace = "${var.namespace}"
stage = "${var.stage}"
name = "${var.name}"
attributes = ["backups"]
delimiter = "${var.delimiter}"
attributes = ["${compact(concat(var.attributes, list("backups")))}"]
tags = "${var.tags}"
}

resource "aws_s3_bucket" "backups" {
variables.tf (+43 -8)
@@ -1,12 +1,24 @@
variable "name" {}
variable "name" {
type = "string"
}

variable "namespace" {}
variable "namespace" {
type = "string"
}

variable "stage" {}
variable "stage" {
type = "string"
}

variable "region" {}
variable "region" {
type = "string"
default = ""
description = "(Optional) AWS Region. If not specified, will be derived from 'aws_region' data source"
}

variable "vpc_id" {}
variable "vpc_id" {
type = "string"
}

# https://www.terraform.io/docs/configuration/variables.html
# simply using string values rather than booleans for variables is recommended
@@ -25,15 +37,38 @@ variable "datapipeline_config" {
}
}

variable "efs_mount_target_id" {}
variable "efs_mount_target_id" {
type = "string"
description = "EFS Mount Target ID (e.g. `fsmt-279bfc62`)"
}

variable "modify_security_group" {
default = false
default = "false"
}

# Set a name of ssh key that will be deployed on DataPipeline's instance. The key should be present in AWS.
variable "ssh_key_pair" {}
variable "ssh_key_pair" {
type = "string"
}

variable "noncurrent_version_expiration_days" {
default = "35"
}

variable "delimiter" {
type = "string"
default = "-"
description = "Delimiter to be used between `name`, `namespace`, `stage`, etc."
}

variable "attributes" {
type = "list"
default = []
description = "Additional attributes (e.g. `efs-backup`)"
}

variable "tags" {
type = "map"
default = {}
description = "Additional tags (e.g. map(`BusinessUnit`,`XYZ`))"
}
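
Taken together, `namespace`, `stage`, `name`, `delimiter`, and `attributes` are passed to `terraform-null-label` to compose resource IDs. A sketch with assumed values and the ID it would yield:

```hcl
module "label" {
  source     = "git::https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.2.1"
  namespace  = "cp"
  stage      = "prod"
  name       = "app"
  delimiter  = "-"
  attributes = ["efs-backup"]
}

# module.label.id => "cp-prod-app-efs-backup"
```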