dcos-terraform/terraform-aws-infrastructure

# AWS DC/OS Infrastructure

This module creates the typical DC/OS infrastructure in AWS.

## EXAMPLE

```hcl
module "dcos-infrastructure" {
  source  = "dcos-terraform/infrastructure/aws"
  version = "~> 0.2.0"

  cluster_name   = "production"
  ssh_public_key = "ssh-rsa ..."

  num_masters        = "3"
  num_private_agents = "2"
  num_public_agents  = "1"
}

output "bootstrap-public-ip" {
  value = "${module.dcos-infrastructure.bootstrap.public_ip}"
}

output "masters-public-ips" {
  value = "${module.dcos-infrastructure.masters.public_ips}"
}
```

## Known Issues

### Not subscribed to a Marketplace AMI

```
* module.dcos-infrastructure.module.dcos-privateagent-instances.module.dcos-private-agent-instances.aws_instance.instance[0]: 1 error(s) occurred:
* aws_instance.instance.0: Error launching source instance: OptInRequired: In order to use this AWS Marketplace product you need to accept terms and subscribe. To do so please visit https://aws.amazon.com/marketplace/pp?sku=ryg425ue2hwnsok9ccfastg4
      status code: 401, request id: 421d7970-d19a-4178-9ee2-95995afe05da
* module.dcos-infrastructure.module.dcos-privateagent-instances.module.dcos-private-agent-instances.aws_instance.instance[1]: 1 error(s) occurred:
```

Click the link in the error message while logged into the AWS Console (web interface), then click "Subscribe" on the following page and follow the instructions.
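Alternatively, if you cannot subscribe to the Marketplace image, you can point the module at an AMI you already have access to via the `aws_ami` input (or the per-role `*_aws_ami` inputs). A minimal sketch; the AMI ID below is a placeholder, and the image must fulfill the DC/OS system requirements referenced in the `aws_ami` input description:

```hcl
module "dcos-infrastructure" {
  source  = "dcos-terraform/infrastructure/aws"
  version = "~> 0.2.0"

  cluster_name   = "production"
  ssh_public_key = "ssh-rsa ..."

  # Placeholder AMI ID -- replace with a CentOS/RHEL image you are
  # subscribed to and that fulfills the DC/OS system requirements.
  aws_ami = "ami-0123456789abcdef0"
}
```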

## Inputs

| Name | Description | Type | Default | Required |
|------|-------------|------|---------|----------|
| admin_ips | List of CIDR admin IPs | list | n/a | yes |
| cluster_name | Name of the DC/OS cluster | string | n/a | yes |
| ssh_public_key_file | Path to the SSH public key. This is mandatory but can be set to an empty string if you want to use ssh_public_key with the key as a string. | string | n/a | yes |
| accepted_internal_networks | Subnet ranges for all internal networks | list | `<list>` | no |
| adminrouter_grpc_proxy_port | | string | "12379" | no |
| availability_zones | List of availability_zones to be used, in the same format required by the platform/cloud providers, i.e. ['RegionZone'] | list | `<list>` | no |
| aws_ami | AMI that will be used for the instances instead of the Mesosphere-chosen default images. Custom AMIs must fulfill the Mesosphere DC/OS system requirements: see https://docs.mesosphere.com/1.12/installing/production/system-requirements/ | string | "" | no |
| aws_create_s3_bucket | Create an S3 bucket with a unique name for Exhibitor | string | "false" | no |
| aws_key_name | Specify the AWS SSH key to use. We assume it's already loaded in your SSH agent. Set ssh_public_key_file to an empty string. | string | "" | no |
| bootstrap_associate_public_ip_address | [BOOTSTRAP] Associate a public IP address with these instances | string | "true" | no |
| bootstrap_aws_ami | [BOOTSTRAP] AMI to be used | string | "" | no |
| bootstrap_hostname_format | [BOOTSTRAP] Hostname format; inputs are index+1, region, cluster_name | string | "%[3]s-bootstrap%[1]d-%[2]s" | no |
| bootstrap_iam_instance_profile | [BOOTSTRAP] Instance profile to be used for these instances | string | "" | no |
| bootstrap_instance_type | [BOOTSTRAP] Instance type | string | "t2.medium" | no |
| bootstrap_os | [BOOTSTRAP] Operating system to use. Instead of using your own AMI you can use a provided OS. | string | "" | no |
| bootstrap_root_volume_size | [BOOTSTRAP] Root volume size in GB | string | "80" | no |
| bootstrap_root_volume_type | [BOOTSTRAP] Root volume type | string | "standard" | no |
| dcos_instance_os | Operating system to use. Instead of using your own AMI you can use a provided OS. | string | "centos_7.4" | no |
| lb_disable_masters | Do not spawn the master load balancer (admin access + internal access) | string | "false" | no |
| lb_disable_public_agents | Do not spawn public agent load balancers (needs to be true when num_public_agents is 0) | string | "false" | no |
| masters_acm_cert_arn | ACM certificate to be used for the masters load balancer | string | "" | no |
| masters_associate_public_ip_address | [MASTERS] Associate a public IP address with these instances | string | "true" | no |
| masters_aws_ami | [MASTERS] AMI to be used | string | "" | no |
| masters_hostname_format | [MASTERS] Hostname format; inputs are index+1, region, cluster_name | string | "%[3]s-master%[1]d-%[2]s" | no |
| masters_iam_instance_profile | [MASTERS] Instance profile to be used for these instances | string | "" | no |
| masters_instance_type | [MASTERS] Instance type | string | "m4.xlarge" | no |
| masters_internal_acm_cert_arn | ACM certificate to be used for the internal masters load balancer | string | "" | no |
| masters_os | [MASTERS] Operating system to use. Instead of using your own AMI you can use a provided OS. | string | "" | no |
| masters_root_volume_size | [MASTERS] Root volume size in GB | string | "120" | no |
| masters_user_data | [MASTERS] User data to be used on these instances (cloud-init) | string | "" | no |
| name_prefix | Name prefix | string | "" | no |
| num_bootstrap | Specify the number of bootstrap instances. You should have at most 1. | string | "1" | no |
| num_masters | Specify the number of masters. For redundancy you should have at least 3. | string | "3" | no |
| num_private_agents | Specify the number of private agents. These agents will provide your main resources. | string | "2" | no |
| num_public_agents | Specify the number of public agents. These agents will host marathon-lb and edgelb. | string | "1" | no |
| open_admin_router | Open Admin Router to the public (80+443 on the load balancer). WARNING: attackers could take over your cluster. | string | "false" | no |
| open_instance_ssh | Open SSH on the instances to the public. WARNING: make sure you use a strong SSH key. | string | "false" | no |
| private_agents_associate_public_ip_address | [PRIVATE AGENTS] Associate a public IP address with these instances | string | "true" | no |
| private_agents_aws_ami | [PRIVATE AGENTS] AMI to be used | string | "" | no |
| private_agents_extra_volumes | [PRIVATE AGENTS] Extra volumes for each private agent | list | `<list>` | no |
| private_agents_hostname_format | [PRIVATE AGENTS] Hostname format; inputs are index+1, region, cluster_name | string | "%[3]s-privateagent%[1]d-%[2]s" | no |
| private_agents_iam_instance_profile | [PRIVATE AGENTS] Instance profile to be used for these instances | string | "" | no |
| private_agents_instance_type | [PRIVATE AGENTS] Instance type | string | "m4.xlarge" | no |
| private_agents_os | [PRIVATE AGENTS] Operating system to use. Instead of using your own AMI you can use a provided OS. | string | "" | no |
| private_agents_root_volume_size | [PRIVATE AGENTS] Root volume size in GB | string | "120" | no |
| private_agents_root_volume_type | [PRIVATE AGENTS] Root volume type | string | "gp2" | no |
| private_agents_user_data | [PRIVATE AGENTS] User data to be used on these instances (cloud-init) | string | "" | no |
| public_agents_access_ips | List of IPs allowed to access public agents. admin_ips are joined to this list. | list | `<list>` | no |
| public_agents_acm_cert_arn | ACM certificate to be used for the public agents load balancer | string | "" | no |
| public_agents_additional_ports | List of additional ports allowed for public access on public agents (80 and 443 open by default) | list | `<list>` | no |
| public_agents_allow_dynamic | Allow dynamic/ephemeral ports (49152-65535, see RFC 6335) on the public agents' public IPs | string | "false" | no |
| public_agents_allow_registered | Allow registered/user ports (1024-49151, see RFC 6335) on the public agents' public IPs | string | "false" | no |
| public_agents_associate_public_ip_address | [PUBLIC AGENTS] Associate a public IP address with these instances | string | "true" | no |
| public_agents_aws_ami | [PUBLIC AGENTS] AMI to be used | string | "" | no |
| public_agents_extra_volumes | [PUBLIC AGENTS] Extra volumes for each public agent | list | `<list>` | no |
| public_agents_hostname_format | [PUBLIC AGENTS] Hostname format; inputs are index+1, region, cluster_name | string | "%[3]s-publicagent%[1]d-%[2]s" | no |
| public_agents_iam_instance_profile | [PUBLIC AGENTS] Instance profile to be used for these instances | string | "" | no |
| public_agents_instance_type | [PUBLIC AGENTS] Instance type | string | "m4.xlarge" | no |
| public_agents_os | [PUBLIC AGENTS] Operating system to use. Instead of using your own AMI you can use a provided OS. | string | "" | no |
| public_agents_root_volume_size | [PUBLIC AGENTS] Root volume size in GB | string | "120" | no |
| public_agents_root_volume_type | [PUBLIC AGENTS] Root volume type | string | "gp2" | no |
| public_agents_user_data | [PUBLIC AGENTS] User data to be used on these instances (cloud-init) | string | "" | no |
| ssh_public_key | SSH public key in authorized-keys format (e.g. 'ssh-rsa ..') to be used with the instances. Make sure you added this key to your ssh-agent. | string | "" | no |
| subnet_range | Private IP space to be used in CIDR format | string | "172.16.0.0/16" | no |
| tags | Add custom tags to all resources | map | `<map>` | no |
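As an illustration of how these inputs combine, the sketch below sizes a cluster, restricts admin access, and tags all resources. Every value shown is a placeholder; all input names are the documented ones above:

```hcl
module "dcos-infrastructure" {
  source  = "dcos-terraform/infrastructure/aws"
  version = "~> 0.2.0"

  cluster_name   = "production"
  ssh_public_key = "ssh-rsa ..."
  admin_ips      = ["203.0.113.0/32"] # placeholder admin CIDR

  num_masters        = "3"
  num_private_agents = "2"
  num_public_agents  = "1"

  masters_instance_type           = "m4.2xlarge"
  private_agents_root_volume_size = "200"

  tags = {
    owner      = "team-infra"
    expiration = "4h"
  }
}
```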

## Outputs

| Name | Description |
|------|-------------|
| aws_key_name | The AWS SSH key name used for the instances |
| aws_s3_bucket_name | Name of the created S3 bucket |
| bootstrap.instance | Bootstrap instance ID |
| bootstrap.os_user | Bootstrap instance OS default user |
| bootstrap.private_ip | Private IP of the bootstrap instance |
| bootstrap.public_ip | Public IP of the bootstrap instance |
| iam.agent_profile | Name of the agent profile |
| iam.master_profile | Name of the master profile |
| lb.masters_dns_name | Load balancer to access the DC/OS UI |
| lb.masters_internal_dns_name | Load balancer to access the masters internally in the cluster |
| lb.public_agents_dns_name | Load balancer to reach the public agents |
| masters.aws_iam_instance_profile | Masters instance profile name |
| masters.instances | Master instance IDs |
| masters.os_user | Master instances OS default user |
| masters.private_ips | Master instances private IPs |
| masters.public_ips | Master instances public IPs |
| private_agents.aws_iam_instance_profile | Private agent instance profile name |
| private_agents.instances | Private agent instance IDs |
| private_agents.os_user | Private agent instances OS default user |
| private_agents.private_ips | Private agent instances private IPs |
| private_agents.public_ips | Private agent public IPs |
| public_agents.aws_iam_instance_profile | Public agent instance profile name |
| public_agents.instances | Public agent instance IDs |
| public_agents.os_user | Public agent instances OS default user |
| public_agents.private_ips | Public agent instances private IPs |
| public_agents.public_ips | Public agent public IPs |
| security_groups.admin | ID of the admin security group the cluster is in |
| security_groups.internal | ID of the internal security group the cluster is in |
| vpc.cidr_block | CIDR block of the VPC the cluster is in |
| vpc.id | ID of the VPC the cluster is in |
| vpc.main_route_table_id | ID of the main routing table of the VPC the cluster is in |
| vpc.subnet_ids | List of subnet IDs the cluster is in |
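These outputs are referenced from a wrapping configuration with the same interpolation style as the EXAMPLE at the top of this README. A sketch exposing a few of them; the output block names here are arbitrary:

```hcl
output "masters-lb-dns-name" {
  description = "DNS name of the load balancer fronting the DC/OS UI"
  value       = "${module.dcos-infrastructure.lb.masters_dns_name}"
}

output "default-os-user" {
  description = "Default OS user for SSH access to the master instances"
  value       = "${module.dcos-infrastructure.masters.os_user}"
}

output "vpc-id" {
  value = "${module.dcos-infrastructure.vpc.id}"
}
```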