diff --git a/.gitignore b/.gitignore index eb56170..4c752aa 100644 --- a/.gitignore +++ b/.gitignore @@ -1,2 +1,3 @@ +# Infrastructure ignores. .terraform terraform.tfvars diff --git a/README.md b/README.md index 9edb595..b030495 100644 --- a/README.md +++ b/README.md @@ -1,5 +1,14 @@ # terraform-aws-openshift -It's OpenShift, on AWS, handled by Terraform. But it's also WIP, eh? + +This project shows you how to set up OpenShift Origin on AWS using Terraform. + +## Overview + +Terraform is used to create the infrastructure shown below: + +![Network Diagram](./docs/network-diagram.png) + +Once the infrastructure is set up, a single command installs the OpenShift platform on the hosts. ## Prerequisites @@ -26,17 +35,11 @@ This will keep your AWS credentials in the `$HOME/.aws/credentials` file, which The cluster is implemented as a [Terraform Module](https://www.terraform.io/docs/modules/index.html). To launch, just run: ```bash -# Create the module. -terraform get - -# See what we will create, or do a dry run! -terraform plan - -# Create the cluster! -terraform apply +# Get the modules, create the infrastructure. +terraform get && terraform apply ``` -You will be asked for a region to deploy in, use `us-east-1` should work fine! You can configure the nuances of how the cluster is created in the [`main.tf`](./main.tf) file. Once created, you will see a message like: +You will be asked for a region to deploy in; `us-east-1` or your preferred region will work fine. You can configure the nuances of how the cluster is created in the [`main.tf`](./main.tf) file. Once created, you will see a message like: ``` $ terraform apply @@ -50,7 +53,38 @@ var.region Apply complete! Resources: 20 added, 0 changed, 0 destroyed. ``` -That's it. +That's it! The infrastructure is ready and you can install OpenShift.
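As a sketch of how these Terraform outputs feed into the install step below, the bastion's DNS name goes straight into an `ssh` target. The `terraform_output` function and its canned value are stand-ins so the sketch runs without a live AWS state; with real infrastructure you would call `terraform output bastion-public_dns` directly.

```shell
# Stand-in for `terraform output bastion-public_dns` (hypothetical canned
# value, so this sketch does not need AWS credentials or a state file).
terraform_output() {
  echo "ec2-54-152-212-19.compute-1.amazonaws.com"
}

# Compose the SSH target exactly as the install step does.
bastion=$(terraform_output bastion-public_dns)
echo "ssh -A ec2-user@${bastion}"
```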
+ +## Installing OpenShift + +Make sure you have your local identity added: + +``` +$ ssh-add ~/.ssh/id_rsa +``` + +Then just run the install script on the bastion: + +``` +$ cat install-from-bastion.sh | ssh -A ec2-user@$(terraform output bastion-public_dns) +``` + +It will take about 20 minutes. + +## Additional Configuration + +Access the master or nodes to update configuration and add features as needed: + +``` +$ ssh -A ec2-user@$(terraform output bastion-public_dns) +$ ssh -A master.openshift.local +$ sudo su +$ oc get nodes +NAME STATUS AGE +master.openshift.local Ready 1h +node1.openshift.local Ready 1h +node2.openshift.local Ready 1h +``` ## Destroying the Cluster @@ -70,8 +104,10 @@ You'll be paying for: - https://www.udemy.com/openshift-enterprise-installation-and-configuration - The basic structure of the network is based on this course. - https://blog.openshift.com/openshift-container-platform-reference-architecture-implementation-guides/ - Detailed guide on highly available solutions, including a production-grade AWS setup. + - https://access.redhat.com/sites/default/files/attachments/ocp-on-gce-3.pdf - Some useful info on using the bastion for installation. ## TODO - [ ] Consider whether it is needed to script elastic IPs for the instances and DNS. -- [ ] Test whether the previously registered domain name is actually forwarding to the public DNS. +- [ ] Consider documenting public DNS setup. +- [ ] Consider moving the nodes into a private subnet. diff --git a/docs/network-diagram.png b/docs/network-diagram.png new file mode 100644 index 0000000..3d52b5c Binary files /dev/null and b/docs/network-diagram.png differ diff --git a/install-from-bastion.sh b/install-from-bastion.sh new file mode 100644 index 0000000..95814e9 --- /dev/null +++ b/install-from-bastion.sh @@ -0,0 +1,56 @@ +# Trace each command as it runs. +set -x + +# Elevate privileges, retaining the environment.
+sudo -E su + +# Install dev tools and Ansible 2.2 +yum install -y "@Development Tools" python2-pip openssl-devel python-devel gcc libffi-devel +pip install -Iv ansible==2.2.0.0 + +# Clone the openshift-ansible repo, which contains the installer. +git clone https://github.com/openshift/openshift-ansible +cd openshift-ansible + +# Create our Ansible inventory: +mkdir -p /etc/ansible +cat > /etc/ansible/hosts <<- EOF +# Create an OSEv3 group that contains the masters and nodes groups +[OSEv3:children] +masters +nodes + +# Set variables common for all OSEv3 hosts +[OSEv3:vars] +# SSH user, this user should allow ssh based auth without requiring a password +ansible_ssh_user=ec2-user + +# If ansible_ssh_user is not root, ansible_become must be set to true +ansible_become=true + +deployment_type=origin + +# uncomment the following to enable htpasswd authentication; defaults to DenyAllPasswordIdentityProvider +# openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}] + +# Create the masters host group. Be explicit with the openshift_hostname, +# otherwise it will resolve to something like ip-10-0-1-98.ec2.internal and use +# that as the node name. +[masters] +master.openshift.local openshift_hostname=master.openshift.local + +# host group for etcd +[etcd] +master.openshift.local + +# host group for nodes, includes region info +[nodes] +master.openshift.local openshift_node_labels="{'region': 'infra', 'zone': 'default'}" openshift_schedulable=true +node1.openshift.local openshift_hostname=node1.openshift.local openshift_node_labels="{'region': 'primary', 'zone': 'east'}" +node2.openshift.local openshift_hostname=node2.openshift.local openshift_node_labels="{'region': 'primary', 'zone': 'west'}" +EOF + +# Run the playbook. 
+ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook playbooks/byo/config.yml + +# To tear down the installation later, run the uninstall playbook: +# ansible-playbook playbooks/adhoc/uninstall.yml diff --git a/main.tf b/main.tf index 14114cb..e37786f 100644 --- a/main.tf +++ b/main.tf @@ -17,12 +17,54 @@ module "openshift" { } // Output some useful variables for quick SSH access etc. -output "master-dns" { - value = "${module.openshift.master-dns}" +output "master-public_dns" { + value = "${module.openshift.master-public_dns}" } -output "node1-dns" { - value = "${module.openshift.node1-dns}" +output "master-public_ip" { + value = "${module.openshift.master-public_ip}" } -output "node2-dns" { - value = "${module.openshift.node2-dns}" +output "master-private_dns" { + value = "${module.openshift.master-private_dns}" +} +output "master-private_ip" { + value = "${module.openshift.master-private_ip}" +} + +output "node1-public_dns" { + value = "${module.openshift.node1-public_dns}" +} +output "node1-public_ip" { + value = "${module.openshift.node1-public_ip}" +} +output "node1-private_dns" { + value = "${module.openshift.node1-private_dns}" +} +output "node1-private_ip" { + value = "${module.openshift.node1-private_ip}" +} + +output "node2-public_dns" { + value = "${module.openshift.node2-public_dns}" +} +output "node2-public_ip" { + value = "${module.openshift.node2-public_ip}" +} +output "node2-private_dns" { + value = "${module.openshift.node2-private_dns}" +} +output "node2-private_ip" { + value = "${module.openshift.node2-private_ip}" +} + +output "bastion-public_dns" { + value = "${module.openshift.bastion-public_dns}" +} +output "bastion-public_ip" { + value = "${module.openshift.bastion-public_ip}" +} +output "bastion-private_dns" { + value = "${module.openshift.bastion-private_dns}" +} +output "bastion-private_ip" { + value = "${module.openshift.bastion-private_ip}" } diff --git a/modules/openshift/02-security-groups.tf b/modules/openshift/02-security-groups.tf index 92990dc..6eecbf1 100644 ---
a/modules/openshift/02-security-groups.tf +++ b/modules/openshift/02-security-groups.tf @@ -1,9 +1,5 @@ -// This is not the best way to handle security groups for an OpenShift cluster, -// as the various different needs are bundled into one security group. However -// this suffices for a simple demo. -// IMPORTANT: This is *not* production ready. SSH access is allowed to all -// instances from anywhere. - +// This security group allows intra-node communication on all ports with all +// protocols. resource "aws_security_group" "openshift-vpc" { name = "openshift-vpc" description = "Default security group that allows all instances in the VPC to talk to each other over any port and protocol." @@ -29,11 +25,11 @@ resource "aws_security_group" "openshift-vpc" { } } -// This security group allows public access to the instances for HTTP, HTTPS -// common HTTP/S proxy ports and SSH. -resource "aws_security_group" "openshift-public-access" { - name = "openshift-public-access" - description = "Security group that allows public access to instances, HTTP, HTTPS, SSH and more." +// This security group allows public ingress to the instances for HTTP, HTTPS +// and common HTTP/S proxy ports. +resource "aws_security_group" "openshift-public-ingress" { + name = "openshift-public-ingress" + description = "Security group that allows public ingress to instances, HTTP, HTTPS and more." vpc_id = "${aws_vpc.openshift.id}" // HTTP ingress { from_port = 80 to_port = 80 protocol = "tcp" cidr_blocks = ["0.0.0.0/0"] } @@ -68,6 +64,47 @@ resource "aws_security_group" "openshift-public-access" { cidr_blocks = ["0.0.0.0/0"] } + tags { + Name = "OpenShift Public Ingress" + Project = "openshift" + } +} + +// This security group allows public egress from the instances for HTTP and +// HTTPS, which is needed for yum updates, git access etc. +resource "aws_security_group" "openshift-public-egress" { + name = "openshift-public-egress" + description = "Security group that allows egress to the internet for instances over HTTP and HTTPS."
+ vpc_id = "${aws_vpc.openshift.id}" + + // HTTP + egress { + from_port = 80 + to_port = 80 + protocol = "tcp" + cidr_blocks = ["0.0.0.0/0"] + } + + // HTTPS + egress { + from_port = 443 + to_port = 443 + protocol = "tcp" + cidr_blocks = ["0.0.0.0/0"] + } + + tags { + Name = "OpenShift Public Egress" + Project = "openshift" + } +} + +// Security group which allows SSH access to a host. Used for the bastion. +resource "aws_security_group" "openshift-ssh" { + name = "openshift-ssh" + description = "Security group that allows public ingress over SSH." + vpc_id = "${aws_vpc.openshift.id}" + // SSH ingress { from_port = 22 @@ -77,7 +114,7 @@ resource "aws_security_group" "openshift-public-access" { } tags { - Name = "OpenShift Public Access" + Name = "OpenShift SSH Access" Project = "openshift" } } diff --git a/modules/openshift/03-roles.tf b/modules/openshift/03-roles.tf new file mode 100644 index 0000000..4133a92 --- /dev/null +++ b/modules/openshift/03-roles.tf @@ -0,0 +1,64 @@ +// Create a role which OpenShift instances will assume. +// This role has a policy saying it can be assumed by ec2 +// instances. +resource "aws_iam_role" "openshift-instance-role" { + name = "openshift-instance-role" + + assume_role_policy = <'. -resource "aws_route53_record" "master-console-a-record" { - zone_id = "${aws_route53_zone.external.zone_id}" - name = "console.${var.public_domain}" - type = "A" - ttl = 300 - records = [ - "${aws_instance.master.public_ip}" - ] -} - -// Also add a wildcard - this'll be for services etc. -resource "aws_route53_record" "master-wildcard-a-record" { - zone_id = "${aws_route53_zone.external.zone_id}" - name = "*.${var.public_domain}" - type = "A" - ttl = 300 - records = [ - "${aws_instance.master.public_ip}" - ] -} diff --git a/modules/openshift/06-bastion.tf b/modules/openshift/06-bastion.tf new file mode 100644 index 0000000..5bf9224 --- /dev/null +++ b/modules/openshift/06-bastion.tf @@ -0,0 +1,46 @@ +// Define an Amazon Linux AMI.
+data "aws_ami" "amazonlinux" { + most_recent = true + + owners = ["137112412989"] + + filter { + name = "architecture" + values = ["x86_64"] + } + + filter { + name = "root-device-type" + values = ["ebs"] + } + + filter { + name = "virtualization-type" + values = ["hvm"] + } + + filter { + name = "name" + values = ["amzn-ami-hvm-*"] + } +} + +// Create the bastion instance. +resource "aws_instance" "bastion" { + ami = "${data.aws_ami.amazonlinux.id}" + instance_type = "t2.micro" + subnet_id = "${aws_subnet.public-subnet.id}" + + security_groups = [ + "${aws_security_group.openshift-vpc.id}", + "${aws_security_group.openshift-ssh.id}", + "${aws_security_group.openshift-public-egress.id}", + ] + + key_name = "${aws_key_pair.keypair.key_name}" + + tags { + Name = "OpenShift Bastion" + Project = "openshift" + } +} diff --git a/modules/openshift/99-outputs.tf b/modules/openshift/99-outputs.tf index 91e3e90..e0e9d89 100644 --- a/modules/openshift/99-outputs.tf +++ b/modules/openshift/99-outputs.tf @@ -1,10 +1,52 @@ // Output some useful variables for quick SSH access etc.
-output "master-dns" { +output "master-public_dns" { value = "${aws_instance.master.public_dns}" } -output "node1-dns" { +output "master-public_ip" { + value = "${aws_instance.master.public_ip}" +} +output "master-private_dns" { + value = "${aws_instance.master.private_dns}" +} +output "master-private_ip" { + value = "${aws_instance.master.private_ip}" +} + +output "node1-public_dns" { value = "${aws_instance.node1.public_dns}" } -output "node2-dns" { +output "node1-public_ip" { + value = "${aws_instance.node1.public_ip}" +} +output "node1-private_dns" { + value = "${aws_instance.node1.private_dns}" +} +output "node1-private_ip" { + value = "${aws_instance.node1.private_ip}" +} + +output "node2-public_dns" { value = "${aws_instance.node2.public_dns}" } +output "node2-public_ip" { + value = "${aws_instance.node2.public_ip}" +} +output "node2-private_dns" { + value = "${aws_instance.node2.private_dns}" +} +output "node2-private_ip" { + value = "${aws_instance.node2.private_ip}" +} + +output "bastion-public_dns" { + value = "${aws_instance.bastion.public_dns}" +} +output "bastion-public_ip" { + value = "${aws_instance.bastion.public_ip}" +} +output "bastion-private_dns" { + value = "${aws_instance.bastion.private_dns}" +} +output "bastion-private_ip" { + value = "${aws_instance.bastion.private_ip}" +} diff --git a/modules/openshift/files/setup-master.sh b/modules/openshift/files/setup-master.sh new file mode 100644 index 0000000..5a5febc --- /dev/null +++ b/modules/openshift/files/setup-master.sh @@ -0,0 +1,50 @@ +#!/usr/bin/env bash + +# This script template is expected to be populated during the setup of an +# OpenShift node. It runs on host startup. + +# Log everything we do. +set -x +exec > /var/log/user-data.log 2>&1 + +# Create a folder to hold our AWS logs config. +# mkdir -p /var/awslogs/etc + +# Download and run the AWS logs agent.
+curl https://s3.amazonaws.com/aws-cloudwatch/downloads/latest/awslogs-agent-setup.py -O +python ./awslogs-agent-setup.py --non-interactive --region us-east-1 -c /var/awslogs/etc/awslogs.conf + +# Create the awslogs config. +cat >> /var/awslogs/etc/awslogs.conf <<- EOF +[/var/log/user-data.log] +file = /var/log/user-data.log +log_group_name = /var/log/user-data.log +log_stream_name = {instance_id} +EOF + +# Start the awslogs service, also start on reboot. +# Note: Errors go to /var/log/awslogs.log +service awslogs restart +chkconfig awslogs on + +# OpenShift setup +# See: https://docs.openshift.org/latest/install_config/install/host_preparation.html + +# Install packages required to set up OpenShift. +yum install -y wget git net-tools bind-utils iptables-services bridge-utils bash-completion +yum update -y + +# Note: The step below is not in the official docs, I needed it to install +# Docker. If anyone finds out why, I'd love to know. +# See: https://forums.aws.amazon.com/thread.jspa?messageID=574126 +yum-config-manager --enable rhui-REGION-rhel-server-extras + +# Docker setup. Check the version with `docker version`, should be 1.12. +yum install -y docker + +# Update the docker config to allow OpenShift's local insecure registry. +sed -i '/OPTIONS=.*/c\OPTIONS="--selinux-enabled --insecure-registry 172.30.0.0/16 --log-opt max-size=1M --log-opt max-file=3"' \ +/etc/sysconfig/docker +systemctl restart docker + +# Note we are not configuring Docker storage as per the guide. diff --git a/modules/openshift/files/setup-node.sh b/modules/openshift/files/setup-node.sh new file mode 100644 index 0000000..5a5febc --- /dev/null +++ b/modules/openshift/files/setup-node.sh @@ -0,0 +1,50 @@ +#!/usr/bin/env bash + +# This script template is expected to be populated during the setup of an +# OpenShift node. It runs on host startup. + +# Log everything we do. +set -x +exec > /var/log/user-data.log 2>&1 + +# Create a folder to hold our AWS logs config.
+# mkdir -p /var/awslogs/etc + +# Download and run the AWS logs agent. +curl https://s3.amazonaws.com/aws-cloudwatch/downloads/latest/awslogs-agent-setup.py -O +python ./awslogs-agent-setup.py --non-interactive --region us-east-1 -c /var/awslogs/etc/awslogs.conf + +# Create the awslogs config. +cat >> /var/awslogs/etc/awslogs.conf <<- EOF +[/var/log/user-data.log] +file = /var/log/user-data.log +log_group_name = /var/log/user-data.log +log_stream_name = {instance_id} +EOF + +# Start the awslogs service, also start on reboot. +# Note: Errors go to /var/log/awslogs.log +service awslogs restart +chkconfig awslogs on + +# OpenShift setup +# See: https://docs.openshift.org/latest/install_config/install/host_preparation.html + +# Install packages required to set up OpenShift. +yum install -y wget git net-tools bind-utils iptables-services bridge-utils bash-completion +yum update -y + +# Note: The step below is not in the official docs, I needed it to install +# Docker. If anyone finds out why, I'd love to know. +# See: https://forums.aws.amazon.com/thread.jspa?messageID=574126 +yum-config-manager --enable rhui-REGION-rhel-server-extras + +# Docker setup. Check the version with `docker version`, should be 1.12. +yum install -y docker + +# Update the docker config to allow OpenShift's local insecure registry. +sed -i '/OPTIONS=.*/c\OPTIONS="--selinux-enabled --insecure-registry 172.30.0.0/16 --log-opt max-size=1M --log-opt max-file=3"' \ +/etc/sysconfig/docker +systemctl restart docker + +# Note we are not configuring Docker storage as per the guide.
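The `sed` edit in the setup scripts above rewrites the entire Docker `OPTIONS` line in place. The same substitution can be tried safely on a scratch copy of a config file (a sketch only; the real `/etc/sysconfig/docker` is never touched, and the initial contents of the scratch file are an assumed example):

```shell
# Build a scratch copy of a Docker sysconfig file with a placeholder OPTIONS line.
tmp=$(mktemp)
printf 'OPTIONS="--default-ulimit nofile=1024"\nDOCKER_CERT_PATH=/etc/docker\n' > "$tmp"

# Same substitution as in the setup scripts: 'c\' replaces the whole matching line.
sed -i '/OPTIONS=.*/c\OPTIONS="--selinux-enabled --insecure-registry 172.30.0.0/16 --log-opt max-size=1M --log-opt max-file=3"' "$tmp"

# Show the rewritten line; other lines in the file are left untouched.
result=$(grep '^OPTIONS=' "$tmp")
echo "$result"
rm -f "$tmp"
```

The `c\` command replaces the whole line rather than editing within it, which is why the substitution works regardless of what `OPTIONS` was set to before.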
diff --git a/terraform.tfstate b/terraform.tfstate index 8af3275..8d05ba8 100644 --- a/terraform.tfstate +++ b/terraform.tfstate @@ -1,7 +1,7 @@ { "version": 3, "terraform_version": "0.8.1", - "serial": 9, + "serial": 19, "lineage": "0011e481-2822-42cf-ada3-4655a9ed3816", "modules": [ { @@ -9,20 +9,85 @@ "root" ], "outputs": { - "master-dns": { + "bastion-private_dns": { "sensitive": false, "type": "string", - "value": "ec2-54-88-123-202.compute-1.amazonaws.com" + "value": "ip-10-0-1-198.ec2.internal" }, - "node1-dns": { + "bastion-private_ip": { "sensitive": false, "type": "string", - "value": "ec2-52-91-197-28.compute-1.amazonaws.com" + "value": "10.0.1.198" }, - "node2-dns": { + "bastion-public_dns": { "sensitive": false, "type": "string", - "value": "ec2-54-160-36-86.compute-1.amazonaws.com" + "value": "ec2-54-152-212-19.compute-1.amazonaws.com" + }, + "bastion-public_ip": { + "sensitive": false, + "type": "string", + "value": "54.152.212.19" + }, + "master-private_dns": { + "sensitive": false, + "type": "string", + "value": "ip-10-0-1-199.ec2.internal" + }, + "master-private_ip": { + "sensitive": false, + "type": "string", + "value": "10.0.1.199" + }, + "master-public_dns": { + "sensitive": false, + "type": "string", + "value": "ec2-54-208-6-234.compute-1.amazonaws.com" + }, + "master-public_ip": { + "sensitive": false, + "type": "string", + "value": "54.208.6.234" + }, + "node1-private_dns": { + "sensitive": false, + "type": "string", + "value": "ip-10-0-1-98.ec2.internal" + }, + "node1-private_ip": { + "sensitive": false, + "type": "string", + "value": "10.0.1.98" + }, + "node1-public_dns": { + "sensitive": false, + "type": "string", + "value": "ec2-54-205-212-122.compute-1.amazonaws.com" + }, + "node1-public_ip": { + "sensitive": false, + "type": "string", + "value": "54.205.212.122" + }, + "node2-private_dns": { + "sensitive": false, + "type": "string", + "value": "ip-10-0-1-215.ec2.internal" + }, + "node2-private_ip": { + "sensitive": false, + "type": 
"string", + "value": "10.0.1.215" + }, + "node2-public_dns": { + "sensitive": false, + "type": "string", + "value": "ec2-52-90-148-134.compute-1.amazonaws.com" + }, + "node2-public_ip": { + "sensitive": false, + "type": "string", + "value": "52.90.148.134" } }, "resources": {}, @@ -43,34 +108,241 @@ "openshift" ], "outputs": { - "master-dns": { + "bastion-private_dns": { "sensitive": false, "type": "string", - "value": "ec2-54-88-123-202.compute-1.amazonaws.com" + "value": "ip-10-0-1-198.ec2.internal" }, - "node1-dns": { + "bastion-private_ip": { "sensitive": false, "type": "string", - "value": "ec2-52-91-197-28.compute-1.amazonaws.com" + "value": "10.0.1.198" }, - "node2-dns": { + "bastion-public_dns": { "sensitive": false, "type": "string", - "value": "ec2-54-160-36-86.compute-1.amazonaws.com" + "value": "ec2-54-152-212-19.compute-1.amazonaws.com" + }, + "bastion-public_ip": { + "sensitive": false, + "type": "string", + "value": "54.152.212.19" + }, + "master-private_dns": { + "sensitive": false, + "type": "string", + "value": "ip-10-0-1-199.ec2.internal" + }, + "master-private_ip": { + "sensitive": false, + "type": "string", + "value": "10.0.1.199" + }, + "master-public_dns": { + "sensitive": false, + "type": "string", + "value": "ec2-54-208-6-234.compute-1.amazonaws.com" + }, + "master-public_ip": { + "sensitive": false, + "type": "string", + "value": "54.208.6.234" + }, + "node1-private_dns": { + "sensitive": false, + "type": "string", + "value": "ip-10-0-1-98.ec2.internal" + }, + "node1-private_ip": { + "sensitive": false, + "type": "string", + "value": "10.0.1.98" + }, + "node1-public_dns": { + "sensitive": false, + "type": "string", + "value": "ec2-54-205-212-122.compute-1.amazonaws.com" + }, + "node1-public_ip": { + "sensitive": false, + "type": "string", + "value": "54.205.212.122" + }, + "node2-private_dns": { + "sensitive": false, + "type": "string", + "value": "ip-10-0-1-215.ec2.internal" + }, + "node2-private_ip": { + "sensitive": false, + "type": 
"string", + "value": "10.0.1.215" + }, + "node2-public_dns": { + "sensitive": false, + "type": "string", + "value": "ec2-52-90-148-134.compute-1.amazonaws.com" + }, + "node2-public_ip": { + "sensitive": false, + "type": "string", + "value": "52.90.148.134" } }, "resources": { + "aws_iam_instance_profile.openshift-instance-profile": { + "type": "aws_iam_instance_profile", + "depends_on": [ + "aws_iam_role.openshift-instance-role" + ], + "primary": { + "id": "openshift-instance-profile", + "attributes": { + "arn": "arn:aws:iam::705383350627:instance-profile/openshift-instance-profile", + "id": "openshift-instance-profile", + "name": "openshift-instance-profile", + "path": "/", + "roles.#": "1", + "roles.1717939172": "openshift-instance-role" + }, + "meta": {}, + "tainted": false + }, + "deposed": [], + "provider": "" + }, + "aws_iam_policy.openshift-policy-forward-logs": { + "type": "aws_iam_policy", + "depends_on": [], + "primary": { + "id": "arn:aws:iam::705383350627:policy/openshift-instance-forward-logs", + "attributes": { + "arn": "arn:aws:iam::705383350627:policy/openshift-instance-forward-logs", + "description": "Allows an instance to forward logs to CloudWatch", + "id": "arn:aws:iam::705383350627:policy/openshift-instance-forward-logs", + "name": "openshift-instance-forward-logs", + "path": "/", + "policy": "{\n \"Version\": \"2012-10-17\",\n \"Statement\": [\n {\n \"Effect\": \"Allow\",\n \"Action\": [\n \"logs:CreateLogGroup\",\n \"logs:CreateLogStream\",\n \"logs:PutLogEvents\",\n \"logs:DescribeLogStreams\"\n ],\n \"Resource\": [\n \"arn:aws:logs:*:*:*\"\n ]\n }\n ]\n}\n" + }, + "meta": {}, + "tainted": false + }, + "deposed": [], + "provider": "" + }, + "aws_iam_policy_attachment.openshift-attachment-forward-logs": { + "type": "aws_iam_policy_attachment", + "depends_on": [ + "aws_iam_policy.openshift-policy-forward-logs", + "aws_iam_role.openshift-instance-role" + ], + "primary": { + "id": "openshift-attachment-forward-logs", + "attributes": { + 
"groups.#": "0", + "id": "openshift-attachment-forward-logs", + "name": "openshift-attachment-forward-logs", + "policy_arn": "arn:aws:iam::705383350627:policy/openshift-instance-forward-logs", + "roles.#": "1", + "roles.1717939172": "openshift-instance-role", + "users.#": "0" + }, + "meta": {}, + "tainted": false + }, + "deposed": [], + "provider": "" + }, + "aws_iam_role.openshift-instance-role": { + "type": "aws_iam_role", + "depends_on": [], + "primary": { + "id": "openshift-instance-role", + "attributes": { + "arn": "arn:aws:iam::705383350627:role/openshift-instance-role", + "assume_role_policy": "{\n \"Version\": \"2012-10-17\",\n \"Statement\": [\n {\n \"Action\": \"sts:AssumeRole\",\n \"Principal\": {\n \"Service\": \"ec2.amazonaws.com\"\n },\n \"Effect\": \"Allow\",\n \"Sid\": \"\"\n }\n ]\n}\n", + "create_date": "2017-01-30T04:10:40Z", + "id": "openshift-instance-role", + "name": "openshift-instance-role", + "path": "/", + "unique_id": "AROAILT3FTXQFM73K44WS" + }, + "meta": {}, + "tainted": false + }, + "deposed": [], + "provider": "" + }, + "aws_instance.bastion": { + "type": "aws_instance", + "depends_on": [ + "aws_key_pair.keypair", + "aws_security_group.openshift-public-egress", + "aws_security_group.openshift-ssh", + "aws_security_group.openshift-vpc", + "aws_subnet.public-subnet", + "data.aws_ami.amazonlinux" + ], + "primary": { + "id": "i-001382932f6463dd9", + "attributes": { + "ami": "ami-0b33d91d", + "associate_public_ip_address": "true", + "availability_zone": "us-east-1a", + "disable_api_termination": "false", + "ebs_block_device.#": "0", + "ebs_optimized": "false", + "ephemeral_block_device.#": "0", + "iam_instance_profile": "", + "id": "i-001382932f6463dd9", + "instance_state": "running", + "instance_type": "t2.micro", + "key_name": "openshift", + "monitoring": "false", + "network_interface_id": "eni-ad66fb4c", + "private_dns": "ip-10-0-1-198.ec2.internal", + "private_ip": "10.0.1.198", + "public_dns": 
"ec2-54-152-212-19.compute-1.amazonaws.com", + "public_ip": "54.152.212.19", + "root_block_device.#": "1", + "root_block_device.0.delete_on_termination": "true", + "root_block_device.0.iops": "100", + "root_block_device.0.volume_size": "8", + "root_block_device.0.volume_type": "gp2", + "security_groups.#": "0", + "source_dest_check": "true", + "subnet_id": "subnet-0b516d42", + "tags.%": "2", + "tags.Name": "OpenShift Bastion", + "tags.Project": "openshift", + "tenancy": "default", + "vpc_security_group_ids.#": "3", + "vpc_security_group_ids.1364463654": "sg-4520a339", + "vpc_security_group_ids.376553491": "sg-4a20a336", + "vpc_security_group_ids.980584819": "sg-4720a33b" + }, + "meta": { + "schema_version": "1" + }, + "tainted": false + }, + "deposed": [], + "provider": "" + }, "aws_instance.master": { "type": "aws_instance", "depends_on": [ + "aws_iam_instance_profile.openshift-instance-profile", "aws_key_pair.keypair", - "aws_security_group.openshift-public-access", + "aws_security_group.openshift-public-egress", + "aws_security_group.openshift-public-ingress", "aws_security_group.openshift-vpc", "aws_subnet.public-subnet", - "data.aws_ami.rhel7_2" + "data.aws_ami.rhel7_2", + "data.template_file.setup-master" ], "primary": { - "id": "i-0a58b7bbcc41ecaf7", + "id": "i-0275382deea654c2d", "attributes": { "ami": "ami-873e6190", "associate_public_ip_address": "true", @@ -79,17 +351,17 @@ "ebs_block_device.#": "0", "ebs_optimized": "false", "ephemeral_block_device.#": "0", - "iam_instance_profile": "", - "id": "i-0a58b7bbcc41ecaf7", + "iam_instance_profile": "openshift-instance-profile", + "id": "i-0275382deea654c2d", "instance_state": "running", "instance_type": "t2.large", "key_name": "openshift", "monitoring": "false", - "network_interface_id": "eni-c67de827", - "private_dns": "ip-10-0-1-120.ec2.internal", - "private_ip": "10.0.1.120", - "public_dns": "ec2-54-88-123-202.compute-1.amazonaws.com", - "public_ip": "54.88.123.202", + "network_interface_id": 
"eni-476af7a6", + "private_dns": "ip-10-0-1-199.ec2.internal", + "private_ip": "10.0.1.199", + "public_dns": "ec2-54-208-6-234.compute-1.amazonaws.com", + "public_ip": "54.208.6.234", "root_block_device.#": "1", "root_block_device.0.delete_on_termination": "true", "root_block_device.0.iops": "0", @@ -97,14 +369,16 @@ "root_block_device.0.volume_type": "standard", "security_groups.#": "0", "source_dest_check": "true", - "subnet_id": "subnet-fff7cab6", + "subnet_id": "subnet-0b516d42", "tags.%": "2", "tags.Name": "OpenShift Master", "tags.Project": "openshift", "tenancy": "default", - "vpc_security_group_ids.#": "2", - "vpc_security_group_ids.1898191372": "sg-ce53d4b2", - "vpc_security_group_ids.3661969230": "sg-c853d4b4" + "user_data": "a196f34bc00cd1cc62ec0105e975d0b3b7ec9af5", + "vpc_security_group_ids.#": "3", + "vpc_security_group_ids.1364463654": "sg-4520a339", + "vpc_security_group_ids.84726653": "sg-4620a33a", + "vpc_security_group_ids.980584819": "sg-4720a33b" }, "meta": { "schema_version": "1" @@ -117,14 +391,17 @@ "aws_instance.node1": { "type": "aws_instance", "depends_on": [ + "aws_iam_instance_profile.openshift-instance-profile", "aws_key_pair.keypair", - "aws_security_group.openshift-public-access", + "aws_security_group.openshift-public-egress", + "aws_security_group.openshift-public-ingress", "aws_security_group.openshift-vpc", "aws_subnet.public-subnet", - "data.aws_ami.rhel7_2" + "data.aws_ami.rhel7_2", + "data.template_file.setup-node" ], "primary": { - "id": "i-00ce95d2d59503100", + "id": "i-0ec2d3c984d88c3a8", "attributes": { "ami": "ami-873e6190", "associate_public_ip_address": "true", @@ -133,17 +410,17 @@ "ebs_block_device.#": "0", "ebs_optimized": "false", "ephemeral_block_device.#": "0", - "iam_instance_profile": "", - "id": "i-00ce95d2d59503100", + "iam_instance_profile": "openshift-instance-profile", + "id": "i-0ec2d3c984d88c3a8", "instance_state": "running", "instance_type": "t2.large", "key_name": "openshift", "monitoring": "false", - 
"network_interface_id": "eni-d97ce938", - "private_dns": "ip-10-0-1-210.ec2.internal", - "private_ip": "10.0.1.210", - "public_dns": "ec2-52-91-197-28.compute-1.amazonaws.com", - "public_ip": "52.91.197.28", + "network_interface_id": "eni-8f64f96e", + "private_dns": "ip-10-0-1-98.ec2.internal", + "private_ip": "10.0.1.98", + "public_dns": "ec2-54-205-212-122.compute-1.amazonaws.com", + "public_ip": "54.205.212.122", "root_block_device.#": "1", "root_block_device.0.delete_on_termination": "true", "root_block_device.0.iops": "0", @@ -151,14 +428,16 @@ "root_block_device.0.volume_type": "standard", "security_groups.#": "0", "source_dest_check": "true", - "subnet_id": "subnet-fff7cab6", + "subnet_id": "subnet-0b516d42", "tags.%": "2", "tags.Name": "OpenShift Node 1", "tags.Project": "openshift", "tenancy": "default", - "vpc_security_group_ids.#": "2", - "vpc_security_group_ids.1898191372": "sg-ce53d4b2", - "vpc_security_group_ids.3661969230": "sg-c853d4b4" + "user_data": "a196f34bc00cd1cc62ec0105e975d0b3b7ec9af5", + "vpc_security_group_ids.#": "3", + "vpc_security_group_ids.1364463654": "sg-4520a339", + "vpc_security_group_ids.84726653": "sg-4620a33a", + "vpc_security_group_ids.980584819": "sg-4720a33b" }, "meta": { "schema_version": "1" @@ -171,14 +450,16 @@ "aws_instance.node2": { "type": "aws_instance", "depends_on": [ + "aws_iam_instance_profile.openshift-instance-profile", "aws_key_pair.keypair", - "aws_security_group.openshift-public-access", + "aws_security_group.openshift-public-egress", + "aws_security_group.openshift-public-ingress", "aws_security_group.openshift-vpc", "aws_subnet.public-subnet", "data.aws_ami.rhel7_2" ], "primary": { - "id": "i-090eee02939ee28e3", + "id": "i-0d8251f79c573670e", "attributes": { "ami": "ami-873e6190", "associate_public_ip_address": "true", @@ -187,17 +468,17 @@ "ebs_block_device.#": "0", "ebs_optimized": "false", "ephemeral_block_device.#": "0", - "iam_instance_profile": "", - "id": "i-090eee02939ee28e3", + 
"iam_instance_profile": "openshift-instance-profile", + "id": "i-0d8251f79c573670e", "instance_state": "running", "instance_type": "t2.large", "key_name": "openshift", "monitoring": "false", - "network_interface_id": "eni-5672e7b7", - "private_dns": "ip-10-0-1-38.ec2.internal", - "private_ip": "10.0.1.38", - "public_dns": "ec2-54-160-36-86.compute-1.amazonaws.com", - "public_ip": "54.160.36.86", + "network_interface_id": "eni-d360fd32", + "private_dns": "ip-10-0-1-215.ec2.internal", + "private_ip": "10.0.1.215", + "public_dns": "ec2-52-90-148-134.compute-1.amazonaws.com", + "public_ip": "52.90.148.134", "root_block_device.#": "1", "root_block_device.0.delete_on_termination": "true", "root_block_device.0.iops": "0", @@ -205,14 +486,15 @@ "root_block_device.0.volume_type": "standard", "security_groups.#": "0", "source_dest_check": "true", - "subnet_id": "subnet-fff7cab6", + "subnet_id": "subnet-0b516d42", "tags.%": "2", "tags.Name": "OpenShift Node 2", "tags.Project": "openshift", "tenancy": "default", - "vpc_security_group_ids.#": "2", - "vpc_security_group_ids.1898191372": "sg-ce53d4b2", - "vpc_security_group_ids.3661969230": "sg-c853d4b4" + "vpc_security_group_ids.#": "3", + "vpc_security_group_ids.1364463654": "sg-4520a339", + "vpc_security_group_ids.84726653": "sg-4620a33a", + "vpc_security_group_ids.980584819": "sg-4720a33b" }, "meta": { "schema_version": "1" @@ -228,13 +510,13 @@ "aws_vpc.openshift" ], "primary": { - "id": "igw-4ae0b42d", + "id": "igw-02b9ee65", "attributes": { - "id": "igw-4ae0b42d", + "id": "igw-02b9ee65", "tags.%": "2", "tags.Name": "OpenShift IGW", "tags.Project": "openshift", - "vpc_id": "vpc-0c0d9a6a" + "vpc_id": "vpc-4848de2e" }, "meta": {}, "tainted": false @@ -248,7 +530,6 @@ "primary": { "id": "openshift", "attributes": { - "fingerprint": "91:a6:f4:81:eb:ac:2f:13:c5:ed:73:8d:e6:74:b6:e7", "id": "openshift", "key_name": "openshift", "public_key": "ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAACAQCfIclj04SSpHmxPHRo9CWMlWMpQWf1hVLOuzlOMEU+BYpBE7poqbbbUvoqLRatgOCW8sMkg6mV1zxQ7AljpEiwRQEmjzANKw+DD1xq1XOcu3oLZ9shtom1Ra9uqE+yxZzfH/DWZDeuqoAd38wpym2l/UYrw/CH1kldKOtQwn5VBJTn8c/57FqqCBBA0mgSBtl/MmxrsihASFlCkhyD+jYWLKxL7tTXIuNqPZYuvdjlXFsSmDELacyWlMuq5zWIssVCDfRqIXAvOBf/z32EpxmW0rt0MrXw6lunwinZ31C8qbaejDAlLGu9nWu/apgMfKXRw7vJiaV4e5svL89eA2iyuTDdthiJl7oB5wmT/71CMGPNPUhI9ofAoyMug/JFoKatIFFz+7R3p4kmGTMq1FpeErG5vVH9fyF8MD6bJH44sDdHM0QHG04fqwY5QY5xcZCJDDg7sy7DC/g4vhI5nSUC2vqhk45qA1lcl4Z3wYJJW6RbKLq8MhPjs+n2SuaEzBmkM4dNBs/BRyZERnlg6Qfv/UEqMyZEoX8q5ep4fJ0pdi01/D23L7PcTZtUnXAf5tXyQJz+XldDqDxdhB+oxUAc2N3KvbRWq5Mb2jBLogzsZkRYnSGKLJ56qLtNTJkmHmpGt9jh3rCsGBLyzDuR0rHGSqzY7rQ7O8/zEJsS11NPYw== dwmkerr@gmail.com" @@ -268,74 +549,18 @@ "aws_route53_zone.internal" ], "primary": { - "id": "Z38WOXDGMJGX4S_master.openshift.local_A", + "id": "Z16Z0X6C2UAORQ_master.openshift.local_A", "attributes": { "fqdn": "master.openshift.local", "health_check_id": "", - "id": "Z38WOXDGMJGX4S_master.openshift.local_A", + "id": "Z16Z0X6C2UAORQ_master.openshift.local_A", "name": "master.openshift.local", "records.#": "1", - "records.18829971": "10.0.1.120", + "records.2604086268": "10.0.1.199", "set_identifier": "", "ttl": "300", "type": "A", - "zone_id": "Z38WOXDGMJGX4S" - }, - "meta": { - "schema_version": "2" - }, - "tainted": false - }, - "deposed": [], - "provider": "" - }, - "aws_route53_record.master-console-a-record": { - "type": "aws_route53_record", - "depends_on": [ - "aws_instance.master", - "aws_route53_zone.external" - ], - "primary": { - "id": "Z27X8ZY05MQC88_console.openshifting.com_A", - "attributes": { - "fqdn": "console.openshifting.com", - "health_check_id": "", - "id": "Z27X8ZY05MQC88_console.openshifting.com_A", - "name": "console.openshifting.com", - "records.#": "1", - "records.90318304": "54.88.123.202", - "set_identifier": "", - "ttl": "300", - "type": "A", - "zone_id": "Z27X8ZY05MQC88" - }, - "meta": { - "schema_version": "2" - }, - "tainted": 
false - }, - "deposed": [], - "provider": "" - }, - "aws_route53_record.master-wildcard-a-record": { - "type": "aws_route53_record", - "depends_on": [ - "aws_instance.master", - "aws_route53_zone.external" - ], - "primary": { - "id": "Z27X8ZY05MQC88_*.openshifting.com_A", - "attributes": { - "fqdn": "*.openshifting.com", - "health_check_id": "", - "id": "Z27X8ZY05MQC88_*.openshifting.com_A", - "name": "*.openshifting.com", - "records.#": "1", - "records.90318304": "54.88.123.202", - "set_identifier": "", - "ttl": "300", - "type": "A", - "zone_id": "Z27X8ZY05MQC88" + "zone_id": "Z16Z0X6C2UAORQ" }, "meta": { "schema_version": "2" @@ -352,18 +577,18 @@ "aws_route53_zone.internal" ], "primary": { - "id": "Z38WOXDGMJGX4S_node1.openshift.local_A", + "id": "Z16Z0X6C2UAORQ_node1.openshift.local_A", "attributes": { "fqdn": "node1.openshift.local", "health_check_id": "", - "id": "Z38WOXDGMJGX4S_node1.openshift.local_A", + "id": "Z16Z0X6C2UAORQ_node1.openshift.local_A", "name": "node1.openshift.local", "records.#": "1", - "records.678739721": "10.0.1.210", + "records.3812984385": "10.0.1.98", "set_identifier": "", "ttl": "300", "type": "A", - "zone_id": "Z38WOXDGMJGX4S" + "zone_id": "Z16Z0X6C2UAORQ" }, "meta": { "schema_version": "2" @@ -380,18 +605,18 @@ "aws_route53_zone.internal" ], "primary": { - "id": "Z38WOXDGMJGX4S_node2.openshift.local_A", + "id": "Z16Z0X6C2UAORQ_node2.openshift.local_A", "attributes": { "fqdn": "node2.openshift.local", "health_check_id": "", - "id": "Z38WOXDGMJGX4S_node2.openshift.local_A", + "id": "Z16Z0X6C2UAORQ_node2.openshift.local_A", "name": "node2.openshift.local", "records.#": "1", - "records.430599883": "10.0.1.38", + "records.1478380422": "10.0.1.215", "set_identifier": "", "ttl": "300", "type": "A", - "zone_id": "Z38WOXDGMJGX4S" + "zone_id": "Z16Z0X6C2UAORQ" }, "meta": { "schema_version": "2" @@ -401,43 +626,17 @@ "deposed": [], "provider": "" }, - "aws_route53_zone.external": { - "type": "aws_route53_zone", - "depends_on": [], - 
"primary": { - "id": "Z27X8ZY05MQC88", - "attributes": { - "comment": "OpenShift Cluster Internal DNS", - "force_destroy": "false", - "id": "Z27X8ZY05MQC88", - "name": "openshifting.com", - "name_servers.#": "4", - "name_servers.0": "ns-1417.awsdns-49.org", - "name_servers.1": "ns-1752.awsdns-27.co.uk", - "name_servers.2": "ns-63.awsdns-07.com", - "name_servers.3": "ns-859.awsdns-43.net", - "tags.%": "2", - "tags.Name": "OpenShift External DNS", - "tags.Project": "openshift", - "zone_id": "Z27X8ZY05MQC88" - }, - "meta": {}, - "tainted": false - }, - "deposed": [], - "provider": "" - }, "aws_route53_zone.internal": { "type": "aws_route53_zone", "depends_on": [ "aws_vpc.openshift" ], "primary": { - "id": "Z38WOXDGMJGX4S", + "id": "Z16Z0X6C2UAORQ", "attributes": { "comment": "OpenShift Cluster Internal DNS", "force_destroy": "false", - "id": "Z38WOXDGMJGX4S", + "id": "Z16Z0X6C2UAORQ", "name": "openshift.local", "name_servers.#": "4", "name_servers.0": "ns-0.awsdns-00.com.", @@ -447,9 +646,9 @@ "tags.%": "2", "tags.Name": "OpenShift Internal DNS", "tags.Project": "openshift", - "vpc_id": "vpc-0c0d9a6a", + "vpc_id": "vpc-4848de2e", "vpc_region": "us-east-1", - "zone_id": "Z38WOXDGMJGX4S" + "zone_id": "Z16Z0X6C2UAORQ" }, "meta": {}, "tainted": false @@ -464,21 +663,21 @@ "aws_vpc.openshift" ], "primary": { - "id": "rtb-6a1bcd13", + "id": "rtb-7fed3c06", "attributes": { - "id": "rtb-6a1bcd13", + "id": "rtb-7fed3c06", "propagating_vgws.#": "0", "route.#": "1", - "route.3846797823.cidr_block": "0.0.0.0/0", - "route.3846797823.gateway_id": "igw-4ae0b42d", - "route.3846797823.instance_id": "", - "route.3846797823.nat_gateway_id": "", - "route.3846797823.network_interface_id": "", - "route.3846797823.vpc_peering_connection_id": "", + "route.1967899660.cidr_block": "0.0.0.0/0", + "route.1967899660.gateway_id": "igw-02b9ee65", + "route.1967899660.instance_id": "", + "route.1967899660.nat_gateway_id": "", + "route.1967899660.network_interface_id": "", + 
"route.1967899660.vpc_peering_connection_id": "", "tags.%": "2", "tags.Name": "OpenShift Public Route Table", "tags.Project": "openshift", - "vpc_id": "vpc-0c0d9a6a" + "vpc_id": "vpc-4848de2e" }, "meta": {}, "tainted": false @@ -493,11 +692,52 @@ "aws_subnet.public-subnet" ], "primary": { - "id": "rtbassoc-b2972bca", + "id": "rtbassoc-4e2e9736", + "attributes": { + "id": "rtbassoc-4e2e9736", + "route_table_id": "rtb-7fed3c06", + "subnet_id": "subnet-0b516d42" + }, + "meta": {}, + "tainted": false + }, + "deposed": [], + "provider": "" + }, + "aws_security_group.openshift-public-egress": { + "type": "aws_security_group", + "depends_on": [ + "aws_vpc.openshift" + ], + "primary": { + "id": "sg-4520a339", "attributes": { - "id": "rtbassoc-b2972bca", - "route_table_id": "rtb-6a1bcd13", - "subnet_id": "subnet-fff7cab6" + "description": "Security group that allows egress to the internet for instances over HTTP and HTTPS.", + "egress.#": "2", + "egress.2214680975.cidr_blocks.#": "1", + "egress.2214680975.cidr_blocks.0": "0.0.0.0/0", + "egress.2214680975.from_port": "80", + "egress.2214680975.prefix_list_ids.#": "0", + "egress.2214680975.protocol": "tcp", + "egress.2214680975.security_groups.#": "0", + "egress.2214680975.self": "false", + "egress.2214680975.to_port": "80", + "egress.2617001939.cidr_blocks.#": "1", + "egress.2617001939.cidr_blocks.0": "0.0.0.0/0", + "egress.2617001939.from_port": "443", + "egress.2617001939.prefix_list_ids.#": "0", + "egress.2617001939.protocol": "tcp", + "egress.2617001939.security_groups.#": "0", + "egress.2617001939.self": "false", + "egress.2617001939.to_port": "443", + "id": "sg-4520a339", + "ingress.#": "0", + "name": "openshift-public-egress", + "owner_id": "705383350627", + "tags.%": "2", + "tags.Name": "OpenShift Public Access", + "tags.Project": "openshift", + "vpc_id": "vpc-4848de2e" }, "meta": {}, "tainted": false @@ -505,18 +745,18 @@ "deposed": [], "provider": "" }, - "aws_security_group.openshift-public-access": { + 
"aws_security_group.openshift-public-ingress": { "type": "aws_security_group", "depends_on": [ "aws_vpc.openshift" ], "primary": { - "id": "sg-c853d4b4", + "id": "sg-4620a33a", "attributes": { - "description": "Security group that allows public access to instances, HTTP, HTTPS, SSH and more.", + "description": "Security group that allows public ingress to instances, HTTP, HTTPS and more.", "egress.#": "0", - "id": "sg-c853d4b4", - "ingress.#": "5", + "id": "sg-4620a33a", + "ingress.#": "4", "ingress.2214680975.cidr_blocks.#": "1", "ingress.2214680975.cidr_blocks.0": "0.0.0.0/0", "ingress.2214680975.from_port": "80", @@ -524,13 +764,6 @@ "ingress.2214680975.security_groups.#": "0", "ingress.2214680975.self": "false", "ingress.2214680975.to_port": "80", - "ingress.2541437006.cidr_blocks.#": "1", - "ingress.2541437006.cidr_blocks.0": "0.0.0.0/0", - "ingress.2541437006.from_port": "22", - "ingress.2541437006.protocol": "tcp", - "ingress.2541437006.security_groups.#": "0", - "ingress.2541437006.self": "false", - "ingress.2541437006.to_port": "22", "ingress.2617001939.cidr_blocks.#": "1", "ingress.2617001939.cidr_blocks.0": "0.0.0.0/0", "ingress.2617001939.from_port": "443", @@ -552,12 +785,44 @@ "ingress.516175195.security_groups.#": "0", "ingress.516175195.self": "false", "ingress.516175195.to_port": "8080", - "name": "openshift-public-access", + "name": "openshift-public-ingress", "owner_id": "705383350627", "tags.%": "2", "tags.Name": "OpenShift Public Access", "tags.Project": "openshift", - "vpc_id": "vpc-0c0d9a6a" + "vpc_id": "vpc-4848de2e" + }, + "meta": {}, + "tainted": false + }, + "deposed": [], + "provider": "" + }, + "aws_security_group.openshift-ssh": { + "type": "aws_security_group", + "depends_on": [ + "aws_vpc.openshift" + ], + "primary": { + "id": "sg-4a20a336", + "attributes": { + "description": "Security group that allows public ingress over SSH.", + "egress.#": "0", + "id": "sg-4a20a336", + "ingress.#": "1", + "ingress.2541437006.cidr_blocks.#": "1", 
+ "ingress.2541437006.cidr_blocks.0": "0.0.0.0/0", + "ingress.2541437006.from_port": "22", + "ingress.2541437006.protocol": "tcp", + "ingress.2541437006.security_groups.#": "0", + "ingress.2541437006.self": "false", + "ingress.2541437006.to_port": "22", + "name": "openshift-ssh", + "owner_id": "705383350627", + "tags.%": "2", + "tags.Name": "OpenShift SSH Access", + "tags.Project": "openshift", + "vpc_id": "vpc-4848de2e" }, "meta": {}, "tainted": false @@ -571,7 +836,7 @@ "aws_vpc.openshift" ], "primary": { - "id": "sg-ce53d4b2", + "id": "sg-4720a33b", "attributes": { "description": "Default security group that allows all instances in the VPC to talk to each other over any port and protocol.", "egress.#": "1", @@ -582,7 +847,7 @@ "egress.753360330.security_groups.#": "0", "egress.753360330.self": "true", "egress.753360330.to_port": "0", - "id": "sg-ce53d4b2", + "id": "sg-4720a33b", "ingress.#": "1", "ingress.753360330.cidr_blocks.#": "0", "ingress.753360330.from_port": "0", @@ -595,7 +860,7 @@ "tags.%": "2", "tags.Name": "OpenShift Internal VPC", "tags.Project": "openshift", - "vpc_id": "vpc-0c0d9a6a" + "vpc_id": "vpc-4848de2e" }, "meta": {}, "tainted": false @@ -610,16 +875,16 @@ "aws_vpc.openshift" ], "primary": { - "id": "subnet-fff7cab6", + "id": "subnet-0b516d42", "attributes": { "availability_zone": "us-east-1a", "cidr_block": "10.0.1.0/24", - "id": "subnet-fff7cab6", + "id": "subnet-0b516d42", "map_public_ip_on_launch": "true", "tags.%": "2", "tags.Name": "OpenShift Public Subnet", "tags.Project": "openshift", - "vpc_id": "vpc-0c0d9a6a" + "vpc_id": "vpc-4848de2e" }, "meta": {}, "tainted": false @@ -631,19 +896,19 @@ "type": "aws_vpc", "depends_on": [], "primary": { - "id": "vpc-0c0d9a6a", + "id": "vpc-4848de2e", "attributes": { "cidr_block": "10.0.0.0/16", - "default_network_acl_id": "acl-de401bb8", - "default_route_table_id": "rtb-781bcd01", - "default_security_group_id": "sg-1053d46c", + "default_network_acl_id": "acl-03277d65", + "default_route_table_id": 
"rtb-97ea3bee", + "default_security_group_id": "sg-8921a2f5", "dhcp_options_id": "dopt-3309ea56", "enable_classiclink": "false", "enable_dns_hostnames": "true", "enable_dns_support": "true", - "id": "vpc-0c0d9a6a", + "id": "vpc-4848de2e", "instance_tenancy": "default", - "main_route_table_id": "rtb-781bcd01", + "main_route_table_id": "rtb-97ea3bee", "tags.%": "2", "tags.Name": "OpenShift VPC", "tags.Project": "openshift" @@ -654,6 +919,68 @@ "deposed": [], "provider": "" }, + "data.aws_ami.amazonlinux": { + "type": "aws_ami", + "depends_on": [], + "primary": { + "id": "ami-0b33d91d", + "attributes": { + "architecture": "x86_64", + "block_device_mappings.#": "1", + "block_device_mappings.340275815.device_name": "/dev/xvda", + "block_device_mappings.340275815.ebs.%": "6", + "block_device_mappings.340275815.ebs.delete_on_termination": "true", + "block_device_mappings.340275815.ebs.encrypted": "false", + "block_device_mappings.340275815.ebs.iops": "0", + "block_device_mappings.340275815.ebs.snapshot_id": "snap-037f1f9e6c8ea4d65", + "block_device_mappings.340275815.ebs.volume_size": "8", + "block_device_mappings.340275815.ebs.volume_type": "gp2", + "block_device_mappings.340275815.no_device": "", + "block_device_mappings.340275815.virtual_name": "", + "creation_date": "2017-01-20T23:39:56.000Z", + "description": "Amazon Linux AMI 2016.09.1.20170119 x86_64 HVM GP2", + "filter.#": "4", + "filter.1281954306.name": "root-device-type", + "filter.1281954306.values.#": "1", + "filter.1281954306.values.0": "ebs", + "filter.2313955347.name": "name", + "filter.2313955347.values.#": "1", + "filter.2313955347.values.0": "amzn-ami-hvm-*", + "filter.3386043752.name": "architecture", + "filter.3386043752.values.#": "1", + "filter.3386043752.values.0": "x86_64", + "filter.490168357.name": "virtualization-type", + "filter.490168357.values.#": "1", + "filter.490168357.values.0": "hvm", + "hypervisor": "xen", + "id": "ami-0b33d91d", + "image_id": "ami-0b33d91d", + "image_location": 
"amazon/amzn-ami-hvm-2016.09.1.20170119-x86_64-gp2", + "image_owner_alias": "amazon", + "image_type": "machine", + "most_recent": "true", + "name": "amzn-ami-hvm-2016.09.1.20170119-x86_64-gp2", + "owner_id": "137112412989", + "owners.#": "1", + "owners.0": "137112412989", + "product_codes.#": "0", + "public": "true", + "root_device_name": "/dev/xvda", + "root_device_type": "ebs", + "sriov_net_support": "simple", + "state": "available", + "state_reason.%": "2", + "state_reason.code": "UNSET", + "state_reason.message": "UNSET", + "tags.#": "0", + "virtualization_type": "hvm" + }, + "meta": {}, + "tainted": false + }, + "deposed": [], + "provider": "" + }, "data.aws_ami.rhel7_2": { "type": "aws_ami", "depends_on": [], @@ -714,6 +1041,38 @@ }, "deposed": [], "provider": "" + }, + "data.template_file.setup-master": { + "type": "template_file", + "depends_on": [], + "primary": { + "id": "0ed9578c29557f8cc79483d8cde362be7e98c025851daec98b504275e2cc1282", + "attributes": { + "id": "0ed9578c29557f8cc79483d8cde362be7e98c025851daec98b504275e2cc1282", + "rendered": "#!/usr/bin/env bash\n\n# This script template is expected to be populated during the setup of a\n# OpenShift node. 
It runs on host startup.\n\n# Log everything we do.\nset -x\nexec \u003e /var/log/user-data.log 2\u003e\u00261\n\n# Create a folder to hold our AWS logs config.\n# mkdir -p /var/awslogs/etc\n\n# Download and run the AWS logs agent.\ncurl https://s3.amazonaws.com/aws-cloudwatch/downloads/latest/awslogs-agent-setup.py -O\npython ./awslogs-agent-setup.py --non-interactive --region us-east-1 -c /var/awslogs/etc/awslogs.conf\n\n# Create a the awslogs config.\ncat \u003e\u003e /var/awslogs/etc/awslogs.conf \u003c\u003c- EOF\n[/var/log/user-data.log]\nfile = /var/log/user-data.log\nlog_group_name = /var/log/user-data.log\nlog_stream_name = {instance_id}\nEOF\n\n# Start the awslogs service, also start on reboot.\n# Note: Errors go to /var/log/awslogs.log\nservice awslogs restart\nchkconfig awslogs on\n\n# OpenShift setup\n# See: https://docs.openshift.org/latest/install_config/install/host_preparation.html\n\n# Install packages required to setup OpenShift.\nyum install -y wget git net-tools bind-utils iptables-services bridge-utils bash-completion\nyum update -y\n\n# Note: The step below is not in the official docs, I needed it to install\n# Docker. If anyone finds out why, I'd love to know.\n# See: https://forums.aws.amazon.com/thread.jspa?messageID=574126\nyum-config-manager --enable rhui-REGION-rhel-server-extras\n\n# Docker setup. Check the version with `docker version`, should be 1.12.\nyum install -y docker\n\n# Update the docker config to allow OpenShift's local insecure registry.\nsed -i '/OPTIONS=.*/c\\OPTIONS=\"--selinux-enabled --insecure-registry 172.30.0.0/16 --log-opt max-size=1M --log-opt max-file=3\"' \\\n/etc/sysconfig/docker\nsystemctl restart docker\n\n# Note we are not configuring Docker storage as per the guide.\n", + "template": "#!/usr/bin/env bash\n\n# This script template is expected to be populated during the setup of a\n# OpenShift node. 
It runs on host startup.\n\n# Log everything we do.\nset -x\nexec \u003e /var/log/user-data.log 2\u003e\u00261\n\n# Create a folder to hold our AWS logs config.\n# mkdir -p /var/awslogs/etc\n\n# Download and run the AWS logs agent.\ncurl https://s3.amazonaws.com/aws-cloudwatch/downloads/latest/awslogs-agent-setup.py -O\npython ./awslogs-agent-setup.py --non-interactive --region us-east-1 -c /var/awslogs/etc/awslogs.conf\n\n# Create a the awslogs config.\ncat \u003e\u003e /var/awslogs/etc/awslogs.conf \u003c\u003c- EOF\n[/var/log/user-data.log]\nfile = /var/log/user-data.log\nlog_group_name = /var/log/user-data.log\nlog_stream_name = {instance_id}\nEOF\n\n# Start the awslogs service, also start on reboot.\n# Note: Errors go to /var/log/awslogs.log\nservice awslogs restart\nchkconfig awslogs on\n\n# OpenShift setup\n# See: https://docs.openshift.org/latest/install_config/install/host_preparation.html\n\n# Install packages required to setup OpenShift.\nyum install -y wget git net-tools bind-utils iptables-services bridge-utils bash-completion\nyum update -y\n\n# Note: The step below is not in the official docs, I needed it to install\n# Docker. If anyone finds out why, I'd love to know.\n# See: https://forums.aws.amazon.com/thread.jspa?messageID=574126\nyum-config-manager --enable rhui-REGION-rhel-server-extras\n\n# Docker setup. 
Check the version with `docker version`, should be 1.12.\nyum install -y docker\n\n# Update the docker config to allow OpenShift's local insecure registry.\nsed -i '/OPTIONS=.*/c\\OPTIONS=\"--selinux-enabled --insecure-registry 172.30.0.0/16 --log-opt max-size=1M --log-opt max-file=3\"' \\\n/etc/sysconfig/docker\nsystemctl restart docker\n\n# Note we are not configuring Docker storage as per the guide.\n" + }, + "meta": {}, + "tainted": false + }, + "deposed": [], + "provider": "" + }, + "data.template_file.setup-node": { + "type": "template_file", + "depends_on": [], + "primary": { + "id": "0ed9578c29557f8cc79483d8cde362be7e98c025851daec98b504275e2cc1282", + "attributes": { + "id": "0ed9578c29557f8cc79483d8cde362be7e98c025851daec98b504275e2cc1282", + "rendered": "#!/usr/bin/env bash\n\n# This script template is expected to be populated during the setup of a\n# OpenShift node. It runs on host startup.\n\n# Log everything we do.\nset -x\nexec \u003e /var/log/user-data.log 2\u003e\u00261\n\n# Create a folder to hold our AWS logs config.\n# mkdir -p /var/awslogs/etc\n\n# Download and run the AWS logs agent.\ncurl https://s3.amazonaws.com/aws-cloudwatch/downloads/latest/awslogs-agent-setup.py -O\npython ./awslogs-agent-setup.py --non-interactive --region us-east-1 -c /var/awslogs/etc/awslogs.conf\n\n# Create a the awslogs config.\ncat \u003e\u003e /var/awslogs/etc/awslogs.conf \u003c\u003c- EOF\n[/var/log/user-data.log]\nfile = /var/log/user-data.log\nlog_group_name = /var/log/user-data.log\nlog_stream_name = {instance_id}\nEOF\n\n# Start the awslogs service, also start on reboot.\n# Note: Errors go to /var/log/awslogs.log\nservice awslogs restart\nchkconfig awslogs on\n\n# OpenShift setup\n# See: https://docs.openshift.org/latest/install_config/install/host_preparation.html\n\n# Install packages required to setup OpenShift.\nyum install -y wget git net-tools bind-utils iptables-services bridge-utils bash-completion\nyum update -y\n\n# Note: The step below is not in 
the official docs, I needed it to install\n# Docker. If anyone finds out why, I'd love to know.\n# See: https://forums.aws.amazon.com/thread.jspa?messageID=574126\nyum-config-manager --enable rhui-REGION-rhel-server-extras\n\n# Docker setup. Check the version with `docker version`, should be 1.12.\nyum install -y docker\n\n# Update the docker config to allow OpenShift's local insecure registry.\nsed -i '/OPTIONS=.*/c\\OPTIONS=\"--selinux-enabled --insecure-registry 172.30.0.0/16 --log-opt max-size=1M --log-opt max-file=3\"' \\\n/etc/sysconfig/docker\nsystemctl restart docker\n\n# Note we are not configuring Docker storage as per the guide.\n", + "template": "#!/usr/bin/env bash\n\n# This script template is expected to be populated during the setup of a\n# OpenShift node. It runs on host startup.\n\n# Log everything we do.\nset -x\nexec \u003e /var/log/user-data.log 2\u003e\u00261\n\n# Create a folder to hold our AWS logs config.\n# mkdir -p /var/awslogs/etc\n\n# Download and run the AWS logs agent.\ncurl https://s3.amazonaws.com/aws-cloudwatch/downloads/latest/awslogs-agent-setup.py -O\npython ./awslogs-agent-setup.py --non-interactive --region us-east-1 -c /var/awslogs/etc/awslogs.conf\n\n# Create a the awslogs config.\ncat \u003e\u003e /var/awslogs/etc/awslogs.conf \u003c\u003c- EOF\n[/var/log/user-data.log]\nfile = /var/log/user-data.log\nlog_group_name = /var/log/user-data.log\nlog_stream_name = {instance_id}\nEOF\n\n# Start the awslogs service, also start on reboot.\n# Note: Errors go to /var/log/awslogs.log\nservice awslogs restart\nchkconfig awslogs on\n\n# OpenShift setup\n# See: https://docs.openshift.org/latest/install_config/install/host_preparation.html\n\n# Install packages required to setup OpenShift.\nyum install -y wget git net-tools bind-utils iptables-services bridge-utils bash-completion\nyum update -y\n\n# Note: The step below is not in the official docs, I needed it to install\n# Docker. 
If anyone finds out why, I'd love to know.\n# See: https://forums.aws.amazon.com/thread.jspa?messageID=574126\nyum-config-manager --enable rhui-REGION-rhel-server-extras\n\n# Docker setup. Check the version with `docker version`, should be 1.12.\nyum install -y docker\n\n# Update the docker config to allow OpenShift's local insecure registry.\nsed -i '/OPTIONS=.*/c\\OPTIONS=\"--selinux-enabled --insecure-registry 172.30.0.0/16 --log-opt max-size=1M --log-opt max-file=3\"' \\\n/etc/sysconfig/docker\nsystemctl restart docker\n\n# Note we are not configuring Docker storage as per the guide.\n" + }, + "meta": {}, + "tainted": false + }, + "deposed": [], + "provider": "" } }, "depends_on": []