Completed the installation of the platform; all that is left is the public DNS for the router.
dwmkerr committed Jan 30, 2017
1 parent 76ede4d commit 09b9942
Showing 14 changed files with 1,036 additions and 264 deletions.
1 change: 1 addition & 0 deletions .gitignore
@@ -1,2 +1,3 @@
# Infrastructure ignores.
.terraform
terraform.tfvars
60 changes: 48 additions & 12 deletions README.md
@@ -1,5 +1,14 @@
# terraform-aws-openshift
It's OpenShift, on AWS, handled by Terraform. But it's also WIP, eh?

This project shows you how to set up OpenShift Origin on AWS using Terraform.

## Overview

Terraform is used to create infrastructure as shown:

![Network Diagram](./docs/network-diagram.png)

Once the infrastructure is set up, a single command is used to install the OpenShift platform on the hosts.

## Prerequisites

@@ -26,17 +35,11 @@ This will keep your AWS credentials in the `$HOME/.aws/credentials` file, which
The cluster is implemented as a [Terraform Module](https://www.terraform.io/docs/modules/index.html). To launch, just run:

```bash
# Create the module.
terraform get

# See what we will create, or do a dry run!
terraform plan

# Create the cluster!
terraform apply
# Get the modules, create the infrastructure.
terraform get && terraform apply
```
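
Under the hood, [`main.tf`](./main.tf) consumes the cluster as a module from the `modules/openshift` directory. A minimal sketch of that wiring (the inputs shown here are illustrative; the real block is in the collapsed part of `main.tf`):

```
// Sketch only: the module source is the local modules/openshift directory,
// and the region is passed in as a variable (Terraform prompts if it is unset).
module "openshift" {
  source = "./modules/openshift"
  region = "${var.region}"
}
```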

You will be asked for a region to deploy in, use `us-east-1` should work fine! You can configure the nuances of how the cluster is created in the [`main.tf`](./main.tf) file. Once created, you will see a message like:
You will be asked for a region to deploy in; `us-east-1` or your preferred region will work fine. You can configure the nuances of how the cluster is created in the [`main.tf`](./main.tf) file. Once created, you will see a message like:

```
$ terraform apply
@@ -50,7 +53,38 @@ var.region
Apply complete! Resources: 20 added, 0 changed, 0 destroyed.
```
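
If you would rather not be prompted for the region on every run, you can pin it in a `terraform.tfvars` file (already excluded from version control by the `.gitignore` in this commit). A minimal sketch, assuming the variable is simply named `region`:

```
region = "us-east-1"
```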

That's it.
That's it! The infrastructure is ready and you can install OpenShift.

## Installing OpenShift

Make sure your local SSH identity is added to the authentication agent, so it can be forwarded to the bastion:

```
$ ssh-add ~/.ssh/id_rsa
```

Then just run the install script on the bastion:

```
$ cat install-from-bastion.sh | ssh -A ec2-user@$(terraform output bastion-public_dns)
```

The `-A` flag forwards your SSH agent, so the bastion can SSH into the master and nodes without any private keys being copied to it. The installation takes about 20 minutes.

## Additional Configuration

Access the master or nodes to update configuration and add features as needed:

```
$ ssh -A ec2-user@$(terraform output bastion-public_dns)
$ ssh -A master.openshift.local
$ sudo su
$ oc get nodes
NAME STATUS AGE
master.openshift.local Ready 1h
node1.openshift.local Ready 1h
node2.openshift.local Ready 1h
```

## Destroying the Cluster

@@ -70,8 +104,10 @@ You'll be paying for:

- https://www.udemy.com/openshift-enterprise-installation-and-configuration - The basic structure of the network is based on this course.
- https://blog.openshift.com/openshift-container-platform-reference-architecture-implementation-guides/ - Detailed guide on highly available solutions, including a production-grade AWS setup.
- https://access.redhat.com/sites/default/files/attachments/ocp-on-gce-3.pdf - Some useful info on using the bastion for installation.

## TODO

- [ ] Consider whether elastic IPs need to be scripted for the instances and DNS.
- [ ] Test whether the previously registered domain name is actually forwarding to the public DNS.
- [ ] Consider documenting public DNS setup.
- [ ] Consider moving the nodes into a private subnet.
Binary file added docs/network-diagram.png
56 changes: 56 additions & 0 deletions install-from-bastion.sh
@@ -0,0 +1,56 @@
# Echo each command as it runs, to make the installation easier to debug.
set -x

# Elevate privileges, retaining the environment.
sudo -E su

# Install dev tools and Ansible 2.2
yum install -y "@Development Tools" python2-pip openssl-devel python-devel gcc libffi-devel
pip install -Iv ansible==2.2.0.0

# Clone the openshift-ansible repo, which contains the installer.
git clone https://github.com/openshift/openshift-ansible
cd openshift-ansible

# Create our Ansible inventory:
mkdir -p /etc/ansible
cat > /etc/ansible/hosts <<- EOF
# Create an OSEv3 group that contains the masters and nodes groups
[OSEv3:children]
masters
nodes
# Set variables common for all OSEv3 hosts
[OSEv3:vars]
# SSH user, this user should allow ssh based auth without requiring a password
ansible_ssh_user=ec2-user
# If ansible_ssh_user is not root, ansible_become must be set to true
ansible_become=true
deployment_type=origin
# uncomment the following to enable htpasswd authentication; defaults to DenyAllPasswordIdentityProvider
# openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]
# Create the masters host group. Be explicit with the openshift_hostname,
# otherwise it will resolve to something like ip-10-0-1-98.ec2.internal and use
# that as the node name.
[masters]
master.openshift.local openshift_hostname=master.openshift.local
# host group for etcd
[etcd]
master.openshift.local
# host group for nodes, includes region info
[nodes]
master.openshift.local openshift_node_labels="{'region': 'infra', 'zone': 'default'}" openshift_schedulable=true
node1.openshift.local openshift_hostname=node1.openshift.local openshift_node_labels="{'region': 'primary', 'zone': 'east'}"
node2.openshift.local openshift_hostname=node2.openshift.local openshift_node_labels="{'region': 'primary', 'zone': 'west'}"
EOF
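
# Optional sanity check (left commented out): confirm Ansible can reach every
# host in the inventory over SSH before kicking off the long installation run.
# ANSIBLE_HOST_KEY_CHECKING=False ansible all -m ping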

# Run the playbook.
ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook playbooks/byo/config.yml

# To tear the installation back down later, run the uninstall playbook:
# ansible-playbook playbooks/adhoc/uninstall.yml
54 changes: 48 additions & 6 deletions main.tf
@@ -17,12 +17,54 @@ module "openshift" {
}

// Output some useful variables for quick SSH access etc.
output "master-dns" {
value = "${module.openshift.master-dns}"
output "master-public_dns" {
value = "${module.openshift.master-public_dns}"
}
output "node1-dns" {
value = "${module.openshift.node1-dns}"
output "master-public_ip" {
value = "${module.openshift.master-public_ip}"
}
output "node2-dns" {
value = "${module.openshift.node2-dns}"
output "master-private_dns" {
value = "${module.openshift.master-private_dns}"
}
output "master-private_ip" {
value = "${module.openshift.master-private_ip}"
}

output "node1-public_dns" {
value = "${module.openshift.node1-public_dns}"
}
output "node1-public_ip" {
value = "${module.openshift.node1-public_ip}"
}
output "node1-private_dns" {
value = "${module.openshift.node1-private_dns}"
}
output "node1-private_ip" {
value = "${module.openshift.node1-private_ip}"
}

output "node2-public_dns" {
value = "${module.openshift.node2-public_dns}"
}
output "node2-public_ip" {
value = "${module.openshift.node2-public_ip}"
}
output "node2-private_dns" {
value = "${module.openshift.node2-private_dns}"
}
output "node2-private_ip" {
value = "${module.openshift.node2-private_ip}"
}

output "bastion-public_dns" {
value = "${module.openshift.bastion-public_dns}"
}
output "bastion-public_ip" {
value = "${module.openshift.bastion-public_ip}"
}
output "bastion-private_dns" {
value = "${module.openshift.bastion-private_dns}"
}
output "bastion-private_ip" {
value = "${module.openshift.bastion-private_ip}"
}
61 changes: 49 additions & 12 deletions modules/openshift/02-security-groups.tf
@@ -1,9 +1,5 @@
// This is not the best way to handle security groups for an OpenShift cluster,
// as the various different needs are bundled into one security group. However
// this suffices for a simple demo.
// IMPORTANT: This is *not* production ready. SSH access is allowed to all
// instances from anywhere.

// This security group allows intra-node communication on all ports with all
// protocols.
resource "aws_security_group" "openshift-vpc" {
name = "openshift-vpc"
description = "Default security group that allows all instances in the VPC to talk to each other over any port and protocol."
@@ -29,11 +25,11 @@ resource "aws_security_group" "openshift-vpc" {
}
}

-// This security group allows public access to the instances for HTTP, HTTPS
-// common HTTP/S proxy ports and SSH.
-resource "aws_security_group" "openshift-public-access" {
-name = "openshift-public-access"
-description = "Security group that allows public access to instances, HTTP, HTTPS, SSH and more."
// This security group allows public ingress to the instances for HTTP, HTTPS
// and common HTTP/S proxy ports.
resource "aws_security_group" "openshift-public-ingress" {
name = "openshift-public-ingress"
description = "Security group that allows public ingress to instances, HTTP, HTTPS and more."
vpc_id = "${aws_vpc.openshift.id}"

// HTTP
@@ -68,6 +64,47 @@
cidr_blocks = ["0.0.0.0/0"]
}

tags {
Name = "OpenShift Public Access"
Project = "openshift"
}
}

// This security group allows public egress from the instances for HTTP and
// HTTPS, which is needed for yum updates, git access etc etc.
resource "aws_security_group" "openshift-public-egress" {
name = "openshift-public-egress"
description = "Security group that allows egress to the internet for instances over HTTP and HTTPS."
vpc_id = "${aws_vpc.openshift.id}"

// HTTP
egress {
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}

// HTTPS
egress {
from_port = 443
to_port = 443
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}

tags {
Name = "OpenShift Public Access"
Project = "openshift"
}
}

// Security group which allows SSH access to a host. Used for the bastion.
resource "aws_security_group" "openshift-ssh" {
name = "openshift-ssh"
description = "Security group that allows public ingress over SSH."
vpc_id = "${aws_vpc.openshift.id}"

// SSH
ingress {
from_port = 22
@@ -77,7 +114,7 @@
}

tags {
Name = "OpenShift Public Access"
Name = "OpenShift SSH Access"
Project = "openshift"
}
}
64 changes: 64 additions & 0 deletions modules/openshift/03-roles.tf
@@ -0,0 +1,64 @@
// Create a role which OpenShift instances will assume.
// This role has a policy saying it can be assumed by ec2
// instances.
resource "aws_iam_role" "openshift-instance-role" {
name = "openshift-instance-role"

assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "ec2.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
EOF
}

// This policy allows an instance to forward logs to CloudWatch, and
// create the Log Stream or Log Group if it doesn't exist.
resource "aws_iam_policy" "openshift-policy-forward-logs" {
name = "openshift-instance-forward-logs"
path = "/"
description = "Allows an instance to forward logs to CloudWatch"

policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:PutLogEvents",
"logs:DescribeLogStreams"
],
"Resource": [
"arn:aws:logs:*:*:*"
]
}
]
}
EOF
}


// Attach the policies to the role.
resource "aws_iam_policy_attachment" "openshift-attachment-forward-logs" {
name = "openshift-attachment-forward-logs"
roles = ["${aws_iam_role.openshift-instance-role.name}"]
policy_arn = "${aws_iam_policy.openshift-policy-forward-logs.arn}"
}

// Create a instance profile for the role.
resource "aws_iam_instance_profile" "openshift-instance-profile" {
name = "openshift-instance-profile"
roles = ["${aws_iam_role.openshift-instance-role.name}"]
}
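
The instance definitions themselves live in other files of the module that are not part of this diff. As a rough sketch (resource name assumed), the profile would be attached to an instance like this:

```
resource "aws_instance" "master" {
  // ...AMI, instance type, subnet and so on elided...

  // Attaching the profile lets the instance assume the role above and
  // forward its logs to CloudWatch.
  iam_instance_profile = "${aws_iam_instance_profile.openshift-instance-profile.id}"
}
```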