This repository has been archived by the owner on Apr 13, 2019. It is now read-only.

Merge pull request #165 from reactiveops/release/2.7.1
fixing issue where node groups with hooks are not parsed properly
ejether authored Jan 7, 2019
2 parents 226c206 + 7775643 commit 2d30dff
Showing 5 changed files with 42 additions and 29 deletions.
7 changes: 3 additions & 4 deletions docs/getting-started.md
@@ -43,7 +43,7 @@ Pentagon is “batteries included”- not only does one get a network with a clu
* `. yaml_source inventory/default/config/private/secrets.yml`
* Sources environment variables required for the following steps. This will be required each time you work with the infrastructure repository or if you move the repository to another location.
* `bash inventory/default/config/local/local-config-init`
* If using AWS, create an S3 bucket named `<project-name>-infrastructure` in your AWS account. Terraform will store its state file here. Make sure the AWS IAM user has write access to it.
* `aws s3 mb s3://<project-name>-infrastructure`
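The `. yaml_source <file>` step above reads a flat YAML file and exports its keys as environment variables for the later steps. A rough Python sketch of that behavior (the real helper is a shell function; the parsing and variable names here are illustrative assumptions, not Pentagon's implementation):

```python
import os

def yaml_source(text):
    # Hypothetical re-implementation for illustration: export each
    # top-level `KEY: value` pair as an environment variable.
    for line in text.splitlines():
        line = line.split('#', 1)[0].strip()  # drop comments
        if ':' not in line:
            continue
        key, _, value = line.partition(':')
        os.environ[key.strip()] = value.strip().strip('"\'')

yaml_source("AWS_DEFAULT_REGION: us-east-1\nKOPS_STATE_STORE: s3://example-infrastructure")
print(os.environ['AWS_DEFAULT_REGION'])
```

Because the exports land in the current process environment, the real shell helper must be sourced (`. yaml_source ...`) rather than run in a subshell.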

## AWS
@@ -54,7 +54,7 @@ This creates the VPC and private, public, and admin subnets in that VPC for non
* Edit `aws_vpc.auto.tfvars` and verify the generated `aws_azs` actually exist in `aws_region`
* `terraform init`
* `terraform plan`
* `terraform apply`
* In `inventory/default/clusters/*/vars.yml`, set `VPC_ID` using the newly created VPC ID. You can find that ID in Terraform output or using the AWS web console.

### Configure DNS and Route53
@@ -80,8 +80,7 @@ Pentagon uses Kops to create clusters in AWS. The default layout creates configu

* Make sure your KOPS variables are set correctly with `. yaml_source inventory/default/config/local/vars.yml && . yaml_source inventory/default/config/private/secrets.yml`
* Move into the path for the cluster you want to work on with `cd inventory/default/clusters/<production|working>`
* If you are using the `aws_vpc` Terraform provided, ensure you have set `nat_gateways` in the `vars.yml` for each cluster and that the order of the `nat_gateway` IDs matches the order of the subnets listed. This will ensure that the Kops cluster will have a properly configured network with the private subnets associated with the existing NAT gateways.
* You can do this using the Makefile `make vpc_id` and `make nat_gateways`.
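A hypothetical sanity check for the ordering requirement above (not part of Pentagon; the helper name and IDs are illustrative):

```python
def pair_subnets_with_nat_gateways(subnets, nat_gateways):
    # Illustrative helper: the bullet above requires one NAT gateway ID
    # per private subnet, listed in matching order in vars.yml.
    if len(subnets) != len(nat_gateways):
        raise ValueError("expected one nat_gateway ID per subnet, in order")
    return list(zip(subnets, nat_gateways))

pairs = pair_subnets_with_nat_gateways(
    ["us-east-1a", "us-east-1b"],
    ["nat-0aaa1111", "nat-0bbb2222"],
)
print(pairs)  # [('us-east-1a', 'nat-0aaa1111'), ('us-east-1b', 'nat-0bbb2222')]
```

If the positional pairing is wrong, Kops will associate private subnets with the wrong NAT gateways, which is exactly the misconfiguration this bullet warns about.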

### Create Kubernetes Cluster
* Use the [Kops component](components.md#kopscluster) to create your cluster.
Expand Down
18 changes: 8 additions & 10 deletions docs/overview.md
@@ -11,19 +11,19 @@ After running `pentagon start-project` you will have a directory with a layout s
```
See also [Extended Layout](#extended-layout-description)

Generally speaking, the layout of the infrastructure repository is hierarchical. That is to say, higher-level directories contain scripts, resources, and variables that are intended to be used earlier in the creation of your infrastructure.

## Core Directories

### inventory/
The inventory directory is used to store an arbitrary segment of your infrastructure. It can be a separate AWS account, AWS VPC, GCP Project, or GCP Network. It can be as fine-grained as you like, but the config directory in each "inventory item" is scoped to, at most, one AWS Account+VPC or one GCP Project+Network. By default, the `inventory` directory includes one `default` directory with configuration for one VPC and two Kops clusters. You can pass `pentagon start-project` the `--no-configure` flag to build your own.

### inventory/(default)/config/
The config directory is separated into `local` and `private`. Files, scripts, and templates in `config/local` are checked into source control and should not contain any workstation-specific values.

`config/local/env-vars.sh` uses a specific list of variable names, locates the values in `config/local/vars.yml` and `config/private/secrets.yml`, and exports them as environment variables. These environment variables are used throughout the infrastructure repository, so make sure you `source config/local/env-vars.sh`.

Some configurations require absolute paths which, if checked into source control, can make working with teams challenging. The `config/local/local-config-init` script makes this easier by providing a fast way to generate workstation-specific configurations from the `ansible.cfg-default` and `ssh_config-default` template files. The generated workstation-specific configuration files are written to `config/private`.

`config/private/ssh_config` and `config/private/ansible.cfg` greatly simplify interaction with your cloud VMs. They are configured to automatically use the correct key and user name based on the IP address of the host. You can either use the command `ssh -F "${INFRASTRUCTURE_REPO}/config/private/ssh_config"` or alias SSH with `alias ssh='ssh -F "${INFRASTRUCTURE_REPO}/config/private/ssh_config"'`.

@@ -69,19 +69,18 @@ This is not checked into Git.
## Extended Layout Description

```
├── Makefile
├── README.md
├── ansible-requirements.yml
├── config.yml
├── inventory
│   └── default * Directory for default cloud
│   ├── clusters * Directory for Clusters
│   │   ├── production * Production Cluster Directory
│   │   │   └── vars.yml * Variables specific to production. Used by `pentagon add kops.cluster`
│   │   └── working * Working Cluster Directory
│   │   └── vars.yml * Variables specific to working. Used by `pentagon add kops.cluster`
│   ├── config * Configuration Directory
│   │   ├── local * Local, non-secret configuration
│   │   │   ├── ansible.cfg-default * templating code to create private configuration
│   │   │   ├── local-config-init
│   │   │   ├── ssh_config-default
│   │   ├── env.yml
│   │   └── vpn.yml
│   └── terraform * Terraform for entire inventory item
│   ├── Makefile
│   ├── aws_vpc.auto.tfvars
│   ├── aws_vpc.tf
│   ├── aws_vpc_variables.tf
│   ├── backend.tf
│   └── provider.tf
├── plugins * Ansible plugins
└── requirements.txt
```
2 changes: 1 addition & 1 deletion pentagon/component/core/files/README.md
@@ -52,7 +52,7 @@ Per user configuration needs to be generated. These files cannot directly use th

### VPC

The VPC Terraform code is in `default/vpc` and has a `Makefile` to help with the commands needed.
The VPC Terraform code is in `default/vpc`.

### Kubernetes

Expand Down
2 changes: 1 addition & 1 deletion pentagon/meta.py
@@ -1,3 +1,3 @@

__version__ = "2.7.0"
__version__ = "2.7.1"
__author__ = 'ReactiveOps, Inc.'
42 changes: 29 additions & 13 deletions pentagon/migration/migrations/migration_2_6_2.py
@@ -24,13 +24,13 @@

readme = """
# Migration 2.6.2 -> 2.7.0
# Migration 2.6.2 -> 2.7.1
## This migration:
- removes older artifacts like the `post-kops.sh` if they exist
- renames `inventory/<inventory>/clusters/<cluster>/cluster` -> `inventory/<inventory>/clusters/<cluster>/cluster-config` to match the current standard
- splits any Kops instance group with more than one subnet into multiple instance groups with a single subnet.
* it attempts to guess the correct min/max size of the instance groups by `current min/max / number of subnets` as an integer.
* it leaves the existing instance group in place to ease the migration
* there are instructions in each `inventory/<inventory>/clusters/<cluster>/cluster-config/nodes.yml`
- adds audit logging to all kops clusters if not already there
@@ -42,7 +42,7 @@
- the manifold update to the kops clusters will be a multi-step process and may incur some risk.
## Follow up tasks:
- the update to the aws-iam-authenticator config no longer requires any cloud storage. Delete the bucket if it exists.
- this version update changes the standards for the etcd version. This is a breaking change, so it is not handled automatically in this migration.
"""
@@ -255,7 +255,7 @@ def literal_unicode_representer(dumper, data):

class Migration(migration.Migration):
    _starting_version = '2.6.2'
    _ending_version = '2.7.0'
    _ending_version = '2.7.1'

    _readme_string = readme

@@ -301,8 +301,14 @@ def run(self):
for document in yaml.load_all(yaml_file.read()):
    if document.get('kind') == 'InstanceGroup':
        if document['spec']['role'] == 'Node':
            for hook in document['spec'].get('hooks', []):
                if hook.get('manifest') is not None:
                    hook['manifest'] = literal_unicode(hook['manifest'])
            nodes.append(document)
        elif document['spec']['role'] == 'Master':
            for hook in document['spec'].get('hooks', []):
                if hook.get('manifest') is not None:
                    hook['manifest'] = literal_unicode(hook['manifest'])
            masters.append(document)
        else:
            continue
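The hook-manifest wrapping added above is the heart of this fix: manifests are wrapped in the `literal_unicode` helper defined earlier in this file so that PyYAML emits them as block literals (`|`) rather than mangled quoted strings. A minimal standalone sketch of that mechanism (assuming PyYAML; the class and representer mirror the ones in this migration):

```python
import yaml

class literal_unicode(str):
    """Marker type: strings to dump in YAML block-literal style."""

def literal_unicode_representer(dumper, data):
    # Emit the string as a block literal scalar (style '|').
    return dumper.represent_scalar('tag:yaml.org,2002:str', data, style='|')

yaml.add_representer(literal_unicode, literal_unicode_representer)

hook = {'name': 'example.service',
        'manifest': literal_unicode('[Unit]\nDescription=example hook\n')}
print(yaml.dump(hook, default_flow_style=False))
```

Without the wrapper, multi-line systemd manifests round-trip as escaped single-line strings, which is the parsing problem this commit addresses for node groups with hooks.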
@@ -312,14 +318,18 @@ def run(self):
    nodes_file.write(yaml.dump_all(nodes, default_flow_style=False))

with open("{}/cluster-config/{}".format(item_path, 'masters.yml'), 'w') as masters_file:
    masters_file.write(yaml.dump_all(nodes, default_flow_style=False))
    masters_file.write(yaml.dump_all(masters, default_flow_style=False))

# becauce the nodes.yml may hav multiple documents, we need to abuse the YamlEditor class a little bit
# Because the nodes.yml may have multiple documents, we need to abuse the YamlEditor class a little bit
with open(old_node_groups) as oig:
    new_node_groups = []
    for node_group in yaml.load_all(oig.read()):

        # Keep exisiting node group in the file to eash manual steps
        # Keep exisiting node group in the file to ease manual steps
        for hook in node_group['spec'].get('hooks', []):
            if hook.get('manifest') is not None:
                hook['manifest'] = literal_unicode(hook['manifest'])

        new_node_groups.append(node_group)

        sn_count = len(node_group['spec']['subnets'])
@@ -377,19 +387,27 @@

hooks = cluster_spec.get("hooks")
if hooks:
    logging.debug(hooks)
    for hook in hooks:
        if hook['name'] == 'kops-hook-authenticator-config.service':
            hooks.pop(hooks.index(hook))
            kops_hook_index = hooks.index(hook)
            logging.debug("Found kops auth hook at index %d", kops_hook_index)
        else:
            logging.debug("Found other existing hook %s", hook['name'])
            hook['manifest'] = literal_unicode(hook['manifest'])

    logging.debug("Removing existing kops-hook-authenticator-config.service at %d", kops_hook_index)
    hooks.pop(kops_hook_index)
else:
    logging.debug("No hooks found in cluster spec.")
    cluster_spec['hooks'] = []

# Using the above magic to keep formatting on the literal strings in the yaml

for policy_type in cluster_spec.get('additionalPolicies', {}):
    cluster_spec['additionalPolicies'][policy_type] = literal_unicode(cluster_spec['additionalPolicies'][policy_type])

hook = yaml.load(aws_iam_kops_hook)
hook['manifest'] = literal_unicode(hook['manifest'])
cluster_spec['hooks'].append(hook)

file_assets = cluster_spec.get('fileAssets')
if not file_assets:
@@ -407,8 +425,6 @@
if fa.get('content'):
    fa['content'] = literal_unicode(fa['content'])

cluster_spec['hooks'].append(hook)

if not cluster_spec.get('kubeAPIServer'):
    cluster_spec['kubeAPIServer'] = {}

