
Minor updates to user docs - concepts #7345

Merged
merged 9 commits into from
Jul 1, 2024
minor tweaks to language and links for docs in the concepts section
tom-webber committed Jun 28, 2024
commit 8a31b22fc1f09f132ad2307c6903a1aab309a2a8
26 changes: 13 additions & 13 deletions source/concepts/environments/auto-nuke.html.md.erb
Original file line number Diff line number Diff line change
@@ -18,31 +18,31 @@ review_in: 6 months

## Feature description

This feature automatically nukes and optionally recreates development environments on weekly basis. This is useful for environments with the sandbox permission, which allow users provisioning resources directly through the AWS web console as opposite to using terraform. In such cases, the auto-nuke will make sure the resources created manually will be cleared on weekly basis. If requested, the resources defined in terraform will then be recreated.
This feature automatically destroys all resources in development environments on a weekly basis, and provides a utility to recreate resources in these environments. This is useful for environments with the sandbox permission, which allows users to provision resources directly through the AWS web console alongside infrastructure as code (IaC). In such cases, auto-nuke ensures that manually created resources are regularly removed. If requested, resources defined in terraform can then be recreated.

Every Sunday:

- At 10.00pm the awsnuke.yml workflow is triggered. This workflow nukes all the configured development environments using the AWS Nuke tool (https://github.com/rebuy-de/aws-nuke).
- At 12.00 noon the nuke-redeploy.yml workflow is triggered. If requested, this workflow redeploys the nuked environment using terraform apply.
- At 22:00 the [awsnuke.yml workflow](https://github.com/ministryofjustice/modernisation-platform-environments/blob/main/.github/workflows/awsnuke.yml) is triggered. This workflow nukes all the configured development environments using the [AWS Nuke tool](https://github.com/rebuy-de/aws-nuke).
- At 12:00 the [nuke-redeploy.yml workflow](https://github.com/ministryofjustice/modernisation-platform-environments/blob/main/.github/workflows/nuke-redeploy.yml) is triggered. If requested, this workflow redeploys IaC into the nuked environment using `terraform apply`.

A sketch of the algorithm is as follows:
An outline of the 'nuke' algorithm is as follows:

- For every account in a dynamically generated list of all sandbox accounts
- Assume the role MemberInfrastructureAccess under the account ID
- Nuke the resources under the account ID
- (Optionally) Perform terraform apply in order to recreate all resources from terraform
- For every account in a dynamically generated list of all sandbox accounts:
- Assume the [`MemberInfrastructureAccess` role](https://github.com/ministryofjustice/modernisation-platform/blob/ab3eb5a6a8e6253afc9db794362034ba4ae1cd94/terraform/environments/bootstrap/member-bootstrap/iam.tf#L266) under the account ID
- Nuke the resources under the account ID
- (Optionally) Run `terraform apply` in order to recreate all resources defined in terraform
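
The steps above can be sketched as a shell loop. This is a hedged illustration, not the actual workflow code: the account IDs and config filename are placeholders, the real account list is generated dynamically, and the credentials handling lives in the `awsnuke.yml` workflow.

```shell
# Minimal sketch of the nuke loop (placeholder account IDs, not real ones).
ACCOUNTS="111111111111 222222222222"
for ACCOUNT_ID in $ACCOUNTS; do
  ROLE_ARN="arn:aws:iam::${ACCOUNT_ID}:role/MemberInfrastructureAccess"
  # In the real workflow, temporary credentials from 'aws sts assume-role'
  # are exported here before aws-nuke is invoked against the account.
  echo "would assume ${ROLE_ARN} and run: aws-nuke -c nuke-config.yml --no-dry-run"
done
```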

## Configuration

Auto-nuke consumes the following dynamically generated GitHub secrets stored in the Modernisation Platform Environments repository:

- `MODERNISATION_PLATFORM_AUTONUKE_BLOCKLIST`: Account aliases to always exclude from auto-nuke. This takes precedence over all other configuration options. Due to the destructive nature of the tool, AWS-Nuke (https://github.com/rebuy-de/aws-nuke) requires at least one Account ID in the configured blocklist. Our blocklist contains all production. preproduction and core accounts.
- `MODERNISATION_PLATFORM_AUTONUKE_BLOCKLIST`: Account aliases to always exclude from auto-nuke. This takes precedence over all other configuration options. Due to the destructive nature of the tool, [AWS-Nuke](https://github.com/rebuy-de/aws-nuke) requires at least one account ID in the configured blocklist. Our blocklist contains all production, preproduction, and core accounts.

- `MODERNISATION_PLATFORM_AUTONUKE`: Account aliases of sandbox accounts to be auto-nuked on a weekly basis.

- `MODERNISATION_PLATFORM_AUTONUKE_REBUILD`: Accounts to be rebuilt after auto-nuke runs. This secret is consumed by the `nuke-redeploy.yml` workflow.

The `nuke-config-template.txt` is populated with account and blocklist information during the runtime of the `awsnuke.yml` workflow, to produce a valid aws-nuke configuration file.
The [`nuke-config-template.txt`](https://github.com/ministryofjustice/modernisation-platform-environments/blob/main/scripts/nuke-config-template.txt) is populated with account and blocklist information during the runtime of the `awsnuke.yml` workflow, to produce a valid aws-nuke configuration file.
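
For illustration, a populated configuration might look roughly like the following. The region, account IDs, and comments are invented for this sketch, and the exact schema is defined by aws-nuke itself, not by this guide:

```yaml
regions:
  - eu-west-2

account-blocklist:
  - "999999999999" # e.g. a production account that must never be nuked

accounts:
  "111111111111": # a sandbox development account to nuke
    filters: {}   # resources to keep could be filtered here
```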

### When a new sandbox development environment is onboarded

@@ -67,8 +67,8 @@ Eg:

Valid values are:

`include` = nukes but doesn’t rebuild (default option if nothing added)
`exclude` = doesn’t nuke or rebuild
`rebuild` = nukes and rebuilds
- `include` = nukes but doesn’t rebuild (default option if nothing added)
- `exclude` = doesn’t nuke or rebuild
- `rebuild` = nukes and rebuilds

Please contact us in the [#ask-modernisation-platform](https://mojdt.slack.com/archives/C01A7QK5VM1) channel for details.
10 changes: 5 additions & 5 deletions source/concepts/environments/instance-scheduling.html.md.erb
@@ -18,9 +18,9 @@ review_in: 6 months

## Feature description

This feature automatically stops non-production EC2 and RDS instances overnight, in order to save on AWS costs and reduce environmental impact. Stopped instances don't incur charges, but Elastic IP addresses or EBS volumes attached to those instances do.
This feature automatically stops non-production EC2 and RDS instances overnight and over each weekend, in order to save on AWS costs and reduce environmental impact. Stopped instances don't incur charges, but Elastic IP addresses or EBS volumes attached to those instances do.

The instances will be automatically stopped every weekday at 9pm night and started at 6am in the morning. By default, this includes every EC2 and RDS instance in every non-production environment (development, test, pre-production) without requiring any configuration from the end user. Users can customise the default behaviour by attaching the `instance-scheduling` tag to EC2 and RDS instances with one of the following values:
The instances will be automatically [stopped each weekday at 21:00](https://github.com/ministryofjustice/modernisation-platform/blob/19a7e48b366cfbb9d24c30f4620b12df886baa8e/terraform/environments/core-shared-services/instance-scheduler-lambda-function.tf#L35) and [started each weekday at 06:00](https://github.com/ministryofjustice/modernisation-platform/blob/19a7e48b366cfbb9d24c30f4620b12df886baa8e/terraform/environments/core-shared-services/instance-scheduler-lambda-function.tf#L61), which includes shutdown on Friday night and startup on Monday morning. By default, this includes every EC2 and RDS instance in every non-production environment (development, test, preproduction) without requiring any configuration from the end user. Users can customise the default behaviour by attaching the `instance-scheduling` tag to EC2 and RDS instances with one of the following values:

- `default` - Automatically stop the instance overnight and start it in the morning. Absence of the `instance-scheduling` tag will have the same effect.
- `skip-scheduling` - Skip auto scheduling for the instance
@@ -44,15 +44,15 @@ Ordering instances and automatically stopping them on public holidays is not sup

For those teams that require the shutdown and startup of EC2 and RDS resources in a specific order or at different times, the option exists to use GitHub workflows and cron schedules to stop and start services.

- These workflows can be run from the application source github via the use of oidc for authenticaiton to the Modernisation Platform - see https://user-guide.modernisation-platform.service.justice.gov.uk/user-guide/deploying-your-application.html#deploying-your-application. It is recommended to hold the AWS account number for the member account as a github secret, especially if the repo is public.
- These workflows can be run from the application's source GitHub repository [via the use of OIDC for authentication to the Modernisation Platform](https://user-guide.modernisation-platform.service.justice.gov.uk/user-guide/deploying-your-application.html#deploying-your-application). It is recommended to hold the AWS account number for the member account as a GitHub secret, especially if the repo is public.

- An example of how to use a github workflow to meet this requirement can be found here - https://github.com/ministryofjustice/modernisation-platform-configuration-management/blob/main/.github/workflows/flexible-instance-stop-start.yml. Note that the workflow uses a separate script to run the AWS CLI commands for shutdown & startup. These can be easily reused & customised to meet specific needs.
- An example of how to use a GitHub workflow to meet this requirement can be [found here](https://github.com/ministryofjustice/modernisation-platform-configuration-management/blob/main/.github/workflows/flexible-instance-stop-start.yml). Note that the workflow uses [a separate script](https://github.com/ministryofjustice/modernisation-platform-configuration-management/blob/main/scripts/flexistopstart.sh) to run the AWS CLI commands for shutdown and startup. These can be easily reused and customised to meet specific needs.

- EC2 or RDS resources that are stopped or started in this manner must have the `skip-scheduling` tag added as described above.

- Note that there are some restrictions that come with using GitHub schedules - most importantly, GitHub does not guarantee execution of the action at the specified time. Actions can be delayed at busy times or even dropped entirely, so it is recommended to avoid schedules running on the hour or half-hour.

Further information regarding github schedule events can be found here - https://docs.github.com/en/actions/using-workflows/events-that-trigger-workflows#schedule
Further information regarding GitHub schedule events can be [found here](https://docs.github.com/en/actions/using-workflows/events-that-trigger-workflows#schedule).
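
As a hedged sketch (the times, job name, and script path below are invented for illustration, not taken from the linked example), a scheduled workflow that avoids on-the-hour runs might look like:

```yaml
on:
  schedule:
    - cron: "37 20 * * 1-5" # 20:37 UTC, Mon-Fri; deliberately off the hour
jobs:
  stop-instances:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Stop tagged instances
        run: ./scripts/stop-instances.sh # hypothetical script wrapping AWS CLI calls
```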

## References

5 changes: 2 additions & 3 deletions source/concepts/networking/certificate-services.html.md.erb
@@ -18,14 +18,13 @@ review_in: 6 months

## Public Certificates

There are two main ways to use public certificates for DNS on the Modernisation Platform; ACM (Amazon Certificate Manager) public certificates, and Gandi.net certificates imported into ACM.
Please see [How to configure DNS for public services](../../user-guide/how-to-configure-dns.html) for more information.
There are two main ways to use public certificates for DNS on the Modernisation Platform: [ACM (AWS Certificate Manager)](https://docs.aws.amazon.com/acm/latest/userguide/acm-overview.html) public certificates, and Gandi.net certificates imported into ACM. Please see [How to configure DNS for public services](../../user-guide/how-to-configure-dns.html) for more information.

## Private Certificates

We provide a [Private root Certificate Authority (CA)](https://docs.aws.amazon.com/acm-pca/latest/userguide/PcaWelcome.html) in the [network services account and VPC](networking-approach.html#other-vpcs), along with subordinate production and non production CAs.

The subordinate CA's are then shared to the application environments via a [RAM](https://docs.aws.amazon.com/ram/latest/userguide/what-is.html) share (either production or non production depending on the environment).
The subordinate CAs are then shared to the application environments via a [Resource Access Manager (RAM)](https://docs.aws.amazon.com/ram/latest/userguide/what-is.html) share (either production or non-production depending on the environment).

Certificates can then be created using the private subordinate CA; the certificates remain local to the application environment.

4 changes: 2 additions & 2 deletions source/concepts/networking/dns.html.md.erb
@@ -16,9 +16,9 @@ review_in: 6 months

# <%= current_page.data.title %>

DNS is centralised in the networking services account.
DNS is centralised in the core networking services account.

We use AWS Route53 to provide and manage DNS records.
We use [AWS Route 53](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/Welcome.html) to provide and manage DNS records.

There are public and private [hosted zones](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/hosted-zones-working-with.html) for the Modernisation Platform.

@@ -20,7 +20,7 @@ review_in: 6 months

For most EC2 instances running modern Linux operating systems, [SSH](https://en.wikipedia.org/wiki/Secure_Shell_Protocol) access will be via [AWS Systems Manager Session Manager (SSM)](https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager.html).

This provides secure and auditable access to EC2s without the need to expose ports or use a bastion. This can also be used for port forwarding to access private web consoles, [RDS databases](https://aws.amazon.com/rds/) or Windows [RDP](https://en.wikipedia.org/wiki/Remote_Desktop_Protocol).
This provides secure and auditable access to EC2s without the need to expose ports or use a bastion. This can also be used for port forwarding to access private web consoles, [RDS databases](https://aws.amazon.com/rds/) or [Windows Remote Desktop (RDP)](https://en.wikipedia.org/wiki/Remote_Desktop_Protocol).

## Bastions

@@ -29,9 +29,9 @@ For instances running older versions of Linux where the [SSM Agent](https://docs
The bastion will be preconfigured with the relevant security and network connectivity required.
You can then securely connect to this bastion host via Systems Manager, and then on to your instance.

If you find the bastion is down (between 20:00 and 05:00) then you may need to restart it. The best way to do this is to amend the Auto Scaling Group called bastion_linux_daily to set the values to 1 where they are 0. This will build a bastion EC2 server.
If you find the bastion is down (between 20:00 and 05:00), you may need to restart it. The best way to do this is to amend the Auto Scaling Group called `bastion_linux_daily`, setting the values to `1` where they are `0`. This will build a bastion EC2 server.

There will only be 1 listed in most cases (bastion_linux_daily) so select that, click on edit in the top box and set all 3 values (desired capacity, minimum capacity and maximum capacity) to 1 and select Update. This will cause AWS to build a new instance and one will be available in around 5 minutes.
There will only be 1 listed in most cases (`bastion_linux_daily`), so select that, click Edit in the top box, set all 3 values (desired capacity, minimum capacity and maximum capacity) to `1`, and select Update. This will cause AWS to build a new instance, and one will be available in around 5 minutes.
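
The same change can be made from the AWS CLI rather than the console. This is a sketch under the assumption that you have AWS CLI v2 configured with credentials for the relevant member account; it only prints the command for review rather than executing it:

```shell
# Scale the bastion Auto Scaling Group back up via the AWS CLI (sketch).
ASG_NAME="bastion_linux_daily"
CMD="aws autoscaling update-auto-scaling-group --auto-scaling-group-name ${ASG_NAME} --min-size 1 --max-size 1 --desired-capacity 1"
# Print the command for review; run it for real with: eval "$CMD"
echo "$CMD"
```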

## How to connect
For information on how to connect to instances or bastions, see [Accessing EC2s](../../user-guide/accessing-ec2s.html).