- Bring your own AWS account, supply creds via your method of choice, and set your region in "region" in variables.tf (credentials sketch after this list).
- Initialize ECS so it creates the proper IAM ECS task execution role (arn:aws:iam::<acnt_id>:role/ecsTaskExecutionRole) and add the ARN to "ecs_role" under ecs (lookup example after this list).
- Build and upload the Docker image to ECR, then put the image URI in variables.tf "ecs_image" under ecs (build/push example after this list).
- Bring your own public R53 zone; put the FQDN and zone ID in variables.tf "pub_dns_name" and "pub_dns_id" under alb (lookup example after this list).
- Bring your own ACM cert for fleet.<yourdnszone.fqdn> in your public R53 zone; put the cert ARN in variables.tf "alb_cert" under alb (the same lookup example covers this).
- A self-signed cert chain is provided in PEM format for the internal ALB for client connectivity: app.fleet.priv signed by ca.fleet.priv. The certs live in the docker and cert dirs; a better method is in the works (regeneration sketch after this list).
- Once built, SSH to the instance described in the output info (connection example after this list).
- Create /etc/osquery/enroll with the enroll_secret found in the Fleet console and /etc/osquery/kolide.pem using the kolide.cert from the project (or download it from the Fleet app), then run systemctl restart osqueryd.
- Set up your own S3/DynamoDB backend (init example after this list). Not required, but strongly suggested: manually destroying this stack after losing your state is not fun.
- I'm a noob and this is only a proof of concept.
- ???????
- Get rich or die tryin.
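
Credentials sketch for step 1 — a minimal example using an AWS CLI profile; "fleet-poc" is a hypothetical profile name, and any standard auth method works just as well:

```sh
# "fleet-poc" is a made-up profile name; use whatever auth you prefer.
aws configure --profile fleet-poc
export AWS_PROFILE=fleet-poc

# Sanity check that the credentials resolve:
aws sts get-caller-identity
```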
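
Role lookup for step 2 — assuming ECS has already created the task execution role (e.g. via the ECS first-run wizard), this prints the ARN to drop into "ecs_role":

```sh
aws iam get-role --role-name ecsTaskExecutionRole \
  --query 'Role.Arn' --output text
```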
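
Build/push sketch for step 3 — the account ID (123456789012), region (us-east-1), and repo name (fleet) are all placeholders:

```sh
aws ecr create-repository --repository-name fleet
aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
docker build -t fleet .
docker tag fleet:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/fleet:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/fleet:latest
```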
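
Lookups for steps 4 and 5 — example.com and fleet.example.com are placeholders, and the ACM cert is assumed to already exist:

```sh
# Zone ID for "pub_dns_id":
aws route53 list-hosted-zones-by-name --dns-name example.com

# Cert ARN for "alb_cert":
aws acm list-certificates \
  --query "CertificateSummaryList[?DomainName=='fleet.example.com'].CertificateArn"
```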
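
Regeneration sketch for step 6, if you'd rather not use the committed certs — a standard openssl self-signed CA plus a cert signed by it; the filenames are arbitrary:

```sh
# Internal CA (ca.fleet.priv):
openssl req -x509 -newkey rsa:2048 -nodes -days 365 -sha256 \
  -keyout ca.key -out ca.pem -subj "/CN=ca.fleet.priv"

# Key and CSR for the internal ALB endpoint (app.fleet.priv):
openssl req -newkey rsa:2048 -nodes \
  -keyout app.key -out app.csr -subj "/CN=app.fleet.priv"

# Sign the CSR with the CA:
openssl x509 -req -in app.csr -CA ca.pem -CAkey ca.key \
  -CAcreateserial -out app.pem -days 365 -sha256
```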
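
Connection and enrollment sketch for steps 7 and 8 — "ec2-user" assumes an Amazon Linux AMI, and <instance_address> / <enroll_secret> are placeholders:

```sh
terraform output                  # shows the test instance details
ssh ec2-user@<instance_address>

# On the instance:
echo '<enroll_secret>' | sudo tee /etc/osquery/enroll
sudo cp kolide.pem /etc/osquery/kolide.pem   # or grab the cert from the Fleet app
sudo systemctl restart osqueryd
```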
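
Backend sketch for step 9 — assumes an empty backend block (terraform { backend "s3" {} }) in the config; bucket and table names are placeholders, and the DynamoDB lock table needs a "LockID" string hash key:

```sh
terraform init \
  -backend-config="bucket=my-fleet-tfstate" \
  -backend-config="key=fleet/terraform.tfstate" \
  -backend-config="region=us-east-1" \
  -backend-config="dynamodb_table=my-fleet-tflock" \
  -backend-config="encrypt=true"
```
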
- Multi-AZ capable via the az_count variable (apply example after this list); public/private subnets and routes with NAT/IGW support.
- External ALB with verifiable ACM cert and internal ALB with self-signed cert chain (this works somehow!!??).
- Aurora RDS cluster with multi-AZ support.
- Redis ElastiCache cluster with multi-AZ node groups with replicas.
- ECS Fargate cluster with multi-AZ support running Fleet containers.
- Instance to test connectivity.
- Steal whatever you want and take all the credit (doesn't align well with step #12 above).
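
Multi-AZ sketch — az_count comes from this project's variables.tf; 3 is just an example value:

```sh
terraform apply -var="az_count=3"
```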