Migrate provider and update github actions #10

Merged 1 commit on Nov 22, 2023
25 changes: 21 additions & 4 deletions .github/workflows/terraform.yml
@@ -5,13 +5,20 @@ on:
branches:
- main
pull_request:
branches:
- main

jobs:
terraform:
name: 'Terraform'
runs-on: ubuntu-latest
environment: production

permissions:
# Give the default GITHUB_TOKEN write permission to commit and push the
# added or changed files to the repository.
contents: write

# Use the Bash shell regardless of whether the GitHub Actions runner is ubuntu-latest, macos-latest, or windows-latest
defaults:
run:
@@ -20,7 +27,7 @@ jobs:
steps:
# Checkout the repository to the GitHub Actions runner
- name: Checkout
uses: actions/checkout@v2
uses: actions/checkout@v3

# Copy the terraform.tfvars.example file for variables
- name: Create terraform.tfvars
@@ -30,10 +37,20 @@
- name: Create random SSH keys
run: mkdir ~/.ssh && touch ~/.ssh/id_rsa && touch ~/.ssh/id_rsa.pub

# Initialize a new or existing Terraform working directory by creating initial files, loading any remote state, downloading modules, etc.
# Initialize Terraform working directory by creating initial files, loading any remote state, downloading modules, etc.
- name: Terraform Init
run: terraform init

# Generates an execution plan for Terraform
# Validate Terraform files
- name: Terraform Validate
run: terraform validate

# Format Terraform files
- name: Terraform Format
run: terraform fmt --recursive

# Commit files
- name: Commit and Push
uses: stefanzweifel/git-auto-commit-action@v5.0.0
with:
commit_message: 'Formatted terraform files'
9 changes: 8 additions & 1 deletion README.md
@@ -78,7 +78,14 @@ k create -f ./nginx-example/ingress.yaml
curl -k https://192.168.0.101
```

## Exposing your cluster to the internet with a free subdomain! (Optional)
## Expose your cluster to the internet (Optional)

It is possible to expose your cluster to the internet through a small VPS even if both the VPS IP and your home public IP are dynamic. Set up dynamic DNS for both your internal network and the VPS using a service like duckdns, with a docker container that regularly monitors the IP addresses on both ends. A wireguard tunnel can then connect the two nodes. This way you can hide your public IP while exposing services to the internet.

Project Link: [wireguard-k8s-lb](https://github.com/Naman1997/wireguard-k8s-lb) (This is one possible implementation)

### How to do this manually?

You'll need an account with duckdns - they provide a free subdomain that you can use to host web services from your home internet connection. You'll also need a VPS in the cloud that can receive your traffic on a public IP address so that you don't expose your own IP address. Oracle provides a [free tier](https://www.oracle.com/in/cloud/free/) account with 4 vCPUs and 24GB of memory. I'll be using this to create a VM. To expose the traffic properly, follow this [guide](https://github.com/Naman1997/simple-fcos-cluster/blob/main/docs/Wireguard_Setup.md).

16 changes: 8 additions & 8 deletions main.tf
@@ -1,17 +1,17 @@
terraform {
required_providers {
proxmox = {
source = "telmate/proxmox"
version = "2.9.14"
source = "bpg/proxmox"
version = "0.38.1"
}
}
}

provider "proxmox" {
pm_api_url = var.PROXMOX_API_ENDPOINT
pm_user = "${var.PROXMOX_USERNAME}@pam"
pm_password = var.PROXMOX_PASSWORD
pm_tls_insecure = true
endpoint = var.PROXMOX_API_ENDPOINT
username = "${var.PROXMOX_USERNAME}@pam"
password = var.PROXMOX_PASSWORD
insecure = true
}

data "external" "versions" {
@@ -176,10 +176,10 @@ resource "local_file" "haproxy_config" {
content = templatefile("${path.root}/templates/haproxy.tmpl",
{
node_map_masters = zipmap(
tolist(module.master_domain.*.address), tolist(module.master_domain.*.name)
module.master_domain.*.address, module.master_domain.*.name
),
node_map_workers = zipmap(
tolist(module.worker_domain.*.address), tolist(module.worker_domain.*.name)
module.worker_domain.*.address, module.worker_domain.*.name
)
}
)
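The `haproxy_config` change above drops the redundant `tolist()` wrappers: a module splat expression such as `module.master_domain.*.address` already yields a list, which `zipmap` accepts directly. A minimal sketch of what the expression produces, using hypothetical addresses and names:

```hcl
# zipmap pairs each address with the node name at the same index.
locals {
  addresses = ["10.0.0.11", "10.0.0.12"] # hypothetical
  names     = ["master-0", "master-1"]   # hypothetical

  # node_map = { "10.0.0.11" = "master-0", "10.0.0.12" = "master-1" }
  node_map = zipmap(local.addresses, local.names)
}
```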
61 changes: 41 additions & 20 deletions modules/domain/main.tf
@@ -1,26 +1,43 @@
terraform {
required_providers {
proxmox = {
source = "telmate/proxmox"
version = "2.9.14"
source = "bpg/proxmox"
version = "0.38.1"
}
}
}

resource "proxmox_vm_qemu" "node" {
name = var.name
memory = var.memory
cores = var.vcpus
sockets = var.sockets
onboot = var.autostart
target_node = var.target_node
agent = 1
clone = "coreos-golden"
full_clone = true
boot = "order=virtio0;net0"
args = "-fw_cfg name=opt/com.coreos/config,file=/root/ignition/ignition_${var.name}.ign"
resource "proxmox_virtual_environment_vm" "node" {
name = var.name
on_boot = var.autostart
node_name = var.target_node
scsi_hardware = "virtio-scsi-pci"
kvm_arguments = "-fw_cfg name=opt/com.coreos/config,file=/root/ignition/ignition_${var.name}.ign"
timeout_shutdown_vm = 300
reboot = true

network {
memory {
dedicated = var.memory
floating = var.memory
}

cpu {
cores = var.vcpus
type = "host"
sockets = var.sockets
}

agent {
enabled = true
timeout = "10s"
}

clone {
retries = 3
vm_id = 7000
}

network_device {
model = "e1000"
bridge = var.default_bridge
}
@@ -41,7 +58,7 @@ resource "proxmox_vm_qemu" "node" {
done
EOT
environment = {
ADDRESS = self.ssh_host
ADDRESS = element([for addresses in self.ipv4_addresses : addresses[0] if addresses[0] != "127.0.0.1"], 0)
}
when = destroy
}
@@ -63,19 +80,23 @@ resource "proxmox_vm_qemu" "node" {
done
EOT
environment = {
ADDRESS = self.ssh_host
ADDRESS = element([for addresses in self.ipv4_addresses : addresses[0] if addresses[0] != "127.0.0.1"], 0)
}
when = create
}
}

locals {
non_local_ipv4_address = element([for addresses in proxmox_virtual_environment_vm.node.ipv4_addresses : addresses[0] if addresses[0] != "127.0.0.1"], 0)
}

resource "null_resource" "wait_for_ssh" {
depends_on = [
proxmox_vm_qemu.node
proxmox_virtual_environment_vm.node
]
provisioner "remote-exec" {
connection {
host = proxmox_vm_qemu.node.ssh_host
host = local.non_local_ipv4_address
user = "core"
private_key = file("~/.ssh/id_rsa")
timeout = "5m"
@@ -86,4 +107,4 @@ resource "null_resource" "wait_for_ssh" {
"echo Connected to `hostname`"
]
}
}
}
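The `element([for ...], 0)` expression that replaces `self.ssh_host` above picks the first non-loopback guest address. With the bpg provider, the VM's `ipv4_addresses` attribute is a list of lists (one list per network interface, as reported by the QEMU guest agent), so the comprehension takes each interface's first address, filters out `127.0.0.1`, and keeps the first match. A sketch with hypothetical data:

```hcl
locals {
  # Hypothetical shape of proxmox_virtual_environment_vm.node.ipv4_addresses:
  # one sublist per interface, loopback first.
  ipv4_addresses = [["127.0.0.1"], ["192.168.0.50"]]

  # First address per interface, loopback dropped, first survivor kept.
  # Here this evaluates to "192.168.0.50".
  non_local_ipv4_address = element(
    [for addresses in local.ipv4_addresses : addresses[0] if addresses[0] != "127.0.0.1"],
    0
  )
}
```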
6 changes: 3 additions & 3 deletions modules/domain/outputs.tf
@@ -1,9 +1,9 @@
output "address" {
value = proxmox_vm_qemu.node.ssh_host
description = "IP Address of the node"
value = local.non_local_ipv4_address
description = "Non-local IP Address of the node"
}

output "name" {
value = proxmox_vm_qemu.node.name
value = proxmox_virtual_environment_vm.node.name
description = "Name of the node"
}