
Bastion Connections #81

Open
verehfy opened this issue Apr 23, 2021 · 16 comments · May be fixed by #129

Comments

@verehfy

verehfy commented Apr 23, 2021

Hi there,

Thank you for making this awesome provider. I am interested in connecting to PostgreSQL through a bastion server; would there be any appetite for adding this as a feature? I could see the provider configuration looking something like this:

provider "postgresql" {
  host            = "postgres_server_ip"
  port            = 5432
  database        = "postgres"
  username        = "postgres_user"
  password        = "postgres_password"
  sslmode         = "require"
  connect_timeout = 15
  bastion {
    host        = "example.com"
    port        = 22
    private_key = "<superSecretKey>"
  }
}

Something similar to the Provisioner Connection Setting would be super helpful.

@cjcdoomed

I'd say we should support exactly the syntax of the Provisioner Connection Settings for SSH connections.

@volphy

volphy commented Jun 17, 2021

I would say it would be a killer feature for PostgreSQL running in AWS RDS.

As of now, if a given RDS instance is running in a private subnet, it's not reachable by this provider.

@a0s

a0s commented Jul 5, 2021

Same problem for me. I'm definitely not a big fan of exposing the Postgres port to the internet.

@laurenty

This would indeed be a great feature. I'm currently unable to use this provider because the RDS instance isn't publicly available.

@cyrilgdn
Owner

Hi @verehfy,

Thanks for opening this issue.

I need to think twice about this feature. I understand what it could bring, but I'm not sure it's the role of the provider to do it 🤔

Otherwise it means that every single provider that could potentially access private resources (i.e. most providers) would need to implement this (and there are probably similar issues open in lots of them).
For me it's the role of the Terraform caller to create the access path to the resources you need (VPN, SSH tunnel, running Terraform inside the VPC, etc.).
You could, for example, have a Terraform wrapper that creates the tunnel, calls the providers with the tunnel settings, then closes the tunnel when it's no longer needed.

But, as I said, I understand that it would be convenient if the provider allowed it. I can at least check how it's done in the provisioner or in other providers that already do it.

There's PR #115 open for that, but at first glance it's maybe a bit too rudimentary.
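
The wrapper approach described in this comment could be sketched as a minimal POSIX-sh script. The bastion host name, target address, local port, and control-socket path below are illustrative assumptions, not anything this provider ships:

```shell
#!/bin/sh
# Hypothetical wrapper: open an SSH tunnel through a bastion, run terraform
# against the forwarded local port, and always tear the tunnel down.

run_with_tunnel() {
  bastion="$1"      # e.g. bastion.example.com (assumption)
  remote="$2"       # e.g. db.internal:5432 (assumption)
  local_port="$3"   # e.g. 15432 (assumption)
  shift 3
  sock="/tmp/pg-tunnel.$$"

  # -f -N: go to background, run no remote command; -M -S: open a control
  # socket so we can close exactly this tunnel later.
  ssh -f -N -M -S "$sock" -L "127.0.0.1:${local_port}:${remote}" "$bastion"

  # Close the tunnel on exit, even if terraform fails.
  trap 'ssh -S "$sock" -O exit "$bastion"' EXIT

  # The provider block then points at host = "localhost", port = $local_port.
  terraform "$@"
}
```

Usage would look like `run_with_tunnel bastion.example.com db.internal:5432 15432 apply`.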

@a0s

a0s commented Aug 19, 2021

@cyrilgdn I tried to use a null_resource with local-exec to start an SSH tunnel to the bastion host, and it works... badly. Yes, when you create something, the tunnel starts fine. But when Terraform runs a destroy, the null_resource does nothing, local-exec is not called, you don't get your tunnel, and Terraform fails.

It would be nice to have global Terraform pre/post-hooks, but we don't have them. Moreover, even with such hooks, you can end up trying to connect to a bastion host that hasn't been created yet (it is supposed to be created during the main Terraform phase).

@laurenty

Also, keep Terraform Cloud in mind

@cyrilgdn
Owner

> Also, keep Terraform Cloud in mind

Indeed 👍 !

@jindrichskupa

I made PR #129. I am open to changing the configuration and adding SSH private key / password support.

@jindrichskupa jindrichskupa linked a pull request Sep 6, 2021 that will close this issue
@a0s

a0s commented Sep 7, 2021

@jindrichskupa Interesting! Is it possible to pass the private key by value (without creating a temporary file)?

@jindrichskupa

> @jindrichskupa Interesting! Is it possible to pass the private key by value (without creating a temporary file)?

Not now, but I can add it. I will update the config structure as described above (that structure looks better to me).

@a0s

a0s commented Sep 14, 2021

@jindrichskupa It would be very helpful. But... I just realised that we can't use it like this:

resource "tls_private_key" "bastion" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

resource "aws_key_pair" "bastion" {
  key_name   = local.bastion_name
  public_key = tls_private_key.bastion.public_key_openssh
}

resource "aws_instance" "bastion" {
  # ...
  key_name = aws_key_pair.bastion.key_name
  # ...
}

provider "postgresql" {
  depends_on = [aws_instance.bastion] # not valid: provider blocks don't support depends_on
  bastion {
    private_key_value = tls_private_key.bastion.private_key_pem
  }
}

@jindrichskupa

@a0s Why? I updated the PR with private key support as a string. The block is not bastion, though (I prefer jumphost 😃).

@codan84

codan84 commented Oct 18, 2022

It would be great to have something like that. However, there is another bastion type that requires AWS Session Manager for access, which makes things even more complicated.

@n1ngu

n1ngu commented Jan 4, 2023

> For me it's the role of the Terraform caller to create the way to access the resources you need (VPN, SSH tunnel, run Terraform in the VPC, etc...).

This is what I had been doing so far: executing various autossh commands before the terraform plan/apply steps, like

autossh -f -M 0 bastionhost -N -v -L 0.0.0.0:5001:database1:5432 -L 0.0.0.0:5002:database2:5432

and then start this provider on different localhost ports

provider "postgresql" {
  alias = "staging"
  host  = "localhost"
  port  = 5001
  # ...
}

provider "postgresql" {
  alias = "production"
  host  = "localhost"
  port  = 5002
  # ...
}

BUT

This blew up once I tried to embrace AWS RDS IAM authentication, because with aws_rds_iam_auth the provider can only infer the (wrong) hostname:port from the provider block, forcing me to write workarounds like

data "external" "rds_auth_token" {
  program = [
    "sh",
    "-c",
    "echo {\\\"password\\\": \\\"$(aws rds generate-db-auth-token --hostname ${each.value.hostname} --port ${each.value.port} --username ${each.value.username} )\\\"}",
  ]
  for_each = {
    staging = {
      hostname = "database1.asdfasdf.region.rds.amazonaws.com"
      port     = 5432
      username = "terraform"
    }
    production = {
      hostname = "database2.asdfasdf.region.rds.amazonaws.com"
      port     = 5432
      username = "terraform"
    }
  }
}
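
The inline echo escaping in that program is fragile. As a sketch, the same JSON could be produced by a small helper function checked in as a script (rds_token_json is a hypothetical name; the aws CLI on PATH is still assumed, and a token containing JSON-special characters would still need proper escaping):

```shell
#!/bin/sh
# Hypothetical helper: print the {"password": ...} JSON that the external
# data source protocol expects, without nested quote escaping.
rds_token_json() {
  token="$(aws rds generate-db-auth-token \
    --hostname "$1" --port "$2" --username "$3")"
  printf '{"password": "%s"}\n' "$token"
}
```

The external data source's program entry could then be just `["sh", "-c", "rds_token_json <hostname> <port> <username>"]` with the script sourced beforehand.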

provider "postgresql" {
  alias              = "staging"
  host               = "localhost"
  port               = 5001
  database           = "postgres"
  username           = "terraform"
  password           = data.external.rds_auth_token["staging"].result.password
  sslmode            = "require"
  superuser          = false
  aws_rds_iam_auth   = false
}

provider "postgresql" {
  alias              = "production"
  host               = "localhost"
  port               = 5002
  database           = "postgres"
  username           = "terraform"
  password           = data.external.rds_auth_token["production"].result.password
  sslmode            = "require"
  superuser          = false
  aws_rds_iam_auth   = false
}

which is rather flaky when executed in environments with dubious external data source support (Atlantis, Terraform Cloud, or the official Terraform Docker image), requires the external Python AWS CLI instead of using the provider's built-in AWS SDK, and is honestly quite a burden compared to the naive aws_rds_iam_auth = true I expected.

The bastion host is not going anywhere (I am not opening RDS access to arbitrary CI provider IPs), so having this feature would be awesome.

@zacksiri

zacksiri commented Jun 1, 2023

Yep, having this feature is necessary. Most PostgreSQL instances are not going to be publicly accessible.

10 participants