Use an existing ALB #228

Closed
countergram opened this issue Sep 30, 2017 · 78 comments
Labels
kind/feature Categorizes issue or PR as related to a new feature.

Comments

@countergram

As a user of Terraform (or, substitute CloudFormation), I would like to use an existing ALB with the ingress controller so that I can keep my infrastructure automation centralized rather than in several different places. This also externalizes the various current and future associations between ALB and other parts of the infrastructure that may already be defined in TF/CFN (certs, Route53, WAF, CloudFront, other config).

@joshrosso added the kind/feature label on Oct 4, 2017
@joshrosso added this to the backlog milestone on Oct 4, 2017
@joshrosso

@countergram Thanks, we've heard this request and similar a few times now.

It seems a feature many would like is the ability to explicitly call out a named ALB via an annotation (or eventually a ConfigMap).

@markbooch

Are there any updates on this issue?

@marcosdiez
Contributor

Hey, I might have solved your problem in this PR: #830. Testing and feedback are welcome :)

@benderillo

benderillo commented Mar 20, 2019

@joshrosso Is there any update on this request?

@tdmalone

tdmalone commented Apr 7, 2019

Relevant: #914

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Jul 6, 2019
@npolagani

I was able to use one ALB for multiple Ingress resources in version 1.0.1. I created a new cluster and installed version 1.1.2, which creates a new ALB for each Ingress resource. Is there any way I can use the same ALB in 1.1.2?

@tdmalone

tdmalone commented Aug 6, 2019

^ cross-posted at #984 (comment), #724 (comment)

@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Sep 5, 2019
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot
Contributor

@leoskyrocker: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@gkrizek

gkrizek commented Mar 4, 2020

@M00nF1sh @joshrosso Any update here? I think this issue is still relevant today and yet unsolved.

Having a way for Kubernetes to attach to existing ALBs/Target Groups is incredibly valuable for lots of reasons. Really surprised there's no way to do it right now.

@M00nF1sh
Collaborator

M00nF1sh commented Mar 4, 2020

@gkrizek
We'll address this issue in V2.
For attaching to existing target groups, we'll expose a CRD called endpointBinding to allow that.
For attaching to an existing ALB, we haven't decided whether to use an annotation on the Ingress (like alb-arn: xxxx) or an AWS tag on the ALB (like ownership: shared, ingress: ingress-name, cluster: cluster-name). Any opinions?
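For concreteness, here is a minimal sketch of the two options under discussion. Both are purely hypothetical at this point: the annotation key and tag names below are placeholders derived from the comment above, not implemented controller configuration.

    # Option A (hypothetical): reference an existing ALB by ARN from the Ingress
    kind: Ingress
    metadata:
      name: my-app
      annotations:
        # placeholder key -- not an implemented controller annotation
        alb.ingress.kubernetes.io/alb-arn: arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/shared-alb/0123456789abcdef

    # Option B (hypothetical): AWS tags applied to the existing ALB itself so the
    # controller can claim it (key/value pairs on the ALB, not Kubernetes metadata):
    #   ownership: shared
    #   ingress:   my-ingress
    #   cluster:   my-cluster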

@gkrizek

gkrizek commented Mar 4, 2020

@M00nF1sh Great to hear! I hate to be "that guy", but is there an anticipated v2 release date?

I like the CRD idea for target groups; I think that's the right direction. Both options are valid for the ALB, but I think tags are preferable, because with an ARN you might have to do some wacky stuff to get the ARN into a manifest/Helm chart. With tags it'd be pretty easy to define without needing explicit values from AWS. It would also allow an ingress to attach to multiple ALBs if one chooses.

@M00nF1sh
Collaborator

M00nF1sh commented Mar 4, 2020

@gkrizek There is no anticipated release date yet (I cannot promise one), but I'll keep updating https://github.com/kubernetes-sigs/aws-alb-ingress-controller/projects/1 whenever I get time to work on it 🤣.
BTW, there is an alpha version of V2 which works just fine: https://github.com/kubernetes-sigs/aws-alb-ingress-controller/releases/tag/v1.2.0-alpha.1 (you can reuse an ALB by applying the correct tags; however, the controller will try to delete the ALB once the ingress is deleted).

@rifelpet
Contributor

rifelpet commented Mar 4, 2020

Having a single ingress: tag forces a many:1 relationship of ingresses to ALBs, which contradicts the concept of ingress grouping mentioned in other issues. It'd be great if the design could allow many ALBs to be reused by many ingresses. Perhaps a "binding" CRD, similar to the endpoint/target group solution mentioned above?

@gkrizek

gkrizek commented Mar 4, 2020

@M00nF1sh I figured so 😉 . Sounds good, I'll check out the alpha for now. Thanks for the help.

@M00nF1sh
Collaborator

M00nF1sh commented Mar 4, 2020

@rifelpet
It's actually an ingress.k8s.aws/stack: <value> tag on the ALB in V2, where the value can be "namespace/ingress-name" or "group-name". So it's still a 1:1 relation between a group and an ALB.

(However, I personally favor requiring an explicit ..../alb-arn: xxxx annotation on one of the ingresses in the group to denote the reuse, since tagging the ALB requires planning for the Ingress beforehand.)

What do you mean by "allow many ALBs to be reused by many ingresses"? In the current design, one group will only have one ALB.

It's possible to extend it to something like one group with multiple ALBs (e.g. auto-splitting rules), but is there really a use case for this? I assume there are app-specific dependencies, like some Ingress having to be hosted behind a single DNS name, so it's impossible for the controller to make the split decision if the rules exceed an ALB's limits; instead it's better for the user to split their ingresses into different groups.

@sichiba

sichiba commented Aug 1, 2023

For those of you looking to share the same ALB across different ingresses, you can achieve it by adding the annotation alb.ingress.kubernetes.io/group.name: xxxxx to every Ingress you want to attach to the same ALB.

Here's an example of the Ingress manifest:

    kind: Ingress
    metadata:
      annotations:
        alb.ingress.kubernetes.io/certificate-arn: {{ .Values.networking.ingress.certificate }}
        alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]'
        alb.ingress.kubernetes.io/scheme: internet-facing
        alb.ingress.kubernetes.io/target-group-attributes: stickiness.enabled=true
        alb.ingress.kubernetes.io/target-type: ip
        alb.ingress.kubernetes.io/group.name: xxxxxx
      finalizers:
        - ingress.k8s.aws/resources
      name: {{ .Values.appName }}
      namespace: {{ .Values.namespace }}
    spec:
      ingressClassName: alb

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Jan 25, 2024
@ffMathy

ffMathy commented Jan 25, 2024

/remove-lifecycle stale

@k8s-ci-robot removed the lifecycle/stale label on Jan 25, 2024
@shraddhabang modified the milestones: backlog, v2.8.0 on Feb 8, 2024
@MIJOTHY-V2

MIJOTHY-V2 commented Apr 4, 2024

Also interested in this. Our use-case is that we'd like to make use of APIGateway through terraform, but HTTP APIs require a listener ARN to be supplied. Datasource lookups can lead to a lot of pain with "known at apply" forced recreations of e.g. VPC links. Hence we're creating a skeleton ALB and listeners through terraform, then handing off post-creation management of the ALB + listeners to the lb-controller. We'd also prefer deletion of the ingress resource to not cause deletion of the ALB + listeners, for the sake of clean terraform deletions (though that's not such a big deal as an out-of-band deletion can be wrangled, and preventing deletion can lead to issues with e.g. finalisers).

A setup we are currently trialling to work around the lack of first-class support is as follows:

  • Define some tags that will be added to the ingress as an annotation:

    some-qualifier/alb-creation-terraform-managed=true
    some-qualifier/alb-deletion-terraform-managed=true
    some-qualifier/alb-listener-creation-terraform-managed=true
    some-qualifier/alb-listener-deletion-terraform-managed=true
    
  • Attach a policy to the role we're assigning to the load-balancer-controller, to explicitly deny it permissions to create/delete ALBs and listeners with the above tags

    Policy (Terraform):
    data "aws_iam_policy_document" "terraform_managed_ingress_alb" {
    
      statement {
        sid    = "CreateLoadBalancer"
        effect = "Deny"
    
        actions = [
          "elasticloadbalancing:CreateLoadBalancer",
        ]
        resources = ["*"]
    
        condition {
          test     = "Bool"
          variable = "aws:RequestTag/some-qualifier/alb-creation-terraform-managed"
          values   = ["true"]
        }
      }
    
      statement {
        sid    = "DeleteLoadBalancer"
        effect = "Deny"
    
        actions = [
          "elasticloadbalancing:DeleteLoadBalancer",
        ]
        resources = ["*"]
    
        condition {
          test     = "Bool"
          variable = "aws:ResourceTag/some-qualifier/alb-deletion-terraform-managed"
          values   = ["true"]
        }
      }
    
      statement {
        sid    = "CreateListener"
        effect = "Deny"
    
        actions = [
          "elasticloadbalancing:CreateListener",
        ]
        resources = ["*"]
    
        condition {
          test     = "Bool"
          variable = "aws:RequestTag/some-qualifier/alb-listener-creation-terraform-managed"
          values   = ["true"]
        }
      }
    
      statement {
        sid    = "DeleteListener"
        effect = "Deny"
    
        actions = [
          "elasticloadbalancing:DeleteListener",
        ]
        resources = ["*"]
    
        condition {
          test     = "Bool"
          variable = "aws:ResourceTag/some-qualifier/alb-listener-deletion-terraform-managed"
          values   = ["true"]
        }
      }
    }
    
  • Create the ALB in terraform with the following tags, with a lifecycle block to ignore certain changes (security groups, tags etc.):

    elbv2.k8s.aws/cluster # cluster name
    ingress.k8s.aws/stack # $ingress_namespace/$ingress_name
    ingress.k8s.aws/resource # "LoadBalancer"
    
  • Create the listener(s) in terraform with the following tags, with a lifecycle block to ignore certain changes (tags etc.):

    elbv2.k8s.aws/cluster # cluster name
    ingress.k8s.aws/stack # $ingress_namespace/$ingress_name
    ingress.k8s.aws/resource # listener port
    

This seems like it allows the ALB and listener(s) to be adopted by the aws-load-balancer-controller for the relevant ingress, and for all the resources to be updated but not deleted. So we are able to make use of the ALB resources in terraform without needing to rely on apply-time k8s datasource lookups (which has caused us a lot of pain). It feels a bit brittle in that it's depending on what could be seen as implementation details of the aws-load-balancer-controller. It would obviously be preferable to have this functionality be supported by the controller itself. But any sort of feedback on this approach would also be good to hear. We may be barking up the wrong tree by trying to have a terraformed APIGateway integrate with a k8s-managed ALB.

@koleror

koleror commented May 17, 2024

Any update on this?
I'm also interested in pre-creating the ALB using Terraform (and letting the controller handle the target groups, or fill them).
My use case is that I'd like to put CloudFront in front of 2 ALBs to do some path-based routing (I can't do it in the ALB, unfortunately, as I need to restrict some routes to a prefix list, which can only be done at the security group level).

@dwickr

dwickr commented May 17, 2024

@koleror have you looked into using the TargetGroupBinding resource? It allows you to create the ALB and TG in Terraform and then have the AWS LB Controller register your nodes w/ the TG.
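For readers landing here, a minimal TargetGroupBinding sketch; the namespace, Service name, port, and target group ARN are placeholders to be replaced with your own Service and the Terraform-created target group:

    apiVersion: elbv2.k8s.aws/v1beta1
    kind: TargetGroupBinding
    metadata:
      name: my-app-tgb
      namespace: my-namespace
    spec:
      serviceRef:
        name: my-app   # Service whose endpoints get registered with the target group
        port: 80       # Service port to bind
      # ARN of the target group created out of band (e.g. in Terraform)
      targetGroupARN: arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/my-app-tg/0123456789abcdef
      targetType: ip   # must match the target type of the pre-created target group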

@koleror

koleror commented May 18, 2024

@koleror have you looked into using the TargetGroupBinding resource? It allows you to create the ALB and TG in Terraform and then have the AWS LB Controller register your nodes w/ the TG.

Will give it a try, thanks!

@Matthieulvt

I'm currently facing this issue: I've created an ALB with a target group in Terraform, and once I deploy the ALB controller, it creates another ALB.

Based on the documentation I can see that:

The ALB for an IngressGroup is found by searching for an AWS tag ingress.k8s.aws/stack tag with the name of the IngressGroup as its value. For an implicit IngressGroup, the value is namespace/ingressname.

When the groupName of an IngressGroup for an Ingress is changed, the Ingress will be moved to a new IngressGroup and be supported by the ALB for the new IngressGroup. If the ALB for the new IngressGroup doesn't exist, a new ALB will be created.

If an IngressGroup no longer contains any Ingresses, the ALB for that IngressGroup will be deleted and any deletion protection of that ALB will be ignored.

This explains why the ALB controller first checks whether an ALB already exists for the IngressGroup, identified through the ALB tags; if one does not exist, a new ALB is created. So in order to point the ALB controller at a specific ALB, you need to set those tags in your Terraform ALB definition (or elsewhere, depending on how you manage your infrastructure).

In my case, I noticed that I forgot to specify tags on my main ALB, and the newly created ALB got the following tags:

elbv2.k8s.aws/cluster | prod-eks
ingress.k8s.aws/resource | LoadBalancer
ingress.k8s.aws/stack | <Namespace_Kube>/<Ingress_Name>

You can check which tags you need by letting the ALB controller create a new ALB and copying its tags onto your main ALB.

After I changed the tags, the ALB controller worked with the ALB I specified. I'm also using TargetGroupBinding, so it uses both my HTTP and HTTPS listeners on my ALB and updates them when needed.
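To summarize the adoption setup described above as a sketch: the alb.ingress.kubernetes.io/group.name annotation on each Ingress and the ingress.k8s.aws/stack tag on the pre-created ALB have to line up (the tag keys come from the comment above; the group, namespace, and cluster values are placeholders). Note the earlier caveat in this thread that the controller may delete an adopted ALB once the last Ingress in its group is deleted.

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: my-app
      namespace: my-namespace
      annotations:
        alb.ingress.kubernetes.io/group.name: my-group   # explicit IngressGroup name
    spec:
      ingressClassName: alb

    # Tags the controller searches for on the pre-created ALB:
    #   elbv2.k8s.aws/cluster    = <cluster name, e.g. prod-eks>
    #   ingress.k8s.aws/resource = LoadBalancer
    #   ingress.k8s.aws/stack    = my-group   # or <namespace>/<ingress-name> for an implicit group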

@ascopes

ascopes commented Jul 3, 2024

Is there a workaround with TargetGroupBinding if you need to manage an entire ingress resource with an existing load balancer (so that the concern of routing can be kept within Kubernetes itself rather than needing IaC modification for each new target service)?

@hlascelles

hlascelles commented Jul 5, 2024

You can set up the ALB to point at port 80 in the cluster, which is handled by Traefik on any box. Thus all Ingress management is in-cluster, with no IaC changes for new services. E.g. in CDK:

    // Assumes `vpc` is defined in the surrounding CDK Stack/Construct;
    // import shown for completeness (CDK v2 path -- adjust to your CDK version).
    import { ApplicationProtocol, ApplicationTargetGroup, TargetType } from "aws-cdk-lib/aws-elasticloadbalancingv2";

    const targetGroup = new ApplicationTargetGroup(this, "ClustersAlbTargetGroup", {
      vpc: vpc,
      targetGroupName: `ClustersAlbTG`,
      port: 80,
      protocol: ApplicationProtocol.HTTP,
      targetType: TargetType.IP,
      targets: [],
      // Test the Traefik ping endpoint
      healthCheck: {
        port: "9000",
        path: "/ping"
      }
    });

Of course, you will have to do more work to get ALB-to-cluster comms over SSL.

@nethershaw

For those of you looking to share the same ALB across different ingresses, you can achieve it by adding the annotation alb.ingress.kubernetes.io/group.name: xxxxx to every Ingress you want to attach to the same ALB.

Here's an example of the Ingress manifest:

    kind: Ingress
    metadata:
      annotations:
        alb.ingress.kubernetes.io/certificate-arn: {{ .Values.networking.ingress.certificate }}
        alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]'
        alb.ingress.kubernetes.io/scheme: internet-facing
        alb.ingress.kubernetes.io/target-group-attributes: stickiness.enabled=true
        alb.ingress.kubernetes.io/target-type: ip
        alb.ingress.kubernetes.io/group.name: xxxxxx
      finalizers:
        - ingress.k8s.aws/resources
      name: {{ .Values.appName }}
      namespace: {{ .Values.namespace }}
    spec:
      ingressClassName: alb

This does nothing of the kind if the controllers that would reference the common ALB are on separate Kubernetes clusters, which is exactly the scenario where it would be useful for migrating workloads.

@seifrajhi
Member

+1
This is still relevant, I hope to see this feature implemented soon

@hlascelles

hlascelles commented Oct 3, 2024

AWS have now published a post that shows how to do much of the deployment I was describing earlier: #228 (comment)

In their post they do talk about using the AWS Load Balancer Controller; however, we do not use it (as we do not need/want the cluster to create the ALB outside of infra-as-code), and instead we get the traffic picked up by Traefik in-cluster.

https://aws.amazon.com/blogs/containers/patterns-for-targetgroupbinding-with-aws-load-balancer-controller/

[Diagram from the post: when_ingress_not_enough_what_is_tgb]

This does all work, and fulfils the goal of this issue, but it is unnecessarily difficult... It would be good to have a one-line (or a few lines of) config to get this working. I feel a cluster should not be able to create infrastructure, including ALBs.

@ascopes

ascopes commented Oct 6, 2024

@hlascelles I agree. In an ideal world it should just be configuring existing infrastructure covered by Terraform or CloudFormation. Otherwise it becomes unnecessarily difficult to keep environments as immutable as possible, and it increases the attack surface.

TargetGroupBinding itself only provides a clearly documented API for attaching directly to Services, not Ingresses, so you still have to run an actual ingress controller behind the AWS LBC as well, which is an extra hop that is unnecessary when using the VPC CNI.

The Gateway controller is another option, but it is unsuitable for networks with large numbers of services as a starting point, as it forces you to use AWS VPC Lattice, which gets extremely expensive very quickly.

@pushkark8s

Closing this issue as we have addressed it with the new TargetGroupBinding feature for multi-cluster support in the AWS Load Balancer Controller. This allows customers to use a single NLB or ALB to route traffic to services across multiple EKS clusters.

Key points:

  1. Customers can now use the TargetGroupBinding feature to tie multiple EKS clusters to a single load balancer. This enables traffic distribution across clusters for improved resilience and availability.

  2. The multi-cluster support works for both Application Load Balancers (ALBs) and Network Load Balancers (NLBs).

  3. Detailed instructions on how to use this feature are available in the design patterns document: https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/use_cases/multi_cluster/

  4. Customers can use Terraform, CloudFormation, or the APIs to create the load balancer and target group out of band, and specify a TargetGroupBinding (TGB) to attach to the load balancer.

  5. Customers can also use the LBC to create the LB and TG; please make sure to use the deletion_protection attribute annotation on your Service or Ingress so that the LB is not accidentally deleted.
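For item 5, a minimal sketch of deletion protection on an ALB Ingress via the controller's load-balancer-attributes annotation (the Ingress name is a placeholder):

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: my-app
      annotations:
        # enables ELBv2 deletion protection so the ALB cannot be deleted accidentally
        alb.ingress.kubernetes.io/load-balancer-attributes: deletion_protection.enabled=true
    spec:
      ingressClassName: alb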

@zac-nixon
Collaborator

As mentioned above, I think this has been handled a multitude of ways.

@laurisvan

I must say that after spending 3 hours trying to make Kubernetes use our existing ALB (and eventually giving up), an easier way is needed.

The things I tried:

I deliberately did not try TargetGroupBinding. It might have worked, but it would have required a reference to Services, whereas we are using a lot of pre-packaged Helm charts that limit us to configuring ingress annotations only. IMHO, reworking target groups just because we cannot use an existing load balancer (or listener) handle feels plain wrong.

@MIJOTHY-V2

MIJOTHY-V2 commented Feb 9, 2025

Closing this issue as we have addressed it with the new TargetGroupBinding feature for multi-cluster support in the AWS Load Balancer Controller. This allows customers to use a single NLB or ALB to route traffic to services across multiple EKS clusters.

Key points:

1. Customers can now use the TargetGroupBinding feature to tie multiple EKS clusters to a single load balancer. This enables traffic distribution across clusters for improved resilience and availability.

2. The multi-cluster support works for both Application Load Balancers (ALBs) and Network Load Balancers (NLBs).

3. Detailed instructions on how to use this feature are available in the design patterns document: https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/use_cases/multi_cluster/

4. Customers can use Terraform, CloudFormation, or the APIs to create the load balancer and target group out of band, and specify a TargetGroupBinding (TGB) to attach to the load balancer.

5. Customers can also use the LBC to create the LB and TG; please make sure to use the deletion_protection attribute annotation on your Service or Ingress so that the LB is not accidentally deleted.

As has been described in many other comments, this doesn't address the issue completely; it fails to address the use-case where you want to create the ALB out of band, but have the controller create & manage the target groups itself (rather than adopt existing target groups). I don't think this issue should be closed.

@zac-nixon
Collaborator

What is your use case for having a self-managed load balancer but a managed target group? During TG creation, you would have to manually attach it to the LB. To allow resource deletion, you would have to manually detach the TG from the LB. I don't see a good use case for a managed TG without a managed LB.

@MIJOTHY-V2

What is your use case for having a self-managed load balancer but a managed target group? During TG creation, you would have to manually attach it to the LB. To allow resource deletion, you would have to manually detach the TG from the LB. I don't see a good use case for a managed TG without a managed LB.

Could you explain why managed TG/self-managed ALB implies manual attach/detaching? The expectation here would be something like

  1. Create ALB out of band
  2. Create ingress resource, specifying the ALB ARN or other identifier via annotation
  3. aws-load-balancer-controller "adopts" the ALB, and is responsible for creating and attaching/detaching target groups based on ingress rules, and doing other configuration

A few other comments that describe the use-case (& existing workarounds), including the original post and responses:

@jwenz723
Contributor

I would prefer to manage the complete lifecycle of the ALB, listeners, and target groups via a k8s ingress resource, but I need to have an AWS WAF and AWS Global Accelerator attached to the alb. The aws-load-balancer-controller does not support creation of WAF or Global Accelerator.

So what I do is I provision the ALB, WAF, and Global Accelerator in a single terraform module that way it is easy to pass references to each of the resources around to the other resources.

I share my ALB across many k8s hosted applications/workloads and at the time of ALB creation it is unknown how many target groups I will need to have attached to my ALB.

Using a TargetGroupBinding is cumbersome because it requires me to first create the target groups via terraform (as I stated before I don’t know how many at ALB creation time). I would prefer to not use terraform at all, but I acknowledge that managing WAF and GA is out of scope for this project. Creating target groups is certainly not out of scope though.

In my world, developers onboard their apps into our k8s clusters on their own and deploy their k8s ingress with their app. They don’t typically interact with the terraform code which creates the ALB. Giving them the ability to create target groups on demand using an ingress resource removes the need for them to learn and interact with terraform to create a new target group.

This is why I would prefer to have the target groups created by aws-load-balancer-controller.
