Use an existing ALB #228
Comments
@countergram Thanks, we've heard this request and similar ones a few times now. It seems a feature that many would like is the ability to explicitly call out a named ALB via annotation (or eventually a ConfigMap).
Are there any updates on this issue?
Hey, I might have solved your problem in this PR: #830 . Testing and feedback is welcome :) |
@joshrosso Is there any update on this request? |
Relevant: #914 |
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
I was able to use one ALB for multiple Ingress resources in version 1.0.1. I then created a new cluster and installed version 1.1.2, which creates a new ALB for each Ingress resource. Is there any way I can use the same ALB in 1.1.2?
^ cross-posted at #984 (comment), #724 (comment) |
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. |
@fejta-bot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. |
@leoskyrocker: You can't reopen an issue/PR unless you authored it or you are a collaborator. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. |
@M00nF1sh @joshrosso Any update here? I think this issue is still relevant today and yet unsolved. Having a way for Kubernetes to attach to existing ALBs/Target Groups is incredibly valuable for lots of reasons. Really surprised there's no way to do it right now. |
@gkrizek |
@M00nF1sh Great to hear! I hate to be "that guy", but is there a v2 anticipated release date? I like the CRD idea for target groups; I think that's the right direction. I think both options are valid for the ALB, but tags are preferred, because with an ARN you might have to do some wacky stuff to get the ARN into a manifest/Helm chart. With tags it'd be pretty easy to define without needing explicit values from AWS. It would also allow an ingress to attach to multiple ALBs if one chooses.
@gkrizek There is no anticipated release date yet (I cannot promise one), but I'll keep updating https://github.com/kubernetes-sigs/aws-alb-ingress-controller/projects/1 whenever I get time to work on it 🤣.
having a single |
@M00nF1sh I figured so 😉 . Sounds good, I'll check out the alpha for now. Thanks for the help. |
@rifelpet (However, personally I favor requiring an explicit annotation.) What do you mean by that? It's possible to extend it to be one group with multiple ALBs (like auto-split rules), but is there really a use case for this?
For those of you looking to use the same ALB to share different ingresses: you can achieve it by adding this annotation. Here's an example of the ingress manifest: kind: Ingress
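The manifest above is cut off in this thread; as a sketch, an Ingress joining a shared ALB via the controller's `alb.ingress.kubernetes.io/group.name` annotation might look like the following (all names and hosts here are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app                # placeholder name
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    # All ingresses sharing the same group.name are served by one ALB
    alb.ingress.kubernetes.io/group.name: shared-alb
spec:
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app  # placeholder Service
                port:
                  number: 80
```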
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale |
/remove-lifecycle stale |
Also interested in this. Our use-case is that we'd like to make use of APIGateway through terraform, but HTTP APIs require a listener ARN to be supplied. Datasource lookups can lead to a lot of pain with "known at apply" forced recreations of e.g. VPC links. Hence we're creating a skeleton ALB and listeners through terraform, then handing off post-creation management of the ALB + listeners to the lb-controller. We'd also prefer deletion of the ingress resource to not cause deletion of the ALB + listeners, for the sake of clean terraform deletions (though that's not such a big deal as an out-of-band deletion can be wrangled, and preventing deletion can lead to issues with e.g. finalisers). A setup we are currently trialling to work around the lack of first-class support is as follows:
This seems like it allows the ALB and listener(s) to be adopted by the |
Any update on this? |
@koleror have you looked into using the TargetGroupBinding resource? It allows you to create the ALB and TG in Terraform and then have the AWS LB Controller register your nodes with the TG.
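A sketch of such a TargetGroupBinding, using the controller's `elbv2.k8s.aws/v1beta1` API (the target group ARN, names, and ports are placeholders):

```yaml
apiVersion: elbv2.k8s.aws/v1beta1
kind: TargetGroupBinding
metadata:
  name: my-app-tgb            # placeholder
spec:
  serviceRef:
    name: my-app              # existing Service whose endpoints get registered
    port: 80
  # ARN of the target group created out of band (e.g. in Terraform)
  targetGroupARN: arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/my-tg/0123456789abcdef
  targetType: ip
```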
Will give it a try, thanks! |
I'm currently facing this issue: I created an ALB with a target group in Terraform, and once I deploy the ALB controller, it creates another ALB. Based on the documentation I can see that:
When the groupName of an IngressGroup for an Ingress is changed, the Ingress will be moved to a new IngressGroup and be supported by the ALB for the new IngressGroup. If the ALB for the new IngressGroup doesn't exist, a new ALB will be created. If an IngressGroup no longer contains any Ingresses, the ALB for that IngressGroup will be deleted and any deletion protection of that ALB will be ignored. This explains the behavior: the ALB controller first checks, via the ALB tags, whether an ALB already exists for the specified IngressGroup, and if it does not, a new ALB is created. So in order to point the ALB controller at an existing ALB, you need to set the matching tags on the ALB in your Terraform code (or wherever your infrastructure is managed). In my case I noticed that I forgot to specify tags on my main ALB, and the newly created ALB got the following tags:
You can find out which tags you need by letting the ALB controller create a new ALB and copying its tags onto your main ALB. After I changed the tags, the ALB controller worked with my specified ALB. I'm also using TargetGroupBinding, so it uses both the HTTP and HTTPS listeners on my ALB and updates them when needed.
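The comment above omits the actual tags. For reference, the ALBs created by recent versions of the controller are typically tagged along these lines; the exact keys and values depend on the controller version, cluster name, and ingress group, so treat these as illustrative only:

```yaml
# Illustrative tags on a controller-managed ALB (all values are placeholders)
elbv2.k8s.aws/cluster: my-cluster         # EKS/cluster name
ingress.k8s.aws/stack: my-ingress-group   # ingress group name (or namespace/name)
ingress.k8s.aws/resource: LoadBalancer
```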
Is there a workaround with TargetGroupBinding if you need to manage an entire ingress resource with an existing load balancer (so that the concern of routing can be kept within Kubernetes itself rather than needing IaC modification for each new target service)? |
You can set the ALB up to point at port 80 in the cluster, which is handled by Traefik on any box. Thus all Ingress management is in-cluster, with no IaC changes for new services. E.g. in CDK:

```ts
const targetGroup = new ApplicationTargetGroup(this, "ClustersAlbTargetGroup", {
  vpc: vpc,
  targetGroupName: `ClustersAlbTG`,
  port: 80,
  protocol: ApplicationProtocol.HTTP,
  targetType: TargetType.IP,
  targets: [],
  // Test the Traefik ping endpoint
  healthCheck: {
    port: "9000",
    path: "/ping"
  }
});
```

Of course you will have to do more work to get ALB>cluster comms over SSL.
This does nothing of the kind if the controllers that would reference the common ALB are on separate Kubernetes clusters, which is exactly the scenario where it would be useful for migrating workloads. |
+1 |
AWS have now published a post that shows how to do much of the deployment I was describing earlier: #228 (comment). In their post they do talk about using a This does all work, and fulfils the goal of this issue, but it is unnecessarily difficult. It would be good to have one (or a low number) of lines of config to get this working. I feel a cluster should not be able to create infrastructure, including ALBs.
@hlascelles I agree. In an ideal world it should just be configuring existing infrastructure covered by Terraform or CloudFormation. Otherwise it becomes unnecessarily difficult to ensure environments remain as immutable as possible, and it increases the attack surface. TargetGroupBinding itself only provides a clearly documented API for attaching directly to Services, not ingresses, so you still have to run an actual ingress controller behind the AWS LBC as well, which is an extra hop that is unnecessary when using VPC CNI. The Gateway controller is another option, but it is unsuitable for networks with large numbers of services as a starting point because it forces you onto AWS VPC Lattice, which gets extremely expensive very quickly.
Closing this issue as we have addressed it with the new TargetGroupBinding feature for multi-cluster support in the AWS Load Balancer Controller. This allows customers to use a single NLB or ALB to route traffic to services across multiple EKS clusters. Key points:
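As a sketch of the multi-cluster mode mentioned above, each cluster applies a TargetGroupBinding against the same shared target group. This assumes the `multiClusterTargetGroup` field added for this feature in recent controller versions (field availability may differ in your version; names and the ARN below are placeholders):

```yaml
apiVersion: elbv2.k8s.aws/v1beta1
kind: TargetGroupBinding
metadata:
  name: shared-tgb            # placeholder; applied in each cluster
spec:
  serviceRef:
    name: my-app              # Service in this cluster
    port: 80
  targetGroupARN: arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/shared-tg/0123456789abcdef
  targetType: ip
  # Tell the controller that other clusters also register targets in this TG,
  # so it only deregisters targets it owns
  multiClusterTargetGroup: true
```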
As mentioned above, I think this has been handled a multitude of ways. |
I must say that after spending 3 hours trying to make Kubernetes use our existing ALB (and eventually giving up), an easier way is needed. The things I tried:
I deliberately did not try TargetGroupBinding - it might have worked, but it would have required a reference to Services, whereas we use a lot of pre-packaged Helm charts that limit us to configuring ingress annotations only. IMHO, reworking target groups just because we cannot use an existing load balancer (or listener) handle feels plain wrong.
As has been described in many other comments, this doesn't address the issue completely; it fails to address the use-case where you want to create the ALB out of band, but have the controller create & manage the target groups itself (rather than adopt existing target groups). I don't think this issue should be closed. |
What is your use case for having a self managed LoadBalancer, but managed Target Group? During TG creation, you would have to manually attach it to the LB. To allow resource deletion, you would have to manually detach the TG from the LB. I don't see a good use case for a managed TG without managed LB. |
Could you explain why managed TG/self-managed ALB implies manual attaching/detaching? The expectation here would be something like
A few other comments that describe the use-case (& existing workarounds), including the original post and responses: |
I would prefer to manage the complete lifecycle of the ALB, listeners, and target groups via a k8s ingress resource, but I need to have an AWS WAF and AWS Global Accelerator attached to the ALB. The aws-load-balancer-controller does not support creation of WAF or Global Accelerator. So what I do is provision the ALB, WAF, and Global Accelerator in a single Terraform module; that way it is easy to pass references to each of the resources around to the other resources.

I share my ALB across many k8s-hosted applications/workloads, and at ALB creation time it is unknown how many target groups I will need attached to it. Using a TargetGroupBinding is cumbersome because it requires me to first create the target groups via Terraform (and as I stated before, I don't know how many at ALB creation time). I would prefer to not use Terraform at all, but I acknowledge that managing WAF and GA is out of scope for this project. Creating target groups is certainly not out of scope, though.

In my world, developers onboard their apps into our k8s clusters on their own and deploy their k8s ingress with their app. They don't typically interact with the Terraform code which creates the ALB. Giving them the ability to create target groups on demand using an ingress resource removes the need for them to learn and interact with Terraform to create a new target group. This is why I would prefer to have the target groups created by aws-load-balancer-controller.
As a user of Terraform (or, substitute CloudFormation), I would like to use an existing ALB with the ingress controller so that I can keep my infrastructure automation centralized rather than in several different places. This also externalizes the various current and future associations between ALB and other parts of the infrastructure that may already be defined in TF/CFN (certs, Route53, WAF, CloudFront, other config).