Create option to reuse an existing ALB instead of creating a new ALB per Ingress #298
Comments
@bigkraig Any update?
Wait... I guess I missed this in reading the documentation. Are you saying that every Ingress created deploys its own ALB? So for our 60 or so ingresses we'd end up with 60 ALBs? What about different host names within the same ingress? Does that at least reuse the same ALB?
@mwelch-ptc That is correct. There is a 1-to-1 mapping of Ingress resources to ALBs.
Seems to be fairly costly. We have looked at other solutions due to this issue.
What are everyone's thoughts on how to prioritize the rules if a single ALB spans ingress resources and potentially even namespaces? I can see how, in larger clusters, multiple teams may accidentally take the same path.
This is a general Kubernetes ingress issue, not specific to this ingress controller. I think the discussion of this should be had in a more general forum instead of an issue against this controller.
I'm tempted to say that this is not a general Kubernetes ingress issue. The existing load balancers supported by Kubernetes are Layer 4 and are paired with ingress controllers that handle Layer 7 (meaning they can use one load balancer and then deal with Layer 7 once traffic gets into the cluster). ALB is Layer 7 and deals with it before traffic gets to Kubernetes, so we cannot assume upstream Kubernetes is going to change for this use case, though as this becomes more standard I think that could change. GCE suggests "If you are exposing an HTTP(S) service hosted on Kubernetes Engine, HTTP(S) load balancing is the recommended method for load balancing," and I would imagine as EKS kicks off it will suggest the same.
We can already generally do this by having a singular ingress resource, although it makes whatever deployment scheme you're using for Kubernetes adjust to that. It's also worth pointing out that the Kubernetes Ingress documentation literally states:
I think it would be really nice to have the ability to do this in a clean way.
@spacez320 I read that as saying you can have an ingress with multiple services behind it, so a single load balancer for many services as opposed to a load balancer per service. There is still the issue that the IngressBackend type does not have a way to reference a service in another namespace. I think until the ingress resource spec is changed, there isn't a correct way of implementing something like this.
@bigkraig I don't think this issue should be closed. The issue isn't about having a single Ingress resource; it's about multiple Ingress resources being able to share a single ALB, possible across different namespaces, but not necessarily.
@patrickf55places got it; sharing within a namespace is possible with the spec, but I am still unsure how we would organize the routes or resolve conflicts.
@bigkraig Well, I think it's both, and I think that's what @patrickf55places meant by saying "possible across different namespaces, but not necessarily". We should be able to define an Ingress anywhere and share an Amazon load balancer, I think. I understand if there are limitations in the spec, though. Should someone go out and try to raise this issue with the wider community? Is that possibly already happening?
What about using something similar to how nginx ingress handles it?
I was glad to find this GitHub issue and also bummed that it seems like it will be a long time before this gets implemented. It smells like there is a lot of complexity associated with the change and potentially not the resources to dig into it. I'm assuming it will be many months, so our engineering team is going to switch our technical approach to a different load balancing ingress strategy with AWS costs that scale economically in line with our needs. If that assessment feels wrong, please let me know.
I've created another ingress controller that combines multiple ingress resources into a new one: https://github.com/jakubkulhan/ingress-merge. Partial ingresses are annotated with
Hey, I might have solved your problem in this PR: #830. Testing and feedback are welcome :)
I'm currently using ingress-merge, and while it works, I'm having issues with the health checks: the services I'm exposing do different things by default and we don't have a standard health check URL for all microservices. Do you have a solution for this? I think the limitation comes from aws-alb-ingress-controller rather than ingress-merge, but if there is a way to have different health checks, that would be awesome. Thanks everyone for your effort.
@fygge on Slack gave me the answer:
Tested and works OK.
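For others with the same question, one approach that can work (a sketch, assuming the controller's standard health check annotations; this may or may not be the exact answer referenced above) is to put the health check annotations on each backend Service rather than on the shared Ingress, since Service-level annotations apply to the target group created for that Service:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-microservice                 # hypothetical service name
  annotations:
    # Health check settings for the target group created for this Service;
    # other Services behind the same ALB can declare different paths.
    alb.ingress.kubernetes.io/healthcheck-path: /my-microservice/healthz
    alb.ingress.kubernetes.io/success-codes: "200"
spec:
  selector:
    app: my-microservice
  ports:
    - port: 80
      targetPort: 8080
```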
ALB does that with listener rule priorities, where a new rule gets a lower priority than existing rules (excluding the default rule). The problem is if you set a priority number that conflicts with an existing rule. Maybe a new kind of ingress controller is called for here: something that, from the Ingress object, controls only target groups and listener rules and attaches them to an ALB created in the controller configuration (or at the first object request). This is what this issue is asking of the current controller, but for larger organizations this could bring complexity with path or host-header rules, causing problems with overlapping Ingress objects.
It would be great to get this feature either merged into the latest code or to revitalise v1.2. I would love to use this feature, but I also want to be able to use IAM Roles for Service Accounts in EKS, and having tested the tag v1.0.0-alpha.1, that support hasn't been merged in (it works great on v1.1.8).
How did you get that to work? Can you share the ingress definitions for a couple of applications? Are you getting it to work by specifying the same ingress name but defining a different rule in each? Also, are you using a package/deployment manager like Helm 3? I believe it has validations to not deploy an existing resource (the request to create the ingress does not even get to k8s).
Here's a complete helm template used to define ingress for one particular application.
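A simplified sketch of what such an Ingress definition could look like (application name, hostname, and service are hypothetical; the essential piece is the alb.ingress.kubernetes.io/group.name annotation, which must carry the same value on every Ingress that should share the ALB):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app                                        # unique per Helm release
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/group.name: shared-alb  # same value across all sharing Ingresses
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  rules:
    - host: my-app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80
```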
We use Helm 3, but the exact same definition worked in Helm 2 too. The key part is the 'group.name' metadata above. Two Helm releases that use this template result in two ingress objects with different names, and then in two target groups used by a single ALB. Other applications define their ingresses in a similar way and also share the same ALB. The deployment of alb-ingress-controller basically uses default options, except for
Thanks! I will give it a try and confirm whether it's working for our use cases. By the way, #914 states that this is not production ready and should not be used. Any idea when it will be available as an official release?
I came across the same kind of requirement: deploying 20+ services in Kubernetes on an AWS Fargate profile. Since Fargate does not support NLB as of now, the only option is ALB, but each deployment created a new ALB, which means 20+ public IPs and more ALB cost. I achieved a solution with two ingress controllers, the ALB ingress controller and the nginx ingress controller: nginx is the target for the ALB on port 80, and the application services running in the cluster on different ports and in different namespaces are reached through nginx. I have documented my solution; I think it will help with your requirement.
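For illustration, the ALB side of that setup might look roughly like the following single Ingress, which hands all traffic to the nginx ingress controller's Service (namespace and Service name here are hypothetical; target-type ip is assumed because Fargate only supports IP targets):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: alb-to-nginx                    # hypothetical name
  namespace: ingress-nginx              # assumes the nginx controller runs here
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip   # IP targets are required for Fargate pods
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: ingress-nginx-controller   # hypothetical nginx controller Service
                port:
                  number: 80
```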
Any timelines on when this functionality will be part of a stable release?
@MXClyde we are doing the final phase of testing for the new aws-load-balancer-controller, and the RC image is here: https://github.com/kubernetes-sigs/aws-alb-ingress-controller/releases/tag/v2.0.0-rc5. You can find details here: https://github.com/kubernetes-sigs/aws-alb-ingress-controller/tree/v2_ga. Stay tuned for the What's New announcement coming soon!
We have published the v2.0.0 release at https://github.com/kubernetes-sigs/aws-load-balancer-controller/releases/tag/v2.0.0. The documents have been updated (https://docs.aws.amazon.com/eks/latest/userguide/alb-ingress.html and https://docs.aws.amazon.com/eks/latest/userguide/load-balancing.html) and the blog post is here: https://aws.amazon.com/blogs/containers/introducing-aws-load-balancer-controller/
Can you guys improve your documentation system like MongoDB or Elasticsearch do, so that I can select which version of the library/system documentation I want to view? I mean, can you not overwrite the published URL with the latest one? I have to maintain some old versions, and it gets confusing trying to remember what I did before (with 1.1.4, for example) while looking at the version 2 docs.
It seems this is still not implemented. Any plans to support it in the future?
I am quite sure this feature is already implemented; I used it recently.
Oops, sorry, my bad. I was actually thinking about reusing an existing ALB created outside of the Ingress controller (sharing the ALB between multiple EKS clusters). But never mind.
I've been thinking of trying to manually add the tags that the ingress controller uses to identify the ALBs it creates across restarts, so the ALB can be created outside Kubernetes and simply controlled by the controller.
@Tzrlk: just to clarify, you can configure these things with the ALB controller.
I agree that reusing an existing balancer might be a nice feature, but most of the configurability was added to the ALB controller.
@vprus Indeed, I've implemented that method previously. Looking through more of the issues, @angapov's requirement is answered as part of #228 with the release of TargetGroupBindings. In fact, the method I was suggesting above to hack it into place appears to have worked for one of the last commenters there.
Maybe it's me, but it didn't work as intended for me. I created an ingress resource and it instantiated an ALB; then I created a secondary ingress and specified the group name, and I did not see a record created in the hosted zone or in the rules of the ALB's listener.
@CPWu What's in the alb ingress controller logs? And what's in the external-dns logs?
To whom it may concern, I created a PR that partially solves this: #2655. The idea is that you still generate your ALB, the ALB rules, and the target group with Terraform (or whatever else).
I was able to resolve this using the steps below:
Once the above step is done, add the annotation below to your ingress.yaml file: alb.ingress.kubernetes.io/group.name: "existingalb". Save and apply this config using kubectl apply -f ingress.yaml. This should make the Ingress reuse the existing load balancer instead of provisioning a new ALB. All the magic happens via tags.
This didn't work for me. If you change the ingress name, the existing ALB gets deleted and a new ALB is provisioned. Is there currently no way to reuse an existing ALB or make sure you get the same domain? Are there any plans to support this?
@carlosrochap Did you examine the AWS load balancer controller logs to find more details about why this is happening?
@divvy19 or anyone else that knows: if the ingress is deleted, does the controller try to delete the ALB? We could remove the permissions that let the controller create and delete ALBs, but I would also be curious whether that floods the controller logs with nonstop exceptions while the controller tries to reconcile the deleted ingress object and delete the ALB.
This blog looks to be related: "TargetGroupBinding is a custom resource managed by the AWS Load Balancer Controller. It allows you to expose Kubernetes applications using existing load balancers. A TargetGroupBinding resource binds a Kubernetes Service with a load balancer target group. When you create a TargetGroupBinding resource, the controller automatically configures the target group to route traffic to a Service."
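For reference, a minimal TargetGroupBinding looks roughly like the sketch below (Service name and target group ARN are placeholders); it attaches an existing, externally created target group to a Kubernetes Service:

```yaml
apiVersion: elbv2.k8s.aws/v1beta1
kind: TargetGroupBinding
metadata:
  name: my-app-tgb                      # hypothetical name
spec:
  serviceRef:
    name: my-app                        # existing Kubernetes Service to register as targets
    port: 80
  # ARN of a target group created outside the cluster (e.g. via Terraform);
  # the controller keeps its registered targets in sync with the Service's endpoints.
  targetGroupARN: arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/my-app/0123456789abcdef
```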
I read in this comment #85 (comment) that host-based routing was released for AWS ALBs shortly after the ALB Ingress Controller was released.
It would be pretty cool to have an option to reuse an ALB for an Ingress via annotation -- I'd be interested in contributing towards this, but I'm not sure what's needed to make this feasible.