Feature request: shared ALB across multiple Ingress objects #688
How about adding an annotation like
If the group works in all other respects as I sketched, then I think it's no different from a security perspective, but it does have the downside of requiring explicit coordination between different teams. I think conflicts have to be dealt with by the controller - refuse to update an ALB where rules currently conflict, and log errors to the relevant ingresses - not hard to do. Using the group idea could make it reasonable to not just use different hostnames but also to permit routes within one or more common hostnames, because namespaces are opting in to this - but that seems harder to reason about than just 'unique hostnames apply' :) - OTOH, some folk may need that! I think I see the group concept as largely a separate feature building on the sharing capability; I have no objections to it existing, or even being mandatory to enable sharing - for our use case that would be fine. W.r.t. performance, I don't understand the problem:
If the problem with many ingresses today is due to the ALB interactions and AWS rate limits, then that would be made better simply by the act of grouping; if the problem is k8s interactions, I'd need more detail to model and help - but happy to do so...
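The group annotation being discussed might be sketched as below. Note that the annotation name `alb.ingress.kubernetes.io/group` and all resource names here are hypothetical - nothing had shipped at this point in the thread (the eventual v2 implementation uses `alb.ingress.kubernetes.io/group.name`):

```yaml
# Two Ingresses in different namespaces opting into the same ALB via a
# shared group annotation. The annotation name below is illustrative only.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: team-a
  namespace: team-a
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/group: shared-alb   # hypothetical name
spec:
  rules:
    - host: a.example.com
      http:
        paths:
          - backend:
              serviceName: svc-a
              servicePort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: team-b
  namespace: team-b
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/group: shared-alb   # hypothetical name
spec:
  rules:
    - host: b.example.com
      http:
        paths:
          - backend:
              serviceName: svc-b
              servicePort: 80
```

Because the hostnames are disjoint, the controller could in principle merge both rule sets onto one ALB and report a conflict event only when two group members claim the same host/path.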
I like the group idea, and with regard to conflict resolution, we could log errors on the ingress that created the conflict, leveraging the last-modified timestamps. It would look fairly similar in a debugging situation to what we have now. I'm also a bit of a fan of having the group be an opt-in feature via an annotation, either on the ingress or at the namespace level. From a performance perspective, it's worth mentioning that ALBs do not scale instantaneously, and when you start to share an ALB across ingresses you are susceptible to noisy neighbors. The scaling rate is also something along the lines of 2x every N minutes, if I remember correctly. That is slow on its own, but it will be increasingly slow if multiple services on the ALB receive a burst of traffic simultaneously, which is very common in SOA. All the more reason why, for Ticketmaster's use case, it's an opt-in cost optimization from my perspective.
:D If we have such groups, we can add an optional controller option like defaultIngressGroup, which applies to all ingresses without an explicit group annotation - that fulfills @rbtcollins' use case 😄. As for the performance issue, it mainly comes from the AWS side; the k8s side already relies on a cache.
Cool; so to summarise:
Yeah, I'd like to implement this once I finish the refactor at hand.
I think conflict handling on things that are either a) already grouped or b) requested to be grouped should be to:
Example of the special cases I'm thinking of: while we can't canonically say which of A or B should get host X:/, Y:/ is unconflicted, and we could in principle merge that into the existing ALB config without altering any existing rules for X. Then only the A and B ingresses need error events registered.
Is there any activity around the ALB sharing work? Maybe some help is needed?
@dshmatov
Hi guys, any idea when the sharing will be implemented?
@sppwf Sorry for the delay on this feature. It needs far more work than I originally expected (also more features to support, like using it to configure auth/rules on a specific ingress path, and I took a long vacation in Dec =.=).
(BTW, is this a hard blocker for your use case? i.e. the only use case that cannot be mitigated with alternatives is having services in multiple namespaces that want a single ALB. I can try to re-prioritize if it's indeed a hard blocker.)
Hi, Well, right now I am using the Nginx controller with a classical ELB with SSL on it. We are still in the development stage; however, I would like to move to ALB once we get to prod. We keep our services in different namespaces, but I still want to use the same ALB, from a cost perspective and an operational one as well. Two months is fine to wait; hope that will be the start of March. Thanks
Hey, we want to use a single ALB for services in multiple namespaces. They should serve different HTTP paths for the same hostname. Our RBAC setup requires different namespaces for different services. Thanks for your effort! Can we help you somehow? Maybe by giving early feedback? Regards,
Hello, our use case is a microservice API that has multiple components developed and deployed by different teams into different namespaces (with proper RBAC), but we want to expose it through one endpoint (ALB). Having an ALB per namespace is a huge blocker for us (mainly because of very strict limits on subnet sizes in our air-gapped env). Best,
Sure, I'll give an update when there is a draft version available, and testing will be super helpful 👍 @xdrus I'm aware this is a major pain point for different namespaces. Will address this soon.
Hey, I might have solved your problem in this PR: #830. Testing and feedback is welcome :)
Hi @marcosdiez. How do I test this single-ALB feature? The PR is still not merged. Are there any docs or updated config I need to use? Could you guide me?
@linuxbsdfreak I updated the instructions on the PR itself: #830
@marcosdiez I am installing the package via Helm charts as described here: https://hub.helm.sh/charts/incubator/aws-alb-ingress-controller, setting clusterName, awsRegion, awsVpcID, and podAnnotations.iam.amazonaws.com/role: k8s-alb-controller. What do I need to do extra?
Please don't use Helm this time. Please install it using
@marcosdiez Could you post the final yaml that I need to use for a single ALB with multiple namespaces?
@marcosdiez This is the yaml:
@linuxbsdfreak https://gist.github.com/marcosdiez/d6943375c6d8b1dc607529e42d01f44e Don't forget to change some fields like the AWS KEY and SECRET (in case you are not using IAM) and some parameters like the cluster name.
@marcosdiez Thanks for the info. How does the service config look? What annotations do I need to provide for the service to use a single ALB? The ALB name in my case is k8s-alb in the controller config.
You don't have to change your annotations. Just try. It should work!
It's been 2 years since then. Is there anyone still working on this?
This would be very useful to us.
I have two services, each with its own ALB, with different hostnames and ACM certificates. Now I'd like to join the two ALBs into one.
Bump. This would be a really useful feature to have.
+1
Issues go stale after 90d of inactivity. If this issue is safe to close now, please do so. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
@M00nF1sh any update on this request?
/remove-lifecycle stale
no update? :(
This is a desperately needed feature for my organization, as we already have hundreds of load balancers created by various departments for lots of legacy apps. My department is transitioning one set of environments over to EKS, which would further explode that number if we can't share ALBs created by the controller.
I came across the same kind of requirement. I have to deploy 20+ services in Kubernetes on an AWS Fargate profile. Since Fargate does not support NLB as of now, the only option is ALB. But each deployment created a new ALB. This needs 20+ public IPs and also adds more cost per ALB. I achieved a solution with two ingress controllers: the ALB ingress controller and the nginx ingress controller. Nginx is the target for the ALB on port 80, and the application services running in the cluster on different ports and namespaces communicate with nginx. I have documented my solution. I think it will help with your requirement.
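The workaround described above can be sketched roughly as follows: a single ALB Ingress whose only backend is the nginx ingress controller's Service, which then fans traffic out to services in any namespace. All names here are illustrative, not from the commenter's actual setup:

```yaml
# One ALB fronting the whole cluster; nginx handles per-namespace routing.
# Service and Ingress names below are hypothetical.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: alb-to-nginx
  namespace: ingress-nginx
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip   # ip targets are required on Fargate
spec:
  rules:
    - http:
        paths:
          - path: /*
            backend:
              serviceName: ingress-nginx-controller
              servicePort: 80
```

The trade-off, as noted later in the thread, is a stacked load balancer: two hops, two places to debug timeouts and headers.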
Alternative solution: https://github.com/zalando-incubator/kube-ingress-aws-controller
@rajeshwrn Yes, I read it before. There is no contradiction between the solutions. We also use ALB + an ingress controller, but a few days ago I found the Zalando ingress controller, and it handles cases with multiple domains better, from my humble point of view.
@excavador Sure, I will try this out. But do you have any idea whether it will work on EKS Fargate?
Hi guys, just sharing my 2 cents. Complete solutions might need AWS ACM (HTTPS) and AWS Route 53 (DNS) to work together; most of you miss this part and start a new way of doing things. Stacked load balancers can be hard to troubleshoot, so think twice first - for example, timeouts and headers; you will know what I mean when you start using them. ALB isn't always the best because of the pre-warm behavior, meaning it's hard to be zero-downtime, so you may consider NLB. Another reason is that ALB is HTTP-only. But I still like it because of target group binding. To have 2 ACM certificates, you just need to put a comma between the ARNs. I'd also like to mention the AWS-provided way of doing the same thing with NLB. Cheers, and wish you have fun.
@teochenglim ALB + ACM works perfectly fine, as does the SNI (multi-domain) solution. If the alb-ingress-controller were able to group separate small ingresses onto a single ALB, I personally would be completely happy.
https://github.com/zalando-incubator/kube-ingress-aws-controller looks very, very nice. Would you consider providing a Helm chart for it?
@teochenglim I am not a developer, just a random user from a random company :)
We are doing the final phase of testing for the new aws-load-balancer-controller, and the RC image is here: https://github.com/kubernetes-sigs/aws-alb-ingress-controller/releases/tag/v2.0.0-rc5. You can find details here: https://github.com/kubernetes-sigs/aws-alb-ingress-controller/tree/v2_ga. Stay tuned for What's New, coming soon!
Looked into the documentation for the new version: https://github.com/kubernetes-sigs/aws-alb-ingress-controller/blob/v2_ga/docs/guide/ingress/annotations.md WOW. Just WOW. Thank you for your work!
We have published the v2.0.0 release at https://github.com/kubernetes-sigs/aws-load-balancer-controller/releases/tag/v2.0.0. The documents have been updated (https://docs.aws.amazon.com/eks/latest/userguide/alb-ingress.html and https://docs.aws.amazon.com/eks/latest/userguide/load-balancing.html) and the blog post is here: https://aws.amazon.com/blogs/containers/introducing-aws-load-balancer-controller/
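In v2.0.0, the sharing requested in this issue is the IngressGroup feature: ingresses across namespaces that declare the same `alb.ingress.kubernetes.io/group.name` are reconciled onto a single ALB, with `group.order` controlling rule precedence. A minimal example (resource names are illustrative):

```yaml
# One member of an IngressGroup; any other Ingress, in any namespace,
# using group.name "shared" lands on the same ALB.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: api
  namespace: team-api
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/group.name: shared
    alb.ingress.kubernetes.io/group.order: "10"   # lower values are evaluated first
spec:
  rules:
    - host: api.example.com
      http:
        paths:
          - backend:
              serviceName: api-svc
              servicePort: 80
```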
Closing this as v2.0.0 is released.
I want to highlight that this is not 'cross-namespace ingress', which is contentious in the core ( kubernetes/kubernetes#17088 ), but rather the ability to use a single ALB, with vhosts, to satisfy multiple independent ingress objects, which may safely be spread over multiple namespaces.
The key difference is that rather than having one ingress which is able to send traffic in ways unexpected to services, each team or service or whatever defines an ingress, and the vhost(s) they want, exactly as normal - it's just an optimisation for the controller to reuse an ALB rather than making new ones.
Why? Two big reasons.
Security group limitations. With many ELBs (we were exceeding 120-ish at one point), the number of security groups becomes a limitation, but actually they are all very boring, almost identical. Having a fixed SG works until someone needs a different port, and then the whole self-service proposition breaks and an admin has to fix things. Ugh.
For test cluster deployment we've found DNS propagation issues (ELB provisioned, but NXDOMAIN on a lookup to it if you are too fast - and then short negative caching, and argggh) and AWS ELB provisioning times (not all the time, but often enough to find assumptions in code rather too often to be good) to be ongoing stability and performance problems, and we're expecting that to translate to ALBs too. If someone from AWS can categorically state that that's not the case, then yippee; otherwise ... :)
The basic top-level sketch is just to accept additional ingress documents that use different virtual hosts and have the same ALB configuration, and then bundle them onto the same ALB.
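Concretely, the sketch means two ingress documents like the following - identical ALB-level annotations, disjoint hostnames - would be served by one reused ALB instead of two. All names are hypothetical, and no new annotation is assumed; the controller would simply match on equivalent configuration:

```yaml
# Two independent Ingresses with the same ALB configuration but different
# vhosts; under the proposal the controller bundles them onto one ALB.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: frontend
  namespace: frontend
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
spec:
  rules:
    - host: www.example.com
      http:
        paths:
          - backend:
              serviceName: frontend
              servicePort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: api
  namespace: api
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
spec:
  rules:
    - host: api.example.com
      http:
        paths:
          - backend:
              serviceName: api
              servicePort: 80
```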
Other ingress controllers like nginx have done exactly this (https://github.com/ProProgrammer/cross-namespace-nginx-ingress-kubernetes).
Seems to me that it could be made a controller cmdline option to enable, making it opt-in, and thus mitigating any reliability / stability issues by not exposing every user to them.
Possible issues:
What of it: it's broken, but it's broken whether or not the ALB is shared.
Ignore the change until all ingresses that are sharing the same ALB agree on the new setting (whatever it is), then action it (and in the meantime complain via events).