
Create option to reuse an existing ALB instead of creating a new ALB per Ingress #298

Closed
julianvmodesto opened this issue Jan 10, 2018 · 71 comments
Labels
dependency/external, help wanted, kind/feature

Comments

@julianvmodesto

I read in this comment #85 (comment) that host-based routing was released for AWS ALBs shortly after the ALB Ingress Controller was released.

It would be pretty cool to have an option to reuse an ALB for an Ingress via annotation -- I'd be interested in contributing towards this, but I'm not sure what's needed to make this feasible.

@bigkraig bigkraig added help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. kind/feature Categorizes issue or PR as related to a new feature. labels Feb 5, 2018
@pperzyna

@bigkraig Any update?

@mwelch-ptc

Wait... I guess I missed this in reading the documentation. Are you saying that every Ingress created deploys its own ALB? So for our 60 or so ingresses we'd end up with 60 ALBs? What about different host names within the same ingress? Does that at least reuse the same ALB?

@patrickf55places

@mwelch-ptc That is correct. There is a 1-to-1 mapping of Ingress resources to ALBs, even if host names are the same.

@kurtdavis

Seems to be fairly costly. We have looked at other solutions due to this issue.

@bigkraig

bigkraig commented Jun 20, 2018

What are everyone's thoughts on how to prioritize the rules if a single ALB spans Ingress resources and potentially even namespaces? I can see how, in larger clusters, multiple teams might accidentally take the same path.

@ghost

ghost commented Jun 21, 2018

What are everyone's thoughts on how to prioritize the rules if a single ALB spans Ingress resources and potentially even namespaces? I can see how, in larger clusters, multiple teams might accidentally take the same path.

This is a general Kubernetes ingress issue, not specific to this ingress controller. I think the discussion of this should be had in a more general forum instead of an issue against this controller.

@whithajess

whithajess commented Jul 16, 2018

I'm tempted to say that this is not a general Kubernetes ingress issue.

The existing load balancers supported by Kubernetes are Layer 4, and they are paired with ingress controllers that handle Layer 7 (meaning they can use one load balancer and then deal with Layer 7 once traffic gets into the cluster).

ALB is Layer 7 and handles routing before traffic gets to Kubernetes, so we cannot assume Kubernetes is going to change for this use case.

As this becomes more standard, I think this could change. GCE suggests "If you are exposing an HTTP(S) service hosted on Kubernetes Engine, HTTP(S) load balancing is the recommended method for load balancing," and I would imagine that as EKS takes off it will suggest the same.

@spacez320

We can already do this in general by having a single Ingress resource, although it forces whatever deployment scheme you're using for Kubernetes to adjust to that. It's also worth pointing out that the Kubernetes Ingress documentation literally states:

An Ingress allows you to keep the number of loadbalancers down to a minimum.

I think it would be really nice to have a clean way to do this.

@bigkraig

bigkraig commented Aug 3, 2018

@spacez320 I read that as: you can have an ingress with multiple services behind it, so a single load balancer for many services as opposed to a load balancer per service.

There is still the issue that the IngressBackend type does not have a way to reference a service in another namespace. I think until the ingress resource spec is changed, there isn't a correct way of implementing something like this.

@bigkraig bigkraig closed this as completed Aug 3, 2018
@patrickf55places

@bigkraig I don't think this issue should be closed. The issue isn't about having a single Ingress resource that can span multiple namespaces. It is about having multiple Ingress resources (possible across different namespaces, but not necessarily) that all use the same AWS application load balancer.

@bigkraig bigkraig reopened this Aug 3, 2018
@bigkraig

bigkraig commented Aug 3, 2018

@patrickf55places got it. Within a namespace it's possible with the spec, but I am still unsure how we would organize the routes or resolve conflicts.

@spacez320

@bigkraig Well, I think it's both, and I think that's what @patrickf55places meant by saying "possible across different namespaces, but not necessarily". We should be able to define an Ingress anywhere and share an Amazon load balancer, I think.

I understand if there are limitations in the spec, though. Should someone go out and try to raise this issue with the wider community? Is that possibly already happening?

@natefox

natefox commented Aug 15, 2018

What about using something similar to how nginx ingress handles it?
https://github.com/nginxinc/kubernetes-ingress/tree/master/examples/mergeable-ingress-types

Multiple minions can be applied per master as long as they do not have conflicting paths. If a conflicting path is present then the path defined on the oldest minion will be used.
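For reference, the mergeable-ingress pattern linked above looks roughly like this. This is a sketch based on the linked nginx examples, not on this controller; the `nginx.org/mergeable-ingress-type` annotation values are from those examples, and all names, hosts, and services are illustrative:

```yaml
# "Master" ingress: owns the host and listener-level config, carries no paths.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cafe-ingress-master
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.org/mergeable-ingress-type: master
spec:
  rules:
    - host: cafe.example.com
---
# "Minion" ingress: contributes paths, merged into the master for the same host.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cafe-ingress-coffee-minion
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.org/mergeable-ingress-type: minion
spec:
  rules:
    - host: cafe.example.com
      http:
        paths:
          - path: /coffee
            backend:
              serviceName: coffee-svc
              servicePort: 80
```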

@joegoggins

I was glad to find this GitHub issue and also bummed that it seems like it will be a long time before this gets implemented. It smells like there is a lot of complexity associated with the change and potentially not the resources to dig into it. I'm assuming it will be many months, so our engineering team is going to switch our technical approach to a different load balancing ingress strategy with AWS costs that scale economically in line with our needs. If that assessment feels wrong, please let me know.

@jakubkulhan

I've created another ingress controller that combines multiple ingress resources into a new one => https://github.com/jakubkulhan/ingress-merge

Partial ingresses are annotated with kubernetes.io/ingress.class: merge; the merge ingress controller processes them and outputs a new ingress annotated with kubernetes.io/ingress.class: alb, and the ALB ingress controller then takes over and creates a single AWS load balancer.
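Based on that description, a partial ingress consumed by ingress-merge would look something like this (a sketch; the names, host, and service are illustrative assumptions, not taken from the project's docs):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app-a
  annotations:
    # Picked up by the merge controller, which emits one combined
    # ingress annotated with kubernetes.io/ingress.class: alb.
    kubernetes.io/ingress.class: merge
spec:
  rules:
    - host: app-a.example.com
      http:
        paths:
          - backend:
              serviceName: app-a
              servicePort: 80
```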

@marcosdiez

Hey, I might have solved your problem in this PR: #830 . Testing and feedback is welcome :)

@kainlite

I'm currently using ingress-merge, and while it works, I'm having issues with the health checks: the services I'm exposing do different things by default, and we don't have a standard health check URL for all microservices. Do you have a solution for this? I think the limitation comes from aws-alb-ingress-controller rather than ingress-merge, but if there is a way to have different health checks, that would be awesome. Thanks, everyone, for your effort.

@kainlite

@fygge on slack gave me the answer:

You can put the health check annotation on the service instead of on the ingress resource. Thereby having one health check per target group / service.

Tested and works ok.
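Concretely, that means moving the health-check annotations from the Ingress onto each Service, so each target group gets its own check (a sketch; the service name, port, and path are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: orders
  annotations:
    # Per-service health check: applied to the target group
    # the controller creates for this service.
    alb.ingress.kubernetes.io/healthcheck-path: /orders/health
    alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
spec:
  selector:
    app: orders
  ports:
    - port: 80
      targetPort: 8080
```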

@mlsmaycon

mlsmaycon commented Feb 17, 2019

What about using something similar to how nginx ingress handles it?
https://github.com/nginxinc/kubernetes-ingress/tree/master/examples/mergeable-ingress-types

Multiple minions can be applied per master as long as they do not have conflicting paths. If a conflicting path is present then the path defined on the oldest minion will be used.

ALB does that with listener rule priority, where a new rule gets a lower priority than existing rules (excluding the default rule). The problem arises if you set a priority number that conflicts with an existing rule.

Maybe a new kind of ingress controller is called for here: one that, from the Ingress object, controls only target groups and listener rules and attaches them to an ALB created in the controller configuration (or on the first object request). This is what this issue is asking of the current controller, but for larger organizations this could bring complexity with path or host-header rules, causing problems with overlapping Ingress objects.

@kirkdave

kirkdave commented Aug 5, 2020

It would be great to get this feature either merged into the latest code or to revitalise v1.2.

I would love to use this feature, but I also want to be able to use IAM Roles for Service Accounts in EKS, and having tested the tag v1.0.0-alpha.1, that support hasn't been merged in (it works great on v1.1.8).

@dmanchikalapudi

For the avoidance of doubt, this issue is actually fixed in a 1.2 alpha release, specifically docker.io/amazon/aws-alb-ingress-controller:v1.2.0-alpha.1, and I have a dozen ingresses sharing a single ALB. But that alpha release was made a year ago. It would be very nice to get some clarity on whether it's coming in any official form.

How did you get that to work? Can you share the ingress definitions for a couple of applications? Are you getting it to work by specifying the same ingress name but defining a different rule in each? Also, are you using a package/deployment manager like Helm 3? I believe it has validations that prevent deploying an existing resource (the request to create the ingress does not even get to k8s).

@vprus

vprus commented Aug 13, 2020

Here's a complete helm template used to define ingress for one particular application.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: {{ .Release.Name }}-ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internal
    alb.ingress.kubernetes.io/group.name: analytics
  labels:
    app: {{ .Release.Name }}
spec:
  rules:
    - host: {{ .Release.Name }}.somecompany.a
      http:
        paths:
          - path: /*
            backend:
              serviceName: {{ .Release.Name }}-jobmanager-external
              servicePort: 8081

We use Helm 3, but the exact same definition worked in Helm 2 too. The key part is the 'group.name' annotation above. Two Helm releases that use this template result in two ingress objects with different names, and then in two target groups used by a single ALB. Other applications define their ingresses in a similar way and also share the same ALB. The deployment of alb-ingress-controller basically uses default options, except for

image: docker.io/amazon/aws-alb-ingress-controller:v1.2.0-alpha.1

@dmanchikalapudi

Here's a complete helm template used to define ingress for one particular application.

Thanks! I will give it a try and confirm whether it's working for our use cases.

By the way, #914 states that this is not production ready and should not be used. Any idea when it will be available as an official release?

@ffjia

ffjia commented Aug 26, 2020

IAM Role for Service Accounts
@kirkdave did you make the ALB ingress IRSA work in EKS?

@kirkdave

@ffjia It works with IRSA if you either build the Docker image from the branch or, as I did, use the image that @M00nF1sh created in #914 - m00nf1sh/aws-alb-ingress-controller:v1.2.0-alpha.2

@rajeshwrn

I came across the same kind of requirement.

We have to deploy 20+ services to Kubernetes on an AWS Fargate profile. Since Fargate does not support NLB as of now, the only option is ALB. But each deployment created a new ALB, which needs 20+ public IPs and also costs more.

I achieved a solution with two ingress controllers: the ALB ingress controller and the nginx ingress controller.

Nginx is the target for the ALB on port 80, and the application services running in the cluster on different ports and in different namespaces communicate with nginx.

I have documented my solution; I think it will help with your requirement.

https://github.com/rajeshwrn/alb-nginx-controller
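The pattern described above boils down to a single ALB Ingress whose only backend is the nginx ingress controller's Service, with all per-app routing handled by ordinary nginx-class ingresses inside the cluster. A sketch, assuming the nginx controller is installed as a Service named ingress-nginx-controller; names and namespace are illustrative:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: alb-to-nginx
  namespace: ingress-nginx
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    # IP target mode so the ALB can reach nginx pods running on Fargate.
    alb.ingress.kubernetes.io/target-type: ip
spec:
  rules:
    - http:
        paths:
          # Everything goes to nginx, which does the real Layer 7 routing.
          - path: /*
            backend:
              serviceName: ingress-nginx-controller
              servicePort: 80
```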

@MXClyde

MXClyde commented Oct 6, 2020

Any timelines on when this functionality will be part of a stable release?

@astrived

@MXClyde we are doing the final phase of testing for the new aws-load-balancer-controller; the RC image is here: https://github.com/kubernetes-sigs/aws-alb-ingress-controller/releases/tag/v2.0.0-rc5. You can find details here: https://github.com/kubernetes-sigs/aws-alb-ingress-controller/tree/v2_ga. Stay tuned for the What's New, coming soon!

@astrived

We have published the v2.0.0 release at https://github.com/kubernetes-sigs/aws-load-balancer-controller/releases/tag/v2.0.0. The documents have been updated (https://docs.aws.amazon.com/eks/latest/userguide/alb-ingress.html and https://docs.aws.amazon.com/eks/latest/userguide/load-balancing.html), and the blog post is here: https://aws.amazon.com/blogs/containers/introducing-aws-load-balancer-controller/

@haalcala

Can you improve your documentation system to work the way MongoDB's or Elasticsearch's does, so that I can select which version of the library/system documentation I want to view? In other words, please don't overwrite the published URL with the latest version. I have to maintain some old versions, and it gets confusing trying to remember what I did before (with 1.1.4, for example) while looking at the version 2 docs.

@angapov

angapov commented May 17, 2022

It seems this is still not implemented. Any plans for supporting it in future?

@GoodMirek

I am quite sure this feature is already implemented; I used it recently.
Look at https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.4/guide/ingress/annotations/#ingressgroup
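For anyone landing here: with the v2 controller, the IngressGroup feature linked above lets independent Ingresses (even across namespaces) share one ALB through a common group.name annotation. A sketch using the networking.k8s.io/v1 API; all names and the host are illustrative:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-a
  namespace: team-a
  annotations:
    # Every Ingress carrying this group.name is merged into one ALB.
    alb.ingress.kubernetes.io/group.name: shared-alb
    alb.ingress.kubernetes.io/scheme: internal
    # Optional: controls rule ordering when paths could overlap.
    alb.ingress.kubernetes.io/group.order: "10"
spec:
  ingressClassName: alb
  rules:
    - host: app-a.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-a
                port:
                  number: 80
```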

@angapov

angapov commented May 17, 2022

Oops, sorry, my bad. I actually thought about reusing an existing ALB created outside of the Ingress controller (sharing the ALB between multiple EKS clusters). But never mind.

@Tzrlk

Tzrlk commented May 20, 2022

I've been thinking of trying to manually add the tags that the ingress controller uses to identify the ALBs it creates across restarts, so that the ALB can be created outside Kubernetes and simply controlled by the controller.
The benefit of this approach is that I wouldn't have to give the controller permission to create ALBs, only target groups, etc., and I could reduce the complexity of my k8s config significantly by configuring the obvious stuff up front (http -> https redirect, SSL cert, etc.).
It'd be great if the devs could confirm whether that approach is currently possible; otherwise, when I get back to that area of my current project, I'll give it a spin and report back.

@vprus

vprus commented May 24, 2022

@Tzrlk: just to clarify, you can configure these things with the ALB controller. The alb.ingress.kubernetes.io/certificate-arn annotation lets you set the SSL certificate, alb.ingress.kubernetes.io/ssl-policy sets the SSL policy, and to redirect HTTP you do something like

alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
alb.ingress.kubernetes.io/ssl-redirect: '443'

I agree that reusing an existing balancer might be a nice feature, but most of the configurability was added to the ALB controller.
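Put together, an Ingress carrying the annotations mentioned above would look roughly like this (a sketch; the certificate ARN, SSL policy, names, and backend are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1:111122223333:certificate/example-id
    alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-TLS-1-2-2017-01
    # Listen on both ports and redirect HTTP to HTTPS.
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
    alb.ingress.kubernetes.io/ssl-redirect: '443'
spec:
  ingressClassName: alb
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app
                port:
                  number: 80
```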

@Tzrlk

Tzrlk commented May 24, 2022

@vprus Indeed, I've implemented that method previously. Looking through more of the issues, @angapov's requirement is answered as part of #228 with the release of TargetGroupBindings. In fact, the method I suggested above to hack it into place appears to have worked for one of the last commenters there.

@CPWu

CPWu commented May 27, 2022

Maybe it's just me, but it didn't work as intended for me. I created an ingress resource and it instantiated an ALB; then I created a secondary ingress and specified the group name, but I did not see a record created in the hosted zone or in the rules of the ALB's listener.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-bye-ingress
  namespace: hello-world
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/load-balancer-name: hello-world-loadbalancer
    alb.ingress.kubernetes.io/target-type: instance
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
    alb.ingress.kubernetes.io/healthcheck-path: /
    alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
    alb.ingress.kubernetes.io/group.name: test
spec:
  rules:
  - host: helloworld.url.test.ai
    http:
      paths: 
      - backend:
          serviceName: hello-world-service
          servicePort: 80
  - host: byeworld.url.test.ai
    http:
      paths: 
      - backend:
          serviceName: bye-world-service
          servicePort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: secondary-ingress
  namespace: hello-world
  annotations:
    alb.ingress.kubernetes.io/group.name: test
spec:
  rules:
  - host: this.url.test.ai
    http:
      paths: 
      - backend:
          serviceName: this-world-service
          servicePort: 80

@thunder-spb

@CPWu What's in the alb ingress controller logs? And what's in the external-dns logs?

@marcosdiez

To whom it may concern, I created a PR that partially solves this: #2655

The idea is that you still generate your ALB, the ALB rules, and the target group with Terraform (or whatever else), but now it's possible for the AWS load balancer controller to locate the target group via its name (which happens to be unique).

@divvy19

divvy19 commented Aug 1, 2022

I was able to resolve this using the steps below:
Add the following tags to the already existing ALB:

  1. ingress.k8s.aws/resource --> LoadBalancer
  2. ingress.k8s.aws/stack --> exisitingalb
  3. elbv2.k8s.aws/cluster --> name of your cluster

Once the above step is done, add the annotation below to your ingress.yaml file:

alb.ingress.kubernetes.io/group.name: "exisitingalb"

Save and apply this config using kubectl apply -f ingress.yaml.

This should make the controller use the existing load balancer for the ingress resource. All the magic happens via tags.

@carlosrochap

carlosrochap commented Oct 4, 2022


This didn't work for me. If you change the ingress name, the existing ALB gets deleted and a new ALB is provisioned. Is there currently no way to reuse an existing ALB or to make sure you get the same domain?

Are there any plans to support this?

@GoodMirek

This didn't work for me. If you change the ingress name, the existing ALB gets deleted and a new ALB is provisioned.

@carlosrochap Did you examine the AWS load balancer controller logs to find more details about why it is happening?

@geastman3

@divvy19 (or anyone else that knows): if the ingress is deleted, does the controller try to delete the ALB? We can remove the controller's permissions to create and delete ALBs, but I would also be curious whether this floods the controller logs with nonstop exceptions while the controller tries to reconcile the deleted ingress object and delete the ALB.

@warnerm8

warnerm8 commented Aug 16, 2023

This blog looks to be related:

"TargetGroupBinding is a custom resource managed by the AWS Load Balancer Controller. It allows you to expose Kubernetes applications using existing load balancers. A TargetGroupBinding resource binds a Kubernetes Service with a load balancer target group. When you create a TargetGroupBinding resource, the controller automatically configures the target group to route traffic to a Service."
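A minimal TargetGroupBinding along those lines might look like this (a sketch; the ARN, names, and target type are placeholders, and the schema follows the controller's CRD rather than anything in this thread):

```yaml
apiVersion: elbv2.k8s.aws/v1beta1
kind: TargetGroupBinding
metadata:
  name: my-service-tgb
  namespace: default
spec:
  serviceRef:
    name: my-service   # route traffic from the target group to this Service
    port: 80
  # Target group created outside the cluster (e.g. by Terraform or the console).
  targetGroupARN: arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/my-tg/0123456789abcdef
  targetType: ip
```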
