
enabled DNS Service discovery in HAProxy #154

Merged - 9 commits - Jul 19, 2018

Conversation

oktalz
Contributor

@oktalz oktalz commented May 11, 2018

This is just a preview, comments are welcome :)

Add option to use DNS Service discovery
docs are missing, but they will be added soon

@jcmoraisjr
Owner

Hi, thanks for this PR! A couple of tips (and questions) regarding the changes:

  • The current annotation strategy is boring and needs a refactor of the backend data, but I'll ask you to use it in order to have only one strategy. You can follow how pkg/common/ingress/annotations/balance works, it's pretty much the same for dnsresolver.
  • cfg.haproxyConfig.DNSResolver has the dns-resolver configmap option. Use it instead of configMap.Data[].
  • What about the following syntax on the configmap: resolvername1=ip:port,ip:port,...;resolvername2=ip:port,ip:port,.... The resolvername and a counter could be used to name the nameserver.
  • The doc should talk about using a headless service on ingress resources, otherwise DNS will resolve all the backends to the same service IP address, which means HAProxy won't control the load balancing. This should be a headless service because I couldn't find a way to query the endpoints of "normal" services; if that in fact exists, @rikatz should know.
  • Apparently UseResolver is being populated twice (dynconfig)
  • What about editing the server range instead of creating a server-template?
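The proposed single-line syntax could end up looking like this in the controller's configmap (key name and addresses are illustrative, not the final option):

```yaml
# sketch of the proposed configmap entry:
# resolvername1=ip:port,ip:port,...;resolvername2=ip:port,...
data:
  dns-resolvers: "kubernetes=10.96.0.10:53;external=8.8.8.8:53,8.8.4.4:53"
```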

And a couple of doubts about HAProxy:

  • If a DNS query returns several IPs (the endpoints), are all of them cached and distributed across the server entries of a backend?
  • Is there a way to query HAProxy (via the socket) about which IP addresses are being used by the resolvers or in the backend/server(s)?

@oktalz
Contributor Author

oktalz commented May 15, 2018

Hi, thanks

  • sure, I will check and change accordingly
  • ok
  • the resolvername1=ip:port,ip:port,...; syntax is ok; the counter is not necessary since you still must specify which resolver you want to use in the service
  • yes, if the service is not headless we do not get all the IPs/ports of the backends, so the use case is limited to headless services
  • thanks, missed that
  • server-template is ideal for DNS service discovery.

HAProxy:

  • when DNS returns several records (A, AAAA, or SRV), they are all assigned to the server entries of a backend.
    The main advantage of using DNS is that you don't need to change the configuration file, or even change anything through the Runtime API (via the socket)
  • in a backend, show servers state returns the same response whether you use server or server-template
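The server-template behavior described above might render roughly like this in the generated haproxy.cfg (a sketch only; resolver name, backend name, slot count, and DNS name are illustrative, not the template's actual output):

```haproxy
# one global resolvers section per configured resolver
resolvers kubernetes
    nameserver dns0 10.96.0.10:53
    timeout retry 1s
    hold valid 1s

backend web-backend
    balance roundrobin
    # server-template creates 10 server slots, filled from DNS answers;
    # with a headless service each pod IP becomes one server entry
    server-template srv 10 web.default.svc.cluster.local:8080 resolvers kubernetes check
```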

@rikatz
Contributor

rikatz commented May 21, 2018

Hi, sorry about taking so long.

About the usage of endpoints instead of services, take a look here.

This should be enough to explain why using Service IP is a bad idea, as you will not be able to balance between each POD/endpoint.

If the idea is to use DNS as the backend anyway (as this could allow HAProxy Ingress to be used as a proxy for non-Kubernetes workloads), you should always create your Services as headless instead of using ClusterIPs (when dealing with in-cluster workloads). This makes kube-dns/CoreDNS/whatever DNS return the IP of each pod, and doesn't rely on kube-proxy (as shown in the link above).

@oktalz
Contributor Author

oktalz commented May 22, 2018

@jcmoraisjr

  • updated annotation strategy
  • updated resolvers syntax

Before writing docs (and cleaning up to be able to merge), to clarify: in order to use DNS service discovery in the HAProxy ingress controller, two things must be configured:

  • a list of DNS resolvers - annotation ingress.kubernetes.io/dns-resolver
    (that is a separate section in the haproxy config, not related to any backend)
  • for the service (not the ingress), which DNS resolver should be used - annotation ingress.kubernetes.io/haproxy-use-resolver
    this is important because we cannot enable it for all services; a service must be headless: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
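An illustrative setup using the annotation names as they stand at this point in the thread (they are renamed later in the review; resolver name and ports are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
  annotations:
    # which resolver this service's backend should use
    ingress.kubernetes.io/haproxy-use-resolver: kubernetes
spec:
  clusterIP: None   # headless: DNS returns every pod IP
  selector:
    app: web
  ports:
  - port: 8080
```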

@rikatz
yes, usage is limited to headless services; that is why this is not enabled by default.

@jcmoraisjr
Owner

@oktalz

  • Do you mean a configmap dns-resolvers?
  • Perhaps you'd prefer a multi-line configuration; you can have a look here and at its doc
  • No need to add a name related to the technology which implements the resolver, ingress.kubernetes.io/use-resolver is fine.

v0.6 is close, which means you will need to rebase the code just one more time. I'll wait for your changes before merging v0.7 PRs. Sorry about the mess of the recent merges.

@oktalz
Contributor Author

oktalz commented May 23, 2018

@jcmoraisjr

  • yes dns-resolvers
  • I'll discuss it with @aiharos, but multiline seems more readable
  • since HAProxy is the one that does the DNS resolving, it is clearer like that; or maybe we can have
    haproxy.kubernetes.io/use-resolver, but if you do not like it, it can be
    ingress.kubernetes.io/use-resolver

I'll rebase and add docs and an example, so you can inspect the changes more easily.

@oktalz
Contributor Author

oktalz commented May 23, 2018

@jcmoraisjr rebased, cleaned and added docs :)

the only thing left for discussion is the usage of haproxy.kubernetes.io/use-resolver
I believe it is better to have haproxy.... instead of ingress... to make more of a distinction between the two behaviours, especially since usage requires headless services, but if you prefer ingress.kubernetes.io/use-resolver I can easily change that.

@jcmoraisjr
Owner

Thanks @oktalz ! Some major suggestions:

  • Regarding the annotation name - the controller already has some HAProxy-related configmap options and annotations that don't use a special convention; DNS resolvers are a feature also implemented by other proxies; there is a planned change to optionally prefix all annotations with haproxy; and the headless svc is a k8s/ingress requirement which is not related to haproxy itself. Because of that I'd suggest you use the same prefix already being used.

  • Use Backend type instead of Service.Annotations[] on dynconfig. You can create a Config type on dnsresolver/main and use it on Backend instead of a string type. Have a look at Backend.BlueGreen, ConfigurationSnippet or Connection fields.

  • Try to move all the ControllerConfig updating to controller/config. Some tips:

    • Move the "check if UseResolver references a valid dnsresolver" logic to the annotation parser (ann/dnsresolver/main), and if invalid, log a warning and use an empty string
    • Move DNSResolver.Merge from type to controller/config
    • Move Merge check and call from dynconfig to controller/config createDNSResolvers()
    • Use only an if backendmap[].DNSResolver.UseResolver != "" then update TotalSlots and continue on fillBackendServerSlots
  • On dynconfig, use d.updBackendsMap[backendName].SlotsIncrement instead of the default value from configmap

  • "cluster.local" is a convention which could be changed. Use a configmap option which defaults to "cluster.local".

  • inter on server-template should use the configmap {{ $cfg.BackendCheckInterval }}.

  • A minor fix - dnsresolver/main has a "balance-algorithm" comment.

@jcmoraisjr jcmoraisjr added this to the v0.7 milestone May 28, 2018
@oktalz
Contributor Author

oktalz commented Jun 1, 2018

@jcmoraisjr
I have a question.
Do all annotations work only as kind: Ingress annotations (plus the configmap)?
The idea behind use-resolver is to use it as a kind: Service annotation.

I tried using the ConfigurationSnippet annotation as a service annotation, but it does not work for me.

You can look at the example in 3.web-svc.yml

Maybe the problem is somewhere on my side, so I'm asking to confirm.

@jcmoraisjr
Owner

Hi, yes, all annotations were designed to be used as ingress annotations. Note that the same service can be used by several different ingresses; using annotations on the ingress gives you more flexibility - you can have different backend configurations for the same service.

@oktalz
Contributor Author

oktalz commented Jun 8, 2018

@jcmoraisjr
All requested changes have been made.

  • moved the logic to pkg/common/ingress/annotations/dnsresolver
  • moved UseResolver to be an ingress annotation
  • added an annotation for cluster.local so that it can be changed

and all the minor changes you requested.

@jcmoraisjr
Owner

Hi, thanks again for submitting this nice feature. Below are some minor suggestions I've found:

  • main.go has a comment from another component on the Config struct
  • Change [0] to [1] in the readme - this PR will be deployed as a snapshot
  • The resolvers section in haproxy.tmpl hardcodes an s suffix on timeouts, while the other timeouts (connect, queue, client, server, etc) let the user configure the suffix they want. What about following this same strategy?
  • cluster-dns-domain is a k8s global configuration so it fits better as a configmap option
  • In the example page, haproxy-config-map.yml is missing a 1. prefix to match the filename
  • link **headless** to https://kubernetes.io/docs/concepts/services-networking/service/#headless-services in the example page

And also some major issues:

  • If the annotation io.k8s...dns-resolvers is not used, the parser will skip the io.k8s...use-resolver configuration. If io.k8s...dns-resolvers is used, the dns-resolvers declared in the configmap are ignored
  • HAProxy 1.8.9 eats 100% of one CPU when a resolver is used; I hope you can reproduce this
  • An index out of range is raised when the number of replicas is increased while dynamic scaling is in use. I couldn't find the time to investigate this further and propose a redesign

@oktalz
Contributor Author

oktalz commented Jun 11, 2018

hi @jcmoraisjr, thanks for the input

regarding the major issues:

  • usage of use-resolver: yes, that was by design; I will change it. Also, for dns-resolvers from the configmap, I will add them to the backend if used.
  • will try to reproduce this.
  • will try to reproduce this.

@oktalz
Contributor Author

oktalz commented Jun 14, 2018

hi @jcmoraisjr
so the new configmap options have been added (with defaults):
dns-timeout-retry: 1s
dns-hold-obsolete: 0s
dns-hold-valid: 1s
dns-accepted-payload-size: "8192"
dns-cluster-domain: cluster.local

dns-cluster-domain was renamed from cluster-dns-domain for better consistency.
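Put together, the configmap might look like this (the resolver address is illustrative; the dns-* values shown are the stated defaults):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: haproxy-ingress
data:
  dns-resolvers: "kubernetes=10.96.0.10:53"
  dns-timeout-retry: "1s"
  dns-hold-obsolete: "0s"
  dns-hold-valid: "1s"
  dns-accepted-payload-size: "8192"
  dns-cluster-domain: "cluster.local"
```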

regarding the issues you detected:

  • fixed, you can now use the configmap and annotations in combination
  • yes, with HAProxy 1.8.9 high CPU usage can happen when a resolver is being used. This is not because of the ingress controller; the issue has already been fixed for future versions. HAProxy 1.8.8 works fine.
  • fixed, there was an issue with how use-resolvers were checked.

Owner

@jcmoraisjr jcmoraisjr left a comment

Hi, a few more comments below.

const (
DNSResolversAnn = "ingress.kubernetes.io/dns-resolvers"
UseResolverAnn = "ingress.kubernetes.io/use-resolver"
)
Owner

const is a local declaration, start the identifiers with lowercase instead

Contributor Author

sure, np

}
}
cfg.DNSResolvers = resolvers
}
Owner

  • Resolvers are global lists, and annotations have precedence over the configmap, so why not simply iterate the backends and override the default resolvers without a single if?
  • Warn about "resolver name %v not found"
  • What if two annotations declare the same resolver? This needs at least a warning - or resolvers from annotations could have a prefix so collisions would never happen

Contributor Author

  • With prefixes we would get multiple copies of the same data in the haproxy config file.
  • I will override the configmap setup with the one from the annotation, without checking whether it exists or not
  • the 'not found' warning can and will be added; I will still have to check whether the resolver actually exists or not
  • I'm not sure what you mean by two annotations declaring the same resolver. You can only define it in the configmap or with annotations.

Owner

I can use annotations to declare new resolvers on more than one ingress resource.

Contributor Author

Can I distinguish between different ingress annotations in code?
I'm not really sure I can do that in pkg/controller/config.go (but I will search a bit more)

Then maybe the best option would be to only allow the configmap setting for dnsresolvers (since even in the haproxy configuration this is global). That way we do not have any problem with possible misconfiguration

Owner

I don't know if I understood. Ingresses have a name and a namespace, and that pair is unique. If the name and namespace match, you have the same ingress and you have the same resolver annotation.

What I usually do is: immutable configurations or behavior are command-line options, global or infra/ops configurations are configmap options, and configurations useful to devs are annotations. DNS resolvers are global and sound to me like an infra/ops responsibility, so perhaps they fit better in the configmap only.

reloadRequired = true
}
continue
}
Owner

  • Say a few words about why UseResolver != "" is tested
  • This continue above should only happen if the only difference between old and cur is the endpoints, so move the whole if statement after the if !reloadRequired {} (below)

Contributor Author

in previous commits, where use_resolver was used differently, that was required;
now that it is only used for logging some info, I can clean it up.

dns-timeout-retry: 1s
dns-hold-obsolete: 0s
dns-hold-valid: 1s
dns-accepted-payload-size: "8192"
dns-cluster-domain: cluster.local
@oktalz
Contributor Author

oktalz commented Jun 21, 2018

the main difference between this and the last code is the removal of the useResolver check in dynamicUpdateBackends, plus minor changes for the comments you made

@oktalz
Contributor Author

oktalz commented Jun 26, 2018

After testing some options:

  • I've removed the option to set up dns-resolvers through annotations.
    Now you can only configure them via the configmap. This is better than
    doing all the checks and handling the collisions/confusion that annotations can bring in this particular case.
    Plus, the code is much cleaner without that.

  • By default, if you configure a use-resolver that does not exist, the controller falls back to the default behavior with a log message

@jcmoraisjr
Owner

Now that dns-resolvers is a global configmap option, what about:

  • remove annotations/dnsresolvers.Config.DNSResolvers
  • move ParseDNSResolvers() as a private func on controller/config

@oktalz
Contributor Author

oktalz commented Jun 28, 2018

  • I will remove it; I had left it in case backends should need that info at some point. But yes, currently it is not necessary to have it

  • It makes more sense to leave it where it is and not clutter config, but I have no problem with moving it there

@oktalz oktalz force-pushed the 1.8-dev-dns branch 2 times, most recently from d3159b6 to 265e5d2 on July 2, 2018 07:47
@oktalz
Contributor Author

oktalz commented Jul 2, 2018

hi @jcmoraisjr

with HAProxy 1.8.12 I can confirm that high CPU usage is not a problem anymore.

@jcmoraisjr
Owner

Some doc updates and a minor change to the template:

  • example/README - add a hint to "change to the IP address of your cluster DNS server" before applying both configmap resources
  • Add the third important thing - if using kube-dns, the cache TTL defaults to 30s. Ask the user to set --max-ttl and --max-cache-ttl on the dnsmasq container to a proper value, otherwise the HAProxy backend could take up to 30s to update
  • example/README - cluster-dns-domain is a configmap option and not an annotation
  • main README - add at least the "default 30 seconds behavior" hint about the kube-dns cache TTL in the dns-resolvers configmap help
  • template - a minor update - change all the $ing.Cfg to $cfg just to follow the convention
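The kube-dns hint above might look like this in the dnsmasq container of the kube-dns deployment (TTL values illustrative; lower TTLs mean faster backend updates at the cost of more DNS traffic):

```yaml
- name: dnsmasq
  args:
  - --cache-size=1000
  - --max-ttl=5          # cap the TTL reported to clients
  - --max-cache-ttl=5    # cap how long dnsmasq caches entries
```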

Since DNS resolvers are global in HAProxy, we only need to define them in
the configmap. There is no need to allow extra setups that could lead
to confusing configuration. Backends still have all the data related to
them
@oktalz
Contributor Author

oktalz commented Jul 19, 2018

hi @jcmoraisjr,

I've added hints to the readme files.
In the main readme I put a link to example/README so we do not have the same hints/comments in two places.

Also I've updated the template to better follow the convention.

@jcmoraisjr
Owner

Hi @oktalz, perhaps you still need to push the changes?

@oktalz
Contributor Author

oktalz commented Jul 19, 2018

Hi @jcmoraisjr

The changes were minor, so I incorporated (amended) them into the last commit a9aec84 and did a rebase/force push.

@jcmoraisjr
Owner

I would have waited a very long time for a GitHub or git client notification ... Thanks for the PR and the patience! Merging in a few minutes.

@jcmoraisjr jcmoraisjr merged commit 20c3eff into jcmoraisjr:master Jul 19, 2018
4 participants