[kube] Allow to run agent on the node #3401

Merged 5 commits into master from xvello/kube_on_host on Jul 6, 2017

Conversation

xvello
Contributor

@xvello xvello commented Jun 29, 2017

What does this PR do?

  • Allows the user to set the bearer token path and the API server hostname and credentials manually, so the agent can run directly on the node.
  • If kubernetes.yaml is present and valid, enables k8s tagging for Docker and SD metrics.
  • If both client certificates are configured, skips the token lookup to avoid the spurious error.

Sister PR for configuration: DataDog/integrations-core#508
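For illustration, the options described above might look roughly like this in kubernetes.yaml (a hypothetical sketch; option names are taken from the commit messages below, while the exact names, defaults, and the kubelet port value are settled in the sister configuration PR):

```yaml
instances:
  - # kubelet read-only port (assumed value, adjust to your setup)
    port: 10255
    # path to the service account token, for bearer-token auth
    bearer_token_path: /var/run/secrets/kubernetes.io/serviceaccount/token
    # reach the apiserver directly when running on the node
    api_server_host: kubernetes.example.com
    api_server_port: 443
```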

Tested in a container (datadog/dev-dd-agent:xvello_kube_on_host image) and with the deb package on an Ubuntu host.

fixes #3221

xvello added 2 commits June 29, 2017 10:33
- added bearer_token_path for serviceaccount token auth
- added api_server_host and api_server_port options
@xvello xvello requested a review from hkaj June 29, 2017 12:53
```python
        k8_check_config = check_yaml(k8_config_file_path)
        return len(k8_check_config['instances']) > 0
    except Exception:
        return False
```
Member


Can we log.debug the exception at least? This kind of silent catch-all is difficult to debug.
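A sketch of what that could look like (hypothetical function and logger names; the config loader is injected as a parameter here so the sketch is self-contained, standing in for the agent's check_yaml helper):

```python
import logging

logging.basicConfig(level=logging.DEBUG)
log = logging.getLogger("kubeutil")

def is_k8s_check_usable(k8_config_file_path, load_config):
    """Return True if kubernetes.yaml defines at least one instance.

    load_config stands in for the agent's check_yaml helper (assumption),
    passed in so this sketch can run without the agent's codebase.
    """
    try:
        k8_check_config = load_config(k8_config_file_path)
        return len(k8_check_config['instances']) > 0
    except Exception as e:
        # log the failure instead of silently swallowing it
        log.debug("Could not load k8s check config %s: %s", k8_config_file_path, e)
        return False
```

An invalid or missing file then leaves a trace in the debug log instead of failing silently.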

@hkaj
Member

hkaj commented Jun 29, 2017

Hi @sophaskins, from the last time we talked I think you could be interested in this. Is there anything else you think we should make configurable?

@sophaskins
Contributor

Yes! I definitely am! I've been spending this week forward-porting our patches on top of 5.14.1 (from 5.10.1 - yikes!), so it's fresh in my mind. These are definitely things that we'd love to see!

It makes things a bit messier, but I'd like to be able to specify whether or not the apiserver uses TLS. We have our apiserver pods bind to localhost:8080 as plain HTTP using --insecure-bind-address / --insecure-port options to kube-apiserver (https://kubernetes.io/docs/admin/kube-apiserver/). Our current patch to kubeutil.py to support that looks like:

```diff
         self.tls_settings = self._init_tls_settings(instance)

         # apiserver
-        self.kubernetes_api_url = 'https://%s/api/v1' % (os.environ.get('KUBERNETES_SERVICE_HOST') or self.DEFAULT_MASTER_NAME)
+        if instance.get('kubernetes_api_url'):
+            # when running on master nodes _outside_ of a pod, it's useful
+            # to directly specify the non-tls "insecure address/port"
+            # note that if you use this option, you need to specify the full base
+            # url, ie http://kubernetes:1234/api/v1
+            self.kubernetes_api_url = instance.get('kubernetes_api_url')
+        else:
+            self.kubernetes_api_url = 'https://%s/api/v1' % (os.environ.get('KUBERNETES_SERVICE_HOST') or
+                                                             self.DEFAULT_MASTER_NAME)

         # kubelet
         try:
```
Since none of the blocks in _init_tls_settings end up matching, no auth information is passed along (which is correct in our case).

@xvello
Contributor Author

xvello commented Jun 29, 2017

Thanks for your input. I'll switch to a single api_server_url parameter that will allow setting the protocol too.
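Under that change, a plain-HTTP master could be reached with one option carrying the full base URL, roughly like this (hypothetical sketch; the final option name and semantics are settled in DataDog/integrations-core#508):

```yaml
instances:
  - # full base URL including the scheme, so the insecure port works too
    api_server_url: http://localhost:8080/api/v1
```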

@xvello
Contributor Author

xvello commented Jul 3, 2017

Hi @sophaskins
I pushed changes to both PRs. Do you think these changes are enough for your use case, or are there more things you would like to configure?

@sophaskins
Contributor

Yes! I think that is perfect - thanks <3

@xvello xvello merged commit 972b2c9 into master Jul 6, 2017
@xvello xvello deleted the xvello/kube_on_host branch July 6, 2017 13:51