
Metrics Output #418

Closed
jantman opened this issue Aug 28, 2019 · 1 comment


jantman (Owner) commented Aug 28, 2019

My employer now needs a built-in method of outputting metrics (per-limit usage and current limit, as well as overall runtime) to a time-series service. We currently use Datadog. I'm being allowed to implement this on work time.

We're currently doing this via a Python script that wraps awslimitchecker's Python API, similar to the suggestions I made in #152 (statsd/graphite) and #256 (prometheus), the latter of which resulted in merging docs/examples/prometheus.py.

My plan is to implement a base MetricSink class with a relatively simple interface (likely similar to what I described in this comment), documentation for implementing subclasses to support specific metrics stores, and the first subclass for Datadog.
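As a rough illustration of the plan above, here is a minimal sketch of what such a base class and a testable subclass might look like. The class and method names (`MetricSink`, `add_run_duration`, `add_limit`, `flush`) are hypothetical placeholders, not the interface that was actually merged:

```python
from abc import ABC, abstractmethod


class MetricSink(ABC):
    """Hypothetical base class for metric output plugins.

    Subclasses would implement each method to send data to a
    specific metrics store (e.g. Datadog, statsd, Prometheus).
    """

    @abstractmethod
    def add_run_duration(self, duration: float) -> None:
        """Record the overall awslimitchecker runtime, in seconds."""

    @abstractmethod
    def add_limit(
        self, service: str, limit_name: str, usage: float, limit: float
    ) -> None:
        """Record current usage and the current limit for one limit."""

    @abstractmethod
    def flush(self) -> None:
        """Send all collected metrics to the backing store."""


class InMemorySink(MetricSink):
    """Trivial sink that just collects metrics in memory; useful for tests."""

    def __init__(self):
        self.duration = None
        self.metrics = []

    def add_run_duration(self, duration: float) -> None:
        self.duration = duration

    def add_limit(self, service, limit_name, usage, limit) -> None:
        self.metrics.append((service, limit_name, usage, limit))

    def flush(self) -> None:
        # A real sink (e.g. a Datadog subclass) would submit the
        # buffered points to its API here.
        pass
```

A Datadog subclass would follow the same shape, buffering points in `add_limit`/`add_run_duration` and submitting them in `flush`.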

I think the only undecided bit so far is how to handle third-party dependencies: do we merge other metrics subclasses into the project itself and declare their dependencies as extras, or do we distribute the plugins as separate packages and use setuptools entry points for plugin discovery?
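For the second option, discovery via setuptools entry points might look like the sketch below. The entry-point group name `"awslimitchecker.metrics"` is an invented example, not something the project actually defines; plugin packages would declare their sink classes under that group in their packaging metadata, and the checker would enumerate them at runtime:

```python
import sys
from importlib.metadata import entry_points


def discover_metric_sinks(group: str = "awslimitchecker.metrics") -> dict:
    """Return a mapping of plugin name -> loaded sink class.

    Each installed plugin package would declare an entry point in the
    given group, e.g. in its setup.py/setup.cfg:

        entry_points={
            "awslimitchecker.metrics": [
                "datadog = mypackage.sinks:DatadogSink",
            ],
        }
    """
    if sys.version_info >= (3, 10):
        # Python 3.10+ supports filtering by group directly.
        eps = entry_points(group=group)
    else:
        # On 3.8/3.9, entry_points() returns a dict keyed by group.
        eps = entry_points().get(group, [])
    return {ep.name: ep.load() for ep in eps}
```

With this approach, adding a new metrics store requires no change to awslimitchecker itself; the trade-off versus extras is that each plugin must be versioned and released independently.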

jantman (Owner, Author) commented Sep 10, 2019

This was merged to master in #427 and released as 7.1.0. The release is now live on PyPI and the Docker image should be live on Docker Hub shortly.
