Metrics Output #418
My employer now needs a built-in method of outputting metrics - per-limit usage and the current limit value, as well as overall runtime - to a time-series service. We currently use Datadog. I'm being allowed to implement this on work time.
We're currently doing this via a Python script that wraps awslimitchecker's Python API, similar to the suggestions I made in #152 (statsd/graphite) and #256 (prometheus), the latter of which resulted in merging docs/examples/prometheus.py.
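Roughly, such a wrapper script looks like the following. This is a minimal sketch against awslimitchecker's documented Python API; the emit_gauge() helper is a hypothetical placeholder for whatever client call the target metrics service uses.

```python
# Minimal sketch of a wrapper around awslimitchecker's Python API.
# emit_gauge() is a hypothetical stand-in for a real metrics client call.
import time

from awslimitchecker.checker import AwsLimitChecker


def emit_gauge(name, value, tags=None):
    """Placeholder: send one gauge value to your time-series service."""
    print(name, value, tags or [])


def main():
    start = time.time()
    checker = AwsLimitChecker()
    checker.find_usage()  # query AWS for current usage on all limits
    for service, limits in sorted(checker.get_limits().items()):
        for limit_name, limit in sorted(limits.items()):
            tags = ['service:%s' % service, 'limit:%s' % limit_name]
            if limit.get_limit() is not None:
                emit_gauge('awslimitchecker.limit', limit.get_limit(), tags)
            for usage in limit.get_current_usage():
                emit_gauge('awslimitchecker.usage', usage.get_value(), tags)
    emit_gauge('awslimitchecker.runtime_seconds', time.time() - start)


if __name__ == '__main__':
    main()
```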
My plan is to implement a base MetricSink class with a relatively simple interface (likely similar to what I described in this comment), documentation for implementing subclasses to support specific metrics stores, and the first subclass for Datadog.
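As a rough illustration of that shape (the class and method names here are illustrative, not necessarily what would be merged), the base class and a first Datadog subclass might look like:

```python
# Illustrative sketch only; names are hypothetical.
from abc import ABC, abstractmethod


class MetricSink(ABC):
    """Base class for sending awslimitchecker metrics to a store."""

    @abstractmethod
    def add_metric(self, name, value, tags=None):
        """Record one gauge value with optional tags."""
        raise NotImplementedError()

    @abstractmethod
    def flush(self):
        """Send all recorded metrics to the backing store."""
        raise NotImplementedError()


class DatadogSink(MetricSink):
    """Sends metrics via the 'datadog' package (an optional dependency)."""

    def __init__(self, api_key, prefix='awslimitchecker.'):
        import datadog  # imported lazily so the dependency stays optional
        datadog.initialize(api_key=api_key)
        self._api = datadog.api
        self._prefix = prefix
        self._metrics = []

    def add_metric(self, name, value, tags=None):
        self._metrics.append((self._prefix + name, value, tags or []))

    def flush(self):
        for name, value, tags in self._metrics:
            self._api.Metric.send(metric=name, points=value, tags=tags)
        self._metrics = []
```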
I think the only undecided bit so far is how to handle third-party dependencies - do we make them extras and just have other metrics subclasses merged into the project itself, or do we distribute the plugins as separate packages and use setuptools entrypoints for plugin discovery?
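For reference, the entrypoints route would mean each plugin package declares its sink class in its setup.py (the group name awslimitchecker.metric_sinks below is hypothetical) and the core project discovers installed sinks at runtime:

```python
# In the plugin package's setup.py; the entry-point group name
# 'awslimitchecker.metric_sinks' is hypothetical.
from setuptools import setup

setup(
    name='awslimitchecker-datadog',
    version='0.1.0',
    py_modules=['alc_datadog'],
    install_requires=['awslimitchecker', 'datadog'],
    entry_points={
        'awslimitchecker.metric_sinks': [
            'datadog = alc_datadog:DatadogSink',
        ],
    },
)
```

```python
# In the core project: discover installed sink plugins by entry point.
import pkg_resources


def load_sinks():
    sinks = {}
    for ep in pkg_resources.iter_entry_points('awslimitchecker.metric_sinks'):
        sinks[ep.name] = ep.load()  # the MetricSink subclass
    return sinks
```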
This was merged to master in #427 and released as 7.1.0. The release is now live on PyPI and the Docker image should be live on Docker Hub shortly.