docs: DLQ metrics #829

Merged 3 commits on Jan 26, 2023

docs/metrics.md: 74 additions, 38 deletions

# Metrics

## Overview

Conduit comes with a number of predefined metrics, which are exposed through an
HTTP API and ready to be scraped by Prometheus. It's also possible to define new
metrics using the existing metric types, or to create a completely new metric
type.

## Accessing metrics

Metrics are exposed at `/metrics`. For example, if you're running Conduit
locally, you can fetch the metrics by running `curl localhost:8080/metrics`.
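
If you'd rather inspect the endpoint programmatically, here is a minimal Go
sketch (assuming Conduit is reachable at `localhost:8080`, as in the `curl`
example above) that fetches the exposition text and prints only the
`conduit_`-prefixed metrics:

```go
package main

import (
	"bufio"
	"fmt"
	"net/http"
	"strings"
)

func main() {
	// Fetch the Prometheus exposition text from Conduit's metrics endpoint.
	resp, err := http.Get("http://localhost:8080/metrics")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// Print only the Conduit-specific metrics, skipping the Go runtime,
	// gRPC and HTTP metrics that are exposed alongside them.
	scanner := bufio.NewScanner(resp.Body)
	for scanner.Scan() {
		if line := scanner.Text(); strings.HasPrefix(line, "conduit_") {
			fmt.Println(line)
		}
	}
	if err := scanner.Err(); err != nil {
		panic(err)
	}
}
```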

## Available metrics

* **Conduit metrics**: We currently have a number of high-level pipeline,
processor and connector metrics, all of which are defined
in [measure.go](https://github.com/ConduitIO/conduit/blob/main/pkg/foundation/metrics/measure/measure.go).
Those are:

| Metric name                                     | Type      | Description                                                                                                      |
|------------------------------------------------|-----------|----------------------------------------------------------------------------------------------------------------|
| `conduit_pipelines` | Gauge | Number of pipelines by status. |
| `conduit_connectors` | Gauge | Number of connectors by type (source, destination). |
| `conduit_processors` | Gauge | Number of processors by name and type. |
| `conduit_connector_bytes` | Histogram | Number of bytes* a connector processed by pipeline name, plugin and type (source, destination). |
| `conduit_dlq_bytes` | Histogram | Number of bytes* a DLQ connector processed per pipeline and plugin. |
| `conduit_pipeline_execution_duration_seconds` | Histogram | Amount of time records spent in a pipeline. |
| `conduit_connector_execution_duration_seconds` | Histogram | Amount of time spent reading or writing records per pipeline, plugin and connector type (source, destination). |
| `conduit_processor_execution_duration_seconds` | Histogram | Amount of time spent on processing records per pipeline and processor. |
| `conduit_dlq_execution_duration_seconds` | Histogram | Amount of time spent writing records to DLQ connector per pipeline and plugin. |

\*We calculate bytes based on the JSON representation of the record payload
and key (see the sketch after this list).

* **Go runtime metrics**: The default metrics exposed by Prometheus' official Go
package, [client_golang](https://pkg.go.dev/github.com/prometheus/client_golang).
* **gRPC metrics**: The gRPC instrumentation package we use
is [promgrpc](https://github.com/piotrkowalczuk/promgrpc). The metrics exposed
are listed [here](https://github.com/piotrkowalczuk/promgrpc#metrics).
* **HTTP API metrics**: We
use [promhttp](https://pkg.go.dev/github.com/prometheus/client_golang/prometheus/promhttp),
Prometheus' official package for instrumentation of HTTP servers.
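
As a rough illustration of that byte-counting rule (this is not Conduit's
actual code; the `record` type and field values below are hypothetical), the
number of bytes observed for a record would be computed along these lines:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// record is a hypothetical stand-in for a Conduit record, reduced to
// the two parts the byte metrics are based on.
type record struct {
	Key     any
	Payload any
}

// recordBytes returns the size of the JSON representation of the record's
// key plus the size of the JSON representation of its payload, mirroring
// the counting rule described in the footnote above.
func recordBytes(r record) (int, error) {
	key, err := json.Marshal(r.Key)
	if err != nil {
		return 0, err
	}
	payload, err := json.Marshal(r.Payload)
	if err != nil {
		return 0, err
	}
	return len(key) + len(payload), nil
}

func main() {
	n, _ := recordBytes(record{
		Key:     map[string]string{"id": "1"},
		Payload: map[string]string{"name": "conduit"},
	})
	fmt.Println(n) // the value a bytes histogram would observe for this record
}
```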

## Adding new metrics

Currently, we have a number of metric types already defined
in [metrics.go](https://github.com/ConduitIO/conduit/blob/main/pkg/pipeline/stream/metrics.go).
Those are: counter, gauge, timer and histogram, as well as their "labeled"
versions. A labeled metric is one where labels must be set before usage. In many
cases, the existing metric types should be sufficient.

Adding a new metric of an existing type is simple. Let's say we want to count
the number of messages processed per pipeline. To do so, we will define a
labeled counter and increment it in source nodes each time a message is read.

### Create a new labeled counter

To create the counter, add the following code
to [measure.go](https://github.com/ConduitIO/conduit/blob/main/pkg/foundation/metrics/measure/measure.go).

```go
PipelineMsgMetrics = metrics.NewLabeledCounter(
"conduit_pipeline_msg_counter",
"Number of messages per pipeline.",
[]string{"pipeline_name"},
"conduit_pipeline_msg_counter",
"Number of messages per pipeline.",
[]string{"pipeline_name"},
)
```

The labeled counter created here:

* has the name `conduit_pipeline_msg_counter`,
* has the description `Number of messages per pipeline.`,
* accepts a `pipeline_name` label.

### Instantiate a counter with a label

Think of the labeled counter as a factory for counters. It lets us create
counters where the label it defines is set to a specific value (a pipeline name
in our case).
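
For example, here's a quick sketch of the factory idea using the counter defined
above (the pipeline names are made up for illustration):

```go
// Each WithValues call produces an independent counter whose
// pipeline_name label is fixed to the given value.
ordersCounter := measure.PipelineMsgMetrics.WithValues("orders-pipeline")
usersCounter := measure.PipelineMsgMetrics.WithValues("users-pipeline")

// Incrementing one counter does not affect the other.
ordersCounter.Inc() // conduit_pipeline_msg_counter{pipeline_name="orders-pipeline"}
usersCounter.Inc()  // conduit_pipeline_msg_counter{pipeline_name="users-pipeline"}
```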

In other words, for each pipeline we will have a separate counter (for which the
`pipeline_name` label is set to the pipeline name). To do so, when building a
source node
in [lifecycle.go](https://github.com/ConduitIO/conduit/blob/main/pkg/pipeline/lifecycle.go),
we can add the following:

```go
sourceNode := stream.SourceNode{
	// initialize other fields
	PipelineMsgMetrics: measure.PipelineMsgMetrics.WithValues(pl.Config.Name),
}
```

### Increment the counter

When a message is successfully read in a source node, we can increment the
counter:

```go
r, err := n.Source.Read(ctx)
if err == nil {
	n.PipelineMsgMetrics.Inc()
}
```

### Check the metrics

Assuming you have a pipeline running locally, you can execute
`curl -Ss localhost:8080/metrics | grep conduit_pipeline_msg_counter` to check
your newly created metrics. You will see something along the lines of:

```
# HELP conduit_pipeline_msg_counter Number of messages per pipeline.
# TYPE conduit_pipeline_msg_counter counter
conduit_pipeline_msg_counter{pipeline_name="my-pipeline"} 84
```