Updates from review
knylander-grafana committed Jan 15, 2025
1 parent 62f4b38 commit fb65ebd
Showing 3 changed files with 5 additions and 5 deletions.
4 changes: 2 additions & 2 deletions docs/sources/tempo/configuration/_index.md
@@ -263,7 +263,7 @@ To avoid these out-of-memory crashes, use `max_span_attr_byte` to limit the maximum allowable size of any individual attribute.
Any keys or values that exceed the configured limit are truncated before being stored.
The default value is `2048`.

Use the `tempo_distributor_attributes_truncated_total` metric to track how many attributes are truncated.
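For reference, here is a minimal sketch of how this limit could be set. Placing `max_span_attr_byte` under the `distributor` block is an assumption based on the surrounding prose, not something shown in this diff:

```yaml
distributor:
  # Maximum size, in bytes, allowed for any single span attribute key or value.
  # Keys or values larger than this are truncated before the span is stored.
  max_span_attr_byte: 2048
```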

## Ingester

@@ -329,7 +329,7 @@ If you want to enable metrics-generator for your Grafana Cloud account, refer to
Use `metrics_ingestion_time_range_slack` to limit metrics generation to spans whose end times fall within the configured duration.
In Grafana Cloud, this value defaults to 30 seconds, so any spans sent to the metrics-generator more than 30 seconds in the past are discarded or rejected.

For more information about the `local-blocks` configuration option, refer to [TraceQL metrics](https://grafana.com/docs/tempo/latest/operations/traceql-metrics/#configure-the-local-blocks-processor).
For more information about the `local-blocks` configuration option, refer to [TraceQL metrics](https://grafana.com/docs/tempo/<TEMPO_VERSION>/operations/traceql-metrics/#configure-the-local-blocks-processor).
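As a rough illustration, the slack window described above might be configured as follows; the exact key path under `metrics_generator` is assumed from the prose rather than taken from this diff:

```yaml
metrics_generator:
  # Spans whose end time falls outside this window are not considered
  # for metrics generation (Grafana Cloud defaults this to 30 seconds).
  metrics_ingestion_time_range_slack: 30s
```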

```yaml
# Metrics-generator configuration block
4 changes: 2 additions & 2 deletions docs/sources/tempo/troubleshooting/out-of-memory-errors.md
@@ -14,10 +14,10 @@ Learn about out-of-memory (OOM) errors and how to troubleshoot them.
Tempo queriers can run out of memory when fetching traces that have spans with very large attributes.
This issue has been observed when trying to fetch a single trace using the [`tracebyID` endpoint](https://grafana.com/docs/tempo/latest/api_docs/#query).


To avoid these out-of-memory crashes, use `max_span_attr_byte` to limit the maximum allowable size of any individual attribute.
Any keys or values that exceed the configured limit are truncated before being stored.
Use the `tempo_distributor_attributes_truncated_total` metric to track how many attributes are truncated.
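If Tempo's metrics are scraped by Prometheus, a rule along these lines can surface ongoing truncation; the alert name, threshold, and durations are illustrative and not part of this diff:

```yaml
groups:
  - name: tempo-distributor
    rules:
      - alert: TempoSpanAttributesTruncated
        # Fires when the distributor is truncating attribute keys or values
        # that exceed the configured max_span_attr_byte limit.
        expr: rate(tempo_distributor_attributes_truncated_total[5m]) > 0
        for: 15m
        labels:
          severity: info
        annotations:
          summary: Tempo is truncating span attributes larger than max_span_attr_byte.
```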

```yaml
# Optional
@@ -31,7 +31,7 @@ If this metric is greater than zero (0), check the logs of the compactor for an

- Verify that the compactor has the LIST, GET, PUT, and DELETE permissions on the bucket objects.
- If these permissions are missing, assign them to the compactor container.
- For detailed information, check - https://grafana.com/docs/tempo/latest/configuration/s3/#permissions
- For detailed information, refer to the [Amazon S3 permissions](https://grafana.com/docs/tempo/<TEMPO_VERSION>/configuration/hosted-storage/s3/#permissions).
- If a compactor is sitting idle while others are running, port-forward to the compactor's HTTP endpoint, then go to `/compactor/ring` and click **Forget** on the inactive compactor.
- Check the following configuration parameters to ensure that the settings are correct:
  - `max_block_bytes` to determine when the ingester cuts blocks. A good number is anywhere from 100 MB to 2 GB, depending on the workload (see the sketch after this list).
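As a sketch of the block-cutting setting mentioned in the list above, `max_block_bytes` is shown here under the `ingester` block; the placement and the example value are assumptions, not part of this diff:

```yaml
ingester:
  # Cut a new block once the head block reaches this size.
  # Anywhere from 100 MB to 2 GB is reasonable, depending on the workload.
  max_block_bytes: 500000000  # illustrative value, roughly 500 MB
```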
