
[frontendproxy] - add envoy access logs #1768

Merged: 17 commits, Nov 20, 2024

Conversation

puckpuck
Contributor

@puckpuck puckpuck commented Nov 7, 2024

Changes

Supersedes #1751

This adds Envoy access logs to the OTLP logs pipeline. Semantic conventions were followed as much as possible for the various attributes added.
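For context, Envoy can emit access logs over OTLP via its built-in OpenTelemetry access logger. The sketch below is illustrative only: the cluster name, log body format, and attribute keys are assumptions, not the exact configuration merged in this PR.

```yaml
# Sketch of Envoy's OpenTelemetry access logger. Cluster name, body
# format, and attribute keys are illustrative assumptions.
access_log:
  - name: envoy.access_loggers.open_telemetry
    typed_config:
      "@type": type.googleapis.com/envoy.extensions.access_loggers.open_telemetry.v3.OpenTelemetryAccessLogConfig
      common_config:
        log_name: "frontend-proxy"
        transport_api_version: V3
        grpc_service:
          envoy_grpc:
            cluster_name: opentelemetry_collector_grpc
      body:
        string_value: "[%START_TIME%] \"%REQ(:METHOD)% %REQ(:PATH)%\" %RESPONSE_CODE%"
      attributes:
        values:
          - key: "url.path"
            value:
              string_value: "%REQ(:PATH)%"
          - key: "http.response.status_code"
            value:
              string_value: "%RESPONSE_CODE%"
```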

Merge Requirements

For new feature contributions, please make sure you have completed the following
essential items:

  • CHANGELOG.md updated to document new feature additions
  • Appropriate documentation updates in the docs
  • Appropriate Helm chart updates in the helm-charts

Maintainers will not merge until the above items have been completed. If you're
unsure which docs need to be changed, ping
@open-telemetry/demo-approvers.

@puckpuck puckpuck requested a review from a team as a code owner November 7, 2024 00:33
@github-actions github-actions bot added the helm-update-required Requires an update to the Helm chart when released label Nov 7, 2024
@julianocosta89
Member

I'm getting an error on this one:
[screenshot of the error]

@julianocosta89
Member

@puckpuck are you also seeing the error I mentioned above?

@puckpuck
Contributor Author

It took me a while to figure this one out, but it seems the log volume and log size are too big for the Grafana <-> OpenSearch exchange.

I found other minor issues and optimizations as well, but will ultimately impose a limit on the number of entries returned to that log panel. I may settle on 100 entries since this is meant to be a demo (and update the panel title accordingly as well).
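Capping results at query time can be done with OpenSearch's `size` parameter. A raw query of roughly this shape (field name and query body illustrative, not the exact panel JSON) returns only the newest 100 entries:

```json
{
  "size": 100,
  "sort": [{ "@timestamp": { "order": "desc" } }],
  "query": { "match_all": {} }
}
```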

I'll update this PR later today with the fixes.

@puckpuck
Contributor Author

@julianocosta89 can you try again now?

@julianocosta89
Member

@puckpuck not getting the error, but not getting any logs for frontend-proxy.
All other services' logs are working fine.

I've noticed some error messages on the collector:

otel-col                 | 2024-11-18T22:31:02.080Z	warn	batchprocessor@v0.113.0/batch_processor.go:263	Sender failed	{
    "kind": "processor",
    "name": "batch",
    "pipeline": "logs",
    "error": "not retryable error: Permanent error: {
        \"type\":\"mapper_parsing_exception\",
        \"reason\":\"Could not dynamically add mapping for field [http.connection.id]. Existing mapping for [attributes.http] must be of type object but found [text].\",
        \"caused_by\":{
            \"type\":\"\",
            \"reason\":\"\",
            \"caused_by\":null
        }
    }"
}
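The `mapper_parsing_exception` above is a dynamic-mapping conflict: an earlier log record carried a flat string attribute named `http`, so OpenSearch mapped `attributes.http` as `text`; a later record with `http.connection.id` then needs `attributes.http` to be an object. One generic way to sidestep such a collision (a sketch using the collector's attributes processor; the processor name and pipeline wiring are assumptions, not part of this PR) is to drop or rename the conflicting attribute before export:

```yaml
# Sketch only: drop the attribute whose dotted name collides with an
# existing flat mapping (renaming to a non-colliding key also works).
processors:
  attributes/fix-mapping-conflict:
    actions:
      - key: http.connection.id
        action: delete

service:
  pipelines:
    logs:
      processors: [attributes/fix-mapping-conflict, batch]
```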

@puckpuck
Contributor Author

> @puckpuck not getting the error, but not getting any log for frontend-proxy. All other services logs are working fine.
>
> I've noticed some error messages on the collector:
>
> otel-col                 | 2024-11-18T22:31:02.080Z	warn	batchprocessor@v0.113.0/batch_processor.go:263	Sender failed	{
>     "kind": "processor",
>     "name": "batch",
>     "pipeline": "logs",
>     "error": "not retryable error: Permanent error: {
>         \"type\":\"mapper_parsing_exception\",
>         \"reason\":\"Could not dynamically add mapping for field [http.connection.id]. Existing mapping for [attributes.http] must be of type object but found [text].\",
>         \"caused_by\":{
>             \"type\":\"\",
>             \"reason\":\"\",
>             \"caused_by\":null
>         }
>     }"
> }

Can you recheck your configuration including otelcol-config-extras.yml on this one? I just ran the following commands, and everything worked as expected.

docker system prune -af
docker compose build frontendproxy
make start

@julianocosta89
Member

@puckpuck added some nitpicks in here fd2da2f, hope you don't mind.

The @timestamp filter wasn't working for currencyservice: this service doesn't set a timestamp on its logs, so all timestamps showed as 1970-01-01 01:00:00.000 and the filter wasn't allowing new logs to be shown.

I've changed the query to use observedTimestamp, which I tested and worked fine for all services. LMK if that works for you.
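For reference, this distinction comes from the OTLP log data model: `timeUnixNano` is set by the producer (zero when a service doesn't set one, which renders as 1970-01-01), while `observedTimeUnixNano` is stamped when the log is first observed by the collector. An illustrative record (values made up):

```json
{
  "timeUnixNano": "0",
  "observedTimeUnixNano": "1732053062080000000",
  "severityText": "INFO",
  "body": { "stringValue": "conversion request successful" }
}
```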

Member

@julianocosta89 julianocosta89 left a comment


LGTM 🚀
Also, thx @martinjt!

@julianocosta89
Member

One thing that I missed was the log level, but I couldn't find a way to get that from Envoy.

julianocosta89 and others added 2 commits November 19, 2024 15:04
Co-authored-by: Martin Thwaites <github@my2cents.co.uk>
Co-authored-by: Pierre Tessier <pierre@pierretessier.com>
Co-authored-by: Juliano Costa <juliano.costa@datadoghq.com>
Labels
helm-update-required Requires an update to the Helm chart when released
3 participants