Docker logs receiver #31597
Comments
I think it's reasonable to ping the code owners of the dockerstats receiver.
Intuitively, it makes the most sense to me to have all signal types handled in the same receiver. This is simpler for both users and maintainers, especially given that the receivers will most likely share a lot of code and dependencies. On the topic of getting logs from the Docker API vs scraping them from disk, this is a bit similar to the Kubernetes case, but the tradeoffs are significantly different. The performance penalty is much smaller for Docker, because we just read from a (usually local) socket, as opposed to forcing the API Server to do a lot of additional work. There's also a way to enrich Kubernetes telemetry with metadata via the k8sattributes processor, whereas there's no such way for Docker.
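To make the Kubernetes side of that comparison concrete: metadata enrichment there is handled by a separate processor, so a logs pipeline typically looks roughly like this (an illustrative sketch, not a complete configuration):

```yaml
receivers:
  filelog:
    include: [ /var/log/pods/*/*/*.log ]
processors:
  k8sattributes: {}   # attaches pod/namespace/node metadata to each record
exporters:
  debug: {}
service:
  pipelines:
    logs:
      receivers: [filelog]
      processors: [k8sattributes]
      exporters: [debug]
```

There is no equivalent processor that could attach Docker container metadata after the fact, which is another reason to handle logs in the Docker receiver itself.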
I propose to:
Here's a related previous issue to collect Docker events:
Pinging code owners for receiver/dockerstats: @rmfitzpatrick @jamesmoessis. See Adding Labels via Comments if you do not have permissions to add labels yourself.
As for the formal side of the changes, I propose this order:
Let me know if this is incorrect.
I don't think 2. should be blocked by first doing 1. Renaming a component in a graceful manner is a non-trivial endeavor. We did something similar with changing
This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping @open-telemetry/collector-contrib-triagers. Pinging code owners:
See Adding Labels via Comments if you do not have permissions to add labels yourself.
@andrzej-stencel could you remove the Stale label?
This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping @open-telemetry/collector-contrib-triagers. Pinging code owners:
See Adding Labels via Comments if you do not have permissions to add labels yourself.
This issue has been closed as inactive because it has been stale for 120 days with no activity. |
This should still be open, no? Using the filelog receiver is quite annoying, as it causes issues around file permissions; I'd much rather use the socket (like Grafana Alloy does).
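For context, the common workaround today is to point the filelog receiver at the JSON log files written by Docker's default json-file logging driver, which is exactly where the permission issues come from. A rough sketch, assuming the default data root under /var/lib/docker:

```yaml
receivers:
  filelog:
    include:
      - /var/lib/docker/containers/*/*-json.log   # root-owned by default
    start_at: beginning
    operators:
      - type: json_parser   # each line is a JSON object with log/stream/time fields
```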
+1
The purpose and use-cases of the new component
There are three types of logs that can be scraped:
- container logs
- Docker daemon logs
- Docker events (see the Additional context section)

Currently, Docker container logs can be fetched using the filelog receiver. However, there are cases where you can't access the files but can access the Docker API (@swiatekm-sumo please elaborate if needed). I'm not aware of receivers able to scrape daemon logs or events.

Example configuration for the component
This is something worth discussing in this issue. The config should express what I think we need; where I could, I tried to be consistent with the dockerstats receiver.
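For illustration, it could look roughly like this; the docker_logs name and the per-source toggles are placeholders, while endpoint, api_version, and excluded_images mirror existing dockerstats options:

```yaml
receivers:
  docker_logs:                                # placeholder component name
    endpoint: unix:///var/run/docker.sock     # as in dockerstats
    api_version: "1.25"                       # as in dockerstats
    excluded_images:                          # as in dockerstats
      - /.*temporary.*/
    # placeholder options for the log sources described above:
    container_logs: true                      # container stdout/stderr via the API
    daemon_logs: false                        # Docker daemon logs
    events: true                              # Docker engine events
```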
Telemetry data types supported
Logs, but see also the Additional context section.

Is this a vendor-specific component?
Code Owner(s)
@aboguszewski-sumo and possibly more people from Sumo Logic
Sponsor (optional)
@astencel-sumo
Additional context
There is already an issue with regard to Docker events: #29096
Also, there is the dockerstats receiver, but currently it only scrapes metrics.

Now, the question is: how should we resolve the potential connection between these three? We can either:
- extend the dockerstats receiver and add scraping of logs and events there, or
- create a new receiver alongside the dockerstats receiver.
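To make the difference concrete, here is a rough sketch of how the two options could look to a user; the logs-related options and the docker_logs name are placeholders:

```yaml
receivers:
  # Option 1: extend the existing receiver
  docker_stats:
    endpoint: unix:///var/run/docker.sock
    logs:                    # placeholder sub-section
      container_logs: true
      events: true

  # Option 2: a new, separate receiver
  docker_logs:               # placeholder component name
    endpoint: unix:///var/run/docker.sock
    container_logs: true
    events: true
```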
If you look at the issue linked above, we can see that the code owners of the dockerstats receiver approved the idea of scraping events.