[Filebeat] Error while parsing lastError field in Azure platform logs #24292
Pinging @elastic/integrations (Team:Integrations)
@jmmcorreia, the problem is described right at the end of the error log.
By default we only map the common fields that appear across the platform logs, as it would be a lot of work to map every property from every type of platform log for every resource type. It looks like the …
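As a rough illustration of that point (not something confirmed in this thread): properties the module leaves unmapped can be given explicit mappings yourself, for example with a dynamic template scoped to the error fields. The index name and field path below are assumptions; adjust them to how the fields actually appear in your documents.

```
PUT my-azure-logs-index/_mapping
{
  "dynamic_templates": [
    {
      "apim_last_error": {
        "path_match": "azure.platformlogs.properties.lastError.*",
        "mapping": { "type": "keyword" }
      }
    }
  ]
}
```

Mapping everything under that path as `keyword` is just the simplest option for a sketch; a numeric field such as the elapsed time would normally get its own type.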
Thanks for helping me @narph. I tried to check the mapping of the index, and it came back empty:
{
"my-azure-logs-index" : {
"mappings" : { }
}
}

Then I tried … I am still trying the workaround to see if I am able to make it work. Will provide an update when I have any more news.
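For anyone retracing this, the empty-mappings output above is the kind of response the get-mapping API returns; the request (index name taken from the response shown, adjust to your own) would look like:

```
GET my-azure-logs-index/_mapping
```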
Closed by #26148, please reopen if this does not fix your issue.
Hi everyone,
so I was testing the Azure module for Filebeat to pull the logs into ES and ran into the following issue. Basically, if there is any error message present in the APIM (API Management service) logs, those entries will not reach the ES backend. In other words, when a request is not successful on the APIM side, the service can add 6 extra fields to its log entry: LastErrorElapsed, LastErrorSource, LastErrorSection, LastErrorReason, LastErrorMessage, LastErrorScope.
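For illustration only, a trimmed, hypothetical GatewayLogs entry for a failed request might carry those values nested under a lastError object roughly like this (field names and values are not taken from the actual logs in this report):

```json
{
  "category": "GatewayLogs",
  "properties": {
    "method": "GET",
    "responseCode": 500,
    "lastError": {
      "elapsed": 1042,
      "source": "backend",
      "section": "backend",
      "reason": "BackendConnectionFailure",
      "message": "The request failed on the backend side",
      "scope": "global"
    }
  }
}
```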
These are the fields whose data is being lost. However, they are being sent by Azure, as shown by the printed entry in the Filebeat logs:
The following warning message accompanied the log entry shown above
These are the steps to reproduce the issue:
1: Use the Filebeat Azure Module to pull platform logs from Azure using the following config:
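A minimal sketch of what such an azure module configuration typically looks like, with placeholder values rather than the actual settings used here (under the Kubernetes operator the same block usually lives inside the Beat manifest rather than modules.d):

```yaml
# modules.d/azure.yml (placeholder values, not the reporter's real config)
- module: azure
  platformlogs:
    enabled: true
    var:
      # Event hub receiving the APIM diagnostic logs
      eventhub: "insights-logs-gatewaylogs"
      consumer_group: "$Default"
      connection_string: "Endpoint=sb://<namespace>.servicebus.windows.net/;SharedAccessKeyName=...;SharedAccessKey=..."
      storage_account: "<storage-account-name>"
      storage_account_key: "<storage-account-key>"
```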
Just a few extra details about my setup: I'm using the Kubernetes operator to deploy ES, Kibana, and Filebeat, and they are all running in an AKS cluster.