
Excessive TLAB memory allocations from MetricsTurboFilter #5317

Closed
drekbour opened this issue Jul 29, 2024 · 2 comments
Labels
superseded An issue that has been superseded by another

Comments

@drekbour

Describe the bug

In the core loop of an otherwise low-garbage application, the majority of heap allocations are reported as coming from MetricsTurboFilter, with the following stack trace linked to a filtered-out debug logger:

ThreadLocal.ThreadLocalMap.Entry#Entry
ThreadLocal$ThreadLocalMap.set(ThreadLocal, Object)
ThreadLocal$ThreadLocalMap.access$100(ThreadLocal$ThreadLocalMap, ThreadLocal, Object)
ThreadLocal.setInitialValue()
ThreadLocal.get()
MetricsTurboFilter.decide(Marker, Logger, Level, String, Object[], Throwable)
TurboFilterList.getTurboFilterChainDecision(Marker, Logger, Level, String, Object[], Throwable)
LoggerContext.getTurboFilterChainDecision_0_3OrMore(Marker, Logger, Level, String, Object[], Throwable)
Logger.filterAndLog_0_Or3Plus(String, Marker, Level, String, Object[], Throwable)
Logger.debug(String)
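The allocation path above can be sketched outside Micrometer with plain JDK classes. This is a minimal illustration (not Micrometer code; the field name mirrors the report): when the current thread has no entry for a ThreadLocal, `get()` falls through to `setInitialValue()`, which allocates a fresh `ThreadLocalMap.Entry` — and `remove()` discards that entry, so the next `get()` allocates again.

```java
// Sketch: why every get() after a remove() allocates a new map entry.
public class ThreadLocalAllocationSketch {
    // Stand-in for MetricsTurboFilter's ignoreMetrics field (hypothetical).
    static final ThreadLocal<Boolean> ignoreMetrics =
            ThreadLocal.withInitial(() -> Boolean.FALSE);

    public static void main(String[] args) {
        Boolean first = ignoreMetrics.get();  // first access: setInitialValue() allocates an Entry
        ignoreMetrics.remove();               // discards the Entry
        Boolean second = ignoreMetrics.get(); // allocates a new Entry again
        System.out.println(first + " " + second); // prints "false false"
    }
}
```

In a hot loop where the filter calls `remove()` after each decision, this allocate/discard cycle repeats on every filtered log call, which matches the TLAB churn reported above.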

Environment

  • Micrometer version: 1.9.12
  • Micrometer registry: Spring Boot configured
  • OS: RHEL
  • Java version: 8, 21

To Reproduce
Profile a contrived hot loop like this in a Spring Boot application:

  while (true) { log.debug("Foo"); }

Run briefly on a desktop, this reported allocation rates of:

  • 500 MB/s for the lambda (probably compiled away to zero in a real application)
  • 100 MB/s for the setInitialValue calls

Expected behavior
A filtered call to log.debug("Foo") is expected to be lightning fast in the Java ecosystem.

Avoid memory allocations where possible:

ignoreMetrics.set(true);
try {
    r.run();
} finally {
    ignoreMetrics.remove(); // << change this to ignoreMetrics.set(Boolean.FALSE)
}
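The suggested change can be sketched as a side-by-side of the two finally-block strategies. This is a hypothetical stand-in, not the actual MetricsTurboFilter source: both variants leave the same observable state, but `set(Boolean.FALSE)` reuses the existing map entry while `remove()` forces the next `get()` back through `setInitialValue()` and a fresh allocation.

```java
public class IgnoreMetricsSketch {
    // Hypothetical stand-in for MetricsTurboFilter's static ignoreMetrics field.
    static final ThreadLocal<Boolean> ignoreMetrics =
            ThreadLocal.withInitial(() -> Boolean.FALSE);

    // Current behavior: remove() drops the map entry, so the next get()
    // re-runs the initializer and allocates a new Entry.
    static void ignoreMetricsWithRemove(Runnable r) {
        ignoreMetrics.set(Boolean.TRUE);
        try {
            r.run();
        } finally {
            ignoreMetrics.remove();
        }
    }

    // Suggested behavior: set(Boolean.FALSE) overwrites the value in the
    // existing entry, so later get() calls allocate nothing.
    static void ignoreMetricsWithSet(Runnable r) {
        ignoreMetrics.set(Boolean.TRUE);
        try {
            r.run();
        } finally {
            ignoreMetrics.set(Boolean.FALSE);
        }
    }

    public static void main(String[] args) {
        ignoreMetricsWithRemove(() -> {});
        System.out.println(ignoreMetrics.get()); // false (entry was freshly re-initialized)
        ignoreMetricsWithSet(() -> {});
        System.out.println(ignoreMetrics.get()); // false (entry was reused in place)
    }
}
```

The trade-off, discussed below, is that keeping the entry alive pins a small per-thread allocation for the lifetime of the thread.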

Additional context
If there's some argument about threadlocals being a memory leak, I would point out that this adds just a few weak-ref bytes per thread. If an app is discarding threads at a rate that this allocation is a problem, it has far larger problems anyway.

See #3891 for related changes to avoid ThreadLocal access where not required

@jonatan-ivanov
Member

Thanks for the issue!
Micrometer 1.9 (and Spring Boot 2.7) are no longer OSS-supported; please upgrade, see: https://micrometer.io/support/

If there's some argument about threadlocals being a memory leak, I would point out that this adds just a few weak-ref bytes per thread. If an app is discarding threads at a rate that this allocation is a problem, it has far larger problems anyway.

As you said, this might not be a big issue for apps that are (re)using platform threads, but with virtual threads it can be: virtual threads can be created and dropped at a high rate, so you can end up with a lot of references. I tested this with JDK 22 and was able to create a significant leak (well over 1 GiB) just by doing this (a lot):

Thread.ofVirtual().start(() -> threadLocal.set(Boolean.TRUE));
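The one-liner above can be expanded into a self-contained sketch of the mechanism (requires JDK 21+; the class and field names are illustrative). Each virtual thread gets its own ThreadLocal map entry on `set()`, so churning through short-lived virtual threads allocates one entry per thread, and those entries are only reclaimed when the threads themselves are collected.

```java
// Sketch: per-virtual-thread ThreadLocal entries, one allocation per thread.
public class VirtualThreadLocalSketch {
    static final ThreadLocal<Boolean> threadLocal =
            ThreadLocal.withInitial(() -> Boolean.FALSE);

    public static void main(String[] args) throws InterruptedException {
        for (int i = 0; i < 10_000; i++) {
            // Each iteration creates a fresh virtual thread whose set() call
            // allocates a new ThreadLocalMap.Entry private to that thread.
            Thread t = Thread.ofVirtual().start(() -> threadLocal.set(Boolean.TRUE));
            t.join();
        }
        // The main thread never saw any of those entries:
        System.out.println(threadLocal.get()); // prints "false"
    }
}
```

Scaled up (or run without joining, so threads pile up), this is the retention pattern behind the >1 GiB measurement mentioned above.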

I can see the following solutions:

  • Logback implementing this natively (through JMX?) so we need neither the filter nor the thread-local (please create an issue for Logback if you think this would work for you)
  • Scoped Values can solve this issue but the feature is still in preview
  • Making this somehow overridable; I'm not sure how, though, since ignoreMetrics is static

I think this is the same issue as #3952, let me close this in favor of #3952 and let's continue this discussion there. Please let us know if you disagree and we can reopen.

@jonatan-ivanov jonatan-ivanov added superseded An issue that has been superseded by another and removed waiting-for-triage labels Jul 30, 2024
@jonatan-ivanov
Member

Superseded by #3952

@jonatan-ivanov jonatan-ivanov closed this as not planned Jul 30, 2024