Memory leak in instrumentation-pg #2479
Comments
Same issue.
We observed this issue after upgrading from |
Here is a stack trace from our container startup. The line in question is in opentelemetry-js-contrib/plugins/node/opentelemetry-instrumentation-pg/src/instrumentation.ts, lines 454 to 461 at ad560df.
Same issue; our database calls are now very, very slow as well. Downgrading to auto-instrumentations-node v0.48.0 solved this issue for us.
As a workaround, downgrade of |
FYI @maryliag (component owner)
Hi everyone, thank you for raising this issue! Indeed, I made a mistake and was adding several event listeners to the same pool. Although this is obviously not ideal, I want to clarify the comment:
This (wrong metric data) is not happening, because I did notice the updater was getting called a few times, so I made sure it always sends the correct value, even if called multiple times. I just didn't realize why it was getting called multiple times 😓 Thanks to the input from this issue, I was able to create a fix for it. There are also some minor performance improvements in #2484. If you continue seeing this issue on a version with this fix, please let me know!
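For anyone curious, here is a minimal sketch of that idea (not the instrumentation's actual code; the metric name is made up): if the updater reads the pool's current counters at observation time, being invoked more than once still reports a correct value.

```ts
import { metrics } from '@opentelemetry/api';
import { Pool } from 'pg';

const meter = metrics.getMeter('pg-pool-example');
const pool = new Pool();

// Hypothetical metric name; the real instrumentation may use a different one.
const idleConnections = meter.createObservableGauge('db.client.connections.idle');
idleConnections.addCallback((result) => {
  // Reading pool.idleCount when the callback runs means repeated invocations
  // still report the current, correct value instead of a stale or doubled one.
  result.observe(pool.idleCount);
});
```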
Thanks @maryliag ! We've been seeing some serious performance issues that coincidentally began right after we updated our opentelemetry deps. Specifically, over the course of several hours, |
Hi guys!
|
This seems to align with new listeners being added on every new connection, which is what the PR I created is fixing, so hopefully this will solve your problem. Also, I hope the new metrics related to the pool will be helpful 😄
Experienced this too; had to roll back to 0.44.0 of |
To add: when using event listeners, there should be a matching remove to avoid this. It would be nice if the metrics could be opt-in.
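As a rough illustration of that suggestion (a hypothetical helper, not the instrumentation's API), pairing each registration with a disposer makes the removal explicit:

```ts
import { Pool } from 'pg';

// Hypothetical helper: register a listener and hand back a disposer so the
// caller can detach it later (e.g. on unpatch/shutdown) instead of leaking it.
function listenWithDisposer(pool: Pool, onConnect: () => void): () => void {
  pool.on('connect', onConnect);
  return () => pool.removeListener('connect', onConnect);
}

// Usage sketch: keep the disposer and call it when instrumentation is disabled.
const pool = new Pool();
const dispose = listenWithDisposer(pool, () => {
  // update pool metrics here
});
// ...later:
dispose();
```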
When do you suggest the removal would occur?
I still see an error and immense DB slowness with the latest version of the package.
|
@clintonb thanks for the feedback! Would you mind sharing the full stack trace of that error again (for the new version)? I'm trying to track down possible ways this flag could fail. (The memory leak is currently the highest priority; once we've sorted this out, we can tackle performance improvements in a separate issue.)
Hmm...trace shows
|
It's happening for me as well.
|
Hey @nanotower, any way to get more info on your stack trace? I want to confirm that the version being used is indeed the one you imported.
that has a clear
which points to a line (306) that, in the current version, is a comment.
You're right, @maryliag. My node_modules are running
A minor version below the one with the fix. Thanks for the heads up.
Since there haven't been more reports of this issue with the new version, I will close it.
What version of OpenTelemetry are you using?
@opentelemetry/instrumentation-pg: 0.45.1
What version of Node are you using?
v20.15.1
What did you do?
Used @opentelemetry/instrumentation-pg with TypeORM and pg-pool. Received a warning about a possible memory leak, which happens because listeners are constantly attached for each query. Partial stack trace:
What did you expect to see?
No memory leak.
Additional context
This issue was introduced in this commit - 12adb43#diff-1360de1aed58c38c5cbd3a1759a8af761049ea2c562a477964623fb2358b2730R379
Since pg.Pool.connect is called for each query to obtain a connection, duplicate event listeners are attached each time, which would probably also lead to wrong metrics data.
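To make the mechanism concrete, here is a small self-contained sketch (a toy stand-in for the pool, not the real instrumentation code) of how registering a listener inside every connect call piles up handlers until Node emits its MaxListenersExceededWarning:

```ts
import { EventEmitter } from 'events';

// Toy stand-in for pg.Pool: the patched connect() registers another listener
// on every call, so handlers accumulate on the shared emitter.
class LeakyPool extends EventEmitter {
  connect(): void {
    this.on('connect', () => {
      // pretend to update pool metrics here
    });
    this.emit('connect');
  }
}

const pool = new LeakyPool();
for (let i = 0; i < 12; i++) {
  pool.connect(); // one extra listener per "query"
}
// Node warns once more than the default 10 listeners are attached
// for the same event on a single emitter.
console.log(pool.listenerCount('connect')); // 12
```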
It seems that, in order to fix it, an additional check is needed to make sure event listeners are added only on the first connect call.
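A minimal sketch of such a guard (names are illustrative, not the actual change in #2484): remember which pools have already been instrumented and skip registration on subsequent connect calls.

```ts
import { EventEmitter } from 'events';

// Illustrative guard: track pools that already have listeners so the connect
// patch only registers them the first time it sees a given pool instance.
const instrumentedPools = new WeakSet<EventEmitter>();

function ensurePoolListeners(pool: EventEmitter, update: () => void): void {
  if (instrumentedPools.has(pool)) {
    return; // listeners were already attached on an earlier connect()
  }
  instrumentedPools.add(pool);
  pool.on('connect', update);
  pool.on('remove', update);
}
```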