Expected Behavior
The number of goroutines for azure-event-hubs-go/storage.(*LeaserCheckpointer).GetLeases.func1 does not increase over time.
Actual Behavior
The goroutines are leaking and the memory footprint grows.
Analysis
I found the same issue as described in #92, in the function azure-event-hubs-go/storage.(*LeaserCheckpointer).GetLeases.
I have a piece of software that reads from Event Hubs and writes simple events to a MongoDB.
The following two screenshots of pprof flame graphs were taken just 5 minutes apart. You can see the growth of the azure-event-hubs-go/storage.(*LeaserCheckpointer).GetLeases.func1 goroutine, while all others stay the same:
Running the software for 5 days (with millions of events daily) leaves us with a steadily growing number of goroutines like the following, as long as the pod is not restarted:
Proposed solution
The commit that fixes #92 introduced defer close(resCh). The same probably has to be added for leaseCh after line 193 in azure-event-hubs-go/storage.go as well.
Environment