
Go routine leak in azure-event-hubs-go/storage.(*LeaserCheckpointer).GetLeases #136

Closed
jonnylangefeld opened this issue Aug 27, 2019 · 0 comments · Fixed by #137
jonnylangefeld commented Aug 27, 2019

Expected Behavior

The number of goroutines in azure-event-hubs-go/storage.(*LeaserCheckpointer).GetLeases.func1 does not increase over time.

Actual Behavior

The goroutines are leaking and the memory footprint grows over time.

Analysis

I found the same issue as described in #92, this time in the function azure-event-hubs-go/storage.(*LeaserCheckpointer).GetLeases.

I have a piece of software that reads from Event Hubs and writes simple events to MongoDB.
The following two pprof flame-graph screenshots were taken just 5 minutes apart. You can see the growth of the azure-event-hubs-go/storage.(*LeaserCheckpointer).GetLeases.func1 goroutines, while all others stay the same:

[Screenshot: pprof flame graph, Aug 27 2019, 2:15 PM]

[Screenshot: pprof flame graph, Aug 27 2019, 2:21 PM]

Running the software for 5 days (with millions of events daily) leaves us with a steadily growing number of goroutines, as shown below, as long as the pod is not restarted:

[Screenshot: goroutine count growing over time]

Proposed solution

The commit that fixes #92 introduced defer close(resCh).
The same probably has to be added for leaseCh after line 193 in azure-event-hubs-go/storage.go as well.

Environment

  • OS: linux
  • Go version: 1.12.9
  • Version of Library: 2.0.0