Go HeapAlloc slowly increasing #5144
Comments
@mjdesa -- we need to get the Go runtime stats into Grafana for our test beds. Telegraf has an outstanding PR by @mark-rushakoff that will help.
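In the meantime, a minimal sketch of sampling the Go runtime stats directly (not InfluxDB or Telegraf code; the 10s interval and plain log output are assumptions) might look like this:

```go
package main

import (
	"log"
	"runtime"
	"time"
)

func main() {
	ticker := time.NewTicker(10 * time.Second)
	defer ticker.Stop()

	var m runtime.MemStats
	for range ticker.C {
		runtime.ReadMemStats(&m)
		// HeapAlloc is bytes of live heap objects; a steady climb here
		// would mirror the profile seen on the test bed.
		log.Printf("HeapAlloc=%d bytes, NumGC=%d", m.HeapAlloc, m.NumGC)
	}
}
```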
37K queries had been executed in total on this node.
Would be good to run a test w/
Added a task to Tracker to get Go runtime stats into Grafana.
Any chance the heap growth is tied to the growing number of tsm files? All of those indexes are kept on the heap. Would be good to run a test at this scale (5M unique series posted every 10s) without queries and see if the heap growth changes.
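One hedged way to check whether the TSM index structures dominate the heap is to compare heap profiles over time via `net/http/pprof`. A minimal sketch is below; the listen address is an assumption, and InfluxDB's own profiling endpoints may differ from this standalone example:

```go
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers /debug/pprof/* handlers on the default mux
)

func main() {
	// Heap snapshots can then be pulled with:
	//   go tool pprof http://localhost:6060/debug/pprof/heap
	// and compared between runs with and without query load.
	log.Fatal(http.ListenAndServe("localhost:6060", nil))
}
```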
Yeah, @pauldix -- that sounds feasible. Heap usage growth was pretty slow; we just noticed it, so I thought we should capture it. If the write load were taken off, perhaps it would flatline.
The PR that @otoolep mentioned is influxdata/telegraf#449. Not merged yet, but I think we're close. It should work fine now if you need it and you're willing to build telegraf from that branch.
The tests run in #4977 indicate this issue is resolved.
During a 500K EPS write load on the TSM engine, internal metrics show the HeapAlloc profile slowly increasing. There was also query load on the system during this time, but it was not significant.
See attached graph. This was with a6cdb52.