HTTP server leaks memory in python3.7.1 #340
Comments
That code shouldn't even be calling the line it's pointing to, as there are no HTTP requests being made.
I'm assuming a continuous stream of HTTP requests, external to the script (the …
I found the issue and created a merge request. I hope it will get merged soon.
The leak is caused by the fact that in Python 3.7, the default behavior of `ThreadingMixIn` is to use non-daemon threads but to block on those threads on close. Because of that, it collects references to every thread it creates, producing the "leak": https://github.com/python/cpython/blob/v3.7.0/Lib/socketserver.py#L661

* Python 3.7: `block_on_close` is `True`: https://github.com/python/cpython/blob/v3.7.0/Lib/socketserver.py#L635
* Python 3.6: `_block_on_close` is `False`: https://github.com/python/cpython/blob/v3.6.7/Lib/socketserver.py#L639
* Python 2.7: There is no `block_on_close`, and thus no logic for collecting references: https://github.com/python/cpython/blob/v2.7.15/Lib/SocketServer.py#L582

The fix is to set `daemon_threads` to `True`, which in our case should be a reasonable setting for all Python versions. The `ThreadingHTTPServer` stdlib class, new in Python 3.7, also sets it by default: https://github.com/python/cpython/blob/v3.7.0/Lib/http/server.py#L144

Signed-off-by: Sebastian Brandt <sebastian.brandt@friday.de>
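To make the mechanism concrete, here is a simplified paraphrase of the Python 3.7 `ThreadingMixIn` request handling. It is illustrative only, not the exact stdlib source; the list append at the end is what grows without bound on a long-running exporter:

```python
import threading

class ThreadingMixInSketch:
    """Simplified paraphrase of socketserver.ThreadingMixIn as of Python 3.7
    (illustrative only, not the exact stdlib source)."""

    daemon_threads = False   # the default the exporter's server inherits
    block_on_close = True    # new default behaviour in 3.7
    _threads = None

    def process_request_thread(self, request, client_address):
        pass  # stub: the real mixin dispatches to the request handler here

    def process_request(self, request, client_address):
        t = threading.Thread(target=self.process_request_thread,
                             args=(request, client_address))
        t.daemon = self.daemon_threads
        if not t.daemon and self.block_on_close:
            # With non-daemon threads and block_on_close=True, every handled
            # request appends a Thread object that is only released in
            # server_close(), so the list keeps growing while the server runs.
            if self._threads is None:
                self._threads = []
            self._threads.append(t)
        t.start()
```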
Until then, there is a quick fix:

```python
from prometheus_client.exposition import _ThreadingSimpleServer, start_http_server

# Force daemon threads so the server does not keep a reference to every
# handler thread it has ever started.
_ThreadingSimpleServer.daemon_threads = True
start_http_server(9099)
```
@brian-brazil since this is a rather bad thing in microservice environments, would you mind releasing this soon?
@brian-brazil @sbrandtb I am facing this issue with prometheus-client==0.5.0 and Python 3.5.2, and it is not fixed by applying the workaround @sbrandtb mentioned. I am hitting it not only when using start_http_server but also when using Flask's server with a Prometheus endpoint monitored. Can you help me out? To be precise, my CLOSE_WAIT connections keep increasing until my server stops serving requests. `x@ip-10-0-0-0:~$ lsof -i | grep 8002`
@tusharmakkar08 Please check out the comment in my merge request: this issue is very specific to Python >= 3.7. Since you are seeing it on 3.5, I doubt it's the same cause. Also, the shape of the curve on Grafana does not match the behaviour of this bug at all: it does not suddenly start (at 7:something) nor suddenly decrease again. Assuming there are no other effects and your metrics endpoint is called regularly, this bug makes the graph grow very linearly, while yours is anything but linear. However, without more information, such as code, we can't tell anything more.
I think my issue is more similar to prometheus/jmx_exporter#327 and prometheus/jmx_exporter#352.
@tusharmakkar08 I think you should open a new issue with code showing how to reproduce it, because you won't get much of an audience or help here without reproducible code.
Has anyone resolved this?
@yaverhussain The issue was fixed and merged by me two years ago in #356.
What version of prom client are you using?
@yaverhussain >= 0.6.0, see the badges here.
I use Python 3.6, prom client 0.8.0 and Twisted 20.3.0, so I might have a different issue. I have syslog capturing latency logs from WebSphere (high production load) and converting them to Prometheus metrics on a different port. The problem is that memory increases for every new label value I add, and there is a huge number of label values ("URLs"). Memory does stabilise at some point, which means I have to allocate a large amount of memory to my container app. At the current pace my pod runs out of memory (1 GiB) in 1.5 hours; I realised I would need 2.5 GiB.
Hello, rather than continuing to comment on this issue, would you open a new issue if you believe there is a problem with this client? If you have a general usage question, I recommend reaching out to the prometheus-users mailing list. Thanks in advance!

That said, if you are adding distinct URLs as label values, such as having ids/uuids in the label value, then you will use a lot of memory, as metrics need to be stored for each URL. Usually people will template URLs like …
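As a hedged illustration of that advice: the sketch below normalises high-cardinality path segments before using the path as a label value, so all requests for the same route share one time series. The metric name, label name, `template_path` helper, and regex are invented for this example and are not part of prometheus_client.

```python
import re
from prometheus_client import Counter

# Collapse numeric ids and UUID-like segments into a placeholder so
# /objects/123 and /objects/456 count under the same label value.
_ID_SEGMENT = re.compile(r"/(\d+|[0-9a-fA-F-]{36})")

REQUESTS = Counter("http_requests_total", "HTTP requests", ["path_template"])

def template_path(path: str) -> str:
    """Return a bounded-cardinality template for a request path."""
    return _ID_SEGMENT.sub("/{id}", path)

def observe(path: str) -> None:
    REQUESTS.labels(path_template=template_path(path)).inc()
```

With this kind of templating, memory use is bounded by the number of routes rather than the number of distinct URLs seen.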
I'm using version 0.4.2, and it seems like the default HTTP server, started via start_http_server, leaks memory on every request.

If you kick off a repro script that starts the server and then curl its endpoint in a loop, you can observe the process's memory footprint growing on every iteration. It's clearly coming from the server, because as soon as you stop the curl, memory usage stops growing.
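A minimal repro along these lines shows the behaviour; the exact script and curl command are not included above, so the port and loop here are assumptions:

```python
# Illustrative reconstruction of the kind of repro described above.
import time

from prometheus_client import start_http_server

if __name__ == "__main__":
    start_http_server(8000)  # port is an assumption
    while True:
        time.sleep(1)

# In another shell, hammer the endpoint and watch the process RSS grow
# (on Python 3.7.x with prometheus_client <= 0.5.0):
#   while true; do curl -s localhost:8000/metrics > /dev/null; done
```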