Merge branch 'pr-3148-rebased', remote-tracking branches 'benoitc/pr/2407', 'pajod/docs-misc', 'pajod/pr-3157-rebased', 'benoitc/pr/3273', 'benoitc/pr/3271', 'benoitc/pr/3252', 'benoitc/pr/2938' and 'benoitc/pr/3275' into integration-v23.1.0
pajod committed Aug 15, 2024
9 parents cf861a2 + 13f54ed + 99cbc81 + 56b3e42 + 2096e42 + 6e1ca03 + b0115b9 + ef94875 + 717fbc2 commit 7f7034b
Showing 20 changed files with 445 additions and 328 deletions.
7 changes: 2 additions & 5 deletions .pylintrc
@@ -15,9 +15,8 @@ disable=
bad-mcs-classmethod-argument,
bare-except,
broad-except,
duplicate-bases,
duplicate-code,
eval-used,
superfluous-parens,
fixme,
import-error,
import-outside-toplevel,
@@ -47,9 +46,7 @@ disable=
ungrouped-imports,
unused-argument,
useless-object-inheritance,
useless-import-alias,
comparison-with-callable,
try-except-raise,
consider-using-with,
consider-using-f-string,
unspecified-encoding
unspecified-encoding,
9 changes: 9 additions & 0 deletions docs/source/faq.rst
@@ -109,6 +109,15 @@ threads. However `a work has been started
.. _worker_class: settings.html#worker-class
.. _`number of workers`: design.html#how-many-workers

Why are responses delayed on startup/re-exec?
---------------------------------------------

If workers compete for resources during WSGI import, the result may be slower
than a sequential startup. Either avoid the duplicate work altogether via
:ref:`preload-app`, or, if that is not an option, stagger the worker spawn
sequence by adding a delay in the :ref:`pre-fork` hook, trading overall
startup completion time for a faster first response, as sketched below.
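
A minimal sketch of such a staggered spawn, assuming a ``gunicorn.conf.py``
configuration file (the one-second delay is an arbitrary placeholder, not a
recommended value):

.. code-block:: python

    import time

    def pre_fork(server, worker):
        # Called in the master process just before each worker is forked;
        # sleeping here spaces out worker startup instead of letting all
        # workers import the application at the same time.
        time.sleep(1)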

Why don't I see any logs in the console?
----------------------------------------

56 changes: 15 additions & 41 deletions docs/source/install.rst
@@ -96,68 +96,42 @@ advantages:
rolled back in case of incompatibility. The package can also be purged
entirely from the system in seconds.

stable ("buster")
------------------
stable (as of 2024, "bookworm")
-------------------------------

The version of Gunicorn in the Debian_ "stable" distribution is 19.9.0
(December 2020). You can install it using::
The version of Gunicorn in the Debian_ "stable" distribution is 20.1.0
(2021-04-28). You can install it using::

$ sudo apt-get install gunicorn3

You can also use the most recent version 20.0.4 (December 2020) by using
`Debian Backports`_. First, copy the following line to your
``/etc/apt/sources.list``::

deb http://ftp.debian.org/debian buster-backports main

Then, update your local package lists::

$ sudo apt-get update

You can then install the latest version using::

$ sudo apt-get -t buster-backports install gunicorn

oldstable ("stretch")
---------------------

While Debian releases newer than Stretch will give you gunicorn with Python 3
support no matter if you install the gunicorn or gunicorn3 package for Stretch
you specifically have to install gunicorn3 to get Python 3 support.

The version of Gunicorn in the Debian_ "oldstable" distribution is 19.6.0
(December 2020). You can install it using::

$ sudo apt-get install gunicorn3
$ sudo apt-get install gunicorn

You can also use the most recent version 19.7.1 (December 2020) by using
You may have access to a more recent packaged version by using
`Debian Backports`_. First, copy the following line to your
``/etc/apt/sources.list``::

deb http://ftp.debian.org/debian stretch-backports main
deb http://ftp.debian.org/debian bookworm-backports main

Then, update your local package lists::

$ sudo apt-get update

You can then install the latest version using::
You can then install the latest available version using::

$ sudo apt-get -t stretch-backports install gunicorn3
$ sudo apt-get -t bookworm-backports install gunicorn

Testing ("bullseye") / Unstable ("sid")
---------------------------------------
Testing (as of 2024, "trixie") / Unstable ("sid")
-------------------------------------------------

"bullseye" and "sid" contain the latest released version of Gunicorn 20.0.4
(December 2020). You can install it in the usual way::
"trixie" and "sid" ship the most recently packaged version of Gunicorn 20.1.0
(2021-04-28). You can install it in the usual way::

$ sudo apt-get install gunicorn


Ubuntu
======

Ubuntu_ 20.04 LTS (Focal Fossa) or later contains the Gunicorn package by
default 20.0.4 (December 2020) so that you can install it in the usual way::
Ubuntu_ 20.04 LTS (Focal Fossa) and later ship packages similar to Debian's,
so you can install it in the usual way::

$ sudo apt-get update
$ sudo apt-get install gunicorn
61 changes: 60 additions & 1 deletion docs/source/settings.rst
@@ -82,6 +82,10 @@ The default behavior is to attempt inotify with a fallback to file
system polling. Generally, inotify should be preferred if available
because it consumes less system resources.

.. note::
    If the application fails to load while this option is used,
    the (potentially sensitive!) traceback will be shared in
    the response to subsequent HTTP requests.

.. note::
    In order to use the inotify reloader, you must have the ``inotify``
    package installed.
@@ -114,10 +118,13 @@ Valid engines are:

**Default:** ``[]``

Extends :ref:`reload` option to also watch and reload on additional files
Alternative or extension to the :ref:`reload` option: watch and reload on
additional files (e.g., templates, configurations, specifications, etc.),
either alongside :ref:`reload` or on its own.

.. versionadded:: 19.8
.. versionchanged:: 23.FIXME
    Option no longer silently ignored if used without :ref:`reload`.
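
For example, a minimal configuration sketch (the watched paths are
hypothetical placeholders for your own files):

.. code-block:: python

    # gunicorn.conf.py
    reload = True
    reload_extra_files = [
        "app/templates/base.html",
        "config/settings.toml",
    ]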

.. _spew:

@@ -1716,6 +1723,58 @@ The maximum number of simultaneous clients.

This setting only affects the ``gthread``, ``eventlet`` and ``gevent`` worker types.

.. _prune-function:

``prune_function``
~~~~~~~~~~~~~~~~~~

**Command line:** ``--prune-function``

**Default:**

.. code-block:: python

    def prune_score(pid):
        return 0

A function that is passed the process ID of a worker and returns a
score (such as total memory used). Every :ref:`prune-seconds` seconds,
the worker with the highest score is killed, unless its score is at or
below :ref:`prune-floor`.

.. _prune-seconds:

``prune_seconds``
~~~~~~~~~~~~~~~~~

**Command line:** ``--prune-seconds INT``

**Default:** ``0``

How many seconds to wait between killing the worker with the highest
score from the prune function. If set to 0 (the default), then no
pruning is done. The actual time waited is a random value between
95% and 105% of this value.

A worker that handles an unusually large request can significantly
increase its memory consumption for the rest of its existence, so rare
large requests tend to eventually make every worker unnecessarily
large. If such requests are indeed rare, you can significantly reduce
the total memory used by your service by periodically pruning the
largest worker process.

.. _prune-floor:

``prune_floor``
~~~~~~~~~~~~~~~

**Command line:** ``--prune-floor INT``

**Default:** ``0``

When the score from the prune function is at or below this value, the
worker will not be killed even if it has the highest score.
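
A configuration sketch combining the three settings, mirroring the example
added in ``examples/example_config.py`` below (the psutil-based scoring
function, the five-minute interval and the 300 MB floor are illustrative
choices, not recommendations):

.. code-block:: python

    import psutil

    def proc_vmsize(pid):
        # Score a worker by its virtual memory size in MB; a worker that
        # has already exited scores 0 and is never selected for pruning.
        try:
            return psutil.Process(pid).memory_info().vms / 1024 / 1024
        except psutil.NoSuchProcess:
            return 0

    prune_function = proc_vmsize  # measure worker size in MB of virtual memory
    prune_seconds = 5 * 60        # prune the largest worker roughly every five minutes
    prune_floor = 300             # never kill workers using 300 MB or less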

.. _max-requests:

``max_requests``
33 changes: 33 additions & 0 deletions examples/example_config.py
@@ -71,6 +71,39 @@
timeout = 30
keepalive = 2

#
# prune_function
# A function that is passed a process ID of a worker and returns a
# score (such as total memory used). Every prune_seconds seconds, the
# worker with the highest score is killed (unless the score is below
# the prune floor).
#
# prune_seconds
# How many seconds to wait between killing the worker with the highest
# score from the prune function. If set to 0 (the default), then no
# pruning is done. The actual time waited is a random value between
# 95% and 105% of this value.
#
# prune_floor
# When the score from the prune function is at or below this value, the
# worker will not be killed even if it has the highest score.
#

import psutil

def proc_vmsize(pid):
    # Return how many MB of virtual memory are being used by a worker process
    try:
        p = psutil.Process(pid)
        mb = p.memory_info().vms / 1024 / 1024
        return mb
    except psutil.NoSuchProcess:
        return 0

prune_seconds = 5*60 # Prune largest worker every 4.75-5.25m
prune_function = proc_vmsize # Measure worker size in MB of VM
prune_floor = 300 # Don't kill workers using <= 300 MB of VM

#
# spew - Install a trace function that spews every line of Python
# that is executed when running the server. This is the
4 changes: 4 additions & 0 deletions gunicorn/app/wsgiapp.py
@@ -11,6 +11,8 @@

class WSGIApplication(Application):
def init(self, parser, opts, args):
# pylint: disable=cyclic-import

self.app_uri = None

if opts.paste:
@@ -47,6 +49,8 @@ def load_wsgiapp(self):
return util.import_app(self.app_uri)

def load_pasteapp(self):
# pylint: disable=cyclic-import

from .pasterapp import get_wsgi_app
return get_wsgi_app(self.app_uri, defaults=self.cfg.paste_global_conf)

