Major rework of documentation & many small fixes #1527

Merged: 1 commit, Aug 18, 2020
51 changes: 18 additions & 33 deletions README.md
@@ -17,63 +17,48 @@

## Description

Locust is an easy-to-use, distributed, user load testing tool. It is intended for load-testing web sites (or other systems) and
figuring out how many concurrent users a system can handle.
Locust is an easy-to-use, scriptable and scalable performance testing tool.

The idea is that during a test, a swarm of simulated users will attack your website. The behavior of each user is defined by you
using Python code, and the swarming process is monitored from a web UI in real-time. This will help you battle test and identify
bottlenecks in your code before letting real users in.

Locust is completely event-based, and therefore it's possible to support thousands of concurrent users on a single machine.
In contrast to many other event-based apps it doesn't use callbacks. Instead it uses light-weight processes, through <a href="http://www.gevent.org/">gevent</a>.
Each locust swarming your site is actually running inside its own process (or greenlet, to be precise).
This allows you to write very expressive scenarios in Python without complicating your code with callbacks.
You define the behaviour of your users in regular Python code, instead of using a clunky UI or domain-specific language.

This makes Locust infinitely expandable and very developer friendly.

## Features

* **Write user test scenarios in plain-old Python**<br>
No need for clunky UIs or bloated XML—just code as you normally would. Based on coroutines instead
of callbacks, your code looks and behaves like normal, blocking Python code.

If you want your users to loop, perform some conditional behaviour or do some calculations, you just use the regular programming constructs provided by Python. Locust runs every user inside its own greenlet (a lightweight process/coroutine). This enables you to write your tests like normal (blocking) Python code instead of having to use callbacks or some other mechanism.
Because your scenarios are "just Python", you can use your regular IDE and version control your tests as regular code (as opposed to some other tools that use XML or binary formats).

* **Distributed & Scalable - supports hundreds of thousands of users**<br>
Locust supports running load tests distributed over multiple machines.
Being event-based, even one Locust node can handle thousands of users in a single process.
Part of the reason behind this is that even if you simulate that many users, not all are actively hitting your system. Often, users are idle figuring out what to do next. Requests per second != number of users online.

Locust makes it easy to run load tests distributed over multiple machines.
It is event-based (using <a href="http://www.gevent.org/">gevent</a>), which makes it possible for a single process to handle many thousands of concurrent users. While there may be other tools capable of doing more requests per second on given hardware, the low overhead of each Locust user makes it very suitable for testing highly concurrent workloads.

* **Web-based UI**<br>
Locust has a neat HTML+JS user interface that shows all relevant test details in real-time. And since the UI is web-based, it's cross-platform and easily extendable.

Locust has a user-friendly web interface that shows the progress of your test in real-time.

* **Can test any system**<br>
Even though Locust is web-oriented, it can be used to test almost any system. Just write a client for whatever you wish to test and swarm it with users! It's super easy!

* **Hackable**<br>
Locust is very small and very hackable and we intend to keep it that way. All heavy lifting of evented I/O and coroutines is delegated to gevent. The brittleness of alternative testing tools was the reason we created Locust.
Even though Locust primarily works with web sites/services, it can be used to test almost any system or protocol.

* **Hackable**

Locust is small and very flexible and we intend to keep it that way.
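
The "users are mostly idle" point from the scalability bullet above can be sketched in plain Python. This is a hedged illustration only, using the stdlib's asyncio as a stand-in for gevent's greenlets (Locust itself uses gevent): ten thousand concurrent simulated users, each spending its time idle, complete in roughly one sleep duration rather than ten thousand of them.

```python
import asyncio
import time

async def user(n):
    # Each simulated "user" spends most of its time idle ("thinking"),
    # so one process can juggle thousands of them cooperatively.
    await asyncio.sleep(0.1)
    return n

async def main():
    start = time.perf_counter()
    results = await asyncio.gather(*(user(i) for i in range(10_000)))
    return len(results), time.perf_counter() - start

count, elapsed = asyncio.run(main())
print(f"{count} idle users finished in {elapsed:.2f}s")
```

The point is not the framework but the concurrency model: cooperative scheduling makes the per-user overhead tiny as long as users spend most of their time waiting.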

## Documentation

More info and documentation can be found at: <a href="https://docs.locust.io/">https://docs.locust.io/</a>

## Questions/help?

For questions about how to use Locust, feel free to stop by our Slack or ask questions on Stack Overflow tagged with locust.

## Bug reporting

Open a Github issue and follow the template listed there.

## Authors

- <a href="http://cgbystrom.com">Carl Bystr&ouml;m</a> (@<a href="https://twitter.com/cgbystrom">cgbystrom</a> on Twitter)
- <a href="http://heyman.info">Jonatan Heyman</a> (@<a href="https://twitter.com/jonatanheyman">jonatanheyman</a> on Twitter)
- Joakim Hamrén (@<a href="https://twitter.com/Jahaaja">Jahaaja</a>)
- Hugo Heyman (@<a href="https://twitter.com/hugoheyman">hugoheyman</a>)
- Lars Holmberg

## License

Open source licensed under the MIT license (see _LICENSE_ file for details).


## Supported Python Versions

Locust is supported on Python 3.6, 3.7 and 3.8.
Open source licensed under the MIT license (see _LICENSE_ file for details).
30 changes: 15 additions & 15 deletions docs/generating-custom-load-shape.rst
@@ -1,8 +1,8 @@
.. _generating-custom-load-shape:

=================================
============================================================
Generating a custom load shape using a `LoadTestShape` class
=================================
============================================================

Sometimes a completely custom-shaped load test is required that cannot be achieved by simply setting or changing the user count and spawn rate, for example generating a spike during the test or ramping up and down at custom times. In these cases, using the `LoadTestShape` class gives you complete control over the user count and spawn rate at all times.

@@ -14,25 +14,25 @@ Define your class inheriting the `LoadTestShape` class in your locust file. If t
Examples
---------------------------------------------

There are also some [examples on github](https://github.com/locustio/locust/tree/master/examples/custom_shape) including:
There are also some `examples on github <https://github.com/locustio/locust/tree/master/examples/custom_shape>`_ including:

- Generating a double wave shape
- Time-based stages like K6
- Step load pattern like Visual Studio

Here is a simple example that will increase user count in blocks of 100 then stop the load test after 10 minutes:

```python
class MyCustomShape(LoadTestShape):
    time_limit = 600
    spawn_rate = 20

    def tick(self):
        run_time = self.get_run_time()

        if run_time < self.time_limit:
            user_count = round(run_time, -2)
            return (user_count, self.spawn_rate)

        return None
```

.. code-block:: python

    class MyCustomShape(LoadTestShape):
        time_limit = 600
        spawn_rate = 20

        def tick(self):
            run_time = self.get_run_time()

            if run_time < self.time_limit:
                user_count = round(run_time, -2)
                return (user_count, self.spawn_rate)

            return None
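
As a side note on the example: `round(run_time, -2)` rounds the elapsed seconds to the nearest multiple of 100, which is what produces the blocks of 100 users. A quick standalone check (plain Python, no Locust required):

```python
# round(x, -2) rounds to the nearest multiple of 100, so the target
# user count steps up in blocks of 100 as the run time grows.
for run_time in (10, 149, 151, 260, 599):
    print(run_time, "->", round(run_time, -2))
# 10 -> 0, 149 -> 100, 151 -> 200, 260 -> 300, 599 -> 600
```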
28 changes: 28 additions & 0 deletions docs/history.rst
@@ -0,0 +1,28 @@
.. _history:

===============================
The history of Locust
===============================

Locust was created because we were fed up with existing solutions. None of them solve the
right problem and, to us, they miss the point. We've tried both Apache JMeter and Tsung.
Both tools are quite OK to use; we've used the former many times benchmarking stuff at work.
JMeter comes with a UI, which you might think for a second is a good thing. But you soon realize it's
a PITA to "code" your testing scenarios through some point-and-click interface. Secondly, JMeter
is thread-bound. This means for every user you want to simulate, you need a separate thread.
Needless to say, benchmarking thousands of users on a single machine just isn't feasible.

Tsung, on the other hand, does not have these thread issues as it's written in Erlang. It can make
use of the light-weight processes offered by BEAM itself and happily scale up. But when it comes to
defining the test scenarios, Tsung is as limited as JMeter. It offers an XML-based DSL to define how
a user should behave when testing. I guess you can imagine the horror of "coding" this. Displaying
any sort of graph or report once the test has completed requires you to post-process the log files
generated during the test. Only then can you get an understanding of how the test went.

Anyway, we've tried to address these issues when creating Locust. Hopefully none of the above
pain points remain.

I guess you could say we're really just trying to scratch our own itch here. We hope others will
find it as useful as we do.

- `Jonatan Heyman <http://heyman.info>`_ (`@jonatanheyman <https://twitter.com/jonatanheyman>`_ on Twitter)
2 changes: 2 additions & 0 deletions docs/installation.rst
@@ -1,3 +1,5 @@
.. _installation:

Installation
============

2 changes: 1 addition & 1 deletion docs/running-locust-distributed.rst
@@ -101,7 +101,7 @@ See :ref:`running-locust-distributed-without-web-ui`


Generating a custom load shape using a `LoadTestShape` class
=============================================
============================================================

See :ref:`generating-custom-load-shape`

2 changes: 2 additions & 0 deletions docs/testing-other-systems.rst
@@ -1,3 +1,5 @@
.. _testing-other-systems:

===========================================
Testing other systems using custom clients
===========================================
70 changes: 24 additions & 46 deletions docs/what-is-locust.rst
@@ -2,85 +2,63 @@
What is Locust?
===============================

Locust is an easy-to-use, distributed, user load testing tool. It is intended for load-testing web sites
(or other systems) and figuring out how many concurrent users a system can handle.
Locust is an easy-to-use, scriptable and scalable performance testing tool.

The idea is that during a test, a swarm of `locust <http://en.wikipedia.org/wiki/Locust>`_ users
will attack your website. The behavior of each user is defined by you using Python code, and the
swarming process is monitored from a web UI in real-time. This will help you battle test and identify
bottlenecks in your code before letting real users in.
You define the behaviour of your users in regular Python code, instead of using a clunky UI or domain-specific language.

Locust is completely event-based, and therefore it's possible to support thousands of concurrent
users on a single machine. In contrast to many other event-based apps it doesn't use callbacks.
Instead it uses light-weight processes, through `gevent <http://www.gevent.org/>`_. Each locust
swarming your site is actually running inside its own process (or greenlet, to be precise). This
allows you to write very expressive scenarios in Python without complicating your code with callbacks.
This makes Locust infinitely expandable and very developer friendly.

To start using Locust, go to :ref:`installation`

Features
========

* **Write user test scenarios in plain-old Python**

No need for clunky UIs or bloated XML—just code as you normally would. Based on coroutines instead
of callbacks, your code looks and behaves like normal, blocking Python code.
If you want your users to loop, perform some conditional behaviour or do some calculations, you just use the regular programming constructs provided by Python.
Locust runs every user inside its own greenlet (a lightweight process/coroutine). This enables you to write your tests like normal (blocking) Python code instead of having to use callbacks or some other mechanism.
Because your scenarios are "just Python", you can use your regular IDE and version control your tests as regular code (as opposed to some other tools that use XML or binary formats).

* **Distributed & Scalable - supports hundreds of thousands of users**

Locust supports running load tests distributed over multiple machines.
Being event-based, even one Locust node can handle thousands of users in a single process.
Part of the reason behind this is that even if you simulate that many users, not all are actively
hitting your system. Often, users are idle figuring out what to do next.
Requests per second != number of users online.

Locust makes it easy to run load tests distributed over multiple machines.
It is event-based (using `gevent <http://www.gevent.org/>`_), which makes it possible for a single process to handle many thousands of concurrent users.
While there may be other tools capable of doing more requests per second on given hardware, the low overhead of each Locust user makes it very suitable for testing highly concurrent workloads.

* **Web-based UI**

Locust has a neat HTML+JS user interface that shows relevant test details in real-time. And since
the UI is web-based, it's cross-platform and easily extendable.
Locust has a user-friendly web interface that shows the progress of your test in real-time. You can even change the load while the test is running. It can also be run without the UI, making it easy to use for CI/CD testing.

* **Can test any system**

Even though Locust is web-oriented, it can be used to test almost any system. Just write a client
for whatever you wish to test and swarm it with locusts! It's super easy!
Even though Locust primarily works with web sites/services, it can be used to test almost any system or protocol. Just :ref:`write a client <testing-other-systems>`
for what you want to test, or `explore some created by the community <https://github.com/SvenskaSpel/locust-plugins#users>`_.

* **Hackable**

Locust is small and very hackable and we intend to keep it that way. All heavy lifting of evented
I/O and coroutines is delegated to gevent. The brittleness of alternative testing tools was the
reason we created Locust.

Background
==========
Locust is small and very flexible and we intend to keep it that way. If you want to `send reporting data to that database & graphing system you like <https://github.com/SvenskaSpel/locust-plugins/blob/master/locust_plugins/listeners.py>`_, wrap calls to a REST API to handle the particulars of your system or run a :ref:`totally custom load pattern <generating-custom-load-shape>`, there is nothing stopping you!
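
To make the "write a client for almost any system" idea concrete, here is a hypothetical sketch in plain Python. It is not Locust's actual client API: a real custom Locust client would report each request's name, timing and outcome to Locust's event hooks so results appear in the statistics; this stand-in just records them locally.

```python
import time

class TimedClient:
    """Hypothetical stand-in for a custom load-testing client: wrap any
    call, time it, and record (name, success, elapsed_ms) per request."""

    def __init__(self):
        self.results = []

    def send(self, name, func, *args, **kwargs):
        start = time.perf_counter()
        try:
            value = func(*args, **kwargs)
            ok = True
        except Exception:
            value, ok = None, False
        elapsed_ms = (time.perf_counter() - start) * 1000.0
        self.results.append((name, ok, elapsed_ms))
        return value

client = TimedClient()
client.send("add", lambda: 1 + 1)
client.send("boom", lambda: 1 / 0)  # failures are recorded, not raised
print([(name, ok) for name, ok, _ in client.results])  # [('add', True), ('boom', False)]
```

The same wrap-and-measure pattern works for any protocol for which a Python client library exists, which is what makes the approach so general.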

Locust was created because we were fed up with existing solutions. None of them solve the
right problem and, to us, they miss the point. We've tried both Apache JMeter and Tsung.
Both tools are quite OK to use; we've used the former many times benchmarking stuff at work.
JMeter comes with a UI, which you might think for a second is a good thing. But you soon realize it's
a PITA to "code" your testing scenarios through some point-and-click interface. Secondly, JMeter
is thread-bound. This means for every user you want to simulate, you need a separate thread.
Needless to say, benchmarking thousands of users on a single machine just isn't feasible.
Name & background
=================

Tsung, on the other hand, does not have these thread issues as it's written in Erlang. It can make
use of the light-weight processes offered by BEAM itself and happily scale up. But when it comes to
defining the test scenarios, Tsung is as limited as JMeter. It offers an XML-based DSL to define how
a user should behave when testing. I guess you can imagine the horror of "coding" this. Displaying
any sorts of graphs or reports when completed requires you to post-process the log files generated from
the test. Only then can you get an understanding of how the test went.
`Locust <http://en.wikipedia.org/wiki/Locust>`_ takes its name from the grasshopper species, known for their swarming behaviour.

Anyway, we've tried to address these issues when creating Locust. Hopefully none of the above
pain points remain.
Previous versions of Locust used terminology borrowed from nature (swarming, hatching, attacking, etc.), but Locust now employs more industry-standard naming.

I guess you could say we're really just trying to scratch our own itch here. We hope others will
find it as useful as we do.
:ref:`history`

Authors
=======

- `Jonatan Heyman <http://heyman.info>`_ (`@jonatanheyman <https://twitter.com/jonatanheyman>`_ on Twitter)
- Lars Holmberg (`@cyberw <https://github.com/cyberw>`_ on Github)
- Carl Byström (`@cgbystrom <https://twitter.com/cgbystrom>`_ on Twitter)
- Joakim Hamrén (`@Jahaaja <https://twitter.com/Jahaaja>`_ on Twitter)
- Hugo Heyman (`@hugoheyman <https://twitter.com/hugoheyman>`_ on Twitter)

Many thanks to our other great `contributors <https://github.com/locustio/locust/graphs/contributors>`_!


License
=======
