Testing
Constantly testing your code after new implementations or refactorings is very important for the following reasons:
- By testing all functionalities of the framework, releases of the main branch are certain to work as expected.
- By testing, you can easily check whether the current functionalities in f3dasm are still working.
- By testing your code, you can find bugs more easily and they will surface earlier in the development process.
Testing also secures your implementation for future releases: if somebody else adds functionality to the framework that stops your code from working (and you have written a proper test), the new feature will not be merged because the tests fail.
Testing is part of the Pull request checks.
The following tools are used and recommended for testing within the f3dasm framework.
VS Code has a handy extension that lets you manage testing your code:
- Click on the testing icon in the left ribbon bar.
- If no test framework has been configured, click on 'Configure Python Tests'.
- Select pytest.
- Select 'Use existing config file (setup.cfg)'.
All the tests of the f3dasm package will appear in the side menu. You can click on an individual test to run it.
The main testing tool used in the f3dasm framework is pytest, with some added functionalities/packages:
- pytest-cov, an extension of pytest which reports the code coverage of your tests.
- hypothesis, a tool for property-based testing (great video on that here); a minimal sketch is shown below.
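To illustrate what a property-based test looks like, here is a minimal sketch using hypothesis. The function under test (a hypothetical add) and the chosen strategies are illustrative only and are not part of f3dasm:

import pytest
from hypothesis import given
from hypothesis import strategies as st


def add(a: int, b: int) -> int:
    # Hypothetical function under test
    return a + b


@given(st.integers(), st.integers())
def test_add_is_commutative(a, b):
    # hypothesis generates many (a, b) pairs and checks the property for each
    assert add(a, b) == add(b, a)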
Testing packages are automatically installed when you install the development requirements (requirements_dev.txt), or you can install them manually with the requirements file found in the /tests/ folder:
pip install -r ./tests/requirements_dev.txt
Options for pytest can be found in the setup.cfg file of the f3dasm package under the header [tool:pytest]. More on these options can be found here.
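For illustration, a [tool:pytest] section typically looks something like the sketch below. The option names (testpaths, addopts, markers) are standard pytest settings, but the values shown here are assumptions and may differ from the actual f3dasm configuration:

[tool:pytest]
# Directory in which pytest collects tests (assumed value)
testpaths = tests
# Extra command-line options applied to every run (assumed value)
addopts = --cov=f3dasm
# Custom markers used in the test suite
markers =
    smoke: quick subset of the most critical tests
    requires_dependency: tests that need an external dependency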
First make sure that you have activated the correct environment! Tests can be run in various ways:
- By navigating to the f3dasm repository and calling pytest in the terminal. This will run all the tests in the /tests/ folder.
- By running a group of tests, e.g. pytest -v -s -m smoke. This will run all the tests that are marked with the 'smoke' tag.
- By executing a command of the make-file: make test-smoke is equivalent to pytest -v -s -m smoke.
When the tests are finished, a summary shows the number of tests that are passed/failed and a coverage report.
It is a good habit to write a test for your new functionality right after you have created it. The following steps demonstrate how to make a new Python test file:
- Create a new .py file in the /tests/ folder. The filename should start with test_.
- Put your new test file in the appropriate sub-folder, or make a new one (see the example layout after this list). Make sure to make every folder a Python package by creating an (empty) __init__.py file!
- Copy the following code as a stub:
import pytest


def test_example():
    pass


if __name__ == "__main__":  # pragma: no cover
    pytest.main()
- Create a function without any arguments and put the code you want to test there. Arguments to test functions are used to import fixtures; more on that in the section on fixtures below.
You can run the single test by calling pytest ./tests/path_to_test_script.py::name_of_test
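As an illustration of the folder structure described above, a new test could be placed like this (the sub-folder and file names are hypothetical):

tests/
    __init__.py
    my_subpackage/          # hypothetical sub-folder grouping related tests
        __init__.py         # empty file that makes the folder a package
        test_my_feature.py  # new test file, name starts with test_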
If you have to create complicated data structures before you can actually test some behaviour, it is useful to create fixtures of this data so it can be reused. More on that can be found here.
import pytest


@pytest.fixture
def itemlist():
    return ['apple', 'orange', 'banana']


def test_length_of_list(itemlist):
    itemlist.append('grapefruit')
    assert len(itemlist) == 4
Fixtures can be placed:
- In the module file itself. The fixture can then be used within this module.
- In a separate file conftest.py within a (sub)package. You can specify the scope of this fixture to make it available throughout the whole package:
@pytest.fixture(scope="package")
def fixture_to_be_used():
    ...
Source: Pytest documentation on fixtures
If you want to test behaviour for lots of different combinations, you might want to parametrize certain variables.
@pytest.mark.parametrize("num, output",[(1,11),(2,22),(3,35),(4,44)])
def test_multiplication_11(num, output):
assert 11*num == output
This will create 4 tests, each with the tuple values in the parametrize-decorator. You can also add the parametrize decorator multiple times to a function for easier readability:
@pytest.mark.parametrize("seed", [42, 43])
@pytest.mark.parametrize("optimizer", ['Adam', 'SGD'])
@pytest.mark.parametrize("function", ['Levy','Ackley'])
def test_all_optimizers_and_functions(seed, function, optimizer):
...
Source: Pytest documentation on parametrizing
You can mark tests in the following ways:
- You can add a mark-decorator to a class or function to mark that specific test:
@pytest.mark.mymark
def test_mymark():
    ...
- You can add a mark to a test file to give all tests in the file that mark, e.g.
pytestmark = pytest.mark.smoke
When you run pytest, all the tests in the /tests/ folder will be run.
You can run tests with the following markers in the f3dasm framework:
- requires_dependency
- smoke
If your code requires a dependency that is not handled by the Python requirements enforced by the f3dasm package (e.g. Abaqus for finite-element simulations), you can mark the test with the requires_dependency(name) mark:
@pytest.mark.requires_dependency("abaqus")
def test_abaqus():
    ...
You can skip these tests with the following (custom) flag when calling pytest:
pytest -S abaqus
You can skip all tests that have a requires_dependency mark, regardless of which package, by calling:
pytest -S all
This can be handy for making tests that cannot be executed on remote continuous integration services, like GitHub Actions.
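The -S flag is not a built-in pytest option; such flags are typically wired up via hooks in a conftest.py. The sketch below shows one possible way the flag and the requires_dependency mark could be implemented; it is an illustration under these assumptions, not necessarily how f3dasm actually implements it:

# conftest.py (sketch, assuming the custom -S flag described above)
import pytest


def pytest_addoption(parser):
    # Register the custom -S / --skip-dependency command-line option
    parser.addoption("-S", "--skip-dependency", action="store", default=None,
                     help="skip tests marked requires_dependency(<name>), or 'all'")


def pytest_collection_modifyitems(config, items):
    skip_name = config.getoption("--skip-dependency")
    if skip_name is None:
        return
    for item in items:
        marker = item.get_closest_marker("requires_dependency")
        if marker is None:
            continue
        dependency = marker.args[0] if marker.args else None
        if skip_name == "all" or dependency == skip_name:
            # Mark the collected test as skipped instead of running it
            item.add_marker(pytest.mark.skip(
                reason=f"requires dependency '{dependency}'"))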
The "smoke mark" is a way to label certain test cases as "smoke tests", which are a subset of all tests that are considered to be the most important and critical to the overall functionality of the system being tested. These tests are typically run first and often, as they are considered to be the most critical, fast and indicative of the overall health of the system.
An example of applying a smoke mark is as follows: if you have a test that tests all 500+ combinations of optimizers, test-functions and samplers, you might consider creating a smaller test that only does 10 combinations or so and marking that as a smoke test. When you are developing, the smoke tests can be executed quickly, as 10 tests don't take that much time compared to the full 500+ test-set.
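As a sketch of this idea, a reduced combination test could be marked as a smoke test like the snippet below; the optimizer and function names are taken from the earlier parametrize example and are only illustrative:

import pytest


@pytest.mark.smoke
@pytest.mark.parametrize("optimizer", ['Adam', 'SGD'])
@pytest.mark.parametrize("function", ['Levy', 'Ackley'])
def test_small_combination_smoke(optimizer, function):
    ...  # run the (hypothetical) optimization check for this reduced combination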
Code coverage is a measure of the amount of code that is executed during the running of tests. It helps to ensure that a certain minimum amount of the codebase is being tested and that there are no untested areas. The measure is typically expressed as a percentage of the total number of statements in the codebase that are covered by tests.
In the context of the f3dasm framework, the team has set a minimum code coverage requirement of 70%. This means that at least 70% of the statements in the codebase should be executed during the running of tests.
Whenever you run pytest, it will return a table denoting the files and statements that have been missed. You can automatically create an HTML page to investigate the code coverage by executing the following make-command from the command line:
make test-html
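If you prefer calling pytest directly instead of the make-command, an HTML coverage report can also be produced with the standard pytest-cov options (the exact options used by the make-file are an assumption):
pytest --cov=f3dasm --cov-report=html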
- Please read more on the documentation page of pytest