fail_under setting with precision is not working #403
Comments
Thanks. Can you provide details about how you configure your virtualenv and run your tests?
Thanks for the quick response! Sure, I think most of what you're looking for is here: https://github.com/votingworks/arlo/blob/89c50e43216963f06af6e4c5104b67fd33e4ff36/Makefile. Here are the relevant bits for running tests/coverage:

    PIPENV=python3.7 -m pipenv

    test-server:
        FLASK_ENV=test ${PIPENV} run python -m pytest ${FILE} \
            -k '${TEST}' --ignore=arlo-client -vv ${FLAGS}

    test-server-coverage:
        FLAGS='--cov=. ${FLAGS}' make test-server

I don't know exactly what to tell you about the virtualenv. I didn't set up the repo and don't quite understand how it all works, to be honest.
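(For what it's worth, with FLAGS expanded by the second target, make test-server-coverage effectively ends up running something roughly like:

    FLASK_ENV=test python3.7 -m pipenv run python -m pytest --ignore=arlo-client -vv --cov=.

plus whatever FILE, TEST, and extra FLAGS get passed in.)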
Unrelated to this bug, but as I've been working with the test coverage more, I've also realized that it would be more useful to me to set a threshold on the actual number of missed lines instead of a percentage. I'm introducing test coverage to a repo that didn't have it before, so I'm trying to lock in the coverage at its current state so I don't regress (until I have time to invest in covering all the remaining bits). The problem with using a percentage is that whenever I write new code, it changes the percentage. Even if all the new code is covered, the percentage increases, so I'll have to update the fail_under threshold with each PR. If I could lock in the actual number of uncovered lines, it would be a much more useful baseline to compare against when I add new code. Wondering if you have any thoughts on this. If useful, I could open a new issue to discuss.
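For concreteness, the kind of check I have in mind is roughly this (a rough sketch, assuming a coverage.py version with JSON report support via coverage json; the baseline number is made up):

    # check_missed_lines.py -- rough sketch, not part of pytest-cov.
    # Fails the build only if the count of uncovered lines grows past a fixed baseline.
    # Assumes `coverage json` has already been run and produced coverage.json.
    import json
    import sys

    BASELINE_MISSED_LINES = 250  # hypothetical locked-in value for this repo

    with open("coverage.json") as fh:
        totals = json.load(fh)["totals"]

    missed = totals["missing_lines"]
    if missed > BASELINE_MISSED_LINES:
        sys.exit(f"FAIL: {missed} missed lines exceeds baseline of {BASELINE_MISSED_LINES}")
    print(f"OK: {missed} missed lines (baseline {BASELINE_MISSED_LINES})")

Run after the coverage step, this would only complain when new uncovered lines are introduced, regardless of how the percentage moves.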
I'm also experiencing this issue. Playing with the numbers:
If you are seeing this issue, can you increase the reporting precision (the precision setting under [report]) to see what the actual coverage value is? For example, if the total coverage is 93.18757, it will be reported to two decimal places as 93.19, but the actual value is still below 93.19, so a fail_under of 93.19 will fail.
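To illustrate the rounding effect (illustration only, not pytest-cov's actual comparison code):

    # Illustration of how a rounded report can look like it meets the threshold.
    total = 93.18757            # hypothetical real coverage total
    print(f"{total:.2f}")       # displayed with precision=2 -> "93.19"
    print(total >= 93.19)       # False -> fail_under=93.19 still fails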
This PR fixes this reporting issue.
Summary
I have report precision set to 2 and fail_under set to 97.47, and my test coverage total is reading as 97.47, but I'm getting a failure message and a failing exit code (exit code 2).

Expected vs actual result
Expected: test coverage passes
Actual:
FAIL Required test coverage of 97.47% not reached. Total coverage: 97.47%
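For reference, the relevant configuration is essentially the following (a sketch reconstructed from the values above; the actual file in the repo may differ, e.g. it may live under [coverage:report] in setup.cfg):

    # .coveragerc (sketch)
    [report]
    precision = 2
    fail_under = 97.47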
I even tried modifying fail_under to 97.469, in which case I got this even more nonsensical message:

FAIL Required test coverage of 97.469% not reached. Total coverage: 97.47%
Reproducer
Versions

Output of relevant packages: pip list, python --version, pytest --version, etc. Make sure you include the complete output of tox if you use it (it will show versions of various things).

Config
Include your tox.ini, pytest.ini, .coveragerc, setup.cfg, or any relevant configuration.

Code
Link to your repository, gist, or pastebin, or just paste raw code that illustrates the issue.
If you paste raw code, make sure you quote it, e.g.:
votingworks/arlo@89c50e4