Bug: assert_produces_warning(None) not raising AssertionError with warning #38626
Conversation
These changes cause 23 test failures.
@@ -2724,11 +2724,10 @@ class for all warnings. To check that no warning is returned,
     extra_warnings = []

     for actual_warning in w:
isn't this code functionally the same as existing?
I thought if `expected_warning` is `False` or `None`, then `extra_warnings` does not get appended to because of the `continue`. So the check at the end for raising on extra warnings doesn't get triggered.
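The skipped bookkeeping can be sketched with a minimal stand-in (an illustration of the described control flow, not pandas' actual `assert_produces_warning` implementation):

```python
import warnings


def check_warnings(expected_warning, fn):
    """Simplified stand-in for the buggy loop described above."""
    with warnings.catch_warnings(record=True) as w:
        warnings.simplefilter("always")
        fn()
    extra_warnings = []
    for actual_warning in w:
        if not expected_warning:
            # Bug: a falsy expected_warning hits `continue`, so the
            # bookkeeping below is skipped and extra_warnings stays empty.
            continue
        if not issubclass(actual_warning.category, expected_warning):
            extra_warnings.append(actual_warning.category.__name__)
    if extra_warnings:
        raise AssertionError(f"Caused unexpected warning(s): {extra_warnings}")


# With expected_warning=None, a stray warning slips through silently:
check_warnings(None, lambda: warnings.warn("oops", UserWarning))
```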
The current implementation ignores the else clause below, which consolidates extra warnings. I see that I introduced that mistake in one of my previous PRs; this PR reverts it.
Probably, if some functions are extracted, that would improve the readability of the logic (maybe in a separate PR).
The added tests do raise on master as well, so behavior for them has changed
Looks like this fixes the issue reported.
Let us take a look at the failing tests.
-            if not expected_warning:
-                continue
-
-            expected_warning = cast(Type[Warning], expected_warning)
-            if issubclass(actual_warning.category, expected_warning):
+            if expected_warning and issubclass(
+                actual_warning.category, expected_warning
+            ):
Yes, this is a good catch.
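The effect of the combined condition can be sketched with a minimal stand-in (an illustration of the corrected control flow, not pandas' actual code):

```python
import warnings


def check_warnings_fixed(expected_warning, fn):
    """Sketch of the corrected loop: the combined condition means that
    when expected_warning is None/False, every caught warning falls
    through to the extra_warnings branch."""
    with warnings.catch_warnings(record=True) as w:
        warnings.simplefilter("always")
        fn()
    extra_warnings = []
    for actual_warning in w:
        if expected_warning and issubclass(
            actual_warning.category, expected_warning
        ):
            # The real code records here that the expected warning was seen.
            pass
        else:
            extra_warnings.append(actual_warning.category.__name__)
    if extra_warnings:
        raise AssertionError(f"Caused unexpected warning(s): {extra_warnings}")
```

With this shape, passing `None` and emitting a warning raises `AssertionError`, which is the behavior the linked issue asks for.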
@@ -152,3 +152,20 @@ def test_right_category_wrong_match_raises(pair_different_warnings):
     with tm.assert_produces_warning(target_category, match=r"^Match this"):
         warnings.warn("Do not match it", target_category)
         warnings.warn("Match this", other_category)


+@pytest.mark.parametrize("false_or_none", [False, None])
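The regression test added under that `parametrize` line plausibly has a shape like the following (a self-contained sketch using a stdlib stand-in rather than pandas' `tm.assert_produces_warning`, so the names here are assumptions):

```python
import warnings

import pytest


def assert_produces_no_warning(expected_warning, fn):
    # Minimal stand-in: a falsy expected_warning means "no warnings allowed".
    assert not expected_warning
    with warnings.catch_warnings(record=True) as w:
        warnings.simplefilter("always")
        fn()
    if w:
        names = [x.category.__name__ for x in w]
        raise AssertionError(f"Caused unexpected warning(s): {names}")


@pytest.mark.parametrize("false_or_none", [False, None])
def test_false_or_none_raises_on_warning(false_or_none):
    # With False/None, an emitted warning must become an AssertionError.
    with pytest.raises(AssertionError):
        assert_produces_no_warning(
            false_or_none, lambda: warnings.warn("oops", UserWarning)
        )
```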
Good tests to prevent the issue from happening!
wow
ok these at least should be straightforward to fix
Bottom 4 failures can be fixed by removing pandas/tests/indexes/test_common.py, lines 362 to 363 (at ca52e39).
Looks like this deprecation came from #37877; this removal would make sense if those indexes should give a …
Can also just xfail these 3 tests and open an issue as well.
@@ -543,6 +543,7 @@ def test_tda_add_sub_index(self):
         expected = tdi - tdi
         tm.assert_index_equal(result, expected)

+    @pytest.mark.xfail(reason="GH38630", strict=False)
Is there a reason you are passing strict? We default to true - e.g. if these are fixed we want the tests to fail (as a hint to remove the xfail).
Problem I ran into is that some parameterizations actually pass, so `strict=True` fails for these. Couldn't figure out a way to xfail only the failing combinations because the parameterizations are complex (2 defined elsewhere in fixtures, 1 uses multiple calls to `pytest.mark.parametrize`).
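For contrast, when all the parameters live in a single `pytest.mark.parametrize` call, individual combinations can be xfailed via `pytest.param(..., marks=...)` while keeping `strict=True`. This is a hypothetical example, not the pandas test in question; with fixtures and stacked parametrize calls that approach doesn't apply, which motivates `strict=False` here:

```python
import pytest


@pytest.mark.parametrize(
    "box",
    [
        list,
        # Only this combination is expected to fail, so only it is marked.
        pytest.param(
            tuple,
            marks=pytest.mark.xfail(reason="GH38630", strict=True),
        ),
    ],
)
def test_box_roundtrip(box):
    # Deliberately fails for tuple (a tuple never equals a list),
    # so the strict xfail is actually exercised.
    assert box([1, 2]) == [1, 2]
```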
ok that's fine
ping on green
Seeing 2 failures where `pandas/tests/io/parser/test_common.py::test_chunks_have_consistent_numerical_type[python]` gives an unexpected `ResourceWarning`. Do you know if this warning occurs consistently? Should something in the test be modified to handle a potential `ResourceWarning`? Or is this just another xfail case?
We are trying to track these cases down, as something is leaking.
If they are causing actual failures, then it's OK to xfail (and list these in the associated issue with checkboxes).
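If suppressing rather than xfailing were preferred, one possible mitigation (an assumption on my part, not what the pandas suite actually does) is to ignore only `ResourceWarning` around the code under test, so an intermittent leak warning cannot fail the test:

```python
import warnings


def run_ignoring_resource_warnings(fn):
    # Suppress only ResourceWarning; any other warning category still
    # propagates and can be asserted on as usual.
    with warnings.catch_warnings():
        warnings.simplefilter("ignore", ResourceWarning)
        return fn()


def noisy():
    # Stand-in for code that intermittently leaks a file handle.
    warnings.warn("unclosed file", ResourceWarning)
    return 42
```

In a pytest test, the equivalent is the built-in mark `@pytest.mark.filterwarnings("ignore::ResourceWarning")` on the affected test.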
Green except Travis.
Thanks @mzeitlin11, keep em coming!
black pandas
git diff upstream/master -u -- "*.py" | flake8 --diff