TST: test_mixed_freq_regular_first failing on master #16371
looks like maybe mpl 2.0.2? (not sure how recent it was)
I cannot reproduce this locally, not even with matplotlib 2.0.2 (and the last passing Travis build before the failures started already had 2.0.2).
@tacaswell shot in the dark here, but any guesses what's up with these kinds of failures? e.g. https://travis-ci.org/pandas-dev/pandas/jobs/236416092#L2162 No worries if you don't have any ideas.

I haven't been able to confirm it, but these failures are consistent with what would happen if we plot onto an existing figure with a different datetime frequency. All figures should be closed between tests, but who knows? Maybe something weird with pytest-xdist?

Another failure: https://travis-ci.org/TomAugspurger/pandas/jobs/233810539
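The suspected failure mode above can be illustrated with a minimal sketch (hypothetical, not one of the actual failing tests): if a figure is left open, the next piece of code that relies on the implicit pyplot state picks up the same axes, whose x-axis is already configured for the earlier series' frequency.

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend, as on CI
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

# A first "test" plots a daily-frequency series via the implicit pyplot state.
daily = pd.Series(np.arange(10.0),
                  index=pd.period_range("2017-01-01", periods=10, freq="D"))
ax1 = daily.plot()

# Without a plt.close() in between, a later "test" that asks pyplot for the
# current axes gets the very same object, already set up for daily frequency:
# a datetime-frequency mismatch waiting to happen.
ax2 = plt.gca()
assert ax1 is ax2  # leaked state: both "tests" share one axes object
plt.close("all")
```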
Collisions from the pyplot global state is plausible. Does it work if you restart the test?

Flipping through the builds on master it looks like it sometimes passes and sometimes fails. In the mpl test suite we have had issues where one test is missing the cleanup logic (which in this case I assume is taken care of by the class?). The dictionary iteration randomization means sometimes the sensitive test runs after a test that properly cleans up and sometimes not (at least I think that is the mechanism; I have not dug too far into it).
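The per-test cleanup logic mentioned above is commonly enforced with an autouse pytest fixture. A hypothetical sketch, not pandas' actual test harness:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend for test runs
import matplotlib.pyplot as plt
import pytest

@pytest.fixture(autouse=True)
def close_figures():
    """Close every open figure after each test so no pyplot state leaks."""
    yield
    plt.close("all")
```

With `autouse=True` the fixture applies to every test in scope, so a test that forgets its own cleanup can no longer poison the tests that run after it, regardless of test ordering.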
> Collisions from the pyplot global state is plausible.

This is the best guess we have right now, thanks. I'll try to replace these instances with explicit fig / axes creation.

> Does it work if you restart the test?

Yeah, it's failing like 20-30% of the time.

Thanks
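The replacement mentioned above, explicit fig / axes creation instead of the implicit pyplot state, might look like this (a sketch, not the actual patch):

```python
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

# Instead of series.plot() drawing on whatever plt.gca() returns, create the
# figure and axes explicitly and pass the axes in. Each test then owns its
# own figure, independent of global pyplot state.
fig, ax = plt.subplots()
ts = pd.Series(np.arange(5.0),
               index=pd.date_range("2017-01-01", periods=5, freq="D"))
ts.plot(ax=ax)
assert ax.figure is fig  # the plot went onto our axes, not a leaked one
plt.close(fig)           # deterministic cleanup, no reliance on global state
```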
Adding an explicit
Started with https://travis-ci.org/pandas-dev/pandas/builds/232766866, which clearly (I think?) isn't to blame.
I don't see any differences in dependencies between https://travis-ci.org/pandas-dev/pandas/jobs/232654666 and https://travis-ci.org/pandas-dev/pandas/jobs/232766873; will dig in more later.