Fix Pixart Slow Tests #6962
Conversation
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
```
@@ -332,7 +338,7 @@ def tearDown(self):
        torch.cuda.empty_cache()

    def test_pixart_1024(self):
        generator = torch.manual_seed(0)
```
Shouldn't it be created on CPU by default?
Yes. I just changed it to match how the generator is created in the other tests.
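For context, a minimal sketch (illustrative only, not code from this PR): `torch.manual_seed` seeds and returns the default generator, which lives on CPU, so the two forms below yield identical draws on CPU.

```python
import torch

# torch.manual_seed seeds the global default generator (which lives on CPU)
# and returns it, so the resulting generator's device is "cpu".
default_gen = torch.manual_seed(0)
print(default_gen.device)  # cpu

# An explicit generator pinned to CPU, seeded the same way.
explicit_gen = torch.Generator("cpu").manual_seed(0)

# Both produce identical draws on CPU.
assert torch.equal(
    torch.randn(3, generator=torch.manual_seed(0)),
    torch.randn(3, generator=torch.Generator("cpu").manual_seed(0)),
)
```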
```
@@ -341,14 +347,13 @@ def test_pixart_1024(self):
        image = pipe(prompt, generator=generator, output_type="np").images

        image_slice = image[0, -3:, -3:, -1]
        expected_slice = np.array([0.2891, 0.2749, 0.2595, 0.3020, 0.2698, 0.2671, 0.3169, 0.2993, 0.3179])
```
That's quite a big change. Where is this coming from?
The difference currently produced between the output and the expected output is quite large:
https://github.com/huggingface/diffusers/actions/runs/7876381153/job/21491555064#step:7:485
I just ran `print_tensor_test` on the current outputs in our new containers (with torch 2.2) and updated the values. If this seems like a deeper issue, let me know how you want to tackle it.
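For anyone reproducing this locally, a small hypothetical snippet for quantifying the drift; the `image_slice` stand-in below is not real pipeline output (in the test it would be `image[0, -3:, -3:, -1].flatten()`):

```python
import numpy as np

# Expected values from the updated test (copied from the diff above).
expected_slice = np.array([0.2891, 0.2749, 0.2595, 0.3020, 0.2698, 0.2671, 0.3169, 0.2993, 0.3179])

# Stand-in for the pipeline output slice, used here just to show the comparison.
image_slice = expected_slice + 1e-4

max_diff = np.abs(image_slice - expected_slice).max()
print(f"max abs diff: {max_diff:.4f}")
assert max_diff < 1e-2, f"slice drifted by {max_diff:.4f}"
```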
Can we pinpoint a version that passes with the current assertion values? Maybe with Torch 2.1, etc.?
Actually, the values will have to change here. This test is running 20 inference steps, which is too many.
But we're not changing `num_inference_steps` yet, no? It's okay if the values change because of a reduced number of steps, but at 20 steps the assertion values shouldn't change.
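As a sketch of what a reduced-step variant of the test could look like (assumptions: the public PixArt checkpoint and the prompt from the docs example; the step count of 2 is hypothetical, not something this PR changes):

```python
import torch
from diffusers import PixArtAlphaPipeline

# Sketch only: fewer denoising steps cut runtime, but the expected_slice
# values would have to be regenerated to match the new output.
pipe = PixArtAlphaPipeline.from_pretrained(
    "PixArt-alpha/PixArt-XL-2-1024-MS", torch_dtype=torch.float16
).to("cuda")

generator = torch.manual_seed(0)
image = pipe(
    "A small cactus with a happy face in the Sahara desert",
    num_inference_steps=2,  # hypothetical reduction from 20
    generator=generator,
    output_type="np",
).images
```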
The tests are failing with torch 2.1 as well. It could also be the CUDA version in the runners. I've run the pipeline example in the docs and the generated image looks fine.
The test is most likely failing due to changes in the torch or CUDA version. But if you would like to work on an alternative solution, feel free to take a look.
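One quick way to compare runners when chasing this kind of numerical drift (a generic snippet, not tied to this repo's test tooling):

```python
import torch

# Report the versions most likely to explain cross-runner numerical drift.
print("torch:", torch.__version__)
print("CUDA (build):", torch.version.cuda)
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
    print("cuDNN:", torch.backends.cudnn.version())
```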
Okay, let's merge then.
@DN6 the assertion value changes seem quite large to me. I think we should look into what's causing them to change that much before merging this.
What does this PR do?
Fixes precision-related issues in the PixArt slow tests.
Before submitting
Here are the documentation guidelines, and here are tips on formatting docstrings.
Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.