Decoding latents independently using AnimateDiffVideoToVideoPipeline takes more memory than outputting images directly #7378
Labels: bug

Comments
Cc: @DN6

What happens when you delete the pipe object?

When I delete the pipe object, it still runs out of memory. Here's what I'm running:
Just taking a wild guess, but can you try something like:

```python
with torch.no_grad():
    images = pipe.decode_latents(combined_outputs)
```

Additionally, before running it, it might help to trigger garbage collection from both torch CUDA and Python.
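For reference, a minimal sketch of that suggestion, assuming `pipe` and `combined_outputs` are the pipeline object and the latents from the report (the names are illustrative). The likely reason it helps: the pipeline's own `__call__` runs under `torch.no_grad()`, while a manual `decode_latents` call outside it tracks gradients by default, which inflates memory during the VAE decode.

```python
import gc

import torch

# Drop unreferenced Python objects and release cached CUDA blocks first
gc.collect()
torch.cuda.empty_cache()

# Decode without building an autograd graph; gradients are never needed at inference time
with torch.no_grad():
    images = pipe.decode_latents(combined_outputs)
```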
Works! Good guess HAHA
Describe the bug
When I use AnimateDiffVideoToVideoPipeline to output images, my server doesn't run out of memory. However, if I output the latents and then manually run pipe.decode_latents, I somehow run out of memory.
Reproduction
Code with manual latent decoding:
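A minimal sketch of this manual-decoding path; the checkpoint, motion adapter, prompt, and input clip below are illustrative placeholders rather than the original script:

```python
import imageio
import torch
from PIL import Image
from diffusers import AnimateDiffVideoToVideoPipeline, MotionAdapter

# Illustrative model choices; any SD 1.5 checkpoint plus an AnimateDiff motion adapter
# follows the same pattern
adapter = MotionAdapter.from_pretrained(
    "guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16
)
pipe = AnimateDiffVideoToVideoPipeline.from_pretrained(
    "emilianJR/epiCRealism", motion_adapter=adapter, torch_dtype=torch.float16
).to("cuda")

# Placeholder input clip, loaded as a list of PIL frames
video = [Image.fromarray(frame).convert("RGB") for frame in imageio.mimread("input.gif")]

# Ask the pipeline for latents instead of decoded frames
latents = pipe(
    video=video,
    prompt="a panda surfing a wave, high quality",
    strength=0.6,
    output_type="latent",
).frames

# Manually decoding afterwards is where the out-of-memory error shows up
frames = pipe.decode_latents(latents)
```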
I get the error that is in the logs pasted below.
But if I directly run the pipe to get image outputs, I have no error.
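For contrast, a sketch of the direct path under the same assumptions, where the pipeline decodes the latents itself inside its own no-grad inference call:

```python
# Same pipeline and input as above, but letting the pipeline decode and post-process itself
result = pipe(
    video=video,
    prompt="a panda surfing a wave, high quality",
    strength=0.6,
)
frames = result.frames[0]  # decoded PIL frames for the first (and only) prompt
```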
I have about 22.5 GB of VRAM available on my server.
Logs
System Info
diffusers version: 0.28.0.dev0

Who can help?
@DN6 @saya