Preliminary confirmation: Image gen now runs on 4 GB cards #486
Comments
@lstein I think this can be closed -- this is absolutely fantastic. I'm getting something like ~2 mins for a 50-step basic txt2img prompt via dream with default settings, and ~9 minutes for the same with 250 steps on a Quadro P1000, which is certainly no speed demon of a card. I'd call this an excellent result considering that a lot more machines just got access to all of this functionality!
This is fantastic news. Thanks for the confirmations!
I can confirm that image generation works on 4 GB cards. However, to run it I had to remove the memory check. It ran fine without it, so I wonder if the check is incorrect/incomplete. Still amazed that it's now possible to generate high-quality 512x512 images on a laptop's GPU.
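For context, a pre-flight check of this kind is typically just a comparison of the card's total VRAM against a hard-coded threshold. A minimal sketch of what such a check might look like (the threshold, function name, and placement are assumptions, not the project's actual code):

```python
# Hypothetical sketch of a pre-flight VRAM check similar to the one described above.
# The 4 GB threshold and the function name are illustrative assumptions.
import torch

MIN_VRAM_GB = 4.0  # assumed minimum; cards below this would be rejected outright

def check_vram(device_index: int = 0) -> None:
    if not torch.cuda.is_available():
        return  # running on CPU; nothing to check
    props = torch.cuda.get_device_properties(device_index)
    total_gb = props.total_memory / (1024 ** 3)
    if total_gb < MIN_VRAM_GB:
        raise RuntimeError(
            f"GPU has {total_gb:.1f} GB of VRAM; at least {MIN_VRAM_GB} GB is required."
        )
```

A check like this compares total VRAM rather than the actual peak usage during sampling, so it can reject cards that would in fact work, which would be consistent with the observation above that removing it caused no problems.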
I had noticed that for me the model now takes about 50% longer to load (60 seconds compared to 40 seconds). I was too busy testing the VRAM fixes for image generation to notice it until recently. @lstein, can you point me to the patch for this so I can have a look sometime?
I just merged in a patch from @mh-dm which appears to dramatically reduce the memory requirements for loading the model. Instead of using 4.3 GB in my benchmarks, the patched version requires just 2.17 GB. Along with the image generation optimization, this makes me hopeful that the system will now run on 4 GB cards.
If anyone would like to test this, please check out the "development" branch.
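The merged change itself lives in the development branch; as a rough illustration of the kind of technique that cuts model-load memory roughly in half, one can keep the checkpoint weights on the CPU while loading and convert the model to float16 before moving it to the GPU. This is only a sketch under those assumptions, not the actual patch; the config/checkpoint paths and the half-precision choice are illustrative.

```python
# Illustrative sketch -- not the merged patch itself. Loading the checkpoint with
# map_location="cpu" avoids allocating full-precision weights on the GPU, and
# converting to float16 before .to(device) roughly halves the resident model size.
import torch
from omegaconf import OmegaConf
from ldm.util import instantiate_from_config  # helper present in the ldm codebase

def load_model_half(config_path: str, ckpt_path: str, device: str = "cuda"):
    config = OmegaConf.load(config_path)
    state_dict = torch.load(ckpt_path, map_location="cpu")["state_dict"]
    model = instantiate_from_config(config.model)
    model.load_state_dict(state_dict, strict=False)
    del state_dict  # drop the CPU copy before the model is moved to the GPU
    return model.half().to(device).eval()
```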