System Requirement? #4
I am running the provided inference script on a 48GB A6000. It uses about 44GB with the default settings and takes around 3 minutes 45 seconds to generate an 8 fps 768x423 video. You have to modify line 198 to …
So... I guess I should give up if the goal is to run it on a PC? I mean, even the best available consumer card has 24GB of VRAM (4090). I would appreciate it if someone could confirm.
I was also hoping we could get this running on local 24GB cards. Maybe at least a smaller resolution would be possible, since memory scales sharply (roughly quadratically, via attention) with resolution...
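To make the scaling comment above concrete, here is a back-of-the-envelope sketch of how the attention map of a video diffusion transformer grows with resolution. All configuration values (VAE downsample factor, patch size, temporal compression, head count, fp16 precision) are assumptions for illustration, not this model's actual settings:

```python
# Rough estimate of attention-map memory vs. resolution for a video DiT.
# Every parameter below is an assumed placeholder, not the model's real config.

def attention_tokens(width, height, frames, patch=2, spatial_down=8, temporal_down=4):
    """Token count after an (assumed) 8x VAE downsample, 2x2 patchify,
    and 4x temporal compression."""
    w = width // spatial_down // patch
    h = height // spatial_down // patch
    t = max(1, frames // temporal_down)
    return w * h * t

def attention_map_gib(tokens, heads=24, bytes_per_el=2):
    """Memory of one full tokens x tokens attention map across all heads,
    in GiB, at fp16 (2 bytes per element)."""
    return tokens * tokens * heads * bytes_per_el / 1024**3

for res in [(768, 432), (512, 288)]:
    n = attention_tokens(*res, frames=49)
    print(res, "tokens:", n, "attn map GiB:", round(attention_map_gib(n), 2))
```

Because the attention map is quadratic in token count, dropping from 768x432 to 512x288 (2.25x fewer pixels) cuts attention memory by roughly 5x, which is why lower resolutions help so much, even if it is not literally exponential.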
Here are some results from my testing today:
It looks like those with < 48GB are out of luck even at lower resolutions and frame counts.
Hopefully the new CogVideoX Image-to-Video model will fit into 24 GB; their 5B img2vid model does: https://github.com/huggingface/diffusers/releases/tag/v0.30.3
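A quick sanity check on why a 5B-parameter model can plausibly fit in 24 GB: the weights alone at half precision are only about 9 GiB. The 5e9 figure is an assumed round number; activations, the VAE, and the text encoder add more on top, which is why offloading and tiling still matter on a 24 GB card:

```python
# Back-of-the-envelope parameter memory for a "5B" transformer at
# different precisions. This counts weights only, not activations.

def weight_gib(params, bytes_per_param):
    """Memory for the raw weight tensors, in GiB."""
    return params * bytes_per_param / 1024**3

PARAMS_5B = 5e9  # assumed round figure for a "5B" model

for name, nbytes in [("fp32", 4), ("fp16/bf16", 2), ("int8", 1)]:
    print(f"{name}: {weight_gib(PARAMS_5B, nbytes):.1f} GiB")
```

At fp16/bf16 this leaves roughly 14 GiB of headroom on a 24 GB card for everything else, while fp32 weights alone would already consume most of it.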
You can try this repo (https://github.com/NUS-HPC-AI-Lab/VideoSys) to run Vchitect-2.0 on a 24GB card.
What is the minimum VRAM required to run this?
Also, is this CUDA-only? Can it run on MPS?
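On the MPS question: whether this particular model runs on Apple Silicon depends on its ops (any custom CUDA kernels will not port automatically). The standard PyTorch fallback pattern, shown as a generic sketch rather than anything this repo is confirmed to support, looks like:

```python
# Generic PyTorch device selection with MPS fallback. This only demonstrates
# the standard pattern; it does not guarantee the model's ops work on MPS.
import torch

def pick_device():
    if torch.cuda.is_available():
        return torch.device("cuda")
    if torch.backends.mps.is_available():
        return torch.device("mps")
    return torch.device("cpu")

device = pick_device()
x = torch.randn(2, 3, device=device)  # smoke test: allocate on the chosen device
print(device, tuple(x.shape))
```

Even when MPS is available, models written against CUDA-only extensions will still fail at the first unsupported kernel, so a CPU fallback (or `PYTORCH_ENABLE_MPS_FALLBACK=1`) is worth keeping.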