
[Issue]: OutOfMemoryError V100 #3739

Open
2 tasks done
pmiscn opened this issue Feb 3, 2025 · 3 comments
Labels
invalid This doesn't seem right

Comments

@pmiscn

pmiscn commented Feb 3, 2025

Issue Description

V100 16 GB ×4 on ESXi 7 with GPU passthrough; both Ubuntu and Windows report this error.


08:19:01-647919 ERROR Processing: step=base args={'prompt_embeds': 'cuda:1:torch.bfloat16:torch.Size([1, 77, 768])',
'negative_prompt_embeds': 'cuda:1:torch.bfloat16:torch.Size([1, 77, 768])', 'guidance_scale':
6, 'generator': [<torch._C.Generator object at 0x00000246AED3A630>], 'callback_on_step_end':
<function diffusers_callback at 0x00000246A37F0C20>, 'callback_on_step_end_tensor_inputs':
['latents', 'prompt_embeds', 'negative_prompt_embeds', 'noise_pred'], 'num_inference_steps':
20, 'eta': 1.0, 'guidance_rescale': 0.7, 'output_type': 'latent', 'width': 1024, 'height':
1024} CUDA out of memory. Tried to allocate 8.00 GiB. GPU 1 has a total capacity of 16.00 GiB
of which 4.50 GiB is free. Of the allocated memory 10.02 GiB is allocated by PyTorch, and
60.32 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large
try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See
documentation for Memory Management
(https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
08:19:01-675918 ERROR Processing: OutOfMemoryError
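The traceback's own suggestion can be tried by exporting the allocator setting before the process starts; a minimal sketch (the variable must be set before torch initializes CUDA, i.e. before launching the webui):

```python
import os

# Must be set before torch allocates any CUDA memory, so export it in the
# shell or set it at the very top of the launcher. expandable_segments lets
# the caching allocator grow segments instead of leaving fragmented
# fixed-size ones behind.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "expandable_segments:True"
```

Note this only mitigates fragmentation of already-reserved memory; it will not make a single 8 GiB allocation fit on a 16 GiB card that only has 4.5 GiB free.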

Version Platform Description

No response

Relevant log output

Backend

Diffusers

UI

Standard

Branch

Master

Model

StableDiffusion 1.5

Acknowledgements

  • I have read the above and searched for existing issues
  • I confirm that this is classified correctly and it's not an extension issue
@vladmandic
Owner

Run with --debug and upload the log from startup up to the error message; posting a single error line is never sufficient.

Also, you provided no version/platform information, so pretty much nothing can be done - marking this issue as invalid until complete information is provided.
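For reference, a typical debug invocation might look like the following; the launcher script name is an assumption (`webui.bat` on Windows), so treat the path as an example rather than an exact command:

```shell
# Capture everything from startup to the error in one log file.
./webui.sh --debug 2>&1 | tee sdnext-debug.log
```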

@vladmandic vladmandic added the invalid This doesn't seem right label Feb 3, 2025
@Disty0
Collaborator

Disty0 commented Feb 3, 2025

Tesla V100 is old and doesn't support flash attention. You can't run SD 1.5 at 1024x1024 without flash attention or memory-efficient attention if you have under 24 GB of VRAM per GPU.
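The 8.00 GiB allocation in the log is consistent with naive (non-flash) attention at this resolution, which materializes the full score matrix. A back-of-the-envelope sketch; the head count and fp32 softmax upcast are my assumptions, not taken from the issue:

```python
# Estimate the attention-score tensor for SD 1.5's largest self-attention
# layer at 1024x1024: naive attention builds a heads x tokens x tokens
# matrix, while flash/memory-efficient attention never materializes it.
latent = 1024 // 8            # the VAE downsamples by 8 -> 128x128 latent
tokens = latent * latent      # 16384 query/key positions
heads, fp32_bytes = 8, 4      # assumed head count and softmax-upcast dtype
scores_gib = heads * tokens * tokens * fp32_bytes / 2**30
print(scores_gib)             # -> 8.0, matching "Tried to allocate 8.00 GiB"
```

At 512x512 the same formula gives 0.5 GiB, which is why SD 1.5 fits comfortably at its native resolution even without flash attention.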

You can try enabling Dynamic Atten in the SDP options, then restart the webui for the change to apply. Dynamic Atten is more efficient with VRAM usage.

@vladmandic
Owner

@pmiscn please see the updates from Disty0 and me above.
