No module named 'patch_conv' #138

Closed
nitinmukesh opened this issue Jan 9, 2025 · 5 comments
Labels: Answered (Answered the question)

Comments

nitinmukesh commented Jan 9, 2025

Installed diffusers from source (pip install git+https://github.com/huggingface/diffusers) before using Sana in diffusers.

diffusers 0.33.0.dev0


import torch
from diffusers import SanaPipeline

pipe = SanaPipeline.from_pretrained(
    "Efficient-Large-Model/Sana_1600M_4Kpx_BF16_diffusers",
    variant="bf16",
    torch_dtype=torch.bfloat16,
)
pipe.to("cuda")

pipe.vae.to(torch.bfloat16)
pipe.text_encoder.to(torch.bfloat16)

# for 4096x4096 image generation OOM issue
if pipe.transformer.config.sample_size == 128:
    from patch_conv import convert_model
    pipe.vae = convert_model(pipe.vae, splits=32)

prompt = 'A cute 🐼 eating 🎋, ink drawing style'
image = pipe(
    prompt=prompt,
    height=4096,
    width=4096,
    guidance_scale=5.0,
    num_inference_steps=20,
    generator=torch.Generator(device="cuda").manual_seed(42),
)[0]

image[0].save("sana.png")
Traceback (most recent call last):
  File "C:\ai1\diffuser_t2i\Sana4K.py", line 16, in <module>
    from patch_conv import convert_model
ModuleNotFoundError: No module named 'patch_conv'
@geronimi73

pip install patch_conv

@lawrence-cj (Collaborator)

patch_conv is a temporary workaround for the OOM. The official VAE tiling will be a better solution once this PR is merged:

huggingface/diffusers#10510
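
For reference, once that PR lands, the diffusers-native path would look roughly like this; a minimal sketch, assuming the Sana VAE exposes the usual enable_tiling() method so patch_conv is no longer needed:

import torch
from diffusers import SanaPipeline

pipe = SanaPipeline.from_pretrained(
    "Efficient-Large-Model/Sana_1600M_4Kpx_BF16_diffusers",
    variant="bf16",
    torch_dtype=torch.bfloat16,
)
pipe.to("cuda")

# Decode the 4096x4096 latents tile by tile instead of patching the VAE convolutions.
pipe.vae.enable_tiling()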

@nitinmukesh (Author)

@lawrence-cj

After installing patch_conv (pip install patch_conv):

(venv) C:\ai1\diffuser_t2i>python app.py
INFO: Could not find files for the given pattern(s).
* Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.

A mixture of bf16 and non-bf16 filenames will be loaded.
Loaded bf16 filenames:
[transformer/diffusion_pytorch_model.bf16.safetensors, text_encoder/model.bf16-00002-of-00002.safetensors, vae/diffusion_pytorch_model.bf16.safetensors, text_encoder/model.bf16-00001-of-00002.safetensors]
Loaded non-bf16 filenames:
[transformer/diffusion_pytorch_model-00001-of-00002.safetensors, transformer/diffusion_pytorch_model-00002-of-00002.safetensors
If this behavior is not expected, please check your folder structure.
Loading checkpoint shards: 100%|████████████████████████████████████| 2/2 [00:00<00:00,  2.77it/s]
Loading pipeline components...: 100%|███████████████████████████████| 5/5 [00:07<00:00,  1.40s/it]
Sana memory optimization mode: Low VRAM
Traceback (most recent call last):
  File "C:\ai1\diffuser_t2i\venv\lib\site-packages\gradio\queueing.py", line 625, in process_events
    response = await route_utils.call_process_api(
  File "C:\ai1\diffuser_t2i\venv\lib\site-packages\gradio\route_utils.py", line 322, in call_process_api
    output = await app.get_blocks().process_api(
  File "C:\ai1\diffuser_t2i\venv\lib\site-packages\gradio\blocks.py", line 2047, in process_api
    result = await self.call_function(
  File "C:\ai1\diffuser_t2i\venv\lib\site-packages\gradio\blocks.py", line 1594, in call_function
    prediction = await anyio.to_thread.run_sync(  # type: ignore
  File "C:\ai1\diffuser_t2i\venv\lib\site-packages\anyio\to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "C:\ai1\diffuser_t2i\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 2505, in run_sync_in_worker_thread
    return await future
  File "C:\ai1\diffuser_t2i\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 1005, in run
    result = context.run(func, *args)
  File "C:\ai1\diffuser_t2i\venv\lib\site-packages\gradio\utils.py", line 869, in wrapper
    response = f(*args, **kwargs)
  File "C:\ai1\diffuser_t2i\tabs\tab_sana.py", line 103, in generate_images
    images = pipe(**inference_params).images
  File "C:\ai1\diffuser_t2i\venv\lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "C:\ai1\diffuser_t2i\venv\lib\site-packages\diffusers\pipelines\sana\pipeline_sana.py", line 744, in __call__
    raise ValueError("Invalid sample size")
ValueError: Invalid sample size


lawrence-cj commented Jan 10, 2025

It seems the code is not the newest version. Would you mind running
pip install git+https://github.com/huggingface/diffusers again?

@nitinmukesh
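
As a quick sanity check after reinstalling, a minimal sketch; per the snippet above, the 4K checkpoint reports sample_size == 128, which older pipeline code rejects with the "Invalid sample size" error:

import diffusers
import torch
from diffusers import SanaPipeline

# Confirm the dev build was actually installed into the active environment.
print(diffusers.__version__)  # expect a 0.33.0.dev0-style version from main

pipe = SanaPipeline.from_pretrained(
    "Efficient-Large-Model/Sana_1600M_4Kpx_BF16_diffusers",
    variant="bf16",
    torch_dtype=torch.bfloat16,
)
# The 4K checkpoint reports sample_size == 128; pipeline versions that do not
# recognize this size raise ValueError("Invalid sample size") as seen above.
print(pipe.transformer.config.sample_size)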

@nitinmukesh (Author)

Thank you, it fixed the problem.
The VRAM requirement is huge with the 4K model, which is understandable. Waiting for a quantized model.
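
A possible stopgap while waiting: load only the transformer in 4-bit with bitsandbytes. A minimal sketch, assuming a recent diffusers build with BitsAndBytesConfig support and bitsandbytes installed; the VRAM savings and quality impact for the 4K checkpoint are untested here:

import torch
from diffusers import BitsAndBytesConfig, SanaPipeline, SanaTransformer2DModel

model_id = "Efficient-Large-Model/Sana_1600M_4Kpx_BF16_diffusers"

# Quantize the transformer weights to 4-bit NF4 to shrink its VRAM footprint.
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
transformer = SanaTransformer2DModel.from_pretrained(
    model_id,
    subfolder="transformer",
    variant="bf16",
    quantization_config=quant_config,
    torch_dtype=torch.bfloat16,
)

pipe = SanaPipeline.from_pretrained(
    model_id,
    transformer=transformer,
    variant="bf16",
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()  # keep idle components off the GPU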
