
[Issue]: SDXL 1.0 Producing Black Images #1858

Closed
lewisp95 opened this issue Jul 28, 2023 · 42 comments
Labels: cannot reproduce (Reported issue cannot be easily reproducible) · question (Further information is requested)

Comments

@lewisp95

Issue Description

When attempting to generate images with SDXL 1.0 all I get is a black square [EXAMPLE ATTACHED]
[attached image: 00000-Kitten in space suit]

Version Platform Description

Windows 10 [64 bit]
Google Chrome

12:37:28-168928 INFO Starting SD.Next
12:37:28-172918 INFO Python 3.10.9 on Windows
12:37:28-226138 INFO Version: a32bf08 Fri Jul 28 12:15:25 2023 +0300
12:37:28-677988 DEBUG Setting environment tuning
12:37:28-679982 DEBUG Torch overrides: cuda=False rocm=False ipex=False diml=False
12:37:28-680980 DEBUG Torch allowed: cuda=True rocm=True ipex=True diml=True
12:37:28-684983 INFO nVidia CUDA toolkit detected
12:37:28-870251 DEBUG Repository update time: Fri Jul 28 10:15:25 2023
12:37:28-871223 DEBUG Previous setup time: Fri Jul 28 12:28:49 2023
12:37:28-873217 INFO Disabled extensions: []
12:37:28-875150 INFO Enabled extensions-builtin: ['a1111-sd-webui-lycoris', 'clip-interrogator-ext', 'LDSR', 'Lora',
'multidiffusion-upscaler-for-automatic1111', 'ScuNET', 'sd-dynamic-thresholding',
'sd-extension-aesthetic-scorer', 'sd-extension-steps-animation', 'sd-extension-system-info',
'sd-webui-agent-scheduler', 'sd-webui-controlnet', 'sd-webui-model-converter', 'seed_travel',
'stable-diffusion-webui-images-browser', 'stable-diffusion-webui-rembg', 'SwinIR']
12:37:28-883121 INFO Enabled extensions: ['a1111-sd-webui-tagcomplete', 'canvas-zoom', 'sd-webui-ar',
'sd-webui-aspect-ratio-helper', 'sd-webui-infinite-image-browsing',
'Stable-Diffusion-Webui-Civitai-Helper', 'ultimate-upscale-for-automatic1111']
12:37:28-888121 DEBUG Latest extensions time: Fri Jul 28 12:25:12 2023
12:37:28-889105 DEBUG Timestamps: version:1690535725 setup:1690543729 extension:1690543512
12:37:28-892108 INFO No changes detected: Quick launch active
12:37:28-894092 INFO Verifying requirements
12:37:28-919064 INFO Disabled extensions: []
12:37:28-921059 INFO Enabled extensions-builtin: ['a1111-sd-webui-lycoris', 'clip-interrogator-ext', 'LDSR', 'Lora',
'multidiffusion-upscaler-for-automatic1111', 'ScuNET', 'sd-dynamic-thresholding',
'sd-extension-aesthetic-scorer', 'sd-extension-steps-animation', 'sd-extension-system-info',
'sd-webui-agent-scheduler', 'sd-webui-controlnet', 'sd-webui-model-converter', 'seed_travel',
'stable-diffusion-webui-images-browser', 'stable-diffusion-webui-rembg', 'SwinIR']
12:37:28-930036 INFO Enabled extensions: ['a1111-sd-webui-tagcomplete', 'canvas-zoom', 'sd-webui-ar',
'sd-webui-aspect-ratio-helper', 'sd-webui-infinite-image-browsing',
'Stable-Diffusion-Webui-Civitai-Helper', 'ultimate-upscale-for-automatic1111']

Relevant log output

12:37:58-961923 DEBUG    Model weights loaded: {'ram': {'used': 3.92, 'total': 15.94}, 'gpu': {'used': 1.08, 'total':
                         8.0}, 'retries': 0, 'oom': 0}
12:37:59-702036 DEBUG    Model weights moved: {'ram': {'used': 1.94, 'total': 15.94}, 'gpu': {'used': 3.11, 'total':
                         8.0}, 'retries': 0, 'oom': 0}
12:37:59-713007 INFO     Applying xformers cross attention optimization
12:37:59-766818 INFO     Embeddings: loaded=3 skipped=1
12:37:59-777787 INFO     Model loaded in 5.5s (load=1.0s config=0.1s create=2.2s apply=0.5s vae=0.9s move=0.7s)
12:38:00-046995 DEBUG    gc: collected=423 device=cuda {'ram': {'used': 1.97, 'total': 15.94}, 'gpu': {'used': 3.11,
                         'total': 8.0}, 'retries': 0, 'oom': 0}
12:38:00-049974 INFO     Model load finished: {'ram': {'used': 1.97, 'total': 15.94}, 'gpu': {'used': 3.11, 'total':
                         8.0}, 'retries': 0, 'oom': 0} cached=0
12:38:00-354070 DEBUG    gc: collected=124 device=cuda {'ram': {'used': 1.13, 'total': 15.94}, 'gpu': {'used': 3.11,
                         'total': 8.0}, 'retries': 0, 'oom': 0}
12:38:00-357072 INFO     Startup time: 31.4s (torch=6.8s gradio=1.0s libraries=2.7s vae=0.2s models=0.1s codeformer=0.1s
                         scripts=6.3s onchange=0.1s ui-txt2img=0.3s ui-img2img=0.2s ui-settings=0.1s ui-extensions=5.4s
                         ui-defaults=0.1s launch=0.3s app-started=0.7s checkpoint=6.9s)
12:38:00-361090 DEBUG    Server alive=True Requests=2 memory used: 1.13 total: 15.94
12:40:00-457119 DEBUG    Server alive=True Requests=107 memory used: 1.13 total: 15.94
12:40:03-727197 DEBUG    Paste prompt: Kitten in space suit
                         Steps: 20, Seed: 322289802, Sampler: Euler a, CFG scale: 6, Size: 1024x1024, Parser: Full
                         parser, Model: SDXL_sd_xl_base_1.0, Model hash: 31e35c80fc, Version: a32bf08, Pipeline:
                         Original, Token merging ratio: 0.5
12:40:18-319292 DEBUG    gc: collected=2028 device=cuda {'ram': {'used': 1.13, 'total': 15.94}, 'gpu': {'used': 3.11,
                         'total': 8.0}, 'retries': 0, 'oom': 0}
12:40:18-322255 DEBUG    txt2img: id_task=task(8in5a9tqo05c7b7)|prompt=Kitten in space
                         suit|negative_prompt=|prompt_styles=[]|steps=20|sampler_index=3|latent_index=None|restore_faces
                         =False|tiling=False|n_iter=1|batch_size=1|cfg_scale=6|clip_skip=1|seed=-1.0|subseed=-1.0|subsee
                         d_strength=0|seed_resize_from_h=0|seed_resize_from_w=0||height=1024|width=1024|enable_hr=False|
                         denoising_strength=0.7|hr_scale=2|hr_upscaler=Latent|hr_second_pass_steps=20|hr_resize_x=0|hr_r
                         esize_y=0|image_cfg_scale=6|diffusers_guidance_rescale=0.7|refiner_start=0.8||refiner_prompt=|r
                         efiner_negative=|override_settings_texts=[]args=(0, False, 'MultiDiffusion', False, True, 1024,
                         1024, 96, 96, 48, 1, 'None', 2, False, 10, 1, 1, 64, False, False, False, False, False, 0.4,
                         0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '',
                         'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False,
                         0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '',
                         'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False,
                         0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '',
                         'Background', 0.2, -1.0, False, 1536, 96, True, True, True, False, False, 7, 100, 'Constant',
                         0, 'Constant', 0, 4, False, 'x264', 'blend', 10, 0, 0, False, True, True, True, 'intermediate',
                         'animation', <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at
                         0x00000234B4008160>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at
                         0x00000234B204F760>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at
                         0x00000234B204D090>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at
                         0x00000234B204FEE0>, False, False, 'positive', 'comma', 0, False, False, '', 7, '', [], 0, '',
                         [], 0, '', [], True, False, False, False, 0, False, None, None, False, None, None, False, None,
                         None, False, None, None, False, 50, False, 4.0, '', 10.0, 'Linear', 3, False, 30.0, True,
                         False, False, 0, 0.0, 'Lanczos', 1, True, 0, 0, 0.001, 75, 0.0, False, True)
12:40:18-347189 DEBUG    Script process: ['Tiled Diffusion:0.0s', 'Tiled VAE:0.0s', 'Dynamic Thresholding (CFG Scale
                         Fix):0.0s', 'Steps animation:0.0s', 'Agent Scheduler:0.0s', 'ControlNet:0.0s', 'Aspect Ratio
                         picker:0.0s', 'Aspect Ratio Helper:0.0s']
12:40:18-350181 DEBUG    Script before-process-batch: ['Tiled Diffusion:0.0s', 'Tiled VAE:0.0s', 'Dynamic Thresholding
                         (CFG Scale Fix):0.0s', 'Steps animation:0.0s', 'Agent Scheduler:0.0s', 'ControlNet:0.0s',
                         'Aspect Ratio picker:0.0s', 'Aspect Ratio Helper:0.0s']
12:40:18-353172 DEBUG    Script process-batch: ['Tiled Diffusion:0.0s', 'Tiled VAE:0.0s', 'Dynamic Thresholding (CFG
                         Scale Fix):0.0s', 'Steps animation:0.0s', 'Agent Scheduler:0.0s', 'ControlNet:0.0s', 'Aspect
                         Ratio picker:0.0s', 'Aspect Ratio Helper:0.0s']
12:40:18-604499 DEBUG    Sampler: Euler a {'uses_ensd': True}
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:10<00:00,  1.98it/s]
12:40:44-668493 DEBUG    Script postprocess-batch: ['Tiled Diffusion:0.0s', 'Tiled VAE:0.0s', 'Dynamic Thresholding (CFG
                         Scale Fix):0.0s', 'Steps animation:0.0s', 'Agent Scheduler:0.0s', 'ControlNet:0.0s', 'Aspect
                         Ratio picker:0.0s', 'Aspect Ratio Helper:0.0s']
12:40:44-703377 DEBUG    Script postprocess-image: ['Tiled Diffusion:0.0s', 'Tiled VAE:0.0s', 'Dynamic Thresholding (CFG
                         Scale Fix):0.0s', 'Steps animation:0.0s', 'Agent Scheduler:0.0s', 'ControlNet:0.0s', 'Aspect
                         Ratio picker:0.0s', 'Aspect Ratio Helper:0.0s']
12:40:44-707366 DEBUG    Saving image: PNG K:\Stable
                         Diffusion\Local\stable-diffusion\Vlad\automatic\outputs/text\2023-07-28\00019-Kitten in space
                         suit.png (1024, 1024)
12:40:45-062821 DEBUG    gc: collected=592 device=cuda {'ram': {'used': 3.55, 'total': 15.94}, 'gpu': {'used': 3.41,
                         'total': 8.0}, 'retries': 0, 'oom': 0}
12:40:45-065814 DEBUG    Script postprocess: ['Tiled Diffusion:0.0s', 'Tiled VAE:0.0s', 'Dynamic Thresholding (CFG Scale
                         Fix):0.0s', 'Steps animation:0.0s', 'Agent Scheduler:0.0s', 'ControlNet:0.0s', 'Aspect Ratio
                         picker:0.0s', 'Aspect Ratio Helper:0.0s']
12:40:45-072823 DEBUG    Processed: 1 Memory: {'ram': {'used': 3.54, 'total': 15.94}, 'gpu': {'used': 3.41, 'total':
                         8.0}, 'retries': 0, 'oom': 0} txt
12:40:45-347089 DEBUG    gc: collected=220 device=cuda {'ram': {'used': 3.55, 'total': 15.94}, 'gpu': {'used': 3.16,
                         'total': 8.0}, 'retries': 0, 'oom': 0}

Acknowledgements

  • I have read the above and searched for existing issues
  • I confirm that this is classified correctly and it's not an extension or diffusers-specific issue
@lewisp95
Author

I'm using an Nvidia RTX 3070

@vladmandic
Owner

At least try running with --safe, as you have a number of extensions that are not compatible with diffusers.

@lewisp95
Author

At least try running with --safe, as you have a number of extensions that are not compatible with diffusers.

Ah yeah, I should have thought of that, that's the one thing I hadn't tried, I'll try that now and update you.

@lewisp95
Author

So I tried --safe and got the same result
13:38:06-333512 DEBUG Paste prompt: Kitten in space suit
Steps: 20, Seed: 1526630701, Sampler: Euler a, CFG scale: 6, Size: 1024x1024, Parser: Full
parser, Model: SDXL_sd_xl_base_1.0, Model hash: 31e35c80fc, Version: a32bf08, Pipeline:
Original, Operations: txt2img, Token merging ratio: 0.5
13:38:10-540668 DEBUG gc: collected=1192 device=cuda {'ram': {'used': 1.12, 'total': 15.94}, 'gpu': {'used': 3.11,
'total': 8.0}, 'retries': 0, 'oom': 0}
13:38:10-543661 DEBUG txt2img: id_task=task(d7164cmf65tgq2n)|prompt=Kitten in space
suit|negative_prompt=|prompt_styles=[]|steps=20|sampler_index=3|latent_index=None|restore_faces
=False|tiling=False|n_iter=1|batch_size=1|cfg_scale=6|clip_skip=1|seed=1526630701.0|subseed=-1.
0|subseed_strength=0|seed_resize_from_h=0|seed_resize_from_w=0||height=1024|width=1024|enable_h
r=False|denoising_strength=0.7|hr_scale=2|hr_upscaler=Latent|hr_second_pass_steps=20|hr_resize_
x=0|hr_resize_y=0|image_cfg_scale=6|diffusers_guidance_rescale=0.7|refiner_start=0.8||refiner_p
rompt=|refiner_negative=|override_settings_texts=[]args=(0, False, 'MultiDiffusion', False,
True, 1024, 1024, 96, 96, 48, 1, 'None', 2, False, 10, 1, 1, 64, False, False, False, False,
False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '',
'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False,
0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '',
'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False,
0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '',
'Background', 0.2, -1.0, False, 1536, 96, True, True, True, False, False, 7, 100, 'Constant',
0, 'Constant', 0, 4, False, 'x264', 'blend', 10, 0, 0, False, True, True, True, 'intermediate',
'animation', <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at
0x0000025FAACB3250>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at
0x0000025FAB784F40>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at
0x0000025FAB74EAD0>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at
0x0000025FAB7876D0>, False, False, 'positive', 'comma', 0, False, False, '', 7, '', [], 0, '',
[], 0, '', [], True, False, False, False, 0, False, None, None, False, None, None, False, None,
None, False, None, None, False, 50, False, 4.0, '', 10.0, 'Linear', 3, False, 30.0, True,
False, False, 0, 0.0, 'Lanczos', 1, True, 0, 0, 0.001, 75, 0.0, False, True)
13:38:10-562610 DEBUG Script process: ['Tiled Diffusion:0.0s', 'Tiled VAE:0.0s', 'Dynamic Thresholding (CFG Scale
Fix):0.0s', 'Steps animation:0.0s', 'Agent Scheduler:0.0s', 'ControlNet:0.0s']
13:38:10-566599 DEBUG Script before-process-batch: ['Tiled Diffusion:0.0s', 'Tiled VAE:0.0s', 'Dynamic Thresholding
(CFG Scale Fix):0.0s', 'Steps animation:0.0s', 'Agent Scheduler:0.0s', 'ControlNet:0.0s']
13:38:10-568596 DEBUG Script process-batch: ['Tiled Diffusion:0.0s', 'Tiled VAE:0.0s', 'Dynamic Thresholding (CFG
Scale Fix):0.0s', 'Steps animation:0.0s', 'Agent Scheduler:0.0s', 'ControlNet:0.0s']
13:38:10-951914 DEBUG Sampler: Euler a {'uses_ensd': True}
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:10<00:00, 1.97it/s]
13:38:37-032929 DEBUG Script postprocess-batch: ['Tiled Diffusion:0.0s', 'Tiled VAE:0.0s', 'Dynamic Thresholding (CFG
Scale Fix):0.0s', 'Steps animation:0.0s', 'Agent Scheduler:0.0s', 'ControlNet:0.0s']
13:38:37-064845 DEBUG Script postprocess-image: ['Tiled Diffusion:0.0s', 'Tiled VAE:0.0s', 'Dynamic Thresholding (CFG
Scale Fix):0.0s', 'Steps animation:0.0s', 'Agent Scheduler:0.0s', 'ControlNet:0.0s']
13:38:37-067835 DEBUG Saving image: PNG K:\Stable
Diffusion\Local\stable-diffusion\Vlad\automatic\outputs/text\2023-07-28\00024-Kitten in space
suit.png (1024, 1024)
13:38:37-409951 DEBUG gc: collected=578 device=cuda {'ram': {'used': 3.51, 'total': 15.94}, 'gpu': {'used': 3.41,
'total': 8.0}, 'retries': 0, 'oom': 0}
13:38:37-412915 DEBUG Script postprocess: ['Tiled Diffusion:0.0s', 'Tiled VAE:0.0s', 'Dynamic Thresholding (CFG Scale
Fix):0.0s', 'Steps animation:0.0s', 'Agent Scheduler:0.0s', 'ControlNet:0.0s']
13:38:37-421922 DEBUG Processed: 1 Memory: {'ram': {'used': 3.51, 'total': 15.94}, 'gpu': {'used': 3.41, 'total':
8.0}, 'retries': 0, 'oom': 0} txt
13:38:37-694185 DEBUG gc: collected=220 device=cuda {'ram': {'used': 3.51, 'total': 15.94}, 'gpu': {'used': 3.41,
'total': 8.0}, 'retries': 0, 'oom': 0}

@ToddAT

ToddAT commented Jul 28, 2023

I get the same result, both using --safe and running normally.

@TeutonJon78

SD.Next doesn't support running the SDXL models in Original backend mode (yet?). Try the diffusers backend.
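For reference, the backend can be selected at launch with a flag; `--backend diffusers` is the flag that worked for a commenter later in this thread, while the Windows script name below is an assumption:

```shell
# Linux / macOS
bash webui.sh --backend diffusers

# Windows (assumed launcher script name)
.\webui.bat --backend diffusers
```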

@ToddAT

ToddAT commented Jul 28, 2023

SD.Next doesn't support running the SDXL models in Original backend mode (yet?). Try the diffusers backend.

How do you do this?

@vladmandic
Owner

Read the wiki.

@Thom293

Thom293 commented Jul 29, 2023

I'm having the same issue, running in diffusers mode. XL was working 2 hours ago with the same settings. Now black images. I made no changes, but I did not turn off git update.
[attached screenshot: Screenshot 2023-07-28 231141]

@Nourollah

I have the same issue.

@bbecausereasonss

Have you tried no half vae?
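For context on the "no half vae" suggestion: the stock SDXL 1.0 VAE is known to overflow when decoding in half precision, and the resulting NaNs are saved as an all-black image; `--no-half-vae` keeps the VAE in fp32. A toy sketch of that failure signature (the helper below is hypothetical, not SD.Next code):

```python
import math

def check_decoded(pixels):
    """Check a VAE-decoded image (flat list of floats in [0, 1]).

    An fp16 VAE overflow produces NaNs, which render as a black PNG;
    raise instead of silently saving a black square.
    """
    if any(math.isnan(p) for p in pixels):
        raise RuntimeError("NaNs in VAE output; retry with --no-half-vae")
    # Clamp to the displayable range before saving.
    return [min(max(p, 0.0), 1.0) for p in pixels]
```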

@freke70

freke70 commented Jul 31, 2023

Exact same problem

@Nourollah

Have you tried no half vae?

Yes. It didn't work.

@freke70

freke70 commented Jul 31, 2023

I got it working now by doing this to start it:
bash webui.sh --safe --backend diffusers

Setting the diffusers backend in the GUI didn't do anything

@Nourollah

I got it working now by doing this to start it: bash webui.sh --safe --backend diffusers

Setting the diffusers backend in the GUI didn't do anything

I just tested it. The refiner model gets loaded under these circumstances but causes this error:
ValueError: Model expects an added time embedding vector of length 2560, but a vector of 2816 was created. The model has an incorrect config. Please check unet.config.time_embedding_type and text_encoder_2.config.projection_dim.

For the base model it still leads to this:
ERROR Diffusers failed loading model using pipeline: path/automatic/models/Stable-diffusion/XL1,0/sd_xl_base_1.0.safetensors Stable Diffusion __init__() got an unexpected keyword argument 'text_encoder_2' WARNING Model not loaded
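The 2560 vs. 2816 mismatch above is consistent with base-style conditioning being fed to the refiner. In the standard SDXL configs the added time embedding width is the pooled text width (`projection_dim`, 1280) plus 256 per micro-conditioning value; the base UNet takes six values and the refiner five, so a refiner UNet handed base conditioning sees 2816 where it expects 2560. A quick arithmetic check (dimensions assumed from the standard SDXL configs, not taken from this log):

```python
def add_embed_dim(projection_dim, num_add_time_ids):
    # Width of the SDXL "added" time embedding: pooled text embedding
    # plus one 256-dim sinusoidal embedding per micro-conditioning id.
    return projection_dim + 256 * num_add_time_ids

base_dim = add_embed_dim(1280, 6)     # base: original/crop/target size -> 2816
refiner_dim = add_embed_dim(1280, 5)  # refiner: aesthetic score replaces target size -> 2560
```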

@SubmitCodes

I have the same problem but different errors: I get errors about torch sizes (I don't know what that is, I'm not an expert at all), black squares all the time. I tried different combinations in the compute settings and nothing worked. This only happens when I run SDXL 1.0 and any SDXL models.

16:03:51-692756 ERROR Error loading model weights: D:\SD\automatic\models\Stable-diffusion\sd_xl_base_1.0.safetensors
Error(s) in loading state_dict for LatentDiffusion:
size mismatch for model.diffusion_model.input_blocks.4.1.proj_in.weight: copying a
param with shape torch.Size([640, 640]) from checkpoint, the shape in current model is
torch.Size([640, 640, 1, 1]).
size mismatch for
model.diffusion_model.input_blocks.4.1.transformer_blocks.0.attn2.to_k.weight: copying a param
with shape torch.Size([640, 2048]) from checkpoint, the shape in current model is
torch.Size([640, 768]).
size mismatch for
model.diffusion_model.input_blocks.4.1.transformer_blocks.0.attn2.to_v.weight: copying a param
with shape torch.Size([640, 2048]) from checkpoint, the shape in current model is
torch.Size([640, 768]).
size mismatch for model.diffusion_model.input_blocks.4.1.proj_out.weight: copying a
param with shape torch.Size([640, 640]) from checkpoint, the shape in current model is
torch.Size([640, 640, 1, 1]).
size mismatch for model.diffusion_model.input_blocks.5.1.proj_in.weight: copying a
param with shape torch.Size([640, 640]) from checkpoint, the shape in current model is
torch.Size([640, 640, 1, 1]).
size mismatch for
model.diffusion_model.input_blocks.5.1.transformer_blocks.0.attn2.to_k.weight: copying a param
with shape torch.Size([640, 2048]) from checkpoint, the shape in current model is
torch.Size([640, 768]).
size mismatch for
model.diffusion_model.input_blocks.5.1.transformer_blocks.0.attn2.to_v.weight: copying a param
with shape torch.Size([640, 2048]) from checkpoint, the shape in current model is
torch.Size([640, 768]).
size mismatch for model.diffusion_model.input_blocks.5.1.proj_out.weight: copying a
param with shape torch.Size([640, 640]) from checkpoint, the shape in current model is
torch.Size([640, 640, 1, 1]).
size mismatch for model.diffusion_model.input_blocks.7.1.proj_in.weight: copying a
param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is
torch.Size([1280, 1280, 1, 1]).
size mismatch for
model.diffusion_model.input_blocks.7.1.transformer_blocks.0.attn2.to_k.weight: copying a param
with shape torch.Size([1280, 2048]) from checkpoint, the shape in current model is
torch.Size([1280, 768]).
size mismatch for
model.diffusion_model.input_blocks.7.1.transformer_blocks.0.attn2.to_v.weight: copying a param
with shape torch.Size([1280, 2048]) from checkpoint, the shape in current model is
torch.Size([1280, 768]).
size mismatch for model.diffusion_model.input_blocks.7.1.proj_out.weight: copying a
param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is
torch.Size([1280, 1280, 1, 1]).
size mismatch for model.diffusion_model.input_blocks.8.1.proj_in.weight: copying a
param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is
torch.Size([1280, 1280, 1, 1]).
size mismatch for
model.diffusion_model.input_blocks.8.1.transformer_blocks.0.attn2.to_k.weight: copying a param
with shape torch.Size([1280, 2048]) from checkpoint, the shape in current model is
torch.Size([1280, 768]).
size mismatch for
model.diffusion_model.input_blocks.8.1.transformer_blocks.0.attn2.to_v.weight: copying a param
with shape torch.Size([1280, 2048]) from checkpoint, the shape in current model is
torch.Size([1280, 768]).
size mismatch for model.diffusion_model.input_blocks.8.1.proj_out.weight: copying a
param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is
torch.Size([1280, 1280, 1, 1]).
size mismatch for model.diffusion_model.middle_block.1.proj_in.weight: copying a param
with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is
torch.Size([1280, 1280, 1, 1]).
size mismatch for
model.diffusion_model.middle_block.1.transformer_blocks.0.attn2.to_k.weight: copying a param
with shape torch.Size([1280, 2048]) from checkpoint, the shape in current model is
torch.Size([1280, 768]).
size mismatch for
model.diffusion_model.middle_block.1.transformer_blocks.0.attn2.to_v.weight: copying a param
with shape torch.Size([1280, 2048]) from checkpoint, the shape in current model is
torch.Size([1280, 768]).
size mismatch for model.diffusion_model.middle_block.1.proj_out.weight: copying a param
with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is
torch.Size([1280, 1280, 1, 1]).
size mismatch for model.diffusion_model.output_blocks.2.0.in_layers.0.weight: copying a
param with shape torch.Size([1920]) from checkpoint, the shape in current model is
torch.Size([2560]).
size mismatch for model.diffusion_model.output_blocks.2.0.in_layers.0.bias: copying a
param with shape torch.Size([1920]) from checkpoint, the shape in current model is
torch.Size([2560]).
size mismatch for model.diffusion_model.output_blocks.2.0.in_layers.2.weight: copying a
param with shape torch.Size([1280, 1920, 3, 3]) from checkpoint, the shape in current model is
torch.Size([1280, 2560, 3, 3]).
size mismatch for model.diffusion_model.output_blocks.2.0.skip_connection.weight:
copying a param with shape torch.Size([1280, 1920, 1, 1]) from checkpoint, the shape in current
model is torch.Size([1280, 2560, 1, 1]).
size mismatch for model.diffusion_model.output_blocks.3.0.in_layers.0.weight: copying a
param with shape torch.Size([1920]) from checkpoint, the shape in current model is
torch.Size([2560]).
size mismatch for model.diffusion_model.output_blocks.3.0.in_layers.0.bias: copying a
param with shape torch.Size([1920]) from checkpoint, the shape in current model is
torch.Size([2560]).
size mismatch for model.diffusion_model.output_blocks.3.0.in_layers.2.weight: copying a
param with shape torch.Size([640, 1920, 3, 3]) from checkpoint, the shape in current model is
torch.Size([1280, 2560, 3, 3]).
size mismatch for model.diffusion_model.output_blocks.3.0.in_layers.2.bias: copying a
param with shape torch.Size([640]) from checkpoint, the shape in current model is
torch.Size([1280]).
size mismatch for model.diffusion_model.output_blocks.3.0.emb_layers.1.weight: copying
a param with shape torch.Size([640, 1280]) from checkpoint, the shape in current model is
torch.Size([1280, 1280]).
size mismatch for model.diffusion_model.output_blocks.3.0.emb_layers.1.bias: copying a
param with shape torch.Size([640]) from checkpoint, the shape in current model is
torch.Size([1280]).
size mismatch for model.diffusion_model.output_blocks.3.0.out_layers.0.weight: copying
a param with shape torch.Size([640]) from checkpoint, the shape in current model is
torch.Size([1280]).
size mismatch for model.diffusion_model.output_blocks.3.0.out_layers.0.bias: copying a
param with shape torch.Size([640]) from checkpoint, the shape in current model is
torch.Size([1280]).
size mismatch for model.diffusion_model.output_blocks.3.0.out_layers.3.weight: copying
a param with shape torch.Size([640, 640, 3, 3]) from checkpoint, the shape in current model is
torch.Size([1280, 1280, 3, 3]).
size mismatch for model.diffusion_model.output_blocks.3.0.out_layers.3.bias: copying a
param with shape torch.Size([640]) from checkpoint, the shape in current model is
torch.Size([1280]).
size mismatch for model.diffusion_model.output_blocks.3.0.skip_connection.weight:
copying a param with shape torch.Size([640, 1920, 1, 1]) from checkpoint, the shape in current
model is torch.Size([1280, 2560, 1, 1]).
size mismatch for model.diffusion_model.output_blocks.3.0.skip_connection.bias: copying
a param with shape torch.Size([640]) from checkpoint, the shape in current model is
torch.Size([1280]).
size mismatch for model.diffusion_model.output_blocks.3.1.norm.weight: copying a param
with shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([1280]).
size mismatch for model.diffusion_model.output_blocks.3.1.norm.bias: copying a param
with shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([1280]).
size mismatch for model.diffusion_model.output_blocks.3.1.proj_in.weight: copying a
param with shape torch.Size([640, 640]) from checkpoint, the shape in current model is
torch.Size([1280, 1280, 1, 1]).
size mismatch for model.diffusion_model.output_blocks.3.1.proj_in.bias: copying a param
with shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([1280]).
size mismatch for
model.diffusion_model.output_blocks.3.1.transformer_blocks.0.attn1.to_q.weight: copying a param
with shape torch.Size([640, 640]) from checkpoint, the shape in current model is
torch.Size([1280, 1280]).
size mismatch for
model.diffusion_model.output_blocks.3.1.transformer_blocks.0.attn1.to_k.weight: copying a param
with shape torch.Size([640, 640]) from checkpoint, the shape in current model is
torch.Size([1280, 1280]).
size mismatch for
model.diffusion_model.output_blocks.3.1.transformer_blocks.0.attn1.to_v.weight: copying a param
with shape torch.Size([640, 640]) from checkpoint, the shape in current model is
torch.Size([1280, 1280]).
size mismatch for
model.diffusion_model.output_blocks.3.1.transformer_blocks.0.attn1.to_out.0.weight: copying a
param with shape torch.Size([640, 640]) from checkpoint, the shape in current model is
torch.Size([1280, 1280]).
size mismatch for
model.diffusion_model.output_blocks.3.1.transformer_blocks.0.attn1.to_out.0.bias: copying a
param with shape torch.Size([640]) from checkpoint, the shape in current model is
torch.Size([1280]).
size mismatch for
model.diffusion_model.output_blocks.3.1.transformer_blocks.0.ff.net.0.proj.weight: copying a
param with shape torch.Size([5120, 640]) from checkpoint, the shape in current model is
torch.Size([10240, 1280]).
size mismatch for
model.diffusion_model.output_blocks.3.1.transformer_blocks.0.ff.net.0.proj.bias: copying a
param with shape torch.Size([5120]) from checkpoint, the shape in current model is
torch.Size([10240]).
size mismatch for
model.diffusion_model.output_blocks.3.1.transformer_blocks.0.ff.net.2.weight: copying a param
with shape torch.Size([640, 2560]) from checkpoint, the shape in current model is
torch.Size([1280, 5120]).
size mismatch for
model.diffusion_model.output_blocks.3.1.transformer_blocks.0.ff.net.2.bias: copying a param
with shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([1280]).
size mismatch for
model.diffusion_model.output_blocks.3.1.transformer_blocks.0.attn2.to_q.weight: copying a param
with shape torch.Size([640, 640]) from checkpoint, the shape in current model is
torch.Size([1280, 1280]).
size mismatch for
model.diffusion_model.output_blocks.3.1.transformer_blocks.0.attn2.to_k.weight: copying a param
with shape torch.Size([640, 2048]) from checkpoint, the shape in current model is
torch.Size([1280, 768]).
size mismatch for
model.diffusion_model.output_blocks.3.1.transformer_blocks.0.attn2.to_v.weight: copying a param
with shape torch.Size([640, 2048]) from checkpoint, the shape in current model is
torch.Size([1280, 768]).
size mismatch for
model.diffusion_model.output_blocks.3.1.transformer_blocks.0.attn2.to_out.0.weight: copying a
param with shape torch.Size([640, 640]) from checkpoint, the shape in current model is
torch.Size([1280, 1280]).
size mismatch for
model.diffusion_model.output_blocks.3.1.transformer_blocks.0.attn2.to_out.0.bias: copying a
param with shape torch.Size([640]) from checkpoint, the shape in current model is
torch.Size([1280]).
size mismatch for
model.diffusion_model.output_blocks.3.1.transformer_blocks.0.norm1.weight: copying a param with
shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([1280]).
size mismatch for
model.diffusion_model.output_blocks.3.1.transformer_blocks.0.norm1.bias: copying a param with
shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([1280]).
size mismatch for
model.diffusion_model.output_blocks.3.1.transformer_blocks.0.norm2.weight: copying a param with
shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([1280]).
size mismatch for
model.diffusion_model.output_blocks.3.1.transformer_blocks.0.norm2.bias: copying a param with
shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([1280]).
size mismatch for
model.diffusion_model.output_blocks.3.1.transformer_blocks.0.norm3.weight: copying a param with
shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([1280]).
size mismatch for
model.diffusion_model.output_blocks.3.1.transformer_blocks.0.norm3.bias: copying a param with
shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([1280]).
size mismatch for model.diffusion_model.output_blocks.3.1.proj_out.weight: copying a
param with shape torch.Size([640, 640]) from checkpoint, the shape in current model is
torch.Size([1280, 1280, 1, 1]).
size mismatch for model.diffusion_model.output_blocks.3.1.proj_out.bias: copying a
param with shape torch.Size([640]) from checkpoint, the shape in current model is
torch.Size([1280]).
size mismatch for model.diffusion_model.output_blocks.4.0.in_layers.0.weight: copying a
param with shape torch.Size([1280]) from checkpoint, the shape in current model is
torch.Size([2560]).
size mismatch for model.diffusion_model.output_blocks.4.0.in_layers.0.bias: copying a
param with shape torch.Size([1280]) from checkpoint, the shape in current model is
torch.Size([2560]).
size mismatch for model.diffusion_model.output_blocks.4.0.in_layers.2.weight: copying a
param with shape torch.Size([640, 1280, 3, 3]) from checkpoint, the shape in current model is
torch.Size([1280, 2560, 3, 3]).
size mismatch for model.diffusion_model.output_blocks.4.0.in_layers.2.bias: copying a
param with shape torch.Size([640]) from checkpoint, the shape in current model is
torch.Size([1280]).
size mismatch for model.diffusion_model.output_blocks.4.0.emb_layers.1.weight: copying
a param with shape torch.Size([640, 1280]) from checkpoint, the shape in current model is
torch.Size([1280, 1280]).
size mismatch for model.diffusion_model.output_blocks.4.0.emb_layers.1.bias: copying a
param with shape torch.Size([640]) from checkpoint, the shape in current model is
torch.Size([1280]).
size mismatch for model.diffusion_model.output_blocks.4.0.out_layers.0.weight: copying
a param with shape torch.Size([640]) from checkpoint, the shape in current model is
torch.Size([1280]).
size mismatch for model.diffusion_model.output_blocks.4.0.out_layers.0.bias: copying a
param with shape torch.Size([640]) from checkpoint, the shape in current model is
torch.Size([1280]).
size mismatch for model.diffusion_model.output_blocks.4.0.out_layers.3.weight: copying
a param with shape torch.Size([640, 640, 3, 3]) from checkpoint, the shape in current model is
torch.Size([1280, 1280, 3, 3]).
size mismatch for model.diffusion_model.output_blocks.4.0.out_layers.3.bias: copying a
param with shape torch.Size([640]) from checkpoint, the shape in current model is
torch.Size([1280]).
size mismatch for model.diffusion_model.output_blocks.4.0.skip_connection.weight:
copying a param with shape torch.Size([640, 1280, 1, 1]) from checkpoint, the shape in current
model is torch.Size([1280, 2560, 1, 1]).
size mismatch for model.diffusion_model.output_blocks.4.0.skip_connection.bias: copying
a param with shape torch.Size([640]) from checkpoint, the shape in current model is
torch.Size([1280]).
size mismatch for model.diffusion_model.output_blocks.4.1.norm.weight: copying a param
with shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([1280]).
size mismatch for model.diffusion_model.output_blocks.4.1.norm.bias: copying a param
with shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([1280]).
size mismatch for model.diffusion_model.output_blocks.4.1.proj_in.weight: copying a
param with shape torch.Size([640, 640]) from checkpoint, the shape in current model is
torch.Size([1280, 1280, 1, 1]).
size mismatch for model.diffusion_model.output_blocks.4.1.proj_in.bias: copying a param
with shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([1280]).
size mismatch for
model.diffusion_model.output_blocks.4.1.transformer_blocks.0.attn1.to_q.weight: copying a param
with shape torch.Size([640, 640]) from checkpoint, the shape in current model is
torch.Size([1280, 1280]).
size mismatch for
model.diffusion_model.output_blocks.4.1.transformer_blocks.0.attn1.to_k.weight: copying a param
with shape torch.Size([640, 640]) from checkpoint, the shape in current model is
torch.Size([1280, 1280]).
size mismatch for
model.diffusion_model.output_blocks.4.1.transformer_blocks.0.attn1.to_v.weight: copying a param
with shape torch.Size([640, 640]) from checkpoint, the shape in current model is
torch.Size([1280, 1280]).
size mismatch for
model.diffusion_model.output_blocks.4.1.transformer_blocks.0.attn1.to_out.0.weight: copying a
param with shape torch.Size([640, 640]) from checkpoint, the shape in current model is
torch.Size([1280, 1280]).
size mismatch for
model.diffusion_model.output_blocks.4.1.transformer_blocks.0.attn1.to_out.0.bias: copying a
param with shape torch.Size([640]) from checkpoint, the shape in current model is
torch.Size([1280]).
size mismatch for
model.diffusion_model.output_blocks.4.1.transformer_blocks.0.ff.net.0.proj.weight: copying a
param with shape torch.Size([5120, 640]) from checkpoint, the shape in current model is
torch.Size([10240, 1280]).
size mismatch for
model.diffusion_model.output_blocks.4.1.transformer_blocks.0.ff.net.0.proj.bias: copying a
param with shape torch.Size([5120]) from checkpoint, the shape in current model is
torch.Size([10240]).
size mismatch for
model.diffusion_model.output_blocks.4.1.transformer_blocks.0.ff.net.2.weight: copying a param
with shape torch.Size([640, 2560]) from checkpoint, the shape in current model is
torch.Size([1280, 5120]).
size mismatch for
model.diffusion_model.output_blocks.4.1.transformer_blocks.0.ff.net.2.bias: copying a param
with shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([1280]).
size mismatch for
model.diffusion_model.output_blocks.4.1.transformer_blocks.0.attn2.to_q.weight: copying a param
with shape torch.Size([640, 640]) from checkpoint, the shape in current model is
torch.Size([1280, 1280]).
size mismatch for
model.diffusion_model.output_blocks.4.1.transformer_blocks.0.attn2.to_k.weight: copying a param
with shape torch.Size([640, 2048]) from checkpoint, the shape in current model is
torch.Size([1280, 768]).
size mismatch for
model.diffusion_model.output_blocks.4.1.transformer_blocks.0.attn2.to_v.weight: copying a param
with shape torch.Size([640, 2048]) from checkpoint, the shape in current model is
torch.Size([1280, 768]).
size mismatch for
model.diffusion_model.output_blocks.4.1.transformer_blocks.0.attn2.to_out.0.weight: copying a
param with shape torch.Size([640, 640]) from checkpoint, the shape in current model is
torch.Size([1280, 1280]).
size mismatch for
model.diffusion_model.output_blocks.4.1.transformer_blocks.0.attn2.to_out.0.bias: copying a
param with shape torch.Size([640]) from checkpoint, the shape in current model is
torch.Size([1280]).
size mismatch for
model.diffusion_model.output_blocks.4.1.transformer_blocks.0.norm1.weight: copying a param with
shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([1280]).
size mismatch for
model.diffusion_model.output_blocks.4.1.transformer_blocks.0.norm1.bias: copying a param with
shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([1280]).
size mismatch for
model.diffusion_model.output_blocks.4.1.transformer_blocks.0.norm2.weight: copying a param with
shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([1280]).
size mismatch for
model.diffusion_model.output_blocks.4.1.transformer_blocks.0.norm2.bias: copying a param with
shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([1280]).
size mismatch for
model.diffusion_model.output_blocks.4.1.transformer_blocks.0.norm3.weight: copying a param with
shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([1280]).
size mismatch for
model.diffusion_model.output_blocks.4.1.transformer_blocks.0.norm3.bias: copying a param with
shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([1280]).
size mismatch for model.diffusion_model.output_blocks.4.1.proj_out.weight: copying a
param with shape torch.Size([640, 640]) from checkpoint, the shape in current model is
torch.Size([1280, 1280, 1, 1]).
size mismatch for model.diffusion_model.output_blocks.4.1.proj_out.bias: copying a
param with shape torch.Size([640]) from checkpoint, the shape in current model is
torch.Size([1280]).
size mismatch for model.diffusion_model.output_blocks.5.0.in_layers.0.weight: copying a
param with shape torch.Size([960]) from checkpoint, the shape in current model is
torch.Size([1920]).
size mismatch for model.diffusion_model.output_blocks.5.0.in_layers.0.bias: copying a
param with shape torch.Size([960]) from checkpoint, the shape in current model is
torch.Size([1920]).
size mismatch for model.diffusion_model.output_blocks.5.0.in_layers.2.weight: copying a
param with shape torch.Size([640, 960, 3, 3]) from checkpoint, the shape in current model is
torch.Size([1280, 1920, 3, 3]).
size mismatch for model.diffusion_model.output_blocks.5.0.in_layers.2.bias: copying a
param with shape torch.Size([640]) from checkpoint, the shape in current model is
torch.Size([1280]).
size mismatch for model.diffusion_model.output_blocks.5.0.emb_layers.1.weight: copying
a param with shape torch.Size([640, 1280]) from checkpoint, the shape in current model is
torch.Size([1280, 1280]).
size mismatch for model.diffusion_model.output_blocks.5.0.emb_layers.1.bias: copying a
param with shape torch.Size([640]) from checkpoint, the shape in current model is
torch.Size([1280]).
size mismatch for model.diffusion_model.output_blocks.5.0.out_layers.0.weight: copying
a param with shape torch.Size([640]) from checkpoint, the shape in current model is
torch.Size([1280]).
size mismatch for model.diffusion_model.output_blocks.5.0.out_layers.0.bias: copying a
param with shape torch.Size([640]) from checkpoint, the shape in current model is
torch.Size([1280]).
size mismatch for model.diffusion_model.output_blocks.5.0.out_layers.3.weight: copying
a param with shape torch.Size([640, 640, 3, 3]) from checkpoint, the shape in current model is
torch.Size([1280, 1280, 3, 3]).
size mismatch for model.diffusion_model.output_blocks.5.0.out_layers.3.bias: copying a
param with shape torch.Size([640]) from checkpoint, the shape in current model is
torch.Size([1280]).
size mismatch for model.diffusion_model.output_blocks.5.0.skip_connection.weight:
copying a param with shape torch.Size([640, 960, 1, 1]) from checkpoint, the shape in current
model is torch.Size([1280, 1920, 1, 1]).
size mismatch for model.diffusion_model.output_blocks.5.0.skip_connection.bias: copying
a param with shape torch.Size([640]) from checkpoint, the shape in current model is
torch.Size([1280]).
size mismatch for model.diffusion_model.output_blocks.5.1.norm.weight: copying a param
with shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([1280]).
size mismatch for model.diffusion_model.output_blocks.5.1.norm.bias: copying a param
with shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([1280]).
size mismatch for model.diffusion_model.output_blocks.5.1.proj_in.weight: copying a
param with shape torch.Size([640, 640]) from checkpoint, the shape in current model is
torch.Size([1280, 1280, 1, 1]).
size mismatch for model.diffusion_model.output_blocks.5.1.proj_in.bias: copying a param
with shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([1280]).
size mismatch for
model.diffusion_model.output_blocks.5.1.transformer_blocks.0.attn1.to_q.weight: copying a param
with shape torch.Size([640, 640]) from checkpoint, the shape in current model is
torch.Size([1280, 1280]).
size mismatch for
model.diffusion_model.output_blocks.5.1.transformer_blocks.0.attn1.to_k.weight: copying a param
with shape torch.Size([640, 640]) from checkpoint, the shape in current model is
torch.Size([1280, 1280]).
size mismatch for
model.diffusion_model.output_blocks.5.1.transformer_blocks.0.attn1.to_v.weight: copying a param
with shape torch.Size([640, 640]) from checkpoint, the shape in current model is
torch.Size([1280, 1280]).
size mismatch for
model.diffusion_model.output_blocks.5.1.transformer_blocks.0.attn1.to_out.0.weight: copying a
param with shape torch.Size([640, 640]) from checkpoint, the shape in current model is
torch.Size([1280, 1280]).
size mismatch for
model.diffusion_model.output_blocks.5.1.transformer_blocks.0.attn1.to_out.0.bias: copying a
param with shape torch.Size([640]) from checkpoint, the shape in current model is
torch.Size([1280]).
size mismatch for
model.diffusion_model.output_blocks.5.1.transformer_blocks.0.ff.net.0.proj.weight: copying a
param with shape torch.Size([5120, 640]) from checkpoint, the shape in current model is
torch.Size([10240, 1280]).
size mismatch for
model.diffusion_model.output_blocks.5.1.transformer_blocks.0.ff.net.0.proj.bias: copying a
param with shape torch.Size([5120]) from checkpoint, the shape in current model is
torch.Size([10240]).
size mismatch for
model.diffusion_model.output_blocks.5.1.transformer_blocks.0.ff.net.2.weight: copying a param
with shape torch.Size([640, 2560]) from checkpoint, the shape in current model is
torch.Size([1280, 5120]).
size mismatch for
model.diffusion_model.output_blocks.5.1.transformer_blocks.0.ff.net.2.bias: copying a param
with shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([1280]).
size mismatch for
model.diffusion_model.output_blocks.5.1.transformer_blocks.0.attn2.to_q.weight: copying a param
with shape torch.Size([640, 640]) from checkpoint, the shape in current model is
torch.Size([1280, 1280]).
size mismatch for
model.diffusion_model.output_blocks.5.1.transformer_blocks.0.attn2.to_k.weight: copying a param
with shape torch.Size([640, 2048]) from checkpoint, the shape in current model is
torch.Size([1280, 768]).
size mismatch for
model.diffusion_model.output_blocks.5.1.transformer_blocks.0.attn2.to_v.weight: copying a param
with shape torch.Size([640, 2048]) from checkpoint, the shape in current model is
torch.Size([1280, 768]).
size mismatch for
model.diffusion_model.output_blocks.5.1.transformer_blocks.0.attn2.to_out.0.weight: copying a
param with shape torch.Size([640, 640]) from checkpoint, the shape in current model is
torch.Size([1280, 1280]).
size mismatch for
model.diffusion_model.output_blocks.5.1.transformer_blocks.0.attn2.to_out.0.bias: copying a
param with shape torch.Size([640]) from checkpoint, the shape in current model is
torch.Size([1280]).
size mismatch for
model.diffusion_model.output_blocks.5.1.transformer_blocks.0.norm1.weight: copying a param with
shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([1280]).
size mismatch for
model.diffusion_model.output_blocks.5.1.transformer_blocks.0.norm1.bias: copying a param with
shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([1280]).
size mismatch for
model.diffusion_model.output_blocks.5.1.transformer_blocks.0.norm2.weight: copying a param with
shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([1280]).
size mismatch for
model.diffusion_model.output_blocks.5.1.transformer_blocks.0.norm2.bias: copying a param with
shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([1280]).
size mismatch for
model.diffusion_model.output_blocks.5.1.transformer_blocks.0.norm3.weight: copying a param with
shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([1280]).
size mismatch for
model.diffusion_model.output_blocks.5.1.transformer_blocks.0.norm3.bias: copying a param with
shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([1280]).
size mismatch for model.diffusion_model.output_blocks.5.1.proj_out.weight: copying a
param with shape torch.Size([640, 640]) from checkpoint, the shape in current model is
torch.Size([1280, 1280, 1, 1]).
size mismatch for model.diffusion_model.output_blocks.5.1.proj_out.bias: copying a
param with shape torch.Size([640]) from checkpoint, the shape in current model is
torch.Size([1280]).
size mismatch for model.diffusion_model.output_blocks.5.2.conv.weight: copying a param
with shape torch.Size([640, 640, 3, 3]) from checkpoint, the shape in current model is
torch.Size([1280, 1280, 3, 3]).
size mismatch for model.diffusion_model.output_blocks.5.2.conv.bias: copying a param
with shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([1280]).
size mismatch for model.diffusion_model.output_blocks.6.0.in_layers.0.weight: copying a
param with shape torch.Size([960]) from checkpoint, the shape in current model is
torch.Size([1920]).
size mismatch for model.diffusion_model.output_blocks.6.0.in_layers.0.bias: copying a
param with shape torch.Size([960]) from checkpoint, the shape in current model is
torch.Size([1920]).
size mismatch for model.diffusion_model.output_blocks.6.0.in_layers.2.weight: copying a
param with shape torch.Size([320, 960, 3, 3]) from checkpoint, the shape in current model is
torch.Size([640, 1920, 3, 3]).
size mismatch for model.diffusion_model.output_blocks.6.0.in_layers.2.bias: copying a
param with shape torch.Size([320]) from checkpoint, the shape in current model is
torch.Size([640]).
size mismatch for model.diffusion_model.output_blocks.6.0.emb_layers.1.weight: copying
a param with shape torch.Size([320, 1280]) from checkpoint, the shape in current model is
torch.Size([640, 1280]).
size mismatch for model.diffusion_model.output_blocks.6.0.emb_layers.1.bias: copying a
param with shape torch.Size([320]) from checkpoint, the shape in current model is
torch.Size([640]).
size mismatch for model.diffusion_model.output_blocks.6.0.out_layers.0.weight: copying
a param with shape torch.Size([320]) from checkpoint, the shape in current model is
torch.Size([640]).
size mismatch for model.diffusion_model.output_blocks.6.0.out_layers.0.bias: copying a
param with shape torch.Size([320]) from checkpoint, the shape in current model is
torch.Size([640]).
size mismatch for model.diffusion_model.output_blocks.6.0.out_layers.3.weight: copying
a param with shape torch.Size([320, 320, 3, 3]) from checkpoint, the shape in current model is
torch.Size([640, 640, 3, 3]).
size mismatch for model.diffusion_model.output_blocks.6.0.out_layers.3.bias: copying a
param with shape torch.Size([320]) from checkpoint, the shape in current model is
torch.Size([640]).
size mismatch for model.diffusion_model.output_blocks.6.0.skip_connection.weight:
copying a param with shape torch.Size([320, 960, 1, 1]) from checkpoint, the shape in current
model is torch.Size([640, 1920, 1, 1]).
size mismatch for model.diffusion_model.output_blocks.6.0.skip_connection.bias: copying
a param with shape torch.Size([320]) from checkpoint, the shape in current model is
torch.Size([640]).
size mismatch for model.diffusion_model.output_blocks.7.0.in_layers.0.weight: copying a
param with shape torch.Size([640]) from checkpoint, the shape in current model is
torch.Size([1280]).
size mismatch for model.diffusion_model.output_blocks.7.0.in_layers.0.bias: copying a
param with shape torch.Size([640]) from checkpoint, the shape in current model is
torch.Size([1280]).
size mismatch for model.diffusion_model.output_blocks.7.0.in_layers.2.weight: copying a
param with shape torch.Size([320, 640, 3, 3]) from checkpoint, the shape in current model is
torch.Size([640, 1280, 3, 3]).
size mismatch for model.diffusion_model.output_blocks.7.0.in_layers.2.bias: copying a
param with shape torch.Size([320]) from checkpoint, the shape in current model is
torch.Size([640]).
size mismatch for model.diffusion_model.output_blocks.7.0.emb_layers.1.weight: copying
a param with shape torch.Size([320, 1280]) from checkpoint, the shape in current model is
torch.Size([640, 1280]).
size mismatch for model.diffusion_model.output_blocks.7.0.emb_layers.1.bias: copying a
param with shape torch.Size([320]) from checkpoint, the shape in current model is
torch.Size([640]).
size mismatch for model.diffusion_model.output_blocks.7.0.out_layers.0.weight: copying
a param with shape torch.Size([320]) from checkpoint, the shape in current model is
torch.Size([640]).
size mismatch for model.diffusion_model.output_blocks.7.0.out_layers.0.bias: copying a
param with shape torch.Size([320]) from checkpoint, the shape in current model is
torch.Size([640]).
size mismatch for model.diffusion_model.output_blocks.7.0.out_layers.3.weight: copying
a param with shape torch.Size([320, 320, 3, 3]) from checkpoint, the shape in current model is
torch.Size([640, 640, 3, 3]).
size mismatch for model.diffusion_model.output_blocks.7.0.out_layers.3.bias: copying a
param with shape torch.Size([320]) from checkpoint, the shape in current model is
torch.Size([640]).
size mismatch for model.diffusion_model.output_blocks.7.0.skip_connection.weight:
copying a param with shape torch.Size([320, 640, 1, 1]) from checkpoint, the shape in current
model is torch.Size([640, 1280, 1, 1]).
size mismatch for model.diffusion_model.output_blocks.7.0.skip_connection.bias: copying
a param with shape torch.Size([320]) from checkpoint, the shape in current model is
torch.Size([640]).
size mismatch for model.diffusion_model.output_blocks.8.0.in_layers.0.weight: copying a
param with shape torch.Size([640]) from checkpoint, the shape in current model is
torch.Size([960]).
size mismatch for model.diffusion_model.output_blocks.8.0.in_layers.0.bias: copying a
param with shape torch.Size([640]) from checkpoint, the shape in current model is
torch.Size([960]).
size mismatch for model.diffusion_model.output_blocks.8.0.in_layers.2.weight: copying a
param with shape torch.Size([320, 640, 3, 3]) from checkpoint, the shape in current model is
torch.Size([640, 960, 3, 3]).
size mismatch for model.diffusion_model.output_blocks.8.0.in_layers.2.bias: copying a
param with shape torch.Size([320]) from checkpoint, the shape in current model is
torch.Size([640]).
size mismatch for model.diffusion_model.output_blocks.8.0.emb_layers.1.weight: copying
a param with shape torch.Size([320, 1280]) from checkpoint, the shape in current model is
torch.Size([640, 1280]).
size mismatch for model.diffusion_model.output_blocks.8.0.emb_layers.1.bias: copying a
param with shape torch.Size([320]) from checkpoint, the shape in current model is
torch.Size([640]).
size mismatch for model.diffusion_model.output_blocks.8.0.out_layers.0.weight: copying
a param with shape torch.Size([320]) from checkpoint, the shape in current model is
torch.Size([640]).
size mismatch for model.diffusion_model.output_blocks.8.0.out_layers.0.bias: copying a
param with shape torch.Size([320]) from checkpoint, the shape in current model is
torch.Size([640]).
size mismatch for model.diffusion_model.output_blocks.8.0.out_layers.3.weight: copying
a param with shape torch.Size([320, 320, 3, 3]) from checkpoint, the shape in current model is
torch.Size([640, 640, 3, 3]).
size mismatch for model.diffusion_model.output_blocks.8.0.out_layers.3.bias: copying a
param with shape torch.Size([320]) from checkpoint, the shape in current model is
torch.Size([640]).
size mismatch for model.diffusion_model.output_blocks.8.0.skip_connection.weight:
copying a param with shape torch.Size([320, 640, 1, 1]) from checkpoint, the shape in current
model is torch.Size([640, 960, 1, 1]).
size mismatch for model.diffusion_model.output_blocks.8.0.skip_connection.bias: copying
a param with shape torch.Size([320]) from checkpoint, the shape in current model is
torch.Size([640]).

Screenshot 2023-07-31 164444

Screenshot 2023-07-31 164539

Please help, I don't know why this is happening, and only with SDXL.

@VictorZakharov

Same errors as @SubmitCodes. This happens with SDXL or any model derived from SDXL downloaded from CivitAI.

@DrakeRichards
Contributor

DrakeRichards commented Aug 1, 2023

I was having this issue as well after following the steps in the wiki. I fixed it by downloading the model from Huggingface (using "Models > Huggingface > Download model" in the web UI) instead of using the .safetensors file.

This only occurred for me when using the base SDXL .safetensors file from Huggingface. I was able to generate just fine with the .safetensors file from DreamShaper XL on CivitAI.

@SubmitCodes

This only occurred for me when using the base SDXL .safetensors file from Huggingface. I was able to generate just fine with the .safetensors file from DreamShaper XL on CivitAI.

For me, both don't work, unfortunately.

I was having this issue as well after following the steps in the wiki. I fixed it by downloading the model from Huggingface (using "Models > Huggingface > Download model" in the web UI) instead of using the .safetensors file.

I just tried that. No more black images, but the images are completely unusable. I tried different sizes, steps, and samplers, but nothing worked.
I don't know if there is a way to fix that in the settings, but I would love to be able to use it normally, with no issues, without going through diffusers, and with more samplers.

@VictorZakharov

@SubmitCodes SDXL does not work with 512x512, try 1024x1024.

@SubmitCodes

@SubmitCodes SDXL does not work with 512x512, try 1024x1024.

I did

I tried with different sizes, steps and samplers but nothing worked.

@kotysoft

kotysoft commented Aug 2, 2023

Had the same issue with Automatic1111 on Windows. Everything was up to date (I manually updated xformers to 0.20).
Once I added the --no-half argument, SDXL started to work, but it was horribly slow. Later I found out that --disable-nan-check was causing the issue for me, so now I'm running with --xformers --medvram and SDXL works fine.
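The interaction between --no-half and --disable-nan-check described above fits how fp16 VAE decoding fails: activations that overflow the fp16 range become inf, then NaN, and NaNs end up rendered as a black image unless a NaN check aborts first. A minimal NumPy sketch of the mechanism (an illustration only, not SD.Next or PyTorch code; the values are made up):

```python
import numpy as np

# SDXL's original VAE can produce intermediate activations larger than
# fp16's maximum finite value (~65504). In fp32 this is harmless; cast
# to fp16, the value overflows to inf, and inf - inf yields NaN.
act = np.array([70000.0, 123.0], dtype=np.float32)

ok = act - act.mean()          # fine in fp32
fp16 = act.astype(np.float16)  # 70000 overflows to inf in fp16
bad = fp16 - fp16.mean()       # mean is inf, so inf - inf = NaN

print(np.isinf(fp16).any())    # overflow happened
print(np.isnan(bad).any())     # NaN propagated into the "image"
```

Running with --no-half avoids the overflow (everything stays fp32, hence slow), while --disable-nan-check suppresses the check that would otherwise error out instead of silently saving a black image.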

@Nyx01

Nyx01 commented Aug 7, 2023

I got mine working again by changing Diffuser Settings > VAE upcasting back to default and checking only Enable VAE slicing. Apply the settings, then close and reopen the whole thing; just clicking server restart did not work for me. --autolaunch is the only argument I'm starting it with, and everything else is default settings (fresh install). Using sd_xl_base_1.0_0.9vae.safetensors.

Edit: After playing around with the settings a bit more, for me at least it is definitely the VAE upcasting option. If it is changed from default, I get black images.

@mykeehu
Contributor

mykeehu commented Aug 9, 2023

Unfortunately, this didn't help me either, even though I changed the settings and restarted the program. For now I'll wait until vlad fixes this; until then I'll use the auto1111 version.

@Nyx01

Nyx01 commented Aug 9, 2023

Have you tried the VAE fix here?
https://github.com/vladmandic/automatic/wiki/SD-XL#fixed-fp16-vae
If you set this up incorrectly, it can also cause black images, I've noticed. (I messed it up myself and the black screens came back: instead of saving the raw configuration file, I right-clicked and saved the link, and got the HTML page from GitHub by mistake.) Fixing that cleared up the black image issue as well. Also make sure you're loading the VAE correctly in the settings; I think it's under the Stable Diffusion page in the settings.
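The right-click-save mistake above (getting a GitHub HTML page instead of the actual file) is easy to detect, because a real .safetensors file starts with an 8-byte little-endian header length followed by that many bytes of JSON metadata. A small sanity-check sketch (a hypothetical helper, not part of SD.Next):

```python
import json
import struct
from pathlib import Path

def looks_like_safetensors(path: str) -> bool:
    """Heuristic check that a downloaded VAE file is a real safetensors
    file and not, e.g., an HTML page saved by mistake."""
    data = Path(path).read_bytes()
    if len(data) < 8:
        return False
    # First 8 bytes: little-endian uint64 length of the JSON header.
    (header_len,) = struct.unpack("<Q", data[:8])
    if header_len > len(data) - 8:
        # An HTML page's first bytes decode to a huge bogus length.
        return False
    try:
        header = json.loads(data[8 : 8 + header_len])
    except (UnicodeDecodeError, json.JSONDecodeError):
        return False
    return isinstance(header, dict)
```

Running this on the downloaded VAE file (for example, whatever you saved into models/VAE/) will return False for an accidentally saved HTML page.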

@mykeehu
Contributor

mykeehu commented Aug 9, 2023

Yes, I did the VAE fix, and I still get a black image. :( So I'm a bit tired of it.

@mykeehu
Contributor

mykeehu commented Aug 9, 2023

Had the same issue with Automatic1111 on Windows. Everything was up to date (I manually updated xformers to 0.20). Once I added the --no-half argument, SDXL started to work, but it was horribly slow. Later I found out that --disable-nan-check was causing the issue for me, so now I'm running with --xformers --medvram and SDXL works fine.

Unfortunately, this did not help. I use SDP instead of xformers, and start with just these switches:
--upgrade --autolaunch --theme=light

@vladmandic
Owner

Black images are always platform-specific; there is no single magic bullet, or it would have been implemented already.

There are many users here reporting similar issues, but no info on what systems it's happening on. Everyone here, please post:

  • os, gpu, backend (you can see all in system info)
  • vae used.
  • settings: no-half, no-half-vae, precision type, device precision type

@zifnub

zifnub commented Aug 11, 2023

I had the same issue with a fresh install, done according to https://github.com/vladmandic/automatic/wiki/Installation .
Default settings were unchanged and zero addons were installed; the only changes made were SD-XL, VRAM optimization, and the fixed FP16 VAE, added according to the steps listed here: https://github.com/vladmandic/automatic/wiki/SD-XL

Following Nyx01's steps as listed above resolved the issue: set VAE upcasting to default, then close down and start up again; using the restart server option does not work.
Win10
RTX4090
sdxl-vae-fp16-fix

@Trojaner
Contributor

Maybe related

04:43:03-004052 ERROR    Loading diffusers VAE failed:
                         D:\stable-diffusion-webui\models\VAE\diffusion_pytorch_model.safetensors
                         'encoder.norm_out.weight'

@kabloink

kabloink commented Aug 12, 2023

The only thing that worked for me was adding --backend diffusers to the launch command. Changing it in the settings doesn't seem to make a difference. The --safe option made little difference, as did the no-half settings.

@vladmandic
Owner

Why is everyone ignoring what I asked for two days ago?
Anyone who has this problem, please provide the required info as stated.
This thread has deteriorated into random notes that I cannot even start to address.

@kabloink

Sorry

Ubuntu Mate 22.04.3
NVIDIA GeForce RTX 3060
CUDA
SDLX_VAE

Float16, but changing the settings to disable half made little difference with the all-black output.
The pipeline is reported as original even if I change the settings to use the SDXL diffusers.
Using the --backend diffusers argument fixes the black output, and system info reports the pipeline as diffusers.

@vladmandic
Owner

@kabloink your issue is not the same; that's all covered in the wiki:

  • Setting the pipeline to SDXL means nothing unless you set the backend to diffusers, and you can do that either in settings or via the command line.
  • The original SDXL VAE is fp32 only (that's not an SD.Next limitation, that's how the original SDXL VAE is written). There are fp16 VAEs available, and if you use one of those, you can use fp16; otherwise black images are 100% expected.

Everyone else: I have no idea how much of this is applicable to your systems, purely because this thread has turned into noise without much relevant information. So again, please post exactly what's required so I can try to help.
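The size-mismatch errors posted in this thread are the wrong-backend symptom described above: an SDXL checkpoint being loaded into an SD 1.x model definition. The telltale shape is the cross-attention key/value projection, which in the log is torch.Size([640, 2048]) in the checkpoint versus torch.Size([640, 768]) in the current model, because SDXL's text-conditioning embedding is 2048-dimensional while SD 1.x's is 768. A heuristic sketch of detecting this from checkpoint tensor shapes (an illustration with an assumed shapes mapping, not SD.Next's actual detection code):

```python
def guess_checkpoint_arch(shapes: dict) -> str:
    """Guess checkpoint architecture from parameter shapes.

    `shapes` maps parameter names to shape tuples, e.g. as read from
    safetensors metadata. The cross-attention context dimension
    distinguishes the families: 768 for SD 1.x, 1024 for SD 2.x,
    2048 for SDXL."""
    key = ("model.diffusion_model.input_blocks.4.1."
           "transformer_blocks.0.attn2.to_k.weight")
    shape = shapes.get(key)
    if shape is None:
        return "unknown"
    context_dim = shape[1]  # second dim of the to_k projection
    if context_dim == 2048:
        return "sdxl"   # needs the diffusers backend in SD.Next
    if context_dim == 768:
        return "sd1"
    if context_dim == 1024:
        return "sd2"
    return "unknown"
```

For example, the checkpoint from the log above would report "sdxl" (to_k.weight is [640, 2048]), which is why it cannot be loaded by the original backend's LatentDiffusion model.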

@vladmandic vladmandic added question Further information is requested cannot reproduce Reported issue cannot be easily reproducible labels Aug 12, 2023
@mykeehu
Contributor

mykeehu commented Aug 15, 2023

Black images are always platform-specific; there is no single magic bullet, or it would have been implemented already.

There are many users here reporting similar issues, but no info on what systems it's happening on. Everyone here, please post:

  • os, gpu, backend (you can see all in system info)
  • vae used.
  • settings: no-half, no-half-vae, precision type, device precision type

Ok, I deleted the whole configuration file a couple of days ago and I haven't reconfigured the system yet. What I know off the top of my head right now:

  • my hardware is: i9-13900K, 64 GB, RTX 3060 12GB
  • I used 1.0 and 0.9 VAE too
  • I used SDXL 1.0 with 0.9 VAE baked (originally) with no-half, no-half-vae options and FP16 too. Used original pipeline.
    Here is my log from last used.

@vladmandic
Owner

Black images are always platform-specific; there is no single magic bullet, or it would have been implemented already.

There are many users here reporting similar issues, but no info on what systems it's happening on. Everyone here, please post:

  • os, gpu, backend (you can see all in system info)
  • vae used.
  • settings: no-half, no-half-vae, precision type, device precision type

Ok, I deleted the whole configuration file a couple of days ago and I haven't reconfigured the system yet. What I know off the top of my head right now:

  • my hardware is: i9-13900K, 64 GB, RTX 3060 12GB
  • I used 1.0 and 0.9 VAE too
  • I used SDXL 1.0 with 0.9 VAE baked (originally) with no-half, no-half-vae options and FP16 too. Used original pipeline.
    Here is my log from last used.

You're trying to load SDXL using the wrong backend (you have not even enabled diffusers) and that does not work; that's not related to VAE black images at all. Read the SDXL wiki.

@Symbiomatrix
Contributor

Symbiomatrix commented Aug 15, 2023

OS: Windows 10
Gpu: Rtx 3090
Backend: Diffusers
RAM: 16gb
Torch parameters: dtype=torch.float16 vae=torch.float16 unet=torch.float16

Not sure if I should open a separate issue. I'm consistently hitting "DefaultCPUAllocator: not enough memory" while attempting to load SDXL models; diffusers with regular SD 1.5 models works, and I've been able to get Comfy to work within the same memory limitation. Are there any settings that could ease up on memory usage? The wiki only seems to elaborate on VRAM, which is ample in my case.

@vladmandic
Copy link
Owner

OS: Windows 10
Gpu: Rtx 3090
Backend: Diffusers
RAM: 16gb
Torch parameters: dtype=torch.float16 vae=torch.float16 unet=torch.float16

Not sure if I should open a separate issue. I'm consistently hitting "DefaultCPUAllocator: not enough memory" while attempting to load SDXL models; diffusers with regular SD 1.5 models works, and I've been able to get Comfy to work despite said memory limitation. Any settings that could ease up on the memory usage? The wiki only seems to elaborate on VRAM, which is ample in my case.

Definitely a separate issue. Although you might want to ask the community on Discord for the best low-RAM settings.
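As a rough illustration of why 16 GB of system RAM is tight for loading SDXL: the parameter counts below are approximate (the ~2.6B UNet figure is the commonly cited one; the text-encoder and VAE figures are rounded), and actual peak usage depends on the loader, but the back-of-envelope arithmetic shows how a load-then-convert step can blow past 16 GB.

```python
# Approximate parameter counts for SDXL base (rounded; illustrative only).
unet_params = 2.6e9        # UNet
text_enc_params = 0.8e9    # both text encoders combined (approx.)
vae_params = 0.08e9        # VAE

total_params = unet_params + text_enc_params + vae_params

# Loading an fp32 checkpoint and converting it to fp16 briefly holds
# both copies in system RAM before anything moves to the GPU.
fp32_gb = total_params * 4 / 1e9
fp16_gb = total_params * 2 / 1e9
peak_gb = fp32_gb + fp16_gb    # worst-case load-and-convert peak

print(f"fp32: {fp32_gb:.1f} GB, fp16: {fp16_gb:.1f} GB, peak ~{peak_gb:.1f} GB")
```

With the OS and browser also resident, a peak of roughly 20 GB explains why a 16 GB machine hits DefaultCPUAllocator errors even though the 24 GB of VRAM on a 3090 is never the bottleneck.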

@bbecausereasonss
Copy link

bbecausereasonss commented Aug 19, 2023

Fresh install, same issues. In Automatic1111 I had to add the no-half-vae option; however, here this did not fix it.

Win11x64
4090
64GB RAM
Setting Torch parameters: dtype=torch.float16 vae=torch.float16 unet=torch.float16

Also getting these errors on model load:

Calculating model hash: C:\Users\xxxx\Deep\automatic\models\Stable-diffusion\SDXL\sd_xl_base_1.0_0.9vae.safetensors 0…
09:45:03-984251 ERROR Error loading model weights:
C:\Users\xxxx\Deep\automatic\models\Stable-diffusion\SDXL\sd_xl_base_1.0_0.9vae.safetensors
Error(s) in loading state_dict for LatentDiffusion:
size mismatch for model.diffusion_model.input_blocks.4.1.proj_in.weight: copying a param
with shape torch.Size([640, 640]) from checkpoint, the shape in current model is
torch.Size([640, 640, 1, 1]).
size mismatch for
model.diffusion_model.input_blocks.4.1.transformer_blocks.0.attn2.to_k.weight: copying a param
with shape torch.Size([640, 2048]) from checkpoint, the shape in current model is
torch.Size([640, 768]).
size mismatch for
model.diffusion_model.input_blocks.4.1.transformer_blocks.0.attn2.to_v.weight: copying a param
with shape torch.Size([640, 2048]) from checkpoint, the shape in current model is
torch.Size([640, 768]).
size mismatch for model.diffusion_model.input_blocks.4.1.proj_out.weight: copying a
param with shape torch.Size([640, 640]) from checkpoint, the shape in current model is
torch.Size([640, 640, 1, 1]).
size mismatch for model.diffusion_model.input_blocks.5.1.proj_in.weight: copying a param
with shape torch.Size([640, 640]) from checkpoint, the shape in current model is
torch.Size([640, 640, 1, 1]).
size mismatch for
model.diffusion_model.input_blocks.5.1.transformer_blocks.0.attn2.to_k.weight: copying a param
with shape torch.Size([640, 2048]) from checkpoint, the shape in current model is
torch.Size([640, 768]).
size mismatch for
model.diffusion_model.input_blocks.5.1.transformer_blocks.0.attn2.to_v.weight: copying a param
with shape torch.Size([640, 2048]) from checkpoint, the shape in current model is
torch.Size([640, 768]).
size mismatch for model.diffusion_model.input_blocks.5.1.proj_out.weight: copying a
param with shape torch.Size([640, 640]) from checkpoint, the shape in current model is
torch.Size([640, 640, 1, 1]).
size mismatch for model.diffusion_model.input_blocks.7.1.proj_in.weight: copying a param
with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is
torch.Size([1280, 1280, 1, 1]).
size mismatch for
model.diffusion_model.input_blocks.7.1.transformer_blocks.0.attn2.to_k.weight: copying a param
with shape torch.Size([1280, 2048]) from checkpoint, the shape in current model is
torch.Size([1280, 768]).
size mismatch for
model.diffusion_model.input_blocks.7.1.transformer_blocks.0.attn2.to_v.weight: copying a param
with shape torch.Size([1280, 2048]) from checkpoint, the shape in current model is
torch.Size([1280, 768]).
size mismatch for model.diffusion_model.input_blocks.7.1.proj_out.weight: copying a
param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is
torch.Size([1280, 1280, 1, 1]).
size mismatch for model.diffusion_model.input_blocks.8.1.proj_in.weight: copying a param
with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is
torch.Size([1280, 1280, 1, 1]).
size mismatch for
model.diffusion_model.input_blocks.8.1.transformer_blocks.0.attn2.to_k.weight: copying a param
with shape torch.Size([1280, 2048]) from checkpoint, the shape in current model is
torch.Size([1280, 768]).
size mismatch for
model.diffusion_model.input_blocks.8.1.transformer_blocks.0.attn2.to_v.weight: copying a param
with shape torch.Size([1280, 2048]) from checkpoint, the shape in current model is
torch.Size([1280, 768]).
size mismatch for model.diffusion_model.input_blocks.8.1.proj_out.weight: copying a
param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is
torch.Size([1280, 1280, 1, 1]).
size mismatch for model.diffusion_model.middle_block.1.proj_in.weight: copying a param
with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is
torch.Size([1280, 1280, 1, 1]).
size mismatch for
model.diffusion_model.middle_block.1.transformer_blocks.0.attn2.to_k.weight: copying a param
with shape torch.Size([1280, 2048]) from checkpoint, the shape in current model is
torch.Size([1280, 768]).
size mismatch for
model.diffusion_model.middle_block.1.transformer_blocks.0.attn2.to_v.weight: copying a param
with shape torch.Size([1280, 2048]) from checkpoint, the shape in current model is
torch.Size([1280, 768]).
size mismatch for model.diffusion_model.middle_block.1.proj_out.weight: copying a param
with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is
torch.Size([1280, 1280, 1, 1]).
size mismatch for model.diffusion_model.output_blocks.2.0.in_layers.0.weight: copying a
param with shape torch.Size([1920]) from checkpoint, the shape in current model is
torch.Size([2560]).
size mismatch for model.diffusion_model.output_blocks.2.0.in_layers.0.bias: copying a
param with shape torch.Size([1920]) from checkpoint, the shape in current model is
torch.Size([2560]).
size mismatch for model.diffusion_model.output_blocks.2.0.in_layers.2.weight: copying a
param with shape torch.Size([1280, 1920, 3, 3]) from checkpoint, the shape in current model is
torch.Size([1280, 2560, 3, 3]).
size mismatch for model.diffusion_model.output_blocks.2.0.skip_connection.weight:
copying a param with shape torch.Size([1280, 1920, 1, 1]) from checkpoint, the shape in current
model is torch.Size([1280, 2560, 1, 1]).
size mismatch for model.diffusion_model.output_blocks.3.0.in_layers.0.weight: copying a
param with shape torch.Size([1920]) from checkpoint, the shape in current model is
torch.Size([2560]).
size mismatch for model.diffusion_model.output_blocks.3.0.in_layers.0.bias: copying a
param with shape torch.Size([1920]) from checkpoint, the shape in current model is
torch.Size([2560]).
size mismatch for model.diffusion_model.output_blocks.3.0.in_layers.2.weight: copying a
param with shape torch.Size([640, 1920, 3, 3]) from checkpoint, the shape in current model is
torch.Size([1280, 2560, 3, 3]).
size mismatch for model.diffusion_model.output_blocks.3.0.in_layers.2.bias: copying a
param with shape torch.Size([640]) from checkpoint, the shape in current model is
torch.Size([1280]).
size mismatch for model.diffusion_model.output_blocks.3.0.emb_layers.1.weight: copying a
param with shape torch.Size([640, 1280]) from checkpoint, the shape in current model is
torch.Size([1280, 1280]).
size mismatch for model.diffusion_model.output_blocks.3.0.emb_layers.1.bias: copying a
param with shape torch.Size([640]) from checkpoint, the shape in current model is
torch.Size([1280]).
size mismatch for model.diffusion_model.output_blocks.3.0.out_layers.0.weight: copying a
param with shape torch.Size([640]) from checkpoint, the shape in current model is
torch.Size([1280]).
size mismatch for model.diffusion_model.output_blocks.3.0.out_layers.0.bias: copying a
param with shape torch.Size([640]) from checkpoint, the shape in current model is
torch.Size([1280]).
size mismatch for model.diffusion_model.output_blocks.3.0.out_layers.3.weight: copying a
param with shape torch.Size([640, 640, 3, 3]) from checkpoint, the shape in current model is
torch.Size([1280, 1280, 3, 3]).
size mismatch for model.diffusion_model.output_blocks.3.0.out_layers.3.bias: copying a
param with shape torch.Size([640]) from checkpoint, the shape in current model is
torch.Size([1280]).
size mismatch for model.diffusion_model.output_blocks.3.0.skip_connection.weight:
copying a param with shape torch.Size([640, 1920, 1, 1]) from checkpoint, the shape in current
model is torch.Size([1280, 2560, 1, 1]).
size mismatch for model.diffusion_model.output_blocks.3.0.skip_connection.bias: copying
a param with shape torch.Size([640]) from checkpoint, the shape in current model is
torch.Size([1280]).
size mismatch for model.diffusion_model.output_blocks.3.1.norm.weight: copying a param
with shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([1280]).
size mismatch for model.diffusion_model.output_blocks.3.1.norm.bias: copying a param
with shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([1280]).
size mismatch for model.diffusion_model.output_blocks.3.1.proj_in.weight: copying a
param with shape torch.Size([640, 640]) from checkpoint, the shape in current model is
torch.Size([1280, 1280, 1, 1]).
size mismatch for model.diffusion_model.output_blocks.3.1.proj_in.bias: copying a param
with shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([1280]).
size mismatch for
model.diffusion_model.output_blocks.3.1.transformer_blocks.0.attn1.to_q.weight: copying a param
with shape torch.Size([640, 640]) from checkpoint, the shape in current model is
torch.Size([1280, 1280]).
size mismatch for
model.diffusion_model.output_blocks.3.1.transformer_blocks.0.attn1.to_k.weight: copying a param
with shape torch.Size([640, 640]) from checkpoint, the shape in current model is
torch.Size([1280, 1280]).
size mismatch for
model.diffusion_model.output_blocks.3.1.transformer_blocks.0.attn1.to_v.weight: copying a param
with shape torch.Size([640, 640]) from checkpoint, the shape in current model is
torch.Size([1280, 1280]).
size mismatch for
model.diffusion_model.output_blocks.3.1.transformer_blocks.0.attn1.to_out.0.weight: copying a
param with shape torch.Size([640, 640]) from checkpoint, the shape in current model is
torch.Size([1280, 1280]).
size mismatch for
model.diffusion_model.output_blocks.3.1.transformer_blocks.0.attn1.to_out.0.bias: copying a
param with shape torch.Size([640]) from checkpoint, the shape in current model is
torch.Size([1280]).
size mismatch for
model.diffusion_model.output_blocks.3.1.transformer_blocks.0.ff.net.0.proj.weight: copying a
param with shape torch.Size([5120, 640]) from checkpoint, the shape in current model is
torch.Size([10240, 1280]).
size mismatch for
model.diffusion_model.output_blocks.3.1.transformer_blocks.0.ff.net.0.proj.bias: copying a param
with shape torch.Size([5120]) from checkpoint, the shape in current model is
torch.Size([10240]).
size mismatch for
model.diffusion_model.output_blocks.3.1.transformer_blocks.0.ff.net.2.weight: copying a param
with shape torch.Size([640, 2560]) from checkpoint, the shape in current model is
torch.Size([1280, 5120]).
size mismatch for
model.diffusion_model.output_blocks.3.1.transformer_blocks.0.ff.net.2.bias: copying a param with
shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([1280]).
size mismatch for
model.diffusion_model.output_blocks.3.1.transformer_blocks.0.attn2.to_q.weight: copying a param
with shape torch.Size([640, 640]) from checkpoint, the shape in current model is
torch.Size([1280, 1280]).
size mismatch for
model.diffusion_model.output_blocks.3.1.transformer_blocks.0.attn2.to_k.weight: copying a param
with shape torch.Size([640, 2048]) from checkpoint, the shape in current model is
torch.Size([1280, 768]).
size mismatch for
model.diffusion_model.output_blocks.3.1.transformer_blocks.0.attn2.to_v.weight: copying a param
with shape torch.Size([640, 2048]) from checkpoint, the shape in current model is
torch.Size([1280, 768]).
size mismatch for
model.diffusion_model.output_blocks.3.1.transformer_blocks.0.attn2.to_out.0.weight: copying a
param with shape torch.Size([640, 640]) from checkpoint, the shape in current model is
torch.Size([1280, 1280]).
size mismatch for
model.diffusion_model.output_blocks.3.1.transformer_blocks.0.attn2.to_out.0.bias: copying a
param with shape torch.Size([640]) from checkpoint, the shape in current model is
torch.Size([1280]).
size mismatch for
model.diffusion_model.output_blocks.3.1.transformer_blocks.0.norm1.weight: copying a param with
shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([1280]).
size mismatch for
model.diffusion_model.output_blocks.3.1.transformer_blocks.0.norm1.bias: copying a param with
shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([1280]).
size mismatch for
model.diffusion_model.output_blocks.3.1.transformer_blocks.0.norm2.weight: copying a param with
shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([1280]).
size mismatch for
model.diffusion_model.output_blocks.3.1.transformer_blocks.0.norm2.bias: copying a param with
shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([1280]).
size mismatch for
model.diffusion_model.output_blocks.3.1.transformer_blocks.0.norm3.weight: copying a param with
shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([1280]).
size mismatch for
model.diffusion_model.output_blocks.3.1.transformer_blocks.0.norm3.bias: copying a param with
shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([1280]).
size mismatch for model.diffusion_model.output_blocks.3.1.proj_out.weight: copying a
param with shape torch.Size([640, 640]) from checkpoint, the shape in current model is
torch.Size([1280, 1280, 1, 1]).
size mismatch for model.diffusion_model.output_blocks.3.1.proj_out.bias: copying a param
with shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([1280]).
size mismatch for model.diffusion_model.output_blocks.4.0.in_layers.0.weight: copying a
param with shape torch.Size([1280]) from checkpoint, the shape in current model is
torch.Size([2560]).
size mismatch for model.diffusion_model.output_blocks.4.0.in_layers.0.bias: copying a
param with shape torch.Size([1280]) from checkpoint, the shape in current model is
torch.Size([2560]).
size mismatch for model.diffusion_model.output_blocks.4.0.in_layers.2.weight: copying a
param with shape torch.Size([640, 1280, 3, 3]) from checkpoint, the shape in current model is
torch.Size([1280, 2560, 3, 3]).
size mismatch for model.diffusion_model.output_blocks.4.0.in_layers.2.bias: copying a
param with shape torch.Size([640]) from checkpoint, the shape in current model is
torch.Size([1280]).
size mismatch for model.diffusion_model.output_blocks.4.0.emb_layers.1.weight: copying a
param with shape torch.Size([640, 1280]) from checkpoint, the shape in current model is
torch.Size([1280, 1280]).
size mismatch for model.diffusion_model.output_blocks.4.0.emb_layers.1.bias: copying a
param with shape torch.Size([640]) from checkpoint, the shape in current model is
torch.Size([1280]).
size mismatch for model.diffusion_model.output_blocks.4.0.out_layers.0.weight: copying a
param with shape torch.Size([640]) from checkpoint, the shape in current model is
torch.Size([1280]).
size mismatch for model.diffusion_model.output_blocks.4.0.out_layers.0.bias: copying a
param with shape torch.Size([640]) from checkpoint, the shape in current model is
torch.Size([1280]).
size mismatch for model.diffusion_model.output_blocks.4.0.out_layers.3.weight: copying a
param with shape torch.Size([640, 640, 3, 3]) from checkpoint, the shape in current model is
torch.Size([1280, 1280, 3, 3]).
size mismatch for model.diffusion_model.output_blocks.4.0.out_layers.3.bias: copying a
param with shape torch.Size([640]) from checkpoint, the shape in current model is
torch.Size([1280]).
size mismatch for model.diffusion_model.output_blocks.4.0.skip_connection.weight:
copying a param with shape torch.Size([640, 1280, 1, 1]) from checkpoint, the shape in current
model is torch.Size([1280, 2560, 1, 1]).
size mismatch for model.diffusion_model.output_blocks.4.0.skip_connection.bias: copying
a param with shape torch.Size([640]) from checkpoint, the shape in current model is
torch.Size([1280]).
size mismatch for model.diffusion_model.output_blocks.4.1.norm.weight: copying a param
with shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([1280]).
size mismatch for model.diffusion_model.output_blocks.4.1.norm.bias: copying a param
with shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([1280]).
size mismatch for model.diffusion_model.output_blocks.4.1.proj_in.weight: copying a
param with shape torch.Size([640, 640]) from checkpoint, the shape in current model is
torch.Size([1280, 1280, 1, 1]).
size mismatch for model.diffusion_model.output_blocks.4.1.proj_in.bias: copying a param
with shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([1280]).
size mismatch for
model.diffusion_model.output_blocks.4.1.transformer_blocks.0.attn1.to_q.weight: copying a param
with shape torch.Size([640, 640]) from checkpoint, the shape in current model is
torch.Size([1280, 1280]).
size mismatch for
model.diffusion_model.output_blocks.4.1.transformer_blocks.0.attn1.to_k.weight: copying a param
with shape torch.Size([640, 640]) from checkpoint, the shape in current model is
torch.Size([1280, 1280]).
size mismatch for
model.diffusion_model.output_blocks.4.1.transformer_blocks.0.attn1.to_v.weight: copying a param
with shape torch.Size([640, 640]) from checkpoint, the shape in current model is
torch.Size([1280, 1280]).
size mismatch for
model.diffusion_model.output_blocks.4.1.transformer_blocks.0.attn1.to_out.0.weight: copying a
param with shape torch.Size([640, 640]) from checkpoint, the shape in current model is
torch.Size([1280, 1280]).
size mismatch for
model.diffusion_model.output_blocks.4.1.transformer_blocks.0.attn1.to_out.0.bias: copying a
param with shape torch.Size([640]) from checkpoint, the shape in current model is
torch.Size([1280]).
size mismatch for
model.diffusion_model.output_blocks.4.1.transformer_blocks.0.ff.net.0.proj.weight: copying a
param with shape torch.Size([5120, 640]) from checkpoint, the shape in current model is
torch.Size([10240, 1280]).
size mismatch for
model.diffusion_model.output_blocks.4.1.transformer_blocks.0.ff.net.0.proj.bias: copying a param
with shape torch.Size([5120]) from checkpoint, the shape in current model is
torch.Size([10240]).
size mismatch for
model.diffusion_model.output_blocks.4.1.transformer_blocks.0.ff.net.2.weight: copying a param
with shape torch.Size([640, 2560]) from checkpoint, the shape in current model is
torch.Size([1280, 5120]).
size mismatch for
model.diffusion_model.output_blocks.4.1.transformer_blocks.0.ff.net.2.bias: copying a param with
shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([1280]).
size mismatch for
model.diffusion_model.output_blocks.4.1.transformer_blocks.0.attn2.to_q.weight: copying a param
with shape torch.Size([640, 640]) from checkpoint, the shape in current model is
torch.Size([1280, 1280]).
size mismatch for
model.diffusion_model.output_blocks.4.1.transformer_blocks.0.attn2.to_k.weight: copying a param
with shape torch.Size([640, 2048]) from checkpoint, the shape in current model is
torch.Size([1280, 768]).
size mismatch for
model.diffusion_model.output_blocks.4.1.transformer_blocks.0.attn2.to_v.weight: copying a param
with shape torch.Size([640, 2048]) from checkpoint, the shape in current model is
torch.Size([1280, 768]).
size mismatch for
model.diffusion_model.output_blocks.4.1.transformer_blocks.0.attn2.to_out.0.weight: copying a
param with shape torch.Size([640, 640]) from checkpoint, the shape in current model is
torch.Size([1280, 1280]).
size mismatch for
model.diffusion_model.output_blocks.4.1.transformer_blocks.0.attn2.to_out.0.bias: copying a
param with shape torch.Size([640]) from checkpoint, the shape in current model is
torch.Size([1280]).
size mismatch for
model.diffusion_model.output_blocks.4.1.transformer_blocks.0.norm1.weight: copying a param with
shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([1280]).
size mismatch for
model.diffusion_model.output_blocks.4.1.transformer_blocks.0.norm1.bias: copying a param with
shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([1280]).
size mismatch for
model.diffusion_model.output_blocks.4.1.transformer_blocks.0.norm2.weight: copying a param with
shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([1280]).
size mismatch for
model.diffusion_model.output_blocks.4.1.transformer_blocks.0.norm2.bias: copying a param with
shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([1280]).
size mismatch for
model.diffusion_model.output_blocks.4.1.transformer_blocks.0.norm3.weight: copying a param with
shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([1280]).
size mismatch for
model.diffusion_model.output_blocks.4.1.transformer_blocks.0.norm3.bias: copying a param with
shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([1280]).
size mismatch for model.diffusion_model.output_blocks.4.1.proj_out.weight: copying a
param with shape torch.Size([640, 640]) from checkpoint, the shape in current model is
torch.Size([1280, 1280, 1, 1]).
size mismatch for model.diffusion_model.output_blocks.4.1.proj_out.bias: copying a param
with shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([1280]).
size mismatch for model.diffusion_model.output_blocks.5.0.in_layers.0.weight: copying a
param with shape torch.Size([960]) from checkpoint, the shape in current model is
torch.Size([1920]).
size mismatch for model.diffusion_model.output_blocks.5.0.in_layers.0.bias: copying a
param with shape torch.Size([960]) from checkpoint, the shape in current model is
torch.Size([1920]).
size mismatch for model.diffusion_model.output_blocks.5.0.in_layers.2.weight: copying a
param with shape torch.Size([640, 960, 3, 3]) from checkpoint, the shape in current model is
torch.Size([1280, 1920, 3, 3]).
size mismatch for model.diffusion_model.output_blocks.5.0.in_layers.2.bias: copying a
param with shape torch.Size([640]) from checkpoint, the shape in current model is
torch.Size([1280]).
size mismatch for model.diffusion_model.output_blocks.5.0.emb_layers.1.weight: copying a
param with shape torch.Size([640, 1280]) from checkpoint, the shape in current model is
torch.Size([1280, 1280]).
size mismatch for model.diffusion_model.output_blocks.5.0.emb_layers.1.bias: copying a
param with shape torch.Size([640]) from checkpoint, the shape in current model is
torch.Size([1280]).
size mismatch for model.diffusion_model.output_blocks.5.0.out_layers.0.weight: copying a
param with shape torch.Size([640]) from checkpoint, the shape in current model is
torch.Size([1280]).
size mismatch for model.diffusion_model.output_blocks.5.0.out_layers.0.bias: copying a
param with shape torch.Size([640]) from checkpoint, the shape in current model is
torch.Size([1280]).
size mismatch for model.diffusion_model.output_blocks.5.0.out_layers.3.weight: copying a
param with shape torch.Size([640, 640, 3, 3]) from checkpoint, the shape in current model is
torch.Size([1280, 1280, 3, 3]).
size mismatch for model.diffusion_model.output_blocks.5.0.out_layers.3.bias: copying a
param with shape torch.Size([640]) from checkpoint, the shape in current model is
torch.Size([1280]).
size mismatch for model.diffusion_model.output_blocks.5.0.skip_connection.weight:
copying a param with shape torch.Size([640, 960, 1, 1]) from checkpoint, the shape in current
model is torch.Size([1280, 1920, 1, 1]).
size mismatch for model.diffusion_model.output_blocks.5.0.skip_connection.bias: copying
a param with shape torch.Size([640]) from checkpoint, the shape in current model is
torch.Size([1280]).
size mismatch for model.diffusion_model.output_blocks.5.1.norm.weight: copying a param
with shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([1280]).
size mismatch for model.diffusion_model.output_blocks.5.1.norm.bias: copying a param
with shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([1280]).
size mismatch for model.diffusion_model.output_blocks.5.1.proj_in.weight: copying a
param with shape torch.Size([640, 640]) from checkpoint, the shape in current model is
torch.Size([1280, 1280, 1, 1]).
size mismatch for model.diffusion_model.output_blocks.5.1.proj_in.bias: copying a param
with shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([1280]).
size mismatch for
model.diffusion_model.output_blocks.5.1.transformer_blocks.0.attn1.to_q.weight: copying a param
with shape torch.Size([640, 640]) from checkpoint, the shape in current model is
torch.Size([1280, 1280]).
size mismatch for
model.diffusion_model.output_blocks.5.1.transformer_blocks.0.attn1.to_k.weight: copying a param
with shape torch.Size([640, 640]) from checkpoint, the shape in current model is
torch.Size([1280, 1280]).
size mismatch for
model.diffusion_model.output_blocks.5.1.transformer_blocks.0.attn1.to_v.weight: copying a param
with shape torch.Size([640, 640]) from checkpoint, the shape in current model is
torch.Size([1280, 1280]).
size mismatch for
model.diffusion_model.output_blocks.5.1.transformer_blocks.0.attn1.to_out.0.weight: copying a
param with shape torch.Size([640, 640]) from checkpoint, the shape in current model is
torch.Size([1280, 1280]).
size mismatch for
model.diffusion_model.output_blocks.5.1.transformer_blocks.0.attn1.to_out.0.bias: copying a
param with shape torch.Size([640]) from checkpoint, the shape in current model is
torch.Size([1280]).
size mismatch for
model.diffusion_model.output_blocks.5.1.transformer_blocks.0.ff.net.0.proj.weight: copying a
param with shape torch.Size([5120, 640]) from checkpoint, the shape in current model is
torch.Size([10240, 1280]).
size mismatch for
model.diffusion_model.output_blocks.5.1.transformer_blocks.0.ff.net.0.proj.bias: copying a param
with shape torch.Size([5120]) from checkpoint, the shape in current model is
torch.Size([10240]).
size mismatch for
model.diffusion_model.output_blocks.5.1.transformer_blocks.0.ff.net.2.weight: copying a param
with shape torch.Size([640, 2560]) from checkpoint, the shape in current model is
torch.Size([1280, 5120]).
size mismatch for
model.diffusion_model.output_blocks.5.1.transformer_blocks.0.ff.net.2.bias: copying a param with
shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([1280]).
size mismatch for
model.diffusion_model.output_blocks.5.1.transformer_blocks.0.attn2.to_q.weight: copying a param
with shape torch.Size([640, 640]) from checkpoint, the shape in current model is
torch.Size([1280, 1280]).
size mismatch for
model.diffusion_model.output_blocks.5.1.transformer_blocks.0.attn2.to_k.weight: copying a param
with shape torch.Size([640, 2048]) from checkpoint, the shape in current model is
torch.Size([1280, 768]).
size mismatch for
model.diffusion_model.output_blocks.5.1.transformer_blocks.0.attn2.to_v.weight: copying a param
with shape torch.Size([640, 2048]) from checkpoint, the shape in current model is
torch.Size([1280, 768]).
size mismatch for
model.diffusion_model.output_blocks.5.1.transformer_blocks.0.attn2.to_out.0.weight: copying a
param with shape torch.Size([640, 640]) from checkpoint, the shape in current model is
torch.Size([1280, 1280]).
size mismatch for
model.diffusion_model.output_blocks.5.1.transformer_blocks.0.attn2.to_out.0.bias: copying a
param with shape torch.Size([640]) from checkpoint, the shape in current model is
torch.Size([1280]).
size mismatch for
model.diffusion_model.output_blocks.5.1.transformer_blocks.0.norm1.weight: copying a param with
shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([1280]).
size mismatch for
model.diffusion_model.output_blocks.5.1.transformer_blocks.0.norm1.bias: copying a param with
shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([1280]).
size mismatch for
model.diffusion_model.output_blocks.5.1.transformer_blocks.0.norm2.weight: copying a param with
shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([1280]).
size mismatch for
model.diffusion_model.output_blocks.5.1.transformer_blocks.0.norm2.bias: copying a param with
shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([1280]).
size mismatch for
model.diffusion_model.output_blocks.5.1.transformer_blocks.0.norm3.weight: copying a param with
shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([1280]).
size mismatch for
model.diffusion_model.output_blocks.5.1.transformer_blocks.0.norm3.bias: copying a param with
shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([1280]).
size mismatch for model.diffusion_model.output_blocks.5.1.proj_out.weight: copying a
param with shape torch.Size([640, 640]) from checkpoint, the shape in current model is
torch.Size([1280, 1280, 1, 1]).
size mismatch for model.diffusion_model.output_blocks.5.1.proj_out.bias: copying a param
with shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([1280]).
size mismatch for model.diffusion_model.output_blocks.5.2.conv.weight: copying a param
with shape torch.Size([640, 640, 3, 3]) from checkpoint, the shape in current model is
torch.Size([1280, 1280, 3, 3]).
size mismatch for model.diffusion_model.output_blocks.5.2.conv.bias: copying a param
with shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([1280]).
size mismatch for model.diffusion_model.output_blocks.6.0.in_layers.0.weight: copying a
param with shape torch.Size([960]) from checkpoint, the shape in current model is
torch.Size([1920]).
size mismatch for model.diffusion_model.output_blocks.6.0.in_layers.0.bias: copying a
param with shape torch.Size([960]) from checkpoint, the shape in current model is
torch.Size([1920]).
size mismatch for model.diffusion_model.output_blocks.6.0.in_layers.2.weight: copying a
param with shape torch.Size([320, 960, 3, 3]) from checkpoint, the shape in current model is
torch.Size([640, 1920, 3, 3]).
size mismatch for model.diffusion_model.output_blocks.6.0.in_layers.2.bias: copying a
param with shape torch.Size([320]) from checkpoint, the shape in current model is
torch.Size([640]).
size mismatch for model.diffusion_model.output_blocks.6.0.emb_layers.1.weight: copying a
param with shape torch.Size([320, 1280]) from checkpoint, the shape in current model is
torch.Size([640, 1280]).
size mismatch for model.diffusion_model.output_blocks.6.0.emb_layers.1.bias: copying a
param with shape torch.Size([320]) from checkpoint, the shape in current model is
torch.Size([640]).
size mismatch for model.diffusion_model.output_blocks.6.0.out_layers.0.weight: copying a
param with shape torch.Size([320]) from checkpoint, the shape in current model is
torch.Size([640]).
size mismatch for model.diffusion_model.output_blocks.6.0.out_layers.0.bias: copying a
param with shape torch.Size([320]) from checkpoint, the shape in current model is
torch.Size([640]).
size mismatch for model.diffusion_model.output_blocks.6.0.out_layers.3.weight: copying a
param with shape torch.Size([320, 320, 3, 3]) from checkpoint, the shape in current model is
torch.Size([640, 640, 3, 3]).
size mismatch for model.diffusion_model.output_blocks.6.0.out_layers.3.bias: copying a
param with shape torch.Size([320]) from checkpoint, the shape in current model is
torch.Size([640]).
size mismatch for model.diffusion_model.output_blocks.6.0.skip_connection.weight:
copying a param with shape torch.Size([320, 960, 1, 1]) from checkpoint, the shape in current
model is torch.Size([640, 1920, 1, 1]).
size mismatch for model.diffusion_model.output_blocks.6.0.skip_connection.bias: copying
a param with shape torch.Size([320]) from checkpoint, the shape in current model is
torch.Size([640]).
size mismatch for model.diffusion_model.output_blocks.7.0.in_layers.0.weight: copying a
param with shape torch.Size([640]) from checkpoint, the shape in current model is
torch.Size([1280]).
size mismatch for model.diffusion_model.output_blocks.7.0.in_layers.0.bias: copying a
param with shape torch.Size([640]) from checkpoint, the shape in current model is
torch.Size([1280]).
size mismatch for model.diffusion_model.output_blocks.7.0.in_layers.2.weight: copying a
param with shape torch.Size([320, 640, 3, 3]) from checkpoint, the shape in current model is
torch.Size([640, 1280, 3, 3]).
size mismatch for model.diffusion_model.output_blocks.7.0.in_layers.2.bias: copying a
param with shape torch.Size([320]) from checkpoint, the shape in current model is
torch.Size([640]).
size mismatch for model.diffusion_model.output_blocks.7.0.emb_layers.1.weight: copying a
param with shape torch.Size([320, 1280]) from checkpoint, the shape in current model is
torch.Size([640, 1280]).
size mismatch for model.diffusion_model.output_blocks.7.0.emb_layers.1.bias: copying a
param with shape torch.Size([320]) from checkpoint, the shape in current model is
torch.Size([640]).
size mismatch for model.diffusion_model.output_blocks.7.0.out_layers.0.weight: copying a
param with shape torch.Size([320]) from checkpoint, the shape in current model is
torch.Size([640]).
size mismatch for model.diffusion_model.output_blocks.7.0.out_layers.0.bias: copying a
param with shape torch.Size([320]) from checkpoint, the shape in current model is
torch.Size([640]).
size mismatch for model.diffusion_model.output_blocks.7.0.out_layers.3.weight: copying a
param with shape torch.Size([320, 320, 3, 3]) from checkpoint, the shape in current model is
torch.Size([640, 640, 3, 3]).
size mismatch for model.diffusion_model.output_blocks.7.0.out_layers.3.bias: copying a
param with shape torch.Size([320]) from checkpoint, the shape in current model is
torch.Size([640]).
size mismatch for model.diffusion_model.output_blocks.7.0.skip_connection.weight:
copying a param with shape torch.Size([320, 640, 1, 1]) from checkpoint, the shape in current
model is torch.Size([640, 1280, 1, 1]).
size mismatch for model.diffusion_model.output_blocks.7.0.skip_connection.bias: copying
a param with shape torch.Size([320]) from checkpoint, the shape in current model is
torch.Size([640]).
size mismatch for model.diffusion_model.output_blocks.8.0.in_layers.0.weight: copying a
param with shape torch.Size([640]) from checkpoint, the shape in current model is
torch.Size([960]).
size mismatch for model.diffusion_model.output_blocks.8.0.in_layers.0.bias: copying a
param with shape torch.Size([640]) from checkpoint, the shape in current model is
torch.Size([960]).
size mismatch for model.diffusion_model.output_blocks.8.0.in_layers.2.weight: copying a
param with shape torch.Size([320, 640, 3, 3]) from checkpoint, the shape in current model is
torch.Size([640, 960, 3, 3]).
size mismatch for model.diffusion_model.output_blocks.8.0.in_layers.2.bias: copying a
param with shape torch.Size([320]) from checkpoint, the shape in current model is
torch.Size([640]).
size mismatch for model.diffusion_model.output_blocks.8.0.emb_layers.1.weight: copying a
param with shape torch.Size([320, 1280]) from checkpoint, the shape in current model is
torch.Size([640, 1280]).
size mismatch for model.diffusion_model.output_blocks.8.0.emb_layers.1.bias: copying a
param with shape torch.Size([320]) from checkpoint, the shape in current model is
torch.Size([640]).
size mismatch for model.diffusion_model.output_blocks.8.0.out_layers.0.weight: copying a
param with shape torch.Size([320]) from checkpoint, the shape in current model is
torch.Size([640]).
size mismatch for model.diffusion_model.output_blocks.8.0.out_layers.0.bias: copying a
param with shape torch.Size([320]) from checkpoint, the shape in current model is
torch.Size([640]).
size mismatch for model.diffusion_model.output_blocks.8.0.out_layers.3.weight: copying a
param with shape torch.Size([320, 320, 3, 3]) from checkpoint, the shape in current model is
torch.Size([640, 640, 3, 3]).
size mismatch for model.diffusion_model.output_blocks.8.0.out_layers.3.bias: copying a
param with shape torch.Size([320]) from checkpoint, the shape in current model is
torch.Size([640]).
size mismatch for model.diffusion_model.output_blocks.8.0.skip_connection.weight:
copying a param with shape torch.Size([320, 640, 1, 1]) from checkpoint, the shape in current
model is torch.Size([640, 960, 1, 1]).
size mismatch for model.diffusion_model.output_blocks.8.0.skip_connection.bias: copying
a param with shape torch.Size([320]) from checkpoint, the shape in current model is
torch.Size([640]).

@vladmandic
Owner

@bbecausereasonss
this has nothing to do with fp16 or producing black images, your sd-xl model load is clearly not working and thats because you havent switched backend from original to diffusers. please read wiki.

@vladmandic
Owner

vladmandic commented Aug 20, 2023

i'm closing this issue as the original author has not provided updates in several weeks and all recent comments are unrelated to the original issue. if there is an update to the original issue, i'll reopen.

everyone else having problems: a) please check the wiki first, it really does solve almost everything noted here; b) if that doesn't help, create a new issue - piling onto unrelated issues prevents me from helping.

@jiarenyf

self.vae.to(torch.float32)
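The one-liner above casts the pipeline's VAE to full precision, since the original SDXL 1.0 VAE tends to overflow to NaN in fp16 and decode to black images. A minimal sketch of what that cast does, using a stand-in module rather than the real SDXL autoencoder:

```python
import torch

# Stand-in for a VAE decoder loaded in fp16; the rest of the pipeline
# can stay in float16 while only this submodule is upcast.
vae = torch.nn.Conv2d(4, 3, kernel_size=3).half()
latents = torch.randn(1, 4, 8, 8)

vae.to(torch.float32)         # upcast weights in place, as in the snippet above
image = vae(latents.float())  # decode in full precision to avoid fp16 overflow
print(image.dtype)            # torch.float32
```

With the real pipeline, the equivalent call is made on the loaded VAE object after model load; alternatively, a VAE checkpoint patched for fp16 avoids the upcast entirely.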
