SDXL Inpainting VAE Normalization #7225

Closed
Changes from all commits
@@ -939,6 +939,17 @@ def _encode_vae_image(self, image: torch.Tensor, generator: torch.Generator):
         else:
             image_latents = retrieve_latents(self.vae.encode(image), generator=generator)

+        has_latents_mean = hasattr(self.vae.config, "latents_mean") and self.vae.config.latents_mean is not None
@yiyixuxu (Collaborator) commented on Mar 9, 2024:
Why do we need it here? We didn't normalize the latents in SDXL text-to-image.

Member replied:
I think this would be needed because here we are encoding the input image. This is what we do in the SDXL DreamBooth LoRA training script.

Member:

But I agree that #7132 tackles it from a comprehensive perspective.

Collaborator:

OK, I re-opened it, and I'm happy to be proven wrong.
Once we merge #7132, you can update the branch, then test it out and show some results with and without the normalization.

Member:

Works for me!

+        has_latents_std = hasattr(self.vae.config, "latents_std") and self.vae.config.latents_std is not None
+        if has_latents_mean and has_latents_std:
+            latents_mean = (
+                torch.tensor(self.vae.config.latents_mean).view(1, 4, 1, 1).to(image_latents.device, image_latents.dtype)
+            )
+            latents_std = (
+                torch.tensor(self.vae.config.latents_std).view(1, 4, 1, 1).to(image_latents.device, image_latents.dtype)
+            )
+            image_latents = (image_latents - latents_mean) / latents_std

         if self.vae.config.force_upcast:
             self.vae.to(dtype)

@@ -1771,7 +1782,22 @@ def denoising_value_valid(dnv):
                 self.upcast_vae()
                 latents = latents.to(next(iter(self.vae.post_quant_conv.parameters())).dtype)

-            image = self.vae.decode(latents / self.vae.config.scaling_factor, return_dict=False)[0]
+            # unscale/denormalize the latents
+            # denormalize with the mean and std if available and not None
+            has_latents_mean = hasattr(self.vae.config, "latents_mean") and self.vae.config.latents_mean is not None
+            has_latents_std = hasattr(self.vae.config, "latents_std") and self.vae.config.latents_std is not None
+            if has_latents_mean and has_latents_std:
+                latents_mean = (
+                    torch.tensor(self.vae.config.latents_mean).view(1, 4, 1, 1).to(latents.device, latents.dtype)
+                )
+                latents_std = (
+                    torch.tensor(self.vae.config.latents_std).view(1, 4, 1, 1).to(latents.device, latents.dtype)
+                )
+                latents = latents * latents_std / self.vae.config.scaling_factor + latents_mean
+            else:
+                latents = latents / self.vae.config.scaling_factor
+
+            image = self.vae.decode(latents, return_dict=False)[0]

             # cast back to fp16 if needed
             if needs_upcasting:
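A minimal sketch of the algebra behind this PR, with made-up scalar values rather than real per-channel VAE statistics: the encode path computes `(z - mean) / std`, the pipeline's usual `scaling_factor` multiplication is assumed to happen elsewhere, and the decode path computes `z * std / scaling_factor + mean`. Chained together, the two transforms round-trip exactly. This is an illustrative sketch, not the diffusers implementation.

```python
# Hedged sketch with hypothetical numbers (not real SDXL VAE statistics).

def normalize(z, mean, std):
    # Encode side, as in _encode_vae_image above: shift and scale the latents.
    return (z - mean) / std

def denormalize(z, mean, std, scaling_factor):
    # Decode side, as in the new branch above: invert the transform
    # (including the scaling_factor applied elsewhere in the pipeline).
    return z * std / scaling_factor + mean

mean, std, scaling_factor = 0.25, 1.3, 0.13025  # hypothetical values
z = 0.8
# scaling_factor multiplication is assumed to happen between the two steps
z_norm = normalize(z, mean, std) * scaling_factor
assert abs(denormalize(z_norm, mean, std, scaling_factor) - z) < 1e-9
```

The round trip is exact because `denormalize` undoes each factor of `normalize` (and the intermediate scaling) in reverse order, which is why adding the normalization on encode requires the matching denormalization on decode.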