animatediff result is worse than origin Repo #5881
Comments
@akk-123 Looking into it. Just to confirm, did the results use the exact same prompts, seeds, and schedulers? I ask because the subjects seem to be different in the two results you shared.

@DN6 Yes, I used the same config.

@DN6 Any progress on it?

@DN6 Hi, any progress on it? I ran into the same problem. Maybe the AnimateDiff implementation in diffusers has a bug?

I'm hitting the same problem; the result is significantly different from the original repo.
Hi @akk-123, taking a look into it this week. Do you notice the same quality difference when you use other checkpoints?

Yes, all checkpoints are worse. I agree; maybe the AnimateDiff implementation in diffusers has a bug.
Hi @akk-123, I think the issue is with the scheduler configuration. Can you try the following snippet:

```python
import torch

from diffusers import AnimateDiffPipeline, DDIMScheduler, MotionAdapter
from diffusers.utils import export_to_gif

adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2")
model_id = "pagebrain/majicmix-realistic-v7"
pipe = AnimateDiffPipeline.from_pretrained(model_id, motion_adapter=adapter)

# The key change: use DDIM with a linear beta schedule,
# matching the original AnimateDiff training setup
scheduler = DDIMScheduler.from_pretrained(
    model_id,
    subfolder="scheduler",
    beta_schedule="linear",
    clip_sample=False,
    timestep_spacing="linspace",
    steps_offset=1,
)
pipe.scheduler = scheduler
pipe.to("cuda")
pipe.enable_vae_slicing()

prompt = "1girl, offshoulder, light smile, shiny skin best quality, masterpiece, photorealistic"
seed = 42
frames = pipe(
    prompt=prompt,
    num_frames=16,
    guidance_scale=7.5,
    num_inference_steps=30,
    generator=torch.Generator("cpu").manual_seed(seed),
).frames[0]
export_to_gif(frames, "output.gif")
```
@DN6 Thanks, I tried it and it works. Why does this parameter (beta_schedule) have such a big impact on the results?

I think the original repository trained the model using the linear beta schedule with DDIM, so the motion checkpoint may be sensitive to it.
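For context on why `beta_schedule` matters: Stable Diffusion checkpoints typically ship with `scaled_linear`, while `linear` is what the comment above says the motion module was trained with. A minimal pure-Python sketch (assuming the common SD v1.5 defaults `beta_start=0.00085`, `beta_end=0.012`, 1000 training timesteps; these are assumptions, not values from this thread) shows the two schedules agree only at the endpoints:

```python
def linspace(a, b, n):
    """Evenly spaced values from a to b inclusive."""
    return [a + (b - a) * i / (n - 1) for i in range(n)]

beta_start, beta_end, T = 0.00085, 0.012, 1000  # assumed SD v1.5 defaults

# "linear": betas evenly spaced between beta_start and beta_end
linear = linspace(beta_start, beta_end, T)

# "scaled_linear": evenly spaced between sqrt(beta_start) and sqrt(beta_end),
# then squared -- this pushes more of the range toward small betas
scaled_linear = [b * b for b in linspace(beta_start**0.5, beta_end**0.5, T)]

# Same endpoints, but scaled_linear adds noticeably less noise mid-schedule,
# so a model trained on one schedule is denoising off-distribution on the other.
print(linear[T // 2], scaled_linear[T // 2])
```

Since the UNet and motion module learned to predict noise under one specific noise level per timestep, swapping the schedule at inference shifts every intermediate noise level, which plausibly explains the blur.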
Describe the bug
It seems that the AnimateDiff result is bad; the generated images are blurry. However, when I use sd-webui or the original AnimateDiff repo, I get good results.
diffusers result
https://github.com/huggingface/diffusers/assets/98469560/994c50d7-0568-4798-8e33-61251c77cd36
sd-webui / original repo result
https://github.com/huggingface/diffusers/assets/98469560/dee390c2-7e95-4cee-a041-086d95d03cc8
Reproduction
Logs
No response
System Info
diffusers version: 0.23.0

Who can help?
No response