As the title suggests, we would like a pipeline that supports all three techniques. Currently, we have standalone pipelines for VideoToVideo and ControlNet. A combination of the two might be interesting to see!
What to expect?
At low strength values, you would most likely get outputs that are very similar to the original video, with only minor stylistic changes while the overall composition remains the same.
At high strength values, you would most likely get outputs that are very different from the original video, possibly with major stylistic differences, but the overall composition should remain similar thanks to the ControlNet conditioning.
You would have to make sure that the `strength` parameter and the ControlNet parameters are correctly supported, and you would most likely only need to change `prepare_latents` and `__call__`. When opening a PR, please provide a minimal reproducible example with demo outputs. Some good example PRs are #8861 and #8934. Some examples of running AnimateDiff are available at #9231 and #6551.
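To illustrate how `strength` typically interacts with `prepare_latents`, here is a minimal, dependency-free sketch of the timestep-truncation logic that video-to-video pipelines in diffusers commonly use. The function name and variables are illustrative, not the actual pipeline API; a real implementation would also add noise to the encoded source video at the first kept timestep.

```python
# Hypothetical sketch of strength-based timestep selection (names are
# illustrative, not the actual diffusers API).
def get_timesteps(num_inference_steps, strength, timesteps):
    # Higher strength -> more denoising steps are actually run, so the
    # output deviates more from the source video.
    init_timestep = min(int(num_inference_steps * strength), num_inference_steps)
    t_start = max(num_inference_steps - init_timestep, 0)
    return timesteps[t_start:], num_inference_steps - t_start

# 50 scheduler timesteps, descending, as a stand-in for scheduler.timesteps.
timesteps = list(range(1000, 0, -20))
kept, num_steps = get_timesteps(50, 0.6, timesteps)
print(num_steps)  # 30: at strength=0.6, only the last 60% of steps run
```

In a combined pipeline, the ControlNet residuals would then be computed and passed to the UNet on each of these kept timesteps inside `__call__`, so the conditioning constrains composition while `strength` controls how far the output drifts from the source video.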
For reviews, you can tag me and @DN6.