Yuzhong Zhao¹, Yingya Zhang², Qixiang Ye¹, Fang Wan¹†
³Institute of Automation, Chinese Academy of Sciences, ⁴Fudan University, ⁵Nanyang Technological University
We introduce Timestep Embedding Aware Cache (TeaCache), a training-free caching approach that estimates and leverages the fluctuating differences among model outputs across timesteps, thereby accelerating inference. TeaCache works well for video, image, and audio diffusion models. For more details and results, please visit our project page.
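The idea in one loop: track how much the timestep-embedding-modulated input changes between steps, accumulate a rescaled estimate of that change, and reuse a cached residual whenever the estimate stays below a threshold. Below is a minimal, hedged sketch only: the `modulate`, `forward`, and `scheduler_step` hooks, the coefficient values, and the threshold are illustrative placeholders, not the actual TeaCache API; see the TeaCache4* folders in this repo for the real integrations.

```python
# Hypothetical sketch of the TeaCache idea (placeholder hooks, not the real API).
import numpy as np


def sample_with_teacache(model, latents, timesteps, rel_l1_thresh=0.1,
                         coefficients=(1.0, 0.0)):
    rescale = np.poly1d(coefficients)   # per-model polynomial rescaling (fitted offline)
    accumulated = 0.0
    prev_modulated = None
    cached_residual = None
    sample = latents

    for t in timesteps:
        # Timestep-embedding-modulated input: cheap to compute, and its change
        # across steps tracks how much the full output will change.
        modulated = model.modulate(sample, t)
        if prev_modulated is not None:
            rel_l1 = ((modulated - prev_modulated).abs().mean()
                      / prev_modulated.abs().mean()).item()
            accumulated += rescale(rel_l1)
        prev_modulated = modulated

        if cached_residual is not None and accumulated < rel_l1_thresh:
            # Estimated change is small: reuse the cached residual and skip the transformer.
            output = sample + cached_residual
        else:
            output = model.forward(sample, t)      # full (expensive) forward pass
            cached_residual = output - sample
            accumulated = 0.0

        sample = model.scheduler_step(output, t, sample)
    return sample
```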
- If you like our project, please give us a star ⭐ on GitHub for the latest updates.
- [2025/01/24] 🔥 Support Cosmos for both T2V and I2V. Thanks @zishen-ucap.
- [2025/01/20] 🔥 Support CogVideoX1.5-5B for both T2V and I2V. Thanks @zishen-ucap.
- [2025/01/07] 🔥 Support TangoFlux. TeaCache works well for Audio Diffusion Models!
- [2024/12/30] 🔥 Support Mochi and LTX-Video for Video Diffusion Models. Support Lumina-T2X for Image Diffusion Models.
- [2024/12/27] 🔥 Support FLUX. TeaCache works well for Image Diffusion Models!
- [2024/12/26] 🔥 Support ConsisID. Thanks @SHYuanBest.
- [2024/12/24] 🔥 Support HunyuanVideo.
- [2024/12/19] 🔥 Support CogVideoX.
- [2024/12/06] 🎉 Release the code of TeaCache. Support Open-Sora, Open-Sora-Plan and Latte.
- [2024/11/28] 🎉 Release the paper of TeaCache.
If you develop or use TeaCache in your projects, we would be glad to hear about it.
Models
- ConsisID supports TeaCache. Thanks @SHYuanBest.
- Ruyi-Models supports TeaCache. Thanks @cellzero.
- EasyAnimate supports TeaCache. Thanks @hkunzhe and @bubbliiiing.
ComfyUI
- ComfyUI-HunyuanVideoWrapper supports TeaCache4HunyuanVideo. Thanks @kijai, @ctf05 and @DarioFT.
- ComfyUI-TeaCacheHunyuanVideo supports TeaCache4HunyuanVideo. Thanks @facok.
- ComfyUI-TeaCache for TeaCache. Thanks @YunjieYu.
- Comfyui_TTP_Toolset supports TeaCache. Thanks @TTPlanetPig.
- ComfyUI_Patches_ll supports TeaCache. Thanks @lldacing.
- ComfyUI-TangoFlux supports TeaCache. Thanks @LucipherDev.
Text to Video
- TeaCache4Open-Sora
- TeaCache4Open-Sora-Plan
- TeaCache4Latte
- TeaCache4CogVideoX
- TeaCache4HunyuanVideo
- TeaCache4Mochi
- TeaCache4LTX-Video
- TeaCache4CogVideoX1.5
- EasyAnimate, see here.
- TeaCache4Cosmos
Image to Video
- TeaCache4ConsisID
- TeaCache4CogVideoX1.5
- Ruyi-Models, see here.
- EasyAnimate, see here.
- TeaCache4Cosmos
Video to Video
- EasyAnimate, see here.
Text to Image
- TeaCache4FLUX
- TeaCache4Lumina-T2X
Text to Audio
- TeaCache4TangoFlux
- PRs to support other models are welcome.
- If your custom model is based on, or shares a similar architecture with, a model we already support, you can often transfer TeaCache directly. For example, the rescaling coefficients fitted for CogVideoX-5B can be applied directly to CogVideoX1.5 and ConsisID, and the coefficients for FLUX can be applied directly to TangoFlux (see the sketch after this list).
- Otherwise, you can refer to these successful attempts, e.g., 1, 2.
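To make "transferring rescaling coefficients" concrete, here is a hedged sketch under assumed names: `COEFFICIENTS_COGVIDEOX_5B`, the coefficient numbers, and the threshold are illustrative placeholders, not the fitted values shipped in this repo.

```python
import numpy as np

# Illustrative placeholder coefficients. Each supported model ships a polynomial
# fitted to map the raw relative-L1 change of the timestep-modulated input to an
# estimate of the output change; the numbers below are NOT the real ones.
COEFFICIENTS_COGVIDEOX_5B = [1.0, 0.5, 0.1, 0.0, 0.0]

# Reusing them for a structurally similar model (e.g., CogVideoX1.5 or ConsisID)
# simply means handing the same polynomial to the caching logic.
rescale = np.poly1d(COEFFICIENTS_COGVIDEOX_5B)

def should_recompute(accumulated, raw_rel_l1, rel_l1_thresh=0.1):
    """Accumulate the rescaled change; recompute the full forward pass once it
    crosses the threshold, otherwise keep reusing the cached output."""
    accumulated += rescale(raw_rel_l1)
    if accumulated >= rel_l1_thresh:
        return True, 0.0          # recompute and reset the accumulator
    return False, accumulated     # keep reusing the cache
```

The only model-specific pieces are the polynomial and the threshold, which is why structurally similar models can often share them.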
This repository is built on top of VideoSys, Diffusers, Open-Sora, Open-Sora-Plan, Latte, CogVideoX, HunyuanVideo, ConsisID, FLUX, Mochi, LTX-Video, Lumina-T2X, TangoFlux and Cosmos. Thanks for their contributions!
- The majority of this project is released under the Apache 2.0 license as found in the LICENSE file.
- For VideoSys, Diffusers, Open-Sora, Open-Sora-Plan, Latte, CogVideoX, HunyuanVideo, ConsisID, FLUX, Mochi, LTX-Video, Lumina-T2X, TangoFlux and Cosmos, please follow their LICENSE.
- The service is a research preview. Please contact us at liufeng20@mails.ucas.ac.cn if you find any potential violations.
If you find TeaCache useful in your research or applications, please consider giving us a star ⭐ and citing it with the following BibTeX entry.
@article{liu2024timestep,
  title={Timestep Embedding Tells: It's Time to Cache for Video Diffusion Model},
  author={Liu, Feng and Zhang, Shiwei and Wang, Xiaofeng and Wei, Yujie and Qiu, Haonan and Zhao, Yuzhong and Zhang, Yingya and Ye, Qixiang and Wan, Fang},
  journal={arXiv preprint arXiv:2411.19108},
  year={2024}
}