Haofeng Liu, Chenshu Xu, Yifei Yang, Lihua Zeng, Shengfeng He
- [Apr 5th] v1.0.0 Release.
We recommend running our code on an NVIDIA GPU under Linux. Currently, our method requires around 14 GB of GPU memory.
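If you are unsure whether your GPU meets this requirement, the following standalone PyTorch check (not part of our codebase) reports the device's total memory:

```python
import torch

# Standalone sanity check: confirm a CUDA GPU is visible and report its
# total memory, so you know the ~14 GB requirement is met before editing.
if not torch.cuda.is_available():
    raise SystemExit("No CUDA GPU detected; an NVIDIA GPU is required.")

props = torch.cuda.get_device_properties(0)
total_gb = props.total_memory / 1024**3
print(f"GPU 0: {props.name}, {total_gb:.1f} GB total memory")
if total_gb < 14:
    print("Warning: under 14 GB of GPU memory; editing may fail with OOM.")
```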
To install the required libraries, run the following commands:
conda env create -f environment.yaml
conda activate dragnoise
To get started, run the following command in a terminal to launch the Gradio user interface:
python3 drag_ui.py
Basically, the workflow consists of the following steps (a minimal sketch of the corresponding inputs follows the list):
- Drop your input image into the left-most box.
- Input a prompt describing the image in the "prompt" field.
- Click the "Train LoRA" button to train a LoRA for the input image.
- (Optional) Draw a mask in the left-most box to specify the editable region.
- Click to place handle and target points in the middle box. You can reset all points by clicking "Undo point".
- Click the "Run" button to run our algorithm. The edited result will be displayed in the right-most box.
Code related to the Drag algorithm is under the Apache 2.0 license.
If you find our repo helpful, please consider leaving a star or citing our paper:
@misc{liu2024drag,
      title={Drag Your Noise: Interactive Point-based Editing via Diffusion Semantic Propagation},
      author={Haofeng Liu and Chenshu Xu and Yifei Yang and Lihua Zeng and Shengfeng He},
      year={2024},
      eprint={2404.01050},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
For any questions on this project, please contact liuhaofeng2022@163.com.
This work is inspired by the amazing DragGAN. We also benefit from the codebase of DragDiffusion.
- Drag Your GAN: Interactive Point-based Manipulation on the Generative Image Manifold
- DragDiffusion: Harnessing Diffusion Models for Interactive Point-based Image Editing
- DragonDiffusion: Enabling Drag-style Manipulation on Diffusion Models
- FreeDrag: Point Tracking is Not What You Need for Interactive Point-based Image Editing
- For users who have trouble loading models from Hugging Face due to network constraints, please 1) follow this link and download the model into the directory "local_pretrained_models" (a sketch of this step is shown below); 2) run "drag_ui.py" and set "Algorithm Parameters -> Base Model Config -> Diffusion Model Path" to the directory of your pretrained model.
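As a concrete example of step 1), the snippet below downloads a model snapshot into "local_pretrained_models" via the official huggingface_hub client. The repo id "runwayml/stable-diffusion-v1-5" is an assumption for illustration; substitute whichever base model the link above points to:

```python
from huggingface_hub import snapshot_download

# Download a diffusion model into local_pretrained_models/ so drag_ui.py
# can load it without live Hub access. The repo id below is an assumption;
# use the base model that the link above actually points to.
snapshot_download(
    repo_id="runwayml/stable-diffusion-v1-5",  # assumed example base model
    local_dir="local_pretrained_models/stable-diffusion-v1-5",
)
```

After the download finishes, point "Diffusion Model Path" at the local directory as described in step 2).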