Official implementation of the paper "Cycle Encoding of a StyleGAN Encoder for Improved Reconstruction and Editability" (ACM Multimedia 2022).
For prerequisites and installation, please refer to the Prerequisites and Installation sections of PTI.
Please download the pre-trained models from the following links. We assume that all auxiliary models are downloaded and saved to the directory `pretrained_models`.
| Link |
|---|
| FFHQ CycleEncoding Inversion |
Please also download the auxiliary models from e4e.
We provide two inversion methods. As described in the paper, this first method is faster than the second one. Perform the following steps:
1. `cd pivotal_tuning`
2. Edit `configs/hyperparameters.py` and set `use_saved_w_pivots = False` and `first_inv_type = 'cycle'`.
3. `sh run_pivotal_tuning.sh`
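If you prefer not to edit `configs/hyperparameters.py` by hand, the two flags can be patched from the shell. This is a hedged sketch, not part of the repo: it operates on a stand-in copy of the file so the commands run anywhere, and it assumes each flag appears in the real file on its own line.

```shell
# Stand-in configs/hyperparameters.py so the sketch is self-contained;
# in the actual repo the file already exists and these two lines are skipped.
mkdir -p configs
printf "use_saved_w_pivots = True\nfirst_inv_type = 'w'\n" > configs/hyperparameters.py

# Patch the two flags in place for the first (cycle) method.
sed -i "s/^use_saved_w_pivots = .*/use_saved_w_pivots = False/" configs/hyperparameters.py
sed -i "s/^first_inv_type = .*/first_inv_type = 'cycle'/" configs/hyperparameters.py
cat configs/hyperparameters.py
```

Note that `sed -i` as written is GNU sed; on macOS/BSD use `sed -i ''` instead.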
The second method achieves better reconstruction quality than the first one. Perform the following steps:
1. `cd refinement`
2. `sh run_regularized_refinement.sh`
3. Copy or link the directory `saved_w` to the directory `pivotal_tuning`.
4. `cd pivotal_tuning`
5. Edit `configs/hyperparameters.py` and set `use_saved_w_pivots = True`.
6. `sh run_pivotal_tuning.sh`
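The "copy or link" step can be done with a symlink, so that pivotal tuning always sees the latest refined codes without a second copy step. A hedged sketch: the directories are recreated here as empty stand-ins so the commands run anywhere, whereas in the repo `refinement/saved_w` is produced by the refinement script.

```shell
# Stand-in directories (in the repo these exist after running refinement).
mkdir -p refinement/saved_w pivotal_tuning

# Link refinement/saved_w into pivotal_tuning/ instead of copying it,
# so re-running refinement requires no extra copy.
ln -sfn "$(pwd)/refinement/saved_w" pivotal_tuning/saved_w
ls -l pivotal_tuning/
```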
To run the cycle encoding stages (W to W+, then W+ back to W), perform the following steps:

1. `cd cycle_encoding/w_to_wplus`
2. `sh run_w_to_wplus.sh`
3. `cd cycle_encoding/wplus_to_w`
4. `sh run_wplus_to_w.sh`
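The two stages must run in order: `w_to_wplus` first, then `wplus_to_w`. A hedged sketch of that sequencing, using empty stand-in runner scripts so it executes anywhere; in the repo the real `run_*.sh` scripts take their place.

```shell
# Stand-in runners (the repo ships the real ones); each just reports its stage.
mkdir -p cycle_encoding/w_to_wplus cycle_encoding/wplus_to_w
printf 'echo "w_to_wplus done"\n' > cycle_encoding/w_to_wplus/run_w_to_wplus.sh
printf 'echo "wplus_to_w done"\n' > cycle_encoding/wplus_to_w/run_wplus_to_w.sh

# Run the stages in order, each from its own directory, as in the steps above;
# the subshells keep the working directory unchanged afterwards.
( cd cycle_encoding/w_to_wplus && sh run_w_to_wplus.sh )
( cd cycle_encoding/wplus_to_w && sh run_wplus_to_w.sh )
```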
We used the scripts `calc_id_loss_parallel.py` and `calc_losses_on_images.py` from pSp for quantitative evaluation.
The code borrows heavily from PTI, with some components from e4e and pSp.
If you use this code for your research, please cite our paper:
```
@inproceedings{Xudong2022CycleEncoding,
  title={Cycle Encoding of a StyleGAN Encoder for Improved Reconstruction and Editability},
  author={Xudong Mao and Liujuan Cao and Aurele T. Gnanha and Zhenguo Yang and Qing Li and Rongrong Ji},
  booktitle={Proceedings of ACM International Conference on Multimedia},
  year={2022}
}
```