Useful Tools
The experiments in this repository were run both on Google Colab and on a local desktop machine.
We also use VS Code's Remote Development feature together with Docker to build the environment for the experiments in the paper.
Concretely, the experiments in this repository run inside a Docker container on a remote GPU machine, and we access the remote terminal from a local VS Code instance. If you want to run your experiments in the same kind of environment, see the tips/vscode/RemoteDevelopment folder.
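As a rough sketch of this kind of setup (the actual configuration lives in the tips/vscode/RemoteDevelopment folder; the image name, mount paths, and options below are placeholders, not the repository's real config):

```jsonc
// .devcontainer/devcontainer.json (hypothetical example)
{
  "name": "gpu-experiments",
  // Placeholder image; swap in the CUDA/PyTorch image your experiments need.
  "image": "pytorch/pytorch:latest",
  // Pass the host GPUs through to the container.
  "runArgs": ["--gpus", "all"],
  // Mount the repository checkout into the container.
  "workspaceMount": "source=${localWorkspaceFolder},target=/workspace,type=bind",
  "workspaceFolder": "/workspace"
}
```

With the Dev Containers extension installed, "Reopen in Container" builds and attaches to this environment; for a remote GPU host, connect first via Remote - SSH and then reopen in the container.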
- Paper Reading List
- [arXiv:1710.10196] Progressive Growing of GANs for Improved Quality, Stability, and Variation
- [arXiv:1805.08318] Self-Attention Generative Adversarial Networks
- [arXiv:1809.11096] Large Scale GAN Training for High Fidelity Natural Image Synthesis
- [arXiv:1905.01164] SinGAN: Learning a Generative Model from a Single Natural Image
- [arXiv:1912.11035] CNN-generated images are surprisingly easy to spot... for now
- [arXiv:2002.10964] Freeze the Discriminator: a Simple Baseline for Fine-Tuning GANs
- [arXiv:2002.12655] A U-Net Based Discriminator for Generative Adversarial Networks
- [arXiv:2004.02088] Feature Quantization Improves GAN Training
- [arXiv:2003.02567] GANwriting: Content-Conditioned Generation of Styled Handwritten Word Images
- [arXiv:2004.03355] Inclusive GAN: Improving Data and Minority Coverage in Generative Models
- [arXiv:2004.05472] Autoencoding Generative Adversarial Networks
- [arXiv:2006.12681] Contrastive Generative Adversarial Networks
- [arXiv:2006.14567] Taming GANs with Lookahead
- [arXiv:2007.06600] Closed-Form Factorization of Latent Semantics in GANs
- [arXiv:1810.01365] On Self Modulation for Generative Adversarial Networks
- [arXiv:1811.11212] Self-Supervised GANs via Auxiliary Rotation Loss
- [arXiv:2006.10728] Diverse Image Generation via Self-Conditioned GANs
- [arXiv:2010.09893] LT-GAN: Self-Supervised GAN with Latent Transformation Detection
- [arXiv:1611.07004] Image-to-Image Translation with Conditional Adversarial Networks
- [arXiv:1711.09020] StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation
- [arXiv:1711.11585] High-Resolution Image Synthesis and Semantic Manipulation with Conditional GANs
- [arXiv:1812.10889] InstaGAN: Instance-aware Image-to-Image Translation
- [arXiv:1907.04312] Positional Normalization
- [arXiv:1910.05253] Adversarial Colorization Of Icons Based On Structure And Color Conditions
- [arXiv:2002.05638] GANILLA: Generative Adversarial Networks for Image to Illustration Translation
- [arXiv:2003.00187] Augmented Cyclic Consistency Regularization for Unpaired Image-to-Image Translation
- [arXiv:2003.02683] Image Generation from Freehand Scene Sketches
- [arXiv:2003.00273] Reusing Discriminators for Encoding: Towards Unsupervised Image-to-Image Translation
- [arXiv:2003.04858] Unpaired Image-to-Image Translation using Adversarial Consistency Loss
- [arXiv:2003.07101] Synthesizing human-like sketches from natural images using a conditional convolutional decoder
- [arXiv:2007.05471] Geometric Style Transfer
- [arXiv:1612.03242] StackGAN: Text to Photo-realistic Image Synthesis with Stacked Generative Adversarial Networks
- [arXiv:1802.09178] Photographic Text-to-Image Synthesis with a Hierarchically-nested Adversarial Network
- [arXiv:1904.01480] Semantics Disentangling for Text-to-Image Generation
- [arXiv:2003.12137] Cycle Text-To-Image GAN with BERT
- [arXiv:2004.11437] Efficient Neural Architecture for Text-to-Image Synthesis
- [arXiv:2005.12444] SegAttnGAN: Text to Image Generation with Segmentation Attention
- [arXiv:2005.13192] TIME: Text and Image Mutual-Translation Adversarial Networks
- [arXiv:2008.05865] DF-GAN: Deep Fusion Generative Adversarial Networks for Text-to-Image Synthesis
- [arXiv:2008.08976] Improving Text to Image Generation using Mode-seeking Function
- [arXiv:2003.02365] Creating High Resolution Images with a Latent Adversarial Generator
- [arXiv:2004.00448] Rethinking Data Augmentation for Image Super-resolution: A Comprehensive Analysis and a New Strategy
- [arXiv:1903.05628] Mode Seeking Generative Adversarial Networks for Diverse Image Synthesis
- [arXiv:1904.12848] Unsupervised Data Augmentation for Consistency Training
- [arXiv:1910.12027] Consistency Regularization for Generative Adversarial Networks
- [arXiv:2002.04724] Improved Consistency Regularization for GANs
- [arXiv:2006.02595] Image Augmentations for GAN Training
- [arXiv:2006.05338] Towards Good Practices for Data Augmentation in GAN Training
- [arXiv:2006.06676] Training Generative Adversarial Networks with Limited Data
- [arXiv:2006.10738] Differentiable Augmentation for Data-Efficient GAN Training
- [arXiv:2003.08936] GAN Compression: Efficient Architectures for Interactive Conditional GANs
- [arXiv:2006.08198] AutoGAN-Distiller: Searching to Compress Generative Adversarial Networks
- [arXiv:2009.13829] TinyGAN: Distilling BigGAN for Conditional Image Generation
- [arXiv:1901.04596] AET vs. AED: Unsupervised Representation Learning by Auto-Encoding Transformations rather than Data
- [arXiv:1907.08610] Lookahead Optimizer: k steps forward, 1 step back
- [arXiv:1911.05722] Momentum Contrast for Unsupervised Visual Representation Learning
- [arXiv:1911.09665] Adversarial Examples Improve Image Recognition
- [arXiv:2004.11362] Supervised Contrastive Learning
- [arXiv:2003.00152] Training BatchNorm and Only BatchNorm: On the Expressive Power of Random Features in CNNs
- [arXiv:2010.05981] Shape-Texture Debiased Neural Network Training
- [arXiv:1811.12231] ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness
- [arXiv:1906.05909] Stand-Alone Self-Attention in Vision Models
- [arXiv:1911.08432] Defective Convolutional Layers Learn Robust CNNs
- [arXiv:2003.01367] Curriculum By Texture
- [arXiv:2004.13587] Do We Need Fully Connected Output Layers in Convolutional Networks?
- [arXiv:2004.13621] Exploring Self-attention for Image Recognition
- [arXiv:2006.03677] Visual Transformers: Token-based Image Representation and Processing for Computer Vision
- [arXiv:1905.04899] CutMix: Regularization Strategy to Train Strong Classifiers with Localizable Features
- [arXiv:1909.09148] Data Augmentation Revisited: Rethinking the Distribution Gap between Clean and Augmented Data
- [arXiv:1909.12220] Implicit Semantic Data Augmentation for Deep Networks
- [arXiv:2002.11102] On Feature Normalization and Data Augmentation
- [arXiv:2003.05176] Equalization Loss for Long-Tailed Object Recognition
- [arXiv:2004.08955] ResNeSt: Split-Attention Networks
- [arXiv:2002.05709] A Simple Framework for Contrastive Learning of Visual Representations
- [arXiv:2003.04297] Improved Baselines with Momentum Contrastive Learning
- [arXiv:2006.07733] Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning
- [arXiv:2003.07845] Rethinking Batch Normalization in Transformers
- [arXiv:2002.11794] Train Large, Then Compress: Rethinking Model Size for Efficient Training and Inference of Transformers
- [arXiv:2004.02984] MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices
- [arXiv:2004.11886] Lite Transformer with Long-Short Range Attention
- [arXiv:2005.00743] Synthesizer: Rethinking Self-Attention in Transformer Models