# Prepare Downstream Data

First, create the directory `./blob/data` and download all of the following datasets:

- LaPa
- CelebAMask-HQ
- AFLW-19
- IBUG300W & WFLW

The directory tree of `./blob/data` should then look like this:

```
blob/data/
│
├── LaPa/
│   ├── test/
│   ├── train/
│   └── val/
│
├── CelebAMask-HQ/
│   ├── CelebA-HQ-img/
│   ├── CelebAMask-HQ-mask-anno/
│   ├── list_eval_partition.txt
│   └── CelebA-HQ-to-CelebA-mapping.txt
│
├── AFLW-19/
│   ├── AFLWinfo_release.mat
│   └── data/
│       └── flickr/
│
├── IBUG300W/
│   ├── ibug/
│   ├── afw/
│   ├── helen/
│   ├── lfpw/
│   ├── face_landmarks_300w_train.csv
│   ├── face_landmarks_300w_valid_challenge.csv
│   └── face_landmarks_300w_valid_common.csv
│
└── WFLW/
    ├── WFLW_images/
    ├── face_landmarks_wflw_test.csv
    ├── face_landmarks_wflw_test_blur.csv
    ├── face_landmarks_wflw_test_expression.csv
    ├── face_landmarks_wflw_test_illumination.csv
    ├── face_landmarks_wflw_test_largepose.csv
    ├── face_landmarks_wflw_test_makeup.csv
    ├── face_landmarks_wflw_test_occlusion.csv
    └── face_landmarks_wflw_train.csv
```
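Before repacking, it can help to confirm the layout matches the tree above. The following is a small sanity-check sketch, not part of the repository; the paths it checks are taken directly from the tree shown here.

```python
# Hypothetical sanity check (not provided by the repo): verify that the
# expected entries from the tree above exist under ./blob/data.
from pathlib import Path

root = Path("./blob/data")
expected = [
    "LaPa/train", "LaPa/val", "LaPa/test",
    "CelebAMask-HQ/CelebA-HQ-img",
    "CelebAMask-HQ/CelebAMask-HQ-mask-anno",
    "CelebAMask-HQ/list_eval_partition.txt",
    "CelebAMask-HQ/CelebA-HQ-to-CelebA-mapping.txt",
    "AFLW-19/AFLWinfo_release.mat",
    "AFLW-19/data/flickr",
    "IBUG300W/ibug", "IBUG300W/afw", "IBUG300W/helen", "IBUG300W/lfpw",
    "WFLW/WFLW_images",
]

missing = [p for p in expected if not (root / p).exists()]
if missing:
    # List anything that still needs to be downloaded or extracted.
    print("Missing entries:", *missing, sep="\n  ")
else:
    print("All expected entries found.")
```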

Now repack all these datasets into a uniform format for efficient reading by running:

```
python -m farl.datasets.prepare ./blob/data
```

Finally, you should have the following files under `./blob/data`:

```
LaPa.train.zip
LaPa.test.zip

CelebAMaskHQ.train.zip
CelebAMaskHQ.test.zip

AFLW-19.train.zip
AFLW-19.test.zip
AFLW-19.test_frontal.zip

IBUG300W.train.zip
IBUG300W.test_common.zip
IBUG300W.test_challenging.zip

WFLW.train.zip
WFLW.test_all.zip
WFLW.test_blur.zip
WFLW.test_expression.zip
WFLW.test_illumination.zip
WFLW.test_largepose.zip
WFLW.test_makeup.zip
WFLW.test_occlusion.zip
```
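To confirm that a repacked archive was written correctly, you can peek at it with Python's standard `zipfile` module. This is only an illustration; the internal layout of each archive is defined by `farl.datasets.prepare`, so the entry names you see will depend on that script.

```python
# Illustration only: list a few entries from one of the repacked archives
# produced above (here LaPa.test.zip) to confirm it exists and is non-empty.
import zipfile

with zipfile.ZipFile("./blob/data/LaPa.test.zip") as zf:
    names = zf.namelist()
    print(f"{len(names)} entries, for example:")
    for name in names[:5]:
        print(" ", name)
```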