| Documentation | Leaderboard | Paper (Coming Soon) | Twitter/X (Coming Soon) | Developer Slack |
Latest News 🔥
- [2025/01] We have fixed some bugs and released a new version of DD-Ranking. Please update your package via `pip install ddranking --upgrade`.
- [2025/01] Our PyPI package is officially released! Users can now install DD-Ranking via `pip install ddranking`.
- [2024/12] We officially released DD-Ranking! DD-Ranking provides a new benchmark that decouples the impacts of knowledge distillation and data augmentation.
Dataset Distillation (DD) aims to condense a large dataset into a much smaller one, such that a model trained on it achieves performance comparable to training on the full dataset. DD has gained extensive attention since it was proposed. Building on foundational methods such as DC, DM, and MTT, various works have pushed this area to a new standard with their novel designs.
Notably, more and more methods are transitioning from "hard labels" to "soft labels" in dataset distillation, especially during evaluation. Hard labels are categorical and have the same format as the labels in the real dataset. Soft labels are the outputs of a pre-trained teacher model. Recently, Deng et al. pointed out that "a label is worth a thousand images" and showed analytically that soft labels are extremely useful for improving accuracy.
However, since using soft labels is essentially a form of knowledge distillation, we find that applying the same evaluation method to randomly selected data also improves test accuracy significantly (see the figure above).
This makes us wonder: Can the test accuracy of the model trained on distilled data reflect the real informativeness of the distilled data?
Additionally, we have identified three aspects in which using test accuracy alone to demonstrate performance is unfair:
- Results of using hard and soft labels are not directly comparable since soft labels introduce teacher knowledge.
- Strategies for using soft labels are diverse. For instance, different objective functions are used during evaluation, such as soft cross-entropy and Kullback–Leibler divergence. Also, one image may be mapped to one or multiple soft labels.
- Different data augmentations are used during evaluation.
Motivated by this, we propose DD-Ranking, a new benchmark for DD evaluation. DD-Ranking provides a fair evaluation scheme for DD methods that can decouple the impacts from knowledge distillation and data augmentation to reflect the real informativeness of the distilled data.
DD-Ranking (DD, *i.e.*, Dataset Distillation) is an integrated and easy-to-use benchmark for dataset distillation. It aims to provide a fair evaluation scheme for DD methods that can decouple the impacts from knowledge distillation and data augmentation to reflect the real informativeness of the distilled data.
Benchmark
Revisit the original goal of dataset distillation:
The idea is to synthesize a small number of data points that do not need to come from the correct data distribution, but will, when given to the learning algorithm as training data, approximate the model trained on the original data. (Wang et al., 2020)
The evaluation method for DD-Ranking is grounded in the essence of dataset distillation, aiming to better reflect the informativeness of the synthesized data by assessing the following two aspects:
- The degree to which the real dataset is recovered under hard labels (hard label recovery): $\text{HLR} = \text{Acc.}_{\text{real-hard}} - \text{Acc.}_{\text{syn-hard}}$.
- The improvement over random selection when using personalized evaluation methods (improvement over random): $\text{IOR} = \text{Acc.}_{\text{syn-any}} - \text{Acc.}_{\text{rdm-any}}$.

$\text{Acc.}$ is the accuracy of models trained on different samples. Samples' marks are as follows:

- $\text{real-hard}$: Real dataset with hard labels;
- $\text{syn-hard}$: Synthetic dataset with hard labels;
- $\text{syn-any}$: Synthetic dataset with personalized evaluation methods (hard or soft labels);
- $\text{rdm-any}$: Randomly selected dataset (under the same compression ratio) with the same personalized evaluation methods.
DD-Ranking uses a weighted sum of $\text{IOR}$ and $-\text{HLR}$ to rank different methods. Formally, the DD-Ranking Score (DDRS) is defined as

$$\text{DDRS} = w \cdot \text{IOR} - (1 - w) \cdot \text{HLR}, \quad w \in [0, 1].$$

By default, we set $w = 0.5$, weighting IOR and HLR equally.
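As a concrete illustration, the sketch below combines the four accuracies into HLR, IOR, and the weighted score. It is a minimal standalone example; the function name and the numeric accuracies are hypothetical and not part of the `ddranking` API.

```python
# Hypothetical helper illustrating HLR, IOR, and the weighted ranking score.
def dd_ranking_score(acc_real_hard, acc_syn_hard, acc_syn_any, acc_rdm_any, w=0.5):
    """Combine the four accuracies (fractions in [0, 1]) into HLR, IOR, and DDRS."""
    hlr = acc_real_hard - acc_syn_hard  # hard label recovery (lower is better)
    ior = acc_syn_any - acc_rdm_any     # improvement over random (higher is better)
    ddrs = w * ior - (1 - w) * hlr      # weighted sum used for ranking
    return {"HLR": hlr, "IOR": ior, "DDRS": ddrs}

# Placeholder accuracies, not real results.
print(dd_ranking_score(acc_real_hard=0.85, acc_syn_hard=0.55,
                       acc_syn_any=0.62, acc_rdm_any=0.48))
```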
DD-Ranking is integrated with:
- Multiple strategies of using soft labels in existing works;
- Commonly used data augmentation methods in existing works;
- Commonly used model architectures in existing works.
DD-Ranking has the following features:
- Fair Evaluation: DD-Ranking provides a fair evaluation scheme for DD methods that can decouple the impacts from knowledge distillation and data augmentation to reflect the real informativeness of the distilled data.
- Easy-to-use: DD-Ranking provides a unified interface for dataset distillation evaluation.
- Extensible: DD-Ranking supports various datasets and models.
- Customizable: DD-Ranking supports various data augmentations and soft label strategies.
DD-Ranking currently includes the following datasets and methods (categorized by hard/soft label). Our replication of the following baselines can be found at the methods branch. Evaluation results can be found in the leaderboard and evaluation configurations can be found at the eval branch.
| Supported Dataset | Evaluated Hard Label Methods | Evaluated Soft Label Methods |
|---|---|---|
| CIFAR-10 | DC | DATM |
| CIFAR-100 | DSA | SRe2L |
| TinyImageNet | DM | RDED |
| | MTT | D4M |
Install DD-Ranking with pip or from source:

From pip:

```bash
pip install ddranking
```

From source:

```bash
python setup.py install
```
Below is a step-by-step guide on how to use our `dd_ranking`. This demo is based on soft labels (source code can be found in `demo_soft.py`). You can find the hard label demo in `demo_hard.py`.
Step 1: Initialize a soft-label metric evaluator object. Config files are recommended for users to specify hyper-parameters. Sample config files are provided here.
```python
from ddranking.metrics import SoftLabelEvaluator
from ddranking.config import Config

config = Config.from_file("./configs/Demo_Soft_Label.yaml")
soft_label_metric_calc = SoftLabelEvaluator(config)
```
You can also pass keyword arguments.
device = "cuda"
method_name = "DATM" # Specify your method name
ipc = 10 # Specify your IPC
dataset = "CIFAR10" # Specify your dataset name
syn_data_dir = "./data/CIFAR10/IPC10/" # Specify your synthetic data path
real_data_dir = "./datasets" # Specify your dataset path
model_name = "ConvNet-3" # Specify your model name
teacher_dir = "./teacher_models" # Specify your path to teacher model chcekpoints
im_size = (32, 32) # Specify your image size
dsa_params = { # Specify your data augmentation parameters
"prob_flip": 0.5,
"ratio_rotate": 15.0,
"saturation": 2.0,
"brightness": 1.0,
"contrast": 0.5,
"ratio_scale": 1.2,
"ratio_crop_pad": 0.125,
"ratio_cutout": 0.5
}
save_path = f"./results/{dataset}/{model_name}/IPC{ipc}/dm_hard_scores.csv"
""" We only list arguments that usually need specifying"""
soft_label_metric_calc = SoftLabelEvaluator(
dataset=dataset,
real_data_path=real_data_dir,
ipc=ipc,
model_name=model_name,
soft_label_criterion='sce', # Use Soft Cross Entropy Loss
soft_label_mode='S', # Use one-to-one image to soft label mapping
data_aug_func='dsa', # Use DSA data augmentation
aug_params=dsa_params, # Specify dsa parameters
im_size=im_size,
stu_use_torchvision=False,
tea_use_torchvision=False,
teacher_dir='./teacher_models',
device=device,
save_path=save_path
)
For a detailed explanation of the hyper-parameters, please refer to our documentation.
Step 2: Load your synthetic data, labels (if any), and learning rate (if any).
```python
import torch

syn_images = torch.load('/your/path/to/syn/images.pt')
# You must specify your soft labels if your soft label mode is 'S'
soft_labels = torch.load('/your/path/to/syn/labels.pt')
syn_lr = torch.load('/your/path/to/syn/lr.pt')
```
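If your distillation pipeline produces these objects in memory, the sketch below shows one way to serialize them into the format loaded above, using plain `torch.save`. The tensor shapes and output paths are illustrative assumptions for the CIFAR-10, IPC-10 setting of this demo, not requirements of `ddranking`.

```python
import torch

# Placeholder tensors standing in for the outputs of your distillation method
# (10 classes x IPC 10 for CIFAR-10); replace them with your actual data.
syn_images = torch.randn(100, 3, 32, 32)
soft_labels = torch.softmax(torch.randn(100, 10), dim=1)
syn_lr = torch.tensor(0.01)

torch.save(syn_images, './data/CIFAR10/IPC10/images.pt')
torch.save(soft_labels, './data/CIFAR10/IPC10/labels.pt')
torch.save(syn_lr, './data/CIFAR10/IPC10/lr.pt')
```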
Step 3: Compute the metric.
```python
metric = soft_label_metric_calc.compute_metrics(image_tensor=syn_images, soft_labels=soft_labels, syn_lr=syn_lr)

# Alternatively, you can specify the image folder path to compute the metric
metric = soft_label_metric_calc.compute_metrics(image_path='./your/path/to/syn/images', soft_labels=soft_labels, syn_lr=syn_lr)
```
The following results will be returned to you:

- `HLR mean`: The mean of hard label recovery over `num_eval` runs.
- `HLR std`: The standard deviation of hard label recovery over `num_eval` runs.
- `IOR mean`: The mean of improvement over random over `num_eval` runs.
- `IOR std`: The standard deviation of improvement over random over `num_eval` runs.
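As a rough sketch of how these statistics might be consumed downstream, the snippet below reports them and combines the means into the weighted score described earlier. The dictionary keys and values are assumptions for illustration; check the actual return value of `compute_metrics` in your installed version.

```python
# Hypothetical post-processing of the returned statistics; the keys and the
# numbers below are placeholders, not real evaluation results.
metric = {"HLR mean": 0.21, "HLR std": 0.01, "IOR mean": 0.05, "IOR std": 0.02}

w = 0.5  # weight between IOR and -HLR
score = w * metric["IOR mean"] - (1 - w) * metric["HLR mean"]
print(f"HLR: {metric['HLR mean']:.3f} +/- {metric['HLR std']:.3f}")
print(f"IOR: {metric['IOR mean']:.3f} +/- {metric['IOR std']:.3f}")
print(f"Weighted score (w={w}): {score:.3f}")
```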
Check out our documentation to learn more.
- Evaluation results on ImageNet subsets.
- More baseline methods.
- DD-Ranking scores that decouple the impacts from data augmentation.
Feel free to submit your results to update the DD-Ranking leaderboard. We welcome and value any contributions and collaborations. Please check out CONTRIBUTING.md for how to get involved.
* Project lead, † Corresponding author
- Zekai Li* (National University of Singapore)
- Xinhao Zhong (National University of Singapore)
- Zhiyuan Liang (University of Science and Technology of China)
- Yuhao Zhou (Sichuan University)
- Mingjia Shi (Sichuan University)
- Ziqiao Wang (National University of Singapore)
- Wangbo Zhao (National University of Singapore)
- Xuanlei Zhao (National University of Singapore)
- Haonan Wang (National University of Singapore)
- Ziheng Qin (National University of Singapore)
- Kai Wang† (National University of Singapore)
- Dai Liu (Technical University of Munich)
- Kaipeng Zhang (Shanghai AI Lab)
- Yuzhang Shang (University of Illinois at Chicago)
- Tianyi Zhou (A*STAR)
- Zheng Zhu (GigaAI)
- Kun Wang (University of Science and Technology of China)
- Guang Li (Hokkaido University)
- Junhao Zhang (National University of Singapore)
- Jiawei Liu (National University of Singapore)
- Lingjuan Lyu (Sony)
- Jiancheng Lv (Sichuan University)
- Yaochu Jin (Westlake University)
- Mike Shou (National University of Singapore)
- Angela Yao (National University of Singapore)
- Xavier Bresson (National University of Singapore)
- Tat-Seng Chua (National University of Singapore)
- Xindi Wu (Princeton University)
- Justin Cui (UC Los Angeles)
- George Cazenavette (Massachusetts Institute of Technology)
- Yan Yan (University of Illinois at Chicago)
- Tianlong Chen (UNC Chapel Hill)
- Zhangyang Wang (UT Austin)
- Konstantinos N. Plataniotis (University of Toronto)
- Bo Zhao (Shanghai Jiao Tong University)
- Manolis Kellis (Massachusetts Institute of Technology)
- Yang You (National University of Singapore)
DD-Ranking is released under the MIT License. See LICENSE for more details.