This branch contains the code for training ReResNet. It is a fork of mmclassification with minor modifications; the specific base version is commit 4e6875d.
Model | Group | Top-1 (%) | Top-5 (%) | Download |
---|---|---|---|---|
ReResNet-50 | C8 | 71.20 | 90.28 | raw \| publish \| log |
ReResNet-101 | C8 | 74.92 | 92.22 | raw \| publish \| log |
Note:
- Alternative download link: baiduyun, with extraction code ABCD.
- The raw checkpoint is used to test accuracy on ImageNet. The publish model is used for downstream tasks, e.g., object detection. We convert the raw model to the publish model with tools/publish_model.py.
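Publishing a checkpoint is conceptually simple: training-only state (e.g., the optimizer) is stripped, and the filename is stamped with a short content hash so it identifies its payload. The sketch below illustrates that idea only; it is not the actual tools/publish_model.py (which works on PyTorch checkpoints via torch.save/torch.load), and the checkpoint layout here is a dummy:

```python
import hashlib
import pickle
import tempfile
from pathlib import Path

def publish_checkpoint(in_file: str, out_dir: str) -> Path:
    """Drop training-only state and stamp the filename with a content hash.

    Illustrative sketch only: real mmclassification checkpoints are handled
    with torch.save/torch.load; plain pickle keeps this dependency-free.
    """
    with open(in_file, "rb") as f:
        ckpt = pickle.load(f)
    ckpt.pop("optimizer", None)  # downstream tasks only need the weights
    payload = pickle.dumps(ckpt)
    digest = hashlib.sha256(payload).hexdigest()[:8]
    out_file = Path(out_dir) / f"{Path(in_file).stem}-{digest}.pth"
    out_file.write_bytes(payload)
    return out_file

# Demo with a dummy checkpoint in a temporary directory.
tmp = Path(tempfile.mkdtemp())
raw = tmp / "epoch_100.pth"
raw.write_bytes(pickle.dumps(
    {"state_dict": {"conv1.weight": [0.1, 0.2]}, "optimizer": {"lr": 0.1}}
))
published = publish_checkpoint(str(raw), str(tmp))
published_ckpt = pickle.loads(published.read_bytes())
```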
Please refer to install.md for installation and dataset preparation.
Please see getting_started.md for the basic usage of MMClassification. There are also tutorials for fine-tuning models, adding new datasets, designing data pipelines, and adding new modules.
We can export ReResNet to a standard ResNet with tools/convert_re_resnet_to_torch.py.
First, download the checkpoint from here and put it at work_dirs/re_resnet50_c8_batch256/epoch_100.pth.
Then, convert the raw checkpoint to a standard checkpoint for ResNet.
```shell
python tools/convert_re_resnet_to_torch.py configs/re_resnet/re_resnet50_c8_batch256.py \
    work_dirs/re_resnet50_c8_batch256/epoch_100.pth \
    work_dirs/re_resnet50_c8_batch256/epoch_100_torch.pth
```
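At its core, such a conversion maps the equivariant model's weights onto the key names a standard ResNet state dict expects and drops buffers that only exist in the equivariant model. The sketch below shows that key-remapping idea on plain dictionaries; all key names and suffixes here are hypothetical, and the real script operates on PyTorch state dicts using the equivariant layers' own export machinery:

```python
def convert_state_dict(src: dict) -> dict:
    """Map equivariant-model keys to standard ResNet keys.

    Hypothetical naming convention, for illustration only:
    - buffers ending in the assumed suffixes below are equivariance-only
      and are dropped
    - exported plain-tensor weights are assumed to live under
      '.exported_weight' and are renamed to '.weight'
    """
    drop_suffixes = (".filter", ".expanded_bias")  # assumed buffer names
    dst = {}
    for key, value in src.items():
        if key.endswith(drop_suffixes):
            continue
        dst[key.replace(".exported_weight", ".weight")] = value
    return dst

src = {
    "conv1.exported_weight": [1.0],
    "conv1.filter": [9.9],        # equivariance-only buffer, dropped
    "bn1.running_mean": [0.0],    # shared key, copied through unchanged
}
converted = convert_state_dict(src)
```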
Now, we can test the accuracy with a standard ResNet.
```shell
bash tools/dist_test.sh configs/imagenet/resnet50_batch256.py \
    work_dirs/re_resnet50_c8_batch256/epoch_100_torch.pth 8
```
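The Top-1/Top-5 numbers reported in the table above are standard top-k accuracies. As a reference for what is being computed (a self-contained sketch, not mmclassification's metric code), a sample counts as correct under top-k if its true label is among the k highest-scoring classes:

```python
def topk_accuracy(logits, labels, k=1):
    """Fraction of samples whose true label is among the k highest scores."""
    hits = 0
    for scores, label in zip(logits, labels):
        # indices of the k largest scores, highest first
        topk = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]
        hits += label in topk
    return hits / len(labels)

logits = [
    [0.1, 0.7, 0.2],  # highest score on class 1
    [0.5, 0.3, 0.2],  # highest score on class 0
]
labels = [1, 2]
top1 = topk_accuracy(logits, labels, k=1)  # only the first sample is a hit
top3 = topk_accuracy(logits, labels, k=3)  # every label is within the top 3
```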
If you use this toolbox or benchmark in your research, please cite:
```bibtex
@misc{mmclassification,
  author = {Yang, Lei and Li, Xiaojie and Lou, Zan and Yang, Mingmin and
            Wang, Fei and Qian, Chen and Chen, Kai and Lin, Dahua},
  title = {{MMClassification}},
  howpublished = {\url{https://github.com/open-mmlab/mmclassification}},
  year = {2020}
}

@inproceedings{han2021ReDet,
  title = {ReDet: A Rotation-equivariant Detector for Aerial Object Detection},
  author = {Han, Jiaming and Ding, Jian and Xue, Nan and Xia, Gui-Song},
  booktitle = {Proc. IEEE Conf. Computer Vision and Pattern Recognition (CVPR)},
  year = {2021}
}
```