Source code for the paper "An Efficient Deep Learning Model for Automatic Modulation Recognition Based on Parameter Estimation and Transformation", published in IEEE Communications Letters.
The article is available here: An Efficient Deep Learning Model for Automatic Modulation Recognition Based on Parameter Estimation and Transformation
If you have any questions, please contact: zhangxx8023@gmail.com
Automatic modulation recognition (AMR) is a promising technology for intelligent communication receivers to detect signal modulation schemes. Recently, the emerging deep learning (DL) research has facilitated high-performance DL-AMR approaches. However, most DL-AMR models focus only on recognition accuracy, leading to huge model sizes and high computational complexity, while some lightweight and low-complexity models struggle to meet the accuracy requirements. This letter proposes an efficient DL-AMR model based on phase parameter estimation and transformation, with a convolutional neural network (CNN) and gated recurrent unit (GRU) as the feature extraction layers. It achieves recognition accuracy equivalent to existing state-of-the-art models while reducing the parameter volume by more than a third. Meanwhile, our model is more competitive in training and test time than benchmark models with similar recognition accuracy. Moreover, we further propose to compress our model by pruning, which maintains recognition accuracy above 90% with fewer than 1/8 of the parameters of state-of-the-art models.
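To illustrate the core idea, here is a minimal, framework-free sketch of phase parameter estimation and transformation: estimate a carrier-phase offset from the raw I/Q samples, then derotate the signal before it reaches the CNN-GRU feature extractor. Note that the paper learns the estimate with a small trainable subnetwork inside the Keras model; the moment-based estimate and the function names below (`rotate`, `estimate_phase`, `transform`) are illustrative stand-ins, not the paper's code.

```python
import math

def rotate(iq, theta):
    """Rotate each (I, Q) sample by +theta radians."""
    c, s = math.cos(theta), math.sin(theta)
    return [(i * c - q * s, i * s + q * c) for i, q in iq]

def estimate_phase(iq):
    """Toy phase estimate: angle of the mean sample vector.
    (The paper estimates this parameter with a small network.)"""
    mi = sum(i for i, _ in iq) / len(iq)
    mq = sum(q for _, q in iq) / len(iq)
    return math.atan2(mq, mi)

def transform(iq):
    """Derotate the signal by its estimated phase offset."""
    return rotate(iq, -estimate_phase(iq))
```

Feeding the derotated I/Q sequence to the feature extractor removes a nuisance parameter the network would otherwise have to learn to be invariant to, which is what lets a smaller model reach comparable accuracy.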
If this paper is helpful to your research, please cite:
@article{zhang2021efficient,
author={Zhang, Fuxin and Luo, Chunbo and Xu, Jialang and Luo, Yang},
journal={IEEE Communications Letters},
title={An Efficient Deep Learning Model for Automatic Modulation Recognition Based on Parameter Estimation and Transformation},
year={2021},
volume={25},
number={10},
pages={3287-3290},
doi={10.1109/LCOMM.2021.3102656}
}
【Model Structure】
【RML2016.10a】
【RML2016.10b】
【RML2018.01a】
【Model comparison on three datasets (A: RML2016.10a, B: RML2016.10b, C: RML2018.01a)】:
Datasets: RML2016.10a, RML2016.10b, and RML2018.01a
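The abstract also mentions compressing the model by pruning. A minimal sketch of magnitude-based weight pruning is shown below on a flat weight list standing in for a layer's weights; the function name `prune_by_magnitude` and the thresholding details are illustrative assumptions, not the paper's implementation.

```python
def prune_by_magnitude(weights, sparsity):
    """Zero out the `sparsity` fraction of weights with smallest magnitude.

    Ties at the threshold may prune slightly more than the requested
    fraction; a real implementation would track an exact count or mask.
    """
    k = int(len(weights) * sparsity)  # number of weights to zero
    if k == 0:
        return list(weights)
    threshold = sorted(abs(w) for w in weights)[k - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]
```

In a Keras model, the same idea would be applied per layer via a binary mask on the weight tensors, with fine-tuning afterwards to recover accuracy.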
Equation (2) should be:
(This formula had a writing error in the original paper and has been corrected in the arXiv version.)
This model is implemented in Keras, and the environment settings are:
- Python 3.6.10
- TensorFlow-gpu 1.14.0
- Keras-gpu 2.2.4
Thanks to leena201818 and wzjialang for their great work!