Low-density parity-check (LDPC) code
Decoding methods
Computer science
Algorithm
Robustness (evolution)
Artificial neural network
Code (set theory)
Bit error rate
Sequential decoding
Artificial intelligence
Block code
Biochemistry
Gene
Set (abstract data type)
Chemistry
Programming language
Authors
Qing Wang, Qing Liu, Shunfu Wang, Leian Chen, Haoyu Fang, Luyong Chen, Yuzhang Guo, Zhiqiang Wu
Identifier
DOI:10.1109/tccn.2022.3212438
Abstract
The success of deep learning has encouraged its application to decoding error-correcting codes, e.g., LDPC decoding. In this paper, we propose a model-driven deep learning method for normalized min-sum (NMS) low-density parity-check (LDPC) decoding, namely the neural NMS (NNMS) LDPC decoding network. By unfolding the iterative decoding process between check nodes (CNs) and variable nodes (VNs) into a feed-forward propagation network, we harvest the benefits of both model-driven deep learning and the conventional NMS LDPC decoding method. In addition, we propose a shared-parameter NNMS with LeakyReLU and a 12-bit quantizer (SNNMS-LR-Q), which reduces the number of required multipliers and correction factors through parameter sharing and increases the nonlinear fitting ability by adding LeakyReLU. The 12-bit quantizer further improves the decoder's robustness. Thorough experiments with different code lengths, code rates, channel conditions, and check matrices demonstrate the advantages and robustness of the proposed networks. The BER performance of the proposed NNMS is 1.5 dB better than that of the NMS, using fewer iterations. Meanwhile, the SNNMS-LR-Q outperforms the NNMS in both BER performance and efficiency.
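The abstract builds on the conventional normalized min-sum (NMS) message-passing iteration that the NNMS network unfolds. The following is a minimal sketch of that classical NMS decoder with a flooding schedule, not the paper's neural network; the function name `nms_decode`, the fixed correction factor `alpha`, and the toy (7,4) Hamming parity-check matrix in the usage example are illustrative assumptions.

```python
import numpy as np

def nms_decode(H, llr, alpha=0.8, max_iter=20):
    """Classical normalized min-sum LDPC decoding (flooding schedule).

    H        : binary parity-check matrix, shape (m, n)
    llr      : channel log-likelihood ratios (positive favors bit 0), length n
    alpha    : normalization (correction) factor applied to check-node messages
    max_iter : maximum number of decoding iterations
    """
    m, n = H.shape
    msg_cv = np.zeros((m, n))  # check-to-variable messages
    hard = np.zeros(n, dtype=int)
    for _ in range(max_iter):
        # Variable-node update: total belief minus the incoming edge message.
        total = llr + msg_cv.sum(axis=0)
        msg_vc = np.where(H == 1, total - msg_cv, 0.0)
        # Check-node update: sign product and scaled minimum magnitude
        # over all other edges of the check (the "normalized min-sum" rule).
        for i in range(m):
            idx = np.flatnonzero(H[i])
            for j in idx:
                others = idx[idx != j]
                sign = np.prod(np.sign(msg_vc[i, others]))
                mag = np.min(np.abs(msg_vc[i, others]))
                msg_cv[i, j] = alpha * sign * mag
        # Hard decision and syndrome check for early termination.
        hard = (llr + msg_cv.sum(axis=0) < 0).astype(int)
        if not np.any(H @ hard % 2):
            break
    return hard

# Usage: all-zero codeword of a (7,4) Hamming code with one corrupted LLR.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
llr = np.array([-1.5, 2.0, 2.0, 2.0, 2.0, 2.0, 2.0])  # bit 0 flipped by noise
decoded = nms_decode(H, llr)  # recovers the all-zero codeword
```

The NNMS in the paper replaces the single scalar `alpha` with learnable per-edge (or, in the SNNMS variant, shared) correction factors and trains them end-to-end through the unfolded iterations.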