Computer science
Identifier
Artificial intelligence
Supervised learning
Classifier (UML)
Pattern recognition (psychology)
Extractor
Feature extraction
Artificial neural network
Machine learning
Process engineering
Engineering
Programming language
Authors
Zitao Wu, Fanggang Wang, Boxiang He
Identifiers
DOI: 10.1109/lcomm.2023.3247900
Abstract
Specific emitter identification (SEI) methods based on deep learning have recently achieved significant gains in accuracy. However, these methods require a large amount of labeled data. In this letter, contrastive learning is introduced to cope with the scarcity of labeled data; the network consists of a feature extractor, a projection head, and a classifier. We use a two-stage semi-supervised training scheme. In the first stage, the feature extractor extracts features from the received signal, and the projection head deepens the network to preserve more information in the features; both are trained on unlabeled samples with a self-supervised contrastive learning loss. In the second stage, the overall network is fine-tuned on a small amount of labeled data using an alternative loss that combines the standard cross-entropy loss with a supervised contrastive learning loss. Numerical results on the FIT/CorteXlab dataset show that, even with only tens of labeled samples, the proposed identifier attains around 90% accuracy, outperforming conventional supervised and semi-supervised identifiers.
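The abstract outlines the training procedure but gives no implementation details. The PyTorch sketch below illustrates what such a two-stage scheme could look like: stage one pretrains the feature extractor and projection head on unlabeled signals with a SimCLR-style NT-Xent loss, and stage two fine-tunes the whole network on a few labeled samples with cross-entropy plus a SupCon-style supervised contrastive loss. The network sizes, the augmentation, the temperature `tau`, the number of emitter classes, and the loss weight `lam` are illustrative assumptions, not the authors' exact choices.

```python
# Minimal sketch of a two-stage semi-supervised SEI scheme as described in the
# abstract. All architecture sizes, augmentations, and hyperparameters below
# are hypothetical placeholders, not the paper's actual configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SEINet(nn.Module):
    """Feature extractor + projection head + classifier (hypothetical sizes)."""
    def __init__(self, num_classes=16, feat_dim=128, proj_dim=64):
        super().__init__()
        # 1-D CNN over baseband samples stored as 2 channels (I and Q).
        self.extractor = nn.Sequential(
            nn.Conv1d(2, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        # Projection head: deepens the network so the contrastive loss acts on
        # a transformed space while the extractor keeps richer features.
        self.projection = nn.Sequential(
            nn.Linear(feat_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, proj_dim),
        )
        self.classifier = nn.Linear(feat_dim, num_classes)

def nt_xent(z1, z2, tau=0.5):
    """Self-supervised contrastive (NT-Xent) loss over two augmented views."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)            # (2N, d)
    sim = z @ z.t() / tau                                         # scaled cosine similarity
    n = z1.size(0)
    sim = sim.masked_fill(torch.eye(2 * n, dtype=torch.bool, device=z.device),
                          float('-inf'))                          # exclude self-pairs
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)]).to(z.device)
    return F.cross_entropy(sim, targets)                          # positive = the other view

def sup_con(z, labels, tau=0.5):
    """Supervised contrastive loss: samples sharing a label are positives."""
    z = F.normalize(z, dim=1)
    sim = z @ z.t() / tau
    self_mask = torch.eye(z.size(0), dtype=torch.bool, device=z.device)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    sim = sim.masked_fill(self_mask, float('-inf'))
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_count = pos_mask.sum(dim=1).clamp(min=1)                  # avoid division by zero
    pos_log_prob = log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1)
    return -(pos_log_prob / pos_count).mean()

def augment(x):
    # Placeholder augmentation (additive noise); a real SEI pipeline might use
    # random time shifts, phase rotations, or cropping instead.
    return x + 0.01 * torch.randn_like(x)

# --- Stage 1: self-supervised pretraining on unlabeled signals ---------------
net = SEINet()
opt1 = torch.optim.Adam(list(net.extractor.parameters())
                        + list(net.projection.parameters()), lr=1e-3)
unlabeled = torch.randn(32, 2, 1024)                # dummy batch of I/Q frames
z1 = net.projection(net.extractor(augment(unlabeled)))
z2 = net.projection(net.extractor(augment(unlabeled)))
loss_stage1 = nt_xent(z1, z2)
loss_stage1.backward()
opt1.step(); opt1.zero_grad()

# --- Stage 2: fine-tune the whole network on a small labeled set -------------
opt2 = torch.optim.Adam(net.parameters(), lr=1e-4)
x, y = torch.randn(16, 2, 1024), torch.randint(0, 16, (16,))  # dummy labeled batch
feats = net.extractor(x)
lam = 0.5                                           # hypothetical loss weight
loss_stage2 = (F.cross_entropy(net.classifier(feats), y)
               + lam * sup_con(net.projection(feats), y))
loss_stage2.backward()
opt2.step(); opt2.zero_grad()
```

In practice the two stages would each run over many mini-batches; the single forward/backward pass per stage here is only meant to show how the three modules and the two losses fit together.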