Computer science
Encoder
Artificial neural network
Channel (broadcasting)
Telecommunications link
Multiplexing
Orthogonal frequency-division multiplexing
Residual
Exploit
Focus (optics)
Decoding methods
Block (permutation group theory)
Wireless
Computer network
Algorithm
Artificial intelligence
Telecommunications
Physics
Geometry
Computer security
Mathematics
Optics
Operating system
Authors
Dianxin Luan, John F. Thompson
Identifier
DOI:10.1109/vtc2022-spring54318.2022.9860803
Abstract
In this paper, we deploy the self-attention mechanism to achieve improved channel estimation for orthogonal frequency-division multiplexing waveforms in the downlink. Specifically, we propose for the first time a hybrid encoder-decoder structure (called HA02) that exploits the attention mechanism to focus on the most important input information. Inspired by the success of the attention mechanism, we implement a transformer encoder block as the encoder, to exploit sparsity in the input features, and a residual neural network as the decoder. Using 3GPP channel models, our simulations show superior estimation performance compared with other candidate neural network methods for channel estimation.
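The abstract describes a hybrid architecture: a transformer encoder block applies self-attention to the channel-estimate input, and a residual neural network decodes the refined estimate. A minimal PyTorch sketch of this idea is below; the layer sizes, the 72-subcarrier pilot grid, and the real/imaginary input encoding are illustrative assumptions, not the paper's actual hyperparameters.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Simple fully-connected residual block (illustrative decoder unit)."""
    def __init__(self, dim):
        super().__init__()
        self.fc1 = nn.Linear(dim, dim)
        self.fc2 = nn.Linear(dim, dim)
        self.act = nn.ReLU()

    def forward(self, x):
        # Skip connection: output = activation(x + F(x))
        return self.act(x + self.fc2(self.act(self.fc1(x))))

class HybridChannelEstimator(nn.Module):
    """Sketch of a HA02-style estimator: transformer encoder + residual decoder.

    Input: a noisy least-squares channel estimate per pilot subcarrier,
    split into real/imaginary parts. All dimensions are assumptions.
    """
    def __init__(self, n_sub=72, d_model=64, n_heads=4):
        super().__init__()
        self.embed = nn.Linear(2, d_model)  # (re, im) -> model width
        self.encoder = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads,
            dim_feedforward=128, batch_first=True)
        self.decoder = nn.Sequential(ResidualBlock(d_model),
                                     ResidualBlock(d_model))
        self.out = nn.Linear(d_model, 2)  # refined (re, im) per subcarrier

    def forward(self, h_ls):
        # h_ls: (batch, n_sub, 2) least-squares estimates at pilot positions
        z = self.embed(h_ls)
        z = self.encoder(z)   # self-attention across subcarriers
        z = self.decoder(z)   # residual refinement
        return self.out(z)

model = HybridChannelEstimator()
h_ls = torch.randn(8, 72, 2)  # batch of 8 noisy channel estimates
h_hat = model(h_ls)
print(h_hat.shape)  # torch.Size([8, 72, 2])
```

Applying attention across subcarriers lets the network weight the most informative parts of the pilot observation, which is the motivation the abstract gives for the encoder choice.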