Computer Science
Speech Recognition
Speaker Recognition
Speech Processing
Audio Signal Processing
Auditory Scene Analysis
Artificial Intelligence
Speech Coding
Audio Signal
Perception
Psychology
Neuroscience
Authors
Ruijie Tao,Xinyuan Qian,Yidi Jiang,Junjie Li,Jiadong Wang,Haizhou Li
Identifiers
DOI:10.1109/taslpro.2025.3527766
Abstract
Audio-visual target speaker extraction (AV-TSE) aims to extract a specific person's speech from an audio mixture given auxiliary visual cues. Previous methods usually search for the target voice through speech-lip synchronization. However, this strategy mainly focuses on the existence of the target speech while ignoring variations in the noise characteristics, i.e., interfering speakers and background noise. This may result in extracting noisy signals from the incorrect sound source in challenging acoustic situations. To this end, we propose a novel selective auditory attention mechanism that suppresses interfering speakers and non-speech signals to avoid incorrect speaker extraction. By estimating and utilizing the undesired noisy signal through this mechanism, we design an AV-TSE framework named the Subtraction-and-ExtrAction network (SEANet) to suppress the noisy signals. We conduct extensive experiments by re-implementing three popular AV-TSE methods as baselines and adopting nine evaluation metrics. The experimental results show that our proposed SEANet achieves state-of-the-art results and performs well on all five datasets. The code can be found at: https://github.com/TaoRuijie/SEANet.git
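The abstract describes a subtraction-then-extraction idea: first estimate the undesired signal (interfering speakers plus background noise), remove it from the mixture representation, and then extract the target speech guided by the visual cue. The following is a minimal, illustrative sketch of that flow only; all module names, layer sizes, and the exact wiring are assumptions and do not reproduce the actual SEANet architecture, which is available in the linked repository.

```python
# Minimal sketch of a subtract-then-extract flow (hypothetical design, not SEANet itself).
import torch
import torch.nn as nn


class SubtractThenExtract(nn.Module):
    """Estimate the undesired (noise + interference) representation, subtract it
    from the mixture embedding, then extract the target speech with the visual cue.
    All dimensions below are placeholder assumptions."""

    def __init__(self, audio_dim=256, visual_dim=512):
        super().__init__()
        # Estimates the undesired-signal representation from the mixture,
        # conditioned on the target speaker's visual embedding.
        self.noise_estimator = nn.Sequential(
            nn.Linear(audio_dim + visual_dim, audio_dim), nn.ReLU(),
            nn.Linear(audio_dim, audio_dim),
        )
        # Predicts a soft mask for the target speaker from the "cleaned" mixture.
        self.extractor = nn.Sequential(
            nn.Linear(audio_dim + visual_dim, audio_dim), nn.ReLU(),
            nn.Linear(audio_dim, audio_dim), nn.Sigmoid(),
        )

    def forward(self, mix_emb, visual_emb):
        # mix_emb:    (batch, time, audio_dim)  mixture embedding
        # visual_emb: (batch, time, visual_dim) lip/visual embedding of the target
        fused = torch.cat([mix_emb, visual_emb], dim=-1)
        noise_emb = self.noise_estimator(fused)   # estimate the undesired signal
        cleaned = mix_emb - noise_emb             # "subtraction" step
        mask = self.extractor(torch.cat([cleaned, visual_emb], dim=-1))
        return mask * mix_emb                     # "extraction" step


if __name__ == "__main__":
    model = SubtractThenExtract()
    mix = torch.randn(2, 100, 256)
    vis = torch.randn(2, 100, 512)
    print(model(mix, vis).shape)  # torch.Size([2, 100, 256])
```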