Pronunciation
Ordinal regression
Computer science
Discriminative model
Regression
Artificial intelligence
Mean squared error
Regression analysis
Contrast (vision)
Feature (linguistics)
Speech recognition
Leverage (statistics)
Natural language processing
Pattern recognition (psychology)
Machine learning
Statistics
Mathematics
Linguistics
Philosophy
Authors
Bicheng Yan, Hsin-Wei Wang, Yicheng Wang, Jiun-Ting Li, Chi-Han Lin, Berlin Chen
Identifier
DOI: 10.1109/asru57964.2023.10389777
Abstract
Automatic pronunciation assessment (APA) aims to quantify the pronunciation proficiency of a second-language (L2) learner. Prevailing approaches to APA typically leverage neural models trained with a regression loss function, such as the mean-squared error (MSE) loss, for proficiency-level prediction. Although most regression models can effectively capture the ordinality of proficiency levels in the feature space, they face a primary obstacle: different phoneme categories with the same proficiency level are inevitably forced close together, retaining less phoneme-discriminative information. To address this, we devise a phonemic contrast ordinal (PCO) loss for training regression-based APA models, which aims to better preserve phonemic distinctions between phoneme categories while also considering the ordinal relationships of the regression targets. Specifically, we introduce a phoneme-distinct regularizer into the MSE loss, which encourages the feature representations of different phoneme categories to be far apart while simultaneously pulling closer the representations belonging to the same phoneme category by means of weighted distances. An extensive set of experiments carried out on the speechocean762 benchmark dataset demonstrates the feasibility and effectiveness of our model in comparison with existing state-of-the-art models.
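The abstract describes the PCO loss as an MSE regression objective augmented with a phoneme-distinct regularizer that pushes apart representations of different phoneme categories and pulls together same-category representations via weighted distances. The following is a minimal PyTorch sketch of that idea only; the function name pco_loss, the margin-based push term, the exponential score-gap weighting, and the hyperparameters margin and lambda_reg are illustrative assumptions, not the authors' exact formulation.

```python
# Sketch of a regression loss with a phoneme-contrastive regularizer,
# in the spirit of the PCO loss described above. Weighting scheme and
# hyperparameters are assumptions for illustration.
import torch
import torch.nn.functional as F


def pco_loss(features, scores, score_preds, phoneme_ids,
             margin=1.0, lambda_reg=0.1):
    """MSE regression loss plus a phoneme-discriminative contrastive term.

    features:    (N, D) phone-level feature representations
    scores:      (N,)   ground-truth proficiency scores
    score_preds: (N,)   predicted proficiency scores
    phoneme_ids: (N,)   phoneme category index of each feature
    """
    # Standard regression objective on the proficiency scores.
    mse = F.mse_loss(score_preds, scores)

    # Pairwise Euclidean distances between feature representations.
    dists = torch.cdist(features, features)  # (N, N)

    same_phone = phoneme_ids.unsqueeze(0) == phoneme_ids.unsqueeze(1)
    eye = torch.eye(len(features), dtype=torch.bool, device=features.device)

    # Pull together pairs from the same phoneme category, weighted by how
    # close their proficiency scores are (closer scores -> stronger pull),
    # so the ordinal structure of the regression target is respected.
    score_gap = (scores.unsqueeze(0) - scores.unsqueeze(1)).abs()
    pull_w = torch.exp(-score_gap) * (same_phone & ~eye)
    pull = (pull_w * dists).sum() / pull_w.sum().clamp(min=1e-8)

    # Push apart pairs from different phoneme categories up to a margin,
    # preserving phoneme-discriminative information in the feature space.
    push_mask = (~same_phone).float()
    push = (push_mask * F.relu(margin - dists)).sum() / push_mask.sum().clamp(min=1e-8)

    return mse + lambda_reg * (pull + push)
```

In practice the regularizer weight and the pull/push weighting would need tuning per dataset; the paper's own definition of the weighted distances may differ from this sketch.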