Authors
Bowen Xing, Libo Qin, Zhihong Zhu, Yu Zhou, Ivor W. Tsang
Identifier
DOI:10.1109/tpami.2025.3597726
Abstract
The state-of-the-art zero-shot cross-lingual spoken language understanding (SLU) model utilizes cross-lingual unsupervised contrastive learning to achieve multilingual semantics alignment. While existing methods have achieved promising results, two issues still limit cross-lingual knowledge transfer: (1) dual-task correlative knowledge is not explicitly modeled and transferred to target languages; (2) the semantic differences among samples are ignored, and the contrastive semantics knowledge is not transferred to target languages. In this paper, we propose a dual-task cross-lingual alignment network (DXA-Net), which makes the first attempt to tackle zero-shot cross-lingual SLU based on the prompt-tuning paradigm. To solve the first issue, we propose the co-guiding prompt, which allows the model to conditionally generate one task's label based on the other's. To solve the second issue, we propose the intent/slot contrastive prompt, which teaches the model to discriminate whether a pair of samples has the same or similar labels. Additionally, we propose the multilingual semantics contrastive prompt to enhance multilingual semantics alignment. Experiments on the benchmark show that our model achieves new state-of-the-art performance across nine languages.
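The intent/slot contrastive prompt described above trains the model to judge whether two samples share the same or similar labels. The abstract does not give the loss formulation, but the underlying idea is commonly realized with a supervised contrastive objective: anchors are pulled toward same-label samples and pushed away from different-label ones. A minimal sketch of such a loss, using generic NumPy code and hypothetical names (not DXA-Net's actual implementation):

```python
import numpy as np

def supervised_contrastive_loss(embeddings, labels, temperature=0.1):
    """Pull same-label samples together, push different-label ones apart.

    embeddings: (n, d) array of sample representations.
    labels: (n,) array of labels; equal labels form positive pairs.
    """
    # L2-normalize so that dot products are cosine similarities.
    emb = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = emb @ emb.T / temperature          # scaled similarity matrix
    n = len(labels)
    not_self = ~np.eye(n, dtype=bool)        # exclude i == j pairs
    pos = (labels[:, None] == labels[None, :]) & not_self

    losses = []
    for i in range(n):
        if not pos[i].any():                 # anchor has no positive partner
            continue
        # log of the softmax denominator over all non-self pairs
        log_denom = np.log(np.exp(sim[i][not_self[i]]).sum())
        # negative average log-likelihood of anchor i's positives
        losses.append(-(sim[i][pos[i]] - log_denom).mean())
    return float(np.mean(losses))
```

With this objective, a batch whose same-label samples already lie close in embedding space yields a near-zero loss, while a batch where positives are far apart is heavily penalized, which is the discrimination behavior the contrastive prompt is meant to induce.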