Cuing Without Sharing: A Federated Cued Speech Recognition Framework via Mutual Knowledge Distillation

Keywords: Computer science, Cue (language), Closed captioning, Gesture, Natural language processing, Artificial intelligence, Speech recognition, Human-computer interaction, Image (mathematics), Cognitive psychology, Psychology
Authors
Yuxuan Zhang,Lei Liu,Li Liu
Identifier
DOI:10.1145/3581783.3612134
Abstract

Cued Speech (CS) is a visual coding system that encodes spoken languages at the phonetic level, combining lip-reading with hand gestures to assist communication among people with hearing impairments. The Automatic CS Recognition (ACSR) task aims to transcribe CS videos into linguistic text, involving lips and hands as two distinct modalities that convey complementary information. However, the traditional centralized training approach poses potential privacy risks because CS data contain facial and gesture videos. To address this issue, we propose a new Federated Cued Speech Recognition (FedCSR) framework that trains an ACSR model over decentralized CS data without sharing private information. In particular, a mutual knowledge distillation method is proposed to maintain cross-modal semantic consistency over the non-IID CS data, ensuring that a unified feature space is learned for linguistic and visual information. On the server side, a globally shared linguistic model is trained to capture long-term dependencies in text sentences and is aligned with the visual information from the local clients via visual-to-linguistic distillation. On the client side, the visual model of each client is trained on its own local data, assisted by linguistic-to-visual distillation that treats the linguistic model as the teacher. To the best of our knowledge, this is the first approach to consider the federated ACSR task for privacy protection. Experimental results on a Chinese CS dataset with multiple cuers demonstrate that our approach outperforms both mainstream federated learning baselines and existing centralized state-of-the-art ACSR methods, achieving a 9.7% improvement in character error rate (CER) and a 15.0% improvement in word error rate (WER). The Chinese CS dataset and our code will be open-sourced.
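
The abstract describes the training protocol only at a high level. The sketch below illustrates the mutual knowledge distillation idea in PyTorch: each client trains its visual model with the global linguistic model as teacher (linguistic-to-visual), and the server aligns the shared linguistic model with the clients' visual predictions (visual-to-linguistic) after federated averaging. This is a minimal toy under stated assumptions, not the authors' implementation: the architectures, the feature and vocabulary sizes, the distillation weight, and the assumption that clients upload soft predictions alongside model weights are all hypothetical placeholders.

```python
# Minimal sketch of mutual knowledge distillation in a federated ACSR setup.
# All names, dimensions, and loss weights are assumptions, not the paper's code.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

EMB, VOCAB, LAM = 128, 40, 0.5  # assumed feature size, vocab size, KD weight

class VisualEncoder(nn.Module):
    """Client-side visual model: lip/hand features -> shared space -> logits."""
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(input_size=64, hidden_size=EMB, batch_first=True)
        self.head = nn.Linear(EMB, VOCAB)

    def forward(self, frames):              # frames: (B, T, 64)
        h, _ = self.rnn(frames)             # (B, T, EMB)
        return self.head(h)                 # per-frame logits: (B, T, VOCAB)

class LinguisticModel(nn.Module):
    """Server-side linguistic model: captures long-term text dependencies."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.rnn = nn.GRU(EMB, EMB, batch_first=True)
        self.head = nn.Linear(EMB, VOCAB)

    def forward(self, tokens):              # tokens: (B, T) int64
        h, _ = self.rnn(self.emb(tokens))
        return self.head(h)                 # (B, T, VOCAB)

def kd_loss(student_logits, teacher_logits, tau=2.0):
    """Soft-label distillation: KL divergence at temperature tau."""
    p_t = F.softmax(teacher_logits / tau, dim=-1)
    log_p_s = F.log_softmax(student_logits / tau, dim=-1)
    return F.kl_div(log_p_s, p_t, reduction="batchmean") * tau ** 2

def client_update(visual, linguistic, frames, tokens, lr=1e-3):
    """Linguistic-to-visual distillation on the client: the frozen global
    linguistic model acts as teacher next to the supervised loss. Only
    weights and soft predictions leave the client, never the raw video."""
    opt = torch.optim.SGD(visual.parameters(), lr=lr)
    v_logits = visual(frames)
    with torch.no_grad():
        l_logits = linguistic(tokens)
    sup = F.cross_entropy(v_logits.reshape(-1, VOCAB), tokens.reshape(-1))
    loss = sup + LAM * kd_loss(v_logits, l_logits)
    opt.zero_grad(); loss.backward(); opt.step()
    return visual.state_dict(), v_logits.detach()

def server_update(linguistic, v_logits, tokens, lr=1e-3):
    """Visual-to-linguistic distillation on the server: align the globally
    shared linguistic model with the clients' visual predictions."""
    opt = torch.optim.SGD(linguistic.parameters(), lr=lr)
    loss = LAM * kd_loss(linguistic(tokens), v_logits)
    opt.zero_grad(); loss.backward(); opt.step()

def fed_avg(state_dicts):
    """Plain FedAvg over the client visual models."""
    avg = copy.deepcopy(state_dicts[0])
    for k in avg:
        avg[k] = torch.stack([sd[k].float() for sd in state_dicts]).mean(0)
    return avg

# One toy communication round with two clients and synthetic data.
linguistic = LinguisticModel()
clients = [VisualEncoder() for _ in range(2)]
states, uploads = [], []
for visual in clients:
    frames = torch.randn(2, 5, 64)            # stand-in for CS video features
    tokens = torch.randint(0, VOCAB, (2, 5))  # stand-in for text labels
    sd, v_logits = client_update(visual, linguistic, frames, tokens)
    states.append(sd); uploads.append((v_logits, tokens))
global_state = fed_avg(states)                # aggregate visual models
for visual in clients:
    visual.load_state_dict(global_state)      # broadcast the averaged model
for v_logits, tokens in uploads:              # distill vision into language
    server_update(linguistic, v_logits, tokens)
```

The point of the sketch is only the structure of the protocol: raw CS videos never leave the clients, while the two distillation directions keep the visual and linguistic representations consistent across non-IID client data. The actual exchanged quantities and loss formulations should be taken from the paper itself.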