Computer science
Subspace topology
Modal verb
Benchmark (surveying)
Modality (human-computer interaction)
Federated learning
Information retrieval
Data retrieval
Data mining
Artificial intelligence
Machine learning
Geodesy
Chemistry
Polymer chemistry
Geography
Authors
Linlin Zong, Qiujie Xie, Jiahui Zhou, Peiran Wu, Xianchao Zhang, Bo Xu
Identifier
DOI:10.1145/3404835.3462989
Abstract
Deep cross-modal retrieval methods have shown their competitiveness among different cross-modal retrieval algorithms. Generally, these methods require a large amount of training data. However, aggregating large amounts of data incurs huge privacy risks and high maintenance costs. Inspired by the recent success of federated learning, we propose federated cross-modal retrieval (FedCMR), which learns the model from decentralized multi-modal data. Specifically, we first train the cross-modal retrieval model and learn the common space across multiple modalities in each client using its local data. Then, we jointly learn the common subspace of multiple clients on a trusted central server. Finally, each client updates the common subspace of its local model based on the aggregated common subspace on the server, so that all clients participating in the training can benefit from federated learning. Experimental results on four benchmark datasets demonstrate the effectiveness of the proposed method.
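The three-step loop in the abstract (local common-space learning, server-side aggregation, client-side update) follows a familiar federated-averaging pattern. Below is a minimal Python/NumPy sketch of the aggregation and update steps under stated assumptions: the function names, size-weighted averaging, per-modality projection matrices, and the interpolation coefficient are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def aggregate_common_subspace(client_projections, client_sizes):
    """Server-side step (assumed FedAvg-style): combine each client's
    learned projection matrices into the common space into a single
    aggregated subspace, weighted by local dataset size.

    client_projections: list of dicts like {"img": W_img, "txt": W_txt},
                        one per client (NumPy arrays of matching shapes).
    client_sizes:       number of local training pairs per client.
    """
    total = float(sum(client_sizes))
    agg = {key: np.zeros_like(mat)
           for key, mat in client_projections[0].items()}
    for proj, n in zip(client_projections, client_sizes):
        for key, mat in proj.items():
            agg[key] += (n / total) * mat  # weighted average per modality
    return agg

def local_update(local_proj, agg_proj, mix=0.5):
    """Client-side step: blend the local projection with the aggregated
    one so each client benefits from the federation. `mix` is a
    hypothetical interpolation coefficient, not from the paper."""
    return {key: (1 - mix) * local_proj[key] + mix * agg_proj[key]
            for key in local_proj}

# Toy usage: two clients, 64-d features projected into a 32-d common space.
rng = np.random.default_rng(0)
clients = [{"img": rng.normal(size=(64, 32)),
            "txt": rng.normal(size=(64, 32))} for _ in range(2)]
agg = aggregate_common_subspace(clients, client_sizes=[100, 300])
updated = local_update(clients[0], agg)
print(updated["img"].shape)  # (64, 32)
```

The key privacy property motivating the design is visible in the sketch: only the learned projection parameters leave each client, never the raw multi-modal training data.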