Computer Science
Artificial Intelligence
Telecommunications
Natural Language Processing
Speech Recognition
World Wide Web
Authors
Feibo Jiang, Dong Li, Yubo Peng, Kezhi Wang, Kun Yang, Cunhua Pan, Xiaohu You
Identifier
DOI: 10.1109/mcom.001.2300575
Abstract
Multimodal signals, including text, audio, image, and video, can be integrated into semantic communication (SC) systems to provide an immersive experience with low latency and high quality at the semantic level. However, multimodal SC faces several challenges, including data heterogeneity, semantic ambiguity, and signal distortion during transmission. Recent advances in large AI models, particularly multimodal language models (MLMs) and large language models (LLMs), offer potential solutions to these issues. To this end, we propose a large AI model-based multimodal SC (LAM-MSC) framework. We first present MLM-based multimodal alignment (MMA), which uses the MLM to transform between multimodal and unimodal data while preserving semantic consistency. Then, a personalized LLM-based knowledge base (LKB) is proposed, which allows users to perform personalized semantic extraction or recovery through the LLM, effectively resolving semantic ambiguity. Third, we apply conditional generative adversarial network-based channel estimation (CGE) to estimate the wireless channel state information, which mitigates the impact of fading channels in SC. Finally, we conduct simulations that demonstrate the superior performance of the LAM-MSC framework.
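To make the CGE component concrete, below is a minimal PyTorch sketch of conditional-GAN channel estimation: a generator maps received pilot signals to an estimate of the channel state information (CSI), while a discriminator scores (pilot, CSI) pairs. The dimensions, layer sizes, and synthetic training data are illustrative assumptions, not the paper's actual architecture.

```python
# Minimal sketch of CGAN-based channel estimation (CGE).
# Assumption: pilots and CSI are flattened real-valued vectors;
# the paper's true shapes and networks may differ.
import torch
import torch.nn as nn

PILOT_DIM, CSI_DIM = 64, 64  # assumed pilot/CSI vector sizes

class Generator(nn.Module):
    """Estimates CSI conditioned on the received pilot signal."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(PILOT_DIM, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, CSI_DIM),
        )

    def forward(self, pilot):
        return self.net(pilot)

class Discriminator(nn.Module):
    """Scores whether a (pilot, CSI) pair is measured or generated."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(PILOT_DIM + CSI_DIM, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1), nn.Sigmoid(),
        )

    def forward(self, pilot, csi):
        return self.net(torch.cat([pilot, csi], dim=-1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

# One toy training step on synthetic data standing in for
# measured (pilot, CSI) pairs.
pilot = torch.randn(32, PILOT_DIM)
true_csi = torch.randn(32, CSI_DIM)
real, fake = torch.ones(32, 1), torch.zeros(32, 1)

# Discriminator step: real pairs vs. generated pairs.
opt_d.zero_grad()
loss_d = bce(D(pilot, true_csi), real) + bce(D(pilot, G(pilot).detach()), fake)
loss_d.backward()
opt_d.step()

# Generator step: produce CSI estimates that fool the discriminator.
opt_g.zero_grad()
loss_g = bce(D(pilot, G(pilot)), real)
loss_g.backward()
opt_g.step()
```

At inference time only the generator is kept: `G(pilot)` yields the CSI estimate used to equalize the fading channel before semantic decoding.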