Computer science
Artificial intelligence
Computer vision
Representation (politics)
Epipolar geometry
Ambiguity
Bridging (networking)
Semantic gap
Semantics (computer science)
Image (mathematics)
Image retrieval
Political science
Computer network
Politics
Programming language
Law
Authors
Bohan Li, Yasheng Sun, Zhujin Liang, Dalong Du, Zhuanghui Zhang, Xiaofeng Wang, Yunnan Wang, Xin Jin, Wenjun Zeng
Identifier
DOI: 10.24963/ijcai.2024/107
Abstract
In the latest advancements in multimodal learning, effectively addressing the spatial and semantic information lost when visual data is encoded remains a critical challenge, because the performance of large multimodal models is positively correlated with how tightly the visual encoder is coupled to the large language model. Existing approaches often suffer from vector-space gaps or semantic disparities, causing information loss during propagation. To address these issues, we propose MAGE (Multimodal Alignment and Generation Enhancement), a novel framework that bridges the semantic spaces of vision and text through an innovative alignment mechanism. By introducing the Intelligent Alignment Network (IAN), MAGE achieves both dimensional and semantic alignment. To reduce the gap between synonymous heterogeneous data, we employ a training strategy that combines cross-entropy and mean-squared-error losses, significantly enhancing the alignment effect. Moreover, to strengthen MAGE’s “Any-to-Any” capability, we develop a fine-tuning dataset of multimodal tool-calling instructions that expands the boundaries of the model’s output capabilities. Finally, our proposed multimodal large-model architecture, MAGE, achieves significantly better performance than comparable works across various evaluation benchmarks, including MME, MMBench, and SEED. Complete code and the appendix are available at: https://github.com/GTCOM-NLP/MAGE
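The abstract does not detail the architecture of the Intelligent Alignment Network or how the cross-entropy and mean-squared-error terms are weighted, so the following is only a minimal PyTorch sketch of the general idea: an MLP projector for dimensional alignment plus a combined CE + MSE objective. The names AlignmentNetwork and alignment_loss, the mse_weight value, and all tensor dimensions are hypothetical illustrations, not taken from the MAGE code; consult the linked repository for the actual implementation.

```python
import torch
import torch.nn as nn

class AlignmentNetwork(nn.Module):
    """Toy stand-in for an alignment module: projects vision-encoder
    features into the language model's embedding space so dimensions match."""
    def __init__(self, vision_dim: int, llm_dim: int):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(vision_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, vision_feats: torch.Tensor) -> torch.Tensor:
        return self.proj(vision_feats)

def alignment_loss(logits, target_ids, aligned_vis, text_embeds, mse_weight=0.5):
    """Cross-entropy on token prediction plus MSE pulling the projected
    visual features toward the embeddings of the paired text.
    The 0.5 weighting is an assumed default, not a value from the paper."""
    ce = nn.functional.cross_entropy(logits.flatten(0, 1), target_ids.flatten())
    mse = nn.functional.mse_loss(aligned_vis, text_embeds)
    return ce + mse_weight * mse

# Example usage with random tensors standing in for real model outputs:
ian = AlignmentNetwork(vision_dim=1024, llm_dim=4096)
vis = torch.randn(2, 1024)                 # pooled vision-encoder features
txt = torch.randn(2, 4096)                 # embeddings of the paired captions
logits = torch.randn(2, 8, 32000)          # LLM logits: (batch, seq, vocab)
targets = torch.randint(0, 32000, (2, 8))  # ground-truth token ids
loss = alignment_loss(logits, targets, ian(vis), txt)
```

Under this reading, the MSE term directly narrows the gap between "synonymous" visual and textual representations, while the cross-entropy term keeps the language model's generation objective intact.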