Multimodal Learning
Computer science
Artificial intelligence
Modality
Classification
Taxonomy (biology)
Multimodal
Field (mathematics)
Machine learning
Human-computer interaction
World Wide Web
Social science
Plant
Mathematics
Sociology
Pure mathematics
Biology
Authors
Tadas Baltrušaitis, Chaitanya Ahuja, Louis-Philippe Morency
Identifiers
DOI: 10.1109/TPAMI.2018.2798607
Abstract
Our experience of the world is multimodal - we see objects, hear sounds, feel texture, smell odors, and taste flavors. Modality refers to the way in which something happens or is experienced and a research problem is characterized as multimodal when it includes multiple such modalities. In order for Artificial Intelligence to make progress in understanding the world around us, it needs to be able to interpret such multimodal signals together. Multimodal machine learning aims to build models that can process and relate information from multiple modalities. It is a vibrant multi-disciplinary field of increasing importance and with extraordinary potential. Instead of focusing on specific multimodal applications, this paper surveys the recent advances in multimodal machine learning itself and presents them in a common taxonomy. We go beyond the typical early and late fusion categorization and identify broader challenges that are faced by multimodal machine learning, namely: representation, translation, alignment, fusion, and co-learning. This new taxonomy will enable researchers to better understand the state of the field and identify directions for future research.
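The abstract contrasts the classic early/late fusion split with the survey's broader taxonomy (representation, translation, alignment, fusion, co-learning). As an illustration only, the following minimal PyTorch sketch shows the difference between early fusion (concatenating modality features before a joint prediction head) and late fusion (combining per-modality predictions); the module names, feature dimensions, and toy data are hypothetical and are not taken from the paper.

```python
import torch
import torch.nn as nn

class EarlyFusion(nn.Module):
    """Early fusion: concatenate per-modality features, then predict jointly."""
    def __init__(self, dim_a, dim_b, hidden, num_classes):
        super().__init__()
        self.enc_a = nn.Linear(dim_a, hidden)   # hypothetical encoder for modality A (e.g. visual)
        self.enc_b = nn.Linear(dim_b, hidden)   # hypothetical encoder for modality B (e.g. acoustic)
        self.head = nn.Linear(2 * hidden, num_classes)

    def forward(self, x_a, x_b):
        h = torch.cat([torch.relu(self.enc_a(x_a)),
                       torch.relu(self.enc_b(x_b))], dim=-1)
        return self.head(h)

class LateFusion(nn.Module):
    """Late fusion: predict from each modality separately, then average the logits."""
    def __init__(self, dim_a, dim_b, num_classes):
        super().__init__()
        self.clf_a = nn.Linear(dim_a, num_classes)
        self.clf_b = nn.Linear(dim_b, num_classes)

    def forward(self, x_a, x_b):
        return 0.5 * (self.clf_a(x_a) + self.clf_b(x_b))

# Toy usage with random stand-ins for two modalities
x_a, x_b = torch.randn(4, 128), torch.randn(4, 40)
print(EarlyFusion(128, 40, 64, 3)(x_a, x_b).shape)  # torch.Size([4, 3])
print(LateFusion(128, 40, 3)(x_a, x_b).shape)       # torch.Size([4, 3])
```

The sketch only captures the fusion challenge; the other taxonomy axes the paper identifies (representation, translation, alignment, co-learning) concern how the modality features themselves are built and related, not just how they are combined.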