Authors
Xiao Liu, Chenxu Zhang, Fuxiang Huang, Shuyin Xia, Guoyin Wang, Lei Zhang
Abstract
The state space model (SSM) is a mathematical framework for describing and analyzing the behavior of dynamic systems, with applications across fields including control theory, signal processing, economics, and machine learning. In deep learning, SSMs are used to process sequence data in tasks such as time series analysis, natural language processing (NLP), and video understanding; by mapping sequences into a state space, they can better capture long-term dependencies in the data. Modern SSMs in particular have shown strong representational capability in NLP, especially for long-sequence modeling, while maintaining linear time complexity. Building on the latest SSMs, Mamba merges time-varying parameters into the SSM to enable efficient training and inference. Given its impressive efficiency and strong long-range dependency modeling, Mamba is expected to become a new AI architecture that may surpass the Transformer. Recently, a number of works have attempted to study the potential of Mamba in various fields, such as general vision, multimodal learning, medical image analysis, and remote sensing image analysis, extending Mamba from the natural language domain to the visual domain. To provide a thorough understanding of Mamba in the visual domain, we conduct a comprehensive survey and present a taxonomy. This survey focuses on Mamba's applications to a variety of visual tasks and data types, and discusses its predecessors, recent advances, and far-reaching impact on a wide range of domains.
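To make the "time-varying parameters" idea concrete, below is a minimal sketch of a discretized SSM recurrence in which the input and output projections depend on the current input, in the spirit of Mamba's selective mechanism. It is not the authors' or the Mamba library's code: the scalar input channel, diagonal transition A, and the linear parameterizations w_B and w_C are simplifying assumptions chosen for illustration.

```python
# Illustrative sketch of a selective SSM scan (assumed simplified form,
# not the actual Mamba implementation).
import numpy as np

def selective_ssm_scan(x, A, w_B, w_C):
    """Linear-time recurrent scan: h_t = A*h_{t-1} + B_t*x_t, y_t = C_t.h_t.

    x:        (T,) scalar input sequence
    A:        (N,) diagonal state transition (fixed here for simplicity)
    w_B, w_C: (N,) vectors producing input-dependent B_t and C_t
    """
    T, N = x.shape[0], A.shape[0]
    h = np.zeros(N)               # hidden state in R^N
    y = np.empty(T)
    for t in range(T):
        B_t = w_B * x[t]          # time-varying (input-dependent) B
        C_t = w_C * x[t]          # time-varying (input-dependent) C
        h = A * h + B_t * x[t]    # one O(N) state update per token
        y[t] = C_t @ h            # readout from the current state
    return y

# Tiny usage example with random parameters.
rng = np.random.default_rng(0)
T, N = 16, 8
y = selective_ssm_scan(rng.standard_normal(T),
                       np.full(N, 0.9),          # stable decaying state
                       rng.standard_normal(N),
                       rng.standard_normal(N))
print(y.shape)  # (16,)
```

Because the state h is updated once per token with constant-size work, the scan runs in time linear in the sequence length T, which is the efficiency property the abstract contrasts with the Transformer's quadratic attention.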