Electroencephalography (EEG)
Computer science
Decoding methods
Artificial intelligence
Transformer
Benchmark
Machine learning
Psychology
Neuroscience
Voltage
Engineering
Geodesy
Telecommunications
Electrical engineering
Geography
Authors
Yao Yuxuan, Hongbo Wang, Chen Li, Peng Yiheng, Jingjing Luo
Identifier
DOI:10.1088/1741-2552/ae17e9
Abstract
Objective. Electroencephalography (EEG) records the spontaneous electrical activity of the brain. Despite the growing application of deep learning in EEG decoding, traditional methods still rely heavily on supervised learning, which is often limited by task specificity and dataset dependency, restricting model performance and generalization. Inspired by the success of large language models (LLMs), EEG foundation models (EEG FMs) are attracting increasing attention as a unified paradigm for EEG decoding. In this study, we review a selection of representative studies on EEG FMs, aiming to extract trends and provide recommendations for future research.
Approach. We provide a comprehensive analysis of recent advances in EEG FMs, with a focus on downstream tasks, benchmark datasets, model architectures, and pre-training techniques. We analyze and synthesize the core components of these FMs and systematically compare their performance and generalizability.
Main results. Our review reveals that EEG FMs are pre-trained on large-scale datasets, typically involving several hundred subjects; the number of subjects can reach up to 14,987, with a maximum total duration of 27,062 hours. Most current EEG FMs adopt a mask-based reconstruction pre-training strategy and employ efficient transformer-based architectures (see the illustrative sketch after the abstract). Our comparative analysis shows that EEG FMs demonstrate significant potential in advancing EEG decoding tasks, particularly seizure detection. However, their performance in complex scenarios such as motor imagery decoding remains limited.
Significance. This review summarizes the existing approaches and performance outcomes of EEG FMs, offers valuable insights into their current limitations, and delineates prospective avenues for future research.
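
The following is a minimal sketch of the mask-based reconstruction pre-training scheme with a transformer encoder described in the Main results, assuming PyTorch. The class name MaskedEEGPretrainer and all hyperparameters (patch length, model dimension, masking ratio, batch shapes) are hypothetical choices for illustration and are not taken from any specific EEG foundation model reviewed in the paper.

```python
# Minimal sketch of mask-based reconstruction pre-training with a
# transformer encoder, assuming PyTorch. All names and hyperparameters
# (patch_len, d_model, mask_ratio, ...) are illustrative only.
import torch
import torch.nn as nn


class MaskedEEGPretrainer(nn.Module):
    def __init__(self, patch_len=200, d_model=128, n_heads=4,
                 n_layers=4, mask_ratio=0.5):
        super().__init__()
        self.mask_ratio = mask_ratio
        # Each token is a patch of raw EEG samples from one channel.
        self.embed = nn.Linear(patch_len, d_model)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, d_model))
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, patch_len)  # reconstruct raw samples

    def forward(self, x):
        # x: (batch, n_patches, patch_len) -- EEG already segmented into
        # patches; positional/channel embeddings omitted for brevity.
        tokens = self.embed(x)
        B, N, _ = tokens.shape
        # Randomly mask a fraction of patches and replace them with a
        # learnable mask token.
        mask = torch.rand(B, N, device=x.device) < self.mask_ratio
        tokens = torch.where(mask.unsqueeze(-1),
                             self.mask_token.expand(B, N, -1), tokens)
        recon = self.head(self.encoder(tokens))
        # Reconstruction loss is computed only on the masked patches.
        return ((recon - x) ** 2)[mask].mean()


# Usage: one pre-training step on random data standing in for real EEG.
model = MaskedEEGPretrainer()
batch = torch.randn(8, 64, 200)  # 8 windows, 64 patches, 200 samples each
loss = model(batch)
loss.backward()
```

The loss is restricted to masked patches, mirroring masked-autoencoder-style pre-training; a full EEG FM would additionally add positional and channel embeddings and pre-train over much larger multi-subject corpora.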