Keywords
Computer science, Electroencephalography (EEG), Decoupling, Artificial intelligence, Computation, Transformer, FLOPs, Signal processing, Artifact, Gating, Representation, Pattern recognition, Speech recognition, Downstream, Trimming, Channel, Quadratic growth, Signal, Quadratic equation, Encoding, Noise, Algorithm, Spike, Limitation, Scaling, MIMO, Kernel
Authors
Berkay Döner, Thorir Mar Ingolfsson, Luca Benini, Yawei Li
Source
Journal: Cornell University - arXiv
Date: 2025-10-28
Identifier
DOI: 10.48550/arxiv.2510.22257
Abstract
Electroencephalography (EEG) offers a non-invasive lens into human brain activity, but building large-scale models is hampered by topological heterogeneity: each public EEG dataset defines its own electrode layout, limiting generalization. We introduce LUNA (Latent Unified Network Architecture), a self-supervised foundation model that reconciles disparate electrode geometries while scaling linearly -- not quadratically -- with channel count. LUNA compresses multi-channel EEG into a fixed-size, topology-agnostic latent space via learned queries and cross-attention. Downstream transformer blocks then operate exclusively on this latent representation using patch-wise temporal self-attention, decoupling computation from electrode count. Pre-trained on TUEG and Siena (over 21,000 hours of raw EEG across diverse montages) using a masked-patch reconstruction objective, LUNA transfers effectively to four downstream tasks: abnormality detection, artifact rejection, slowing classification, and emotion recognition. It demonstrates highly competitive performance across several benchmarks, achieving state-of-the-art results on TUAR and TUSL, e.g., 0.921 AUROC on TUAR, while reducing FLOPs by 300x and trimming GPU memory use by up to 10x. Critically, these gains are consistent across all evaluated electrode configurations. Code is available at https://github.com/pulp-bio/BioFoundation.
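The abstract's key mechanism is compressing a variable number of electrode channels into a fixed-size latent via learned queries and cross-attention, so downstream attention cost no longer grows with channel count. The sketch below illustrates that idea only; it is not the authors' implementation (see the linked repository for that), and all module and parameter names here are hypothetical.

```python
# Minimal PyTorch sketch (assumed, not LUNA's actual code) of cross-attention
# from a fixed set of learned queries over per-channel tokens. The latent
# shape is independent of the electrode montage, and attention cost is
# O(n_queries * n_channels): linear, not quadratic, in channel count.
import torch
import torch.nn as nn

class LatentChannelCompressor(nn.Module):
    def __init__(self, n_queries: int = 16, d_model: int = 128, n_heads: int = 4):
        super().__init__()
        # Fixed-size learned queries: shared latent "slots" for all montages.
        self.queries = nn.Parameter(torch.randn(n_queries, d_model))
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, channel_tokens: torch.Tensor) -> torch.Tensor:
        # channel_tokens: (batch, n_channels, d_model); n_channels may vary.
        b = channel_tokens.size(0)
        q = self.queries.unsqueeze(0).expand(b, -1, -1)   # (b, n_queries, d_model)
        latent, _ = self.cross_attn(q, channel_tokens, channel_tokens)
        return latent                                      # (b, n_queries, d_model)

# Usage: montages with different channel counts map to the same latent shape,
# so later transformer blocks can operate on the latent alone.
comp = LatentChannelCompressor()
for n_ch in (19, 64):
    x = torch.randn(2, n_ch, 128)
    print(comp(x).shape)  # torch.Size([2, 16, 128]) in both cases
```

Because the fixed-size latent is what the downstream transformer blocks consume, self-attention there runs over latent tokens (and temporal patches) rather than electrodes, which is the decoupling the abstract describes.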