Electroencephalography (EEG)
Computer science
Convolutional neural network
Rhythm
Artificial intelligence
Estimation
Pattern recognition (psychology)
Artificial neural network
Speech recognition
Psychology
Neuroscience
Engineering
Philosophy
Systems engineering
Aesthetics
Authors
Naoki Yoshimura, Toshihisa Tanaka, Yuta Inaba
Identifier
DOI:10.1109/ssp53291.2023.10208053
Abstract
The problem of estimating imagined music from an electroencephalogram (EEG) is very challenging. In this paper, we focused on beats (pulse trains of single notes), one of the components of music, and attempted to estimate imagined beats from an EEG. First, we presented two types of beat patterns and asked 17 experimental participants to imagine them. Next, the imagined beat pulses were estimated from the EEG recorded during the task using spatiotemporal convolutional neural network models. We employed a CNN and EEGNet, trained with either binary cross-entropy or focal loss, and evaluated their performance using AUC and F1-measure. Although the AUCs of the CNN model and EEGNet are comparable, EEGNet has far fewer parameters than the CNN. Moreover, the choice of loss function has a clear effect on the F1-measure. Overall, the EEGNet model with focal loss performed efficiently in imagined beat identification.
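For readers unfamiliar with the focal loss mentioned in the abstract, the sketch below shows how a binary focal loss can be implemented for per-frame pulse/no-pulse predictions. The abstract gives no implementation details, so the framework (PyTorch), the hyperparameter values (alpha, gamma), and the tensor shapes are illustrative assumptions, not the authors' actual setup.

```python
import torch
import torch.nn.functional as F

def binary_focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """Binary focal loss for pulse/no-pulse classification.

    logits:  raw model outputs, shape (batch,)
    targets: binary labels in {0, 1}, shape (batch,)
    alpha, gamma: common default values, not necessarily those used in the paper.
    """
    targets = targets.float()
    # Per-sample binary cross-entropy, unreduced so it can be reweighted.
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p = torch.sigmoid(logits)
    # p_t is the predicted probability of the true class for each sample.
    p_t = targets * p + (1 - targets) * (1 - p)
    alpha_t = targets * alpha + (1 - targets) * (1 - alpha)
    # (1 - p_t)^gamma down-weights easy examples, so hard ones dominate the loss.
    return (alpha_t * (1 - p_t) ** gamma * bce).mean()
```

With gamma = 0 and alpha = 0.5, this reduces (up to a constant factor) to ordinary binary cross-entropy, which is presumably why the two losses can be compared directly in the evaluation described above.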