Understanding Flow Experience in Video Learning by Multimodal Data

Keywords: artificial intelligence, boredom, pattern, computer science, machine learning, multi-task learning, multilayer perceptron, mean squared error, speech recognition, pattern recognition (psychology), artificial neural network, task (project management), statistics, psychology, mathematics, social psychology, sociology, economics, social science, management
Authors
Yankai Wang, Bing Chen, Hongyan Liu, Zhiguo Hu
Source
Journal: International Journal of Human–Computer Interaction [Taylor & Francis]
Volume/Issue: 40 (12): 3144-3158. Cited by: 4
Identifier
DOI:10.1080/10447318.2023.2181878
Abstract

Video-based learning has become an effective alternative to face-to-face instruction. In such situations, modeling or predicting learners' flow experience during video learning is critical for enhancing the learning experience and advancing learning technologies. In this study, we set up an instructional scenario for video learning according to flow theory. Different learning states, i.e., boredom, fit (flow), and anxiety, were successfully induced by varying the difficulty levels of the learning task. We collected learners' electrocardiogram (ECG) signals as well as facial video, upper body posture and speech data during the learning process. We proposed classification models of the learning state and regression models to predict flow experience by utilizing different combinations of the data from the four modalities. The results showed that the model performance of learning state recognition was significantly improved by the decision-level fusion of multimodal data. By using the selected important features from all data sources, such as the standard deviation of normal-to-normal R-R intervals (SDNN), high-frequency (HF) heart rate variability and mel-frequency cepstral coefficients (MFCC), the multilayer perceptron (MLP) classifier gave the best recognition result of learning states (i.e., mean AUC of 0.780). The recognition accuracy of boredom, fit (flow) and anxiety reached 47.48%, 80.89% and 47.41%, respectively. For flow experience prediction, the MLP regressor based on the fusion of two modalities (i.e., ECG and posture) achieved the optimal prediction (i.e., mean RMSE of 0.717). This study demonstrates the feasibility of modeling and predicting the flow experience in video learning by combining multimodal data.
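The decision-level fusion mentioned in the abstract can be sketched roughly as follows: each modality's classifier emits a probability vector over the three learning states, and the fused decision averages them (simple soft voting). Note this is a minimal illustrative sketch; the probability values, the four-modality breakdown, and the unweighted-average fusion rule are assumptions for demonstration, not the paper's trained models or actual fusion scheme.

```python
# Sketch of decision-level fusion over the three learning states.
# Each modality's classifier outputs class probabilities; the fused
# decision averages them and picks the most probable state.

STATES = ["boredom", "fit (flow)", "anxiety"]

def fuse_decisions(per_modality_probs):
    """Average class-probability vectors from the per-modality
    classifiers (soft voting) and return (fused_label, fused_probs)."""
    n = len(per_modality_probs)
    k = len(per_modality_probs[0])
    fused = [sum(p[i] for p in per_modality_probs) / n for i in range(k)]
    return STATES[max(range(k), key=lambda i: fused[i])], fused

# Hypothetical outputs from ECG-, face-, posture- and speech-based models
probs = [
    [0.2, 0.6, 0.2],  # ECG model
    [0.1, 0.7, 0.2],  # facial-video model
    [0.3, 0.5, 0.2],  # posture model
    [0.2, 0.5, 0.3],  # speech model
]
label, fused = fuse_decisions(probs)
print(label)  # fused averages [0.2, 0.575, 0.225] -> "fit (flow)"
```

In a real pipeline each probability vector would come from a model trained on that modality's features (e.g., SDNN and HF heart rate variability for ECG, MFCCs for speech); weighted or learned fusion rules are common alternatives to the plain average shown here.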