Computer Science
Artificial Intelligence
Human-Computer Interaction
Computer Vision
Multimedia
Authors
Ruchi Singh,E. Ramanujam,Naresh Babu Muppalaneni
Identifier
DOI:10.1177/18761364251315239
Abstract
Virtual education (online education or e-learning) is a form of education in which instruction is delivered primarily through digital platforms and the Internet. This approach offers flexibility and accessibility, making it attractive to many students, and many institutes also offer virtual professional courses for business and working professionals. However, ensuring the reach of courses and evaluating students' attentiveness present significant challenges for educators teaching virtually. Various approaches have been proposed to evaluate students' attentiveness using facial landmarks, facial expressions, eye movements, gestures, postures, and the like, but none supports real-time analysis and evaluation. This paper introduces a multi-modal student attentiveness detection (MMSAD) model designed to analyze and evaluate real-time class videos using two modalities: facial expressions and facial landmarks. Using a lightweight deep learning model, MMSAD recognizes students' emotions from facial expressions and detects when a person is speaking during an online class by examining lip movements derived from facial landmarks. The emotion-recognition component is evaluated on five benchmark datasets, achieving accuracy rates of 99.05% on extended Cohn-Kanade (CK+), 87.5% on RAF-DB, 78.12% on Facial Emotion Recognition-2013 (FER-2013), 98.50% on JAFFE, and 88.01% on KDEF. The speaking-detection component is evaluated on real-time class videos. The results from the two modalities are fused to predict attentiveness, categorizing students as either attentive or inattentive.
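The abstract describes combining two cues: an emotion label from facial expressions and a speaking signal from lip movements in facial landmarks. A minimal sketch of such a fusion is below; the landmark geometry (mouth aspect ratio), the thresholds, the set of "engaged" emotions, and the fusion rule are all illustrative assumptions, not the paper's actual method.

```python
import math
from typing import List, Tuple

Point = Tuple[float, float]

def mouth_aspect_ratio(top_lip: Point, bottom_lip: Point,
                       left_corner: Point, right_corner: Point) -> float:
    """Vertical mouth opening divided by mouth width (landmark-based)."""
    vertical = math.dist(top_lip, bottom_lip)
    horizontal = math.dist(left_corner, right_corner)
    return vertical / horizontal if horizontal else 0.0

def is_speaking(mar_sequence: List[float], open_thresh: float = 0.35,
                var_thresh: float = 0.003) -> bool:
    """Heuristic lip-movement test over a short frame window: the mouth
    opens past a threshold AND the opening fluctuates (talking rather
    than a single sustained opening such as a yawn)."""
    if not mar_sequence:
        return False
    mean = sum(mar_sequence) / len(mar_sequence)
    var = sum((m - mean) ** 2 for m in mar_sequence) / len(mar_sequence)
    return max(mar_sequence) > open_thresh and var > var_thresh

# Assumed mapping from expression labels to engagement; illustrative only.
ENGAGED_EMOTIONS = {"happy", "surprise", "neutral"}

def attentiveness(emotion: str, mar_sequence: List[float]) -> str:
    """Simple late fusion of the two modalities: attentive if the
    expression looks engaged or the student is speaking (e.g. answering)."""
    if emotion in ENGAGED_EMOTIONS or is_speaking(mar_sequence):
        return "attentive"
    return "inattentive"
```

In a real pipeline the mouth-aspect-ratio inputs would come from a face-landmark detector run per frame, and the emotion label from the expression classifier; here both are passed in directly to keep the sketch self-contained.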