Interpretability
Computer science
Cognition
Task (project management)
Artificial intelligence
Perspective (graphical)
Machine learning
Correlation
Artificial neural network
Natural language processing
Psychology
Geometry
Mathematics
Economics
Neuroscience
Management
Authors
Haowen Yang,Tianlong Qi,Jin Li,Longjiang Guo,Meirui Ren,Lichen Zhang,Xiaoming Wang
Identifier
DOI:10.1016/j.knosys.2022.109156
Abstract
Cognitive diagnosis is a fundamental task that supports personalized learning in education; it aims to discover learners' proficiency in knowledge concepts. Because cognitive diagnosis models play an important role in predicting learner performance and recommending personalized learning resources such as exercises, course videos, and course audio, they have received great attention from researchers. However, existing cognitive diagnosis models mostly start from the interactive perspective of learners' answers, ignoring the internal quantitative relationship between exercises and knowledge concepts. This study proposes a novel quantitative relationship-based explainable cognitive diagnosis model called QRCDM. First, learners' concept proficiency is defined based on their answers to objective and subjective questions. Correlation hypotheses are then proposed, covering both the explicit correlation between exercises and their corresponding knowledge concepts and the implicit correlation between exercises and the concepts they do not cover. Finally, based on these hypotheses, two contribution matrices relating exercises and knowledge concepts are calculated through a neural network designed in this study, which can predict a learner's concept proficiency and answer score. To reduce the impact of noisy data, learners' careless-mistake (slip) and guessing factors are also considered. In the experiments, the proposed QRCDM was compared with two classical models (DINA and FuzzyCDF) and three recent state-of-the-art models (DeepCDM, NeuralCDM, and RCD) on five real datasets, and the results on the majority of metrics show the effectiveness and interpretability of this work.
• A novel cognitive diagnosis model that can express quantitative relationships.
• The implicit relationship between exercises and knowledge concepts is mined.
• A support experiment was designed to verify the interpretability of the model.
• A retention-degree experiment was designed to verify the model's interpretability.
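The abstract does not give the model's equations, so the sketch below is only one plausible reading of the components it describes: a Q-matrix linking exercises to concepts, learned explicit weights on covered concepts, learned implicit weights on non-covered concepts, and a DINA-style slip/guess correction. All names (QRCDMSketch, W_explicit, W_implicit, the normalization step) are hypothetical assumptions for illustration, not the paper's actual architecture.

```python
# Minimal, illustrative sketch of the QRCDM idea from the abstract.
# The exact layer shapes and scoring rule are assumptions; the paper's
# real model may differ substantially.
import torch
import torch.nn as nn


class QRCDMSketch(nn.Module):
    def __init__(self, n_students, n_exercises, n_concepts, q_matrix):
        super().__init__()
        # Q-matrix: q_matrix[e, k] = 1 if exercise e covers concept k.
        self.register_buffer("Q", q_matrix.float())
        # Latent proficiency of each learner on each knowledge concept.
        self.proficiency = nn.Embedding(n_students, n_concepts)
        # Explicit contribution weights (exercise -> covered concepts) and
        # implicit weights (exercise -> non-covered concepts), both learned.
        self.W_explicit = nn.Parameter(torch.randn(n_exercises, n_concepts))
        self.W_implicit = nn.Parameter(torch.randn(n_exercises, n_concepts))
        # Per-exercise slip and guess parameters to absorb noisy answers.
        self.slip = nn.Parameter(torch.zeros(n_exercises))
        self.guess = nn.Parameter(torch.zeros(n_exercises))

    def forward(self, student_ids, exercise_ids):
        theta = torch.sigmoid(self.proficiency(student_ids))       # (B, K)
        q = self.Q[exercise_ids]                                   # (B, K)
        # Mask so explicit weights act only on covered concepts and
        # implicit weights only on the remaining (non-covered) ones.
        w_exp = torch.softmax(self.W_explicit[exercise_ids], -1) * q
        w_imp = torch.softmax(self.W_implicit[exercise_ids], -1) * (1 - q)
        w = w_exp + w_imp
        w = w / w.sum(-1, keepdim=True).clamp_min(1e-8)            # sum to 1
        # Weighted proficiency gives raw mastery of this exercise in (0, 1).
        mastery = (w * theta).sum(-1)                              # (B,)
        s = torch.sigmoid(self.slip[exercise_ids])
        g = torch.sigmoid(self.guess[exercise_ids])
        # Classic slip/guess correction, as in DINA-style models.
        return (1 - s) * mastery + g * (1 - mastery)
```

Under these assumptions, the model would be trained with a binary cross-entropy loss (e.g. `nn.BCELoss`) against observed correct/incorrect responses; after training, `torch.sigmoid(model.proficiency.weight)` yields per-concept proficiency estimates, which is where the interpretability claim would come from.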