Computer science
Transparency (behavior)
Trustworthiness
Tracing
Context (archaeology)
Construct (Python library)
Human–computer interaction
Knowledge management
Data science
Programming language
Computer security
Biology
Operating system
Paleontology
Authors
Yu Lu,Deliang Wang,Penghe Chen,Zhi Zhang
Identifier
DOI:10.1109/tlt.2024.3403135
Abstract
Amid the rapid evolution of artificial intelligence, the intricate model structures and opaque decision-making processes of AI-based systems have raised trustworthiness concerns in education. We therefore propose a novel three-layer knowledge tracing model designed to address trustworthiness in intelligent tutoring systems. Each layer is crafted to tackle a specific challenge: transparency, explainability, and accountability. We introduce an explainable artificial intelligence (xAI) approach to provide technical interpreting information, validated against established educational theories and principles. The validated interpreting information is then translated from its technical context into educational insights, which are incorporated into a newly designed user interface. Our evaluations indicate that an intelligent tutoring system equipped with the proposed trustworthy knowledge tracing model significantly enhances user trust and knowledge from the perspectives of both teachers and students. This study thus contributes a tangible solution that uses the xAI approach as an enabling technology for constructing trustworthy systems and tools in education.
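For readers unfamiliar with the term, "knowledge tracing" means estimating a student's evolving mastery of a skill from their sequence of answers. The sketch below shows the classic Bayesian Knowledge Tracing (BKT) update as a minimal illustration of the task; it is not the three-layer model proposed in this paper, and all parameter values are arbitrary defaults chosen for the example.

```python
def bkt_update(p_know, correct, p_slip=0.1, p_guess=0.2, p_learn=0.3):
    """One Bayesian Knowledge Tracing step.

    p_know:  prior probability the student has mastered the skill
    correct: whether the observed answer was correct
    p_slip:  probability of answering wrong despite mastery
    p_guess: probability of answering right without mastery
    p_learn: probability of acquiring the skill after this opportunity
    """
    # Bayes update of mastery given the observed response.
    if correct:
        num = p_know * (1 - p_slip)
        den = num + (1 - p_know) * p_guess
    else:
        num = p_know * p_slip
        den = num + (1 - p_know) * (1 - p_guess)
    posterior = num / den
    # Learning transition: an unmastered skill may become mastered.
    return posterior + (1 - posterior) * p_learn

# Trace estimated mastery across a short response sequence.
p = 0.3  # initial mastery estimate
for answered_correctly in [True, True, False, True]:
    p = bkt_update(p, answered_correctly)
```

A correct answer pushes the mastery estimate up and an incorrect one pulls it down, which is the per-step behavior any knowledge tracing model, including deep variants, must capture.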