DOI: 10.1007/978-3-031-11644-5_75
Abstract
Knowledge tracing refers to the dynamic assessment of a learner's mastery of skills. The self-attention mechanism has been widely adopted in knowledge tracing models in recent years, and these models consistently report performance gains over baseline knowledge tracing models on public datasets. However, why the self-attention mechanism works in knowledge tracing is unknown. This study argues that the ability to encode cases in which a learner attempts to answer the same item multiple times in a row (henceforth referred to as repeated attempts) is a significant reason why self-attention models outperform other deep knowledge tracing models. We present two experiments to support this argument, using context-aware attentive knowledge tracing (AKT) as our example self-attention model and dynamic key-value memory networks (DKVMN) and deep performance factors analysis (DPFA) as our baseline models. First, we show that removing repeated attempts from the datasets closes the performance gap between AKT and the baseline models. Second, we present DPFA+, an extension of DPFA that consumes manually crafted repeated-attempt features, and demonstrate that DPFA+ outperforms AKT across all datasets when these features are available.
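The abstract does not include the authors' preprocessing code. As a rough illustration only, the Python sketch below shows one plausible way to flag consecutive repeated attempts in an interaction log, drop them (in the spirit of the first experiment), and derive a simple repeat-count feature of the kind a model like DPFA+ might consume. The column names (learner_id, item_id, correct) and the pandas-based approach are assumptions, not details taken from the paper.

```python
import pandas as pd

# Hypothetical interaction log: one row per attempt, ordered by time within each learner.
log = pd.DataFrame({
    "learner_id": [1, 1, 1, 1, 2, 2, 2],
    "item_id":    [10, 10, 11, 10, 20, 21, 21],
    "correct":    [0, 1, 1, 1, 0, 0, 1],
})

# A "repeated attempt" is a row where the same learner answers the same item
# as in the immediately preceding row.
same_learner = log["learner_id"].eq(log["learner_id"].shift())
same_item = log["item_id"].eq(log["item_id"].shift())
log["is_repeat"] = same_learner & same_item

# Experiment-1-style preprocessing: drop repeated attempts, keeping only the
# first attempt of each consecutive run.
deduplicated = log[~log["is_repeat"]].copy()

# Experiment-2-style feature: position within the current run of consecutive
# attempts on the same item (0 for a first attempt, 1 for the first repeat, ...).
run_id = (~log["is_repeat"]).cumsum()
log["repeat_count"] = log.groupby(run_id).cumcount()

print(deduplicated)
print(log[["learner_id", "item_id", "repeat_count"]])
```

The actual feature engineering in DPFA+ is not specified here; this sketch only makes the notion of repeated attempts concrete.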