Writing assessment
Natural language processing
Psychology
Computer science
Linguistics
Mathematics education
Philosophy
Authors
Atsushi Mizumoto, Natsuko Shintani, Miyuki Sasaki, Mark Feng Teng
Source
Journal: Research Methods in Applied Linguistics
[Elsevier]
Date: 2024-05-24
Volume/Issue: 3 (2): 100116
Citations: 62
Identifier
DOI:10.1016/j.rmal.2024.100116
Abstract
This study explores the effectiveness of ChatGPT as a tool for evaluating linguistic accuracy in second language (L2) writing, situated within the complexity, accuracy, and fluency (CAF) framework. Using the Cambridge Learner Corpus First Certificate in English (CLC FCE) dataset, an error-tagged learner corpus, it compares ChatGPT's performance with that of human evaluators and Grammarly in assessing errors and accuracy rates across 232 writing samples. The findings indicate a strong correlation between ChatGPT's assessments and human accuracy ratings, demonstrating its precision in automated assessment. Compared with Grammarly, ChatGPT aligns more closely with human judgments and students' writing scores. Thus, ChatGPT is a potential tool for enhancing efficiency in L2 research and L2 writing pedagogy.