Argumentation
Cheating
Literacy
Psychology
Diversity (cybernetics)
Higher education
Focus (optics)
Mathematics education
Linguistics
Social psychology
Pedagogy
Computer science
Philosophy
Political science
Law
Physics
Optics
Artificial intelligence
Identifier
DOI:10.1177/07410883251328311
Abstract
ChatGPT has created considerable anxiety among teachers concerned that students might turn to large language models (LLMs) to write their assignments. Many of these models are able to produce grammatically accurate and coherent texts, potentially enabling cheating and undermining literacy and critical thinking skills. This study explores the extent to which LLMs can mimic human-produced texts by comparing essays written by ChatGPT with those written by students. Analyzing 145 essays from each group, we focus on how writers relate to their readers with respect to the positions they advance in their texts, examining the frequency and types of engagement markers. The findings reveal that the student essays are significantly richer in both the quantity and the variety of engagement features, producing a more interactive and persuasive discourse. The ChatGPT-generated essays exhibited fewer engagement markers, particularly questions and personal asides, indicating the model's limitations in building interactional arguments. We attribute the patterns in ChatGPT's output to the language data used to train the model and to its underlying statistical algorithms. The study suggests a number of pedagogical implications for incorporating ChatGPT into writing instruction.