Certainty
Psychology
Human intelligence
Generative grammar
Artificial intelligence
Engineering ethics
Computer science
Epistemology
Engineering
Philosophy
Identifier
DOI:10.1177/00986283241251855
Abstract
Introduction: Recent innovations in generative artificial intelligence (AI) technologies have led to an educational environment in which human authorship cannot be assumed, posing a significant challenge to upholding academic integrity.
Statement of the problem: Both humans and AI detection technologies have difficulty distinguishing between AI-generated and human-authored text. This weakness raises a significant possibility of false positive errors: human-authored writing incorrectly judged as AI-generated.
Literature review: AI detection methods, whether machine- or human-based, rely on characteristics of writing style. Empirical evidence demonstrates that AI detection technologies are more sensitive to AI-generated text than human judges are, yet a positive finding from these technologies cannot provide absolute certainty of AI plagiarism.
Teaching implications: Given the uncertainty of detecting AI, a forgiving, pro-growth response to AI academic integrity cases is recommended, such as revise-and-resubmit decisions.
Conclusion: Faculty should cautiously embrace the use of AI detection technologies with the understanding that false positive errors will occasionally occur. This use is ethical provided that responses to problematic cases are approached with the goal of educational growth rather than punishment.