Quality assessment
Reliability (semiconductor)
Reliability engineering
Quality (concept)
Translation (biology)
Computer science
Natural language processing
Engineering
Chemistry
Evaluation methods
Physics
Power (physics)
Biochemistry
Quantum mechanics
Messenger RNA
Gene
Authors
Sanjun Sun, Lulu Wang, Q. Zhang
Identifier
DOI:10.1075/tis.23035.sun
Abstract
This study examines the differences between paper- and computer-based translation quality assessment, focusing on score reliability, variability, scoring speed, and raters’ preferences. Using a within-subjects design, 27 raters assessed 29 translations presented in both handwritten and word-processed formats, employing a holistic scoring method. The findings reveal comparable translation quality ratings across both modes, with paper-based scoring showing greater inter-rater disagreement and being affected by handwriting legibility. Paper-based scoring was generally faster, though computer-based scoring demonstrated less variability in inter-rater reliability. Raters preferred paper-based scoring for its perceived speed, flexibility in annotating, and eye-friendliness. The study highlights the importance of comprehensive rater training and calibration to mitigate biases and non-uniform severity, as well as the adoption of detailed scoring rubrics to ensure consistent assessment across modes. The article offers insights on refining computer-based scoring systems, including enhancements to annotation functionality and ergonomic considerations.