Clarity
Computer science
Perspective (graphical)
Quality (philosophy)
Data science
Rigor
Psychology
Interpretation (philosophy)
Engineering ethics
Management science
Artificial intelligence
Epistemology
Engineering
Biochemistry
Chemistry
Philosophy
Programming language
Authors
Louis Anthony Cox, Terje Aven, Seth D. Guikema, Charles N. Haas, James H. Lambert, Karen Lowrie, George Maldonado, Felicia Wu
Abstract
Scientists, publishers, and journal editors are wondering how, whether, and to what extent artificial intelligence (AI) tools might soon help to advance the rigor, efficiency, and value of scientific peer review. Will AI provide timely, useful feedback that helps authors improve their manuscripts while avoiding the biases and inconsistencies of human reviewers? Or might it instead generate low-quality verbiage, add noise and errors, reinforce flawed reasoning, and erode trust in the review process? This perspective reports on evaluations of two experimental AI systems: (i) a “Screener” available at http://screener.riskanalysis.cloud/ that gives authors feedback on whether a draft paper (or abstract, proposal, etc.) appears to be a fit for the journal Risk Analysis, based on the journal's guidance to authors (https://www.sra.org/journal/what-makes-a-good-risk-analysis-article/); and (ii) a more ambitious “Reviewer” (http://aia1.moirai-solutions.com/) that gives substantive technical feedback and recommends how to improve the clarity of the methodology and the interpretation of results. The evaluations were conducted by a convenience sample of Risk Analysis Area Editors (AEs) and authors, including two authors of manuscripts in progress and four authors of papers that had already been published. The Screener was generally rated as useful and has been deployed at Risk Analysis since January 2025. The Reviewer, by contrast, received mixed ratings, ranging from strongly positive to strongly negative. This perspective describes both the lessons learned and potential next steps in making AI tools useful to authors prior to peer review by human experts.