Transparency (behavior)
Scope (computer science)
Computer science
Context (archaeology)
Process (computing)
Generative grammar
Artificial intelligence
Compromise
Quality (philosophy)
Epistemology
Data science
Management science
Sociology
Engineering
Philosophy
Computer security
Paleontology
Social science
Biology
Programming language
Operating system
Authors
Ojelanki Ngwenyama,Frantz Rowe
Abstract
In this paper, we revisit the issue of collaborating with artificial intelligence (AI) to conduct literature reviews and discuss whether this should be done and how it could be done. We also call for further reflection on the epistemic values at risk when using certain types of AI tools, based on machine learning or generative AI, at different stages of the review process, a process that often requires its scope to be redefined and that is fundamentally iterative. Although AI tools accelerate search and screening tasks, particularly when vast amounts of literature are involved, they may compromise quality, especially with respect to transparency and explainability. Expert systems are less likely to have a negative impact on these tasks. More broadly, any AI method should preserve researchers' ability to critically select, analyze, and interpret the literature.