Identifier
DOI:10.1093/9780198945215.003.0102
Abstract
The employment of AI technologies in various military applications, such as autonomous weapon systems (AWS) or AI-based decision-support systems in use-of-force decision-making, has inspired discussions about the need to address the risks associated with AI in warfare through global regulatory measures. This article reviews the state of global governance of AI in the military domain, arguing that it encompasses not only attempts to establish new international law on AWS—which have stalled—but also efforts to set international norms on the “responsible” development and use of AI in warfare via various state-led, interdisciplinary, and multistakeholder initiatives. First, the article takes stock of ongoing deliberations on new international law applying to weaponized AI and AWS, as well as the challenges of agreeing on whether novel legally binding measures are necessary. Second, it broadens the notion of global governance of AI in the military domain to consider norm-setting efforts on the (in)appropriate uses of AI in security and defense via initiatives such as the responsible AI framework, which is gaining prominence in governance debates. However, the responsible AI framework remains too ambiguous to address the legal, ethical, and security challenges of human–machine interaction in the military. The article therefore concludes by highlighting two ways to build upon and strengthen existing initiatives: the need for a more comprehensive approach toward the risks of AI, and the need to operationalize responsible AI principles in practice.