
Autonomous Weapon Systems and AI-enabled Decision Support Systems in Military Targeting: A Comparison and Recommended Policy Responses


A Comparison and Recommended Policy Responses

alexander blanchard and laura bruun

STOCKHOLM INTERNATIONAL PEACE RESEARCH INSTITUTE

SIPRI is an independent international institute dedicated to research into conflict, armaments, arms control and disarmament. Established in 1966, SIPRI provides data, analysis and recommendations, based on open sources, to policymakers, researchers, media and the interested public.

The Governing Board is not responsible for the views expressed in the publications of the Institute.

GOVERNING BOARD
Stefan Löfven, Chair (Sweden)
Dr Mohamed Ibn Chambas (Ghana)
Ambassador Chan Heng Chee (Singapore)
Dr Noha El-Mikawy (Egypt)
Jean-Marie Guéhenno (France)
Dr Radha Kumar (India)

DIRECTOR
Dan Smith (United Kingdom)

Signalistgatan 9
SE-169 70 Solna, Sweden
Telephone: +46 8 655 97 00
Email: sipri@sipri.org

AUTONOMOUS WEAPON SYSTEMS AND AI-ENABLED DECISION SUPPORT SYSTEMS IN MILITARY TARGETING
A Comparison and Recommended Policy Responses

alexander blanchard and laura bruun

June 2025

© SIPRI 2025

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system or transmitted, in any form or by any means, without the prior permission in writing of SIPRI or as expressly permitted by law.

Contents

Acknowledgements
Executive summary
1. Introduction
   Scope of this report
   Outline
2. Characterization
   Functionality of AWS and AI-DSS
   Impact on the decision to use force
   Scope of the targeting cycle
   Role in the execution of targeting decisions
   Figure 2.1. The scope of autonomous weapon systems and AI-enabled decision support systems in the phases of the targeting cycle
3. Risks of unintended harm
   Reliability issues
   Differences in human–machine interaction
4. Legal aspects
   What IHL requires from humans and permits of machines in the conduct of hostilities
   Legal responsibility and accountability
   Role in legal assessments
   Implications for Article 36 reviews
5. Policy responses
   Approach 1: Specifically include AI-DSS in existing multilateral efforts on AWS
   Approach 2: Establish a new process dedicated to AI-DSS
   Approach 3: Not take a specific approach to AI-DSS
Key findings
Recommendations
Figure 6.1. Differences between AI-enabled decision support systems and autonomous weapon systems for human–machine interaction, risk pathways and use of force

Acknowledgements

SIPRI and the authors express their sincere gratitude to the ministries for foreign affairs of the Netherlands, Sweden and Switzerland for their generous financial support for this project.

The authors also thank SIPRI colleagues Marta Bo, Vincent Boulanin, Netta Goussac, Dustin Lewis and Jules Palayer for their invaluable insights and support throughout the writing of this report. The authors are also grateful for comments, discussions and feedback provided at different stages of the project by Anna Andersson, Rupert Barrett-Taylor, Nehal Bhuta, Ingvild Bode, Jessica Dorsey, Martin Hagström, Sally Longworth,

Executive summary

The humanitarian and legal concerns raised by autonomous weapon systems (AWS) have long been the subject of international policy processes, and more recently of discussions on the military adoption of artificial intelligence (AI). Growing attention to the military use of AI-enabled decision support systems (AI-DSS) raises the need to consider how these systems fit within global policy conversations. This report compares the two.

Both AWS and AI-DSS used in military targeting affect the role of humans in targeting decisions. However, a key difference is their scope of use in the targeting cycle: AWS are limited to the mission execution phase, while AI-DSS are used more broadly across multiple phases. Still, the distinction between the two systems can blur in practice.

Both AWS and AI-DSS carry risks of unintended harm, but these risks emerge in different ways. While both systems share reliability issues arising from known technical limitations of autonomy and AI, their distinct forms of human–machine interaction can lead to different outcomes.
For AWS, which have a direct path between target identification and engagement, the risks are direct: a false target identification, for example, can lead straight to an unintended engagement.

Both AWS and AI-DSS raise questions about how much users are permitted to rely on these systems for fulfilling IHL obligations, and about how to ensure responsibility and accountability. However, they pose distinct legal challenges. For AWS, concerns stem from the autonomous use of force and whether users can reasonably foresee and control the system's effects. For AI-DSS, concerns arise from humans over-relying on AI outputs in their decision making.

These comparisons suggest three approaches available to policymakers navigating the current multilateral context regarding military AI: (a) specifically include AI-DSS in multilateral efforts on AWS; (b) establish a new process dedicated to AI-DSS; or (c) take no specific approach to AI-DSS. Each option comes with certain implications and trade-offs that policymakers must take into account. The substantive similarities and differences between the two systems should inform this choice.

Based on these findings, the report makes three recommendations. First, states should consider whether a dedicated multilateral process for AI-DSS is needed, recognizing the