AUTONOMOUS WEAPON SYSTEMS AND AI-ENABLED DECISION SUPPORT SYSTEMS IN MILITARY TARGETING
A Comparison and Recommended Policy Responses

Alexander Blanchard and Laura Bruun

June 2025
DOI: https://doi.org/10.55163/YQBY3151

STOCKHOLM INTERNATIONAL PEACE RESEARCH INSTITUTE

SIPRI is an independent international institute dedicated to research into conflict, armaments, arms control and disarmament. Established in 1966, SIPRI provides data, analysis and recommendations, based on open sources, to policymakers, researchers, media and the interested public. The Governing Board is not responsible for the views expressed in the publications of the Institute.

GOVERNING BOARD

Stefan Löfven, Chair (Sweden)
Dr Mohamed Ibn Chambas (Ghana)
Ambassador Chan Heng Chee (Singapore)
Dr Noha El-Mikawy (Egypt)
Jean-Marie Guéhenno (France)
Dr Radha Kumar (India)
Dr Patricia Lewis (Ireland/United Kingdom)
Dr Jessica Tuchman Mathews (United States)

DIRECTOR

Dan Smith (United Kingdom)

Signalistgatan 9, SE-169 70 Solna, Sweden
Telephone: +46 8 655 97 00
Email: sipri@sipri.org
Internet: www.sipri.org

Contents

Acknowledgements
Executive summary
1. Introduction
   Scope of this report
   Outline
2. Characterization
   Functionality of AWS and AI-DSS
   Impact on the decision to use force
   Scope of the targeting cycle
   Role in the execution of targeting decisions
   Policy implications
   Figure 2.1. The scope of autonomous weapon systems and AI-enabled decision support systems in the phases of the targeting cycle
3. Risks of unintended harm
   Reliability issues
   Differences in human–machine interaction
   Contextual variables
   Policy implications
4. Legal aspects
   What IHL requires from humans and permits of machines in the conduct of hostilities
   Legal responsibility and accountability
   Role in legal assessments
   Implications for Article 36 reviews
   Policy implications
5. Policy responses
   Approach 1: Specifically include AI-DSS in existing multilateral efforts on AWS
   Approach 2: Establish a new process dedicated to AI-DSS
   Approach 3: Not take a specific approach to AI-DSS
6. Key findings and recommendations
   Key findings
   Recommendations
   Figure 6.1. Differences between AI-enabled decision support systems and autonomous weapon systems for human–machine interaction, risk pathways and use of force
   Table 6.1. Autonomous weapon systems (AWS) and AI-enabled decision support systems (AI-DSS) compared in four key areas
About the authors

Acknowledgements

SIPRI and the authors express their sincere gratitude to the ministries for foreign affairs of the Netherlands, Sweden and Switzerland for their generous financial support for this publication.

The authors also thank SIPRI colleagues Marta Bo, Vincent Boulanin, Netta Goussac, Dustin Lewis and Jules Palayer for their invaluable insights and support throughout the writing of this report. The authors are also grateful for comments, discussions and feedback provided at different stages of the project by Anna Andersson, Rupert Barrett-Taylor, Nehal Bhuta, Ingvild Bode, Jessica Dorsey, Martin Hagström, Sally Longworth, Arthur Holland Michel and Anna Nadibaidze. Finally, the authors acknowledge the invaluable editorial work of the SIPRI Editorial Department.
Executive summary

The humanitarian and legal concerns raised by autonomous weapon systems (AWS) have long been the subject of international policy processes, and more recently of discussions on the military adoption of artificial intelligence (AI). Growing attention to the military use of AI-enabled decision support systems (AI-DSS) raises the need to consider how these systems fit within global policy conversations. This report compares AWS and AI-DSS for targeting in terms of their respective characterization, risk of unintended harm, legal aspects and policy responses. The report makes a number of key findings to inform policymakers on this issue.

Both AWS and AI-DSS used in military targeting impact the role of humans in targeting decisions. However, a key difference is their scope of use in the targeting cycle: AWS are limited to the mission execution phase, while AI-DSS are used more broadly across multiple phases. Still, the distinction between the two systems can blur in practice, depending on how they are deployed.

Both AWS and AI-DSS carry risks of unintended harm, but these risks emerge in different ways. While both systems share reliability issues arising from known technical limitations of autonomy and AI, their distinct forms of human–machine interaction can lead to different outcomes. For AWS, which have a direct path between target identification and engagement, the risks are direct: for example, a false target identification can result in immediate lethal action without human input. For AI-DSS, which provide outputs to humans, the risk is indirect; harm materializes if humans act upon that false target identification. Both AWS and AI-DSS raise questions about how much