A Framework for Resilient and Trustworthy Deployment
Jason Samarin (US Navy PEO C4I), Andrés Vega (M42)
NATO IST-HFM-225 Research Specialists Meeting

Jason Samarin: Principal Software Engineer, PEO C4I
Andrés Vega: Founder and CEO

"The root tenets of command and control are timeless—but they have been lost in the chase for new technologies. Commanders must exert exacting control over their forces to advance their plans if they are to defeat that future adversary who is multidimensional, well equipped, well trained, willing to fight, and intending to win."
- Rediscover the Art of Command and Control, Vice Admiral Robert F. Willard, USN

Effective C2 in Modern Warfare
• Command grants authority; Control directs action.
• C2 delivers timely, decisive moves.
• Commanders need insight beyond the front line.
• Goal: Real-time enemy picture and rapid response.
• Technology multiplies force through instant data and execution.

Use Cases for AI-Enabled Differentiation
• Real-time Situational Awareness: Live geospatial intelligence, blue force tracking
• Secure Data Exchange: Cross-classification data sharing, coalition interoperability
• Mission Collaboration: Secure real-time translation, chat, annotations
• Intelligent ISR Integration: Live drone and sensor data

Additional Tactical Capabilities:
• Advanced Targeting: AI-driven precision targeting
• Offline Sync: Persistent access to mission-critical data
• Dynamic Planning: Real-time mission adaptation
• Biometric Verification: Rapid identity management
• Threat Alerting: Immediate sensor-based threat detection

The Growing Risk of AI in C2
AI relies on sensitive data, expanding breach opportunities:
• Misused AI data can lead to theft, harm, and mission failures
• Breaches in AI systems allow lateral movement, amplifying damage
• Poor isolation in AI systems enables systemic misuse

AI Safety Levels and Risk
ASL-1: Low safety concerns, negligible impact of errors (e.g., recommendation systems, spell-check).
ASL-2: Moderate safety concerns, some regulatory or ethical considerations (e.g., financial forecasting, marketing tools).
ASL-3: High safety concerns, significant impact on lives or financial stability (e.g., autonomous vehicles, medical diagnostics).
ASL-4: Critical safety concerns, life-and-death or societal impact (e.g., predictive policing, AGI, autonomous weaponry).
ASL-5+: Not yet defined, as it is too far from present systems, but will likely involve qualitative escalations in catastrophic misuse potential and autonomy.
Source: Anthropic's Responsible Scaling Policy, introducing AI Safety Levels (ASL) to manage risks in advanced AI systems.

When threat modeling AI, the primary consideration is to assume potential compromise or poisoning of both the training data and the data provider. As defenders, we must create the ability to detect anomalous and malicious data entries, differentiate between them, and implement strategies for recovery.

Expanded Attack Surface
AI systems introduce unique challenges to security, expanding the attack surface in ways traditional systems do not:
1. Rapid Evolution: AI systems evolve rapidly, exposing new vulnerabilities.
2. Dynamic Threats: Susceptible to adversarial attacks, data drift, and model degradation.
3. Black Box Risk: Hard to detect poisoned data or malicious inputs.
4. Operational Weaknesses: Poor security enables prompt injection and model inversion attacks.
5. Mitigation Required: Tailored strategies are essential to counter AI-specific threats (one illustrative control is sketched below).
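As a concrete illustration of the "detect, differentiate, recover" posture described above, the sketch below screens a batch of incoming sensor records before they reach a model. This is a minimal sketch, not a fielded control: the SensorRecord schema, the MAD threshold, and the quarantine step are hypothetical stand-ins for whatever a real C2 data pipeline would enforce.

```python
"""Illustrative pre-ingestion screen for incoming sensor records.

Sketch only: the record schema, thresholds, and quarantine behaviour are
hypothetical stand-ins for whatever a real C2 data pipeline enforces.
"""
from dataclasses import dataclass
from statistics import median


@dataclass
class SensorRecord:
    source_id: str    # which sensor or data provider produced the record
    timestamp: float  # seconds since epoch
    value: float      # the reported measurement


def schema_valid(record: SensorRecord) -> bool:
    """Reject malformed inputs before they ever reach a model."""
    return bool(record.source_id) and record.timestamp > 0


def mad_outliers(records: list[SensorRecord], threshold: float = 6.0) -> list[SensorRecord]:
    """Flag records whose value deviates sharply from the batch median."""
    if not records:
        return []
    values = [r.value for r in records]
    med = median(values)
    mad = median(abs(v - med) for v in values) or 1e-9  # guard against zero deviation
    return [r for r in records if abs(r.value - med) / mad > threshold]


def screen_batch(records: list[SensorRecord]) -> tuple[list[SensorRecord], list[SensorRecord]]:
    """Split a batch into (accepted, quarantined) before model ingestion."""
    malformed = [r for r in records if not schema_valid(r)]
    candidates = [r for r in records if schema_valid(r)]
    suspicious = mad_outliers(candidates)
    accepted = [r for r in candidates if r not in suspicious]
    return accepted, malformed + suspicious


if __name__ == "__main__":
    batch = [SensorRecord("radar-01", 1_700_000_000 + i, 10.0 + 0.1 * i) for i in range(20)]
    batch.append(SensorRecord("radar-01", 1_700_000_021, 500.0))  # spoofed reading
    batch.append(SensorRecord("", -1.0, 12.0))                    # malformed record
    accepted, quarantined = screen_batch(batch)
    print(f"accepted={len(accepted)} quarantined={len(quarantined)}")
```

A robust statistic such as the median absolute deviation is used here because a single poisoned reading cannot shift the baseline the way it would shift a mean-based test; a real deployment would layer provenance checks and model-side monitoring on top of this kind of screen.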
This combination of evolving threats, opaque models, and operational weaknesses creates a critical security challenge, requiring tailored strategies to mitigate these risks.

Military AI Threat Landscape

Adversarial Threats
• Adversarial Inputs: Malicious signals deceiving ISR and targeting systems
• Data Poisoning: Corrupted sensor data degrading model reliability
• Model Theft: Extracting decision logic for adversary advantage

Command Integrity
• Securing information flows against unauthorized access
• Ensuring AI enhances commander situational awareness
• Preserving human judgment under degraded conditions

Operational Risks
• Black Box Risk: Difficulty detecting subtle malicious inputs
• Data Drift: Models failing under real-world battlefield variations
• Prompt Injection: Manipulated inputs distorting intelligence outputs

Key Questions for an AI Security Review

Data Integrity & Poisoning
• How do you detect tampered data? (a minimal fingerprinting sketch follows these questions)
• Is input data validated and documented?

Anomaly Detection & Recovery
• How do you trace accuracy declines?
• How are invalid or malformed inputs managed?
• Can silent output errors be detected?

Training Data Security
• How are model-data links secured?
• Can data sources alert to compromise?
• Is sensitivity of data assessed and cataloged?

Adversarial Resilience
• Is training robust against adversarial inputs?
• How quickly can you recover or revert models?
• Can th
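Several of these review questions ("How do you detect tampered data?", "How are model-data links secured?") share a common first step: record a cryptographic fingerprint of every training artifact at ingest time and refuse to train or deploy when the fingerprints no longer match. The sketch below shows one minimal way to do that; the manifest layout, file names, and directory structure are assumptions made for illustration, not part of the presented framework.

```python
"""Illustrative integrity manifest for training-data artifacts.

Sketch only: the manifest layout and file names are assumptions; the point
is that tampering with a recorded artifact becomes detectable before the
data reaches a training run.
"""
import hashlib
import json
import tempfile
from pathlib import Path


def fingerprint(path: Path) -> str:
    """SHA-256 digest of a file, streamed so large datasets are handled."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def write_manifest(data_dir: Path, manifest: Path) -> None:
    """Record the trusted fingerprint of every artifact at ingest time."""
    entries = {p.name: fingerprint(p) for p in sorted(data_dir.iterdir()) if p.is_file()}
    manifest.write_text(json.dumps(entries, indent=2))


def verify_manifest(data_dir: Path, manifest: Path) -> list[str]:
    """Return the names of artifacts that changed, vanished, or appeared."""
    recorded = json.loads(manifest.read_text())
    current = {p.name: fingerprint(p) for p in sorted(data_dir.iterdir()) if p.is_file()}
    changed = [name for name, digest in recorded.items() if current.get(name) != digest]
    added = [name for name in current if name not in recorded]
    return changed + added


if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as tmp:
        data_dir = Path(tmp) / "training_data"
        data_dir.mkdir()
        (data_dir / "tracks.csv").write_text("id,lat,lon\n1,54.3,18.6\n")
        manifest = Path(tmp) / "manifest.json"  # in practice, signed and stored separately
        write_manifest(data_dir, manifest)
        (data_dir / "tracks.csv").write_text("id,lat,lon\n1,0.0,0.0\n")  # simulate tampering
        print("tampered artifacts:", verify_manifest(data_dir, manifest))
```

In practice the manifest itself would be signed and stored separately from the data, so an adversary able to alter an artifact cannot also rewrite the record of what that artifact should look like; the same pattern extends to model checkpoints, which supports the "recover or revert models" question above.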