Roland Berger: AI Risk Mitigation Framework



Introduction

AUTHORS: Ashok Kaul, Senior Partner; Nizar Hneini, Senior Partner; Rizwanur Rahman, Senior Project Manager

The rapid adoption of artificial intelligence (AI) technology has exposed organizations to a wide spectrum of risks, from privacy breaches to operational failures. Recent global cases have highlighted the dire consequences of poor AI governance, leading to …

In this paper we show how organizations can build an AI Risk Mitigation Framework: an integrated approach that helps organizations proactively identify, assess and mitigate AI-related risks.

Our approach is anchored in internationally recognized AI management systems

Any effective AI risk management framework must be built on the backbone of a robust artificial intelligence management system (AIMS). As defined in ISO/IEC 42001, an AIMS is a structured system that helps organizations manage AI …

From a risk management perspective, AIMS provide a foundation for identifying and mitigating risk across the entire AI lifecycle, from data sourcing and model development through deployment, monitoring and retirement. By doing so, they …

AIMS are particularly necessary today due to the rise of sophisticated generative artificial intelligence (GenAI), which both amplifies existing risks and introduces new ones, especially in areas such as deepfake technology and content authenticity. …

"The real test of progress will not be how fast we build intelligent machines, but how …"

Scope, context and criteria

The first step for organizations is to establish the foundation for the AI Risk Mitigation Framework: its scope, context and criteria. Defining the scope involves identifying which specific AI systems, business functions or organizational units the risk …

Risk assessment

Once the foundation for the framework is established, the next step is to conduct a …

Risk identification: The company proactively pinpoints potential risks across the entire AI lifecycle, from data sourcing to model deployment.
Risks can be of four types: data risks, such as …

Risk analysis: The company then assesses each of these risks along two dimensions: their severity and their likelihood. Here, we recommend using the Human Rights, Democracy and …

Risk evaluation: Finally, the organization must make strategic decisions, comparing the calculated risk levels against the established criteria, and on that basis classifying each risk as …

[Figure: Common AI risks by risk category and type]

Risk treatment

The next step in the process is deciding how to treat each risk. This involves …

Mitigate: Introduce safeguards to reduce the risk. For example, the company addresses the risk of its chatbot giving incorrect regulatory advice (that is, hallucinating) by implementing a "human-in-the-loop" …

Avoid: Modify or abandon high-risk use cases. Given the high stakes involved in providing legal or medical guidance, the company …

Transfer: Shift the risk to a third party. The company partners with a specialized AI vendor, making them contractually liable for data breaches, and purchases cybersecurity insurance to cover potential …

Accept: Proceed with a risk at a residual level if justified. For low-stakes internal tasks, such as summarizing meeting notes, the company …

Three further activities support the entire AI Risk Mitigation Framework and occur in parallel with the other steps.

The first is communication and consultation. Effective … approach to communicating and consulting with all relevant stakeholders. This process should be continuous, ensuring that methods are timely, stakeholder-…

The second is recording and reporting. Maintaining accountability and transparency is crucial. Recording and reporting involve establishing a documented process to …

"Every sector that adopts AI inherits its power and its peril. Risk mitigation isn't a cost of innovation …"

Finally, the company must engage in monitoring and review.
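As a rough illustration of the assessment and treatment steps described above, the severity-likelihood scoring and the choice among the four treatment options can be sketched in code. This is a minimal sketch only: the 1-to-5 scales, the score thresholds and the level-to-treatment mapping below are assumed for demonstration and are not taken from the paper or from ISO/IEC 42001.

```python
# Illustrative risk-matrix sketch. All scales and thresholds are assumptions
# made for this example, not the paper's or ISO/IEC 42001's actual criteria.
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    category: str      # e.g. a risk category such as "data" or "model"
    severity: int      # assumed scale: 1 (negligible) .. 5 (critical)
    likelihood: int    # assumed scale: 1 (rare) .. 5 (almost certain)

    @property
    def score(self) -> int:
        # Simple severity x likelihood product, a common risk-matrix heuristic.
        return self.severity * self.likelihood

def classify(risk: Risk) -> str:
    # Hypothetical criteria mapping a score to a risk level.
    if risk.score >= 15:
        return "high"
    if risk.score >= 8:
        return "medium"
    return "low"

def suggest_treatment(risk: Risk) -> str:
    # Maps the risk level to the four treatment options the paper describes
    # (mitigate, avoid, transfer, accept); the mapping itself is assumed.
    level = classify(risk)
    if level == "high":
        return "avoid or transfer"
    if level == "medium":
        return "mitigate"
    return "accept"

hallucination = Risk("Chatbot gives incorrect regulatory advice", "model", 4, 3)
print(classify(hallucination), "->", suggest_treatment(hallucination))
```

Under these assumed thresholds, the chatbot-hallucination example scores 12 (medium) and is routed to mitigation, consistent with the human-in-the-loop safeguard described above, while a low-stakes meeting-notes summarizer would be accepted.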
This is an ongoing process that involves tracking the effectiveness of the implemented risk treatments, detecting emerging risks, such as new vulnerabilities in GenAI, and adapting …

Case study: AI risk management at Unique AI

This case study illustrates the practical application and benefits of implementing an AIMS based on ISO/IEC 42001, the international standard for AI management. We …

Prior to their certification journey, Unique AI faced several significant hurdles in preparing for ISO/IEC 42001. One major challenge was the novelty of the standard, …

Internally, the company had to address substantial governance gaps. While they had existing controls from other standards, such as ISO 27001, ISO 9001 and SOC 2, these needed to be adapted and tailored specifically for AIMS compliance. The company also lacked tailored processes for evaluating AI-specific risks and impacts, a crucial …

By successfully implementing an AIMS and securing ISO/IEC 42001 certification, Unique AI realized significant benefits, transforming their AI governance from a …

- Up to a 75 percent reduction in manual documentation effort, thanks to streamlined …
- Complete visibility …