The Business Case for Responsible AI

Ritu Jyoti, Group Vice President/General Manager, Worldwide Artificial Intelligence, Automation, Data and Analytics Research Practice, IDC
Dave Schubmehl, Research Vice President, Conversational Artificial Intelligence and Intelligent Knowledge Discovery, IDC

Table of Contents

Executive Summary
Key Findings from the Survey
AI Adoption
Important Use Cases
Advice and Recommendations
Conclusion
Appendix 1: Supplemental Data
About the IDC Analysts

Executive Summary

Every organization needs to be responsible at the core in the AI era, as this helps the organization accelerate realization of the benefits of AI. A responsible-at-the-core organization has the following foundational elements:

• Core values and governance: It defines and articulates a responsible AI (RAI) mission and principles, supported by the C-suite, while establishing a clear governance structure across the organization that builds confidence and trust in AI technologies.

• Risk management and compliance: It strengthens compliance with stated principles and current laws and regulations while monitoring future ones, develops policies to mitigate risk, and operationalizes those policies through a risk management framework with regular reporting and monitoring.

• Technologies: It uses tools and techniques to support principles such as fairness, explainability, robustness, accountability, and privacy, and builds these into AI systems and platforms.

• Workforce: It empowers leadership to elevate RAI as a critical business imperative and provides all employees with training to give them a clear understanding of responsible AI principles and how to translate these into actions. Training the broader workforce is paramount for ensuring RAI adoption.
The purpose of this paper is to provide information and evidence that a responsible AI approach fosters innovation by aligning AI deployment with organizational standards and societal expectations, resulting in sustainable value for organizations and their customers.

Introduction

According to IDC's February 2024 Worldwide Semiannual Artificial Intelligence Systems Spending Guide, Version 1, which tracks AI software, hardware, and services across industries and use cases, enterprises worldwide are expected to invest $232 billion in AI solutions in 2024.

AI solutions are transforming a diverse range of industries, from finance and manufacturing to agriculture and healthcare, by enhancing operations and reshaping the nature of work. Enterprises' application of generative AI (GenAI), which is rapidly unfolding, can revolutionize customer experiences, boost employee productivity, enhance creativity and content creation, and accelerate process optimization.

However, AI also creates real risks and unintended consequences. AI systems can inadvertently perpetuate or amplify societal biases due to biased training data or algorithmic design. Because AI systems are often trained on large amounts of data collected from various sources, their outputs may also raise copyright infringement concerns. AI hallucinations are incorrect or misleading results that AI models generate. These errors can be caused by a variety of factors, including insufficient training data, incorrect assumptions made by the model, lack of context, or biases in the data used to train the model. A lack of grounding can thus cause the model to generate outputs that, while seemingly plausible, are factually incorrect, irrelevant, or nonsensical, further eroding trust.

From Risk to Reward: The Business Case for Responsible AI

As AI technologies become increasingly sophisticated, the security risks associated with their use and the potential for misuse also increase.
For example, hackers and other bad actors can control GenAI foundation model output by poisoning the grounding data. Or they could use prompt injection attacks that disguise malicious instructions as user inputs, tricking the large language model (LLM) into overriding developer instructions with the goal of manipulating the model to produce a desired response. Jailbreaking, a technique that attempts to bypass or subvert the safety filters and restrictions built into LLMs, is also popular with bad actors.

According to IDC's March 2024 Microsoft — Responsible AI Survey (n = 2,309) (sponsored by Microsoft), which gathered insights on organizational attitudes and the state of responsible AI, 91% of respondents are currently using AI technology at their organization and expect more than 24% improvement in customer experience, business resilience, sustainability, and operational efficiency because of AI in 2024. Respondents who use responsible AI solutions say that it has helped with data privacy, customer experience, confident business decisions, brand reputation, and trust.

AI brings not only unprecedented opportunities to businesses but also an incredible responsibility. To ensure trust and fairness with their customers and stakeholders, as well as adhere to emerging governmental regulations (e.g., the EU AI Act), organization