Foreword

With the rapid acceleration in the pace of AI adoption, it has never been more vital for organizations to ensure that they are executing their transformation based upon clear ethical guidelines. AI has the potential to transform our world in incredible ways, but if implemented without a framework, and the associated risk management processes, it has an equal chance of causing significant, even irreparable, harm. At Capgemini, we have had the framework that is outlined in the following pages in place for many years, with a robust and proven Code of Ethics for AI at its heart.

Every organization's code of ethics principles should be unique to them, reflecting their values and broader societal and cultural norms. With that in mind, the AI Futures Lab team have created this thorough and practical guide to help you develop or validate the right code of ethics principles for AI for your organization, as well as guidance on how to establish the broader risk management and governance processes.

These steps will help you to move forward with your business transformation with confidence, ensuring that together we are delivering the future we all want.

Anne-Violaine Monnié

Where to start?

Ethical conversations can be complicated. They depend on cultural, societal, personal, and organizational values. There is no such thing as a one-size-fits-all set of ethics principles for organizations. While other reports may try to provide you with a ready-made set of generic AI ethics principles, this guide will instead provide you with a toolkit to create a set of AI ethics principles that are unique to your organization. It will also provide you with key insights on the responsibilities and processes that will be needed to ensure they have the desired impact.

Monika Byrtek
AI Philosopher, AI Futures Lab, Capgemini

Ethics is not a concept born in the 21st century. For centuries, people have explored and debated moral frameworks to guide their lives, societies, and innovations. With that perspective, the following pages present a timeline highlighting key moments in this ongoing journey.

James Wilson
AI Ethicist, AI Futures Lab, Capgemini

Table of contents

The shifting AI landscape
What is an AI ethicist and why do you need them?
Establishing the right AI ethics principles
Considering bias and fairness
Testing the principles

A necessity, not a luxury

The increasing need to adopt enterprise-scale AI within organizations has amplified ethical concerns. Amongst these concerns are dilemmas that could have far-reaching consequences for organizations, customers, and even society as a whole. This practical guide intends to provide a perspective on the foundational steps required to implement an effective governance framework for the ethical challenges organizations may encounter during and after their AI transformation.

While headlines often dramatize AI risks, the real threats are often subtle, such as biased credit algorithms that use gender to determine credit limits. Without early intervention, these ethical risks can rapidly turn into serious legal and reputational consequences. As generative and agentic AI enter mainstream usage, the need for organizations to embed agile, ethical governance to mitigate evolving challenges becomes more apparent. These innovations add substantial complexity in terms of maintaining adequate explainability and control. This need is only further amplified when you consider emerging risks from innovations like embodied AI and quantum computing.
The shifting AI landscape

Before the launch of ChatGPT in November 2022, enterprise-scale AI focused on narrow, well-controlled use cases that were overseen by qualified individuals and teams who could validate its outputs. For example, computer vision-based diagnostic X-ray tools would be monitored by qualified radiographers. Though ethical concerns existed, the impact of AI was relatively non-intrusive to most people's lives.

A timeline of ethics in humanity

The Code of Hammurabi is one of the earliest intact sets of ethical/legal principles.
The concept that all virtues are from knowledge.

Thought experiment: Imagine a story about AI causing harm across society appears on the front page of your local newspaper or news site. How would you feel if your company was implicated in that article? Now turn this scenario around and ask yourself: how would you feel if you were the victim of unethical AI-augmented actions?

Everything changed with the launch of ChatGPT. AI became widely accessible, and frontier model providers and hyperscalers began releasing increasingly powerful tools which required complex oversight. Early missteps were mostly harmless (e.g. chatbots selling cars for $1 or giving advice on how to glue cheese to pizza), but as capabilities grew, so did the risks. In 2024, a companion chatbot was directly implicated in at least one teenage suicide, while another encouraged a failed assassination attempt. The risk profile for future AI adoption is only compounded further by embodied AI an