Risk & Resilience Practice

Deploying agentic AI with safety and security

Autonomous AI agents present a new world of opportunity—and an array of novel and complex risks and vulnerabilities that require attention and action now.

Business leaders are rushing to embrace agentic AI, and it’s easy to understand why. Autonomous and goal-driven, agentic AI systems are able to reason, plan, act, and adapt without human oversight—powerful new capabilities that could help organizations capture the potential unleashed by gen AI by radically reinventing the way they operate. A growing number of organizations are now exploring or deploying agentic AI systems, which are projected to help unlock $2.6 trillion to $4.4 trillion annually in value across more than 60 gen AI use cases.

But while agentic AI has the potential to deliver immense value, the technology also presents an array of new risks—introducing vulnerabilities that could disrupt operations, compromise sensitive data, or erode customer trust. Not only do AI agents provide new external entry points for would-be attackers, but because they are able to make decisions without human oversight, they also introduce novel internal risks. In cybersecurity terms, you might think of AI agents as “digital insiders”—entities that operate within systems with varying levels of privilege and access.

It is up to technology leaders—including chief information officers (CIOs), chief risk officers (CROs), chief information security officers (CISOs), and data protection officers (DPOs)—to develop a thorough understanding of the emerging risks associated with AI agents and agentic workforces and to proactively ensure secure and compliant adoption of the technology. (A review of early agentic AI deployments highlights six key lessons—from reimagining workflows to embedding observability—that can help organizations avoid some common pitfalls as they scale.)

Emerging risks in the agentic era

By operating autonomously and automating tasks traditionally performed by human employees, agentic AI adds an additional dimension to the risk landscape. The key shift is a move from systems that enable interactions to systems that drive transactions that directly affect business processes and outcomes. This shift intensifies the challenges around core security principles, as the following risk patterns illustrate:

—Chained vulnerabilities. A flaw in one agent cascades across tasks to other agents, amplifying its impact.
Example: Due to a logic error, a credit data processing agent misclassifies short-term debt as income, inflating the applicant’s financial profile. This incorrect output flows downstream to the credit scoring and loan approval agents, leading to an unjustifiably high score and a risky loan approval.

—Cross-agent task escalation. Malicious agents exploit trust mechanisms to gain unauthorized access.
Example: A compromised scheduling agent in a healthcare system requests patient records from a clinical-data agent, falsely escalating the task as coming from a licensed physician. The agent then releases sensitive health data, resulting in unauthorized access and a breach of patient privacy.

—Synthetic-identity risk. Adversaries forge or impersonate agent identities to bypass trust controls (see the request-verification sketch at the end of this section).
Example: An attacker forges the digital identity of a claims processing agent and submits a synthetic request to access insurance claim histories. Trusting the spoofed agent’s credentials, the system grants access, exposing sensitive policyholder data without detection.

—Untraceable data leakage. Autonomous agents exchanging data without oversight obscure how sensitive information flows, making leaks difficult to detect and trace.
Example: An autonomous customer support agent shares transaction history with an external fraud detection agent to resolve a query but also includes unneeded personally identifiable information.

—Data corruption propagation. Low-quality data silently affects decisions across agents.
Example: In the pharmaceutical industry, a data labeling agent incorrectly tags a batch of clinical-trial results. This flawed data is then used by efficacy analysis and regulatory reporting agents, leading to distorted trial outcomes and potentially unsafe drug approval decisions.

Such errors threaten to erode faith in the business processes and decisions that agentic systems are designed to automate, undermining whatever efficiency gains they deliver. Fortunately, this is not inevitable. Agentic AI can deliver on its potential, but only if the principles of safety and security are built in from the start.
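As one concrete illustration of what building those principles in can look like, the sketch below shows how a platform team might authenticate cross-agent requests and log every exchange, which speaks to the synthetic-identity and untraceable-leakage patterns above. It is a minimal, assumption-laden sketch in Python rather than a reference implementation: the agent names, the SHARED_KEYS store, and the handle_request routine are invented for illustration, and a production system would rely on the organization’s existing identity and secrets infrastructure.

```python
# Minimal sketch: authenticate cross-agent requests and log every data exchange.
# All names here (AgentRequest, SHARED_KEYS, handle_request) are illustrative and
# are not part of any specific agent framework.
import hashlib
import hmac
import json
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("agent_audit")

# Hypothetical per-agent signing keys, provisioned and rotated by the platform's IAM layer.
SHARED_KEYS = {
    "scheduling-agent": b"example-key-1",
    "claims-agent": b"example-key-2",
}

@dataclass
class AgentRequest:
    sender_id: str   # identity claimed by the calling agent
    payload: str     # JSON-encoded task description
    signature: str   # hex-encoded HMAC-SHA256 over the payload

def is_authentic(request: AgentRequest) -> bool:
    """Accept only requests whose signature matches the claimed sender's registered key."""
    key = SHARED_KEYS.get(request.sender_id)
    if key is None:
        return False  # unknown or forged identity
    expected = hmac.new(key, request.payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, request.signature)

def handle_request(request: AgentRequest) -> str:
    # Log the exchange before acting on it so data flows remain traceable,
    # including attempts that are ultimately rejected.
    audit_log.info("request from %s: %s", request.sender_id, request.payload)
    if not is_authentic(request):
        audit_log.warning("rejected request with unverifiable identity %s", request.sender_id)
        return "rejected"
    task = json.loads(request.payload)
    # ... route the task to the receiving agent under least-privilege scopes ...
    return f"accepted '{task['action']}' from {request.sender_id}"

# Example: a correctly signed request is accepted; a spoofed one is rejected and logged.
payload = json.dumps({"action": "fetch_claim_history", "claim_id": "C-123"})
good_sig = hmac.new(SHARED_KEYS["claims-agent"], payload.encode(), hashlib.sha256).hexdigest()
print(handle_request(AgentRequest("claims-agent", payload, good_sig)))
print(handle_request(AgentRequest("claims-agent", payload, "forged-signature")))
```

Logging the request before verifying it is a deliberate choice in this sketch: even forged or rejected requests leave an audit trail, which is exactly what the untraceable-leakage pattern lacks.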
Guiding principles for agentic AI security

To adopt agentic AI securely, organizations can take a structured, layered approach. Below, we provide a practical road map that outlines the key questions technology leaders should ask to assess readiness, mitigate risks, and promote confident adoption of agentic systems. The questions are grouped by stage of adoption.

Prior to agentic deployment

Before an organization begins using autonomous agents, it should ensure that it has the necessary safeguards, risk management practices, and governance in place for a secure, compliant rollout.

—Does our AI policy framework address agentic systems and their unique risks? Answering this question starts with upgrading existing AI policies, standards, and processes—such as identity and access management.
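To make that policy question more tangible, the sketch below shows one way identity and access management concepts could be extended to agent identities, so that each agent is granted only the data scopes and actions it needs. The policy schema, agent names, and scopes are assumptions invented for illustration; real deployments would build on the organization’s existing IAM tooling and approval workflows.

```python
# Minimal sketch: a least-privilege access policy keyed by agent identity.
# The schema, agent names, and scopes are illustrative assumptions, not a standard.
AGENT_POLICY = {
    "customer-support-agent": {
        "data_scopes": {"transaction_history"},
        "actions": {"read"},
    },
    "fraud-detection-agent": {
        "data_scopes": {"transaction_history", "fraud_signals"},
        "actions": {"read", "flag"},
    },
}

def is_permitted(agent_id: str, action: str, data_scope: str) -> bool:
    """Allow an action only if the agent's policy explicitly grants the action and the scope."""
    policy = AGENT_POLICY.get(agent_id)
    if policy is None:
        return False  # unknown agents are denied by default
    return action in policy["actions"] and data_scope in policy["data_scopes"]

# The support agent may read transaction history but not customer PII, so a request
# to forward PII to an external agent is denied by policy rather than by luck.
assert is_permitted("customer-support-agent", "read", "transaction_history")
assert not is_permitted("customer-support-agent", "read", "customer_pii")
```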