
AI Agents at Work: The CLO Perspective

Information Technology · 2025-10-29 · Workday

Table of Contents
- Introduction
- Why Adoption Is Racing
- The Boundaries of Trust
- Defining Guardrails for Adoption

Introduction

AI agents are no longer a distant possibility. They are here, entering arenas where accountability cannot be easily automated, such as contract review and management, compliance, and governance.

This ebook draws on global research commissioned by Workday and conducted by Hanover Research, with nearly 3,000 decision-makers surveyed across North America, EMEA, and APAC. It highlights not only where adoption is advancing, but also the steps CLOs must take before that point: understanding use cases and anticipating risks.

And make no mistake, adoption is advancing quickly. 82% of organizations are expanding their use of AI agents, and 88% of leaders expect them to ease workloads by boosting productivity. As agentic adoption accelerates, CLOs must ensure progress does not outrun responsible oversight.

Why adoption is racing

The desire for greater productivity is driving agentic adoption. 88% of leaders say it is the primary draw, and 56% expect to see ROI within the first year. Finance leaders also point to workforce pressure, with 76% saying agents could help offset a persistent shortage of skilled professionals.

This acceleration creates pressure for CLOs. As peers and boards view agents as solutions to talent gaps and cost demands, legal may be asked to validate adoption faster than it can responsibly be governed.

The trust data shows why restraint is essential. In compliance, risk, and governance:
- 41% of respondents trust humans more than AI.
- 37% of respondents trust humans and AI equally.
- The remaining respondents trust AI more than humans.

Enterprise adoption may be speeding ahead, but confidence in sensitive domains has not caught up. Adoption is racing ahead, and CLOs are expected not only to respond to this gap but to provide responsible guidance through it.

The boundaries of trust

Adoption brings a second question: how big a role should agents play once they are implemented?
The survey makes the swim lanes clear from a human perspective. 75% of employees are comfortable working alongside agentic colleagues as helpers; far fewer are comfortable being managed by them. For CLOs, that is the line between agentic colleagues as helpers and agents as authorities.

As one respondent put it, "Employees are happy to use them as a tool but don't want them acting as managers or decision-makers." That distinction matters.

For CLOs, there is some relief in this boundary. If employees resist agents as managers, legal leaders are spared from defending the riskiest scenarios: AI acting as boss, allocator of budgets, or decision-maker over people.

Yet an agent-as-colleague status does not remove the need for oversight. Agents' contributions must be framed, monitored, and governed to ensure accountability stays with humans.

That oversight is not work legal can do alone. CLOs will need to partner with IT on technical controls, with HR on workforce policies, and with compliance and audit teams on monitoring and reporting. By leading stakeholder collaboration, CLOs can define how agentic colleagues operate across the enterprise.

Defining guardrails for adoption

Barriers to adoption. Adoption cannot advance in legal without safeguards. The survey makes this clear: 44% of respondents cite ethics and governance, including bias, data privacy, and legal challenges, as top barriers to adoption.

For CLOs, these numbers highlight the oversight challenge. Agentic AI promises efficiency, but without guardrails the risks multiply quickly. A biased output, a privacy breach, or a misapplied recommendation can expose the enterprise to legal and reputational harm.

Guardrails do not slow adoption; they make it defensible. The task is not to slow adoption but to set terms for it. Guardrails begin with governance frameworks that specify where agentic colleagues can operate, what data they can access, and how their outputs are validated.
They extend to transparency as well.

The responsibility for setting these guardrails does not fall on CLOs alone. Legal's focus is on policy: defining boundaries, validation requirements, and documentation standards. But those policies have to be put into practice by partners across the enterprise, above all IT.

For example, IT may need to require audit logs, mandate human validation of high-stakes outputs, or ensure escalation paths exist when agent recommendations touch sensitive areas. Collaboration with IT turns broad policies into enforceable controls.

CLO action plan: Setting guardrails for agentic AI
- Define boundaries. Draft clear policies that state where agentic colleagues can operate within legal workflows.
- Work with IT. Align security safeguards with legal's policy requirements to ensure defensible controls.
- Keep humans in charge. Mandate human validation for outputs that carry legal, financial, or reputational impact.
- Make accountability visible. Require audit trails that boards, regulators, and counterparties can trust.

For CLOs, the measure of adoption is not speed, but whether trust can stand behind it.

Where legal can lean in

If agentic colleagues are not meant to manage people, the natural question is where they can contribute.