
AI Cybersecurity: Building Resilience in a New Risk Environment

Information Technology | 2025-12-12 | Deloitte

Introduction

Artificial intelligence (AI) is reshaping the cybersecurity landscape across Asia Pacific (AP).

Even before the rise of advanced AI, the increasing digitisation of business operations had already made cyber-attacks more frequent, scalable and effective. AI is now amplifying this trend by enabling malicious actors to work more quickly and produce more convincing and adaptive attacks. For example, AI can help generate persuasive phishing messages and deepfakes, analyse systems to identify weaknesses, and adjust attack methods in real time. This lowers the barrier for attackers and increases both the speed and potential impact of a cyber incident.

As organisations adopt AI across core processes, the attack surface is also expanding. AI introduces new systems and data flows into technology architecture, including model training environments, automated decision workflows and large-scale data pipelines. These components can create additional points where vulnerabilities may arise. Further, the AI systems themselves are also subject to attack. Adversaries may try to corrupt the data used to train models, influence or distort model outputs, or exploit weaknesses in how the systems interpret and respond to user inputs.

These threats create clear business risks. AI-related cyber incidents can cause financial losses, compromise intellectual property, distort critical decision outputs, expose sensitive customer data, and erode organisational reputation and stakeholder trust. Therefore, as AI adoption grows, it is critical that these risks are assessed and managed as part of a wider cyber defence strategy.

However, despite the potential risks, AI also provides opportunities to strengthen cybersecurity. AI-enabled tools can help organisations detect issues, improve the security of software and systems, and respond to incidents more quickly and consistently. Firms that combine these capabilities with strong governance and proportionate controls will be better positioned to manage the evolving cyber threat landscape.

Cybersecurity is now firmly a Board-level responsibility. The additional risks introduced by AI make strong oversight, clear lines of accountability, and Board fluency in AI technology essential. These capabilities are needed not only to protect critical operations and meet regulatory obligations, but also to maintain customer and stakeholder trust.

This paper examines how AI is impacting cybersecurity risk, how supervisors in AP are responding, and what organisations can do to build stronger and more resilient defences. It outlines key attack vectors, emerging regulatory expectations, and practical steps for Boards and senior executives to bolster their firm's cyber resilience.

Whilst this paper focuses on AI security considerations, it is important for firms to take a holistic view and address all AI-related risks when developing their technology strategy and AI systems.

AI Security vs. AI Safety

For the purposes of this report, we define AI security as the protections that keep AI systems resilient against attacks and misuse. This includes defending against adversarial inputs, tampered data, stolen models, and attempts to manipulate or extract model outputs. We distinguish this from AI safety, which concerns how an AI system behaves, such as its accuracy, reliability, fairness, and alignment with intended goals. In practice, these two domains often overlap. Weak safety, such as a model that is brittle, poorly calibrated, or prone to hallucination, can create openings that attackers can exploit. Conversely, a security failure, like compromised training data or manipulated content, can degrade safety by changing a system's behaviour and eroding trust in its outputs.
This paper focuses on the cybersecurity risks associated with AI systems, while recognising that these risks can affect broader safety outcomes and vice versa.

AI Cybersecurity Risks

As organisations begin to adopt and scale AI, malicious actors are evolving to target these systems. Some techniques, such as prompt injections, jailbreaks and model extraction, are relatively new and arise from the way AI models process data and instructions. Others, including supply chain compromise or the exploitation of vulnerable components, build on long-standing cyber-attack methods. Nevertheless, the impacts are amplified by AI's reliance on external models, open source

Supply Chain and Third-Party Risks

As highlighted above, third-party relationships and extended supply chains are a major source of cyber and AI-related vulnerability, particularly for firms in complex vendor ecosystems. Many incidents now stem from vendors and the AI capabilities embedded in the software and services they provide. As firms connect more tools and data pipelines, they can also be susceptible to weaknesses across this extended ecosystem. In practice, a comp