
AI Safety in Health Systems in 2026: A White Paper on Strengthening Infrastructure and Risk Management Practices

Healthcare & Biotechnology | 2026-04-05 | Duke University

Cameron Joyce, MPA, Duke-Margolis Institute for Health Policy
Nicoleta J. Economou, PhD, Duke Health AI Evaluation and Governance
Christina Silcox, PhD, Duke-Margolis Institute for Health Policy

The authors would like to thank several individuals for their contributions to this white paper. First, we thank the participants of our expert workshop, who are listed at the end of the paper, for sharing their expertise and experiences, as well as the multiple other health system representatives and policy influencers who held individual informational calls.

The Robert J. Margolis, MD, Institute for Health Policy at Duke University is directed by Mark McClellan and brings together expertise from the Washington, DC, policy community, Duke University, and Duke Health to address the most pressing issues in health policy. The mission of Duke-Margolis is to improve health, health equity, and the value of health care through practical, innovative, and evidence-based policy solutions. Duke-Margolis catalyzes Duke University's

Duke Health is an academic health system advancing care through clinical excellence, education, and research. Its network of hospitals, clinics, and specialty centers serves diverse populations across North Carolina and beyond. Duke Health emphasizes innovation, fairness, and continuous improvement, with a focus on precision medicine, AI-enabled care, and population health. Through the Duke Health AI Evaluation and Governance Program, Duke Health works to promote the responsible and trustworthy use of AI within clinical care and beyond. The program integrates

OVERVIEW

As clinical artificial intelligence (AI) tools are deployed at increasing scale across health care settings, health systems and regulators face growing challenges to understand and manage the potential patient safety risks. Emerging AI safety risks are difficult to detect through existing patient safety monitoring systems.
Traditionally, safety events are often identified by clinicians, patients, or family members, with patients and families typically serving as early detectors of harm. However, clinical AI tools used in health systems are often largely invisible to patients, which makes patient reporting of AI safety events more challenging. As such, effective system-

A patient safety event is commonly defined as any process, action, or omission that causes or has the potential to cause patient harm, meaning unintended physical or emotional injury resulting from or contributed to by medical care that requires additional monitoring, treatment, or hospitalization, or that results in death. The World Health Organization and American Medical Association note that patient safety events should include errors,

INTRODUCTION TO LIFECYCLE-BASED RISK MANAGEMENT

While AI tools can provide value to clinicians and patients, the complexity of AI technology introduces additional risks compared to traditional clinical tools, including the risk of performance drift and bias. In addition, clinical tools that rely on large language models (LLMs) also introduce the risk of hallucinations and overreliance by the user, often due to the authority with which outputs

Health systems have ethical and legal obligations to identify and manage risks that threaten patient safety. In addition to traditional safety reporting/issue management and documentation, proactive risk management of clinical AI should include workflow integration, training, and monitoring that takes individual model performance,

While there is much written on AI governance and general risk management, there are no widely adopted organizational practices and procedures to ensure patient safety when clinical AI tools are used. Also, less explicit discussion exists for patient safety-specific risk mitigation, monitoring, and event reporting involving clinical AI tools. At this time, many health systems lack sufficient expertise for Health IT.
For example, the FDA regulates AI that qualifies as software as a medical device (SaMD), including certain clinical decision support (CDS) tools, with oversight spanning pre-market review and select post-market changes. However, many tools like administrative or

Without intentional planning and investments, resource constraints and limited access to staff with AI expertise risk widening existing resource disparities between health

The current regulatory framework for clinical AI tools is fragmented and often confusing. Oversight is distributed across multiple federal agencies rather than centralized in a single authority. Depending on a tool's function and

Risk Management Frameworks

While not specific to health, the NIST Artificial Intelligence Risk Management Framework and Playbook state that AI trustworthiness includes validity, reliability, security, resilience, accountability, transparency, explainability, integration, privacy protection, and fairness with bias mitigation. It defines risk as both technical and socio-technical, emphasizing context. Systems should establish governance, define the use case, assess