Demanding More from AI Governance in Drug Safety

By Marie Flanagan, Regulatory and AI Governance Lead, Safety Technologies, IQVIA

Table of contents
- The regulatory climate: where do we stand?
- The challenges: what are we up against?
- The solutions: how do we safely move the dial from traditional to generative AI?
- The partnerships: what questions should we be asking?
- The horizon: what's stopping you?
- References
- About the authors
- About IQVIA

Artificial Intelligence (AI) usage continues to expand in the pharmaceutical space, with many drug companies considering how to implement the technology to increase efficiency and enhance decision making across the lifecycle of a drug. At the same time, Marketing Authorization Holders (MAHs) must ensure compliance amid an evolving regulatory landscape. Regulators have encouraged the use of AI in drug development and manufacturing, but their guidance remains largely in draft form and is subject to change as new insights come to light.

In 2025, the U.S. government issued two memoranda, M-25-21 and M-25-22, directed toward federal agencies, which firmly established the country's pro-innovation stance on AI. These documents made it clear that AI adoption is encouraged, provided it is supported by robust, risk-based assessments. Japan has echoed a similar position, emphasizing innovation while maintaining a strong focus on risk management. The message is clear: regulatory clarity is not a prerequisite for responsibility. Organizations are encouraged to move forward confidently, provided they adhere to the guiding principles and frameworks already established.

The regulatory climate: where do we stand?

As of January 2025, 69 countries had proposed or implemented more than 1,000 AI-related policy initiatives and legal frameworks.¹ Published in 2024, the EU AI Act addresses healthcare and, within it, high-risk AI systems; it is the world's first and arguably most influential comprehensive legal framework for AI. It sets a global benchmark by adopting a risk-based approach and imposing obligations proportionate to the potential impact of AI systems.

Despite calls from major European tech leaders to delay its rollout, the European Commission has made it unequivocally clear: there will be no pause, no grace period, and no delay in the implementation of the EU AI Act. The first set of rules came into force in February 2025, and key obligations for high-risk AI systems, i.e., those deemed to pose significant risks to fundamental rights, health, or safety, will become binding from August 2, 2026. At the same time, the Commission has emphasized that organizations do not need to wait for detailed prescriptions to act. Companies are expected to proactively align with the intent of the Act, and failure to do so could result in significant penalties: up to €35 million or 7% of global annual turnover.

Furthering the cause, the European Medicines Agency (EMA), the U.S. Food and Drug Administration (FDA), and the U.K. Medicines and Healthcare products Regulatory Agency (MHRA) have all released draft frameworks. The Council for International Organizations of Medical Sciences (CIOMS) Working Group XIV is developing a framework, built on current regulatory guidelines, that focuses on the use of AI in pharmacovigilance (PV) and drug safety. MAHs can use this existing draft framework as a practical bridge between the EU AI Act and U.S. guidance to prepare for and maintain compliance and monitoring of AI solutions for patient safety.

Adopting AI, however, is more than a technology upgrade; it is a cultural shift.
Without reimagining your operations to accommodate AI, your business model will struggle to reach its potential and, at worst, will fail.

The challenges: what are we up against?

Successfully implementing AI across drug safety and PV processes requires overcoming several obstacles. First, there is the issue of trust. Generative AI solutions employ Large Language Models (LLMs), whose black-box nature results in an inherent lack of transparency and control. It is difficult to explain how an LLM produces the results it does, why it makes the decisions it makes, or where its information comes from. For most MAHs, establishing trust and confidence in an AI model that lacks full explainability is disconcerting, and though validating a model and enacting controls is a major step toward increasing trust (a minimal sketch of such controls follows below), this inherent risk presents the first hurdle for many.

Second, implementing AI in drug safety practices requires multi-disciplinary collaboration among developers, data analysts, computer scientists, PV experts, and business operations to guarantee that the AI is working as intended before, during, and after deployment. PV and safety use cases evolve over time; thus, AI demands continuous human oversight with cross-functional expertise, necessitating at least some AI fluency across skillsets.

Finally, given the relative newness of AI implementations in drug safety and the lack of formal guidelines, MAHs likely do not have existing knowledge of how to validate dynamic AI. Common questions might include: How do we incorporate mode
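To make "validating a model" and "continuous human oversight" concrete, the sketch below illustrates two such controls in Python: re-validating an AI seriousness classifier against a human-adjudicated reference set, and routing low-confidence live outputs to a human reviewer. This is a minimal sketch under stated assumptions only; the names (Prediction, validate, route, REVIEW_THRESHOLD) and the threshold value are hypothetical and do not come from any regulatory framework or vendor API.

from dataclasses import dataclass

@dataclass
class Prediction:
    is_serious: bool   # model's seriousness call for an adverse-event case
    confidence: float  # model-reported confidence, from 0.0 to 1.0

def validate(preds: list[Prediction], gold: list[bool]) -> dict:
    """Compare model output with human-adjudicated labels; rerun on every
    model or prompt change, and on a fixed schedule thereafter."""
    tp = sum(p.is_serious and g for p, g in zip(preds, gold))
    fp = sum(p.is_serious and not g for p, g in zip(preds, gold))
    fn = sum(not p.is_serious and g for p, g in zip(preds, gold))
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return {"precision": precision, "recall": recall}

# Human-in-the-loop control: outputs below the threshold are never
# auto-accepted; a person stays accountable for ambiguous safety calls.
REVIEW_THRESHOLD = 0.90  # hypothetical value; would need formal justification

def route(pred: Prediction) -> str:
    return "auto-accept" if pred.confidence >= REVIEW_THRESHOLD else "human review"

In a PV context, recall on serious cases would typically be weighted more heavily than precision, since a missed serious adverse event is costlier than an unnecessary human review; whatever metrics and thresholds are chosen would need to be documented and justified as part of the validation record.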