AI Summary
September 2025

Self-introduction
Years of experience: 20+
Current position: Director of Smart Manufacturing, Merck Group
Work experience: previously with Eli Lilly, GlaxoSmithKline and other companies; with Merck since January 2017
Education: B.Sc. in Communication Technology, Shanghai International Studies University

Table of contents
1. AI Regulations in the EU 2025: The Artificial Intelligence Act (AI Act)
2. Practical Aspects of AI Implementations in Pharmaceutical Manufacturing: EudraLex Annex 22 Guidelines
3. AI in Drug Manufacturing: Applications and Benefits
4. Ensuring Pharmaceutical Product Quality with Artificial Intelligence
6. Summary

AI Software in the Pharma Industry (Overseas Regulatory Standards)

1. AI Regulations in the EU 2025: The Artificial Intelligence Act (AI Act)

The Artificial Intelligence Act (AI Act), whose first provisions became applicable in February 2025, represents the world's first comprehensive legal framework for AI, aiming to build trust in the technology across Europe. It introduces a risk-based approach, classifying AI systems by their level of potential threat.

The AI Act entered into force on August 1, 2024, with its requirements being phased in gradually. The first provisions, including the ban on "unacceptable-risk" practices and the "AI literacy" requirement, apply from February 2, 2025. Obligations for providers of general-purpose AI models (GPAI) apply from August 2, 2025, although finalization of the GPAI Code of Practice was delayed until August 2025. Most provisions, including those concerning "high-risk" systems, will apply by August 2, 2026, with further implementation phases extending to summer 2027. This staggered timeline creates a complex and dynamic regulatory compliance landscape.

The AI Act defines four levels of risk: unacceptable, high, limited, and minimal. Unacceptable-risk systems are strictly prohibited because they pose a clear threat to safety and fundamental rights (e.g., subliminal manipulation, social scoring, untargeted scraping of facial images). Provisions on penalties for violations of Article 5 come into force on August 2, 2025.

High-risk systems are those that pose a significant risk to health, safety, or fundamental rights. This includes AI used as a safety component of products covered by EU harmonisation legislation (e.g., in medicine) or AI listed in Annex III, unless it does not pose a significant risk. High-risk systems are subject to stringent obligations, such as adequate risk assessment, high data quality, activity logging, detailed documentation, clear user information, human oversight, and a high level of robustness, cybersecurity, and accuracy (a minimal logging sketch illustrating two of these obligations follows at the end of this section).

A further key requirement of the AI Act is "AI literacy." From February 2, 2025, providers and users of AI systems must ensure their personnel possess a "sufficient level of AI literacy." This requirement applies to all AI systems, not just high-risk ones, and includes the ability to assess legal and ethical implications and to interpret results critically.

The AI Act is a horizontal framework designed to coexist with sectoral law. Clarity is still needed on the extent to which its general principles will govern the use of AI by pharmaceutical companies, especially in the context of high-risk systems. The European Medicines Agency (EMA) and the Heads of Medicines Agencies (HMA) are actively working on their own guidelines for AI across the medicinal product lifecycle, indicating the need for sector-specific guidance.
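The activity-logging and human-oversight obligations above are stated in the AI Act at the level of outcomes, not implementations. As a minimal, purely illustrative sketch in Python, a prediction call might be wrapped so that every inference leaves a traceable record and low-confidence results are routed to a human reviewer; the callable interface, version string, and confidence threshold are assumptions made for the example, not requirements taken from the Act.

import hashlib
import json
import logging
from datetime import datetime, timezone

# Illustrative only: names, threshold, and interfaces are assumptions for this example.
logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_audit_trail")

MODEL_VERSION = "defect-classifier-1.0.0"  # hypothetical identifier of the deployed model
REVIEW_THRESHOLD = 0.90                    # hypothetical confidence limit below which a human reviews

def predict_with_audit(predict_fn, record: dict) -> dict:
    """Run one prediction via the supplied callable, assumed to return (label, confidence);
    write an audit-trail entry; and flag low-confidence results for human oversight.
    `record` must be JSON-serializable for the hashing step below."""
    label, confidence = predict_fn(record)
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": MODEL_VERSION,
        "input_sha256": hashlib.sha256(
            json.dumps(record, sort_keys=True).encode("utf-8")
        ).hexdigest(),
        "prediction": label,
        "confidence": confidence,
        "human_review_required": confidence < REVIEW_THRESHOLD,
    }
    audit_log.info(json.dumps(entry))  # in practice: a secured, append-only audit store
    return entry

Hashing the input rather than storing it verbatim is one way to keep the trail traceable without duplicating potentially sensitive production data; whether such a scheme satisfies a given data-integrity expectation would have to be assessed case by case.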
2. Practical Aspects of AI Implementations in Pharmaceutical Manufacturing: EudraLex Annex 22 Guidelines

Within the general framework of the AI Act, the pharmaceutical sector receives more detailed guidance through the update of EudraLex Volume 4. Of crucial importance are the revised Annex 11 on computerised systems and the entirely new Annex 22 dedicated to artificial intelligence.

Revised Annex 11 (Computerised Systems) strengthens the requirements for managing the lifecycle of computerised systems, emphasising the comprehensive application of Quality Risk Management (QRM) principles at all stages. Controls related to ensuring data integrity, audit trails, electronic signatures, and system security have been clarified.

The new Annex 22 (Artificial Intelligence) establishes specific requirements for the use of AI and machine learning in the manufacture of active substances and medicinal products.

Scope of application: Annex 22 applies to computerised systems in which AI models are used in critical applications, i.e., those with a direct impact on patient safety, product quality, or data integrity, for example data prediction or classification. This specifically concerns machine learning (AI/ML) models that gain their functionality through training on data.

Key limitations and exclusions: Annex 22 has very precise limitations. It applies exclusively to static models (non-adaptive during use) and deterministic models (identical inputs always yield identical outputs). Dynamic models (which learn continuously) and probabilistic models (identical inputs may not yield identical results) should not be used in critical GMP applications. Furthermore, generative AI and Large Language Models (LLMs) are explicitly excluded from critical GMP applications.
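The static/deterministic restriction lends itself to an operational check: a locked model, given the same version-controlled test inputs, must reproduce identical outputs across repeated runs. The sketch below is a hypothetical illustration of such a check; predict_fn, test_inputs, and the hash-based comparison are assumptions for the example and are not prescribed by Annex 22.

import hashlib
import json

# Illustrative only: Annex 22 does not prescribe this check; names and approach are assumptions.

def output_fingerprint(outputs) -> str:
    """Serialize a list of model outputs and hash it so two runs can be compared exactly.
    Outputs must be JSON-serializable for this simple comparison to work."""
    return hashlib.sha256(
        json.dumps(outputs, sort_keys=True).encode("utf-8")
    ).hexdigest()

def check_determinism(predict_fn, test_inputs, runs: int = 3) -> bool:
    """Return True if repeated inference on identical inputs yields identical outputs.

    predict_fn  -- inference entry point of the locked (static) model
    test_inputs -- a fixed, version-controlled set of test records
    runs        -- number of repeated inference passes to compare
    """
    reference = output_fingerprint([predict_fn(x) for x in test_inputs])
    for _ in range(runs - 1):
        if output_fingerprint([predict_fn(x) for x in test_inputs]) != reference:
            return False  # identical inputs did not yield identical outputs
    return True

A dynamic or probabilistic model would fail such a check by design, which is one practical way to see why Annex 22 keeps those model types out of critical GMP applications.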