Progress Report
Published in February 2026

Foreword: The opportunity

If 2024 was defined by building out the foundations for an AI future, 2025 marked AI's shift into a helpful, proactive partner, capable of reasoning and navigating the world with users. As models grow even more sophisticated, we see users and businesses around the globe transitioning from exploration to integration, finding new ways to put these tools to work in their daily lives. From foundational advances in scientific discovery…

Since we started publishing these reports, our responsible AI development approach has continued to mature and is now fully embedded within our product development and research lifecycles. In 2025, as models became more capable, personalized, and multi-modal, we relied upon robust…

This report describes our approach to AI governance, and focuses in particular on agentic and frontier risks from increasingly sophisticated models. In such a dynamic environment, it also shows how our systems are built to detect and then adapt to emerging risks. Whether we are hardening agentic systems against adversarial…

Responsibility is not only about stopping bad outcomes. It is also about enabling broad access to these tools for the maximum benefit of people and society. By striking the right balance, we can ensure that AI is used to tackle existential challenges…

Building trust in these tools requires deep partnership with governments, academics, and civil society. We will continue to vigorously collaborate to set standards for this remarkable era. As AI advances, we'll continue to iterate.

This work continues to be guided by our AI Principles, which we updated last year to reflect our latest understanding of the opportunities and risks.

Laurie Richardson
Vice President, Trust & Safety, Google

Helen King
Vice President, Responsibility, Google DeepMind

How we…

The foundations for AI-driven innovation are systems that are developed and deployed responsibly from the start.
We are bold in our ambition to deliver the economic and societal benefits of the AI era: benefits that can unlock opportunity for communities… products, including our most advanced AI systems that leverage personal intelligence and agentic AI.

A multi-layered approach to AI governance

We employ a multi-layered approach to AI governance that combines human expertise, user feedback, and automated systems that help scale our work to…

Research. We take a research-driven approach to AI risk and governance. This includes identifying current and emerging risks associated with our models and products across new modalities and form factors.

Policies and Frameworks. We develop rigorous AI policies and guidelines, such as our content safety policies and Prohibited Use Policy, that are designed to prevent potentially harmful outputs and misuse of our products. Developed with internal and external experts, these protections guide multi-modal outputs…

Mitigation. We proactively mitigate risks through both supervised fine-tuning and reinforcement learning to ensure models are aligned with our content safety policies. Additionally, we deploy out-of-model mitigations, such as safety filters and conditional system instructions, to provide additional layers of protection by identifying, filtering out, or steering model output away from harmful or inappropriate content.

Launch Review and Reporting. Before launching a model or product, we evaluate a wide array of risks to determine whether our safety guardrails appropriately mitigate those risks or if additional protections are needed. Our AI launches undergo expert reviews…

Monitoring and Enforcement. We use a combination of automated systems and human reviews to engage in continuous post-launch monitoring to improve our AI models and products, and to detect activity and behavior that suggests misuse of our consumer products.

Governance Forums. Our multi-layered process includes launch reviews for both frontier models and applications developed using these models.
Our model launches are reviewed at Google DeepMind's Launch Review forum, which approves model releases, and our Futures Council, which consists of members of Google's senior management and Alphabet's Board of Directors. Building on our AI Principles, the Council provides perspectives and recommendations to our Board and management team on long-term opportunities and risks.

Testing. We take a comprehensive approach to stress-testing our systems against our policies and frameworks. Our testing includes both scaled…

Case study: Responsible AI

Our multi-layered approach to responsible AI governance is designed to adapt to each unique innovation. Our most recent launches, including Gemini 3, our latest Frontier Safety Framework, and our progress in emerging AI fields such as agentic AI and personal intelligence…

Gemini 3: our most secure model yet

The framework contains a set of protocols designed to identify and mitigate severe risks from frontier AI models, such as cyberattacks and CBRN risks. We conducted rigorous testing to assess model alignment with our policies and frameworks. We applied these insights to deploy targeted mitigations…