Executive Summary

Our vision of trustworthy AGI development

The field of Artificial Intelligence (AI) is rapidly advancing, with systems increasingly performing at or above human levels across various domains. These breakthroughs offer unprecedented opportunities to address humanity's greatest challenges, from scientific discoveries and improved healthcare to enhanced economic productivity. However, this rapid progress also introduces unprecedented risks. As advanced AI development and deployment outpace crucial safety measures, the need for robust risk management has never been more critical.

Shanghai Artificial Intelligence Laboratory is an advanced research institute focusing on AI research and application. Working in concert with universities and industry, we explore the future of AI by conducting original and forward-looking scientific research that makes fundamental contributions to basic theory as well as innovations in various technological fields. We strive to become a top-tier global AI laboratory, committed to the safe and beneficial development of AI. To proactively navigate these challenges and foster a global "race to the top" in AI safety, we have proposed the AI45° Law [1], a roadmap to trustworthy AGI.

Introducing our Frontier AI Risk Management Framework

In July 2025, Shanghai AI Laboratory, in collaboration with Concordia AI,¹ released the Frontier AI Risk Management Framework v1.0 (the "Framework"). We proposed a robust set of protocols designed to empower general-purpose AI developers with comprehensive guidelines for proactively identifying, assessing, mitigating, and governing a set of severe AI risks that pose threats to public safety and national security, thereby safeguarding individuals and society.

This framework serves as a guideline for general-purpose AI (GPAI) model developers to manage the potential severe risks from their general-purpose AI models.
This framework aligns with standards and best practices in the risk management of safety-critical industries. It encompasses six interconnected stages: risk identification, risk thresholds, risk analysis, risk evaluation, risk mitigation, and risk governance (see Framework Overview).

Evolution to Version 1.5

In February 2026, we were proud to release Version 1.5 of the Framework. Key updates in the new version include:

• Expanded loss-of-control content: To better implement the core principles of "ensuring ultimate human control" and "proactive prevention and response" to guard against AI technology getting out of control,² we refined the loss-of-control risk scenarios and thresholds; we also strengthened agent oversight protocols and emergency response mechanisms, aiming to provide guidance to help academia and industry continuously monitor these risks.

• Operationalizing risk analysis: To make the Framework more operational, we have updated the risk analysis guidance for GPAI model providers. By clarifying the essential modules of this process, such as model evaluation, elicitation, risk modeling, and estimates, we aim to make it easier for developers to practically implement risk analysis best practices (see Section 3, Risk Analysis).

• Enhanced interoperability: We have mapped our risk management measures against leading international and domestic AI risk management guidance, specifically China's national TC260 AI Safety Governance Framework 2.0 and the EU Code of Practice for General-Purpose AI Models (Safety and Security Chapter). This helps developers adopt safety measures shared by major domestic and international regulatory guidance (see Appendix I and Appendix II).

AI safety as a global public good

As one of the first non-profit AI laboratories to propose a comprehensive framework of this kind, we firmly believe that AI safety is a global public good [3, 4].
This framework represents our current understanding and recommended approach for anticipating and addressing severe AI risks. We call on frontier AI developers, policymakers, and stakeholders to adopt AI risk management frameworks. As AI capabilities continue to advance rapidly, collective action today is essential to ensure that transformative AI benefits humanity while avoiding catastrophic risks. We invite collaboration on framework implementation and commit to sharing our learnings openly. Truly effective societal risk mitigation will only be achieved when critical organizations adopt and implement similar levels of protection. The stakes are too high, and the potential benefits too great, for anything less than our most coordinated and comprehensive response.

Contributions and Acknowledgement

July 2025 Version

Scientific Director: Zhou Bowen
Lead Authors: Brian Tse†, Fang Liang*, Xu Jia*, Duan Yawen*, Shao Jing*
Contributors: Zhang Jie, Liu Dongrui, Wang Weibing, Cheng Yuan, Yu Yi, Guo Jiaxuan, Lu Chaochao

† First author. * Equal contributions.

February 2026 Updates

Contributors: Duan Yawen, Fang Liang, Xu Jia, Shao Jing, Brian Tse, Zhang Jie, Wang Weibing, Hu Xia

Acknowledgement

T