
International AI Safety Report: First Key Update


First Key Update: Capabilities and Risk Implications

Contributors

Nicole Wheeler, Advanced Research and Invention Agency (ARIA)
Daniel Privitera (Interim Lead 2026), KIRA Center
Sören Mindermann (Interim Lead 2026), independent

Chair
Prof. Yoshua Bengio, Université de Montréal / LawZero / Mila - Quebec AI Institute

Expert Advisory Panel
The Expert Advisory Panel is an international advisory body that advises the Chair on the content of the Report. The Expert Advisory Panel provided technical feedback only. The Report – and its Expert Advisory Panel – does not endorse any particular policy or regulatory approach.

Senior Advisers
Daron Acemoglu, Massachusetts Institute of Technology
Thomas G. Dietterich, Oregon State University
Fredrik Heintz, Linköping University
Geoffrey Hinton, University of Toronto
Nick Jennings, Loughborough University
Susan Leavy, University College Dublin
Teresa Ludermir, Federal University of Pernambuco
Vidushi Marda, AI Collaborative
Helen Margetts, University of Oxford
John McDermid, University of York
Jane Munga, Carnegie Endowment for International Peace
Arvind Narayanan, Princeton University
Alondra Nelson, Institute for Advanced Study
Clara Neppel, IEEE
Sarvapali D. (Gopal) Ramchurn, Responsible AI UK
Stuart Russell, University of California, Berkeley
Marietje Schaake, Stanford University
Bernhard Schölkopf, ELLIS Institute Tübingen
Alvaro Soto, Pontificia Universidad Católica de Chile
Lee Tiedrich, University of Maryland/Duke
Gaël Varoquaux, Inria
Andrew Yao, Tsinghua University
Ya-Qin Zhang, Tsinghua University

The Panel comprises representatives from 30 countries, the United Nations (UN), European Union (EU), and the Organisation for Economic Co-operation and Development (OECD). Please find here – internationalaisafetyreport.org/expert-advisory-panel – the membership of the Expert Advisory Panel to the 2026 International AI Safety Report.
Lead Writers
Stephen Clare
Carina Prunkl

Writing Group
Maksym Andriushchenko, ELLIS Institute Tübingen
Ben Bucknall, University of Oxford
Philip Fox, KIRA Center
Tiancheng Hu, University of Cambridge
Cameron Jones, Stony Brook University
Sam Manning, Centre for the Governance of AI
Nestor Maslej, Stanford University
Vasilios Mavroudis, The Alan Turing Institute
Conor McGlynn, Harvard University
Malcolm Murray, SaferAI
Shalaleh Rismani, Mila - Quebec AI Institute
Charlotte Stix, Apollo Research
Lucia Velasco, Maastricht University

Secretariat
UK AI Security Institute: Lambrini Das, Claire Dennis, Arianna Dini, Freya Hempleman, Samuel Kenny, Patrick King, Hannah Merchant, Jamie-Day Rawal, Rose Woolhouse
Mila - Quebec AI Institute: Jonathan Barry, Marc-Antoine Guérard, Claire Latendresse, Cassidy MacNeil, Benjamin Prud'homme

Acknowledgements
The Secretariat and writing team appreciated the support, comments and feedback from Jean-Stanislas Denain, Marius Hobbhahn, José Hernández-Orallo, Vera Liao, and Ray Perrault, as well as the assistance with quality control and formatting of citations by José Luis León Medina and copyediting by Amber Ace.

© Crown owned 2025

Disclaimer
This publication is licensed under the terms of the Open Government Licence v3.0 except where otherwise stated. To view this licence, visit https://www.nationalarchives.gov.uk/doc/open-government-licence/version/3/ or write to the Information Policy Team, The National Archives, Kew, London TW9 4DU, or email: psi@nationalarchives.gsi.gov.uk.

The report does not represent the views of the Chair, any particular individual in the writing or advisory groups, nor any of the governments that have supported its development. This report is a synthesis of the existing research on the capabilities and risks of advanced AI. The Chair of the Report has ultimate responsibility for it and has overseen its development from beginning to end.

Any enquiries regarding this publication should be sent to: secretariat.AIStateofScience@dsit.gov.uk.
Research series number: DSIT 2025/033

Foreword

The field of AI is moving too quickly for a single yearly publication to keep pace. Significant changes can occur on a timescale of months, sometimes weeks. This is why we are releasing Key Updates: shorter, focused reports that highlight the most important developments between full editions of the International AI Safety Report. With these updates, we aim to provide policymakers, researchers, and the public with up-to-date information to support wise decisions about AI governance.

This first Key Update focuses on areas where especially significant changes have occurred since January 2025: advances in general-purpose AI systems' capabilities, and the implications for several critical risks. New training techniques have enabled AI systems to reason step-by-step and operate autonomously for longer periods, allowing them to tackle more kinds of work. However, these same advances create new challenges across biological risks, cyber security, and oversight of AI systems themselves.

The International AI Safety Report is intended to help readers assess, anticipate, and manage risks from