
Exploring possible AI trajectories through 2030

Information Technology | 2026-02-02 | OECD

OECD ARTIFICIAL INTELLIGENCE PAPERS
February 2026, No. 55

OECD Artificial Intelligence Papers
Exploring possible AI trajectories through 2030
Hamish Hobbs, Dexter Docherty, Luis Aranda, Kasumi Sugimoto, Karine Perset, Rafał Kierzenkowski

This OECD Working Paper should not be reported as representing the official views of the OECD or of OECD or GPAI member countries. The opinions expressed and arguments employed are those of the authors. Working Papers describe preliminary results or research in progress by the author(s) and are published to stimulate discussion on a broad range of issues on which the OECD works. Comments on Working Papers are welcomed, and may be sent to the Directorate for Science, Technology and Innovation, OECD, 2 rue André Pascal, 75775 Paris Cedex 16, France.

Note to Delegations: This document is also available on O.N.E Members & Partners under the reference code:

This document, as well as any data and map included herein, are without prejudice to the status of or sovereignty over any territory, to the delimitation of international frontiers and boundaries and to the name of any territory, city or area.

Cover image: © Kjpargeter/Shutterstock.com

© OECD 2026
Attribution 4.0 International (CC BY 4.0)

This work is made available under the Creative Commons Attribution 4.0 International licence. By using this work, you accept to be bound by the terms of this licence (https://creativecommons.org/licenses/by/4.0/).

Attribution – you must cite the work.

Translations – you must cite the original work, identify changes to the original and add the following text: In the event of any discrepancy between the original work and the translation, only the text of the original work should be considered valid.

Adaptations – you must cite the original work and add the following text: This is an adaptation of an original work by the OECD. The opinions expressed and arguments employed in this adaptation should not be reported as representing the official views of the OECD or of its Member countries.
Third-party material – the licence does not apply to third-party material in the work. If using such material, you are responsible for obtaining permission from the third party and for any claims of infringement.

Any dispute arising under this licence shall be settled by arbitration in accordance with the Permanent Court of Arbitration (PCA) Arbitration Rules 2012. The seat of arbitration shall be Paris (France). The number of arbitrators shall be one.

Abstract

Artificial intelligence (AI) has advanced rapidly in recent years, with systems becoming increasingly capable. This paper presents expert- and evidence-informed scenarios for how AI could progress by 2030. It considers recent trends in AI and key uncertainties for AI progress through 2030. Current evidence suggests that four broad scenario classes are all plausible through to 2030: progress stalling, progress slowing, progress continuing, and progress accelerating. This suggests that AI progress by 2030 has a plausible range that includes both a plateau at approximately today's level of capabilities and rapid improvement that leads to AI systems which broadly surpass human capabilities. This paper decomposes plausible AI capability progress in each scenario in line with the OECD's beta AI capability indicators, exploring plausible capability trajectories for AI systems' abilities in language; social interaction; problem solving; creativity; metacognition and critical thinking; knowledge, learning and memory; vision; physical manipulation; and robotic intelligence.
Acknowledgements

This paper was drafted by Hamish Hobbs from the OECD Strategic Foresight Unit, in close collaboration with Dexter Docherty from the Strategic Foresight Unit and Kasumi Sugimoto, Luis Aranda and Karine Perset from the OECD Division on AI and Emerging Digital Technologies. Strategic direction and input were provided by Rafał Kierzenkowski, Senior Counsellor for Strategic Foresight, and Jerry Sheehan and Audrey Plonk, respectively Director and Deputy Director of the OECD Directorate for Science, Technology and Innovation (STI).

The team gratefully acknowledges the input of Stuart Elliot, Sam Mitchell and Zina Efchary regarding the integration of the OECD beta AI Capability Indicators. The team also thanks Niamh Higgins-Lavery from the Strategic Foresight Unit for operational support, and Shellie Laffont, Christy Dentler and Andreia Furtado from STI Communications and Romy de Courtay (external editor) for editorial support.

The paper benefitted significantly from the oral and written contributions of GPAI delegates as well as experts from the OECD.AI network of experts. The authors would like to extend their sincere gratitude to the Delegations of Brazil, Greece, India, Israel, Spain, Saudi Arabia, Slovenia, Türkiye, and the United Kingdom for their invaluable insights. The authors thank the members of the OECD Expert Group on AI Futures for their insightful comments. This report benefited greatly from discussions and input from the writing team of the International AI Safety Report, including