This Epoch AI report was commissioned by Google DeepMind. All points of view and conclusions expressed are those of the authors and do not necessarily reflect the position or endorsement of Google DeepMind.

Table of Contents

Executive summary
Introduction
Scaling and capabilities
Scale
Compute
Investment
Data
Hardware
Energy and the environment
Interlude: From scale to capabilities
Capabilities
How capabilities are deployed
Software engineering
Mathematics
Molecular biology
Weather predictions
Discussion and conclusion
Appendix: AI's potential to reduce GHG emissions
Appendix: benchmark extrapolation details

Executive summary

How will advanced AI be developed, and what will its effects be in the world at large? What will happen if current trends in scaling up AI development persist all the way to 2030? This report examines what this scale-up would involve in terms of compute, investment, data, hardware, and energy. We explore the role of compute across inference and training, the promise of economic value that would be necessary to justify such investment, and potential challenges in data availability and energy.

Based on these predictions for how AI will be developed, we turn to predicting future AI capabilities and the impact they will have on scientific R&D. AI for science is the explicit goal of several leading AI developers, and is likely to be among the top priorities for AI deployment. Scientific R&D therefore provides a valuable lens for understanding what advanced AI will achieve.

Compute scaling has played a key role in AI development, and will likely continue to do so. Compute for training and inference drives improvements in AI capabilities, and much progress in AI research has come from developing general-purpose methods that enable the use of more compute.

The trajectory of AI development can be forecasted based on continued compute scaling. Scaling has significant implications across many areas of AI development: training and inference compute, investment, data, hardware, and energy. Given the prediction that compute scaling will continue, we can examine the consequences for each of these factors, and how each must scale in turn for compute scaling trends to continue.

Exponential growth will likely continue to 2030 across all key trends. Across training and inference compute, investment, data, hardware, and energy, we argue that a continuation of existing trends is feasible. We explore each factor in detail, showing how growth could continue to 2030, and discussing the most credible reasons for slowdown or acceleration before then. We argue the most credible reasons for a deviation from trend are changes in societal coordination of AI development (e.g. investor sentiment or tight regulation), supply bottlenecks for AI clusters (e.g. chips or energy), or paradigmatic shifts in AI production (e.g. substantial R&D automation).

On current trends, the largest AI models of 2030 will require investments of hundreds of billions of dollars, and 1,000x the compute of today's largest models. Investment of this scale is potentially justified if AI can automate significant tasks in the economy. The present trend of 3x annual AI lab revenue growth would lead to revenues exceeding hundreds of billions of dollars before 2030. Finding data for such training runs may be challenging, but between synthetic data and multimodal data, this should be surmountable. Training runs of this scale will require gigawatts of electrical power, approaching the average demand of entire large cities.
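To make the compounding in these claims concrete, here is a minimal arithmetic sketch in Python. The $5B revenue baseline and the 4x/year compute growth rate are round-number assumptions chosen for illustration, not figures taken from this report.

```python
# Sketch of the compound growth behind the trend claims above.
# Baseline and growth rates below are illustrative assumptions.

# 1) Revenue: at 3x per year, even a modest base passes $100B before 2030.
revenue = 5e9  # hypothetical annualized AI lab revenue in 2024, USD
for year in range(2024, 2030):
    print(f"{year}: ${revenue / 1e9:,.0f}B")
    revenue *= 3  # the 3x annual growth trend cited above

# 2) Compute: sustaining roughly 4x/year growth in training compute for
#    five years amounts to about a 1,000x scale-up.
print(f"Five years at 4x/year: {4 ** 5}x")  # 1024x
```

Under these assumptions, the revenue series runs $5B, $15B, $45B, $135B, $405B, $1,215B, crossing into the hundreds of billions around 2027-2028.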
Continued scaling will lead to continued progress in capabilities. Once a task begins to show substantive progress with scaling, performance tends to improve predictably with further scaling. Existing AI benchmarks, despite their limitations, cover many capabilities that would be genuinely useful if automated in the real world. Thus, existing benchmarks can inform our predictions of AI's future capabilities. This will be an imperfect view, shaped by the representativeness of existing benchmarks, and limited to where we can already measure progress. We discuss these challenges further in Interlude: From scale to capabilities. Nevertheless, this provides us with a compelling baseline prediction for what AI will be able to do.

At a minimum, AI will act as a valuable tool for scientific R&D. AI systems already excel at helping users find relevant information, implement code, and perform well-defined prediction tasks based on copious domain-specific data. All of these capabilities are set to continue improving.

For example, AI will be able to implement complex scientific software from natural language, assist mathematicians in formalising proof sketches, and answer open-ended questions about biology protocols. All of these examples are taken from existing AI benchmarks showing progress, where simple extrapolation suggests they will be solved by 2030 (a sketch of this kind of extrapolation follows below). Moreover, AI tools for domain-specific applications will continue to improve. For example, AI tools already offer state-of
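As a minimal illustration of the benchmark extrapolation referenced above, the sketch below fits a logistic curve to benchmark accuracy over time and solves for when the fit crosses a "solved" threshold. All scores, dates, and the 95% threshold are invented for illustration; the report's actual methodology is described in its appendix on benchmark extrapolation details.

```python
# Toy benchmark extrapolation: fit a logistic curve to accuracy-vs-year
# data, then solve for the year it crosses a "solved" threshold.
# All data points below are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, midpoint, rate):
    """Accuracy as a function of time, saturating at 1.0."""
    return 1.0 / (1.0 + np.exp(-rate * (t - midpoint)))

years = np.array([2021.0, 2022.0, 2023.0, 2024.0, 2025.0])
scores = np.array([0.05, 0.12, 0.30, 0.55, 0.75])  # invented accuracies

(midpoint, rate), _ = curve_fit(logistic, years, scores, p0=[2024.0, 1.0])

# Invert logistic(t) = threshold: t = midpoint + ln(p / (1 - p)) / rate
threshold = 0.95
t_solved = midpoint + np.log(threshold / (1 - threshold)) / rate
print(f"Projected to reach {threshold:.0%} around {t_solved:.1f}")
```

On this invented data the fit projects the threshold is reached in the late 2020s; the point is only to show the mechanics of extrapolating a scaling trend, not to reproduce the report's estimates.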