stateof.ai airstreet.com

About the authors

Nathan Benaich
Nathan is the General Partner of Air Street Capital, a venture capital firm investing in AI-first companies. He runs the Research and Applied AI Summit (RAAIS), the RAAIS Foundation (funding open-source AI projects), AI communities in the US and Europe, and Spinout.fyi (improving university spinout creation). He studied biology at Williams College and earned a PhD from Cambridge in cancer research as a Gates Scholar.

Alex Chalmers
Alex is Platform Lead at Air Street Capital and regularly writes research, analysis, and commentary on AI via Air Street Press. Before joining Air Street, he was an associate director at Milltown Partners, where he advised big technology companies, start-ups, and investors on policy and positioning. He graduated from the University of Oxford in 2017 with a degree in History.

Artificial intelligence (AI) is a multidisciplinary field of science and engineering whose goal is to create intelligent machines. We believe that AI will be a force multiplier on technological progress in our increasingly digital, data-driven world. This is because everything around us today, ranging from culture to consumer products, is a product of intelligence.

The State of AI Report is now in its seventh year. Consider this report a compilation of the most interesting things we've seen, with the goal of triggering an informed conversation about the state of AI and its implications for the future. We consider the following key dimensions in our report:

- Research: Technology breakthroughs and their capabilities.
- Industry: Areas of commercial application for AI and its business impact.
- Politics: Regulation of AI, its economic implications, and the evolving geopolitics of AI.
- Safety: Identifying and mitigating catastrophic risks that highly capable future AI systems could pose to us.
- Predictions: What we believe will happen in the next 12 months, and a 2023 performance review to keep us honest.
Produced by Nathan Benaich and the Air Street Capital team

Definitions

Artificial intelligence (AI): a broad discipline with the goal of creating intelligent machines, as opposed to the natural intelligence demonstrated by humans and animals.

Artificial general intelligence (AGI): a term used to describe future machines that could match and then exceed the full range of human cognitive ability across all economically valuable tasks.

AI Agent: an AI-powered system that can take actions in an environment. For example, an LLM that has access to a suite of tools and must decide which one to use in order to accomplish a task it has been prompted to do.

AI Safety: a field that studies and attempts to mitigate the risks (minor to catastrophic) which future AI could pose to humanity.

Computer vision (CV): the ability of a program to analyse and understand images and video.

Deep learning (DL): an approach to AI inspired by how neurons in the brain recognise complex patterns in data. The "deep" refers to the many layers of neurons in today's models, which help to learn rich representations of data and achieve better performance.

Diffusion: an algorithm that iteratively denoises an artificially corrupted signal in order to generate new, high-quality outputs. In recent years it has been at the forefront of image generation and protein design.

Generative AI: a family of AI systems capable of generating new content (e.g. text, images, audio, or 3D assets) based on "prompts".

Graphics processing unit (GPU): a semiconductor processing unit that enables a large number of calculations to be computed in parallel. Historically, this was required for rendering computer graphics.
Since 2012, GPUs have been adapted for training DL models, which also require a large number of parallel calculations.

(Large) language model (LM, LLM): a model trained on vast amounts of (often) textual data to predict the next word in a self-supervised manner. The term "LLM" is used to designate multi-billion-parameter LMs, but this is a moving definition.

Machine learning (ML): a subset of AI that often uses statistical techniques to give machines the ability to "learn" from data without being explicitly given the instructions for how to do so. This process is known as "training" a "model" using a learning "algorithm" that progressively improves model performance on a specific task.

Model: an ML algorithm trained on data and used to make predictions.

Natural language processing (NLP): the ability of a program to understand human language as it is spoken and written.

Prompt: a user input, often written in natural language, that instructs an LLM to generate something or take an action.

Reinforcement learning (RL): an area of ML in which software agents learn a goal-oriented behaviour (a "policy") by trial and error in an environment that provides rewards or penalties in response to their actions towards achieving that goal.

Self-supervised learning (SSL): a form
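The "self-supervised next-word prediction" objective in the LM definition can be illustrated with a deliberately tiny sketch: a bigram frequency counter whose training labels are simply the text itself (each word is the target for the word before it). This is an assumption-laden toy, not how LLMs actually work — real models learn the same objective with neural networks over billions of parameters — and the corpus and helper names here are invented for illustration.

```python
from collections import Counter, defaultdict

# Toy self-supervised "language model": the training signal comes
# from the data itself -- each word is the label for its predecessor.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count bigram transitions: context word -> frequencies of next words.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent word seen after `word` in training."""
    if word not in counts:
        return None  # word never appeared as context
    return counts[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat": follows "the" twice, more than any other word
```

An LLM replaces the frequency table with a learned conditional distribution over a long context window, but the supervision signal — predict the next token, with no human-written labels — is the same.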