
Labor market impacts of AI: A new measure and early evidence

Published March 5, 2026

Authors: Maxim Massenkoff and Peter McCrory

Acknowledgements: Ruth Appel, Tim Belonax, Keir Bradwell, Andy Braden, Dexter Callender III, Miriam Chaum, Madison Clark, Jake Eaton, Deep Ganguli, Kunal Handa, Ryan Heller, Lara Karadogan, Jennifer Martinez, Jared Mueller, Sarah Pollack, David Saunders, Carl De Torres, Jack Clark. We additionally thank Martha Gimbel, Anders Humlum, Evan Rose, and Nathan Wilmers for feedback on earlier versions of this note.

Key findings

• We introduce a new measure of AI displacement risk, observed exposure, that combines theoretical LLM capability and real-world usage data, weighting automated (rather than augmentative) and work-related uses more heavily
• AI is far from reaching its theoretical capability: actual coverage remains a fraction of what's feasible
• Occupations with higher observed exposure are projected by the BLS to grow less through 2034
• Workers in the most exposed professions are more likely to be older, female, more educated, and higher-paid
• We find no systematic increase in unemployment for highly exposed workers since late 2022, though we find suggestive evidence that hiring of younger workers has slowed in exposed occupations

[Figure: An overview of our method and some of our main results. See below for how we measure task coverage and the impacts of AI on unemployment.]

The rapid diffusion of AI is generating a wave of research measuring and forecasting its impacts on labor markets. But the track record of past approaches gives reason for humility. For example, a prominent attempt to measure job offshorability identified roughly a quarter of US jobs as vulnerable, but a decade on, most of those jobs maintained healthy employment growth. The government's own occupational growth forecasts, while directionally correct, have added little predictive value beyond linear extrapolation of past trends.
Even in hindsight, the impact of major economic disruptions on the labor market is often unclear. Studies on the employment effects of industrial robots reach opposing conclusions, and the scale of job losses attributed to the China trade shock continues to be debated.¹

In this paper, we present a new framework for understanding AI's labor market impacts, and test it against early data, finding limited evidence that AI has affected employment to date. Our goal is to establish an approach for measuring how AI is affecting employment, and to revisit these analyses periodically. This approach won't capture every channel through which AI could reshape the labor market, but by laying this groundwork now, before meaningful effects have emerged, we hope future findings will more reliably identify economic disruption than post-hoc analyses.

It is possible that the impacts of AI will be unmistakable. This framework is most useful when the effects are ambiguous, and could help identify the most vulnerable jobs before displacement is visible.

Counterfactuals

Causal inference is easier when the effects are large and sudden. The COVID-19 pandemic and accompanying policy measures caused economic disruption so stark that sophisticated statistical approaches were unnecessary for many questions. For example, unemployment jumped sharply in the early weeks of the pandemic, leaving little room for alternative explanations.

The impacts of AI, however, might be less like COVID and more like the internet or trade with China. The effects may not be immediately clear from aggregate unemployment data; factors like trade policy and the business cycle could cloud interpretations of trendlines.
One common approach is to compare outcomes between more and less AI-exposed workers, firms, or industries, in order to isolate the effect of AI from confounding forces.² Exposure is typically defined at the task level: AI can grade homework but not manage a classroom, for example, so teachers are considered less exposed than workers whose entire job can be performed remotely. Our work follows this task-based approach, incorporating measures of theoretical AI capability and real-world usage, before aggregating to occupations.³

Measuring exposure

Our approach combines data from three sources:

• The O*NET database, which enumerates tasks associated with around 800 unique occupations in the US.
• Our own usage data (as measured in the Anthropic Economic Index).
• Task-level exposure estimates from Eloundou et al. (2023), which measure whether it is theoretically possible for an LLM to complete a task at least twice as fast.

Eloundou et al.'s metric, β, scores tasks on a simple scale: 1 if a task can be doubled in speed by an LLM alone, 0.5 if it requires additional tools or software built on top of the LLM, and 0 otherwise.⁴

Why might actual usage fall short of theoretical capability? Some tasks that are theoretically possible may not show up in usage because of model limitations. Others may be slow to diffuse due to legal constraints, specific software requirements, human verificat
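The task-to-occupation aggregation described above can be sketched in a few lines. This is a minimal illustration, not the authors' actual pipeline: the task names, β values, and usage shares below are hypothetical, and for simplicity it omits the heavier weighting of automated (versus augmentative) and work-related uses that the actual measure applies.

```python
# Illustrative sketch of combining Eloundou et al.-style beta scores
# with usage data for a single hypothetical occupation (teacher).
#
# beta: 1.0 = an LLM alone can double task speed; 0.5 = requires extra
# tools or software built on the LLM; 0.0 = neither. usage_share is the
# (hypothetical) fraction of observed AI conversations mapped to the task.
tasks = [
    # (task description,       beta, usage_share)
    ("grade homework",          1.0, 0.30),
    ("draft lesson plans",      1.0, 0.10),
    ("manage a classroom",      0.0, 0.00),
    ("track student records",   0.5, 0.05),
]

def theoretical_exposure(tasks):
    # Unweighted mean of beta: what an LLM could in principle speed up.
    return sum(beta for _, beta, _ in tasks) / len(tasks)

def observed_exposure(tasks):
    # Usage-weighted beta: capability that actually shows up in use.
    return sum(beta * share for _, beta, share in tasks)

print(f"theoretical: {theoretical_exposure(tasks):.3f}")  # 0.625
print(f"observed:    {observed_exposure(tasks):.3f}")     # 0.425
```

The gap between the two numbers is the point of the measure: even when most of an occupation's tasks are theoretically exposed, observed usage can cover only a fraction of that potential.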