
Labor Market Impacts of AI: A New Measure and Early Evidence

Summary of the Report

The report, titled “Labor Market Impacts of AI: A New Measure and Early Evidence,” was published by Anthropic on March 5, 2026, authored by Maxim Massenkoff and Peter McCrory. It introduces a novel metric called observed exposure, which measures AI’s labor market risks by combining theoretical large language model (LLM) capabilities (drawn from Eloundou et al., 2023) with real-world usage data from Anthropic’s Claude platform. This metric prioritizes automated, work-related AI applications over augmentative or non-professional uses, and weights tasks by their time fraction in occupations.
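
The report's exact formula lives in its appendix; the core idea of observed exposure, though, is a task-weighted aggregate. The sketch below illustrates that idea in minimal form. The field names (`time_share`, `usage`, `automation_share`) and the multiplicative combination are illustrative assumptions, not the report's published specification.

```python
# Sketch of the observed-exposure idea: for each occupation, sum over its
# O*NET tasks the product of the task's time share, its observed Claude usage,
# and the automated (vs. augmentative) share of that usage.
# NOTE: field names and the combination rule are illustrative assumptions.

def observed_exposure(tasks):
    """tasks: list of dicts with 'time_share' (fractions summing to ~1),
    'usage' (fraction of mapped conversations that are work-related), and
    'automation_share' (automated vs. augmentative split)."""
    return sum(
        t["time_share"] * t["usage"] * t["automation_share"]
        for t in tasks
    )

# Hypothetical two-task occupation
tasks = [
    {"time_share": 0.6, "usage": 0.8, "automation_share": 0.5},
    {"time_share": 0.4, "usage": 0.2, "automation_share": 0.9},
]
print(round(observed_exposure(tasks), 3))  # 0.312
```

Weighting by time share is what lets a heavily-used but marginal task (say, drafting boilerplate emails) contribute less than a lightly-used core task.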

Key findings include:

  • AI’s actual deployment lags far behind its theoretical potential: For example, in Computer & Math occupations, 94% of tasks are theoretically feasible for LLMs, but only 33% show observed coverage.
  • Occupations with higher observed exposure are projected by the U.S. Bureau of Labor Statistics (BLS) to grow less from 2024–2034 (a 0.6 percentage point drop in growth for every 10% increase in exposure).
  • Highly exposed workers (top quartile) are demographically distinct: older (by ~2 years), more female (54% vs. 39%), more educated (e.g., 17% with graduate degrees vs. 5%), and higher-paid ($33/hour vs. $22/hour) compared to unexposed workers.
  • No systematic unemployment increase in exposed occupations since late 2022, but suggestive evidence of slowed hiring for young workers (ages 22–25) in these roles (a ~14% drop in job-finding rates post-ChatGPT).
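
The BLS growth relationship in the second bullet is a regression slope: roughly 0.6 percentage points less projected growth per 10-point rise in exposure. A minimal ordinary-least-squares sketch of how such a slope is estimated from occupation-level data, using made-up numbers rather than the report's:

```python
# Illustrative OLS slope: projected 2024-2034 employment growth (pp) regressed
# on observed exposure. A slope near -6 means each 10-point rise in exposure
# predicts ~0.6 pp lower projected growth. All data points are made up.

def ols_slope(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

exposure = [0.1, 0.2, 0.4, 0.6, 0.7]   # observed exposure (fraction)
growth   = [5.0, 4.4, 3.2, 2.0, 1.4]   # projected growth (pp)
print(round(ols_slope(exposure, growth), 2))  # -6.0
```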

The methodology draws from three sources:

  • O*NET database for ~800 U.S. occupations and their tasks.
  • Anthropic’s Economic Index for Claude usage data (from August and November 2025).
  • Eloundou et al.’s β scores (0–1 scale) for theoretical LLM feasibility.

The report uses Current Population Survey (CPS) data for unemployment trends and emphasizes a difference-in-differences approach to isolate AI effects from broader economic factors. The underlying data are available on Hugging Face, though the dataset viewer shows a FileNotFoundError, suggesting an incomplete upload; the 2025–2026 releases include task mappings and economic primitives, but no direct observed-coverage statistics in the summary files.
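
The difference-in-differences logic can be stated in a few lines: the effect attributed to AI is the change in an outcome for exposed occupations minus the change for unexposed ones over the same window. The numbers below are hypothetical, purely to show the arithmetic:

```python
# Minimal 2x2 difference-in-differences sketch: change for the exposed group
# minus change for the control group, before vs. after late 2022.
# The unemployment rates here are made-up illustrative values.

def did_estimate(exposed_pre, exposed_post, control_pre, control_post):
    """DiD = (post - pre for exposed) - (post - pre for control)."""
    return (exposed_post - exposed_pre) - (control_post - control_pre)

# Hypothetical unemployment rates (percent)
effect = did_estimate(exposed_pre=3.0, exposed_post=3.8,
                      control_pre=3.2, control_post=3.7)
print(round(effect, 2))  # 0.3 pp excess rise for exposed occupations
```

Subtracting the control group's change is what nets out economy-wide shocks (post-COVID normalization, rate hikes) that hit exposed and unexposed occupations alike.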

Top 10 most exposed occupations (from Figure 3), with observed exposure and the leading automated task:

  • Computer programmers (74.5%): Write, update, and maintain software programs
  • Customer service representatives (70.1%): Confer with customers to provide information, take orders, handle complaints
  • Data entry keyers (67.1%): Read source documents and enter data into systems
  • Medical record specialists (66.7%): Compile, abstract, and code patient data
  • Market research analysts and marketing specialists (64.8%): Prepare reports of findings, illustrating data graphically and translating complex findings into written text
  • Sales representatives, wholesale and manufacturing, non-technical (62.8%): Contact customers to demonstrate products and solicit orders
  • Financial and investment analysts (57.2%): Inform investment decisions by analyzing financial information to forecast business, industry, or economic conditions
  • Software quality assurance analysts and testers (51.9%): Modify software to correct errors or improve performance
  • Information security analysts (48.6%): Perform risk assessments and test data processing security
  • Computer user support specialists (46.8%): Answer user inquiries regarding computer software or hardware operation to resolve problems

Strengths of the Report

  • Innovative Metric: Observed exposure bridges a critical gap in prior research, which often relies solely on theoretical feasibility (e.g., Eloundou et al.). By incorporating real Claude usage, it provides a more grounded view of deployment, weighted toward automation and professional contexts. This could evolve into a leading indicator as AI adoption grows.
  • Timely and Forward-Looking: Published amid rising AI concerns (e.g., post-ChatGPT layoffs at companies like Block and Amazon), it establishes a framework for ongoing monitoring. The focus on counterfactuals (comparing exposed vs. unexposed groups) adds rigor, drawing lessons from past disruptions like automation or trade shocks.
  • Validation with External Data: The weak negative correlation with BLS projections (R² ~0.22, based on Figure 4) offers some external validation, unlike pure theoretical measures. It also highlights equity issues, showing AI disproportionately affects higher-skilled, female, and educated workers—contrary to stereotypes of low-wage job loss.
  • Transparency: Appendix details (linked but not in the provided PDF) address methodological judgments, and high Spearman correlations across variants suggest robustness.

Criticisms and Limitations

While ambitious, the report has notable flaws, some self-acknowledged, others highlighted in external critiques:

  • Data Bias and Narrow Scope: The metric relies exclusively on Claude usage, which may not represent the broader AI ecosystem (e.g., ChatGPT, Gemini, or enterprise tools like Copilot). A Forbes critique argues this makes findings “narrower than its framing suggests,” as Claude’s footprint is limited to specific users and APIs. For instance, coding tasks dominate due to Claude’s strengths, potentially overestimating exposure in tech-heavy roles while underestimating in others. Geographic bias (U.S.-centric) and sampling from only two 2025 datasets limit generalizability.
  • Overemphasis on Unemployment as Outcome: The report prioritizes unemployment but acknowledges it misses subtler effects like wage compression, reduced promotions, or job quality decline. Acalytica’s analysis notes this ignores “augmentation” benefits, where AI boosts productivity without displacement. Brynjolfsson et al. (2025, cited in the report) found 6–16% employment drops for young workers in exposed roles, but Anthropic’s suggestive 14% hiring slowdown is “barely significant” and vulnerable to mismeasurement (e.g., CPS panel data caveats).
  • Causation vs. Correlation: The BLS growth correlation is “slight” and could stem from non-AI factors (e.g., post-COVID shifts). No such link exists with Eloundou’s measure alone, raising questions about observed exposure’s added value. The Algorithmic Bridge calls the capability-usage gap a “loading bar” for disruption, but the report’s humility about past forecasting failures (e.g., offshoring overestimates) is undercut by alarmist phrasing like a potential “Great Recession for white-collar workers.”
  • Early-Stage Evidence: With data only through 2025, effects may lag (e.g., diffusion hurdles like regulations). The report admits model limitations (e.g., Eloundou’s 2023 β scores are outdated) and calls for updates, but critics like Reddit’s r/antiai thread argue reactions to the report reveal more about AI hype than substance.
  • Potential Conflict: As an Anthropic product, the report uses self-generated data, risking bias toward portraying Claude as impactful yet “safe” (e.g., no mass unemployment). Frugal Scientific notes this as a “capability-usage gap” that’s “massive,” but questions if it’s truly closing.

Further Evidence from Recent Studies (2025–2026)

To contextualize, here’s evidence from other sources, showing AI’s impacts are mixed—some displacement, but also growth and premiums:

  • Job Growth in Exposed Areas: Vanguard’s 2025 analysis found high-AI-exposure occupations grew 1.7% post-COVID (mid-2023–2025), faster than pre-COVID (1%), with real wages up 3.8% (vs. 0.1%). PwC’s 2025 Barometer shows AI-exposed industries had 4x productivity growth since 2022, with AI-skilled workers earning 56% premiums.
  • Hiring Slowdowns for Youth: Aligning with the report, IMF (Jan 2026) notes 3.6% lower employment in AI-vulnerable occupations after five years in high-AI-demand regions; entry-level jobs are hardest hit. Josh Bersin’s Dec 2025 analysis shows U.S. unemployment up 24.7% in two years, with slowed entry-level hiring. Stanford (Nov 2025) found 16% relative employment decline for graduates in AI-exposed roles.
  • Net Job Creation Projections: Goldman Sachs (2026 update) estimates AI could displace 300M full-time jobs but create net gains via productivity (e.g., 7% global GDP boost). WEF (2025) projects 92M displacements but 170M new roles by 2030 (net +78M). Yale’s Budget Lab (Jan 2026) finds no AI-related employment shifts in CPS data through Dec 2025.
  • Wage and Quality Effects: WEF (Feb 2026) shows AI jobs offer 23% higher wages, 2x parental leave, and 3x remote work. But HBR (Feb 2026) argues AI intensifies work, not reduces it, based on surveys showing increased tasks. McKinsey’s 2025 survey: 32% expect workforce reductions, but 43% no change.
  • Layoffs and Anticipation: HBR (Jan 2026) survey of 1,006 executives: Layoffs are often anticipatory, not from proven AI performance. J.P. Morgan (Aug 2025) notes rising unemployment for AI-exposed majors like computer engineering. CNBC (Jan 2026) reports employee AI job-loss fears up to 40%.

Anthropic’s Jan 2026 Economic Index update shows job coverage rising to 49% (from 36% in Jan 2025), with uneven global impacts.

Overall Assessment

This report is a valuable step toward empirical tracking of AI’s effects, superior to purely theoretical models in capturing deployment realities. However, its Claude-centric data and focus on unemployment limit its scope, potentially understating subtler harms like wage stagnation or overhyping future risks. Broader evidence suggests AI is boosting productivity and wages in exposed fields but disproportionately burdens young entrants, with net job gains possible if reskilling accelerates. Critically, the “capability-usage gap” is closing faster than anticipated—policymakers should prioritize training over alarmism. Future iterations with multi-model data could strengthen it.


References

  • Acemoglu, Daron and Pascual Restrepo, “Robots and Jobs: Evidence from US Labor Markets,” Journal of Political Economy, 2020, 128 (6), 2188–2244.
  • Acemoglu, Daron, David Autor, Jonathon Hazell, and Pascual Restrepo, “Artificial intelligence and jobs: Evidence from online vacancies,” Journal of Labor Economics, 2022, 40 (S1), S293–S340.
  • Appel, Ruth, Maxim Massenkoff, Peter McCrory, Miles McCain, Ryan Heller, Tyler Neylon, and Alex Tamkin, “Anthropic Economic Index report: economic primitives,” 2026.
  • Autor, David H, David Dorn, and Gordon H Hanson, “The China syndrome: Local labor market effects of import competition in the United States,” American Economic Review, 2013, 103 (6), 2121–2168.
  • Autor, David H and Neil Thompson, “Expertise,” NBER Working Paper w33941, 2025.
  • Blinder, Alan S et al., “How many US jobs might be offshorable?,” World Economics, 2009, 10 (2), 41.
  • Borusyak, Kirill, Peter Hull, and Xavier Jaravel, “Quasi-experimental shift-share research designs,” The Review of Economic Studies, 2022, 89 (1), 181-213.
  • Brynjolfsson, Erik, Bharat Chandar, and Ruyu Chen, “Canaries in the coal mine? six facts about the recent employment effects of artificial intelligence,” Digital Economy, 2025.
  • Eckhardt, Sarah and Nathan Goldschlag, “AI and Jobs: The Final Word (Until the Next One),” Economic Innovation Group (EIG), August 2025. Available at: https://eig.org/ai-and-jobs-the-final-word/
  • Eloundou, Tyna, Sam Manning, Pamela Mishkin, and Daniel Rock, “GPTs are GPTs: An early look at the labor market impact potential of large language models,” arXiv preprint arXiv:2303.10130, 2023.
  • Fujita, Shigeru, Giuseppe Moscarini, and Fabien Postel-Vinay, “Measuring employer-to-employer reallocation,” American Economic Journal: Macroeconomics, 2024, 16 (3), 1–51.
  • Gans, Joshua S. and Avi Goldfarb, “O-Ring Automation,” NBER Working Paper No. 34639, December 2025. Available at SSRN: https://ssrn.com/abstract=5962594
  • Gimbel, Martha, Molly Kinder, Joshua Kendall, and Maddie Lee, “Evaluating the Impact of AI on the Labor Market: Current State of Affairs,” Research Report, The Budget Lab at Yale, New Haven, CT, October 2025. Available at: https://budgetlab.yale.edu.
  • Graetz, Georg and Guy Michaels, “Robots at Work,” Review of Economics and Statistics, 2018, 100 (5), 753-768.
  • Hampole, Menaka, Dimitris Papanikolaou, Lawrence DW Schmidt, and Bryan Seegmiller, “Artificial intelligence and the labor market,” Technical Report, National Bureau of Economic Research 2025.
  • Handa, Kunal, Alex Tamkin, Miles McCain, Saffron Huang, Esin Durmus, Sarah Heck, Jared Mueller, Jerry Hong, Stuart Ritchie, Tim Belonax, Kevin K. Troy, Dario Amodei, Jared Kaplan, Jack Clark, and Deep Ganguli, “Which Economic Tasks are Performed with AI? Evidence from Millions of Claude Conversations,” 2025.
  • Hui, Xiang, Oren Reshef, and Luofeng Zhou, “The short-term effects of generative artificial intelligence on employment: Evidence from an online labor market,” Organization Science, 2024, 35 (6), 1977-1989.
  • Johnston, Andrew and Christos Makridis, “The labor market effects of generative AI: A difference-in-differences analysis of AI exposure,” Available at SSRN 5375017, 2025.
  • Massenkoff, Maxim, “How predictable is job destruction? Evidence from the Occupational Outlook,” 2025. Working Paper.
  • Ozimek, Adam, “Overboard on Offshore Fears,” 2019. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3777307
  • Tamkin, Alex and Peter McCrory, “Estimating AI productivity gains from Claude conversations,” 2025.
  • Tomlinson, Kiran, Sonia Jaffe, W. Wang, Scott Counts, and Siddharth Suri, “Working with AI: measuring the applicability of generative AI to occupations,” arXiv preprint arXiv:2507.07935, 2025.
