Artificial Intelligence & Labor

AI Exposure vs. Realized AI Usage: What Do We Learn?

Exposure indices measure where AI could matter. Observed Claude usage data shows where adoption is happening. They don’t fully align.

Shisham Adhikari · February 27, 2026

Key Takeaways
  1. Rankings align; intensity doesn’t. Exposure metrics broadly rank occupations correctly but miss how uneven usage is.
  2. Adoption is uneven. Computing roles over-adopt relative to exposure; office/admin roles under-adopt despite high exposure.
  3. Diffusion is frontier-led. The top exposure decile accounts for ~45% of observed usage, concentrated in digitally native jobs.

Exposure indices have become a standard way to measure where AI could affect work. But they capture potential, not adoption. This note compares seven exposure metrics widely used in academic research to realized usage from the Anthropic Economic Index (AEI), which tracks how Claude is used across occupations.

The sample includes 586 matched occupations, covering 91% of total AEI usage mass. That gives a representative view of where observed AI interactions concentrate.

What Am I Comparing?

The exposure metrics come from leading academic studies, each approaching the question from a slightly different angle. Some rely on human annotators, some on language model judgments, some on patent filings. All attempt to quantify how much of an occupation's task bundle could plausibly be affected by generative AI. For this analysis, I use six z-score standardized metrics plus one PCA-composite score, sourced from the Yale Budget Lab's exposure dataset (see appendix for details).

The usage measure comes from the AEI, which maps observed Claude interactions to O*NET occupations. Rather than measuring what AI could do, it measures what workers are actually asking AI to do on their behalf. I refer to this as realized usage throughout.

The object of interest is the adoption gap: for each occupation, how much does standardized AEI usage deviate from what a given exposure metric would predict?

Gap = z(AEI usage) − z(Exposure score)

Positive values mean an occupation is using AI more than its exposure score would predict. Negative values mean it is using AI less. Understanding what drives these gaps is the central contribution of this analysis.
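The gap construction can be sketched in a few lines of Python. This is a minimal illustration with toy numbers, not the actual AEI or exposure data:

```python
import numpy as np

def adoption_gap(usage, exposure):
    """Standardize both series to z-scores, then difference them.

    Positive gaps mean an occupation uses AI more than its exposure
    score predicts; negative gaps mean less. (Illustrative sketch;
    the real analysis standardizes within the 586-occupation sample.)
    """
    usage = np.asarray(usage, dtype=float)
    exposure = np.asarray(exposure, dtype=float)
    z = lambda x: (x - x.mean()) / x.std()
    return z(usage) - z(exposure)

# Toy example: three occupations with made-up usage shares and exposure scores
gap = adoption_gap(usage=[0.10, 0.02, 0.01], exposure=[1.2, 1.5, 0.3])
```

Because both inputs are z-scored over the same sample, the gaps are in standard-deviation units and sum to zero by construction.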

586 Occupations in overlap
91% AEI usage mass captured
7 Exposure metrics benchmarked
~45% Usage mass in top exposure decile

Exposure and Usage: Aligned, but Only Moderately

The most basic question is whether occupations that score highly on exposure indices are also the occupations with the highest observed Claude usage. The answer is: broadly yes, but with meaningful slippage.

Table 1 — Correlation between exposure metrics and AEI usage (n = 586 occupations)

| Exposure Metric | Pearson r | Spearman ρ |
|---|---|---|
| DV Rating (text-based) | 0.359 | 0.588 |
| GenAI Core | 0.353 | 0.573 |
| PCA Composite | 0.344 | 0.637 |
| GenAI Total | 0.336 | 0.584 |
| Human Rating Beta | 0.266 | 0.596 |
| AI Applicability Score | 0.263 | 0.491 |
| AIOE (Felten et al.) | 0.218 | 0.558 |

Two patterns stand out. First, Spearman rank correlations (0.49 to 0.64) are materially higher than Pearson correlations (0.22 to 0.36). The metrics do a reasonable job of ordering occupations by usage intensity, but they systematically misrepresent how much usage actually differs across occupations. The divergence between rank and level agreement is itself informative: adoption intensity, not just relative ordering, requires its own explanation.

Second, no single metric explains more than about 13% of the variance in realized usage (Pearson r² ≤ 0.13). Exposure scores are useful benchmarks, but a substantial portion of what drives actual AI adoption lies outside what these indices capture.
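The rank-versus-level distinction can be made concrete with a small numpy sketch. The data below are toy values chosen to be monotone but highly skewed, mirroring the Spearman-above-Pearson pattern in Table 1; the real analysis uses the 586-occupation sample:

```python
import numpy as np

def pearson(x, y):
    """Pearson correlation on levels."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return (xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc))

def spearman(x, y):
    """Spearman correlation: rank-transform, then Pearson.

    No ties in this toy sketch; tied data would need average ranks.
    """
    rank = lambda v: np.argsort(np.argsort(v)).astype(float)
    return pearson(rank(x), rank(y))

# Ranks agree perfectly, levels do not: one occupation's usage dwarfs the rest
exposure = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
usage = np.array([0.1, 0.2, 0.4, 1.0, 10.0])
```

Here `spearman(exposure, usage)` is exactly 1 while `pearson(exposure, usage)` is well below 1: ordering is captured, intensity is not.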

Exposure indices benchmark potential. Observed usage reveals where adoption is actually happening. The gap between the two is economically meaningful and deserves study in its own right.

Where Exposure and Usage Diverge the Most

The aggregate correlations mask sharp occupational heterogeneity. Looking at which occupations have the largest positive and negative adoption gaps reveals a clear structural pattern.

Figure 1

Over- and under-adopters relative to predicted exposure

Bar charts showing occupations with the largest positive and negative adoption gaps.

Residual z-scores from regressing AEI usage on PCA-weighted exposure. Positive values indicate usage above predicted; negative values indicate usage below predicted.
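A sketch of the residual construction behind Figure 1, assuming a simple OLS of standardized usage on standardized exposure (toy numbers, not the real series):

```python
import numpy as np

def adoption_residuals(usage_z, exposure_z):
    """Regress standardized usage on standardized exposure; return residuals.

    Positive residuals flag over-adopters, negative residuals under-adopters.
    (Sketch of the Figure 1 construction under a simple-OLS assumption.)
    """
    slope, intercept = np.polyfit(exposure_z, usage_z, 1)
    return usage_z - (slope * exposure_z + intercept)

# Toy example: the first occupation uses far more AI than exposure predicts
res = adoption_residuals(
    usage_z=np.array([2.5, 0.3, -0.4, -1.1]),
    exposure_z=np.array([1.0, 0.8, -0.5, -1.3]),
)
```

With an intercept in the regression, the residuals sum to zero, so over- and under-adoption are measured relative to the sample-wide fit.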

Over-adopters: Software and Computing Occupations

The biggest positive gaps are concentrated in software and systems roles. Programmers and software developers alone account for over 13% of total AEI usage. The gap is extreme: Computer Programmers exceed +11 (z units), nearly triple the next-largest over-adopter.

These jobs are digitally native. The work product is already text or code, so AI slots directly into existing workflows with low friction: no handoffs, no separate interfaces, no compliance bottlenecks.

Under-adopters: Clerical, Support, and Some Analytical Occupations

The largest negative gaps come from clerical and support occupations, plus a few analytical roles. Many are highly exposed on paper (proofreaders, payroll clerks, data entry keyers, legal secretaries) but show far lower observed usage than exposure would predict.

The constraint is not task susceptibility. It is workflow and organizational friction. These roles often sit inside rigid systems (payroll, CRMs, document management) with limited integration, plus compliance and tool restrictions. In some jobs, such as telemarketing and collections, the work is primarily live and interpersonal rather than text based, making Claude-style assistance less natural.

The Pattern Holds Across Occupation Families

Zooming out from individual occupations to broader SOC major groups confirms that this is a structural, not idiosyncratic, phenomenon. Computer and Mathematical occupations have the largest positive mean residual, more than two standard deviations above predicted. Office and Administrative Support, Sales, Business and Financial Operations, and Legal are negative on average despite high exposure.

Figure 2

Mean adoption gap by SOC major occupational group

Horizontal bar chart showing mean residual z-score by SOC major group.

Mean of occupation-level residuals within each SOC major group. Computer and Mathematical (SOC 15) has a mean residual above 2.0; Office and Administrative Support (SOC 43) has the most negative mean residual at roughly −0.5.

The divergence between these two groups is striking. Both are high-exposure by any measure in the literature. Yet realized usage patterns run in opposite directions. This is not a story about AI affecting some occupations and not others; it is a story about where adoption has arrived first and why.

Frontier-Led Adoption

Overall, the evidence points to frontier-led adoption: early diffusion concentrates in digitally native, text- and code-intensive roles that can absorb new tools quickly, rather than spreading evenly across all high-exposure occupations.

The concentration is steep. The top exposure decile accounts for about 45% of AEI usage, and a small set of software and computing jobs drives a large share of interactions.
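The decile-concentration statistic can be sketched as follows. The data here are toy values; the ~45% figure in the text comes from the real AEI sample:

```python
import numpy as np

def top_decile_usage_share(exposure, usage):
    """Share of total usage mass held by the top exposure decile.

    Illustrative sketch: occupations at or above the 90th percentile
    of exposure, weighted by their usage mass.
    """
    exposure, usage = np.asarray(exposure, float), np.asarray(usage, float)
    cutoff = np.quantile(exposure, 0.9)
    return usage[exposure >= cutoff].sum() / usage.sum()

# Toy data: 10 occupations, usage mass concentrated at the exposure frontier
exposure = np.arange(10)
usage = np.array([1, 1, 1, 1, 1, 1, 1, 1, 2, 10], float)
share = top_decile_usage_share(exposure, usage)
```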

This suggests exposure indices can overstate near-term adoption breadth. High exposure in clerical and administrative work does not yet translate into high usage, largely because of workflow constraints, compliance limits, and tool integration frictions.

Bottom Line

Exposure and realized usage align in rank more than in intensity. The adoption gap is economically meaningful and concentrated in specific occupation families for structural reasons.

For researchers, treat single exposure indices cautiously as proxies for adoption. For policymakers, the workers who may need the most support are not only current AI users, but also high-exposure clerical and administrative workers in organizations where adoption has not arrived.

Future AEI releases will allow tracking whether these gaps narrow as diffusion broadens, or persist as adoption remains concentrated at the frontier.

Appendix: Data and Methods

Data sources, construction choices, and coverage statistics underlying the analysis above.

A1. Exposure Data

Source: Yale Budget Lab AI Exposure Dataset (sheet FA5), coded to SOC 2018. Underlying measures draw on Felten et al. (2021), Eloundou et al. (2024), Eisfeldt et al. (2023), Tomlinson et al. (2025), and Webb (2020).

Table A1. AI Exposure Measures

| Metric | Study | What it measures | Labeling basis |
|---|---|---|---|
| dv_rating_beta | Eloundou et al. (2024) | Task exposure to GenAI (aggregated to occupations) | GPT-4 ratings |
| human_rating_beta | Eloundou et al. (2024) | Task exposure to GenAI (aggregated to occupations) | Human ratings |
| genai_exp_estz_total | Eisfeldt et al. (2023) | Predicted GenAI productivity potential (all tasks) | Model-based scoring |
| genai_exp_estz_core | Eisfeldt et al. (2023) | Predicted GenAI productivity potential (core tasks) | Model-based scoring |
| AIOE | Felten et al. (2021) | Occupation-level exposure to AI-relevant capabilities | Survey-based mapping |
| ai_applicability_score | Tomlinson et al. (2025) | AI applicability inferred from real tool interactions | Observed interactions |
| pct_ai (not in PCA) | Webb (2020) | AI relatedness via patent–task links | AI patents |
| PCA_composite | Budget Lab (computed) | First PC across the six standardized measures above (excluding pct_ai) | PCA |

Coverage: 867 occupations have at least one metric; 710 have all six core measures; 779 have all except pct_ai. pct_ai is excluded from the PCA composite because its PCA loading is much smaller than the others.

A2. Anthropic Economic Index Data

Source: Anthropic Economic Index (task_pct_v2.csv + onet_task_statements.csv). The AEI maps observed Claude interactions to O*NET task statements and aggregates usage weights to occupations. Coverage: 974 O*NET occupations total; 749 with non-zero usage weights. SOC codes correspond to the O*NET-SOC 2010 taxonomy and are harmonized to SOC 2018 for this analysis.

A3. Harmonization and Overlap

Occupation codes are harmonized via a two-step crosswalk: O*NET-SOC 2010 to O*NET-SOC 2019, then to SOC 2018. Earlier direct matching under-counted high-usage computing occupations due to code-system differences; the crosswalk-based pipeline corrects this.
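The two-step composition can be sketched in plain Python. The single entry shown (Computer Programmers) is illustrative; the real pipeline merges the full crosswalk tables:

```python
# Two-step crosswalk composition: O*NET-SOC 2010 -> O*NET-SOC 2019 -> SOC 2018.
# Single illustrative entry (Computer Programmers); real crosswalks have
# hundreds of rows, including splits and merges not handled in this sketch.
xwalk_2010_to_2019 = {"15-1131.00": "15-1251.00"}
xwalk_2019_to_2018 = {"15-1251.00": "15-1251"}

def harmonize(code_2010):
    """Compose the two crosswalks; return None when either hop is missing."""
    code_2019 = xwalk_2010_to_2019.get(code_2010)
    if code_2019 is None:
        return None
    return xwalk_2019_to_2018.get(code_2019)
```

Composing hops rather than matching 2010 codes directly against SOC 2018 is what recovers the renumbered computing occupations that a naive string match drops.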

Table A2 — Overlap statistics

| Metric | Count |
|---|---|
| Exposure SOC rows | 778 |
| AEI SOC rows | 608 |
| Exposure rows with AEI match | 586 |
| Overlap rate | 75.3% |
| AEI mass in overlap | 91.15% |

Missingness is not random. Occupations missing an AEI match have substantially lower average exposure scores than those in the overlap sample (mean PCA score: −0.92 vs. +0.30). The analysis slightly over-represents high-exposure occupations, but given that 91% of AEI usage mass is captured, the practical impact on conclusions is minimal.

A4. Construction of the Adoption Gap

Both AEI usage percentages and exposure scores are standardized to z-scores within the 586-occupation overlap sample before differencing. This ensures the gap measure is scale-invariant and interpretable as a deviation in standard deviation units.

Replication: Code and processed data files are available upon request.

A5. AEI Release History

| Release | Key Additions | Relevance |
|---|---|---|
| v1 (Feb 2025) | O*NET task mappings, SOC structure, BLS employment and wages, automation vs. augmentation aggregate | Baseline task-to-occupation setup |
| v2 (Mar 2025) | task_pct_v2, task-level automation/augmentation by interaction type, extended-thinking fractions | Primary usage base used here |
| v3 (Sep 2025) | Geography (country and US state), per-capita usage, specialization indices | Useful for geographic diffusion analysis |
| v4 (Jan 2026) | Multitasking, human-only ability, use case, task success; time, autonomy, and education distributions | Richer behavioral dimensions for future work |

A6. Limitations