- Rankings align; intensity doesn’t. Exposure metrics broadly rank occupations correctly but miss how uneven usage is.
- Adoption is uneven. Computing roles over-adopt relative to exposure; office/admin roles under-adopt despite high exposure.
- Diffusion is frontier-led. The top exposure decile accounts for ~45% of observed usage, concentrated in digitally native jobs.
Exposure indices have become a standard way to measure where AI could affect work. But they capture potential, not adoption. This note compares seven exposure metrics widely used in academic research to realized usage from the Anthropic Economic Index (AEI), which tracks how Claude is used across occupations.
The sample includes 586 matched occupations, covering 91% of total AEI usage mass. That gives a representative view of where observed AI interactions concentrate.
What Am I Comparing?
The exposure metrics come from leading academic studies, each approaching the question from a slightly different angle. Some rely on human annotators, some on language model judgments, some on patent filings. All attempt to quantify how much of an occupation's task bundle could plausibly be affected by generative AI. For this analysis, I use six z-score standardized metrics plus one PCA-composite score, sourced from the Yale Budget Lab's exposure dataset (see appendix for details).
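The standardization and composite steps can be sketched as follows. This is a toy illustration, not the Yale Budget Lab's actual procedure: the matrix values are made up, and extracting the first principal component via SVD is my assumption about how a PCA composite would typically be built.

```python
import numpy as np

# Hypothetical exposure scores: rows = occupations, columns = metrics.
# The real dataset is 586 occupations x 6 metrics; values here are invented.
X = np.array([
    [0.8, 0.7, 0.9],
    [0.2, 0.3, 0.1],
    [0.5, 0.6, 0.4],
    [0.9, 0.8, 0.7],
    [0.1, 0.2, 0.3],
])

# Z-score standardize each metric so scales are comparable
Z = (X - X.mean(axis=0)) / X.std(axis=0)

# PCA composite: project the standardized metrics onto the first
# principal component (the direction of greatest shared variance)
U, S, Vt = np.linalg.svd(Z, full_matrices=False)
pca_composite = Z @ Vt[0]
print(pca_composite.round(2))
```

Because the metrics are strongly correlated, the first component behaves much like a weighted average of the standardized scores, which is why a single composite can stand in for the full set.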
The usage measure comes from the AEI, which maps observed Claude interactions to O*NET occupations. Rather than measuring what AI could do, it measures what workers are actually asking AI to do on their behalf. I refer to this as realized usage throughout.
The object of interest is the adoption gap: for each occupation, how much does standardized AEI usage deviate from what a given exposure metric would predict?
Positive values mean an occupation is using AI more than its exposure score would predict. Negative values mean it is using AI less. Understanding what drives these gaps is the central contribution of this analysis.
Exposure and Usage: Aligned, but Only Moderately
The most basic question is whether occupations that score highly on exposure indices are also the occupations with the highest observed Claude usage. The answer is: broadly yes, but with meaningful slippage.
Table 1 — Correlation between exposure metrics and AEI usage (n = 586 occupations)
| Exposure Metric | Pearson r | Spearman ρ |
|---|---|---|
| DV Rating (text-based) | 0.359 | 0.588 |
| GenAI Core | 0.353 | 0.573 |
| PCA Composite | 0.344 | 0.637 |
| GenAI Total | 0.336 | 0.584 |
| Human Rating Beta | 0.266 | 0.596 |
| AI Applicability Score | 0.263 | 0.491 |
| AIOE (Felten et al.) | 0.218 | 0.558 |
Two patterns stand out. First, Spearman rank correlations (0.49 to 0.64) are materially higher than Pearson correlations (0.22 to 0.36). The metrics do a reasonable job of ordering occupations by usage intensity, but they systematically misrepresent how much usage actually differs across occupations. The divergence between rank and level agreement is itself informative: adoption intensity, not just relative ordering, requires its own explanation.
Second, no single metric explains more than about 13% of the variance in realized usage (Pearson r² ≤ 0.13). Exposure scores are useful benchmarks, but a substantial portion of what drives actual AI adoption lies outside what these indices capture.
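The rank-versus-level divergence is easy to reproduce with simulated data. The sketch below is illustrative only: the data-generating process (usage growing exponentially in exposure, so a few occupations dominate the totals) is my assumption, chosen to mimic the skew in realized usage, not the actual AEI distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: exposure orders usage well, but usage is far more
# skewed than exposure, so level (Pearson) agreement lags rank (Spearman).
exposure = rng.normal(size=500)
usage = np.exp(2.0 * exposure + rng.normal(scale=0.5, size=500))

def pearson(a, b):
    return np.corrcoef(a, b)[0, 1]

def spearman(a, b):
    # Spearman rho is the Pearson correlation of the ranks
    rank = lambda v: np.argsort(np.argsort(v))
    return pearson(rank(a), rank(b))

r, rho = pearson(exposure, usage), spearman(exposure, usage)
print(f"Pearson r = {r:.2f}, Spearman rho = {rho:.2f}")
```

Ranks are invariant to the skew, so Spearman stays high while Pearson is dragged down by the heavy right tail, the same signature as Table 1.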
Exposure indices benchmark potential. Observed usage reveals where adoption is actually happening. The gap between the two is economically meaningful and deserves study in its own right.
Where Exposure and Usage Diverge the Most
The aggregate correlations mask sharp occupational heterogeneity. Looking at which occupations have the largest positive and negative adoption gaps reveals a clear structural pattern.
Over- and under-adopters relative to predicted exposure
Residual z-scores from regressing AEI usage on PCA-weighted exposure. Positive values indicate usage above predicted; negative values indicate usage below predicted.
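The residual construction can be sketched as follows. The inputs here are simulated stand-ins for the real series, and the OLS-then-standardize recipe is my reading of the caption above, not a verified reproduction of the underlying code.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical inputs: one standardized usage value and one PCA-weighted
# exposure score per occupation (the real analysis uses n = 586)
exposure = rng.normal(size=200)
usage = 0.6 * exposure + rng.normal(scale=0.8, size=200)

# Adoption gap: residual from regressing usage on exposure,
# re-standardized to z units
slope, intercept = np.polyfit(exposure, usage, deg=1)
residual = usage - (slope * exposure + intercept)
gap_z = (residual - residual.mean()) / residual.std()

over = gap_z > 1.0  # usage well above what exposure alone predicts
print(f"{over.sum()} occupations more than 1 z above predicted")
```

By construction the residuals are uncorrelated with exposure, so a large positive `gap_z` flags genuine over-adoption rather than simply high exposure.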
Over-adopters: Software and Computing Occupations
The biggest positive gaps are concentrated in software and systems roles. Programmers and software developers alone account for over 13% of total AEI usage. The gap is extreme: Computer Programmers exceed +11 (z units), nearly triple the next-largest over-adopter.
These jobs are digitally native. Work product is already text or code, so AI fits directly into existing workflows with low friction: no handoffs, no separate interfaces, no compliance bottlenecks.
Under-adopters: Clerical, Support, and Some Analytical Occupations
The largest negative gaps come from clerical and support occupations, plus a few analytical roles. Many are high-exposure on paper (proofreaders, payroll clerks, data entry keyers, legal secretaries) yet show far lower observed usage than exposure would predict.
The constraint is not task susceptibility. It is workflow and organizational friction. These roles often sit inside rigid systems (payroll, CRMs, document management) with limited integration, plus compliance and tool restrictions. In some jobs, such as telemarketing and collections, the work is primarily live and interpersonal rather than text based, making Claude-style assistance less natural.
The Pattern Holds Across Occupation Families
Zooming out from individual occupations to broader SOC major groups confirms that this is a structural, not idiosyncratic, phenomenon. Computer and Mathematical occupations have the largest positive mean residual, more than two standard deviations above predicted. Office and Administrative Support, Sales, Business and Financial Operations, and Legal are negative on average despite high exposure.
Mean adoption gap by SOC major occupational group
Mean of occupation-level residuals within each SOC major group. Computer and Mathematical (SOC 15) has a mean residual above 2.0; Office and Administrative Support (SOC 43) has the most negative mean residual at roughly −0.5.
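The group-level aggregation is a simple within-group mean of the occupation residuals. The sketch below uses made-up residuals and SOC major-group codes chosen only to mirror the magnitudes reported above; it is not the actual data.

```python
import numpy as np

# Hypothetical occupation-level adoption gaps (z units), each tagged with
# its SOC major group (first two digits of the SOC code); values invented.
soc_major = np.array(["15", "15", "15", "43", "43", "23", "23"])
gap_z     = np.array([ 2.8,  1.9,  2.1, -0.6, -0.4, -0.3, -0.1])

# Mean adoption gap within each SOC major group
group_means = {g: gap_z[soc_major == g].mean() for g in np.unique(soc_major)}
for g, m in sorted(group_means.items(), key=lambda kv: -kv[1]):
    print(f"SOC {g}: mean gap {m:+.2f}")
```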
The divergence between these two groups is striking. Both are high-exposure by any measure in the literature. Yet realized usage patterns run in opposite directions. This is not a story about AI affecting some occupations and not others; it is a story about where adoption has arrived first and why.
Frontier-Led Adoption
Overall, the evidence points to frontier-led adoption: early diffusion concentrates in digitally native, text- and code-intensive roles that can absorb new tools quickly, rather than spreading evenly across all high-exposure occupations.
The concentration is steep. The top exposure decile accounts for about 45% of AEI usage, and a small set of software and computing jobs drives a large share of interactions.
This suggests exposure indices can overstate near-term adoption breadth. High exposure in clerical and administrative work does not yet translate into high usage, largely because of workflow constraints, compliance limits, and tool integration frictions.
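The decile-concentration calculation can be sketched as follows. The exponential link between exposure and usage is an assumption I make purely to generate a skewed toy distribution; only the bookkeeping (bin by exposure decile, sum usage shares) reflects the measure discussed above.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical data: exposure scores with usage heavily skewed toward
# high-exposure occupations (a stylized frontier-led pattern)
exposure = rng.normal(size=500)
usage = np.exp(1.2 * exposure)

# Assign each occupation to an exposure decile (0 = bottom, 9 = top),
# then compute the top decile's share of total usage
edges = np.quantile(exposure, np.arange(0.1, 1.0, 0.1))
decile = np.digitize(exposure, edges)
top_share = usage[decile == 9].sum() / usage.sum()
print(f"Top exposure decile holds {top_share:.0%} of usage")
```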
Bottom Line
Exposure and realized usage align in rank more than in intensity. The adoption gap is economically meaningful and concentrated in specific occupation families for structural reasons.
For researchers, treat any single exposure index cautiously as a proxy for adoption. For policymakers, the workers who may need the most support are not only current AI users, but also high-exposure clerical and administrative workers in organizations where adoption has not yet arrived.
Future AEI releases will allow tracking whether these gaps narrow as diffusion broadens, or persist as adoption remains concentrated at the frontier.