Blogs

HR Uses AI the Least. It Recruits for the Roles That Use It the Most. That's a Problem.

By Chris Hoyt (he/him) posted 2 hours ago


I spent most of last week in Philadelphia at I Am Phenom, sitting in a small room with industry analysts while Phenom's senior leadership - including CEO Mahe Bayireddi - spent four hours walking through their vision for where talent technology is headed. Mahe made a point that stuck with me: the question isn't just what can be automated, but what should be. He framed AI's economic transformation not as job replacement, but as automation happening at the intersection of industry, function, role, workflow, and geography - all at once.

Simon Sinek was also in attendance at the event, and in a fireside chat session he offered a perspective worth holding onto: what knowledge workers are going through right now isn't new. Manufacturing went through it. Automation happened. Fear was real. And the industry figured it out and adapted. That doesn't make the current disruption less significant - it just means we've seen this movie before, and panic wasn't the useful response then either.

I left Philadelphia thinking about a dataset I'd been sitting with before I got on the plane. Putting the two things together gave me something uncomfortable enough that I think every TA leader needs to look at it directly.

About the Research Behind This Post

Before getting into the findings, it's worth being clear about where the data comes from because this is not another AI-and-jobs think piece built on surveys or speculation.

Anthropic (the company behind Claude) publishes something called the Economic Index. It tracks how AI tools are actually being used across the economy: real observed behavior from real conversations, matched to the same occupational codes the US Department of Labor uses. The most recent dashboard (January 2026) covers 974 occupations across every US state and 116 countries.

In early March 2026, Anthropic also published a companion research paper - "Labor Market Impacts of AI: A New Measure and Early Evidence" - by economists Maxim Massenkoff and Peter McCrory. Their paper builds on the dashboard data and adds a rigorous labor market analysis, cross-referencing actual Claude usage patterns against US Bureau of Labor Statistics employment projections and Current Population Survey data.

What makes this different from most AI-and-labor research: it's based on what people are actually doing with AI in real work contexts — not what they say they do in surveys, not what researchers think AI could theoretically do. The underlying dataset is publicly available.

The Mirror: HR's AI Adoption Gap

The dashboard covers 974 occupations. When you search "human resources" in the job explorer, three results come back.

  • HR Specialists: 0.12%
  • HR Assistants: 0.12%
  • HR Managers: 0.05%

For context: Software Developers are at 7.08%, Computer Programmers at 4.14%, and Data Warehousing Specialists at 3.85%.

The people responsible for recruiting into the most AI-active roles in the economy are among the least AI-active professionals in that same economy. That's the mirror. And it gets more complicated the longer you look at it.

What's Actually Being Automated in the Roles You Recruit For

The Massenkoff and McCrory paper introduces a measure they call "observed exposure": not what AI could theoretically do to a job, but what it's actually doing right now, weighted by whether the usage is automated versus collaborative, work-related versus personal, and whether it makes up a meaningful share of the overall role.
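To make the weighting idea concrete, here is a back-of-the-envelope sketch of how such a score could be computed. The weights, field names, and function are my own illustrative assumptions, not the authors' actual formula:

```python
# Illustrative sketch of an "observed exposure"-style score.
# The weighting scheme and field names are assumptions for illustration,
# not the formula from the Massenkoff and McCrory paper.

def exposure_score(conversations, task_share):
    """conversations: observed AI interactions, each flagged as
    automated (vs. collaborative) and work-related (vs. personal).
    task_share: fraction of the occupation's tasks the usage touches."""
    if not conversations:
        return 0.0
    weighted = 0.0
    for c in conversations:
        w = 1.0 if c["automated"] else 0.5      # automation weighted above collaboration
        w *= 1.0 if c["work_related"] else 0.0  # personal usage doesn't count
        weighted += w
    return (weighted / len(conversations)) * task_share

# Mostly automated, work-related usage touching ~75% of the role's tasks
sample = [{"automated": True, "work_related": True}] * 3 + \
         [{"automated": False, "work_related": True}]
print(round(exposure_score(sample, 0.75), 3))  # 0.656
```

The point of the sketch is the shape of the measure, not the numbers: automated usage counts more than collaborative usage, personal usage counts for nothing, and the whole thing is scaled by how much of the role the observed usage actually covers.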

The ten most exposed occupations by that measure include roles that TA teams recruit for constantly:

  1. Computer Programmers: 74.5% - leading automated task: writing and maintaining software programs.
  2. Customer Service Representatives: 70.1% - handling customer inquiries, orders, and complaints.
  3. Data Entry Keyers: 67.1% - reading source documents and entering data into systems.
  4. Medical Record Specialists: 66.7% - compiling, abstracting, and coding patient data.
  5. Market Research Analysts & Marketing Specialists: 64.8% - preparing reports and translating findings into written text.
  6. Sales Representatives (Wholesale & Manufacturing): 62.8% - contacting customers and soliciting orders.
  7. Financial and Investment Analysts: 57.2% - informing investment decisions through financial analysis.
  8. Software QA Analysts and Testers: 51.9% - modifying software to correct errors or improve performance.
  9. Information Security Analysts: 48.6% - performing risk assessments and testing data security.
  10. Computer User Support Specialists: 46.8% - answering user inquiries about hardware and software.

This isn't theoretical risk. This is what's actually happening in those roles, at scale, in professional settings, right now.

One nuance worth holding onto: observed exposure is still a fraction of what's theoretically possible. The Computer & Math category could see AI penetration across 94% of its tasks but actual observed coverage today is only 33%. The gap is closing, not widening. The disruption isn't over; it's just early.

The Signal Most TA Leaders Are Missing: Entry-Level Hiring

Here's the finding that I think lands closest to TA's actual work.

The research found a 14% drop in the job-finding rate for workers aged 22 to 25 in AI-exposed occupations, compared to 2022 levels. The decrease is attributed primarily to a slowdown in hiring - not layoffs, not rising unemployment. There's no such decrease for workers over 25.

The authors are very specific: the result is just barely statistically significant, and other explanations exist. Young workers who aren't landing in exposed roles might be staying put, moving into different roles, or heading back to school. It's a signal, not a verdict.

But it's a signal that lives squarely in TA's domain. Companies are quietly not backfilling or growing entry-level headcount in exposed functions. Campus recruiting, early career programs, and rotational pipelines in those areas face structural headwinds that are just starting to show up in the data. If your time-to-fill on certain entry-level roles has been behaving oddly, or your sourcing yield is slipping in tech, finance, or customer-facing functions, this may be part of what's underneath it.

This is also landing inside a broader labor market that's sending its own warning signs. February's jobs report (what Recruitonomics called the worst since the pandemic) showed a net decline of 92,000 nonfarm payrolls. The sharpest drops were in healthcare (driven largely by the Kaiser Permanente strike), federal employment, and the information sector, which is the closest proxy the BLS data gives us for tech hiring. One difficult month isn't a trend. But when macro weakness and AI exposure data start moving in the same direction at the same time, it's worth paying attention.

Who Is Actually Being Disrupted - It's Not Who Most People Assume

The common assumption about AI displacement (low-wage, low-skill workers losing routine jobs) doesn't hold up in this data.

Workers in the most AI-exposed occupations are 16 percentage points more likely to be female than the least-exposed group. They earn 47% more on average ($32.69/hr vs. $22.23/hr). They're nearly four times more likely to hold graduate degrees (17.4% vs. 4.5%). The most exposed workers are older, more educated, higher-paid, and disproportionately female.

For TA leaders who also own DEI goals, that intersection deserves some real attention. The disruption is concentrated in professional, white-collar talent pools where organizations have invested the hardest in building representation and pipeline. That's not a reason for alarm — but it is a reason to be intentional about what you're watching and measuring.

Your Candidates Have Already Moved

One more finding worth sitting with.

In Washington D.C. (the highest-usage market in the country at 4x the expected rate) the second most common thing people use Claude for is "assist with job searching, career planning, and professional development." At 5.4% of all conversations. Ahead of business planning. Ahead of proofreading. Ahead of most other professional activities in one of the country's most educated, white-collar markets.

The candidate experience your team designed (the application, the job description language, the screening questions, the interview prep guidance) was built for a job seeker who didn't have an AI thinking partner available around the clock. That job seeker is increasingly rare in competitive talent markets. Whether your process has caught up to that shift is worth asking honestly.

What To Do With This

Mahe's framing at I Am Phenom keeps coming back to me: not just what can be automated, but what should be. I believe that's the right question for TA leaders right now, and not just about the roles you recruit for. It applies to your own function, too.

Pull your last 90 days of requisition data. Sort by the occupational categories most exposed in this research: computer and math, business and finance, office and admin, and customer-facing roles. Look at time-to-fill trends, offer acceptance rates, and sourcing yield by function. The data is already sitting there. Most TA teams just haven't connected it to this framework yet.
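If your ATS can export reqs to a spreadsheet, that cut takes a few lines of pandas. The column names and category labels below are hypothetical; substitute whatever your export actually uses:

```python
# Sketch of the requisition cut described above. Column names and
# category labels are illustrative assumptions, not a standard ATS schema.
import pandas as pd

EXPOSED_CATEGORIES = {"Computer and Mathematical",
                      "Business and Financial Operations",
                      "Office and Administrative Support"}

# In practice, load your last 90 days of reqs from an ATS export instead.
reqs = pd.DataFrame({
    "req_id": [101, 102, 103, 104],
    "occupational_category": ["Computer and Mathematical",
                              "Healthcare Practitioners",
                              "Office and Administrative Support",
                              "Computer and Mathematical"],
    "time_to_fill_days": [62, 41, 55, 70],
    "offer_accepted": [1, 1, 0, 1],
})

# Flag reqs in high-exposure categories, then compare the two groups
reqs["exposed"] = reqs["occupational_category"].isin(EXPOSED_CATEGORIES)
summary = (reqs.groupby("exposed")
               .agg(reqs_opened=("req_id", "count"),
                    median_ttf=("time_to_fill_days", "median"),
                    offer_accept_rate=("offer_accepted", "mean")))
print(summary)
```

Even this crude split (exposed vs. not) is enough to see whether your time-to-fill and acceptance rates are diverging along the exposure line; from there you can drill into individual categories.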

Look at your five most-posted job descriptions in high-exposure categories. Does the language assume a candidate who does these tasks manually? Are your requirements inadvertently screening for the pre-AI version of the role?

Bring data to the hiring manager conversation. The next time a hiring manager or senior leadership asks why an entry-level req is taking longer to fill, consider whether structural demand shifts are part of the explanation - not just sourcing delivery. Companies across the industry are quietly not backfilling certain roles, which shrinks the candidate flow that used to be reliable. TA leaders who can explain that distinction with data become strategic partners. Those who can't will remain order-takers.

What I'm Hearing Next

The Anthropic research gives us a data-grounded starting point. What it can't give us is what's actually showing up on the ground at large enterprises right now in live pipelines, in hiring manager conversations, in the decisions TA teams are quietly making about where to invest and where to pull back.

Next week I'll be in Scottsdale, AZ at the Paradox / Workday offices with dozens of our member companies who are joining CXR for a full day and a roundtable dinner that evening. If there's one question I'm most looking forward to putting to the practitioners in that room, it's this: Is any of this actually showing up in your data yet, or does it still feel theoretical from where you're sitting?

I'll share what I hear. In the meantime, I'd encourage every TA leader to spend time with the primary sources directly - the data is richer than any summary of it, including this one.


#Recruiting-Automation
#Claude
#Anthropic
#AI
