Master Burnett put it plainly in a post last week: "The question is not whether AI should be adopted. It is whether the system it is entering is governable."
That sentence stopped me. Not because it's surprising — but because it names, precisely, what I've been watching play out in Talent Acquisition for the last several years.
The University of Phoenix recently published data showing that 53% of employers report they do not have standardized workflows for Talent Acquisition. I'll be honest — I worry more about the 47% who believe they do. Either way, the implication for AI adoption is the same: ungoverned AI applied to unstandardized processes doesn't solve the problem. It scales the inconsistency. Faster. Further. With less accountability.
That's not a technology failure. It's a governance failure — and the distinction matters enormously.
The risk isn't abstract. Compliance exposure, candidate experience breakdowns, decisions that may reflect or amplify bias at scale — these are the documented consequences of deploying AI tools ahead of the organizational infrastructure required to oversee them. (Raghavan et al. documented the contours of this problem clearly in their 2020 ACM paper on algorithmic hiring, worth tracking down if you haven't read it.)
What makes this particularly difficult in HR and TA is that the function is simultaneously the most people-consequential and, in many organizations, still the most operationally under-resourced. Every stakeholder in the hiring journey — the recruiter navigating an AI-assisted pipeline, the hiring manager interpreting AI-generated rankings, the candidate whose livelihood may hinge on how an algorithm scores their resume, the HR leader accountable for outcomes no one yet knows how to measure — every one of them is exposed when governance is absent.
Establishing baseline standards at each milestone of the hiring journey isn't a nice-to-have. It is a precondition for AI to deliver value rather than liability. IBM's Institute for Business Value found that organizations with clearly defined AI governance frameworks were significantly more likely to report positive outcomes from AI deployment — improved quality of hire, reduced time-to-fill. The organizations without those guardrails end up with black boxes: outputs no one can explain, defend, or improve.
This is where HR Operations — traditionally underinvested and, frankly, underutilized — has to step up. Not as an administrative layer, but as a strategic oversight function: monitoring AI-driven processes in real time, tracking outcomes at each stage of the funnel, and connecting those outcomes to measurable organizational performance. Josh Bersin made this point directly in 2023 — the organizations that will lead in the future of work are those that treat HR data infrastructure and process rigor as strategic assets, not overhead.
Here's the number that should give every TA leader pause: Deloitte's 2024 Global Human Capital Trends report found that only 22% of HR functions currently have the operational maturity to govern AI tools effectively. That means the majority of organizations are already deploying AI ahead of their ability to oversee it.
The path forward isn't complicated to describe, even if it's hard to execute. Map the AI-enabled hiring journey end to end. Define what "good" looks like at each milestone. Establish the metrics that will confirm or challenge those definitions. Assign accountability for continuous review. Treat AI not as a plug-and-play solution, but as a capability that must be earned — through standardization, through governance, and through a relentless focus on outcomes that matter not just to HR, but to the organization as a whole.
The question has never really been whether to adopt AI. The question — the one that will define which organizations actually benefit from it — is whether we've built the foundation that makes it governable.
Context is everything. And right now, most organizations are building the plane while it's already in the air.
The sparks that fire my imagination typically go into a Tomorrow File, a folder I began keeping during my years at J&J, where budgets always wait for a proper use case and the right timing. Fast forward to today: billions of people publishing content that curates their own version of reality has made separating signal from the noise of unreferenced opinions and badly designed research (even, or maybe especially, within HR and Talent Acquisition) a task well beyond my pay grade. It causes me to pause and ruminate about where we've been, where we are, and where we can go. If any of this sparks a thought or two, please let me know.
References
Bersin, J. (2023). The HR technology market 2023: A definitive guide. The Josh Bersin Company.
Burnett, M. (2026, March 3). AI, talent decisions, and the context crisis, Part III. LinkedIn. https://www.linkedin.com/pulse/ai-talent-decisions-context-crisis-part-iii-master-burnett-vg0yc/
Deloitte Insights. (2024). Global human capital trends: Navigating the boundaryless world. Deloitte.
IBM Institute for Business Value. (2023). CEO decision-making in the age of AI. IBM Corporation.
Raghavan, M., Barocas, S., Kleinberg, J., & Levy, K. (2020). Mitigating bias in algorithmic hiring: Evaluating claims and practices. Proceedings of the 2020 ACM Conference on Fairness, Accountability, and Transparency, 469–481. https://doi.org/10.1145/3351095.3372828
University of Phoenix. (2025). The 2025 Career Optimism Index. https://www.phoenix.edu/career-institute.html