
Shared Accountability for HR Decisions in the Age of AI (some thoughts on Mobley v Workday)

By Gerry Crispin posted 05-28-2025 04:03 PM


The Mobley v Workday case is far from decided, but its eventual outcome (see link below to: HBR - Assessment of Resetting Anti-discrimination in the Age of AI) is unlikely to close the HR equivalent of Pandora's box, opened May 16 when a federal judge allowed a class action suit to go forward against a technology solutions supplier… instead of against every one of its clients.

Reading the court's document (see link below to: Mobley v Workday), it makes perfect sense to me why Workday alone was put in the hot seat. In the future, however, the accountability of every solutions supplier (AI or not) is going to be weighed alongside every employer's use of that technology: their intent, their negligence, and their compliance with recently passed laws and regulations, the Equal Protection Clause of the 14th Amendment, and Title VII civil rights legislation.

Both parties share a degree of accountability. This is a major shift.

In the past, employers were almost always considered 100% responsible for the disparate impact of their decisions to hire, onboard, manage, and terminate. Today, a rapidly evolving set of AI solutions providers claims (with few warning labels) that it can speed up the productivity and efficiency of these 'human' decisions at scale.

What this means is faster decisions made by fewer people using AI insights and, in some instances, full automation.

I'm a fan of this direction. Businesses will absolutely want the productivity and efficiency that come with AI's help but, where human impact is concerned, it is not without risk. Mitigating that risk by ensuring businesses have done proper due diligence will be critical. Class action suits that question how people are treated by employers, and with what intent, will be very public. Regardless of guilt, they could easily destroy trust in the brands involved and directly impact revenue and profitability, in addition to the penalties imposed should the companies be found accountable.

In the hiring process alone, fewer recruiters will each handle more and more open requisitions, faster and faster. And this comes at a time when the number of candidates has doubled and tripled, as noted by NACE and many others. Picture the pipeline: AI-designed campaigns attracting AI-targeted prospects, AI insights that match, compare, and move forward qualified candidates to AI-augmented interviews summarizing relevant candidate responses to AI-generated job descriptions.

Can anyone imagine that the humans in the workflow will regularly disagree with the AI 'recommendations'? Who will audit the decision process for disparate impact? How are the decisions to employ AI solutions managed? (See the link below to RFI/P questions.) Who is certifying the AI solutions for their claimed ability to mitigate bias? How well insured are the AI solutions providers relative to the risks?

When all is said and done, I expect AI solutions will be able to defend a selection process that is fairer, faster, and less costly than any group of humans can achieve without AI, but not without a shift in human oversight. (And hiring is just the starting point: onboarding, managing, rewarding, promoting, and terminating decisions will come under the same bright light.)

As a starting point, CHROs should be thinking deeply about risk management: adding significantly to HR/TA operations with I/O psychology, audits, surveys, quality control/compliance, and more to safeguard the decision processes influenced by AI, and ensuring that business outcomes are aligned with the goals of all stakeholders.

Reference Links


#Workday
#leadership
