Why All TA Leaders Should Be Watching the Mobley v. Workday Case

By Barb Ruess posted 06-18-2025 04:38 PM

It was only a matter of time before a recruiting AI tool came under legal scrutiny, and now that it’s happening, the potential implications are worth careful consideration - and not just for TA teams using Workday. That was the clear message from a recent CareerXroads (CXR) Community call, where TA leaders gathered to unpack the unfolding Mobley v. Workday class action lawsuit.

Thanks to our partners at Cielo for sparking the conversation with their perspective on this evolving legal matter and to our members for openly engaging in this timely discussion.

This article captures the highlights for those who couldn’t join live, and it underscores the unique insights available through our community. Conversations like these, enriched by the perspectives of industry leaders, don't happen in public forums. But they do happen here.

The Lawsuit at a Glance

You can (and should) read about the case from a variety of sources, but to set the stage for our takeaways: Mobley v. Workday centers on alleged age discrimination in Workday’s AI-enabled screening technology. The court’s decision to let the case proceed expands the pool of potential plaintiffs, raising accountability questions in tech-driven hiring.

Key legal developments include:

  • The court's view that Workday could be considered an “agent” of employers, not just a neutral vendor.
  • Increased scrutiny of the gap between marketing claims and the actual behavior of automated systems.
  • Potential involvement of any employer that has used Workday in the past five years; such employers may be named or contacted as part of the case.

AI in Recruiting: What TA Leaders Should Be Doing Now

While the current class action centers on Workday, the implications stretch far beyond a single vendor. For Talent Acquisition leaders, this isn’t just a case to watch - it’s a blueprint for action. The legal and ethical questions it raises offer an opportunity to get ahead of similar risks lurking in any AI-powered hiring tool.

During the Community call, there was widespread agreement - not surprisingly - that AI brings real advantages to the recruiting process. From speed to scalability, its potential is undeniable. But those gains come with a catch: AI must be implemented with intention, guided by oversight, and grounded in ethical responsibility. As one member succinctly put it, “You can’t set it and forget it.”

So what does responsible AI use look like in practice?

  • Audit regularly. Periodic reviews and reasonableness checks must become standard.
  • Preserve human judgment. Over-reliance on automation can erode the mentorship and evaluation skills that define great recruiters.
  • Watch for bias in outcomes. Discrimination isn’t just in the code - it’s in the results. (A sketch of a basic outcomes check follows this section.)

If you're using tools with AI capabilities, now is the time to act - not after your vendor is in the headlines.
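
To make the first and third points above concrete, here is a minimal sketch of one kind of periodic reasonableness check: comparing selection rates across groups against the EEOC’s four-fifths rule of thumb. The column names and data below are hypothetical stand-ins for whatever your ATS actually exports, and a ratio under 0.8 is a flag for closer review, not a legal conclusion.

    # Minimal sketch: adverse-impact check on automated screening outcomes.
    # "age_group" and "advanced" are hypothetical columns standing in for an
    # ATS export; adapt to your own data. A flag here is a review trigger,
    # not legal advice.
    import pandas as pd

    def adverse_impact_ratios(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
        """Each group's selection rate divided by the highest group's rate."""
        rates = df.groupby(group_col)[outcome_col].mean()  # pass-through rate per group
        return rates / rates.max()

    # Illustrative data: 1 = advanced past the automated screen, 0 = rejected.
    applicants = pd.DataFrame({
        "age_group": ["under_40"] * 100 + ["40_and_over"] * 100,
        "advanced":  [1] * 45 + [0] * 55 + [1] * 30 + [0] * 70,
    })

    print(adverse_impact_ratios(applicants, "age_group", "advanced"))
    # 40_and_over: 0.30 / 0.45 = 0.67 -> below the 0.8 rule of thumb; investigate.

Running a check like this on a schedule - and keeping the results - is one simple way to show the “periodic reviews” described above actually happened.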

Accountability and Brand Risk: The High Stakes of AI Missteps

One of the most animated discussions during the call centered on a pressing question: Who ultimately bears the risk when AI misfires in recruiting? Is it the technology vendor who developed the tool, or the employer who put it into practice? The answer is evolving.

Many in the community expect that shared liability will become the norm as legal frameworks begin to catch up with the realities of AI in hiring. Employers may not be able to point solely to their vendors when things go wrong. Instead, the responsibility is increasingly viewed as mutual. That’s why a strong recommendation emerged from the conversation: Review your vendor contracts now. In particular, examine how AI functionality, data handling, and risk allocation are addressed - because vague or outdated clauses may not provide the protection you think they do.

TA leaders need to be just as concerned about the reputational consequences that can arise even when no law has been broken. Imagine receiving word that thousands of rejected candidates from the past five years are about to be contacted due to a system audit or legal action. Or being called into an executive meeting to explain how your team’s use of technology aligns with your company’s stated values around fairness and inclusion.

These are no longer theoretical scenarios. As one member pointed out, it’s not just about whether your system is legally compliant - it’s about whether your internal narrative around talent, equity, and compliance holds up under external scrutiny. In today’s environment, brand trust is on the line, and that trust can be shaken by a single algorithmic misstep.

Smart Advice from the CXR Community

This conversation wasn’t just about concerns - we also focused on actionable takeaways. Here’s some of the practical guidance shared:

  • Use clear, binary pre-screening questions to ensure defensibility.
  • Delay rejection messages from automated systems to create space for nuance.
  • Purge applicant data when appropriate and ensure what you do keep is complete and retrievable.
  • Customize your vendor contracts. Avoid cookie-cutter terms that don’t reflect your compliance standards.
  • Ask the right questions before buying AI-powered tools.

Even if you don’t use Workday, this conversation affects you if you’re using any system with AI. Bring your updates and insights to the Community because this is just the beginning. Together, we’ll navigate what comes next. That’s the power of community.

Additional resources:

CXR’s AI Evaluation Document

Read more of the CXR perspective here: Shared Accountability for HR Decisions in the Age of AI

Attend our monthly Workday call to connect with other TA leaders - open to all.
