AI Hiring Trends

Two Major AI Hiring Platforms Are Being Sued for Bias. Here Is What Enterprise HR Teams Need to Know

Workday and Eightfold AI are facing federal lawsuits over bias in AI-powered hiring tools. Here is what the litigation means for enterprise HR teams evaluating AI recruitment platforms in 2026.

By Narayanan
April 28, 2026

Two of the most widely deployed AI hiring platforms in the world are facing federal lawsuits over bias in their automated candidate screening tools. For enterprise HR teams still deciding how to deploy AI in recruitment, the litigation is not a sideshow. It is a preview of the compliance and governance questions that every AI hiring tool will eventually have to answer.

Here is what is happening, what the cases mean in practice, and what enterprise teams should be doing differently as a result.

What the Lawsuits Allege

HR leaders and talent acquisition vendors are closely watching a federal lawsuit against Workday over AI bias in hiring. The suit alleges that Workday's AI-powered candidate screening tools disproportionately overlooked older applicants and those from other protected groups (HeroHunt).

Workday's response has been that its tools do not make final hiring decisions and do not disparately impact applicants. That defence matters legally - whether an AI tool is considered an employment decision-maker or a decision-support tool changes the applicable regulatory framework significantly. But the litigation highlights a practical reality that the technical defence does not resolve: if an AI screening tool consistently surfaces fewer candidates from a protected group for human review, the human reviewers never see those candidates. The downstream effect is discriminatory even if the tool is not technically making the final call.
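
The arithmetic behind that downstream effect is easy to demonstrate. Here is a minimal sketch in Python, using synthetic data and a hypothetical fixed scoring penalty rather than any vendor's actual model, showing how a modest penalty against one group shrinks its share of the candidates a human ever reviews:

```python
import random

random.seed(42)

# Synthetic applicant pool: 50% group A, 50% group B, equally qualified.
applicants = [
    {"group": "A" if i % 2 == 0 else "B", "skill": random.gauss(0, 1)}
    for i in range(10_000)
]

# Hypothetical biased screener: a modest fixed penalty against group B.
PENALTY = 0.3
for a in applicants:
    a["score"] = a["skill"] - (PENALTY if a["group"] == "B" else 0.0)

# Humans only ever review the top 10% the tool surfaces.
top = sorted(applicants, key=lambda a: a["score"], reverse=True)[:1_000]
share_b = sum(a["group"] == "B" for a in top) / len(top)

print(f"Group B is 50% of applicants but {share_b:.0%} of reviewed candidates")
```

No human in this loop ever rejects a group B candidate, yet group B's representation in the reviewed pool drops by roughly a quarter. The tool never made a final decision; it just decided who was visible.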

A separate lawsuit filed against Eightfold AI raises a different concern: that the company's AI-powered talent intelligence platform can produce biased outcomes in candidate recommendation (HeroHunt).

Eightfold's platform, used by some of the world's largest enterprises for talent matching and internal mobility, infers candidate suitability from career trajectory and skills data across 1.6 billion profiles. The allegation is that the inferences the platform draws replicate historical hiring patterns, including the very patterns that excluded protected groups in the first place.

Why This Is Not a Surprise

The bias problem in AI hiring tools is not new. A 2025 University of Washington study found that recruiters who reviewed applicants using LLM-based tools with bias built into the models mirrored the AI's inequitable choices up to 90% of the time. But when recruiters made decisions without AI, or with unbiased AI, they chose white and non-white candidates equally (HeroHunt).

The finding points to a specific risk that is easy to underestimate. The concern with AI bias in hiring is not just that the tool produces a biased output. It is that human reviewers, trusting the tool's authority, replicate and amplify that output in their own decisions. The AI does not just make a biased recommendation. It trains the humans using it to make biased decisions.

HireVue encountered an early version of this problem in 2021, when it discontinued the facial analysis component of its video assessments after public backlash over concerns that its models penalised candidates with accents or atypical speech patterns. The feature was removed, but the underlying tension it revealed has not gone away. Any model trained on historical data inherits the biases embedded in that data. When the historical data reflects decades of unequal hiring, the model learns from unequal hiring.

Only 26% of applicants trust AI to evaluate them fairly, making visible human oversight and clear explanations essential requirements in 2026 hiring (GraffersID). That distrust is not irrational. It reflects a reasonable inference from a documented pattern.

The Compliance Landscape That Makes This More Urgent

The Workday and Eightfold lawsuits are not happening in isolation. They are arriving in a regulatory environment that is moving fast in the same direction.

The EU AI Act's obligations for general-purpose AI took effect in August 2025, and its high-risk requirements, which cover AI used in employment, apply from August 2026, raising compliance expectations for employers and vendors that deploy hiring technology. New York City's Local Law 144 still requires an annual bias audit and candidate notices before automated employment decision tools are used (GraffersID).

The UK's Data (Use and Access) Act 2025 similarly requires that automated decisions affecting individuals be explainable, subject to human override, and documented with a clear audit trail. The Indian DPDP Act and the UAE Personal Data Protection Law impose comparable requirements in the markets where NeoRecruit's clients operate.

What all of these frameworks share is a common logic: if AI influences a hiring decision, the employer must be able to explain how and why, demonstrate that the process was fair, and show that a human was genuinely involved rather than rubber-stamping an automated output.
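
One concrete version of "demonstrate that the process was fair" is the selection-rate impact ratio that New York City's bias audits report. A minimal sketch of that calculation, with illustrative category names and counts rather than real audit data:

```python
# Selection rates and impact ratios in the style of an NYC Local Law 144
# bias audit: each category's selection rate divided by the highest rate.
# The counts below are illustrative, not real audit data.
outcomes = {
    # category: (candidates screened in, total candidates)
    "men":   (420, 1_000),
    "women": (300, 1_000),
}

rates = {cat: selected / total for cat, (selected, total) in outcomes.items()}
best = max(rates.values())

for cat, rate in rates.items():
    ratio = rate / best
    # Ratios well below 1.0 warrant scrutiny; the EEOC's four-fifths rule
    # commonly treats 0.8 as the screening threshold.
    print(f"{cat}: selection rate {rate:.1%}, impact ratio {ratio:.2f}")
```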

The Workday lawsuit will test how courts interpret the line between a decision-support tool and an employment decision-maker. Whatever the outcome, the litigation has already achieved something significant: it has put enterprise legal, compliance, and procurement teams on notice that AI hiring tools carry employer liability, not just vendor liability.

What Enterprise HR Teams Should Do Right Now

The response to this litigation should not be to abandon AI in hiring. AI tools that are well-designed, properly audited, and used with genuine human oversight produce better and fairer hiring outcomes than unstructured human interviews alone. The University of Washington research described above points the same way: when the model's outputs are unbiased, the people relying on it make unbiased decisions themselves (HeroHunt).

The response should be to ask harder questions of the tools already in your stack and any you are evaluating.

Verify that human oversight is genuine. A hiring process where a human technically reviews every candidate but the AI scoring heavily determines who gets reviewed first is not meaningful human oversight. The question is not whether a human signs off. It is whether the human has access to enough information to make an independent judgment.
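
One practical test is to measure how often reviewers ever depart from the tool's recommendation. A minimal sketch, assuming a hypothetical review log export with ai_recommendation and human_decision fields:

```python
from collections import Counter

# Hypothetical review log entries; the field names are illustrative.
review_log = [
    {"ai_recommendation": "advance", "human_decision": "advance"},
    {"ai_recommendation": "reject",  "human_decision": "reject"},
    {"ai_recommendation": "reject",  "human_decision": "advance"},
    # ... in practice, thousands of entries exported from your ATS
]

agreements = Counter(
    entry["ai_recommendation"] == entry["human_decision"] for entry in review_log
)
agreement_rate = agreements[True] / len(review_log)

# An agreement rate near 100% is a rubber-stamping signal: reviewers may be
# deferring to the tool rather than exercising independent judgment.
print(f"Human-AI agreement rate: {agreement_rate:.0%}")
```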

Understand what your tool is actually trained on. Talent intelligence platforms that match candidates based on historical career trajectory data are inheriting whatever patterns existed in that historical data. Ask specifically what steps the vendor has taken to identify and correct for training data bias.

Build your audit trail before you need it. Every hiring decision that involved an AI tool should be documentable - which tool was used, what it recommended, how the human reviewer responded, and the ultimate outcome. If you cannot reconstruct this for a hiring decision made six months ago, you are not in a position to defend it.
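
What that documentation can look like in practice: a minimal sketch of a per-decision record, with hypothetical field names rather than any specific platform's schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class HiringDecisionRecord:
    """One auditable record per AI-assisted hiring decision."""
    candidate_id: str
    role_id: str
    tool_name: str            # which AI tool was used
    tool_version: str         # model/version, so results are reproducible
    ai_recommendation: str    # what the tool recommended
    ai_rationale: str         # the explanation the tool produced, verbatim
    reviewer_id: str          # the human who reviewed it
    human_decision: str       # what the reviewer actually decided
    override_reason: str | None = None  # required whenever the two differ
    decided_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

record = HiringDecisionRecord(
    candidate_id="cand-8841",
    role_id="role-112",
    tool_name="resume-screener",
    tool_version="2026.04.1",
    ai_recommendation="reject",
    ai_rationale="Missing required certification listed in role criteria.",
    reviewer_id="recruiter-07",
    human_decision="advance",
    override_reason="Holds an equivalent certification under a different name.",
)
```

If every record like this is written at decision time, reconstructing a six-month-old decision is a query, not an archaeology project.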

Evaluate interview tools differently from sourcing and screening tools. The bias risk is highest in tools that filter candidates before a human ever sees them - sourcing algorithms, resume screeners, and scoring systems. Interview tools that present a structured, consistent experience to every candidate who reaches that stage carry a different and generally lower bias profile, provided the evaluation criteria are job-relevant and consistently applied.

What This Means for How You Design Your Assessment

The Workday and Eightfold cases are about what happens before the interview - the sourcing and screening layer where AI filters who gets considered at all. But the litigation points toward a design principle that applies across every layer of the hiring funnel: the tools most likely to create legal and regulatory exposure are those whose outputs cannot be explained.

A resume screening algorithm that scores a candidate 4.2 out of 10 without explaining why is an explainability problem waiting to become a litigation problem. An AI interview that evaluates a candidate against a fixed set of criteria, generates a score, and provides timestamped evidence of the specific responses that drove that score is a fundamentally different type of output.
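
To make the contrast concrete, here is a hypothetical shape for the second kind of output, illustrative structure only, not any vendor's actual format: each criterion carries a score plus the timestamped evidence behind it.

```python
# Hypothetical explainable assessment payload; the structure, field names,
# and values are illustrative, not any vendor's actual schema.
assessment = {
    "candidate_id": "cand-8841",
    "role_criteria_version": "backend-eng-v3",
    "overall_score": 4.2,
    "criteria": [
        {
            "criterion": "debugging methodology",
            "score": 3.5,
            "evidence": [
                {
                    "timestamp": "00:12:41",
                    "excerpt": "Described bisecting the failing commit range",
                },
            ],
        },
        # ... one entry per role-specific criterion
    ],
}

# A reviewer can trace every number back to a specific, timestamped response,
# which is exactly what a bare 4.2/10 score cannot offer.
```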

NeoRecruit was designed with this distinction in mind. The adaptive conversational interview evaluates every candidate against the same role-specific criteria, in the same structured format, regardless of their background or how they arrived in the funnel. NeoEye (patent pending) analyses audio, video, behaviour, and response patterns to detect AI-assisted fraud - a different problem from bias, but one that demands the same auditability. Every session produces a structured, timestamped assessment with scoring rationale that the hiring team can review, override, and document.

The assessment produces evidence. Evidence is what the regulatory frameworks now require, and its absence is what the litigation has shown creates exposure.

The Broader Point

AI use across HR tasks climbed to 43% in 2026, up from 26% in 2024 (GraffersID). The adoption wave is real and it is accelerating. What the Workday and Eightfold cases establish is that adoption without governance is not a neutral choice. It is a decision to accept the liability that comes with deploying a tool whose outputs you cannot explain or defend.

The enterprise HR teams that will come through this regulatory moment in good shape are not those that stopped using AI in hiring. They are those that chose tools with explainable outputs, built genuine human oversight into their processes, documented their decisions, and treated bias audits as a routine operational requirement rather than a defensive reaction to litigation.

The lawsuits against Workday and Eightfold are the beginning of this accountability shift, not the end of it. More litigation will follow, more regulatory enforcement will follow, and the employers who have already built defensible processes will be glad they did not wait.
