- A proposed class action in California alleges Eightfold AI's hiring tools secretly evaluated job candidates, violating the Fair Credit Reporting Act and California consumer protection law.
- The case spotlights broader concerns about algorithmic transparency and legal accountability in automated hiring systems.
What happened
Eightfold AI, maker of an AI-powered hiring platform, is the subject of a proposed class-action lawsuit in California alleging it helped employers assess and score job applicants without their knowledge or consent, according to a Reuters report. The complaint was filed on behalf of two job seekers who claim that Eightfold's tools generated detailed profiles and predictions about applicants' job prospects, including personality traits and education rankings, without giving candidates notice or an opportunity to dispute inaccuracies.
The plaintiffs argue that this practice violated the Fair Credit Reporting Act (FCRA) and California consumer protection law, both of which govern how information used in hiring decisions must be disclosed and give individuals the right to review and challenge data used against them. They contend that Eightfold's AI assessments functioned like "consumer reports" but lacked the transparency and dispute rights those laws require.
Eightfold's platform is used by many large employers, including Microsoft and PayPal, to screen applicants and predict their fit for roles. However, neither Microsoft nor PayPal is a defendant in the lawsuit, and Eightfold had not publicly responded to requests for comment at the time of reporting.
The complaint describes traditional résumé screening augmented by AI models that draw on vast data sources to generate rankings and inferences about an applicant's future career path. The plaintiffs allege that many qualified applicants, including candidates with significant experience and advanced degrees, were rejected, and they believe the AI scoring may have played a role in those rejections.
Also Read: https://btw.media/all/tech-trends/ai/ai-in-hiring-where-efficiency-meets-accountability/
Why it’s important
The Eightfold lawsuit underscores growing unease about how AI and automated systems are used in employment decisions. Automated hiring tools have proliferated across industries, promising efficiency and bias reduction, but their opaque nature can make it difficult for applicants to understand decisions that affect their careers. Experts warn that when AI models rely on external data and proprietary algorithms, it becomes harder to verify their accuracy or to challenge unfair outcomes.
Regulators are increasingly focused on this intersection between technology and law. For example, New York City’s Local Law 144 requires employers to disclose when automated decision tools are used and to audit them for bias, reflecting a broader global trend toward stricter regulation of AI in hiring.
Critics argue that without clear legal frameworks and enforcement, candidates may face invisible barriers reinforced by AI. If the lawsuit succeeds, it could force greater transparency and accountability on AI hiring platforms and reshape how such tools are deployed.
Yet proponents of AI hiring technology claim automation can reduce human bias and improve efficiency. The Eightfold case raises a critical question: should AI tools that influence life-changing employment decisions be subject to the same legal safeguards as traditional evaluation systems?
As AI continues to expand in recruitment and beyond, this lawsuit may become a bellwether for how legal systems reconcile cutting-edge technology with individual rights and fairness.
Also Read: https://btw.media/all/internet-governance/regulating-algorithms-why-employment-ai-is-under-scrutiny/
