Imagine applying for your dream job, only to be secretly scored by an algorithm that determines your fate before a human ever sees your resume. This scenario is at the heart of a new lawsuit against AI company Eightfold, accused of helping employers secretly evaluate job candidates without their knowledge. The case highlights a growing tension in the workplace: as businesses rush to adopt AI for efficiency gains, legal and safety frameworks are struggling to keep pace.
The Secret Scoring Controversy
The lawsuit against Eightfold alleges the company’s AI-powered hiring tools enable employers to covertly assess job applicants, raising questions about transparency and fairness in recruitment. While the specific details of the case remain under legal review, it represents a broader pattern of AI systems operating in employment contexts with limited oversight. This isn’t just about one company – it’s about how AI is reshaping hiring practices across industries, often without clear rules or accountability mechanisms.
The Safety Gap in AI Deployment
This legal challenge arrives as businesses are deploying AI agents at an unprecedented rate, often outpacing safety protocols. According to a Deloitte report surveying over 3,200 business leaders across 24 countries, only 21% of companies have robust safety mechanisms for their AI systems, despite 23% currently using AI agents moderately – a figure projected to jump to 74% within two years. “Given the technology’s rapid adoption trajectory, this could be a significant limitation,” the Deloitte report warns. “As agentic AI scales from pilots to production deployments, establishing robust governance should be essential to capturing value while managing risk.”
The report identifies specific dangers businesses face, including prompt injection attacks where malicious actors manipulate AI systems, and unexpected agent behavior that can lead to operational disruptions. These risks become particularly concerning in hiring contexts, where AI decisions can directly impact people’s livelihoods and career opportunities.
Regulatory Patchwork and Industry Response
While federal AI regulation remains uncertain in the U.S., states are taking matters into their own hands. California’s SB-53 and New York’s RAISE Act, both effective in early 2026, require AI developers to publicize risk mitigation plans and report safety incidents, with fines reaching up to $3 million for violations. These laws specifically target companies with over $500 million in annual revenue, creating what some experts call a “politically motivated” threshold that may not adequately address risks from smaller AI deployments.
Data protection lawyer Lily Li notes the regulatory tension: “It’s interesting that there is this revenue threshold, especially since there has been the introduction of a lot of leaner AI models that can still engage in a lot of processing, but can be deployed by smaller companies.” Meanwhile, the Trump administration has pushed back against state-level regulations through executive orders and litigation efforts, arguing that fragmented rules could stifle innovation and cede technological ground to China.
The Hallucination Problem in Professional Contexts
Beyond legal and safety concerns, AI systems continue to struggle with accuracy issues that can have serious consequences in professional settings. A recent analysis of 4,841 papers accepted by the prestigious NeurIPS AI conference found 100 hallucinated citations across 51 papers – about 1.1% of submissions contained fabricated references. While the conference organization notes that “the content of the papers themselves [is] not necessarily invalidated” by incorrect references, the finding highlights how even AI experts can be tripped up by their own tools.
These accuracy problems become particularly problematic in hiring contexts. If AI systems can’t reliably cite academic papers, how can they be trusted to evaluate complex resumes, assess cultural fit, or predict job performance? The persistence of what some call “hallucination roulette” – where different AI systems provide varying answers to the same questions – suggests businesses need to approach AI deployment with caution rather than blind enthusiasm.
Balancing Innovation with Responsibility
The Eightfold lawsuit serves as a wake-up call for businesses using AI in hiring and other sensitive areas. While AI promises efficiency gains and data-driven insights, companies must navigate complex legal landscapes and implement proper safeguards. Deloitte recommends several practical steps: establishing clear boundaries for agent autonomy, implementing real-time monitoring systems to track AI behavior, and creating audit trails that capture the full chain of agent actions for accountability.
As Gideon Futerman of the Center for AI Safety observes about current regulations: “SB-53’s level of regulation is nothing compared to the dangers, but it’s a worthy first step on transparency and the first enforcement around catastrophic risk in the US. This is where we should have been years ago.” The question for businesses isn’t whether to use AI – it’s how to use it responsibly, transparently, and with appropriate human oversight, especially when people’s careers and livelihoods are at stake.

