A comprehensive study published on ScienceDirect in 2026 found that gender bias in AI recruitment tools persists even after explicit gender markers are removed. AI models use proxy variables — specific hobbies, language patterns, career gaps, voice timbre in video interviews, and names that correlate with migration backgrounds — to inadvertently penalize female candidates in ways that current de-biasing techniques cannot correct.
How Proxy Bias Works
The mechanism is straightforward. An AI hiring system trained on historical data learns that employment gaps correlate with lower job performance scores — because that history reflects the reality that women take parental leave more frequently than men. The system then penalizes any candidate with an employment gap, regardless of the reason. It never sees the word “female”, yet it reaches the same discriminatory outcome through a neutral-seeming variable.
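To make the mechanism concrete, here is a minimal sketch on synthetic data (an illustration only; it does not reproduce the study’s models or datasets): a logistic regression trained on biased historical outcomes, with the gender column withheld, still learns to penalize the employment-gap proxy.

```python
# Minimal sketch of proxy bias on synthetic data (assumptions throughout:
# nothing here reproduces the cited study's actual models or datasets).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

gender = rng.integers(0, 2, n)  # 1 = female; never shown to the model
# Proxy: in this synthetic history, women take leave more often than men.
gap = (rng.random(n) < np.where(gender == 1, 0.4, 0.1)).astype(int)
skill = rng.normal(0.0, 1.0, n)  # genuine qualification signal

# Biased historical labels: past reviewers scored employment gaps down.
hired = (skill - 1.2 * gap + rng.normal(0.0, 0.5, n) > 0).astype(int)

# Train WITHOUT the gender column.
X = np.column_stack([skill, gap])
model = LogisticRegression().fit(X, hired)

print("coefficients (skill, gap):", model.coef_[0])

# Predicted hire rates still diverge by gender, because the gap proxy
# carries the signal the removed gender column would have carried.
p = model.predict_proba(X)[:, 1]
print(f"mean P(hire) | female: {p[gender == 1].mean():.2f}")
print(f"mean P(hire) | male:   {p[gender == 0].mean():.2f}")
```

The gap coefficient comes out strongly negative, and the predicted hire rates printed at the end diverge by gender even though gender was never an input.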
Amazon’s AI recruitment tool provided the canonical example: it systematically downgraded resumes from female candidates by detecting indirect markers like “captain of the women’s chess club” — phrases that never state a candidate’s gender outright but correlate with it strongly in the training data. The 2026 Belgian study documents the same pattern in video interviews, where voice timbre and physical appearance serve as gender proxies.
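The same effect is easy to reproduce in miniature with text features. The corpus below is invented for illustration and has nothing to do with Amazon’s actual system or data; it simply shows a bag-of-words screener, fit to biased past decisions, discovering the gendered token on its own.

```python
# Hypothetical six-resume corpus (invented for illustration; this is NOT
# Amazon's system or data). A bag-of-words screener trained on biased
# past decisions finds the gendered token without any gender label.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "captain of the women's chess club, python developer",
    "president of the women's debate society, data analyst",
    "member of the women's rugby team, java engineer",
    "captain of the chess club, python developer",
    "president of the debate society, data analyst",
    "member of the rugby team, java engineer",
]
screened_in = [0, 0, 0, 1, 1, 1]  # biased historical screening outcomes

vec = CountVectorizer()  # default tokenizer reduces "women's" to "women"
X = vec.fit_transform(resumes)
clf = LogisticRegression().fit(X, screened_in)

# Rank tokens by learned weight, most negative first: "women" gets the
# most negative coefficient, even though no label ever said "female".
for weight, token in sorted(zip(clf.coef_[0], vec.get_feature_names_out())):
    print(f"{weight:+.3f}  {token}")
```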
New Legal Requirements
Illinois HB 3773, effective January 1, 2026, now prohibits AI use in hiring that intentionally or unintentionally discriminates based on protected characteristics. The law specifically bans ZIP codes as proxy variables for protected characteristics — the first U.S. state law to address proxy discrimination in AI hiring directly.
At the federal level, the Artificial Intelligence Civil Rights Act (S.3308 / H.R.6356) would regulate algorithmic discrimination across housing, hiring, lending, healthcare, and education. The bill mandates pre-deployment evaluations and independent third-party bias audits for any AI system used in consequential decisions. Colorado’s AI Act, delayed until June 30, 2026, takes a similar approach.
The gap between the technical reality and the regulatory response remains wide. Current de-biasing techniques — removing gender-associated words, balancing training data, adding fairness constraints — address surface-level bias but leave proxy-variable pathways intact. Until AI hiring systems can be audited for proxy discrimination as rigorously as they are tested for accuracy, these tools will keep penalizing the very candidates they were ostensibly designed to evaluate fairly.
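One audit idea that targets proxies directly, sketched here on synthetic data (the approach is illustrative, not a procedure any of the laws above prescribes): try to predict the protected attribute from the exact features the hiring model consumes. If that prediction succeeds, the features still encode gender.

```python
# Sketch of a proxy audit on synthetic data (an illustrative approach,
# not a method mandated by HB 3773 or any other law cited above): if an
# auditor can predict the protected attribute from the hiring model's
# own inputs, proxy pathways survived the de-biasing.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 5_000
gender = rng.integers(0, 2, n)  # the attribute the features must not encode
gap = (rng.random(n) < np.where(gender == 1, 0.4, 0.1)).astype(int)
hobby = rng.normal(0.8 * gender, 1.0)  # hobby/wording features that lean gendered
skill = rng.normal(0.0, 1.0, n)

X = np.column_stack([skill, gap, hobby])  # gender itself is absent

# Cross-validated AUC of recovering gender from the feature set.
auc = cross_val_score(LogisticRegression(), X, gender, cv=5, scoring="roc_auc")
print(f"gender recoverable from features: AUC = {auc.mean():.2f}")
# AUC near 0.5 would mean no proxy signal; well above 0.5 means the
# features still encode gender and the explicit removal was cosmetic.
```

The same check extends to ZIP codes or any other suspected proxy: when an auditor’s model recovers the protected class from a system’s inputs, removing the explicit field changed nothing downstream.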
