The Hiring Algorithm That Cloned Its Creators

In 2018, Reuters revealed that Amazon had built an AI recruiting tool that systematically penalized resumes containing the word 'women's' — as in 'women's chess club captain' or 'women's college.' The system had been trained on a decade of hiring data from Amazon's own workforce, which was overwhelmingly male in technical roles. The algorithm learned that successful hires (as defined by the training data) tended to be men, so it treated female-associated signals as negative predictors. Amazon's engineers tried to patch the bias by removing explicitly gendered terms, but the system found proxies — certain colleges, certain phrasings, certain extracurricular patterns that correlated with gender. Each fix led the model to discover subtler correlations. The project was eventually scrapped.

Mental Models

Discourse Analysis

Popular framing: Algorithmic hiring bias is a bug caused by bad training data — fix the data, add fairness constraints, and the technology becomes neutral and meritocratic. This is a 'Map-Territory' confusion: the training data (the map) is treated as if it faithfully represented the ideal workforce (the territory).

Structural analysis: The algorithm is not malfunctioning; it is doing exactly what it was designed to do — predict resemblance to past hires. Because past hiring was structurally biased, optimizing for 'good candidates' is indistinguishable from optimizing for 'candidates who resemble a historically exclusionary workforce.' Each patch (removing gendered terms) simply causes the model to find deeper proxies, revealing that the bias is not in the features but in the optimization target itself. This is a feedback loop: biased history produces a biased model, which produces a biased future history. It is also a 'Principal-Agent' problem: the AI serves as a blame-shield for HR departments, letting exclusionary outcomes be attributed to a seemingly neutral system rather than to the people who deployed it.
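A minimal sketch of the proxy effect, using synthetic data and scikit-learn (the feature names, numbers, and setup are illustrative assumptions, not details of Amazon's system): even when the gender column is withheld from training, a correlated feature carries the same signal into the model.

```python
# Toy illustration (synthetic data, hypothetical features): removing the
# explicit gender column does not remove the bias, because a correlated
# proxy feature stands in for it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# Protected attribute: 1 = man, 0 = woman. Withheld from the model.
gender = rng.integers(0, 2, size=n)

# Proxy feature correlated with gender (e.g. membership in a mostly-male
# activity). The model IS allowed to see this.
proxy = (rng.random(n) < np.where(gender == 1, 0.8, 0.2)).astype(float)

# A genuinely job-relevant skill score, independent of gender.
skill = rng.normal(size=n)

# Historical hiring labels encode both skill and a bias toward men.
p_hire = 1.0 / (1.0 + np.exp(-(1.5 * skill + 2.0 * gender - 1.0)))
hired = (rng.random(n) < p_hire).astype(int)

# "Debiased" model: trained without the gender column.
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, hired)
scores = model.predict_proba(X)[:, 1]

# The model still scores men higher on average: the proxy carries the signal.
print("mean score, men:  ", round(scores[gender == 1].mean(), 3))
print("mean score, women:", round(scores[gender == 0].mean(), 3))
print("learned weight on proxy:", round(model.coef_[0][1], 3))
```

Swapping in a different classifier or removing the proxy column only pushes the problem down a level, as long as any remaining feature correlates with gender and the labels encode the biased history.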

The gap matters because it misdirects intervention. If the problem is framed as bad data, the solution is better data collection — which keeps decision-making authority with algorithm vendors and employers. But if the problem is that the optimization target (past hiring success) encodes inequality, then no dataset curation can solve it — you must change what is being predicted. The popular framing also obscures information asymmetry: applicants cannot audit the model that judges them, so they cannot contest it, making the feedback loop self-sealing.
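To make the 'self-sealing feedback loop' concrete, here is a hedged sketch using the same synthetic setup as above (pool sizes, hiring rates, and the 30% cutoff are all illustrative assumptions): each round, the model is retrained on its own past hiring decisions, so the initial gender gap reproduces itself round after round unless the prediction target itself is changed.

```python
# Toy simulation (synthetic, hypothetical numbers) of the self-sealing loop:
# each round's hires become the next round's training labels, so the
# original gap carries forward indefinitely.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def applicant_pool(n):
    """Fresh applicants: 50/50 gender split, gender-correlated proxy feature."""
    gender = rng.integers(0, 2, size=n)
    proxy = (rng.random(n) < np.where(gender == 1, 0.8, 0.2)).astype(float)
    skill = rng.normal(size=n)
    return gender, np.column_stack([skill, proxy])

# Round 0: historical labels are biased toward men at equal skill.
gender, X = applicant_pool(5_000)
p_hire = 1.0 / (1.0 + np.exp(-(1.5 * X[:, 0] + 2.0 * gender - 1.0)))
hired = (rng.random(len(gender)) < p_hire).astype(int)

for rnd in range(5):
    model = LogisticRegression().fit(X, hired)

    # Score a fresh pool and "hire" the top 30% by model score.
    new_gender, new_X = applicant_pool(5_000)
    scores = model.predict_proba(new_X)[:, 1]
    new_hired = (scores >= np.quantile(scores, 0.7)).astype(int)

    share_women = (new_gender[new_hired == 1] == 0).mean()
    print(f"round {rnd}: women among new hires = {share_women:.0%}")

    # Today's decisions become tomorrow's "ground truth" training data.
    X = np.vstack([X, new_X])
    hired = np.concatenate([hired, new_hired])
```

No amount of data cleaning inside this loop changes the outcome, because the loop keeps re-labeling its own biased output as success; the only lever that breaks it is redefining what counts as the label being predicted.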
