Amazon's sexist hiring algorithm could still be better than a human

Expecting algorithms to perform perfectly may be asking more of them than we ask of ourselves
November 2018

Amazon decided to shut down its experimental artificial intelligence (AI) recruiting tool after discovering it discriminated against women. The company created the tool to trawl the web and spot potential candidates, rating them from one to five stars. But the algorithm learned to systematically downgrade women's CVs for technical jobs such as software developer.

Although Amazon is at the forefront of AI technology, the company couldn't find a way to make its algorithm gender-neutral. Its failure is a reminder that, despite the common belief that algorithms can be built free of the biases and prejudices that colour human decision-making, an algorithm can unintentionally learn bias from many sources: the data used to train it, the people who use it, and even seemingly unrelated design choices can all contribute.

AI algorithms are trained to observe patterns in large data sets to help predict outcomes. In Amazon's case, its algorithm used all the CVs submitted to the company over a ten-year period to learn how to spot the best candidates. Given the low proportion of women working in the company, as in most technology companies, the algorithm quickly spotted the pattern of male dominance and treated it as a factor in success.
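
To make that mechanism concrete, here is a minimal sketch in Python (assuming numpy and scikit-learn are available; the data is synthetic and every number is invented for illustration). It trains a simple model on historical hiring decisions that favoured men at equal skill levels, and shows the model picking up the gendered feature as a predictor:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 5000

    # Synthetic historical applications: a male-dominated pool, and past
    # (human) hiring decisions that favoured men at equal skill levels.
    is_male = (rng.random(n) < 0.8).astype(float)
    skill = rng.normal(0, 1, n)
    p_hire = 1 / (1 + np.exp(-(skill + 1.0 * is_male - 1.0)))
    hired = rng.random(n) < p_hire

    # Train on the historical outcomes, gendered feature included.
    X = np.column_stack([skill, is_male])
    model = LogisticRegression().fit(X, hired)
    print("learned weights [skill, is_male]:", model.coef_[0])

    # Two candidates identical in skill, differing only in the gendered feature.
    pair = np.array([[1.0, 1.0], [1.0, 0.0]])
    print("hire scores [male, female]:", model.predict_proba(pair)[:, 1])

On this toy data the model assigns a positive weight to the gendered feature and scores two otherwise identical candidates differently; in miniature, that is what Amazon's tool appears to have learned from its decade of CVs.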

Because the algorithm used the results of its own predictions to improve its accuracy, it became locked into a pattern of sexism against female candidates. And since its training data was ultimately created by humans, the algorithm also inherited undesirable human traits such as bias and discrimination, which have been a problem in recruitment for years.
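
A rough simulation of that feedback loop, again in Python on synthetic data (the pool sizes, shortlist threshold and seed bias are all invented), shows why the pattern locks in: each round, the model's own shortlist becomes the "successful outcome" label it retrains on, so a modest initial tilt towards men persists and sharpens instead of washing out:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)

    def applicant_pool(n):
        """A fresh batch of synthetic applicants: skill plus a gendered feature."""
        is_male = (rng.random(n) < 0.8).astype(float)
        skill = rng.normal(0, 1, n)
        return np.column_stack([skill, is_male])

    # Seed round: historical human decisions with a modest tilt towards men.
    X = applicant_pool(2000)
    hired = rng.random(2000) < 1 / (1 + np.exp(-(X[:, 0] + 0.3 * X[:, 1])))
    model = LogisticRegression(max_iter=1000).fit(X, hired)
    print(f"seed gender weight: {model.coef_[0][1]:.2f}")

    for round_ in range(5):
        X_new = applicant_pool(2000)
        scores = model.predict_proba(X_new)[:, 1]
        shortlisted = scores > np.quantile(scores, 0.8)  # top 20% advance
        # The model's own picks become the labels it retrains on, feeding
        # the gendered tilt straight back into the next model.
        model = LogisticRegression(max_iter=1000).fit(X_new, shortlisted)
        print(f"round {round_}: gender weight = {model.coef_[0][1]:.2f}")

Nothing in the loop ever contradicts the gendered signal, so no amount of retraining on its own output can correct it.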

Some algorithms are also designed to predict and deliver what users want to see. This is typically seen on social media or in online advertising, where users are shown content or advertisements that an algorithm believes they will interact with. Similar patterns have also been reported in the recruiting industry.

One recruiter reported that while he was using a professional social network to find candidates, the AI learned to return results most similar to the profiles he initially engaged with. As a result, whole groups of potential candidates were systematically removed from the recruitment process.
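
That dynamic is easy to reproduce. Below is a hypothetical sketch in Python (numpy only; the profile features, group sizes and click behaviour are all invented) of a recommender that ranks candidates by similarity to whatever the recruiter has clicked so far. A small initial skew in the clicks is enough to push an entire group out of the results:

    import numpy as np

    rng = np.random.default_rng(2)

    # Synthetic candidate profiles; one feature happens to separate two groups.
    n = 1000
    group_b = rng.random(n) < 0.3
    profiles = rng.normal(0, 1, (n, 5))
    profiles[:, 0] = np.where(group_b, -1.0, 1.0)

    def top_results(preference, k=20):
        """Rank candidates by similarity to the recruiter's learned 'taste'."""
        return np.argsort(profiles @ preference)[::-1][:k]

    # The recruiter's first few clicks happen to land on group-A profiles;
    # the system summarises his taste as the mean of the clicked profiles.
    clicked = profiles[~group_b][:5]
    preference = clicked.mean(axis=0)

    for round_ in range(3):
        shown = top_results(preference)
        # He clicks the top of what he is shown, reinforcing the pattern.
        clicked = np.vstack([clicked, profiles[shown[:5]]])
        preference = clicked.mean(axis=0)
        print(f"round {round_}: group-B share of results = {group_b[shown].mean():.0%}")

Because group-B candidates never make the shortlist, the recruiter never clicks them, so the system never learns to show them: the exclusion sustains itself without anyone deciding on it.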
