AI’s bias challenge
Article

How stereotypes are shaping Artificial Intelligence and what to do about it
April 2019

Headlines about bias in Artificial Intelligence keep on coming.

This week the AI Now Institute, based at New York University, released a report calling the field of AI a “diversity disaster”.

Last week Facebook was in the hot seat for allegations of bias in its ad-targeting algorithms.

That’s on top of a recent privacy-related lawsuit Facebook is up against in Germany.

We’ve known for some time that Artificial Intelligence carries a very real risk of reinforcing society’s gender inequalities.

How AI becomes biased

Machine learning, a subfield of AI, involves feeding the computer sets of data – whether in the form of text, images or voice – and attaching a classifier, or label, to that data. If search engines are used to train the machine learning systems, then the images used to portray what jobs people hold or what activities they do are often based on societal stereotypes. An example would be showing the computer an image of a man as a board member or a STEM scientist, and a woman as a nurse or teacher.
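To see how labels that encode a stereotype propagate into a trained model, here is a minimal sketch with made-up data. The job titles, labels, and the `classify` helper are all hypothetical; a real classifier is far more complex, but the principle is the same: the model can only reproduce the patterns in its training data.

```python
from collections import Counter, defaultdict

# Hypothetical training set: each image's job title paired with the
# gender the image happened to show. The skew mirrors the stereotypes
# described above; the data is invented for illustration.
training_data = [
    ("board member", "man"), ("board member", "man"),
    ("scientist", "man"), ("scientist", "man"),
    ("nurse", "woman"), ("nurse", "woman"),
    ("teacher", "woman"), ("teacher", "woman"),
]

# A deliberately simple "classifier": for each job title, remember
# which label appeared most often during training.
counts = defaultdict(Counter)
for job, label in training_data:
    counts[job][label] += 1

def classify(job: str) -> str:
    """Predict the label most frequently paired with this job in training."""
    return counts[job].most_common(1)[0][0]

print(classify("nurse"))         # "woman" – the model echoes its data
print(classify("board member"))  # "man"
```

Nothing in the code is malicious; the bias enters entirely through the labelled examples the model was shown.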

Over time, and with many images, the computer learns to recognize similar images and classify them according to past patterns. One example of AI already in use that shows the potential for bias is credit-risk assessment before issuing credit cards or small loans. An algorithm could easily rule out a group perceived to belong to a less financially stable demographic – single women, for example. A person could also be rejected for a mortgage simply because they live in a low-income area.
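The mortgage example works through what is often called a proxy feature: a model that never sees a protected attribute can still discriminate through a variable correlated with it, such as a postcode. A minimal sketch, using entirely invented postcodes and income figures:

```python
# Hypothetical lookup table: average income by postcode (made-up data).
AVG_INCOME_BY_POSTCODE = {"1010": 85_000, "2020": 32_000}

def approve_loan(postcode: str, threshold: int = 40_000) -> bool:
    """Approve only applicants whose area's average income clears a threshold.

    The rule never mentions gender, race or the applicant's own income,
    yet it rejects everyone who lives in a low-income area.
    """
    return AVG_INCOME_BY_POSTCODE.get(postcode, 0) >= threshold

print(approve_loan("1010"))  # True  – applicant from a high-income area
print(approve_loan("2020"))  # False – rejected for where they live
```

A learned model behaves the same way whenever a proxy like this is predictive in its training data, even if no one wrote the rule explicitly.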

Another potential trouble area is job candidate selection. LinkedIn, for instance, had an issue where highly paid jobs were not displayed as frequently for searches by women as they were for men, because of the way its algorithms were written. The early users of the site’s job search function for these high-paying roles were predominantly male, so the system simply kept proposing the jobs to men.

My colleague Maude Lavanchy has written about how Amazon abandoned its experimental artificial intelligence recruiting tool after discovering it discriminated against women.

AI algorithms are trained to observe patterns in large data sets to help predict outcomes. In Amazon’s case, its algorithm used all CVs submitted to the company over a ten-year period to learn how to spot the best candidates. Given the low proportion of women working in the company, as in most technology companies, the algorithm quickly spotted the male dominance and treated it as a factor for success.

Because the algorithm used the results of its own predictions to improve its accuracy, it got stuck in a pattern of sexism against female candidates, Maude wrote.
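The feedback loop Maude describes can be sketched as a toy simulation. The numbers and the update rule below are invented for illustration only: the point is that when a model’s own selections become its next round of training data, even a small initial imbalance snowballs rather than correcting itself.

```python
def run_feedback_loop(male_share: float, rounds: int) -> float:
    """Simulate a model that retrains on its own selections.

    Each round, candidates resembling past hires score slightly higher,
    so the majority group's share grows; those selections then feed back
    in as the next round's training data. The 0.1 scoring edge is an
    arbitrary illustrative constant.
    """
    share = male_share
    for _ in range(rounds):
        boost = 0.1 * (share - 0.5)          # edge for the majority group
        share = min(1.0, max(0.0, share + boost))
    return share

# A balanced pool stays balanced, but a 60/40 skew runs away to 100/0.
print(run_feedback_loop(0.5, 20))
print(run_feedback_loop(0.6, 20))
```

This is why bias of this kind tends to get worse with use, not better: the system keeps confirming its own earlier predictions.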
