Amazon's sexist hiring algorithm could still be better than a human.


Amazon decided to scrap its artificial intelligence (AI) recruitment tool after discovering that it discriminated against women.

The company had built the tool to browse the web, identify potential candidates and rank them from one to five stars. But the algorithm learned to systematically downgrade women's CVs for technical jobs such as software developer.

Although Amazon is at the forefront of AI technology, the company could not find a way to make its algorithm gender-neutral. Its failure reminds us that AI systems develop biases from a variety of sources.

Although there is a common belief that algorithms are built free of the prejudices and biases that influence human decision making, the truth is that an algorithm can unintentionally pick up biases from a number of different sources.

Everything from the data used to train it, to the people who use it, to seemingly unrelated factors can contribute to an AI's bias.

Artificial intelligence algorithms are trained to spot patterns in large data sets in order to predict outcomes. In Amazon's case, the algorithm used all of the CVs submitted to the company over a ten-year period to learn how to identify the best candidates.

Given the low proportion of women working at the company, as at most technology companies, the algorithm quickly detected male dominance and treated it as a factor of success.
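To make that mechanism concrete, here is a minimal, purely illustrative sketch in Python. The data, feature names and numbers are all invented; the point is simply that a model trained on historically biased hiring decisions ends up attaching a penalty to a gender indicator.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Hypothetical historical CVs: one generic "skill" score and a gender indicator.
skill = rng.normal(size=n)
is_woman = rng.binomial(1, 0.2, size=n)            # women under-represented in past hires
# Past hiring decisions that penalised women regardless of skill.
hired = (skill - 1.0 * is_woman + rng.normal(scale=0.5, size=n)) > 0

X = np.column_stack([skill, is_woman])
model = LogisticRegression().fit(X, hired)

# The trained model reproduces the historical penalty: the coefficient on the
# gender indicator comes out strongly negative.
print(model.coef_)
```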

Because the algorithm used the results of its own predictions to improve its accuracy, it became locked into a pattern of sexism against candidates.
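Continuing the same toy example, a short loop illustrates how retraining a model on its own shortlisting decisions simply freezes the initial skew in place; the numbers are again invented and only meant to show the feedback effect.

```python
# Each round, the model's own shortlist becomes the "ground truth" for retraining,
# so the initial skew never corrects itself.
for round_number in range(5):
    shortlisted = model.predict(X)                    # model selects candidates
    model = LogisticRegression().fit(X, shortlisted)  # and is retrained on its own picks
    share_women = is_woman[shortlisted.astype(bool)].mean()
    print(f"round {round_number}: women among shortlisted = {share_women:.1%}")
```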

And since the data used to train it were originally created by humans, the algorithm also inherited undesirable human traits such as bias and discrimination, which have been a problem in recruitment for years.

Some algorithms are also designed to predict and deliver what users want to see. This is common on social media and in online advertising, where users are shown content or ads that an algorithm believes they are likely to interact with. Similar trends have been reported in the recruitment industry.

One recruiter reported that while he was using a professional social network to find candidates, the AI learned to return results very similar to the profiles he had initially engaged with.

As a result, entire groups of potential candidates were systematically removed from the recruitment process.
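A hypothetical sketch of that kind of engagement-based ranking is below: candidates are scored by how similar they are to the profiles the recruiter clicked on first, so anyone unlike those early clicks never surfaces. The profile vectors and cut-offs are made up purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
candidates = rng.normal(size=(1000, 8))   # invented profile feature vectors
clicked = candidates[:20]                 # the profiles the recruiter engaged with first

centroid = clicked.mean(axis=0)
# Cosine similarity between every candidate and the centroid of past clicks.
scores = candidates @ centroid / (
    np.linalg.norm(candidates, axis=1) * np.linalg.norm(centroid)
)
ranking = np.argsort(-scores)             # most similar candidates first

# Candidates whose profiles look nothing like the early clicks never reach the top,
# however qualified they are.
print(ranking[:10])
```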

However, bias can also appear for entirely unrelated reasons. A recent study of how an algorithm delivered ads promoting STEM jobs found that men were more likely to be shown the ad, not because men were more likely to click on it, but because women are more expensive to advertise to.

Since businesses place a premium on showing ads to women (who account for 70% to 80% of consumer purchases), reaching them costs more, and the algorithm chose to show the ads to more men than women because it was designed to optimize ad delivery while keeping costs down.
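A back-of-the-envelope example with invented prices shows why a cost-minimizing delivery system behaves this way: if reaching women costs more per thousand impressions, the cheapest way to spend a fixed budget is to show the ad mostly to men.

```python
# Invented figures: a fixed budget and assumed costs per 1,000 impressions (CPM).
budget = 1000.0                  # total ad spend
cpm_men, cpm_women = 4.0, 6.0    # reaching women is assumed to cost more, because
                                 # other advertisers bid heavily for that audience

impressions_to_men = budget / cpm_men * 1000       # 250,000 impressions
impressions_to_women = budget / cpm_women * 1000   # ~166,667 impressions

# A delivery system rewarded for impressions per dollar will therefore lean
# towards the cheaper (male) audience unless told otherwise.
print(impressions_to_men, impressions_to_women)
```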

But if an algorithm merely reflects the characteristics of the data we feed it, the preferences of its users and the economic behavior of its market, is it not unfair to blame it for perpetuating our worst attributes?

We automatically expect an algorithm to make decisions without any discrimination, when this is rarely the case with humans. Even a biased algorithm may be an improvement over the current status quo.