Amazon decided to shut down its artificial intelligence (AI) recruitment tool after discovering that it discriminated against women.
The company had built the tool to crawl the web, identify potential candidates and rank them from one to five stars. But the algorithm learned to systematically downgrade women's CVs for technical roles such as software developer.
Although Amazon is at the forefront of AI technology, the company could not find a way to make its algorithm gender-neutral. Its failure reminds us that artificial intelligence develops bias from a variety of sources.
While there is a common belief that algorithms are built without the prejudices that influence human decision-making, the truth is that an algorithm can unintentionally pick up bias from many different sources.
Everything from the data used to train it, to the people who use it, and even seemingly unrelated factors, can contribute to an AI's bias.
AI algorithms are trained to spot patterns in large data sets to help predict outcomes. In Amazon's case, its algorithm used all the CVs submitted to the company over a ten-year period to learn how to identify the best candidates.
Given the low proportion of women working at the company, as in most technology companies, the algorithm quickly spotted male dominance and treated it as a factor of success.
Because the algorithm used the results of its own predictions to improve its accuracy, it became stuck in a pattern of sexism against female candidates.
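The mechanism described above can be sketched in a few lines. The data and keywords below are entirely hypothetical, not Amazon's actual system: a naive "learned" score simply measures how often each CV keyword co-occurred with past hires, so any keyword correlated with female candidates ends up penalized even though gender itself is never an input.

```python
from collections import defaultdict

# Hypothetical (keywords, was_hired) pairs mimicking years of biased hiring history
history = [
    ({"python", "chess_club"}, True),
    ({"java", "chess_club"}, True),
    ({"python", "football"}, True),
    ({"python", "womens_chess_club"}, False),
    ({"java", "womens_chess_club"}, False),
]

def keyword_scores(history):
    hired, seen = defaultdict(int), defaultdict(int)
    for keywords, was_hired in history:
        for kw in keywords:
            seen[kw] += 1
            hired[kw] += was_hired
    # Score each keyword by the hire rate among CVs that contained it
    return {kw: hired[kw] / seen[kw] for kw in seen}

scores = keyword_scores(history)
print(scores["chess_club"])         # 1.0 -> boosted
print(scores["womens_chess_club"])  # 0.0 -> systematically downgraded
```

If the model's own shortlists then feed back into `history`, the penalty only deepens with each training cycle.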
And since the data used to train it was at some point created by humans, the algorithm also inherited undesirable human traits such as bias and discrimination, which have been a problem in recruitment for years.
Some algorithms are also designed to predict and deliver what users want to see. This is typically seen on social media and in online advertising, where users are shown content or ads that an algorithm believes they will interact with. Similar patterns have been reported in the recruitment industry.
One recruiter reported that while he was using a professional social network to find candidates, the AI learned to give him results very similar to the profiles he had initially engaged with.
As a result, entire groups of potential candidates were systematically removed from the recruitment process.
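A minimal sketch of this engagement-driven narrowing, with invented profiles: ranking candidates by similarity to the profiles a recruiter clicked on early means that anyone who doesn't share those profiles' traits never surfaces in the shortlist.

```python
def similarity(a, b):
    # Jaccard similarity between two keyword sets
    return len(a & b) / len(a | b)

def shortlist(candidates, engaged, k=2):
    # Rank each candidate by their closest match among previously clicked profiles
    def score(profile):
        return max(similarity(profile, e) for e in engaged)
    return sorted(candidates, key=score, reverse=True)[:k]

engaged = [{"java", "mit"}, {"java", "stanford"}]  # the recruiter's early clicks
candidates = [
    {"java", "mit"},             # near-duplicate of a past click
    {"java", "bootcamp"},
    {"python", "state_school"},  # dissimilar group: never reaches the top-k
]
print(shortlist(candidates, engaged))
```

Nothing here inspects a candidate's merit; the filter is pure similarity to past behavior, which is exactly how whole groups drop out silently.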
However, bias can also appear for entirely unrelated reasons. A recent study of how one algorithm delivered ads promoting STEM jobs showed that men were more likely to be shown the ad, not because men were more likely to click on it, but because women are more expensive to advertise to.
Since companies pay a premium for ads targeting women (who drive 70% to 80% of consumer purchases), the algorithm chose to deliver the ads to more men than women because it was designed to optimize delivery while limiting costs.
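This cost-minimizing logic can be illustrated with a toy allocator (the prices are invented for illustration; the study's actual figures differ): given a fixed budget and per-impression prices, spending greedily on the cheapest audience first skews delivery toward men, even though the ad itself is gender-neutral.

```python
def allocate_impressions(budget_cents, cost_cents):
    # Greedy delivery: maximize impressions by buying the cheapest audience first
    plan = {}
    for audience, cost in sorted(cost_cents.items(), key=lambda kv: kv[1]):
        shown = budget_cents // cost
        plan[audience] = shown
        budget_cents -= shown * cost
    return plan

# Hypothetical prices: an impression shown to a woman costs twice as much
plan = allocate_impressions(budget_cents=10_000,
                            cost_cents={"men": 10, "women": 20})
print(plan)  # {'men': 1000, 'women': 0}
```

No one told the optimizer to prefer men; the skew falls out of the price difference and the cost objective alone.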
But if an algorithm only reflects the characteristics of the data we feed it, the preferences of its users and the economic behavior of its market, is it not unfair to blame it for perpetuating our worst attributes?
We automatically expect an algorithm to make decisions without discrimination, when this is rarely the case with humans. Even a biased algorithm may be an improvement on the status quo.
To take full advantage of AI, it is important to investigate what would happen if we allowed it to make decisions without human intervention.
A 2018 study explored this scenario with bail decisions, using an algorithm trained on historical criminal data to predict the likelihood that defendants would reoffend. In one projection, the authors showed that crime rates could be reduced by 25% while also reducing instances of discrimination among jailed defendants.
Yet the gains highlighted in this research would only materialize if the algorithm actually made every decision. That is unlikely to happen in reality, as judges would probably prefer to choose whether or not to follow the algorithm's recommendations. Even a well-designed algorithm becomes redundant if people choose not to trust it.
Many of us already rely on algorithms for many of our daily decisions, from what to watch on Netflix to what to buy on Amazon. But research shows that people lose confidence in algorithms faster than in humans when they see them make a mistake, even when the algorithm performs better overall.
For example, if your GPS suggests an alternative route to avoid traffic and that route ends up taking longer than expected, you are likely to stop relying on your GPS in the future.
But if the decision to take another route had been your own, it is unlikely you would stop trusting your own judgment. A follow-up study on overcoming algorithm aversion even showed that people were more likely to use an algorithm and accept its errors if they were given the ability to modify it themselves, even if that meant making it perform worse.
While humans may quickly lose confidence in flawed algorithms, many of us tend to trust machines more when they have human characteristics. According to research on self-driving cars, people were more likely to trust the car and believed it would perform better if its automated system had a name, a gender and a human-sounding voice.
However, if machines become very human-like, but not quite, people often find them creepy, which could undermine their trust.
Even if we do not necessarily like the image that algorithms reflect of our society, it seems that we still want to live with them and to make them look and act like us. And if so, shouldn't algorithms be allowed to make mistakes too?
This article by Maude Lavanchy, Research Associate, IMD Business School, is republished from The Conversation under a Creative Commons license. Read the original article.