How to Avoid Biases in Your AI Implementation


In most circles, the word "bias" carries a negative connotation. In the media, it means information has been slanted one way or another. In science, it means preconceived notions have led to inaccurate conclusions. When it comes to artificial intelligence, the biases of the people who program the software – and of the data it learns from – can lead to unsatisfactory results.

Bias is a deviation from reality in the collection, analysis, or interpretation of data. Intentional or not, most people are somewhat biased in how they see the world, which affects how they interpret data. As technology plays an ever more crucial role in everything from employment to criminal justice, a biased AI system can have a significant impact.

Before humans can trust machines to learn and interpret the world around them, we need to eliminate the biases in the data from which AI systems learn. Here's how to avoid such biases when implementing your own AI solution.

1. Start with a diverse team.

Any deep learning model in an AI system will be limited by the collective experience of the team behind it. If that team is homogeneous, the system will make judgments and predictions based on a narrow, imprecise model of the world. For Adam Kalai, co-author of the paper "Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings," eliminating bias in AI is like raising a baby. For better or worse, the baby – or the AI system – will think the way you teach it to think. It also takes a village. So assemble a diverse team to lead your AI effort. You will be more likely to identify nuanced biases earlier and more precisely.
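
As a small illustration of the kind of bias that paper measures, here is a minimal sketch, assuming a pretrained word embedding loaded with gensim; the file path, word list, and `gender_projection` helper are illustrative assumptions, not the paper's method. It projects occupation words onto a he-she direction, and a projection far from zero suggests the embedding has absorbed a gendered association.

```python
# Minimal sketch: measuring gendered associations in a word embedding.
# The embedding file path and the word lists below are illustrative only.
import numpy as np
from gensim.models import KeyedVectors

vectors = KeyedVectors.load_word2vec_format("embeddings.bin", binary=True)

def gender_projection(word: str) -> float:
    """Project a word onto the he-she direction; values far from 0 suggest a gendered association."""
    direction = vectors["he"] - vectors["she"]
    direction /= np.linalg.norm(direction)
    w = vectors[word] / np.linalg.norm(vectors[word])
    return float(np.dot(w, direction))

for occupation in ["programmer", "homemaker", "nurse", "engineer"]:
    print(occupation, round(gender_projection(occupation), 3))
```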

To reduce hiring bias when building your team, examine the language of your job postings and remove biased terms. The word "ninja," for example, may seem to make a job advertisement more attractive, but it may discourage women from applying because society tends to read the word as masculine. Another tactic is to reduce the number of hard job requirements by listing them as preferred qualifications instead. This, too, will encourage more women to apply – not because they lack those credentials, but because women tend not to apply unless they meet all of them. Finally, create standard interview questions and a post-interview debriefing process to ensure that all interviewers in your company work within the same framework when assessing candidates.
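
One lightweight way to start that job-posting review is a simple scan against a list of loaded terms. The sketch below is illustrative only; the flagged words and the `screen_posting` helper are assumptions, not a vetted lexicon, and string matching is no substitute for human review.

```python
# Minimal sketch: flag potentially gender-coded terms in a job posting.
# The word list is illustrative only; a real review should rely on a vetted
# lexicon and human judgment, not just string matching.
import re

FLAGGED_TERMS = {"ninja", "rockstar", "dominant", "aggressive", "guru"}

def screen_posting(text: str) -> list[str]:
    """Return any flagged terms found in the posting text."""
    words = set(re.findall(r"[a-z]+", text.lower()))
    return sorted(words & FLAGGED_TERMS)

posting = "We're hiring a JavaScript ninja to join our aggressive growth team."
print(screen_posting(posting))  # ['aggressive', 'ninja']
```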

2. Ask your diverse team to teach your chatbots.

Like humans, robots make smarter choices when they have more data and more experience to draw from. "Collect enough data for your chatbot to make good decisions. Automated agents must constantly learn and adapt, but they can only do so if they receive the right data," said Fang Cheng, CEO and co-founder of Linc Global. Chatbots learn by studying previous conversations, so your team needs to feed your bot data that teaches it to respond the way you want. The Swedish bank SEB, for example, even taught its virtual assistant Aida to detect a frustrated tone in a caller's voice so that the bot can transfer the caller to a human representative.

To accomplish something similar without introducing bias, you may need to build datasets that give your bot examples from several demographics. Establish a process to detect problems. Whether you use an automated platform or manually review customer conversations, look for patterns in customer chats. Do customers opt for a human representative, or seem more frustrated, when they call about a specific problem? Do certain customer personas end up upset more often? Your chatbots may be mismanaging or misunderstanding a certain type of customer concern – or a certain type of customer. Once you've identified a common thread in requests from frustrated customers, you can provide your AI with the information it needs to correct course.
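
A simple way to start looking for those patterns is to aggregate escalation rates by issue type and customer segment. The sketch below is a minimal illustration; the field names, segments, and sample records are assumptions about how a chat-log export might be structured, not a real dataset.

```python
# Minimal sketch: compare escalation rates across issue types and customer segments.
# Field names and sample records are assumptions about a typical chat-log export.
from collections import defaultdict

chats = [
    {"issue": "billing",  "segment": "new customer",       "escalated": True},
    {"issue": "billing",  "segment": "returning customer", "escalated": False},
    {"issue": "shipping", "segment": "new customer",       "escalated": False},
    {"issue": "billing",  "segment": "new customer",       "escalated": True},
]

# (issue, segment) -> [number escalated, total conversations]
totals = defaultdict(lambda: [0, 0])
for chat in chats:
    key = (chat["issue"], chat["segment"])
    totals[key][0] += chat["escalated"]
    totals[key][1] += 1

for (issue, segment), (escalated, count) in sorted(totals.items()):
    print(f"{issue} / {segment}: {escalated / count:.0%} escalated ({count} chats)")
```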

3. Show the world what your AI thinks.

Transparency is perhaps just as important as diversity when it comes to building an AI system people can trust. There are currently no laws governing the rights of consumers subject to decisions made by an AI algorithm. The least companies can do is be completely transparent with consumers about why those decisions were made. Despite common fears in the industry, that doesn't mean giving away the code behind your AI.

Simply provide the criteria the system used to make its decisions. For example, if the system refuses a credit application, it should explain the factors that drove the refusal and what the consumer can do to improve their chances of qualifying next time. IBM has launched a software service that looks for bias in AI systems and determines why automated decisions were made. Tools like this can help with your transparency efforts.
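
One generic way to surface those criteria (this is an illustration, not the IBM tool mentioned above) is to report how much each input contributed to a model's decision. The sketch below assumes a simple linear credit-scoring model; the feature names, weights, and threshold are made-up values for demonstration.

```python
# Minimal sketch: report the factors behind a hypothetical credit decision.
# The model, feature names, weights, and applicant values are illustrative assumptions.
import numpy as np

feature_names = ["income", "debt_ratio", "late_payments", "credit_history_years"]
weights = np.array([0.8, -1.5, -2.0, 0.6])   # illustrative linear-model weights
bias = -0.2

applicant = np.array([0.3, 0.7, 0.9, 0.2])   # standardized feature values

score = float(np.dot(weights, applicant) + bias)
approved = score > 0

# Per-feature contributions explain which factors pushed the decision up or down.
contributions = weights * applicant
print("Approved" if approved else "Declined")
for name, contrib in sorted(zip(feature_names, contributions), key=lambda x: x[1]):
    print(f"  {name}: {contrib:+.2f}")
```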

The potential for bias to taint a company's AI program is a real concern. Fortunately, there are ways to broaden the diversity of your AI's source data and root out significant biases. By doing so, you will help your business – and society – truly realize the benefits AI can offer.

Brad Anderson

Editor of ReadWrite

Brad is the publisher who oversees the content provided on ReadWrite.com. He previously worked as an editor at PayPal and Crunchbase. You can reach him at brad at readwrite.com.