OPINION: Bias, Ethical AI and Bias Bounties

This opinion piece was written by Christophe Bourguignat, founder of Zelros, a global software vendor whose platform supports insurance distribution with hyper-personalized recommendations across channels. Before creating Zelros in 2016, Bourguignat worked at the AXA Data Innovation Lab and DataRobot. Zelros’ clients include BPCE Group, Crédit Agricole and Groupama; Zelros helps them stay competitive, generate higher revenue and maintain compliance with ethical AI programs.

Imagine this: it’s 2015 and you’re working at a large international corporation as a machine-learning specialist. You’ve been assigned to help create an artificial intelligence (AI) program that can optimize the hiring process at the company. You gladly take on the project, and at the next meeting your boss fills you in on the work to be done in the weeks ahead.

You and your team will be creating a program that selects only the most qualified job candidates when screening resumes. You get to work and, ultimately, achieve this by training computer models to observe patterns in the resumes submitted to the company over the previous 10 years. You and the team feel confident that the project was a success–until it all blows up and garners international coverage.

This blow-up was a real event that happened at Amazon. By 2015, the company had discovered that the recommendations generated by its AI were biased against women. Because most of the resumes had come from men, the AI taught itself that male candidates were preferable to female candidates. The AI even went as far as to penalize candidates who had attended women’s colleges, and downgraded resumes that included the word “women’s,” as in “captain of women’s swim team.”
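To see mechanically how this kind of bias creeps in, consider a minimal, hypothetical sketch in Python (not Amazon’s actual system): a toy resume classifier trained on skewed historical decisions. The resumes and labels below are invented for illustration; the point is that the token “women” picks up a negative learned weight purely because it co-occurs with rejections in the training data.

```python
# A toy, hypothetical sketch of how a resume classifier can absorb bias
# from skewed historical data. Labels: 1 = advanced to interview, 0 = rejected.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Invented training data: in this (deliberately skewed) history, resumes
# mentioning women's organizations were mostly rejected.
resumes = [
    "captain of chess club, python developer",               # advanced
    "led robotics team, java developer",                     # advanced
    "captain of women's swim team, python developer",        # rejected
    "president of women's coding society, java developer",   # rejected
    "hackathon winner, python developer",                     # advanced
    "captain of women's debate club, c++ developer",          # rejected
]
labels = [1, 1, 0, 0, 1, 0]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, labels)

# The learned weight for the token "women" comes out negative: the model
# has taught itself to penalize resumes mentioning women's organizations.
idx = vectorizer.vocabulary_["women"]
print("weight for 'women':", model.coef_[0][idx])
```

Note that gender is never an explicit feature here; the bias rides in entirely on the historical labels the model was asked to imitate.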

Now well into 2022, industries and companies are implementing AI and machine learning (ML) in software more than ever before to streamline processes and drive organizational growth. As with any new and evolving technology, AI has its limitations regarding how much autonomy it can actually handle. At this level, we’re no longer talking about Alexa and Google Home helping us find recipes and telling us how far the moon is from the Earth. Some of these AI solutions are making determinations that impact our health and safety, and at those levels, there is absolutely no place for bias or discrimination.

So how do industries avoid having their AI programs launch them into the media spotlight–for all the wrong reasons?

Strengthen and Protect Your Hiring System

First, let’s look at what exactly makes AI ethical. Ethical AI is AI designed and deployed to adhere to thorough, well-defined ethical guidelines grounded in fundamental values: individual rights, privacy, non-discrimination, and non-manipulation, among many others.

What comes to mind when we think of AI going rogue is Skynet from the Terminator movie franchise. Though that is science fiction, there are real scenarios in which AI is used for unethical purposes. These include disinformation on the internet, environmental damage, and human and societal abuses–actions that can be intentional and malicious. However, AI can also produce unethical outcomes unintentionally. How?

At its core, AI is fueled by human-generated data, and humans are prone to biases based on their experiences. However unbiased we like to think we are, our neighborhoods, faith, and personal conclusions about life come together to form our understanding of the world. What’s true for us may not be true for someone living across the street, let alone across the country.

For the individuals and teams developing these AI systems–systems that a multitude of people will use–there can’t be only one demographic supplying the training data. In this constantly evolving field of AI, and in the discussions around ethical AI use, it’s best if diverse teams work together on these systems, including women, people of color, people with disabilities, and people of different ages, socioeconomic statuses, and cultural backgrounds.

Women and people of color (particularly the BIPOC community) still struggle to break into the tech field in significant numbers, and may find it discouraging to even begin, despite how passionate they may be. Looking at the Black community, the percentage of Black employees at major tech companies remains low: 6% at Twitter, 4.5% at Microsoft, 4.4% at Slack, 3.8% at Facebook, and 2.9% at Salesforce. The numbers do jump slightly for Lyft (9%) and Uber (9.3%), but those figures tend to reflect lower-paid, on-the-ground roles.

Workforce statistics from Amazon showed that from 2018 to 2020, the company increased its Black and Hispanic headcount. However, most of that growth occurred among delivery drivers and warehouse workers. Less than 4% of senior-level managers at Amazon were Black or Hispanic in 2020, according to company data.

Shifting gears to gender, a 2020 study by the AnitaB.org Institute found that women make up only 28.8% of the tech workforce, meaning women are outnumbered almost 3 to 1 in these spaces. Another study, published by the World Economic Forum, found that only 22% of AI professionals worldwide are female, compared to 78% male. The data suggests that among people with AI skills, women are less likely than men to be promoted to senior roles, which limits their ability to acquire expertise in higher-profile and emerging skills.

These findings raise the question: do gender diversity, racial diversity, and the representation of other minority groups on teams matter in the development and implementation of tech, especially when it comes to artificial intelligence?

The answer is that they absolutely do. Companies can implement bias bounties that reward white hat hackers (people not affiliated with the company) for finding bias in their AI systems so it can be removed–bias that is typically the result of incomplete or bad data. If a small team of developers all come from the same background and share the same experiences, bias is even more likely to occur. Organizations can solve for this more completely by focusing their time and resources on selecting qualified candidates from diverse backgrounds who bring their specialized knowledge to the table. As Ben Franklin put it in his timeless 1736 phrase, “An ounce of prevention is worth a pound of cure.”
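To make the auditing side concrete, here is a minimal sketch in Python of one check a bias-bounty participant or internal reviewer might run against a hiring model’s decisions: the “four-fifths rule” (disparate impact ratio) drawn from US employment guidelines. The decisions, group labels, and threshold below are illustrative assumptions, not a complete fairness audit.

```python
# A minimal sketch of a disparate impact check (the "four-fifths rule").
# Decisions: 1 = advanced to interview, 0 = rejected. All data is invented.

def selection_rate(decisions):
    """Fraction of a group's candidates the model advanced."""
    return sum(decisions) / len(decisions)

def disparate_impact(group_a, group_b):
    """Ratio of the lower selection rate to the higher one."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical model decisions for two groups of candidates.
men = [1, 1, 1, 0, 1, 1, 0, 1]      # selection rate: 0.75
women = [1, 0, 0, 1, 0, 0, 0, 1]    # selection rate: 0.375

ratio = disparate_impact(men, women)
print(f"disparate impact ratio: {ratio:.2f}")  # prints 0.50

# Under the four-fifths rule of thumb, a ratio below 0.8 is a red flag
# worth reporting for human review, not an automatic verdict of bias.
if ratio < 0.8:
    print("Potential adverse impact: flag for review.")
```

A check like this is cheap to run on any set of model decisions, which is exactly why bounty programs can surface problems that a homogeneous in-house team might never think to look for.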

Employing a diverse pool of people who vary not only in gender and race but also in age, religion, and a host of other relevant dimensions can be invaluable in combating bias in AI.

Yes, the tech we use in our everyday lives has come a long way, even in just the last few years. But relying solely on AI and ML to decide who is the best applicant for a job is a recipe for disaster if representation is lacking at the design stage. The ability to intervene in the system–to catch red flags before the AI turns what it has gathered into a final decision–can keep employers out of controversial headlines, help qualified candidates from marginalized demographics land executive roles, and ensure that the products and services available to the public are as ethical and sound as humanly possible.