Companies that have experimented with AI are well aware of the problems with data: it may be incomplete, there may not be enough of it, and so on. Beyond the familiar issues of data processing and storage, companies must be careful when collecting sensitive personal data such as race or gender. They need to distinguish the cases where collecting this information would cause harm from those where it can actually lead to more equitable outcomes. Many executives would be surprised to learn that recent research in AI ethics shows that removing sensitive personal information from algorithms can do more harm than good.
To understand how, we’ll look to a real-life example: Amazon’s deployment of a hiring algorithm in 2015. The intent was to free up employees’ time by using AI to screen for high-potential candidates. In practice, the algorithm simply replicated the company’s prior hiring practices and, in doing so, revealed that they were not gender-balanced: previous hiring had skewed toward male software engineers, and the algorithm interpreted this to mean that men were the preferred candidates. Amazon stopped using the algorithm, but the case raises some interesting ethical questions.
The first reaction of many in Amazon’s position would be to simply shut down the algorithm and return to their old hiring practices. That is not the best course of action, because the old hiring practices are what produced the algorithm’s bias in the first place. In a sense, the algorithm amplified and exposed biased hiring practices, offering Amazon a way to reckon with them and improve. Without the outcomes surfaced by its hiring AI, Amazon would not have been forced to confront its bias and would lack the feedback needed to improve. Perhaps Amazon should be admonished for its previous hiring practices, but it should be applauded if it acknowledges those past transgressions and builds an algorithm that leads to better outcomes in the future.
What the Amazon case teaches us is that AI can reveal the biases embedded in our current human decision-making. Humans carry many cognitive biases and, until now, we haven’t had a data-driven way to acknowledge and confront their consequences. Given today’s political climate, and the speed with which companies can face public backlash for ethical missteps, the most common reaction is to revert to old practices and remove the biased algorithm. That would be a step backwards: it would forgo the chance to reveal ongoing discrimination through data auditing and statistics.
This leaves management with some very difficult decisions. A thoughtful approach to ethical AI forces leaders to grapple with whether to use sensitive social information (like race and gender). That may seem counterintuitive to those who have been removing this information from their algorithms in the hope of eliminating bias. But there now needs to be serious consideration of whether removing it is actually unethical, because doing so hides the bias and removes the ability to diagnose its source.
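To make the diagnostic point concrete, here is a minimal sketch of the kind of audit that becomes impossible once the sensitive attribute is discarded: computing selection rates per group and the ratio between them. The data and function names are invented for illustration; they are not from any particular company or library.

```python
from collections import Counter

def selection_rates(records):
    """Compute the selection rate for each group from (group, selected) pairs."""
    totals, selected = Counter(), Counter()
    for group, was_selected in records:
        totals[group] += 1
        selected[group] += was_selected
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact(rates, protected, reference):
    """Ratio of the protected group's selection rate to the reference group's.
    Values well below 1.0 (e.g. under the 0.8 'four-fifths' rule of thumb)
    flag a disparity worth investigating."""
    return rates[protected] / rates[reference]

# Hypothetical screening outcomes: (gender, 1 if advanced to interview).
outcomes = (
    [("male", 1)] * 60 + [("male", 0)] * 40 +
    [("female", 1)] * 30 + [("female", 0)] * 70
)

rates = selection_rates(outcomes)
print(rates)                                      # {'male': 0.6, 'female': 0.3}
print(disparate_impact(rates, "female", "male"))  # 0.5 -> flags a disparity
```

The key point: both functions take the group label as input. Strip gender from the records and the disparity is still there in the outcomes, but no audit like this can surface it.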
It also forces management to be vulnerable: they must be willing to expose their own and their company’s biased practices of the past. If management wants to reap the benefits that AI enables, they must bravely accept the task of reconciling the biases it helps to uncover, and work toward a more equitable future.