Bias in AI
As artificial intelligence (AI) continues to establish itself as a core technology affecting many parts of our lives, we must keep its limitations in view, notably bias within AI systems. AI is only becoming more common in business operations: 60% of companies have already implemented AI (such as machine learning and data analytics), and 84% of businesses believe AI will help them gain or maintain a competitive advantage. At the same time, 85% of AI projects will produce inaccurate results due to anomalies in data, algorithms, or the teams handling them. All of this is to say that bias within AI is an ongoing problem, and one that is increasingly imperative for organizations to address. As we wrote last year, bias in AI can be combated with the proper teams and tools. Let's revisit why we all must continue to learn how to eliminate bias from AI programs, and look at two major organizations doing this work.
A common form of bias in AI systems is sample bias, where one population is heavily overrepresented or underrepresented in the training data. For example, the team behind one speech-to-text program decided to train the system on audiobook recordings; after all, it makes sense to just grab those voice recordings and teach the AI to convert them into text, right? This is where bias crept in: the majority of audiobook recordings featured educated, middle-aged, white men. As a result, whenever someone from a different socioeconomic background or racial group used the speech-to-text software, it underperformed. A quick audit like the one sketched below can surface that kind of skew before a model is ever trained.
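To make this concrete, here is a minimal Python sketch of one way to check a training corpus for sample bias before training begins. The DataFrame, column name, and speaker groups are hypothetical stand-ins for real corpus metadata, not details from the speech-to-text project described above.

```python
import pandas as pd

# Hypothetical metadata for an audiobook-style training corpus; in a real
# project this would come from the corpus's speaker annotations.
corpus = pd.DataFrame({
    "clip_id": range(8),
    "speaker_group": [
        "middle-aged white men", "middle-aged white men",
        "middle-aged white men", "middle-aged white men",
        "middle-aged white men", "young Black women",
        "older Latino men", "young Asian women",
    ],
})

# Compare each group's share of the clips against an equal-share baseline
# and flag any group that dominates the corpus.
shares = corpus["speaker_group"].value_counts(normalize=True)
baseline = 1 / corpus["speaker_group"].nunique()
for group, share in shares.items():
    flag = "  <-- overrepresented" if share > 2 * baseline else ""
    print(f"{group}: {share:.0%} of clips{flag}")
```

Even a simple report like this makes the skew visible early, when it is still cheap to collect more representative recordings.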
We see sample bias in the beauty and fashion industry as well. Even when beauty and fashion brands promised to represent more Black folks during the peak of the BLM protests in 2020, an analysis by Quartz suggested that may not have actually happened. Specifically, the analysis examined models' skin tones in 27,000 images from the feeds of 34 fashion and beauty brands. It found that although brands may have slightly increased the representation of models of Color on their social media feeds following the major racial-justice protests in the US, the increase was not significant enough to show any real change.
What does this have to do with AI? Beauty brands, for example, are beginning to adopt AI technology for their customer experience and product development workflows. Some brands have deployed "shade matching" AI, and others have launched smart machines that analyze a customer's photo to recommend the appropriate hair dye. If the majority of sample skin colors or hair colors and textures come from a dominant identity group (e.g., white women with straight hair), then an AI program won't provide accurate services for customers with other skin colors or hair colors and textures. This is why it's imperative to train AI algorithms without sample bias, using training data that broadly represents all individuals; one common mitigation, rebalancing the data by group, is sketched below.
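Here is a minimal sketch of that mitigation, assuming the brand's labeled images come with a metadata table. The `skin_tone` column and the data are hypothetical; a real pipeline would also consider collecting more underrepresented samples rather than simply discarding overrepresented ones.

```python
import pandas as pd

def rebalance_by_group(df: pd.DataFrame, group_col: str, seed: int = 0) -> pd.DataFrame:
    """Downsample every group to the size of the smallest one, so no
    single identity group dominates the training set."""
    smallest = df[group_col].value_counts().min()
    return df.groupby(group_col).sample(n=smallest, random_state=seed)

# Hypothetical image metadata for a shade-matching model.
images = pd.DataFrame({
    "image_id": range(6),
    "skin_tone": ["light", "light", "light", "light", "medium", "dark"],
})
balanced = rebalance_by_group(images, "skin_tone")
print(balanced["skin_tone"].value_counts())  # one image per group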
Two organizations are working to address sample bias, along with various other forms of bias within AI, in order to provide a more ethical and inclusive digital transformation process for organizations that are looking to launch, or have already launched, AI technology.
1. Algorithmic Justice League
This organization raises public awareness of how racism, sexism, and ableism are perpetuated through standard AI efforts. The AJL believes that "who codes matters, how we code matters, and that we can code a better future," and that the benefits of technology, especially AI, should reach every person, not just a specific group of people. The AJL's founder, Dr. Joy Buolamwini, personally experienced racial discrimination while working with facial recognition software as a graduate student at MIT, which sparked the idea for the organization. The AJL calls for support from engineers, techies, policymakers, and journalists alike who are committed to ending bias within AI.
2. Responsible AI Institute
This nonprofit organization has built a first-of-its-kind certification system that reinforces the need to uphold human rights within AI. It believes that AI should be inclusive and unbiased, and held accountable when harm is caused, especially because 60% of companies have implemented AI operations (e.g., machine learning, big data analytics) this year according to its statistics. Its certification framework scores an organization's use of AI across five categories: bias and fairness, accountability, robustness, data quality and rights, and explainability and interpretability. The Responsible AI Institute knows that we all have a choice to build a technologically advanced future that either will, or will not, be trusted by everyone, so we must continue to work on deploying AI responsibly. A lightweight illustration of the kind of check a bias-and-fairness audit might run appears below.
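To illustrate that first category, here is a short sketch that compares a model's accuracy across demographic groups and reports the gap. This is a generic auditing technique, not the Responsible AI Institute's actual scoring method, and all names and data below are hypothetical.

```python
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group, predicted_label, true_label) tuples."""
    correct, total = defaultdict(int), defaultdict(int)
    for group, predicted, truth in records:
        total[group] += 1
        correct[group] += int(predicted == truth)
    return {group: correct[group] / total[group] for group in total}

# Hypothetical predictions from a shade-matching model.
records = [
    ("light skin", "shade_2", "shade_2"),
    ("light skin", "shade_3", "shade_3"),
    ("dark skin", "shade_8", "shade_6"),
    ("dark skin", "shade_7", "shade_7"),
]
scores = accuracy_by_group(records)
gap = max(scores.values()) - min(scores.values())
print(scores)
print(f"accuracy gap between groups: {gap:.0%}")
```

A large gap between the best- and worst-served groups is the kind of signal that should send a team back to its training data before deployment.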
At Iterate.ai, we are committed to diversity, equity, inclusion, and belonging. With these among our core values, we know how crucial it is to support responsible AI efforts. That is why we are constantly considering how to combat bias within AI technologies, and why we bring it up when we meet with our clients. AI can be programmed and deployed to be equitable for all individuals; let's continue to work towards this reality together.