In a recent episode of Zoey’s Extraordinary Playlist, Zoey, a software developer, learns that her company’s smartwatch has a coding glitch: it doesn’t recognize Black people. She discovers that this is not merely a software problem but a systemic one in her company and her industry, where Black employees are disproportionately underrepresented. Though the show is fictional, the storyline is not. As more and more products using artificial intelligence enter the market, we are finding that even computers can exhibit bias.
As Joy Buolamwini explains in this TED Talk from 2016, facial recognition programs are dependent on machine learning, and that learning is dependent on training sets. If those sets are not diverse, then we end up with problems like those described by Buolamwini and others.
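To make the idea concrete, here is a toy sketch (not a real face-recognition system; all names and numbers are made up) of how a model fit only to a non-diverse training set can work well for the group it was trained on and fail for everyone else:

```python
# Toy illustration: a 1-D "detector" whose acceptance threshold is
# learned from whatever training data it sees. Hypothetical numbers.

def fit_threshold(samples):
    """Accept anything within 2 standard deviations of the training mean."""
    mean = sum(samples) / len(samples)
    var = sum((x - mean) ** 2 for x in samples) / len(samples)
    return mean, 2 * var ** 0.5

def detects(model, x):
    mean, tol = model
    return abs(x - mean) <= tol

# Pretend group A's image features cluster near 0.0 and group B's near 5.0.
group_a = [-0.4, -0.2, 0.0, 0.1, 0.3, 0.5, -0.1, 0.2]
group_b = [4.6, 4.9, 5.0, 5.2, 5.4]

# Non-diverse training set: only group A examples.
model = fit_threshold(group_a)

acc_a = sum(detects(model, x) for x in group_a) / len(group_a)
acc_b = sum(detects(model, x) for x in group_b) / len(group_b)
print(f"group A detection rate: {acc_a:.0%}")  # prints 100%
print(f"group B detection rate: {acc_b:.0%}")  # prints 0%

# Diversifying the training set closes the gap.
model = fit_threshold(group_a + group_b)
acc_b_after = sum(detects(model, x) for x in group_b) / len(group_b)
print(f"group B after retraining: {acc_b_after:.0%}")  # prints 100%
```

The model is never told to discriminate; the bias comes entirely from what its training data leaves out, which is exactly the failure mode Buolamwini describes.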
Buolamwini, who calls this problem “the coded gaze,” realized she needed to be part of the solution and founded the Algorithmic Justice League, which works to combat racism and other forms of discrimination in artificial intelligence. In a recent Twitter thread for National Geographic during Black History Month, Buolamwini gives more examples of the ways AI can fail.
Here I’ll lay down a few verses highlighting the ways AI can misinterpret the images of iconic Black women: Sojourner Truth, Ida B. Wells, Shirley Chisholm, @MichelleObama, @Oprah, @serenawilliams. #EthicalAI 6/18 pic.twitter.com/HJMH6qeWeJ
— National Geographic (@NatGeo) February 22, 2021
As artificial intelligence becomes more ubiquitous, it is important to keep encouraging diverse groups of young people to learn to code ethically, so that future generations do not inadvertently (or deliberately) create biased programs.
I will be adding this post to my growing collection of Anti-Racism resources. Please take a look, and feel free to offer suggestions!