Without question, the world is moving toward automation and artificial intelligence, and these systems pick up on some of our deeply ingrained prejudices.
While scenarios like The Matrix and I, Robot might seem like folly, AI picks up behaviors from us humans. As a result, some AIs adapt based on new information. Others, regrettably, can become racist, murderous, or sexist.
Can we prevent AIs from learning negative behaviors and traits?
Leeloo Dallas Multipass or How AIs Learn
If you’re a fan of science fiction, you’ve no doubt seen the movie The Fifth Element. Milla Jovovich plays a perfect being created from a strand of alien DNA who learns about humanity and our history after she is “born”.
Similarly, artificial intelligences learn using videos, images, words, and data of all kinds. Because they absorb knowledge this way, their lessons can get a bit muddled; human history is, after all, muddled too.
As we saw with Microsoft’s Tay bot, humans can manipulate anything with machine learning capabilities if they really want to. But in this instance, humans didn’t manipulate how or what the AI learned.
A University of Washington research team studied how computer vision algorithms handled gender predictions based on an image data set. Using a classic set of images typical in AI predictive experiments, the AI neural network predicted women to be doing traditionally “female” tasks in the images.
You know these kinds of tasks: cooking and the like. The problem is that the image could be a balding man in a kitchen and the AI would still predict a woman. With a predictive algorithm, mitigating biases matters.
These biases emerged from one of five major areas in which machine learning acquires biases:
- Bias through interaction
- Emergent bias
- Data-driven bias
- Conflicting goals bias
- Similarity bias
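Data-driven bias, the category most relevant to the study above, can be shown in miniature. The toy dataset and the `majority_label` helper below are invented for illustration; the point is that a model never "decides" to be sexist, it simply reproduces the skew in its training data:

```python
from collections import Counter

# Hypothetical toy training set of (activity, gender) pairs.
# The 4:1 skew mimics the kind of imbalance found in image datasets.
training_data = [
    ("cooking", "woman"), ("cooking", "woman"), ("cooking", "woman"),
    ("cooking", "woman"), ("cooking", "man"),
    ("driving", "man"), ("driving", "man"), ("driving", "man"),
    ("driving", "man"), ("driving", "woman"),
]

def majority_label(activity, data):
    """Predict whichever gender co-occurs most often with the activity.

    Data-driven bias in miniature: the prediction is just an echo
    of the imbalance already present in the training examples.
    """
    counts = Counter(gender for act, gender in data if act == activity)
    return counts.most_common(1)[0][0]

print(majority_label("cooking", training_data))  # reflects the 4:1 skew
```

A balding man photographed in a kitchen would still be labeled by this logic according to the majority pattern, which is exactly the failure the researchers observed.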
Of course, these kinds of biases aren’t new and people are doing something about them. Unfortunately, there’s something else far more troubling with this outcome.
Sexist Predictions Magnified into Misclassification
It’s not stellar that the neural network predicts women are 33 percent more likely to appear in cooking images. But that isn’t the biggest concern. The problem is that these biases become amplified across the neural network, leading to further misclassification and bias.
MIT Technology Review reports: “So, trained on that data set, an AI was 68 percent more likely to predict a woman was cooking and did so even when an image was clearly of a balding man in a kitchen.”
This dataset is just one of the catalogs of images we use to train machine learning algorithms. Imagine what that percentage could climb to if the neural network explored more data sets.
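The amplification effect can be sketched as a simple before-and-after measurement. The numbers and the `skew` helper below are made up for illustration (they are not the study's actual figures): we compare how skewed the labeled data is with how skewed the model's predictions are.

```python
def skew(pairs, activity, group):
    """Fraction of `activity` examples associated with `group` (0..1)."""
    matching = [gender for act, gender in pairs if act == activity]
    return matching.count(group) / len(matching)

# Hypothetical ground-truth labels vs. model predictions for the same images.
labels      = [("cooking", "woman")] * 2 + [("cooking", "man")] * 1
predictions = [("cooking", "woman")] * 3  # the model rounds the skew up

train_bias = skew(labels, "cooking", "woman")       # about 0.67
pred_bias  = skew(predictions, "cooking", "woman")  # 1.0
print(pred_bias - train_bias)  # a positive value means amplification
```

In this toy case a two-thirds skew in the data becomes a total skew in the output: the model does not merely inherit the bias, it exaggerates it.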
Why is This a Problem?
It might not be a huge issue if this algorithm is used for targeting social media ads. However, if this algorithm is used in predictive crime software, that amplification could turn problematic — even deadly.
An Equitable Future for AI & Humans
Minds from prestigious institutions such as MIT, Stanford, and Harvard have expressed concern. Biases based on gender, ethnicity, and other criteria are all relevant. After all, machine bias is human bias, given how machine learning works in its current iteration.
Some say we should appoint an AI watchdog to guard against unfair and discriminatory practices. Other members of the scientific community have taken a more studious approach to the matter.
The AI Now Initiative is dedicated to determining the long-term social implications and effects of artificial intelligences (and their potential biases). Focused on liberties, bias, inclusion, automation, labor, and other issues related to the future of AI, the research group aims to work across multiple disciplines to better understand the social impacts of humans on AI and vice versa.