The True Impact of Sexist and Racist Bias in Algorithms

The sexist and racist bias embedded in artificial intelligence is often minimized. Even amid the strength of the feminist wave behind the #MeToo movement, algorithms have become a source of bias that depends on who builds them, how they are developed, and how they are used.

This bias in programming affects women’s daily lives, from job searches to airport security checks.

There are many documented cases of algorithmic sexism, from the wage gap to discrimination by skin color or race. A white woman may not be offered the same line of credit as her husband; a Latina who searches for the word “Latina” on Google Spain will find the top results limited to a porn category or to cleaning ladies who also offer sexual massages. A close look at any search engine reveals the bias created by colonialist prejudice.

The burden of the hegemonic narrative

Artificial intelligence (AI) learns from old data, often from the last 10 or 20 years, and can unwittingly reproduce past prejudices. Learning over time requires constant updating with the most recent social advances in gender, attitudes, and language. When that updating is missing, outdated stereotypes are perpetuated. Most AI systems, for example, have never heard of the #MeToo movement or the Chilean anthem “The rapist is you.”

In many cases, the problem lies in the information processed: the more biased text a system ingests, the more bias it absorbs. A study published in 2016 examined the machine-learning techniques used to train word embeddings on Google News text. Asked to complete the analogy “man is to computer programmer as woman is to x,” the model gave the biased answer: “x = homemaker.”
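
To make the analogy test concrete, here is a minimal sketch of how such a query can be run against pretrained word vectors. It assumes the publicly released Google News word2vec model has been downloaded locally; the file name and the choice of the gensim library are illustrative assumptions, not details taken from the study.

```python
# Minimal sketch of the analogy test described above (assumed setup:
# gensim installed and the pretrained Google News vectors downloaded;
# the file name below is illustrative).
from gensim.models import KeyedVectors

vectors = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True
)

# "man is to computer_programmer as woman is to x", solved as
# vec(computer_programmer) - vec(man) + vec(woman)
for word, similarity in vectors.most_similar(
    positive=["computer_programmer", "woman"],
    negative=["man"],
    topn=3,
):
    print(f"{word}: {similarity:.3f}")
```

The vector arithmetic simply returns whichever words the training text most strongly associates with women in that role, which is how the stereotype surfaces.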

How do the algorithms discriminate against women?

Although Ada Lovelace became the first person to write an algorithm in the 19th century, artificial intelligence today discriminates against women.

After Lovelace wrote the first algorithm for a computing machine, doors opened to multiple feminist achievements, since women were considered better at detail-oriented tasks, such as programming the then-named Electronic Numerical Integrator and Computer (ENIAC).

Later, during World War II, when women had to take over new workplaces, six women programmed the first electronic computer. Yet their work went unrecognized when the machine was unveiled in 1946.

Forty years later, with the technological boom of the 1980s, the tech sector was dominated by men. The problem, however, is not that men code with bias; it is the lack of diversity inside the teams.

One of the best-known cases of AI-driven discrimination was Amazon‘s attempt to automate its recruitment system. By 2018, Jeff Bezos’ multinational had spent four years screening and selecting candidates with a sexist AI tool. Although it was eventually discarded, Amazon’s computer models had been trained to vet applicants by observing patterns in resumes submitted over the previous ten years. Since the industry was predominantly male at the time, most of those resumes came from men, producing a machine-learning bias that favored men.
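
The mechanism is easy to reproduce in miniature. The sketch below is not Amazon’s system; it trains a simple classifier on synthetic, hypothetical data in which a gendered term happens to be absent from past (mostly male) hires, and shows that the model learns to penalize that term on its own.

```python
# Simplified, synthetic illustration of how a screening model learns
# bias from skewed historical data. NOT Amazon's system; all features
# and labels here are invented for demonstration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1_000

# Feature 1: a genuine skill signal.
skill = rng.normal(size=n)
# Feature 2: 1 if the resume contains a term like "women's"
# (e.g., "women's chess club captain"), 0 otherwise.
womens_term = rng.integers(0, 2, size=n)

# Historical labels: past hires were mostly men, so hiring correlates
# with the absence of the term even though it says nothing about skill.
hired = ((skill > 0) & (womens_term == 0)).astype(int)

X = np.column_stack([skill, womens_term])
model = LogisticRegression().fit(X, hired)

# The model assigns a strongly negative weight to the gendered term:
# it has learned to penalize it, reproducing the historical bias.
print("weight on gendered term:", model.coef_[0][1])
```

Reporting on the Amazon case described exactly this failure mode: the tool downgraded resumes that contained the word “women’s.”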

In her book Algorithms of Oppression, UCLA associate professor Safiya Umoja Noble exposes how people of color and Latinos are discriminated against by algorithms. She cites the redlining practices often used in real estate, health, and financial circles, which deepen racial inequalities and make people of color more likely to pay higher interest rates.

For Noble, the role of the Internet in our daily lives has also embedded discriminatory habits. The growing presence of artificial intelligence in technologies we depend on, whether by choice or not, could become “an important human rights issue in the 21st century,” the author says.

How to fix the bias?

Algorithms and humans also differ in how they respond to detected bias. Fay Payton, professor of information systems and technology at North Carolina State University, proposes finding new solutions to algorithmic bias by designing “with feminist thinking.”

For Payton, the objective is to create new variables that root out algorithmic bias, which is not only sexist toward women but also racist toward African Americans and Latinos.
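
One common first step toward that objective is simply to measure the disparity: compare how often a model selects people from different groups. The sketch below is a minimal, hypothetical demographic-parity check; the decisions and group labels are invented for illustration.

```python
# Minimal demographic-parity audit: compare selection rates across
# groups. Decisions and group labels are hypothetical examples.
import numpy as np

def selection_rates(decisions: np.ndarray, groups: np.ndarray) -> dict:
    """Fraction of positive decisions for each group."""
    return {g: float(decisions[groups == g].mean()) for g in np.unique(groups)}

decisions = np.array([1, 0, 1, 1, 0, 0, 1, 0])  # 1 = selected
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

print(selection_rates(decisions, groups))
# {'A': 0.75, 'B': 0.25} -> a large gap worth investigating
```

A gap like this does not prove discrimination by itself, but it flags where a model’s variables deserve the kind of scrutiny Payton describes.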

“To be clear, this is not just about doing something because it is morally correct. But we know that women, African Americans, and Latinx people are under-represented in IT fields. And there is ample evidence that a diverse, inclusive workforce improves a company’s bottom line,” Payton says. “If you can do the right thing and improve your profit margin, why wouldn’t you?”

For Sendhil Mullainathan, the American professor of Computation and Behavioral Science at the University of Chicago and author of Scarcity: Why Having Too Little Means So Much, there is still hope: algorithmic bias is more manageable to change than humans’ inflexible prejudices.

“We must ensure all the necessary inputs to the algorithm, including the data used to test and create it, are carefully stored,” Mullainathan says. “Something quite similar is already required in financial markets, where copious records are preserved and reported while preserving the commercial secrecy of the firms involved. We will need a well-funded regulatory agency with highly trained auditors to process this data. Once proper regulation is in place, better algorithms can help to ensure equitable treatment in our society.”

Although holding the algorithm itself accountable sounds straightforward, a fast-growing society inclined toward introspection may call for a different approach to data collection.

Technology learns from us, its creators. It is therefore up to us to change the variables, not only in technology itself but also in education, so that future data analysis and intelligence systems do not inherit our biases.