Is Artificial Intelligence Racial Bias Being Suppressed?

Artificial intelligence (AI) and machine learning power a variety of important modern software technologies. AI drives analytics software, Google's bugspot tool, and code compilers for programmers. It also powers the facial recognition software commonly used by law enforcement, landlords, and private citizens.

Of all the uses for AI-powered software, facial recognition is a big deal. Security teams from large buildings that rely on video surveillance – like schools and airports – can benefit greatly from this technology. An AI algorithm has the potential to detect a known criminal or an unauthorized person on the property. Some systems can identify guns while others can track each individual’s movements and provide a real-time update regarding their location with a single click.

Facial recognition software has phenomenal potential

Police in the U.S. have used facial recognition software to successfully identify mass shooting suspects. Police in New Delhi, India, used the technology to identify close to 3,000 missing children in four days. AI-powered software scanned 45,000 photos of children living in orphanages and foster homes and matched 2,930 kids to photos in the government's lost child database. That's an impressive success rate.

Facial recognition software is also used by governments to help refugees find their families through the online database called REFUNITE. This database combines data from multiple agencies and allows users to perform their own searches.

Despite the potential, AI-powered software is biased

Facial recognition software is purported to enhance public safety since AI algorithms can be more accurate than the human eye. However, that's only reliably true if you're a white male. The truth is, artificial intelligence algorithms carry an implicit bias against women and people with dark skin. That bias shows up in two major types of software: facial recognition software and risk assessment software.

For instance, researchers from MIT's Media Lab tested facial recognition software and found it misidentified dark-skinned women as men up to 35% of the time. Women and people with dark skin had the highest error rates overall.
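A disparity like the one MIT reported is typically measured by breaking a model's error rate out per demographic group rather than reporting a single overall number. The Python sketch below illustrates that calculation; the data and group labels are made up for illustration and are not the study's.

```python
# Minimal sketch of a disaggregated error-rate audit (illustrative data only).
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, predicted_label, true_label)."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

# Hypothetical audit records, not real study data.
sample = [
    ("lighter-skinned male", "male", "male"),
    ("lighter-skinned male", "male", "male"),
    ("darker-skinned female", "male", "female"),   # misclassification
    ("darker-skinned female", "female", "female"),
]
print(error_rates_by_group(sample))
# A large gap between groups is exactly what these audits flag as bias.
```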

Another area of bias is seen in risk assessments. Some jails use a computer program to predict the likelihood of each inmate committing a crime in the future. Unfortunately, time has already shown these assessments are biased against people with dark skin. Dark-skinned people are generally scored as a higher risk than light-skinned people. The problem is that risk assessment scores are used by authorities to inform decisions as a person moves through the criminal justice system. Judges frequently use these scores to determine bond amounts and whether a person should receive parole.

In 2014, U.S. Attorney General Eric Holder called for the U.S. Sentencing Commission to study the use of risk assessment scores because he saw the potential for bias. The commission chose not to study risk scores. However, an independent, nonprofit news organization called ProPublica studied the scores and found them to be remarkably unreliable in forecasting violent crime. ProPublica studied more than 7,000 people in Broward County, Florida, and found that only 20% of the people predicted to commit violent crimes actually did.
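That 20% figure is, in effect, the precision of the tool's violent-crime predictions: of everyone flagged as high risk, only one in five went on to reoffend violently. Here is a back-of-the-envelope sketch, using hypothetical counts chosen only to match that percentage:

```python
# Illustrative numbers only, chosen to match ProPublica's reported 20% figure.
flagged_as_high_risk = 1000   # people the tool predicted would commit violent crime
actually_reoffended = 200     # of those, how many actually did

precision = actually_reoffended / flagged_as_high_risk
print(f"Precision of 'violent recidivism' predictions: {precision:.0%}")  # 20%
```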

This bias has been known for quite some time, yet experts have yet to create a solution. People wouldn't be so alarmed at the error rate if the technology weren't already in use by governments and police.

The ACLU concluded facial recognition software used by police is biased

In 2018, the American Civil Liberties Union (ACLU) ran a test to see whether Amazon's facial recognition software used by police exhibits racial bias. The results? Twenty-eight U.S. Congress members were falsely matched with mugshots, including California representative and Harvard graduate Jimmy Gomez. The ACLU's test revealed that 40% of the false matches involved people of color.

Despite the high error rate, Amazon's facial recognition tool, Rekognition, is already in use by police. Civil liberties groups and lawmakers are deeply concerned that using this software as-is will harm minorities, and activists are calling for government regulation to prevent abuse of a technology they believe is going mainstream too soon.

Are governments suppressing AI’s racial bias?

For two years in a row, Canadian immigration authorities denied visas to approximately two dozen AI academics hoping to attend a major conference on artificial intelligence. Researchers from the group Black in AI, who planned to educate attendees about AI's racial bias, were denied visas in both 2018 and 2019. Some of the 2019 denials were reversed after the group pressured the government.

The Canadian government denied the visas, claiming to have no assurance the researchers would leave Canada at the end of their visit. The group and many of its supporters don't believe the visa denials were legitimate. Canada's economy routinely benefits from overseas visitors, who spent more than $21 billion in 2018. Why would Canada deny that many visas two years in a row unless it wanted to keep the researchers from voicing their concerns?

Although there’s no direct evidence of intentional suppression, the whole situation is odd and deserves to be thoroughly investigated.

Why does AI struggle to identify women and people with dark skin?

Bias against women and people of color has existed in AI-powered software for years, even before facial recognition went mainstream.

Lower color contrast makes it harder for computer algorithms to pick out facial features on darker skin. It's also likely that the photos used to train AI systems include more light-skinned people and more males than dark-skinned people and females. Both factors probably contribute to the problem.
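One straightforward way to probe the second factor is to count how a training set breaks down by skin tone and gender before a model is ever trained. Below is a minimal sketch, assuming a simple metadata format; the field names and proportions are invented for illustration.

```python
# Minimal dataset-composition audit (invented labels and proportions).
from collections import Counter

def composition(metadata):
    """metadata: list of dicts with 'skin_tone' and 'gender' labels."""
    counts = Counter((m["skin_tone"], m["gender"]) for m in metadata)
    total = sum(counts.values())
    return {group: count / total for group, count in counts.items()}

# Hypothetical metadata illustrating the kind of imbalance described above.
faces = (
    [{"skin_tone": "lighter", "gender": "male"}] * 70
    + [{"skin_tone": "lighter", "gender": "female"}] * 15
    + [{"skin_tone": "darker", "gender": "male"}] * 10
    + [{"skin_tone": "darker", "gender": "female"}] * 5
)
print(composition(faces))
# If dark-skinned women make up only 5% of the training data, the model sees
# far fewer examples of the faces it later misclassifies most often.
```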

Computers might also have a hard time identifying facial features when women wear makeup to hide wrinkles or have short haircuts. AI-powered algorithms can only recognize the patterns they're trained on; if short hair is registered as a feature that indicates a male, the results will skew accordingly.

While the issue appears straightforward, there's one factor that some facial recognition critics aren't accounting for: the racial and gender bias seems to show up in facial analysis rather than facial recognition. The two terms are often used interchangeably, but they describe distinct processes.

Facial recognition vs. facial analysis

When MIT conducted a study with facial recognition tools from Microsoft and IBM, researchers found those tools had lower error rates than Amazon's Rekognition. In response, Amazon disputed the results of MIT's study, claiming the researchers used "facial analysis" rather than "facial recognition" to test for bias.

Facial recognition identifies facial features and attempts to match a face to an existing database of faces. Facial analysis uses facial characteristics to infer other attributes, such as gender or race, or to detect a fatigued driver. An Amazon spokesperson said it doesn't make sense to use facial analysis to gauge the accuracy of facial recognition, and that's a fair claim.
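The distinction is easier to see in code. The rough Python sketch below is not any vendor's API; the embeddings, attribute scores, and threshold are placeholders. Recognition matches a face against a gallery of known faces, while analysis infers traits from the face alone, with no database lookup at all.

```python
# Illustrative sketch of facial recognition vs. facial analysis (placeholder data).
import numpy as np

def recognize(face_embedding, gallery, threshold=0.6):
    """Facial recognition: match a face embedding against a database of known faces."""
    best_id, best_score = None, -1.0
    for person_id, known in gallery.items():
        # Cosine similarity between the probe face and each enrolled face.
        score = float(np.dot(face_embedding, known) /
                      (np.linalg.norm(face_embedding) * np.linalg.norm(known)))
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id if best_score >= threshold else None

def analyze(face_attributes):
    """Facial analysis: infer a trait (here, gender) with no database lookup."""
    # A crude learned shortcut like this is exactly the kind of pattern that
    # produces the gender bias described earlier in the article.
    return "male" if face_attributes.get("short_hair_score", 0) > 0.5 else "female"

# Recognition needs a gallery of known identities; analysis does not.
gallery = {"person_42": np.random.rand(128)}
probe = np.random.rand(128)
print(recognize(probe, gallery))            # an identity or None
print(analyze({"short_hair_score": 0.8}))   # an inferred attribute
```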

While the two processes are not the same, facial analysis still plays a significant role in identifying suspects and should be made more accurate before police rely on it. For instance, if a suspect is captured on video but can't be clearly seen, has no previous arrests, and can't be matched to a database, facial analysis may be all investigators have to characterize the suspect. If that suspect is a woman wrongly identified as a man, she might never be found.

Are we using facial recognition software too soon?

While it’s not a surprise, it’s a disappointment to know that biased software is being deployed in situations that can have serious consequences.

While the benefits of facial recognition software are clear, it's time for this technology to be regulated, and for developers to be required to improve its accuracy before it's deployed in high-stakes situations.

Frank Landman

Frank is a freelance journalist who has worked in various editorial capacities for over 10 years. He covers trends in technology as they relate to business.
