Why Facial-Recognition Technology Can Be So Biased

It’s been widely known for some time that police departments across the U.S. use facial-recognition technology as one of their tools to catch criminal suspects. But it wasn’t until Tuesday, when researchers at Georgetown Law’s Center on Privacy & Technology released their sprawling report, “The Perpetual Line-Up,” that many of us realized just how biased and far-reaching the technology’s implementation is.

Among the most jaw-dropping results of the study: Law enforcement facial-recognition networks include photos of 117 million adults, most of whom are law-abiding citizens who’ve never had contact with police. Moreover, the researchers found that the technology is less accurate at identifying African Americans and women, and that leading developers of the software do not test it for racial bias. The report—which comes on the heels of the revelation that police used the analytics tool Geofeedia to track Black Lives Matter protesters on social media—has prompted calls from the ACLU and the Leadership Conference on Civil and Human Rights for an investigation into the racial disparities of face recognition systems.

Below, the co-authors of the report, Clare Garvie and Alvaro Bedoya (also executive director of the Center on Privacy & Technology), discuss the implications of their discovery.

What was the biggest finding of your report?

Garvie: The top-line finding is that today, 1 in 2 American adults, 50% of American adults, are in a face recognition network that police use for criminal justice purposes.

Aren’t law enforcement agencies required to process individuals or get a warrant to collect this type of data?

Garvie: These are photos of people who are not criminals. These are photos of people who went to the DMV to get a driver’s license. Most of these people have never had interactions with law enforcement. They’ve never been arrested, and yet, all of a sudden, they are in this perpetual lineup used countless times a month for police purposes.

Bedoya: Contrast that to fingerprints or DNA. Unless you’ve committed a crime, chances are, you’re not in a criminal fingerprint database. You’re definitely not in a DNA database. So this is the first time there’s been a national law enforcement biometric database that is primarily made up of law-abiding people.

Obviously you can draw a lot of consequences from this, but what is the major impact of law enforcement collecting data in this way?

Garvie: The first thing to realize about face recognition technology is that it makes mistakes. And not only does it make mistakes, but it doesn’t make them evenly across the people on whom it is used. It makes more mistakes on African Americans, on women, and on younger people, people under 30. When police are looking for a suspect, the system gives them a list of possible people who match the image of the person they’re looking for. Well, if the system is prone to make mistakes—which it is—it may give them a list of innocent people.

Alvaro, you went on a tweetstorm recently to add context to the report. In one tweet, you pointed out that the 3D-modeling software for this technology leaves out certain groups of people. Is that why face recognition technology is prone to inaccuracies in identifying individuals?

Bedoya: Let me clarify—that modeling example is from Pennsylvania[’s face recognition system]. This could well be the case in other places, but all we know is that the manual [for Pennsylvania’s technology] left out 3D modeling of African American and Latino faces.

To be very honest, one of the basic facts of human faces is that they’re different. Women tend to wear cosmetics. One of the points of cosmetics is to obscure marks on a person’s face that are distinguishing. So a woman’s face may be harder to analyze than someone else’s face. [For] people with darker skin tones, sometimes it’s harder for the computer to see distinguishing features.

The other issue is that these systems are often trained on datasets in which white men are overrepresented, so they’re really good at finding white guys’ faces. But they could use more training in seeing the faces of people who are not white men.

Garvie: The face recognition algorithms that are used by law enforcement agencies are what we call “pre-trained” algorithms. They are not learning algorithms. These algorithms are trained before they get sent to police departments, and the datasets—the photos on which they’re trained—determine what they perform best at. Well, a lot of these were trained on a dataset of photos taken on college campuses. They went to a college and asked for volunteers, and a majority of the volunteers were in fact white males. So I think this really illustrates that training matters, that what goes into these algorithms will really determine the outcome.
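To make that point concrete, here is a minimal, hypothetical sketch (not from the report, and not any vendor’s actual system) of how a pre-trained classifier’s accuracy can track how well each group is represented in its training data. The toy “embedding” features, group labels, and sample sizes below are all invented for illustration.

```python
# Hypothetical illustration of training-data imbalance, not a real face recognition pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n_per_class, shift):
    """Toy stand-in for face-matching features: two classes per demographic group."""
    X = np.vstack([
        rng.normal(shift, 1.0, (n_per_class, 8)),        # class 0 ("non-match")
        rng.normal(shift + 0.8, 1.0, (n_per_class, 8)),  # class 1 ("match")
    ])
    y = np.array([0] * n_per_class + [1] * n_per_class)
    return X, y

# "Pre-training" set: group A is heavily overrepresented, group B barely present.
X_a, y_a = make_group(2000, shift=0.0)   # overrepresented group
X_b, y_b = make_group(100, shift=2.0)    # underrepresented group
model = LogisticRegression(max_iter=1000).fit(
    np.vstack([X_a, X_b]), np.concatenate([y_a, y_b])
)

# Equal-sized test sets expose the accuracy gap the imbalance creates.
for name, shift in [("group A (overrepresented)", 0.0),
                    ("group B (underrepresented)", 2.0)]:
    X_test, y_test = make_group(1000, shift)
    print(f"{name}: test accuracy = {model.score(X_test, y_test):.3f}")
```

In this toy setup, the overrepresented group scores noticeably higher at test time; rebalancing the training data narrows the gap, which is the crux of the “training matters” argument.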

I mean, it seems like there’s a bit of a Catch-22. On the one hand, this technology produces skewed results, but on the other hand, you’d have to feed it more information in order to have it operate in a non-biased way.

Bedoya: That’s true. You have to make some hard decisions about whether you’re doing this or not, and if you do it, you’ve got to do it right. You’ve got to do it in a way that’s fair to women, that’s fair to African Americans, that’s fair to young people.

I feel it’s important for you to know that we don’t recommend banning face recognition. I am comfortable with the world where this technology is used on the rare occasions when we have a true public emergency. We have a bomber loose—okay, do that. We are not comfortable with the world where this is used to identify anyone, at any time, for any crime. I mean like jaywalking. Blocking an entrance to a building. The kinds of crimes that, frankly, people commit unknowingly many times a day. It could be an actual, serious crime like a bank robbery, or it could be a “crime”—a bunch of people at a protest who are engaging in disorderly conduct.

One of the most disturbing findings from our report is that major police departments in Los Angeles, Chicago, and Dallas, along with agencies in West Virginia, are buying or making plans to buy technology that lets them scan every single face that passes in front of a camera stationed on a sidewalk. They are making plans to do real-time surveillance using face recognition. This fundamentally changes the nature of public spaces. If you think that you’ll be identified wherever you go, you’ll think twice about what you do, where you go, and who you go there with. And that doesn’t look like America to us.

If there’s already bias in the way that these systems are being used, I imagine that would have huge implications for communities that are already heavily policed, from African Americans to Muslim Americans, for example.

Bedoya: That’s right.

Are there any laws that these law enforcement agencies are infringing upon, based on what you’ve looked at?

Garvie: I just want to emphasize that, in terms of face recognition, there are zero comprehensive laws in this space. There are no laws that tell police you cannot use face recognition, or that you must have a certain level of suspicion about activity before using face recognition. So what we have is a world where it’s up to the police who are using this technology to decide how they use it—to decide whether to put any controls in place or none at all. And what we’ve found is that they tend to go without them.

In Vermont, there is a law that says the Department of Motor Vehicles cannot use biometrics in the creation and collection of information for driver’s licenses. Well, Vermont has actually signed an agreement that allows the FBI to run face recognition searches on Vermont driver’s license photos. So either the law is being skirted, or it’s being interpreted in a way that gets around it.

How did you go about compiling this information? What was your methodology for finding this data, and were law enforcement agencies cooperative?

Garvie: We submitted 106 state FOIA requests to agencies across the country. The law enforcement agencies, by and large, were pretty compliant in responding to our records requests—when they’re required under law to respond. There were some notable exceptions. The LAPD, for example, announced in 2013 that it had launched a real-time face recognition system running off video cameras stationed in the San Fernando Valley. Yet, in response to our records request, the department said it had no records of face recognition use. The NYPD denied our records request outright.

What are you hoping will come out of your work? Are you pushing for legislation to regulate this technology?

Garvie: As an organization, we don’t lobby, but in the report we provide a model state or federal bill. Our hope is that this is the start of a discussion, [and] of a much broader movement [toward] transparency: to engage the public on whom this technology is used, to get people involved in saying how they want to see this technology used, and for some hard questions to be asked of the police departments that have already decided to use this technology without informing the public or getting legislative approval.

Fast Company