Is Artificial Intelligence Ready to be the Backbone of Our Security Systems?

Artificial intelligence has improved vastly over the last decade, to the point where AI-powered software has become mainstream. Many organizations, including schools, are adopting AI-powered security cameras to keep a close watch on potential threats. For example, one school district in Atlanta uses an AI-powered video surveillance system that can provide the current whereabouts of any person captured on video with a single click. The system will cost the district $16.5 million to equip around 100 buildings.

These AI-powered surveillance systems are being used to identify people, suspicious behavior, and guns, and to gather data over time that can help identify suspects by their mannerisms and gait. Some systems are also used to recognize people who have previously been banned from the area; if they return, the system immediately alerts officials.

Schools hope top-of-the-line AI-powered video surveillance systems will help prevent mass shootings by identifying guns and suspended or expelled students, and by alerting police to the whereabouts of an active shooter.

AI-powered security systems are also being used in homes and businesses. AI-powered video surveillance seems like the perfect security solution, but accuracy is still a problem, and the technology isn't advanced enough for reliable behavioral analysis. AI can't truly form independent conclusions (yet). At best, it recognizes patterns.

AI isn’t completely reliable – yet

At first glance, AI might appear more intelligent and less fallible than humans, and in many ways that's true. AI can perform tedious tasks quickly and identify patterns humans miss due to perception bias. However, AI isn't perfect, and sometimes AI-powered software makes disastrous and even deadly mistakes.

For instance, in 2018, a self-driving Uber car struck and killed a pedestrian crossing the street in Tempe, Arizona. The human ‘safety driver’ behind the wheel wasn’t paying attention to the road and failed to intervene to avoid the collision. The video captured by the car showed the safety driver looking down toward her knee. Police records revealed she was watching The Voice just moments before the incident. This wasn’t the only crash or fatality involving a self-driving vehicle.

If AI software repeatedly makes grave mistakes, how can we rely on AI to power our security systems and identify credible threats? What if the wrong people are identified as threats or real threats go unnoticed?

AI-powered facial recognition is inherently flawed

Using AI-powered video surveillance to identify a specific person relies heavily on facial recognition technology. However, there's an inherent problem with facial recognition: the darker a person's skin, the higher the error rate.

The error? Gender misidentification. The darker a person's skin, the more likely they are to be misclassified as the opposite gender. For example, a study conducted by a researcher at MIT found that light-skinned males were misidentified as women about 1% of the time, while light-skinned females were misidentified as men about 7% of the time. Dark-skinned males were misidentified as women around 12% of the time, and dark-skinned females were misidentified as men 35% of the time. Those aren't small errors.

Facial recognition software developers are aware of this bias against people with darker skin and are working to improve their algorithms. However, the technology isn't there yet, and until it is, it's probably a good idea to use facial recognition software with caution.

The other concern with facial recognition software is privacy. If an algorithm can track a person’s every move and display their current location with a click, how can we be certain this technology won’t be used to invade people’s privacy? That’s an issue some New York residents are already battling.

Tenants in New York are fighting against landlords using facial recognition

Landlords across the U.S. are starting to use AI-powered software to lock down security in their buildings. In Brooklyn, more than 130 tenants are fighting a landlord who wants to replace metal and electronic keys with facial recognition software for building access. The tenants don't want to be tracked every time they come and go from their own homes, and they've filed a formal complaint with the state of New York in an attempt to block the move.

At first glance, using facial recognition to enter an apartment building sounds like a simple security measure, but as Green Residential points out, tenants are concerned it's a form of surveillance. Those concerns are warranted, and officials are taking note.

Brooklyn Councilmember Brad Lander introduced the KEYS (Keep Entry to Your Home Surveillance-free) Act to prevent landlords from forcing tenants to use facial recognition or biometric scanning to access their homes. Around the same time the KEYS Act was introduced, San Francisco became the first U.S. city to ban police and government agencies from using facial recognition technology.

Because this kind of smart technology is fairly new, it is largely unregulated. The KEYS Act and other bills could become the first laws regulating commercial use of facial recognition and biometric software. One of those bills would prevent businesses from silently collecting biometric data from customers; if it becomes law, customers would have to be notified whenever a business collects data such as iris scans, facial images, and fingerprints.

Experts have openly admitted that many commercial deployments of facial recognition surveillance happen in secret. People have been tracked for longer than they realize. Most people don't expect to be followed in real life the way they are online, but it's been happening for a while.

What if the data collected by AI-powered video surveillance is used improperly?

Privacy concerns aside, what if the data collected by these video surveillance systems is used for illegal or sinister purposes? What if it's handed over to marketers? What if someone with access to the data decides to stalk or harass a person, or worse, learns their activity patterns and breaks into their house while they're away?

The benefits of using AI-powered video surveillance are clear, but it might not be worth the risk. Between misidentification errors in facial recognition and the potential for willful abuse, it seems like this technology might not be in the best interest of the public.

For most people, the idea of being tracked and identified through video surveillance feels like a scene from George Orwell's 1984.

Getting on board with AI-powered video surveillance can wait

For most organizations, shelling out big bucks for an AI-powered video surveillance system can wait. If you don't have a pressing need to continually watch for suspicious people and keep tabs on potential threats, you probably don't need an AI system. Organizations like schools and event arenas are a different matter, because they are more likely to be targets of mass shootings and bombings; a facial recognition video surveillance system would increase their ability to catch and stop perpetrators. However, installing a facial recognition system that requires residents to be filmed and tracked is another story.

There will probably come a time when cities around the world are equipped with surveillance systems that track people's every move. China has already implemented this type of system in public spaces, though there the surveillance is specifically intended to keep track of citizens. In the United States and other countries, the data collected would likely also be used for marketing purposes.

Of course, there's always the possibility that cities will use surveillance data to improve things like traffic flow, sidewalk accessibility for pedestrians, and parking.

Using this powerful technology while protecting privacy is a challenge that will require collaboration among city officials, courts, and citizens. It's too early to know how the technology will be regulated, but the picture should become clearer in the next few years.

Frank Landman

Frank is a freelance journalist who has worked in various editorial capacities for over 10 years. He covers trends in technology as they relate to business.
