Watch this drone use AI to spot violence in crowds from the sky

By Steven Melendez

June 06, 2018

The right machine learning algorithms can let aerial surveillance systems spot violent behavior on the ground, according to researchers in the U.K. and India.


The researchers trained a deep learning neural network on what they call the Aerial Violent Individual dataset, in which each of 2,000 labeled drone images includes between two and 10 people, some of them punching, kicking, shooting, stabbing, or strangling someone. The system is more than 88% accurate at identifying the violent people in the images and between 82% and 92% accurate at identifying who’s engaged in which violent activities, according to a paper slated to be presented at July’s Conference on Computer Vision and Pattern Recognition in Salt Lake City.
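
The pipeline described here detects each person in an aerial frame, estimates their body pose, and classifies that pose into one of the five violent activities. As a rough illustration of that final step only (a hypothetical sketch, not the authors’ actual architecture), this PyTorch snippet maps a person’s estimated 2-D keypoints to an activity label; the keypoint count, network shape, and the added “neutral” class are assumptions.

```python
# Hypothetical sketch: classify one person's estimated body pose into an
# activity label. The real system would first detect people and estimate
# their poses; here we assume keypoints are already available.
import torch
import torch.nn as nn

NUM_KEYPOINTS = 14  # assumed number of body joints, each with (x, y)
LABELS = ["neutral", "punching", "kicking", "shooting", "stabbing", "strangling"]

class PoseActivityClassifier(nn.Module):
    """Small MLP mapping flattened 2-D keypoints to activity logits."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NUM_KEYPOINTS * 2, 64),
            nn.ReLU(),
            nn.Linear(64, len(LABELS)),
        )

    def forward(self, keypoints: torch.Tensor) -> torch.Tensor:
        # keypoints: (batch, NUM_KEYPOINTS, 2), normalized image coordinates
        return self.net(keypoints.flatten(start_dim=1))

model = PoseActivityClassifier()
fake_pose = torch.rand(1, NUM_KEYPOINTS, 2)  # stand-in for an estimated pose
probs = model(fake_pose).softmax(dim=-1)
print(LABELS[probs.argmax().item()])  # e.g. "punching"
```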

In an email, University of Cambridge researcher Amarjot Singh suggests the system could be used to automatically spot outbreaks of violence at outdoor events such as marathons and music festivals. The system—for now limited to a low-flying consumer Parrot AR Drone—hasn’t been tested in real-world settings or with large crowds yet, but the researchers say they plan to test it around festivals and national borders in India. Eventually, they say, they may attempt to commercialize the software.

The researchers trained the neural network on a single Nvidia Tesla GPU in a local machine, then ran the live video analysis on Amazon Web Services using two Tesla GPUs. To mitigate privacy and legal concerns, the cloud-based software is designed to delete each frame it receives from the drone once the image has been processed.
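
A minimal sketch of what that delete-after-processing loop might look like on the cloud side, assuming frames arrive as image files in a watched directory; `analyze_frame`, the paths, and the file format are illustrative placeholders, not the researchers’ actual code.

```python
# Hypothetical server-side loop: analyze each uploaded frame, then delete
# it immediately so no drone footage accumulates in the cloud.
from pathlib import Path

INCOMING = Path("/tmp/drone_frames")  # assumed drop point for uploaded frames

def analyze_frame(path: Path) -> list[str]:
    """Placeholder for GPU inference: detection, pose estimation, labeling."""
    return []  # would return per-person activity labels for this frame

def process_incoming_frames() -> None:
    for frame_path in sorted(INCOMING.glob("*.jpg")):
        try:
            alerts = analyze_frame(frame_path)
            if alerts:
                print(f"violence flagged in {frame_path.name}: {alerts}")
        finally:
            # Delete the frame whether or not analysis succeeded.
            frame_path.unlink(missing_ok=True)
```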

There are, however, lingering privacy concerns about how this and other AI-based technologies could be used. Civil libertarians have warned that when applied to photos and video, AI technology is often inaccurate and could enable unwanted mass surveillance. Deploying such technologies in warfare raises bigger questions, given that the software could inform the decisions of a human (or potentially a robot) to fire a weapon, for instance.

The system is reminiscent of the Pentagon’s controversial Project Maven, which aims to automatically analyze military drone footage to spot features of interest. Google said it would end its involvement in the project after a number of employees quit and others circulated a petition calling on the company to abandon military contracts. Other tech companies, including Amazon, have been closemouthed about how they are contributing to the effort.

Many companies and researchers are developing ways to automate the real-time analysis of torrents of video footage. Google and Facebook, as The Register notes, have already patented techniques for identifying human poses in images. Companies like Axon, formerly known as Taser, have worked on technology to identify people and activities in police body camera footage. And recently Amazon came under fire for selling facial recognition services to police departments, part of a fast-growing market for police surveillance technologies that are governed by few laws, if any.

Fast Company
