Why AI and Viztech hold the key to a safer internet

Online media companies are chasing their tails when it comes to policing terrorist material and other dangerous and offensive content. But there is artificial intelligence-based technology out there that can spot it before it goes live, says David Fulton, CEO of WeSee.

Leading figures in both government and academia have been focused on a common cause in recent months: how best to solve the growing problem of online terrorist content. However, the jury’s out on whether the big digital media players, such as Facebook, Twitter and YouTube, are up to the job, despite growing pressure from pending legislation. The good news is that a powerful new image-recognition technology based on deep learning and neural networks looks like it could provide a solution.

In the same week in June that German lawmakers passed a bill forcing major internet companies to remove “evidently illegal” content within 24 hours or face fines of up to $57 million, a conference entitled Harmful Speech Online: At the Intersection of Algorithms and Human Behaviour took place at Harvard University. It discussed how best to constrain harmful online content, and was co-hosted by the Harvard-based Berkman Klein Center for Internet and Society, the Shorenstein Center on Media, Politics and Public Policy at the Harvard Kennedy School, and the Institute for Strategic Dialogue (ISD), a London-based think tank.

The opening address stated that extremism in online spaces can have an enormous impact on public opinion, inclusiveness and politics. It also cited the enormous gap, in terms of resourcing, activism and even basic research, between the problems of harmful speech online and the available solutions to control it.

Automated Detection

Just a few weeks later, in September, the leaders of the UK, France and Italy met with internet companies at the UN General Assembly in New York to discuss the issue. In a speech ahead of the meeting, UK Prime Minister Theresa May threatened the internet giants with huge fines if they could not come up with a way to detect and remove terrorist content within two hours. That time span is significant: two-thirds of such propaganda is shared within two hours of going online, so you could question whether two hours is actually too long.

In response, Google and YouTube have announced they are increasing their use of technology to help automatically identify such videos. Meanwhile the problem continues and is only going to get worse. A recent article in the Telegraph revealed that, according to official figures, 54,000 different websites containing advice on bomb-making and on carrying out attacks using trucks and knives were posted online by supporters of the so-called Islamic State group between August last year and May this year.

What’s more, Cisco has forecast that 65 trillion images and six trillion videos will be uploaded to the web by 2020, meaning that over 80% of all internet traffic will be image- or video-based in less than three years’ time. That’s a lot of content to monitor for extremist and other inappropriate material, but the latest advances in artificial intelligence (AI) could hold the key to this conundrum.

Emerging Field of Viztech

Pioneers in the new field of Viztech have developed a highly effective filter for adult and violent video content. It uses AI to identify terrorist and other harmful digital content automatically, not within two hours of publication but before it actually goes live. It can spot inappropriate imagery such as an ISIS flag or the face of a known hate preacher. Viztech can also detect and categorize video, as well as still images, quickly and efficiently, processing information much like the human brain but up to 1,000 times faster, so it is not just mimicking human behavior but performing far better.
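To make the idea concrete, here is a minimal sketch of a pre-publication moderation gate, written in Python. It is not WeSee’s actual system: the model, labels, threshold and the review_upload name are all stand-ins for illustration. The point it shows is that the classifier sits in the upload path itself, so every image is scored by a neural network before it is allowed to go live.

import torch
from PIL import Image
from torchvision import models

# Stand-in classifier: a pretrained ResNet-50. A real filter would use a
# network trained on moderation labels such as extremist symbols, graphic
# violence or adult content, not ImageNet categories.
WEIGHTS = models.ResNet50_Weights.DEFAULT
MODEL = models.resnet50(weights=WEIGHTS).eval()
PREPROCESS = WEIGHTS.transforms()

BLOCK_THRESHOLD = 0.8  # assumed confidence needed to hold an upload back
# Empty here because the stand-in model only knows ImageNet classes; a
# purpose-trained moderation model would supply real prohibited categories.
PROHIBITED_CLASS_IDS: frozenset[int] = frozenset()

def review_upload(path: str) -> str:
    """Score an image before it goes live and decide publish vs. hold."""
    image = PREPROCESS(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(MODEL(image), dim=1)
    confidence, label = probs.max(dim=1)
    if label.item() in PROHIBITED_CLASS_IDS and confidence.item() > BLOCK_THRESHOLD:
        return "hold for review"  # routed to a human moderator, never published
    return "publish"

Because nothing reaches the public site until review_upload has returned “publish”, the two-hour take-down debate simply does not arise for content the model can recognize.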

Driven by deep learning and neural networks, it’s similar to the technology behind the iPhone X’s facial recognition system, but much more sophisticated. Rather than being reactive, it’s predictive, filtering, identifying and categorizing video content before it even appears online. In Viztech lies the solution to curbing online terrorist material and its unfortunate effects, which is something governments, academics and, of course, digital media businesses are all desperate to do. Ultimately it holds the key to a safer internet for everyone.
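As a final illustration of what “before it even appears online” means in practice for video, here is a similarly hedged sketch of extending the same gate from still images to moving ones: sample frames at a fixed interval, score each with a frame-level classifier supplied by the caller (for example, one like the sketch above), and hold the whole video back if any frame exceeds the threshold. The review_video and score_frame names are illustrative, not any vendor’s API.

from typing import Callable
import cv2  # OpenCV, used here only to pull frames out of the file
import numpy as np

def review_video(path: str,
                 score_frame: Callable[[np.ndarray], float],
                 threshold: float = 0.8,
                 every_n_frames: int = 30) -> str:
    """Sample frames before publication; hold the video if any frame
    scores above the moderation threshold."""
    capture = cv2.VideoCapture(path)
    verdict, frame_index = "publish", 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if frame_index % every_n_frames == 0:
            # OpenCV returns BGR; convert to RGB before handing to the model.
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            if score_frame(rgb) > threshold:
                verdict = "hold for review"
                break
        frame_index += 1
    capture.release()
    return verdict

Sampling every few frames rather than every frame is one way such a system could keep pace with the volumes of video Cisco is forecasting.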

David Fulton is CEO of WeSee.
