After Las Vegas Fake News, Facebook And Google Blame The Machine, Ignore The System

By Cale Guthrie Weissman

As news broke about the deadly shooting in Las Vegas, partisan content farms did what they do best: write fast and chase clicks. With both real and fake news circulating online and offering conflicting reports about who the shooter was, social media became a central source of news for many, promising up-to-date information about what was going on. And yet the algorithms of both Google and Facebook fell prey to the very click-bait fake news they’ve been trying so hard to stamp out.

As I wrote earlier this morning, Facebook’s “Safety Check” page began promoting blogs claiming to be “Alt-Right News,” which gave false information about the massacre’s victims and the identity of the shooter. Google, too, surfaced posts from forums like 4chan that were spreading misinformation. And both companies’ statements about the incidents struck me as odd.

Facebook sent me a statement that read:

“Our Global Security Operations Center spotted the post this morning and removed it. However, its removal was delayed by a few minutes, allowing it to be screen captured and circulated online. We are working to fix the issue that allowed this to happen in the first place and deeply regret the confusion this caused.”

But the post was up for much longer than a “few minutes”; my colleagues and I saw it online for at least half an hour. And even now, a slew of questionable sites are being shared on “Safety Check”: blogs with no connection to local news stations that shoddily aggregate already-reported news.

Google, too, offered a strikingly similar response, implicitly shifting the blame onto its algorithms:

“Unfortunately, early this morning we were briefly surfacing an inaccurate 4chan website in our Search results for a small number of queries. Within hours, the 4chan story was algorithmically replaced by relevant results. This should not have appeared for any queries, and we’ll continue to make algorithmic improvements to prevent this from happening in the future.”

Yes, Google says, it made a mistake; this should not have happened. But the very technology that led the platform to promote 4chan for a few hours has now been slightly altered; the algorithm is now promoting “relevant results,” not those pesky fake ones. Voilà: problem solved, case closed.

But both Facebook’s and Google’s statements refuse to reckon with the problems these automated systems have created. Both companies rely on promoting content they think people will click on; that’s why they are the two most popular digital ad platforms online. Their algorithms are the skeleton of the system: they detect what people are engaging with and then amplify it further. This is precisely how fake news became such a gargantuan problem.
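To see why that feedback loop favors sensational material, consider a minimal sketch of engagement-based ranking. Everything here is a hypothetical illustration, not either company’s actual system: the item names, the assumed click rates, and the small exploration step are all invented. The loop ranks purely on observed clicks per impression, so whatever earns the most clicks wins the top slot, which in turn earns it most of the future impressions.

```python
# A minimal sketch of an engagement-driven ranking loop (hypothetical
# names and numbers; not Facebook's or Google's actual system).
import random

random.seed(0)

# Assumed click-through rates: the sensational item gets clicked more.
true_ctr = {
    "measured local report": 0.05,
    "shocking fake scoop": 0.20,
}
clicks = {title: 0 for title in true_ctr}
impressions = {title: 0 for title in true_ctr}

def rank() -> list[str]:
    # Rank purely by observed clicks-per-impression.
    return sorted(
        true_ctr,
        key=lambda t: clicks[t] / impressions[t] if impressions[t] else 0.0,
        reverse=True,
    )

for _ in range(10_000):
    if random.random() < 0.1:
        shown = random.choice(list(true_ctr))  # small exploration share
    else:
        shown = rank()[0]  # the top slot gets most of the traffic
    impressions[shown] += 1
    if random.random() < true_ctr[shown]:
        clicks[shown] += 1

print(rank())       # the "shocking" item holds the top slot
print(impressions)  # and has absorbed most of the traffic
```

The point of the toy: nothing in the loop asks whether the top item is true. It only asks whether people click.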

Facebook’s observation that the delay on its end led to the spread of screenshots is curious, not just because it’s obvious, but because it points right at the platform’s challenge: the circulation of ideas is the central economic engine powering these platforms. That includes not just fake news but screenshots of fake news that has since been debunked yet has already reached untold numbers of people.

Google, for its part, doesn’t seem to grasp how bad some “breaking stories” can be in the context of news, especially around a mass shooting. In an email to a reporter, a spokesperson explained that 4chan, a forum known for rampant trolling, racism, and misogyny, was lumped in as just another “fresh” story, chosen in part because speed is one of the news algorithm’s ranking criteria.
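Google’s statement implies that freshness is one weighted signal among several. A toy scoring function makes the tradeoff concrete; the weights and fields below are entirely hypothetical, not Google’s actual formula, but they show how overweighting speed can push a minutes-old forum thread above a slower, vetted report:

```python
# Toy news-ranking score (hypothetical weights and fields, not Google's
# actual formula): overweighting freshness lets a fast, unvetted post
# outrank a slower, vetted one.
from dataclasses import dataclass

@dataclass
class Story:
    source: str
    minutes_old: int
    source_trust: float  # assumed scale: 0.0 (unknown forum) .. 1.0 (vetted outlet)

def score(story: Story, freshness_weight: float = 0.8) -> float:
    freshness = 1.0 / (1.0 + story.minutes_old)  # newer => closer to 1.0
    return (freshness_weight * freshness
            + (1.0 - freshness_weight) * story.source_trust)

stories = [
    Story("4chan thread", minutes_old=2, source_trust=0.05),
    Story("wire-service report", minutes_old=45, source_trust=0.95),
]
for s in sorted(stories, key=score, reverse=True):
    print(f"{score(s):.3f}  {s.source}")
# 0.277  4chan thread
# 0.207  wire-service report
```

Drop the hypothetical freshness weight to 0.2 and the wire report wins handily (0.764 vs. 0.107). The failure mode isn’t a bug in the math; it’s a choice about what the math rewards.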

The question Facebook and Google should be wrestling with isn’t how to tweak their technology to get better at sussing out bad content; it’s how to build platforms that prevent these failures from happening in the first place. Doing so, of course, would be no easy feat; they’d be walking the hazy line of censorship while de-emphasizing the very moneymaking machines they’ve come to rely on. Still, it’s the only way for these platforms to attack these very real problems. Had a human seen a blog called “Alt-Right News,” would they really have pushed it to a “Safety Check” page?

Until these companies begin to apologize for the algorithms themselves, and not for a supposed quirk in the system, this cycle of fake news, and of news about fake news, won’t stop anytime soon.
