Facebook Failing On Fake News — But There’s A Better Way

by , Featured Contributor, March 16, 2018
 
It feels almost like “Groundhog Day”: the top brass at Facebook saying, “Oh, did we get it wrong? Were you offended by what we did? We didn’t mean to upset you. We’re only learning. We’ll get better. Trust us!”

We’ve heard this from them many times over the past decade and more, about a wide range of misbehaviors: the launch of News Feed, the invasive Beacon program, experiments on users, and more.

But in the past few years we’ve been hearing it about the same issue, over and over again.

Jan. 20, 2015: Facebook launches the ability to report hoaxes — aka fake news.

Dec. 15, 2016: Facebook launches a partnership with third-party fact-checkers to flag fake news. Despite Facebook’s more than $40 billion in revenue being dependent on the quality of the News Feed, it does not offer to pay the fact-checkers.

Dec. 20, 2017: Facebook ditches the flags (“Academic research on correcting misinformation has shown that putting a strong image, like a red flag, next to an article may actually entrench deeply held beliefs – the opposite effect to what we intended.”) and instead uses Related Articles to show a counterpoint.

Jan. 19, 2018: Facebook announces it is going to prioritize news the community rates as trustworthy.

Despite Mark Zuckerberg’s incredulity at the idea that fake news could impact the 2016 U.S. election, this is not a new issue for the company’s principals. Which is why it’s so surprising they keep getting it so wrong.

First of all, any solution that relies on community flagging and reporting is going to be gamed.

Much like Trump appropriated the term “fake news” and turned it on mainstream media outlets, bad actors will simply hijack any community reporting system to flag opposing viewpoints.

But the fact that bad actors will try to game the system doesn’t let Facebook off the hook. Bad actors try to game Google every day so that they can get to the top of the search results. But every day, Google improves its algorithm so that accurate content gets to the top (or just below Google’s own content, but that’s a different column). No, “it’s too hard” is not a valid excuse. What else could Facebook do?

In his book “WTF?: What’s the Future and Why It’s Up to Us,” Tim O’Reilly suggests we don’t need a system for identifying the veracity of individual stories. Much the way Google uses signals like bounce rate or inbound links to determine site quality, Facebook should be using signals to identify hoaxes and actual fake news (as opposed to just “news you don’t like”).

Here are the signals O’Reilly cites: “Does the story or graph cite any sources?…Do the sources actually say what the article claims they say? …Are the sources authoritative? …If the story references quantitative data, does it do so in a way that is mathematically sound? …Do the sources, if any, substantiate the account? …Are there multiple independent accounts of the same story?”
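To make this concrete, here is a minimal sketch in Python of how signals like these might roll up into a single trust score. Everything in it (the signal names, the weights, the normalization) is a hypothetical illustration, not Facebook’s actual system or anything published in O’Reilly’s book; a real system would extract these signals automatically and tune the weights against labeled examples.

```python
# A minimal sketch of signal-based scoring. The signal names, weights,
# and example values are hypothetical illustrations only; they are not
# Facebook's actual system or anything published by O'Reilly.

from dataclasses import dataclass

@dataclass
class ArticleSignals:
    cites_sources: bool           # does the story cite any sources?
    sources_support_claims: bool  # do the sources say what the article claims?
    sources_authoritative: bool   # are the sources authoritative?
    math_is_sound: bool           # is quantitative data used soundly?
    independent_accounts: int     # independent accounts of the same story

# Hypothetical weights; a real system would learn these from labeled data.
WEIGHTS = {
    "cites_sources": 0.2,
    "sources_support_claims": 0.3,
    "sources_authoritative": 0.2,
    "math_is_sound": 0.1,
    "independent_accounts": 0.2,
}

def trust_score(sig: ArticleSignals) -> float:
    """Combine the signals into a score in [0, 1]; higher is more trustworthy."""
    # Cap corroboration at five independent accounts, normalized to [0, 1].
    corroboration = min(sig.independent_accounts, 5) / 5
    return (
        WEIGHTS["cites_sources"] * sig.cites_sources
        + WEIGHTS["sources_support_claims"] * sig.sources_support_claims
        + WEIGHTS["sources_authoritative"] * sig.sources_authoritative
        + WEIGHTS["math_is_sound"] * sig.math_is_sound
        + WEIGHTS["independent_accounts"] * corroboration
    )

if __name__ == "__main__":
    solid = ArticleSignals(True, True, True, True, independent_accounts=4)
    hoax = ArticleSignals(False, False, False, False, independent_accounts=0)
    print(f"well-sourced story: {trust_score(solid):.2f}")  # 0.96
    print(f"unsourced story:    {trust_score(hoax):.2f}")   # 0.00
```

The appeal of a score over a binary flag is that a low-scoring story can simply be demoted in the News Feed ranking, sidestepping the backfire effect that made the red flags counterproductive.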

Much as with Google, this is a dance between the gamers and the gamed, an ongoing battle that must be fought if we want to have a hope of maintaining our social fabric. And the social media giant is making some strides in this direction — disrupting the ability of spammers to spoof actual news sites, and deploying AI to identify some of those signals O’Reilly mentions.

But Google doesn’t get rewarded for fake news the way Facebook does. People are far more likely to share fake news than real news — and when people are sharing, Facebook’s making money.

So we need to keep up the pressure. There is no technological reason why we can’t win the battle against fake news. It’s a question of will. Does Facebook have the will?
