Disqus Promises To Banish Toxic Reader Comments From Sites Like Breitbart

By Sean Captain

After Facebook, Disqus is the biggest company that hosts online discussions. At that scale, Disqus has clients whose content and conversations range from mainstream and moderate to fringe and horrific, including those on white supremacist, Nazi-quoting Richard Spencer's Radix Journal. Those latter sites have put Disqus and its young CEO Daniel Ha under intense pressure from online activists (which I wrote about recently). Now Disqus says it has new policies and technology to cut down on what it calls "toxic comments," although much of what the company has proposed is still just a set of rough concepts.

In the past, Disqus touted tools for moderators to police their own website discussions, but not all moderators moderate all the ugliness away. Breitbart News, for instance, is awash in racist, anti-Semitic, sexist, and sometimes violent comments (as I also reported). Disqus has just announced some new ways that it hopes might help rid the web of lousy moderators. Starting today, readers will be able to report directly to Disqus any sites hosting conversations that they think violate the company's terms of service, which, for instance, prohibit "intimidation of users." Disqus also wants to give advertisers on its platform control over which comments sections its ads appear alongside, but "it's still in the early research phase," says spokeswoman Kim Rohrer about that tool.

For moderators who do care, Disqus says it plans to roll out more tools, including machine-learning-powered features to analyze content and automatically flag hate speech for moderators to review. These AI-powered tools could get past the limited capacity of keyword flagging by learning to understand the context around words; commenters often say hateful things using phrases that individually aren't that bad: "go back" "to your own" "country," for example. Commenters also sometimes use unusual spellings or symbols. Among racists, for example, three sets of parentheses around a name signify "Jews." This kind of online symbology is constantly evolving; machine learning tools could spot trends across all of Disqus's online forums and recognize potentially racist speech faster than a single human moderating a single message board. But this, too, is just a rough idea, as Disqus is still evaluating companies that could provide the AI software to power such a system. "We haven't confirmed which one we'll be using and started development yet," says Rohrer.
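Disqus hasn't described what its system would look like, but the gap between plain keyword flagging and context-aware classification is easy to illustrate. The sketch below is purely hypothetical: the toy comments, labels, and model choice are assumptions for illustration, not Disqus's design. Character n-grams let a classifier see inside punctuation and odd spellings, so obfuscations like "(((media)))" still carry signal that a word blocklist misses.

```python
# Hypothetical sketch: keyword flagging vs. a context-aware classifier.
# Not Disqus's system; the data, labels, and model are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

BLOCKLIST = {"slur1", "slur2"}  # placeholder keywords

def keyword_flag(comment: str) -> bool:
    """Naive keyword matching: misses coded or obfuscated phrasing."""
    return any(word in BLOCKLIST for word in comment.lower().split())

# Tiny invented training set: hateful-in-context vs. benign uses
# of the same surface words.
toy_comments = [
    "go back to your own country",                      # hateful in context
    "the (((media))) controls everything",              # coded antisemitism
    "I went back to my own country for the holidays",   # benign
    "the media covered the election",                   # benign
]
toy_labels = [1, 1, 0, 0]

# Character n-grams (2-5 chars, word-boundary aware) capture spelling
# tricks and symbols like triple parentheses.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 5)),
    LogisticRegression(),
)
model.fit(toy_comments, toy_labels)

print(keyword_flag("the (((media))) controls everything"))        # False: blocklist misses it
print(model.predict_proba(["those (((bankers))) again"])[:, 1])   # elevated toxicity score
```

On real data this would need far more training examples and ongoing retraining, which is exactly why evolving symbology favors a model that learns across all of Disqus's forums over any fixed keyword list.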

Vitriol on the site Lidblog (redacted).

Providers are out there. Online ad exchange RocketFuel, for instance, uses the scanning service Peer39, which is run by a company called Sizmek. Peer39 not only looks for keywords but also evaluates the context of the text on websites. RocketFuel has blacklisted its ads from running on sites, including Breitbart, that it judges to have violated its anti-hate-speech policy.

Disqus has not yet decided whether, once it does implement machine learning techniques in its fight against dark speech, it might zap toxic comments automatically, without requiring any work from a human moderator who is overtaxed or indifferent. "That's certainly something that would be a cool thing to do eventually," says Rohrer. "We're not looking right now at automatically removing content."
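If Disqus ever did automate removal, one common design is to route comments by classifier confidence: auto-remove only the highest-scoring comments, queue the ambiguous middle for a human, and publish the rest. The sketch below is an assumption about how such routing might work, with made-up thresholds, not anything Disqus has announced.

```python
# Hypothetical moderation routing by toxicity score; thresholds are assumptions.
from enum import Enum

class Action(Enum):
    PUBLISH = "publish"
    REVIEW = "send to moderator queue"
    REMOVE = "auto-remove"

AUTO_REMOVE_THRESHOLD = 0.95   # act alone only on very confident predictions
REVIEW_THRESHOLD = 0.60        # the ambiguous middle still goes to a human

def route(toxicity_score: float) -> Action:
    """Decide what happens to a comment given a model's toxicity score."""
    if toxicity_score >= AUTO_REMOVE_THRESHOLD:
        return Action.REMOVE
    if toxicity_score >= REVIEW_THRESHOLD:
        return Action.REVIEW
    return Action.PUBLISH

for score in (0.99, 0.75, 0.10):
    print(f"{score:.2f} -> {route(score).value}")
```

The appeal for an overtaxed or indifferent moderator is that the review queue shrinks to the genuinely uncertain cases; the risk is that auto-removal errors fall on no one's desk at all.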

The only new tools already in development, and planned to launch by midyear, are incremental compared to Disqus's AI ambitions. One product in development will allow moderators to temporarily ban misbehaving commenters, which would give them a chance to mend their mean ways. The other product Disqus is currently developing, called "shadowbanning," will let offenders continue posting to discussions, but only they will be able to see their posts; they will essentially be invisible to other users. Other websites, like Facebook, already employ a similar tactic.
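The mechanics of shadowbanning are simple to sketch: the stored comments don't change, only what each viewer's query returns does. The minimal illustration below is hypothetical; the field and function names are invented, not Disqus's API.

```python
# Hypothetical shadowban filter: a banned author still sees their own
# posts, but no one else does. Names are illustrative, not Disqus's API.
from dataclasses import dataclass

@dataclass
class Comment:
    author: str
    text: str

shadowbanned = {"troll42"}  # authors hidden from everyone but themselves

def visible_comments(viewer: str, thread: list[Comment]) -> list[Comment]:
    """Return the thread as a given viewer would see it."""
    return [
        c for c in thread
        if c.author not in shadowbanned or c.author == viewer
    ]

thread = [
    Comment("alice", "Great article."),
    Comment("troll42", "something awful"),
]

print([c.text for c in visible_comments("alice", thread)])    # troll42 hidden
print([c.text for c in visible_comments("troll42", thread)])  # troll42 sees own post
```

The design's point is that the offender gets no signal they've been banned, so they keep shouting into the void instead of registering a new account.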

Disqus reaffirms its promise that it will drop entire sites that cross a line. “If a publication is dedicated to toxic or hateful discourse, no software or product solutions will help to combat hate speech or toxicity,” said the company in a statement. Meanwhile, the company still hosts discussion on Richard Spencer’s website, which features posts such as one claiming that whites are genetically superior to blacks, and another comparing Jews to a knife cutting into America.

Advertisers might not want to be associated with conversations on sites like PatDollard.com (redacted).

Indeed, critics call Disqus's movement on that promise too little and too slow. "More tools are great, but what needs to happen before they actually ban a community for recurring racism and antisemitism?" asks activist John Ellis, who maintains a list of over 40 sites featuring what he has labeled recurring hate speech in comments sections powered by Disqus. For instance, weeks after Fast Company's reporting made Disqus aware of the self-proclaimed "Online Fascist Zine" Noose, the site still features Disqus-powered discussion groups. Rohrer says that many sites are under review, but that she "can't comment on individual sites due to the privacy we hold around our internal review process." In other words, Disqus is not yet saying whether Noose will stay or go.

Disqus claims that it has dropped "dozens" of sites that perpetuate hate speech. Ellis counters that many more websites still get to use Disqus's comment technology to fuel rancid online conversations. "At some point a human at Disqus needs to say 'I'm sick and tired of having communities that are full of hate and threats of violence, and these 30 or 50 or 100 sites need to find another commenting tool and stop polluting the company we set out to build,'" argues Ellis.
