With Twitter’s Updated Mute Feature, You Can Dismiss Yourself From Conversations
Earlier this year, Twitter CEO Jack Dorsey told Fast Company, “We’ve seen a trend—not just on Twitter, but on the internet more broadly, and in the world—of really targeted harassment and abuse.”
It’s something Twitter is acutely aware of, having long served as both a platform for open discourse and a safe space for trolls to indulge in harassment and hate speech. So, to give users more control over their experience, Twitter is now broadening the scope of its mute feature and making it easier to report abuse and harassment.
Part of the challenge, of course, is that it’s impossible for Twitter to scrub all unsavory content from its platform without censoring users outright. The powers that be do suspend accounts if they feel the situation warrants it: Most recently, the company banned Breitbart editor Milo Yiannopoulos from Twitter for harassing actress Leslie Jones. But Twitter has largely focused its efforts on introducing tools that, for better or for worse, often put the onus on users to stave off the trolls.
“We’ve definitely made a lot of changes over the years to try to combat abuse and harassment and all those sorts of issues, and we’ve made product improvements and we’ve iterated on our policies,” Del Harvey, Twitter’s VP of trust and safety, told Fast Company. “But we’ve definitely not moved as fast as we’ve liked to, and we haven’t done as much as we really want to. And part of that is because we have been trying to be very deliberate about making sure we’re doing it right.”
Muting Trolls And Monitoring Hate
In keeping with this mission, Twitter is expanding its mute feature today to give users control over the notifications they don't want to see. Previously, the feature only let users mute accounts; now they can mute notifications about specific words, phrases, and even entire conversations. This isn't limited to content that violates Twitter's policies: If, say, you've been roped into a conversation you'd rather not be part of, you can mute it. Eventually, the hope is to expand the mute feature beyond notifications.
The other piece of today’s update builds on a change made last year that defined and prohibited “hateful conduct” on Twitter as any behavior that attacked people or promoted violence against them based on race, ethnicity, sexual orientation, gender, and other such categories. “We wanted to make sure that in the rules, it was really clear that we didn’t allow this kind of content,” Harvey said. “The problem we ran into after doing that is we had this policy prohibition, but we weren’t enforcing it as consistently or as accurately as we wanted to be able to.”
Until now, users could only report instances of hateful conduct to Twitter under the umbrella term of abuse or harassment. A new option for hateful conduct in Twitter’s reporting section will make it easier for people to report misconduct if they spot it—even if they aren’t involved—and, internally, this will make it easier for Twitter to track the groups and people who are being targeted repeatedly. To make this as effective and seamless a process as possible, Twitter put anyone who may have to enforce these policies through a global retraining program.
Extensive Training To Tackle Abuse
“We added some really extensive training on cultural and historical context of certain types of abuse,” Harvey said. “We have people all around the world who are answering these issues because we need to have 24/7 coverage, and that means that obviously not everyone answering these issues has the same background or has the same training to think about certain types of reports.”
And this training is dynamic: Harvey says Twitter has also "implemented an ongoing refresher program" that makes it easy to flag new words or phrases to its employees if they start trending.
“I think it’s often easy for people to dismiss Twitter as a faceless company,” Harvey noted. “But I can tell you, we are absolutely listening and we absolutely take the feedback we get to prioritize what we’re working on . . . I’m okay with people saying, ‘you screwed this up’ because it helps us get better. I would rather they not have to say that, absolutely, but it’s important we get this stuff right, and we have to keep getting better.”
Challenging The Echo Chamber Effect
When asked whether the ability to mute conversations at will could contribute to the echo chamber effect of curated Facebook and Twitter feeds, something that has drawn scrutiny following the election, Harvey noted that Twitter had been slow to adopt features like "mute" for that exact reason.
“We have been trying to be really thoughtful about how we can balance free expression with making sure other people’s voices aren’t silenced simply because they’re saying something you don’t like, or because you’re saying something that frightens them into not feeling comfortable speaking,” Harvey said.
“That’s why we put this in a user control perspective, as opposed to just rolling out things without there being any sort of user input. We want to make sure that, if you decide to mute a keyword, that this is a choice that you have made—that you’re the architect of your own notification destiny.”