Twitter now removing 214% more spammy accounts YoY as it ramps up efforts against bad actors
After announcing it acquired Smyte to help improve safety initiatives, Twitter outlined new ways it is fighting malicious content and spam.
Earlier this year, Twitter CEO Jack Dorsey, along with three other executives who head up the company’s product and security teams, hosted a 45-minute live Q&A with users. During the livestream, Dorsey said Twitter’s primary goal was improving the health of the platform. According to Twitter’s latest transparency numbers, the work the company has put into security measures and into reducing content from bad actors is paying off.
Twitter says that on a year-over-year basis, it is currently removing 214 percent more accounts for violating its spam policies. In May, it identified more than 9.9 million potentially spammy or automated accounts per week, up from the 6.4 million it was finding every week in December 2017. There has also been a drop in spam reports this year.
On Twitter’s company blog, Yoel Roth, head of Twitter’s platform policy team, and Del Harvey, Twitter’s VP of trust and safety, write, “The average number of spam reports we received through our reporting flow continued to drop — from an average of approximately 25,000 per day in March, to approximately 17,000 per day in May.”
Twitter is not only fighting spam accounts: the company says it also suspended more than 142,000 apps that violated its rules during Q1 of this year, with more than half suspended within one week of registration and many within hours.
“We’ve maintained this pace of proactive action, removing an average of more than 49,000 malicious applications per month in April and May. We are increasingly using automated and proactive detection methods to find misuses of our platform before they impact anyone’s experience,” say Roth and Harvey.
Smyte acquisition
According to Twitter, it is using automated processes and machine learning tools to identify and remove malicious and spammy content. Last week, Twitter announced it was acquiring Smyte, a technology firm that specialized in safety, spam and security issues.
“Their review tools and processes will be powerful additions to our own tools and technology that help us keep Twitter safe,” said Twitter’s safety team in a blog post announcing the acquisition. The transition to Twitter’s team did not go as seamlessly as Smyte had hoped, though. The same day the acquisition was announced, TechCrunch reported that Twitter immediately shut down access to Smyte’s platform — leaving Smyte’s existing customer base without access to the API. (According to TechCrunch, Twitter declined to comment on the situation but was making phone calls to Smyte customers to help rectify the problem.)
I asked a few digital ad agencies if they believe Twitter’s Smyte acquisition was in any way connected to a drop in ad dollars on the platform resulting from the platform’s spam problems. Aaron Goldman, the CMO for 4C, which is a Twitter certified partner, says his company is only seeing growth in Twitter ad budgets. According to 4C’s quarterly State of Media report, Q4 2017 and Q1 2018 both showed year-over-year increases in ad spend on the platform. (In Q4 2017, year-over-year ad spend on Twitter grew 60 percent, per 4C’s numbers.)
Akvile DeFazio, founder and president of Akvertise, a social media marketing agency, said she has shifted budgets away from Twitter in recent months, but only because of a lack of targeting options and not because of an influx of spam or bot activity on the app.
New safety measures
In addition to buying new technology designed to fight spam and improve security, Twitter also announced new safety initiatives to tackle manipulation on the platform. One is updating account metrics in real time to exclude activity from bad actors: if Twitter identifies an account as spammy or malicious, any follows, likes or retweets performed by that account will not be reflected in other accounts’ metrics.
“If we put an account into a read-only state (where the account can’t engage with others or tweet) because our systems have detected it behaving suspiciously, we now remove it from follower figures and engagement counts until it passes a challenge, like confirming a phone number,” Roth and Harvey write.
Twitter will also display a warning on read-only accounts and will not allow them to gain new followers until they pass specific challenges proving they are not bots or automated accounts.
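To make the mechanics concrete, here is a minimal sketch of how a follower count might hide, rather than delete, follows from challenged accounts. Everything in it (the Account class, the challenged flag, the visible_follower_count function) is a hypothetical illustration; Twitter has not published how its systems are implemented.

```python
# Hedged sketch: follower counts that exclude accounts currently serving a
# challenge. All names and structures here are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Account:
    handle: str
    challenged: bool = False                      # True while in a read-only state
    followers: set = field(default_factory=set)   # handles of accounts following this one

def visible_follower_count(account: Account, index: dict) -> int:
    """Count only followers not currently in a challenged, read-only state."""
    return sum(1 for h in account.followers if not index[h].challenged)

alice = Account("alice")
bot = Account("bot123", challenged=True)   # flagged as behaving suspiciously
fan = Account("fan")
alice.followers = {"bot123", "fan"}
index = {a.handle: a for a in (alice, bot, fan)}

print(visible_follower_count(alice, index))  # 1: the challenged follow is hidden
bot.challenged = False                       # e.g., the owner confirms a phone number
print(visible_follower_count(alice, index))  # 2: the follow reappears
```

The key design point, per Roth and Harvey’s description, is that the follow is hidden rather than removed, so counts recover automatically once the account passes its challenge.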
Twitter is also introducing a new sign-up procedure that requires confirmation through an email address or phone number, and it is auditing existing accounts for signs they may have been registered via an automated process. The new registration process, which will roll out later this year, aims to make it more difficult for bad actors and bots to register spam accounts.
“We will be working closely with our Trust & Safety Council and other expert NGOs to ensure this change does not hurt someone in a high-risk environment where anonymity is important,” Roth and Harvey write.
According to Twitter, these protections are now preventing more than 50,000 spammy account registrations per day. The company says such accounts are primarily created as “follow spammers,” accounts that automatically or “bulk” follow high-profile accounts. Twitter says the new safety measures will be noticeable to some people, as some follower counts may drop.
“When we challenge an account, follows originating from that account are hidden until the account owner passes that challenge. This does not mean accounts appearing to lose followers did anything wrong; they were the targets of spam that we are now cleaning up,” Roth and Harvey add.
Accounts that display suspicious activity, such as tweeting the same hashtag at high volume or repeatedly mentioning users who never reply, are now being identified by an automated process that may require the account owner to complete a reCAPTCHA or reset their password.
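For illustration only, the kind of behavioral heuristic described above can be sketched in a few lines. The thresholds and field names below are invented; Twitter has not disclosed its actual signals or cutoffs.

```python
# Hedged sketch of simple spam heuristics: the same hashtag tweeted at high
# volume, and mentions that never draw a reply. Thresholds are made up.
from collections import Counter

HASHTAG_REPEAT_LIMIT = 50     # hypothetical: same hashtag per recent window
UNREPLIED_MENTION_LIMIT = 20  # hypothetical: mentions of users who never reply

def looks_suspicious(tweets: list) -> bool:
    """Flag an account whose recent tweets match the spam heuristics above."""
    hashtag_counts = Counter(
        tag for tweet in tweets for tag in tweet.get("hashtags", [])
    )
    unreplied = sum(
        1 for tweet in tweets
        if tweet.get("mentions") and not tweet.get("got_reply", False)
    )
    return (
        any(n > HASHTAG_REPEAT_LIMIT for n in hashtag_counts.values())
        or unreplied > UNREPLIED_MENTION_LIMIT
    )

# An account flagged this way would be challenged (reCAPTCHA, password reset)
# rather than suspended outright, matching the behavior described above.
```

In practice a system like Twitter’s would rely on machine learning over many more signals; this sketch only shows why such behavior is mechanically easy to spot.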
Twitter’s overall goal
All of these efforts fall in line with Twitter’s overall goal of improving the health of the platform and follow the company’s continued work to better secure it. Most recently, Twitter modified the way conversations happen by paying more attention to users’ conduct rather than just the content of their tweets. It also launched new policies around political ads and gave US political candidates new ways to identify who they are and which races they are running in.
Twitter, along with Facebook, came under fire after the 2016 presidential election for looking the other way as foreign entities turned the social platforms into battlegrounds for interfering in US elections. But Twitter’s safety issues extend well beyond the 2016 elections; the platform has long been plagued by harassment. In 2017, Twitter laid out new policies to stop harassment, but instead of going after the accounts responsible, it only gave people being harassed ways to block or mute abusive content. The company’s latest measures feel more comprehensive and tactical in terms of actually stopping the accounts behind the abusive behavior and malicious content.