After Facebook admits Russia bought U.S. political ads, senator calls for a closer look
Facebook told congressional investigators this week that it was able to trace ads it sold ahead of the 2016 U.S. election to a troll farm likely funded by Russia, the Washington Post reports. The roughly 3,000 ads, purchased through 470 fake accounts and running as early as 2015, totaled around $100,000 and featured polarizing content about the presidential nominees and hot-button issues. The majority of the ads did not reference either of the two presidential candidates, the company said, and one quarter of the ads were geographically targeted.
This raises a few questions, many of which have been asked for quite a while. The most pressing is how Russian entities were able to target the ads so precisely: whoever placed them would have needed specific data to reach susceptible voters in swing states.
The ability to send these ads out at scale suggests to some, including Sen. Mark Warner (D-VA), vice chairman of the Senate committee investigating Russia’s influence on the election and possible links to the Trump campaign, that there may have been coordination with others in the U.S. But this is something the company is unable to determine, according to “an official familiar with Facebook’s internal investigation” who spoke with the Post.
“I’d like to get a more comprehensive look than perhaps what we got today,” Warner told Axios today. “My hope is that we would even at some point get Facebook, Twitter, and some of the other social media firms in for a public hearing.”
Early on, CEO Mark Zuckerberg played down the influence “fake news” had on the election’s outcome; an April report described “information operations” with minimal reach, and a company spokesperson previously said Facebook had no evidence that Russia-linked entities had purchased ads. But Facebook has also refused to let researchers study raw data on election ads and their impact, citing privacy policies.
In addition to its acknowledgment of the U.S. election ads, Facebook says it has tracked similar efforts in recent months, noting in a blog post today that “we have taken action against fake accounts in France, Germany, and other countries, and we recently stated that we will no longer allow Pages that repeatedly share false news to advertise on Facebook.” That decision is one of several Facebook has made as it attempts to limit fake accounts, misinformation, and other content that it and other platforms struggle to moderate.
This new bit of transparency is part of that effort, and sheds valuable light on the company’s role in elections. But it also highlights just how little we know about how the Facebook ad ecosystem works, a topic of particular interest to advertisers as well.
Fast Company