Big Tech’s Half-Hearted Response To Fake News And Election Hacking
Every day a new front emerges in Big Tech’s battle against fake news. Signs of trouble first reared their heads during the election, when hyper-partisan misinformation began materializing on Facebook. Months later it emerged that many of these sites had been weaponized in a larger misinformation campaign spearheaded by external players, including the Russian government.
Amid the foreign political intrigue and algorithmic hijinks, spending on these platforms continues to flourish: by one estimate, digital ad platforms took in $1.4 billion during the 2016 election.
Facebook’s ad platform has been a particular cause for concern lately. Facebook has admitted that a Kremlin-backed media company spent $100,000 on a U.S. election-focused ad campaign, but it has refused to divulge further details about how the campaign worked, citing privacy policies. The New York Times and The Daily Beast have described efforts by fake users on Twitter and Facebook to foment political anger. The outrage over the ads overshadowed another report last week suggesting that Facebook has been inflating its metrics–an issue that, like fraudulent ads, has long swirled around its advertising business.
On Thursday ProPublica exposed another problem at the intersection of algorithms, ad targeting, and the complicated matter of “free speech”: reporters were able to target ads at users associated with antisemitic categories, including “Jew haters.”
Advertisers are given relatively free rein on digital ad platforms, and this has allowed dark things to happen at the fringes. The big tech juggernauts have historically faced up to the issue by not facing up to it: they admit a problem exists and say a crack team is working on it–or they introduce some small algorithmic fix–and months later it emerges that the issue is even more widespread. Lately, the big online platforms have been looking for more ways to address these problems, or at least to seem like they are.
For example, Facebook announced on Thursday that it would no longer let advertisers modify the headlines of news articles they share. Small as this change may seem, it closes off one way that third-party actors can use already-published work to spread misinformation. Similarly, earlier this week Facebook formalized rules about the types of content advertisers can monetize–notably, highly partisan subjects can no longer be moneymakers.
Google–which says it has seen no evidence of Kremlin-bought U.S. political ads–has also begun implementing fact-checking tools for searches on certain highly political subjects. At a more microscopic level, the company has been trying to crack down on fraudulent advertisers.
“We try to the best of our ability to use technology saddled with people to solve these problems,” Heather Adkins, Google’s director of information security and privacy, said at a forum at Harvard’s Kennedy School of Government earlier this week, part of a new initiative aimed at fighting election hacking and propaganda that is receiving its initial funding from Facebook. “We are still very much learning what kind of technologies and what kinds of strategies work here,” she added. Adkins is part of the bipartisan project at Harvard’s Belfer Center aimed at fighting cyber attacks and protecting elections (Facebook’s chief security officer Alex Stamos is part of it too).
Video: A conversation on “The Digital Threat to Democracy” at Harvard this week.
Twitter–which Donald Trump credits with helping him win the election–has implemented similar safeguards against fake news, as it too became a platform for rampant misinformation. These include fake-news flags and algorithms that target spam accounts. But despite its best efforts, Twitter is still riddled with bots, which have proven to be useful tools for spreading fake stories and attacking other users. (The company does, however, generally make its data more available to researchers, which makes the problem easier to track and understand.)
For Facebook, Google, and Twitter, the fight against fake news seems to be two-pronged: disincentivize the targeted content and provide avenues to correct factual inaccuracies. Both are surface fixes, however, akin to putting caulk on the Grand Canyon.
And despite the grand gestures, both approaches are reactive. They don’t aim at understanding how the problem became so prevalent, or at attacking the systemic issue. Instead, these advertising giants implement new mechanisms by which people can report one-off issues–leaving the platforms playing cat-and-mouse with fake news–all while offering no clear glimpse into their opaque ad platforms.
And that’s the real core of the issue: Facebook, Google, and Twitter have little incentive to overhaul their shadowy ad businesses. Even when bad actors are bilking the system, the platforms are still making money.
Keeping Data Under Wraps
While they nod toward fixing the misinformation problem, the tech giants refuse to open up about these issues, citing the privacy of their clients and their own proprietary ad systems. “Advertisers consider their ad creatives and their ad targeting strategy to be competitively sensitive and confidential,” Rob Sherman, Facebook’s deputy chief privacy officer, told Reuters. “From our perspective, it’s confidential information of these advertisers.”
Still, the call for more transparency is growing louder. Members of the U.S. Federal Election Commission are calling for better reporting from Facebook following the Russian ad-buy revelations. Lawmakers have called for Facebook and Twitter to testify before Congress. Even Steve Bannon, who led Trump’s Facebook- and Google-assisted digital campaign, has advocated for both companies to be treated like utilities.
Intentionally or not, the companies don’t seem to grasp the gravity of the issue. At the Harvard event, for instance, Adkins was asked about information consumption and how platforms, with their algorithmically determined recommendations, can improve users’ information diets. She pushed back against the idea that people only consume content within their own bubbles. “They also do seek that information outside,” she said, describing those who consume only content in their own bubbles as a “microcosm”–“it might not be the majority of people,” she said.
Yet research somewhat disputes that. Researchers at Yale, for instance, looked into the efficacy of the “fake news” labels Facebook uses–the ones that say a story is “disputed by third party fact-checkers”–and found the labels to be ineffective and even to backfire at times. That is, certain users considered articles bearing the labels to be more accurate. These people, in a sense, already had their worldviews set, and when information from a platform contradicted them, it only further affirmed their beliefs.
The difficult truth is that there is no clear way to fight the huge problem of misinformation. Though Russia may have used “fake news” as a way to meddle in U.S. affairs, the problem existed well before Putin became interested. Political campaigns have long used micro-targeting strategies to win over undecided voters–though those efforts never involved misinformation at this scale, or such precise tools for targeting and spreading messages.
While it may seem noble that the big tech companies are taking up the charge, their current attempts will likely have little effect. The problem rests in the very advertising systems these companies created. No amount of content tagging or de-incentivizing of ad categories is going to stop the beast unless a bigger upheaval takes root.