Here’s what social media platforms could do to prevent misinformation before the midterms

By Mark Sullivan

With only weeks to go until the midterm elections, all eyes are on the social media giants.

Social networks, which enjoy protections from lawsuits stemming from user-generated content, are the go-to media channel for spreading political misinformation and disinformation (the former being falsehoods spread unwittingly, or half-wittingly; the latter, falsehoods spread knowingly to affect a political outcome).

Since the 2016 election, when Russian operatives successfully seeded Facebook with ads and posts designed to sow division among U.S. voters, threats to U.S. elections have evolved. In 2022, experts say, malign actors spend more time and resources on operating within the information space to mislead and disrupt than on executing cyberattacks against election or communications systems. Another shift from 2016 is that most misinformation now originates from domestic groups rather than foreign ones, although some researchers point out that domestic and foreign state-sponsored groups with aligned political interests often work together. The common thread in all of this is social networks, which continue to be weaponized to spread fear, uncertainty, and doubt among the electorate.

In 2020 and 2021, right-wing operatives used Facebook and other platforms to spread the Big Lie that the 2020 presidential election was fraudulent and its winner illegitimate. More recently, researchers at New York University found it easy to run ads on both Facebook and TikTok containing blatantly false information about the logistics (voting times and places) and the credibility of the upcoming midterm elections.

“Deepfakes or misuse of information . . . cannot really influence people or change outcomes of elections without propagating on social networks,” says misinformation expert Wael AbdAlmageed, a professor at the University of Southern California. “My biggest fear is the social networks and how they actually handle disinformation and/or misinformation on their platforms.”

Below, we’ve outlined how the major social networks might better protect election integrity ahead of the 2022 midterms, based on conversations with misinformation and security experts.

Detect and fact-check repurposed images 

USC’s AbdAlmageed believes that of all the possible forms of misinformation we may see before the midterms, bad actors will most likely try an old trick: grabbing an old image and mislabeling it to harm a candidate or call the integrity of the election into question. The usual approach is to take a legitimate photograph from a past news story and add text claiming the photo is from a current event.

In a well-known example, the right-wing group Turning Point USA posted an image of empty grocery store shelves with the caption, “YUP! #SocialismSucks.” But the photo was actually of a grocery store in Japan just after a major earthquake in 2011. The shelves weren’t empty because of socialism, but because the food items had literally been shaken off them.

This technique is attractive to purveyors of misinformation because it’s cheap and it requires no special technical skill. Social networks, which rely on powerful AI models to detect toxic content, should train the models to recognize old images that have been recontextualized with new text.

AbdAlmageed says social networks should be able to detect and block the publication of old, repurposed photos that have been copyrighted by news organizations. They already have the technology, he argues: YouTube, for example, uses it to prevent users from uploading protected content.

“They already have tech and software to index content, even audio files and video files,” he says. “If somebody is creating a new video file, and they use a piece of music in it that violates copyright, the platforms can immediately detect that and prevent the video from even being published.”
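
To make the idea concrete, here is a minimal sketch of how that kind of image fingerprinting could flag a repurposed photo. It uses the open-source Python library imagehash as a stand-in for the proprietary matching systems the platforms actually run; the file names, the tiny archive, and the distance threshold are illustrative assumptions, not details any platform has disclosed.

from PIL import Image
import imagehash

# Hypothetical archive of perceptual hashes for previously published news photos.
archive = {
    "japan_quake_2011.jpg": imagehash.phash(Image.open("japan_quake_2011.jpg")),
}

def find_repurposed_source(upload_path, max_distance=5):
    """Return the archived photo this upload appears to duplicate, if any.

    Perceptual hashes change little under resizing, recompression, or light
    cropping, so a small Hamming distance suggests the same underlying image.
    """
    upload_hash = imagehash.phash(Image.open(upload_path))
    for name, archived_hash in archive.items():
        if upload_hash - archived_hash <= max_distance:  # Hamming distance
            return name
    return None

# A match would not mean the post is false; it would route the upload to review,
# where the new caption can be checked against the photo's original context.
match = find_repurposed_source("viral_empty_shelves.jpg")
if match:
    print(f"Upload appears to reuse archived photo: {match}")

A production system would index billions of hashes rather than a handful, but the matching step itself is the same well-understood technology AbdAlmageed points to.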

Be prepared for election result deniers

Even though the 2020 election was arguably the most secure in U.S. history, claims by former president Donald Trump that the process was fraudulent, and the fact that so many believed those unfounded claims, have served to normalize the idea that elections are highly susceptible to fraud.

In September the New York Times asked gubernatorial and Senate candidates in key battleground states whether they would accept the election results if they lost. Six candidates wouldn’t commit to accepting the results, while another six didn’t respond to the question. All of them had, in public statements, preemptively cast doubt on the validity of their state’s election system. Just in case. Just like Trump did. This is a dangerous game, and social networks could play a key role in whether such doubts harden into outright denial of the results.

“They’re not saying they won’t necessarily not accept the election, although some of them have, but they’re creating tremendous uncertainty about it,” Cynthia Miller-Idriss, professor at American University and director of its Polarization and Extremism Research Innovation Lab, told PBS. “And, to be honest, uncertainty is one of the things that we know creates vulnerability to conspiracy theories.”

It’s on social networks that such conspiracy theories take hold and spread. Social networks must be quick to label and demote posts or ads that baselessly claim that an election result was fraudulent.

Meta’s policy on posts denying election results is to label and demote them (limit their virality), but not delete them. “We have argued for several years that Facebook undercuts its fact-checking efforts by failing to remove demonstrably false content, such as the strain of election denialism infecting the Republican Party,” wrote Paul Barrett, deputy director of the Center for Business and Human Rights at NYU’s Stern School of Business, in a recent report. “This is unfortunate because removal makes a more definitive statement.”

The same could be said of Twitter’s approach. The company does not remove Tweets containing such demonstrably false claims about election results. It labels such Tweets with links to credible information, excludes them from promotion and recommendation, and warns users who try to like or share them. Twitter says it will prevent users from sharing or liking misleading Tweets that carry a “potential for harm.”

Root out coordinated disinformation campaigns

The FBI said in early October that it has detected actors associated with the Russian and Chinese governments posting on mainstream U.S. social media platforms, including Facebook and Twitter, in an effort to sow division and raise doubts about the electoral process. These groups now work mainly by selectively amplifying existing misinformation on the platforms rather than by creating original content (such as divisive posts and ads), as the Russians did in the 2016 election, the FBI said.

In a recent report, the security research group Recorded Future identifies one specific state-affiliated group it believes is trying to influence voters in the midterms. “We are almost certain that personas linked with the (Russian) Internet Research Agency (IRA)-associated Newsroom for American and European Based Citizens (NAEBC) are coordinating renewed attempts to engage in malign influence targeting U.S. conservative audiences ahead of the 2022 U.S. midterm elections via alternative social media platforms,” the group wrote in the report. By “alternative” platforms, the researchers mean conservative sites such as Gettr, Gab, and Truth Social.

Meta recently said it shut down 81 Facebook accounts, 8 Facebook pages, 2 Instagram accounts, and at least one group it believes may be associated with the Chinese government. The accounts, it said, posed as both conservatives and liberals and posted content on divisive issues, including gun rights and abortion rights. Meta said it couldn’t say with certainty that the accounts indeed originated in China.

Meta is quick to report such successes, but with nearly 3 billion people on its platform and a finite number of humans to monitor content, some malign influence operations are bound to go undetected. “[R]ecent evidence shows that Facebook’s approach still needs work when it comes to managing accounts that spread misinformation, flagging misinformation posts, and reducing the reach of those accounts and posts,” wrote University of Arizona professor of communication Dam Hee Kim in a recent piece in The Conversation.

And even when the sources of misinformation are detected, they’re not always silenced. “In April 2020, fact-checkers notified Facebook about 59 accounts that spread misinformation about COVID-19,” Kim wrote. “As of November 2021, 31 of them were still active.”

Rid political ads of flatly false claims

While platforms such as Twitter, TikTok, LinkedIn, and Pinterest don’t sell political ads, Meta and Google, and their various platforms, continue to do so. And those platforms do not rigorously fact-check political ads: it’s perfectly legal, and within their rules, for one candidate to run ads about an opponent that are wildly misleading, or even flatly false.

“[P]olitical ads are considered political speech, and First Amendment law protects political speech above all other types of speech,” wrote First Amendment lawyer Lata Nott for the American Bar Association. “The rationale behind this is that voters have a right to uncensored information from candidates, which they can then evaluate themselves before making their decisions at the ballot box.” 

Facebook said in a blog post that its policy on political ads is based on the principle that “people should be able to hear from those who wish to lead them, warts and all, and that what they say should be scrutinized and debated in public.”

This approach was put to the test in 2019, when the Trump campaign ran ads on Facebook implying that then-candidate Joe Biden had conditioned the delivery of a billion dollars in U.S. aid on the Ukrainian government firing a prosecutor who was investigating an energy company with ties to Hunter Biden. The ad’s key assertion has been debunked. At the time, Facebook refused to remove the ad, arguing that it was protecting “free expression” and that political speech is already highly scrutinized.

Meta has not changed this policy, although it did limit somewhat how finely campaigns can target ads to specific types of voters.

TikTok tries to avoid these thorny free-speech issues by simply not selling political ad inventory. Even so, NYU researchers found that TikTok approved 90% of the falsehood-containing ads they attempted to place on the platform. Facebook approved “a significant number” of the experimental ads, while YouTube detected and rejected every misinformation ad the researchers tried to place.

“YouTube’s performance in our experiment demonstrates that detecting damaging election disinformation isn’t impossible,” wrote Laura Edelson, the NYU researcher who led the project, in a research report. “But all the platforms we studied should have gotten an ‘A’ on this assignment.”

“We call on Facebook and TikTok to do better: stop bad information about elections before it gets to voters,” Edelson concluded.

Sixty activist groups, including the National Organization for Women, GLAAD, and The Tech Accountability Project, recently sent a letter (shared with Axios) to the CEOs of Meta, TikTok, Twitter, and YouTube, sounding the alarm about election misinformation.

“[I]t remains painfully clear that social media companies are still failing to protect candidates, voters, and elected officials from disinformation, misogyny, racism, transphobia, and violence,” the groups wrote.

Fast Company
