On Ukrainian war TikTok, even experts struggle to distinguish truth from hoaxes

By Mark Sullivan

March 12, 2022

A new report from Harvard researchers finds that TikTok remains a rich source of misinformation and disinformation about Ukraine—and explains why it spreads so easily.

The same tools and features that have brought the funny (and sometimes the genius) out of regular people on TikTok can also be used to manipulate content to spread misinformation at scale, the research suggests.

The report, called “TikTok, The War on Ukraine, and 10 Features That Make the App Vulnerable to Misinformation,” comes from the Technology and Social Change Project (TaSC) at Harvard’s Shorenstein Center, which is led by noted misinformation researcher Joan Donovan.

The Harvard group began monitoring and cataloging TikTok posts about Ukraine on Feb. 24, 2022, the day Russia invaded the country. As of March 9, TikToks tagged #ukraine had been viewed 26.8 billion times, the researchers note.

It’s often hard for users, even seasoned journalists, to discern the difference between truth and rumor on TikTok, say the researchers. “We’re all familiar with tools used to manipulate media, such as deepfakes, but this app is unique in that it has a built-in video editing suite of tools that one could argue encourages users to manipulate the content they’re about to upload,” research fellow Kaylee Fagan, one of the authors of the report, tells Fast Company. “And the app really does encourage the use of repurposed audio, so people can fabricate an entire scene so that it looks like it may have been captured in Ukraine.”

Plus, it’s very hard to trace the original source and date of a video or soundtrack. Compounding the problem is the fact that users are practically anonymous on TikTok. “Anyone can publish and republish any video, and stolen or reposted clips are displayed alongside original content,” the researchers wrote.

The Harvard researchers also note that while TikTok has shut off its service for Russian users, you can still find propaganda from the accounts of state-controlled media, such as RT, on the app. You can also find pro-Russia videos posted by people living outside of Russia, the report stated.

The reason this is all so worrisome isn’t just that misleading TikToks often get wide exposure. It’s that both misinformation (in which users unwittingly publish falsehoods) and disinformation (in which operatives post falsehoods to manipulate public opinion) make it hard for the public to differentiate between legitimate and misleading narratives about an event, such as an invasion. As the weeks go by, people grow tired of trying to dismiss the lies and find the truth. Exhausted and confused, they become politically neutralized.

Propagandists don’t have to prove a point or win over majorities; they simply have to spread a critical mass of doubt. As the researchers put it: “[T]hese videos continue to go viral on TikTok, raking in millions of views. This results in a ‘muddying of the waters,’ meaning it creates a digital atmosphere in which it is difficult—even for seasoned journalists and researchers—to discern truth from rumor, parody, and fabrication.”

The Harvard report comes as another massive social platform, Facebook, finds itself embroiled in another content-moderation controversy. Reuters reported Thursday that Facebook would alter its community conduct rules for users in Ukraine, allowing them to post death threats against Russian soldiers. The company didn’t deny the report, and struggled to explain the policy. On Friday, Russian authorities called for Facebook’s parent company Meta to be labeled an extremist organization, and announced plans to restrict access to Meta’s Instagram app in Russia.

Meta founder and CEO Mark Zuckerberg once hoped to take a hands-off approach to moderating speech on the Facebook platform, even insisting that politicians should be able to lie in Facebook ads. But his free-speech ideal (which also happens to entail a much lighter content-moderation lift for Facebook) has proven harmful, forcing the company to increasingly restrict certain kinds of speech on its platform, including misinformation about the coronavirus and COVID-19 vaccines.
