You won’t read this life-changing study about clickbait headlines

By Connie Lin

June 22, 2021
 
…Are you here? Did you click?

According to a new study from Penn State University, you probably didn’t, which means you’re not reading this right now.

Or if you did, that means I may be about to offer you some faulty information.

Confused? So is artificial intelligence.

All right, enough of that, here’s the explanation: Researchers at Penn State’s Media Effects Laboratory and Institute of Computational and Data Sciences recently found that clickbait headlines—titles that rely on “linguistic gimmicks” such as questions, superlatives, and curiosity gaps to encourage readers to click—aren’t any more effective than regular headlines, and in some cases perform even worse.

In two consecutive experiments, the study's authors asked groups of 150 to 250 respondents to read one of eight headlines, written in both traditional and clickbait styles, and then observed whether they followed through on reading the article or shared the story afterward. The first test used artificial intelligence to source a variety of headlines classified as clickbait from both reliable and unreliable news websites, and the second used headlines centered on a single political topic and written by a former journalist. According to the team, their clickbait titles were characterized by seven main features: questions, lists, "Wh" words (e.g., what, when), demonstrative adjectives (this, that), positive superlatives (best, greatest), negative superlatives (worst, least), and modals (could, should).

In both cases, clickbait headlines surprisingly “did not dramatically outperform” the traditional headlines. Study authors suggest this could be due to the popularity of clickbait in recent years, as it’s become so ubiquitous that it fails to stand out today.

But regardless of why, it’s a consequential finding in the era of fake news. Clickbait—which can facilitate the rapid-fire spread of politically and socially charged articles on platforms like Facebook and Twitter—can influence millions of minds in a matter of hours, as viral stories take on a life of their own—whether fact-checked or not.

“One of the ideas in fake news research is that if we can just solve the clickbait problem, we can get closer to solving the fake news problem,” S. Shyam Sundar, a Penn State professor, said. “Our studies push back on that a little bit. They suggest that fake news might be a completely different ballgame, and that clickbait is itself more complicated than we thought.”

To that end, researchers went a step further, probing how well artificial intelligence systems—such as the one used to collect headlines for their earlier experiment—are able to identify clickbait. Results showed that AIs frequently diverged on whether a title was clickbait or not, coming to the same conclusion just 47% of the time.

That could mean such AIs, trained on patterns of past human behavior, will need to continually adapt as that behavior shifts. "The people who write fake news may become aware of the characteristics that are identified as fake news by the detectors . . . News consumers may also just become numb to certain characteristics if they see those headlines all the time. So, fake news detection must constantly evolve," said Sundar. "It becomes a bit of a cat-and-mouse game."

Fast Company