Clickbait headlines may not appeal to readers as much as assumed, and may confuse AI
UNIVERSITY PARK, Pa. – Clickbait headlines may not be as enticing to readers as once thought, according to a team of researchers. They added that artificial intelligence (AI) can also fall short when it comes to correctly determining whether a headline is clickbait.
In a series of studies, the researchers found that clickbait – headlines that often rely on language tricks to entice readers to read further – often did not perform better than traditional headlines, and in some cases performed worse.
Because fake news is a concern on social media, researchers have explored using AI to systematically identify and block clickbait. However, the studies also suggest that identifying fake news with artificial intelligence may be even more complicated than expected, said S. Shyam Sundar, the James P. Jimirro Professor of Media Effects in the Donald P. Bellisario College of Communications and co-director of the Media Effects Research Laboratory at Penn State.
“One of the ideas in fake news research is that if we can just solve the clickbait problem, we can get one step closer to solving the fake news problem,” said Sundar, who is also affiliated with the Penn State Institute for Computational and Data Sciences (ICDS). “Our studies push back on that a bit. They suggest that fake news could be a whole different ball game, and that clickbait itself is more complicated than we thought.”
In the first study, the research team randomly assigned 150 participants to read one of eight different headline types and measured whether the participants would read or share the story afterward. Participants read either a traditional headline or a headline that relied on one of seven types of clickbait: headlines featuring questions, lists, “wh” words (i.e., what, when), demonstrative adjectives (i.e., this, that), positive superlatives (i.e., best, greatest), negative superlatives (i.e., worst, least) or modals (i.e., could, should). The headlines were drawn from reliable and unreliable online sources and rated using algorithms developed to detect clickbait.
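To make the experimental design concrete, here is a minimal sketch of random assignment to the eight headline conditions. The condition names follow the feature list above, but the code itself is illustrative and not drawn from the study's materials.

```python
import random

# The eight headline conditions described above: one traditional
# (non-clickbait) control plus seven clickbait feature types.
CONDITIONS = [
    "traditional",
    "question",
    "list",
    "wh_word",
    "demonstrative_adjective",
    "positive_superlative",
    "negative_superlative",
    "modal",
]

def assign_conditions(n_participants, conditions=CONDITIONS, seed=42):
    """Randomly assign each participant to exactly one condition."""
    rng = random.Random(seed)  # seeded for reproducibility
    return {p: rng.choice(conditions) for p in range(n_participants)}

# 150 participants, as in the first study
assignments = assign_conditions(150)
```

Each participant then sees only the headline written for their assigned condition, so any difference in reading or sharing can be attributed to the headline type.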
“One of the questions we started out with was, which of these clickbait features would attract more clicks?” said Maria Molina, an assistant professor of advertising and public relations at Michigan State University, who is the lead author of the study. “We wanted to explore this in more depth, but when we analyzed the results we realized that there weren’t any significant differences and, if anything, people were more drawn to headlines without clickbait features. So from there we thought about reasons why that might have happened.”
The researchers conducted a second study to make sure other factors, such as the subject of each headline, were not confounding the results, according to Molina.
In this study, the researchers recruited 249 participants, who were randomly assigned to one of eight conditions – seven clickbait headline types and one non-clickbait headline. This time all the headlines focused on a single political topic and were written by a former journalist. Again, the team reported that the clickbait headlines did not significantly outperform traditional headlines.
According to Dongwon Lee, a professor of information science and technology at Penn State, the team conducted a third study to examine the several types of AI – or machine learning – models that were used in the study to classify headlines as clickbait or not. They found that the models often disagreed about whether a headline was clickbait.
The study found that the four AI models unanimously agreed on the clickbait classification of a headline only 47% of the time. Of the 175 headlines classified identically by all four algorithms, 139 were identified as clickbait and 36 as non-clickbait. The level of agreement between the systems also varied depending on the type of headline. For example, while the four algorithms agreed on a clickbait classification most often for the negative superlative feature, compared with the other six features, they failed to agree on a non-clickbait classification for the negative superlative or question features.
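The agreement figure above is a straightforward computation once each classifier's labels are in hand. A hedged sketch of that computation, using made-up labels rather than the study's data:

```python
def unanimous_agreement(label_matrix):
    """Fraction of items on which all classifiers give the same label.

    label_matrix: one row per headline; each row holds the 0/1
    clickbait labels assigned by the four classifiers.
    Returns (agreement rate, # unanimous clickbait, # unanimous non-clickbait).
    """
    unanimous = [row for row in label_matrix if len(set(row)) == 1]
    rate = len(unanimous) / len(label_matrix)
    n_clickbait = sum(1 for row in unanimous if row[0] == 1)
    return rate, n_clickbait, len(unanimous) - n_clickbait

# Toy example with four headlines (invented labels, not the study's):
labels = [
    [1, 1, 1, 1],  # all four classifiers say clickbait
    [0, 0, 0, 0],  # all four say non-clickbait
    [1, 0, 1, 1],  # disagreement
    [1, 1, 0, 0],  # disagreement
]
rate, n_clickbait, n_not = unanimous_agreement(labels)
print(rate)  # 0.5 -> the study reported 47% over its full headline set
```

On the study's data, the same calculation over all headlines would yield the reported 47% rate and the 139/36 clickbait/non-clickbait split among the 175 unanimous cases.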
The performance of AI and machine learning models tends to vary, said Lee, who is an ICDS affiliate. When the headlines flagged by each model were rated against the number of clicks they received, three of the four models consistently showed that demonstrative adjectives, lists and “wh” words attracted more reader engagement than non-clickbait headlines.
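A per-feature engagement comparison like the one Lee describes can be sketched as grouping click counts by headline feature and ranking the group means. The click counts below are invented purely for illustration.

```python
from statistics import mean

# Hypothetical click counts per headline, grouped by the feature a
# classifier assigned to it (numbers invented for illustration).
clicks_by_feature = {
    "demonstrative_adjective": [14, 18, 11],
    "list": [16, 13, 15],
    "wh_word": [12, 17, 14],
    "traditional": [9, 10, 8],
}

# Rank features by mean clicks, highest first, mirroring the
# comparison against non-clickbait ("traditional") headlines.
ranked = sorted(
    clicks_by_feature,
    key=lambda f: mean(clicks_by_feature[f]),
    reverse=True,
)
```

With these invented numbers, the traditional headlines land at the bottom of the ranking, which is the pattern three of the four models showed in the study.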
“Because these machine learning models are the product of the last few decades, we have many variations – some are very simple, some work very quickly, and others are more complicated and require a lot of resources,” said Lee. “It’s like putting together a desk: you can get the job done with a screwdriver that costs $5, but you can probably get it done faster with an electric drill that costs $50. So, depending on the inherent power of these machine learning models and the training dataset provided to them, they tended to have different levels of performance and different pros and cons.”
Together, these findings cast doubt on using AI to detect fake news based on headlines alone.
“People were putting a lot of effort into using clickbait headlines as part of fake news detection algorithms, but our studies challenge that assumption,” Sundar said.
He added that the studies also suggest that programmers who develop algorithms to detect fake news may need to adapt continually as producers of fake news – and consumers of media – become savvier about the elements that make up fake news.
“It’s becoming a bit of a cat-and-mouse game,” Sundar said. “People who write fake news may become aware of the features that detectors flag and change their strategies. News consumers may also become desensitized to certain features if they see those headlines all the time. So the detection of fake news must constantly evolve along with both the readers and the creators.”
The researchers suggested that the past popularity of clickbait headlines may be one reason the headlines failed to engage readers in their studies. Clickbait may be so ubiquitous in today’s media that it no longer stands out or draws the attention that traditional headlines do.
The popularity of clickbait has also sparked more media scrutiny, which may have made study participants warier of clickbait headlines, Molina added.
The research team, who presented their findings at CHI 2021, the premier conference for human-computer interaction research, also included Thai Le, a doctoral student in information science and technology at Penn State; and Md Main Uddin Rony, a doctoral student in information science, and Naeemul Hassan, an assistant professor of journalism and information science, both at the University of Maryland.
The National Science Foundation supported this work.