What threatens public health more, a deliberately false Facebook post about tracking microchips in the COVID-19 vaccine that is flagged as misinformation, or an unflagged, factual article about the rare case of a young, healthy person who died after receiving the vaccine?
According to Duncan J. Watts, Stevens University Professor in Computer and Information Science at Penn Engineering and Director of the Computational Social Science (CSS) Lab, along with David G. Rand, Erwin H. Schell Professor at MIT Sloan School of Management, and Jennifer Allen, 2024 MIT Sloan School of Management Ph.D. graduate and incoming CSS postdoctoral fellow, the latter is much more damaging. “The misinformation flagged by fact-checkers was 46 times less impactful than the unflagged content that nonetheless encouraged vaccine skepticism,” they conclude in a new paper in Science.
Historically, research on “fake news” has focused almost exclusively on deliberately false or misleading content, on the theory that such content is much more likely to shape human behavior. But, as Allen points out, “When you actually look at the stories people encounter in their day-to-day information diets, fake news is a minuscule percentage. What people are seeing is either no news at all or mainstream media.”
“Since the 2016 U.S. presidential election, many thousands of papers have been published about the dangers of false information propagating on social media,” says Watts. “But what this literature has almost universally overlooked is the related danger of information that is merely biased. That’s what we look at here in the context of COVID vaccines.”
In the study, Watts, one of the paper’s senior authors, and Allen, the paper’s first author, used thousands of survey results and AI to estimate the impact of more than 13,000 individual Facebook posts. “Our methodology allows us to estimate the effect of each piece of content on Facebook,” says Allen. “What makes our paper really unique is that it allows us to break open Facebook and actually understand what types of content are driving misinformed-ness.”
One of the paper’s key findings is that “fake news,” or articles flagged as misinformation by professional fact-checkers, has a much smaller overall effect on vaccine hesitancy than unflagged stories that the researchers describe as “vaccine-skeptical,” many of which focus on statistical anomalies that suggest that COVID-19 vaccines are dangerous.
“Obviously, people are misinformed,” says Allen, pointing to the low vaccination rates among U.S. adults, in particular for the COVID-19 booster vaccine, “but it doesn’t seem like fake news is doing it.” One of the most viewed URLs on Facebook during the time period covered by the study, at the height of the pandemic, for instance, was a true story in a reputable newspaper about a doctor who happened to die shortly after receiving the COVID-19 vaccine.
That story racked up tens of millions of views on the platform, multiples of the combined number of views of all COVID-19-related URLs that Facebook flagged as misinformation during the time period covered by the study. “Vaccine-skeptical content that’s not being flagged by Facebook is potentially lowering users’ intentions to get vaccinated by 2.3 percentage points,” Allen says. “A back-of-the-envelope estimate suggests that translates to approximately 3 million people who might have gotten vaccinated had they not seen this content.”
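The back-of-the-envelope arithmetic behind that estimate can be sketched as follows. Note that the size of the exposed user base is an assumption for illustration only; the paper's exact inputs may differ.

```python
# Back-of-the-envelope sketch of the researchers' estimate.
# Assumption (not from the article): roughly 130 million U.S. adult
# Facebook users were exposed to vaccine-skeptical content.
effect_pp = 2.3            # estimated drop in vaccination intent, in percentage points
exposed_users = 130e6      # assumed exposed user base (hypothetical figure)

affected = exposed_users * effect_pp / 100
print(f"~{affected / 1e6:.1f} million people")  # ~3.0 million people
```

With an assumed exposed population of about 130 million, a 2.3-percentage-point drop in vaccination intent works out to roughly the 3 million people the researchers cite.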
Although the fake news identified by fact-checkers proved more persuasive per view in the survey results, far more users were exposed to the factual, vaccine-skeptical articles with clickbait-style headlines, so the overall impact of the latter outstripped that of the former.
“Even though misinformation, when people see it, can be more persuasive than factual content in the context of vaccine hesitancy,” says Allen, “it is seen so little that these accurate, ‘vaccine-skeptical’ stories dwarf the impact of outright false claims.”
As the researchers point out, being able to quantify the impact of misleading but factual stories points to a fundamental tension between free expression and combating misinformation, as Facebook would be unlikely to shut down mainstream publications. “Deciding how to weigh these competing values is an extremely challenging normative question with no straightforward solution,” the authors write in the paper.
Allen points to content moderation that involves the user community as one possible means to address this challenge. “Crowdsourcing fact-checking and moderation works surprisingly well,” she says. “That’s a potential, more democratic solution.”
With the 2024 U.S. Presidential election on the horizon, Allen emphasizes the need for Americans to seriously consider these tradeoffs. “The most popular story on Facebook in the lead-up to the 2020 election was about military ballots found in the trash that were mostly votes for Donald Trump,” she notes. “That was a real story, but the headline did not mention that there were nine votes total, seven of them for Trump.”
This study was conducted at the University of Pennsylvania’s School of Engineering and Applied Science, the Annenberg School for Communication and the Wharton School, along with the Massachusetts Institute of Technology Sloan School of Management, and was supported by funding from Alain Rossmann.
Journal: Science
DOI: 10.1126/science.adk3451
Method of Research: Data/statistical analysis
Subject of Research: Not applicable
Article Title: Quantifying the Impact of Misinformation and Vaccine-Skeptical Content on Facebook
Article Publication Date: 31-May-2024