Does fact-checking change what people think, believe, or do? Sort of. But even if fact-checkers can’t convince everyone, their work promotes accountability, challenges misinformation, and fosters media literacy.
Fact-checking has grown dramatically in recent years, fueled by concerns about misleading information circulated on social media – and by populist politicians promoting “alternative facts” and denouncing anything they don’t like as “fake news.” But does fact-checking actually work? Social scientists say it does, but only sort of: the direct impact of corrections is often very limited, especially for the kinds of groups most exposed to misinformation.
Fact checks alone are limited
Most academic research into fact-checking’s effectiveness is based on the United States. International fact-checkers including Africa Check (which covers multiple countries in sub-Saharan Africa), Chequeado (Argentina), and Full Fact (United Kingdom) have drawn similar conclusions, but they present them a bit differently, as in the blog post “Fact-checking doesn’t work (the way you think it does).”
Publishing fact-checked information has been shown generally to have a positive effect in terms of correcting inaccurate information. However, that effect is smaller in polarized contexts (such as during election campaigns) and among certain audiences (partisans with deeply held beliefs).
Whether or not verification directly changes people’s minds, though, fact-checkers work to hold elites accountable, teach audiences to handle dubious information appropriately, and promote good journalistic practice. At the same time, fact-checkers are actively developing strategies to improve the effectiveness of their work.
Facts versus beliefs
Fact-checking can improve the accuracy of audiences’ factual knowledge, but it has far less impact on their beliefs and actions, as a study by a team of Paris-based researchers including Ekaterina Zhuravskaya shows. She and her colleagues examined how well fact-checking could correct some of the false “alternative facts” propagated by Marine Le Pen in France’s 2017 presidential election. These included inaccurate and misleading claims about the number of men vs. women among refugee populations and about unemployment statistics for migrants.
Zhuravskaya and her colleagues found that fact-checking helped correct knowledge of the relevant statistics, but that it did not necessarily change people’s minds about Le Pen’s arguments. As they put it, “success in correcting factual knowledge does not translate into an impact on voting intentions.” Even armed with the facts, audiences may continue to believe arguments based on incorrect information or to act according to those prior beliefs.
Modifying beliefs is most difficult in cases where they are deeply ingrained. Social scientists often describe the challenge to fact-checkers in terms of “motivated reasoning,” a process by which audiences accept or ignore evidence depending on the conclusion they want to draw from it. At a 2017 fact-checking summit, Dan Kahan, professor of law and psychology at Yale University, explained that partisans are “never missing it when the data support their view. If [the data] doesn’t, they’re turning on the cognitive afterburners” in order to rationalize their beliefs.
They do so in part out of a desire to affirm group identity. NYU psychology professor Jay Van Bavel, for example, argues “the partisan brain” tends to value group belonging over accuracy. In polarized contexts and especially during election campaigns, the pressure to assert such group identities is even stronger than usual.
Other scholars argue that it is actually not “motivated reasoning” but “lack of reasoning” that is to blame. If people accept incorrect information, then it may simply come down to “lazy thinking” and a failure to make the effort to realign their views with inconvenient truths. Whatever the cause, there seems to be little that fact-checking in the strictest sense can do to alter deeply held beliefs that are based on inaccurate information.
One should therefore not expect fact-checking by itself to bring polarized audiences together or force partisan voters to compromise. A meta-analysis (based on data from multiple prior studies and partly summarized by the Nieman Lab) by the communications scholars Nathan Walter, Jonathan Cohen, Lance Holbert, and Yasmin Morag makes this argument rather frankly.
The authors conclude that “the effects of fact-checking on beliefs are quite weak” and even “negligible” in many real-world scenarios. They argue that, “though fact-checking can be used to strengthen preexisting convictions, its credentials as a method to correct misinformation (i.e. counter-attitudinal fact-checking) are significantly limited.”
The (alleged) danger of backfire
Worse still, it seems that fact-checking might actually “backfire” under certain conditions. The idea is that certain audiences will affirm their pre-existing beliefs about an issue even more forcefully once they have been confronted with (verified) information contradicting those views. (Here again, most such research is based on the United States, with its polarized politics and media landscape.)
However, backfire does not appear to be a very common occurrence, at least according to a study by Thomas Wood and Ethan Porter. They note that the idea of backfire has been greatly exaggerated and often confused with “any incidence of motivated reasoning.”
Backfire effects, they argue, are in fact “elusive,” and several studies have failed to find any evidence of backfire. On balance, it seems that fact-checking might backfire occasionally, but only among a hard core of strong partisans and not as a broad phenomenon affecting the general public.
The biggest danger for fact-checkers, then, is not that their work will actually make things worse, but that the time and energy they devote to correcting misinformation will simply not pay off. However, these apparent limits apply principally to fact-checking in the narrowest sense. As Walter and his colleagues note, “fact-checking is not only about changing what citizens think – it is also, to a large extent, about holding political entities accountable.”
What (else) is fact-checking about?
Fact-checkers themselves actually make the same arguments. “The idea that fact-checking can work by correcting the public’s inaccurate beliefs on a mass scale alone doesn’t stack up,” according to Africa Check, Chequeado, and Full Fact. “Fact-checking can work but not if this is all we do.”
These groups see their purpose as not only publishing findings but also acting on them to ensure accountability. Their audience is therefore not just the public at large, but politicians and others who know that they will be challenged if they make misleading statements.
Self-described “second-generation” fact-checkers like these make a point of confronting public figures over dubious claims, such as by demanding formal corrections or filing complaints with official bodies. With their tools and experience, they are also able to identify repeat offenders, explain how misinformation spreads, and lobby for changes that can slow or stop the flow of false stories.
Whether they work inside or outside media outlets, fact-checkers promote good journalistic practices. By explaining how verification works, they also sensitize audiences to the challenges journalists face.
Through media and information literacy programs, fact-checkers directly train others in the techniques of identifying and challenging misinformation. Teachers, librarians, and local media are natural partners who can help to educate wider audiences and thus multiply the impact of fact-checking work.
Africa Check, Chequeado, and Full Fact argue that much still remains to be done: the next generation of fact-checking will need to “function at internet scale, be massively collaborative, and work across international borders.”