A recent study found that marking false statements as “disputed” on social media platforms such as X (formerly Twitter) may have the unexpected effect of strengthening belief in misinformation, particularly among Donald Trump supporters.
The study, led by John Blanchard of the University of Minnesota, Duluth, and Catherine Norris of Swarthmore College, looked at the impact of disputed tags on views about election fraud.
In December 2020, the researchers surveyed 1,072 Americans to see how they reacted to Donald Trump’s false claims about election fraud on Twitter. Participants rated the truthfulness of the claims on a scale of one to seven, both before and after viewing the tweets.
A control group viewed tweets without labels, whereas an experimental group saw tweets with the “disputed” tags.
The results were unexpected. When Trump voters saw the “disputed” label, they became more likely to believe the false claims, despite their initial doubts about widespread election fraud. Rather than helping correct misinformation, the tags appeared to reinforce trust in the claims.
Biden voters, on the other hand, were largely unaffected by the tags, while third-party voters and non-voters showed a slight decrease in belief in the false claims.
Blanchard and Norris did not expect the disputed tags to fuel misinformation. They had anticipated that politically engaged Trump voters would resist corrective efforts, citing previous research showing that highly engaged individuals often dismiss fact-checking in favor of their own counterarguments. Instead, the study found that the labels produced a “backfire effect” among these voters, strengthening their belief in election falsehoods.
This finding is consistent with previous research on misinformation, which shows that directly challenging conspiracy theorists’ claims can have the opposite of the intended effect. When people feel their beliefs are under attack, they may entrench them rather than reconsider them.
The findings have far-reaching consequences for social media platforms, news outlets, and efforts to combat misinformation. Over time, platforms such as Twitter/X have experimented with various approaches for labeling and fact-checking erroneous or fraudulent material.
Twitter used “disputed” tags during the 2020 election but has since replaced them with the “community notes” feature, which allows users to peer-review content. Following Elon Musk’s takeover in 2022, the platform also relaxed its content moderation policies.
The study’s authors acknowledge several limitations, particularly the specific circumstances of the 2020 election, when the study was conducted. Conservatives held stronger anti-Twitter attitudes at the time, which may have contributed to the backfire effect. Since then, X has changed its moderation rules and reinstated far-right voices, including Trump himself, giving conservatives a more favorable view of the platform.
The study’s main takeaway is that measures to correct misinformation, such as labeling false claims, may not always succeed. In some situations, they may even reinforce the very beliefs they aim to challenge.
This highlights the complexity of combating misinformation in a highly polarized political environment, where attempts to correct falsehoods may be seen as attacks on personal autonomy or political identity.