Crowdsourced fact-checking at Twitter: How does the crowd compare with experts?

Saeed, Mohammed; Traub, Nicolas; Nicolas, Maelle; Demartini, Gianluca; Papotti, Paolo
CIKM 2022, 31st ACM International Conference on Information and Knowledge Management, 17-21 October 2022, Atlanta, GA, USA

Fact-checking is one of the most effective solutions for fighting online misinformation. However, traditional fact-checking requires scarce expert human resources and therefore does not scale well on social media, where new content to be checked arrives continuously. Crowdsourcing-based methods have been proposed to tackle this challenge, as they can scale at a lower cost, but, while they have been shown to be feasible, they have so far only been studied in controlled environments. In this work, we study the first large-scale crowdsourced fact-checking effort deployed in practice, started by Twitter with the Birdwatch program. Our analysis shows that crowdsourcing may be an effective fact-checking strategy in some settings, even comparable to results obtained by human experts, but does not lead to consistent, actionable results in others. We processed 11.9k tweets verified by the Birdwatch program and report empirical evidence of i) differences in how the crowd and experts select content to be fact-checked, ii) how the crowd and the experts retrieve different resources to fact-check, and iii) the edge the crowd shows in fact-checking scalability and efficiency compared to expert checkers.

DOI:
https://doi.org/10.1145/3511808.3557279
Type:
Conference
City:
Atlanta
Date:
2022-10-17
Department:
Data Science
Eurecom Ref:
6994
Copyright:
© ACM, 2022. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in CIKM 2022, 31st ACM International Conference on Information and Knowledge Management, 17-21 October 2022, Atlanta, GA, USA https://doi.org/10.1145/3511808.3557279

PERMALINK: https://www.eurecom.fr/publication/6994