Fact checking is the task of determining whether a given claim holds. Several algorithms have been developed to check facts against reference information in the form of knowledge bases. We demonstrate BUCKLE, an open-source benchmark for comparing and evaluating fact checking algorithms on a level playing field across a range of scenarios. The demo is centered around three main lessons. First, we show how changing the properties of the training and test facts can significantly influence the performance of the algorithms. Second, we show the role of the reference data. Finally, we discuss the performance of algorithms designed on different principles and assumptions, as well as of approaches that address the link prediction task in knowledge bases.
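To make the task concrete, the following is a minimal sketch of fact checking against a knowledge base of (subject, predicate, object) triples. The knowledge base, the claims, and the function name are illustrative assumptions, not part of the BUCKLE benchmark or any specific algorithm it evaluates:

```python
# Toy knowledge base of (subject, predicate, object) triples.
# These facts are illustrative only, not from the BUCKLE benchmark.
KB = {
    ("Paris", "capitalOf", "France"),
    ("Berlin", "capitalOf", "Germany"),
}

def check_fact(claim, kb):
    """Return True if the claimed triple is supported by the knowledge base.

    Real fact checking algorithms go far beyond exact lookup (e.g., path
    mining or embedding-based link prediction); this hypothetical helper
    only shows the shape of the input and output.
    """
    return claim in kb

true_claim = ("Paris", "capitalOf", "France")
false_claim = ("Paris", "capitalOf", "Germany")
print(check_fact(true_claim, KB))   # a fact present in the KB
print(check_fact(false_claim, KB))  # a claim the KB does not support
```

In practice, the algorithms compared by the benchmark must also handle claims whose truth is not explicitly recorded in the knowledge base, which is where the link prediction methods mentioned above come in.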
BUCKLE: Evaluating fact checking algorithms built on knowledge bases
VLDB 2019, 45th International Conference on Very Large Data Bases, 26-30 August 2019, Los Angeles, CA, USA
© ACM, 2019. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in VLDB 2019, 45th International Conference on Very Large Data Bases, 26-30 August 2019, Los Angeles, CA, USA http://dx.doi.org/10.14778/3352063.3352069
PERMALINK: https://www.eurecom.fr/publication/6021