VLDB 2020, 46th International Conference on Very Large Data Bases, 31 August-4 September 2020, Tokyo, Japan (Virtual Conference) / To be published in PVLDB (Proceedings of the VLDB Endowment), Vol.13, N°12, 2020
We demonstrate Scrutinizer, a system that supports human fact checkers in translating text claims into SQL queries on an associated database. Scrutinizer coordinates teams of human fact checkers and reduces their verification time by proposing queries or query fragments over relevant data. These proposals are based on claim text classifiers that gradually improve during the verification of multiple claims. In addition, Scrutinizer uses tentative execution of query candidates to narrow down the set of alternatives. The verification process is controlled by a cost-based optimizer that plans effective question sequences to verify specific claims and prioritizes claims for verification. In this demonstration, we first show how our system can assist users in verifying statistical claims. We then let users come up with new, unseen claims and show how the system effectively learns new queries with little user feedback.
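To illustrate the idea of tentative execution described in the abstract, the following sketch shows one way candidate queries could be pruned by running them and comparing their results against the number stated in the claim. This is an illustrative sketch only, not the authors' implementation; the table, columns, claim, and matching tolerance are all hypothetical.

```python
# Illustrative sketch (not the Scrutinizer implementation): prune candidate
# SQL queries by tentative execution, keeping those whose result matches
# the number stated in the claim. All names and data are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE energy (country TEXT, year INTEGER, coal_twh REAL)")
conn.executemany(
    "INSERT INTO energy VALUES (?, ?, ?)",
    [("Japan", 2019, 326.0), ("Japan", 2020, 310.0), ("France", 2019, 8.0)],
)

# Hypothetical claim: "Japan produced 326 TWh of coal power in 2019."
claimed_value = 326.0

# Candidate queries, e.g. proposed by a claim-text classifier.
candidates = [
    "SELECT coal_twh FROM energy WHERE country='Japan' AND year=2019",
    "SELECT coal_twh FROM energy WHERE country='Japan' AND year=2020",
    "SELECT SUM(coal_twh) FROM energy WHERE year=2019",
]

def matches(query, value, tol=0.01):
    """Tentatively execute a candidate and compare it to the claimed value."""
    row = conn.execute(query).fetchone()
    return row is not None and row[0] is not None and abs(row[0] - value) <= tol

# Only the query selecting Japan's 2019 figure survives.
surviving = [q for q in candidates if matches(q, claimed_value)]
```

Tentative execution thus complements the classifier: even when several query structures look plausible for a claim, mismatching results eliminate most alternatives before a human checker is consulted.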
Type:
Poster / Demo
City:
Tokyo
Date:
2020-08-31
Department:
Data Science
Eurecom Ref:
6282
Copyright:
© ACM, 2020. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in VLDB 2020, 46th International Conference on Very Large Data Bases, 31 August-4 September 2020, Tokyo, Japan (Virtual Conference) / To be published in PVLDB (Proceedings of the VLDB Endowment), Vol.13, N°12, 2020 https://doi.org/10.14778/3415478.3415520