RULEBERT: Teaching soft rules to pre-trained language models

Saeed, Mohammed; Ahmadi, Naser; Nakov, Preslav; Papotti, Paolo
EMNLP 2021, Conference on Empirical Methods in Natural Language Processing, 7-11 November 2021, Punta Cana, Dominican Republic

While pre-trained language models (PLMs) are the go-to solution to tackle many natural language processing problems, they are still very limited in their ability to capture and to use common-sense knowledge. In fact, even if information is available in the form of approximate (soft) logical rules, it is not clear how to transfer it to a PLM in order to improve its performance for deductive reasoning tasks. Here, we aim to bridge this gap by teaching PLMs how to reason with soft Horn rules. We introduce a classification task where, given facts and soft rules, the PLM should return a prediction with a probability for a given hypothesis. We release the first dataset for this task, and we propose a revised loss function that enables the PLM to learn how to predict precise probabilities for the task. Our evaluation results show that the resulting fine-tuned models achieve very high performance, even on logical rules that were unseen at training. Moreover, we demonstrate that logical notions expressed by the rules are transferred to the fine-tuned model, yielding state-of-the-art results on external datasets.
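As a rough illustration of the task described above (not the paper's exact formulation), a classifier that must output a precise probability for a hypothesis can be trained with a binary cross-entropy loss against a soft, rule-derived target probability. The sketch below assumes such a soft-target loss; the function name soft_label_bce and the fact, rule, and hypothesis strings are hypothetical and only show how an instance might be verbalized for a PLM-style classifier.

import math

def soft_label_bce(p_pred: float, p_target: float, eps: float = 1e-9) -> float:
    """Binary cross-entropy against a soft (probabilistic) target.

    p_pred   -- probability the model assigns to the hypothesis being true
    p_target -- probability implied by the facts and soft rules (the label)
    """
    p_pred = min(max(p_pred, eps), 1.0 - eps)  # avoid log(0)
    return -(p_target * math.log(p_pred) + (1.0 - p_target) * math.log(1.0 - p_pred))

# Hypothetical task instance, verbalized as plain text:
facts = ["Anne is the spouse of Bob.", "Bob lives in Paris."]
rule = "If A is the spouse of B and B lives in C, then A lives in C (confidence 0.8)."
hypothesis = "Anne lives in Paris."

# If the rule implies the hypothesis holds with probability 0.8 and the
# model currently predicts 0.6, the loss to minimize is:
print(soft_label_bce(p_pred=0.6, p_target=0.8))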

DOI:
10.18653/v1/2021.emnlp-main.110
Type:
Conference
City:
Punta Cana
Date:
2021-11-07
Department:
Data Science
Eurecom Ref:
6678
Copyright:
Copyright ACL. Personal use of this material is permitted. The definitive version of this paper was published in EMNLP 2021, Conference on Empirical Methods in Natural Language Processing, 7-11 November 2021, Punta Cana, Dominican Republic, and is available at: http://dx.doi.org/10.18653/v1/2021.emnlp-main.110

PERMALINK : https://www.eurecom.fr/publication/6678