EMNLP 2023, Conference on Empirical Methods in Natural Language Processing, 6-10 December 2023, Singapore, Singapore
Large language models have recently risen in popularity due to their ability to perform many natural language tasks without requiring any fine-tuning. In this work, we focus on two novel ideas: (1) generating definitions from examples and using them for zero-shot classification, and (2) investigating how an LLM makes use of the definitions. We thoroughly analyze the performance of the GPT-3 model for fine-grained multi-label conspiracy theory classification of tweets using zero-shot labeling. In doing so, we assess how to improve the labeling by providing minimal but meaningful context in the form of the definitions of the labels. We compare descriptive noun phrases and human-crafted definitions, introduce a new method to help the model generate definitions from examples, and propose a method to evaluate GPT-3's understanding of the definitions. We demonstrate that improving definitions of class labels has a direct consequence on the downstream classification results.
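The zero-shot setup described in the abstract, where each class label is paired with a definition and the model is asked which labels apply, can be sketched as prompt construction. The label names, definitions, and prompt wording below are illustrative assumptions, not the paper's actual prompts.

```python
# Hypothetical sketch of multi-label zero-shot prompting with label
# definitions. Labels and definitions here are made up for illustration;
# the paper uses fine-grained conspiracy-theory categories.

def build_prompt(tweet: str, definitions: dict[str, str]) -> str:
    """Assemble a zero-shot classification prompt from label definitions."""
    lines = ["Classify the tweet into zero or more of these categories:"]
    for label, definition in definitions.items():
        lines.append(f"- {label}: {definition}")
    lines.append(f'Tweet: "{tweet}"')
    lines.append("Answer with a comma-separated list of category names, or 'none'.")
    return "\n".join(lines)

# Illustrative definitions (assumed, not from the paper):
definitions = {
    "anti-vaccination": "claims that vaccines are harmful or part of a cover-up",
    "new-world-order": "claims that a secret elite controls world events",
}
prompt = build_prompt("They don't want you to know what's in the shots.", definitions)
```

The resulting prompt string would then be sent to the LLM's completion endpoint; comparing noun-phrase, human-crafted, and example-generated definitions amounts to swapping the values in `definitions`.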
Type:
Conference
City:
Singapore
Date:
2023-12-06
Department:
Data Science
Eurecom Ref:
7504
Copyright:
Copyright ACL. Personal use of this material is permitted. The definitive version of this paper was published in EMNLP 2023, Conference on Empirical Methods in Natural Language Processing, 6-10 December 2023, Singapore, Singapore, and is available at: http://dx.doi.org/10.18653/v1/2023.findings-emnlp.267