ISWC 2024, 23rd International Semantic Web Conference, 11-15 November 2024, Baltimore, USA
This study presents a benchmark proposal designed to enhance knowledge engineering tasks through the use of large language models (LLMs). As LLMs become increasingly pivotal in knowledge extraction and modeling, it is crucial to evaluate and improve their performance. Building on prior work on reverse-generating competency questions (CQs) from existing ontologies, we introduce a benchmark focused on specific knowledge modeling tasks, including ontology documentation, ontology generation, and query generation. In addition, we propose a baseline evaluation framework that applies various techniques, such as semantic comparison, ontology evaluation criteria, and structural comparison, using both existing ground-truth datasets and newly proposed ontologies with corresponding CQs and documentation. This rigorous evaluation aims to provide a deeper understanding of LLM capabilities and to contribute to their optimization in knowledge engineering applications.
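The abstract names semantic comparison as one evaluation technique but does not specify its implementation. As a purely illustrative sketch, such a step could score an LLM-generated competency question against a ground-truth CQ using sentence-embedding cosine similarity; the embedding model and acceptance threshold below are assumptions, not details from the paper.

    # Hypothetical "semantic comparison" step: compare an LLM-generated CQ
    # against a reference CQ via sentence-embedding cosine similarity.
    # Model name and threshold are assumptions, not taken from the paper.
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

    reference_cq = "Which pizzas have a topping that is spicy?"
    generated_cq = "What pizzas include at least one spicy topping?"

    # Encode both questions and compute their cosine similarity.
    embeddings = model.encode([reference_cq, generated_cq])
    similarity = util.cos_sim(embeddings[0], embeddings[1]).item()

    THRESHOLD = 0.8  # hypothetical cut-off for counting a semantic match
    print(f"similarity = {similarity:.3f}, match = {similarity >= THRESHOLD}")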
Type: Conference
City: Baltimore
Date: 2024-11-11
Department: Data Science
Eurecom Ref: 7944
Copyright: © Elsevier. Personal use of this material is permitted. The definitive version of this paper was published in ISWC 2024, 23rd International Semantic Web Conference, 11-15 November 2024, Baltimore, USA and is available at: