CLEF 2025, Conference and Labs of the Evaluation Forum, Information Access Evaluation meets Multilinguality, Multimodality, and Visualization, In CheckThat! Lab, 9-12 September 2025, Madrid, Spain
These working notes present the ClimateSense team's participation in the CheckThat! 2025 Lab, Tasks 1 and 4a, which investigated: 1) the subjectivity of news article sentences, and 2) the detection of scientific content in social media posts. The ClimateSense team leveraged pre-trained Large Language Models (LLMs), conventional Machine Learning (ML) models, sentence encoders, data augmentation, and filtering techniques to address these tasks. In this paper, we detail the approach for each task, present the methodology, and report the performance of each submission. Fine-tuning pre-trained models yields particularly strong results for Task 4a, where we achieved first rank on the final evaluation leaderboard. This result shows that LLMs can benefit from lightweight traditional classification models when performing classification tasks.
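For readers unfamiliar with the general setup, the sketch below illustrates fine-tuning a pre-trained model for binary sentence classification (e.g., subjective vs. objective) with the Hugging Face `transformers` library. It is a minimal, hypothetical example, not the ClimateSense team's actual pipeline; the model name, hyperparameters, and toy data are assumptions.

```python
# Illustrative sketch only: generic fine-tuning for binary sentence classification.
# Model choice, hyperparameters, and data are assumptions, not the authors' setup.
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)
from datasets import Dataset

model_name = "bert-base-multilingual-cased"  # assumed; any sentence encoder could be used
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Toy examples standing in for the real task data.
train = Dataset.from_dict({
    "text": ["The new policy is a disaster.", "The report was published in 2024."],
    "label": [1, 0],  # 1 = subjective, 0 = objective
})

def tokenize(batch):
    # Convert raw sentences into fixed-length token ID sequences.
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

train = train.map(tokenize, batched=True)

args = TrainingArguments(output_dir="out", num_train_epochs=3,
                         per_device_train_batch_size=16, learning_rate=2e-5)
Trainer(model=model, args=args, train_dataset=train).train()
```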
Type:
Conference
City:
Madrid
Date:
2025-09-09
Department:
Data Science
Eurecom Ref:
8292
Copyright:
© Springer. Personal use of this material is permitted. The definitive version of this paper was published in CLEF 2025, Conference and Labs of the Evaluation Forum, Information Access Evaluation meets Multilinguality, Multimodality, and Visualization, In CheckThat! Lab, 9-12 September 2025, Madrid, Spain and is available at: