scaLAR SemEval-2024 Task 1: Semantic Textual Relatedness for English
Date
2024
Publisher
Association for Computational Linguistics (ACL)
Abstract
This study investigates Semantic Textual Relatedness (STR) within Natural Language Processing (NLP) through experiments conducted on a dataset from the SemEval-2024 STR task. The dataset comprises train instances with three features (PairID, Text, and Score) and test instances with two features (PairID and Text), where the two sentences of each pair are separated by '\n' in the Text column. Using BERT (via the Sentence Transformers pipeline), we explore two approaches: one with fine-tuning (Track A: Supervised) and another without fine-tuning (Track B: Unsupervised). Fine-tuning the BERT pipeline yielded a Spearman correlation coefficient of 0.803, while without fine-tuning a coefficient of 0.693 was attained using cosine similarity. The study concludes by emphasizing the significance of STR in NLP tasks, highlighting the role of pre-trained language models like BERT and Sentence Transformers in enhancing semantic relatedness assessments. © 2024 Association for Computational Linguistics.
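The unsupervised Track B pipeline described above (encode each sentence of a pair, score the pair by cosine similarity, evaluate the ranking against gold scores with Spearman correlation) can be sketched as follows. This is a minimal illustration, not the authors' code: the embedding vectors and gold scores below are hypothetical placeholders standing in for SentenceTransformer encoder output and human relatedness annotations.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def spearman(x, y):
    """Spearman rank correlation (simple formula, assumes no ties)."""
    def ranks(v):
        order = sorted(range(len(v)), key=v.__getitem__)
        r = [0] * len(v)
        for rk, i in enumerate(order):
            r[i] = rk + 1
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical sentence embeddings; in the paper, each Text entry splits
# into a sentence pair on '\n' before encoding.
pairs = [
    ([0.9, 0.1, 0.0], [0.8, 0.2, 0.1]),  # near-paraphrase pair
    ([0.1, 0.9, 0.0], [0.0, 0.2, 0.9]),  # unrelated pair
    ([0.5, 0.5, 0.0], [0.5, 0.4, 0.1]),  # related pair
]
predicted = [cosine_similarity(a, b) for a, b in pairs]
gold = [0.95, 0.10, 0.80]  # hypothetical human relatedness scores
rho = spearman(predicted, gold)
```

In practice the placeholder vectors would come from something like `SentenceTransformer.encode`, and the official evaluation uses `scipy.stats.spearmanr`; the toy version here only shows why a ranking metric is appropriate for relatedness scores.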
Citation
SemEval 2024 - 18th International Workshop on Semantic Evaluation, Proceedings of the Workshop, 2024, p. 902-906.
