scaLAR SemEval-2024 Task 1: Semantic Textual Relatedness for English

dc.contributor.authorKumar, M.H.
dc.contributor.authorAnand Kumar, M.
dc.date.accessioned2026-02-06T06:33:40Z
dc.date.issued2024
dc.description.abstractThis study investigates Semantic Textual Relatedness (STR) within Natural Language Processing (NLP) through experiments conducted on a dataset from the SemEval-2024 STR task. The dataset comprises train instances with three features (PairID, Text, and Score) and test instances with two features (PairID and Text), where the sentence pairs in the Text column are separated by '\n'. Using BERT (Sentence-Transformers pipeline), we explore two approaches: one with fine-tuning (Track A: Supervised) and another without fine-tuning (Track B: Unsupervised). Fine-tuning the BERT pipeline yielded a Spearman correlation coefficient of 0.803, while without fine-tuning a coefficient of 0.693 was attained using cosine similarity. The study concludes by emphasizing the significance of STR in NLP tasks, highlighting the role of pre-trained language models such as BERT and Sentence Transformers in enhancing semantic relatedness assessments. © 2024 Association for Computational Linguistics.
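The unsupervised approach described in the abstract (Track B) scores each sentence pair by the cosine similarity of its embeddings and is evaluated against gold scores with Spearman correlation. A minimal sketch of that scoring and evaluation step, using toy embedding vectors and illustrative gold scores in place of real Sentence-Transformers output and SemEval annotations:

```python
import numpy as np
from scipy.stats import spearmanr


def cosine_similarity(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))


# Toy sentence-pair embeddings standing in for Sentence-Transformers output;
# the paper's actual model checkpoint is not specified in this record.
emb_pairs = [
    (np.array([1.0, 0.0, 1.0]), np.array([1.0, 0.1, 0.9])),  # near-paraphrase
    (np.array([1.0, 1.0, 0.0]), np.array([0.0, 1.0, 1.0])),  # partly related
    (np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])),  # unrelated
]

# Predicted relatedness = cosine similarity of each pair's embeddings.
pred = [cosine_similarity(u, v) for u, v in emb_pairs]

# Illustrative human relatedness scores (not the SemEval gold data).
gold = [0.95, 0.50, 0.05]

# The task metric: Spearman rank correlation between predictions and gold.
rho, _ = spearmanr(pred, gold)
print(round(rho, 3))
```

Because Spearman correlation depends only on rank order, a cosine-similarity system scores perfectly whenever its similarity ranking matches the human ranking, even if the raw values differ in scale.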
dc.identifier.citationSemEval 2024 - 18th International Workshop on Semantic Evaluation, Proceedings of the Workshop, 2024, p. 902-906
dc.identifier.urihttps://doi.org/10.18653/v1/2024.semeval-1.129
dc.identifier.urihttps://idr.nitk.ac.in/handle/123456789/28796
dc.publisherAssociation for Computational Linguistics (ACL)
dc.titlescaLAR SemEval-2024 Task 1: Semantic Textual Relatedness for English