Title: scaLAR SemEval-2024 Task 1: Semantic Textual Relatedness for English
Authors: Kumar, M.H.; Anand Kumar, M.
Date: 2026-02-06
Issue Year: 2024
Citation: SemEval 2024 - 18th International Workshop on Semantic Evaluation, Proceedings of the Workshop, 2024, p. 902-906
DOI: https://doi.org/10.18653/v1/2024.semeval-1.129
URI: https://idr.nitk.ac.in/handle/123456789/28796

Abstract: This study investigates Semantic Textual Relatedness (STR) within Natural Language Processing (NLP) through experiments conducted on a dataset from the SemEval-2024 STR task. The dataset comprises train instances with three features (PairID, Text, and Score) and test instances with two features (PairID and Text), where sentence pairs are separated by '\n' in the Text column. Using BERT (Sentence Transformers pipeline), we explore two approaches: one with fine-tuning (Track A: Supervised) and another without fine-tuning (Track B: Unsupervised). Fine-tuning the BERT pipeline yielded a Spearman correlation coefficient of 0.803, while without fine-tuning, a coefficient of 0.693 was attained using cosine similarity. The study concludes by emphasizing the significance of STR in NLP tasks, highlighting the role of pre-trained language models like BERT and Sentence Transformers in enhancing semantic relatedness assessments. © 2024 Association for Computational Linguistics.
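The unsupervised track described in the abstract scores each sentence pair by the cosine similarity of the two sentence embeddings and is evaluated with Spearman rank correlation against the gold Score column. A minimal sketch of that scoring-and-evaluation loop is below; it is illustrative only — the toy 2-dimensional embeddings stand in for vectors a pretrained Sentence Transformers model would produce, and the gold scores are invented, not taken from the SemEval-2024 data.

```python
import numpy as np

def cosine_similarity(u, v):
    # Cosine of the angle between two embedding vectors: dot product
    # divided by the product of their Euclidean norms.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def spearman(a, b):
    # Spearman correlation = Pearson correlation of the rank-transformed
    # values. This toy version assumes no ties (a full implementation
    # would assign average ranks to tied values).
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    ra -= ra.mean()
    rb -= rb.mean()
    return float(np.dot(ra, rb) / (np.linalg.norm(ra) * np.linalg.norm(rb)))

# Hypothetical stand-ins for sentence-pair embeddings; in the paper's
# pipeline these would come from a pretrained Sentence Transformers model.
pairs = [
    (np.array([1.0, 0.0]), np.array([1.0, 0.0])),  # near-identical pair
    (np.array([1.0, 0.0]), np.array([0.0, 1.0])),  # unrelated pair
    (np.array([1.0, 1.0]), np.array([1.0, 0.0])),  # partially related pair
]
gold_scores = [0.9, 0.1, 0.6]  # invented gold relatedness scores

predicted = [cosine_similarity(u, v) for u, v in pairs]
rho = spearman(predicted, gold_scores)
```

Because Spearman correlation depends only on the ranking of the predicted similarities, the unsupervised system needs no calibration of the raw cosine values to the gold score scale.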