Browsing by Author "Abhin, B."
Now showing 1 - 2 of 2
Item
Does Degree Capture It All? A Case Study of Centrality and Clustering in Signed Networks (Association for Computing Machinery, Inc, 2025)
Murali, S.S.; Abhin, B.; Shetty, R.D.; Bhattacharjee, S.
Signed graph networks are used to model systems that contain both positive and negative components. Incorporating signed information into Graph Neural Networks (GNNs) allows for the analysis of complex interactions between nodes, facilitating tasks such as sentiment analysis and trust prediction in social networks. Our main goal in this study is to improve feature selection in a benchmark GNN, Signed Graph Attention (SiGAT), by including centrality and clustering measures other than degree. Our studies reveal that using both degree and centrality features slightly improves signed link prediction performance. Further, our ablation studies revealed that 6 degree features and 16 attention heads optimally encode information and reduce noise. © 2024 Copyright held by the owner/author(s).

Item
SCaLAR at SemEval-2024 Task 8: Unmasking the machine: Exploring the power of RoBERTa Ensemble for Detecting Machine Generated Text (Association for Computational Linguistics (ACL), 2024)
Anand Kumar, M.; Abhin, B.; Murali, S.S.
SemEval Subtask B is a shared task concerned with detecting text generated by one of five different models: davinci, bloomz, chatGPT, cohere, and dolly. This is an important task given the current boom of generative models, whose abilities to draft emails and formal documents, write and pass exams, and much more keep evolving with every passing day. Classifying text by the pre-trained model that generated it helps analyze how each model's training data has affected its ability to perform a given task. In the proposed approach, data augmentation was applied to handle lengthier sentences, with the augmented segments labelled with the same parent label. Three RoBERTa models were then trained on different segments of the augmented data and ensembled using a voting classifier weighted by their R2 scores, achieving a higher accuracy than the individual models themselves. The proposed model achieved an overall validation accuracy of 97.05% and a testing accuracy of 76.25%. © 2024 Association for Computational Linguistics.
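The score-weighted voting ensemble described in the second abstract can be sketched as follows. This is a minimal illustration of the general idea, not the authors' code: the function name, the example labels, and the per-model weights are all hypothetical stand-ins for the three trained RoBERTa models and their validation scores.

```python
from collections import defaultdict

def weighted_vote(predictions, weights):
    """Combine one label prediction per model using score-weighted voting.

    predictions: list of predicted labels, one per model
    weights: list of per-model scores used as vote weights
    """
    tally = defaultdict(float)
    for label, weight in zip(predictions, weights):
        tally[label] += weight
    # The label with the largest accumulated weight wins.
    return max(tally, key=tally.get)

# Hypothetical predictions from three models for a single text sample,
# with made-up validation scores serving as vote weights.
model_preds = ["chatGPT", "chatGPT", "davinci"]
model_weights = [0.8, 0.7, 0.9]

print(weighted_vote(model_preds, model_weights))  # prints "chatGPT" (0.8 + 0.7 > 0.9)
```

Two weaker models that agree can thus outvote a single stronger one, which is the usual motivation for ensembling several independently trained classifiers.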
