
Browsing by Author "Reddy, C.T."

    SCaLAR NITK at Touché: Comparative Analysis of Machine Learning Models for Human Value Identification
    (CEUR-WS, 2024) Praveen, K.; Darshan, R.K.; Reddy, C.T.; Anand Kumar, M.
This study addresses the task of detecting human values in textual data using Natural Language Processing (NLP) techniques. With the increasing use of social media and other platforms, an abundance of textual data is generated. Identifying human values in these texts helps us understand and analyze human behavior, because values are the core principles that influence it. Analyzing human values is useful not only for research but also for practical applications such as sentiment evaluation, market analysis, and personalized recommendation systems. The study evaluates the performance of existing models and proposes novel techniques. The models range from classical machine learning classifiers such as SVM, KNN, and Random Forest, trained for classification on embeddings obtained from BERT, through transformer models such as BERT and RoBERTa for text classification, to Large Language Models such as Mistral-7B. The task is a multilabel, multitask classification. The QLoRA quantization method is used to reduce the size of the model weights, making training computationally less expensive, and a Supervised Fine-Tuning (SFT) trainer is used to fine-tune the LLMs for this specific task. The LLMs were found to outperform all other models. © 2024 Copyright for this paper by its authors.
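The classical baseline described in the abstract (SVM-style classifiers on BERT embeddings, producing multilabel output) can be sketched as follows. This is a minimal illustration, not the authors' implementation: random vectors stand in for the BERT sentence embeddings, and the value names are a hypothetical subset rather than the actual Touché label set.

```python
# Sketch of a multilabel value classifier: sentence embeddings fed to a
# linear SVM, one binary head per human value via one-vs-rest.
# ASSUMPTIONS: random vectors replace real BERT [CLS] embeddings, and the
# label names below are illustrative, not the task's official label set.
import numpy as np
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
values = ["Self-direction", "Security", "Benevolence"]  # hypothetical subset

# Stand-in for BERT embeddings of 200 sentences (dimension 768).
X = rng.normal(size=(200, 768))
# Multilabel targets: each sentence may express several values at once.
Y = rng.integers(0, 2, size=(200, len(values)))

# One-vs-rest wraps one binary SVM per label, a standard way to obtain
# multilabel predictions from a single-label classifier.
clf = OneVsRestClassifier(LinearSVC()).fit(X, Y)
pred = clf.predict(X[:5])
print(pred.shape)  # one row per sentence, one column per value
```

The same one-vs-rest wrapper works with KNN or Random Forest in place of `LinearSVC`, which is presumably how the other classical baselines in the study would be structured.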

Maintained by Central Library NITK | DSpace software copyright © 2002-2026 LYRASIS
