Browsing by Author "LekshmiAmmal, H.R."
Now showing 1 - 5 of 5
Item A reasoning based explainable multimodal fake news detection for low resource language using large language models and transformers (Springer Nature, 2025)
LekshmiAmmal, H.R.; Anand Kumar, M.

Nowadays, individuals rely predominantly on online social media platforms, news feeds, websites, and news aggregator applications to acquire recent news stories. This trend has led to a growing number of social media platforms, online news feeds, and news aggregator applications, some of which have been accused of spreading fake news to gain attention and recognition. Earlier, such misinformation was propagated only in text form; with the advent of technology, it now spreads in multimodal forms such as images with text, videos, and audio with textual content. Current automatic fake news detection models focus on high-resource languages and produce only superficial output. Social media users need context, reasoning, and explanations to understand why certain news is misleading or false, rather than a bare classification of news as fake. Hence, a multimodal system is needed that both identifies fake news and justifies its decisions. In this work, we developed a multimodal fake news system with reasoning-based explainability for the low-resource language Tamil. The dataset was retrieved from fact-check websites and official news websites. We experimented with different combinations of models for the visual and text modalities. Further, we integrated LLM-based image descriptions with the text and visual features, resulting in an F1 score of 0.8736. We used a Siamese model to determine the similarity between the news and its image descriptions.
Additionally, we conducted error analysis and used explainable artificial intelligence to explore the reasoning behind our model's predictions. We also present the textual reasoning for the model's predictions and match it with the images. © The Author(s) 2025.

Item LeDoFAN: enhancing lengthy document fake news identification leveraging large language models and explainable window-based transformers with n-gram expulsion (Springer Science and Business Media Deutschland GmbH, 2025)
LekshmiAmmal, H.R.; Anand Kumar, M.

Nowadays, people treat social media as their primary source of information and rely heavily on news disseminated through it. Alarmingly, as the volume of information grows, so does the amount of fake news and misinformation spread through social media. A typical fake news item is only a few lines long; full documents and articles, however, carry far more information, and models must be trained appropriately to handle them. In this work, we developed a transformer-based model that identifies and classifies fake news in articles collected from social media websites and news pages. We introduce a novel window method for handling lengthy documents and an N-gram expulsion method for managing similar words when classifying an article as fake or real news. We achieved a state-of-the-art F1-score of 0.3492 on the test data with the window-based N-gram expulsion method and gained a 2.1% F1-score improvement on long documents alone. We also explored large language models (LLMs): TinyLlama alone achieved an F1-score of only 0.2098, while using Llama to summarize the document achieved an F1-score of 0.3402 with N-gram expulsion.
We further explored the results using Explainable Artificial Intelligence (XAI) to understand the reasoning behind the proposed model's predictions. © The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature 2025.

Item NITK-IT NLP at CheckThat! 2022: Window based approach for Fake News Detection using transformers (CEUR-WS, 2022)
LekshmiAmmal, H.R.; Anand Kumar, A.M.

Misinformation is a severe threat to society, spreading mainly through online social media, where the amount of misinformation generated and propagated far exceeds that of authentic news. In this paper, we propose a model for the shared task on Fake News Classification by the CLEF2022 CheckThat! Lab, which comprised a monolingual multi-class fake news detection task in English and a cross-lingual task for English and German. We employed a transformer-based model with overlapping window strides, which helped us achieve 7th place out of 25 participants and 2nd out of 8 on the final leaderboards of the two tasks, respectively. We obtained F1 scores of 0.2980 and 0.2245 against top scores of 0.3391 and 0.2898. © 2022 Copyright for this paper by its authors.

Item NITK-IT_NLP@TamilNLP-ACL2022: Transformer based model for Offensive Span Identification in Tamil (Association for Computational Linguistics (ACL), 2022)
LekshmiAmmal, H.R.; Ravikiran, M.; Anand Kumar, M.

Offensive span identification in Tamil is a shared task focused on identifying the harmful spans of text that contribute to a message's offensiveness. In this work, we built a model that efficiently identifies the span of text contributing to offensive content. We experimented with various transformer-based models, of which the fine-tuned MuRIL model achieved the best overall character F1-score of 0.4489.
© 2022 Association for Computational Linguistics.

Item Overview of Shared Task on Multitask Meme Classification - Unraveling Misogynistic and Trolls in Online Memes (Association for Computational Linguistics (ACL), 2024)
Chakravarthi, B.; Rajiakodi, S.; Ponnusamy, R.; Pannerselvam, K.; Anand Kumar, M.A.; Rajalakshmi, R.; LekshmiAmmal, H.R.; Kizhakkeparambil, A.; Kumar, S.S.; Sivagnanam, B.; Rajkumar, C.

This paper offers a detailed overview of the first shared task on "Multitask Meme Classification - Unraveling Misogynistic and Trolls in Online Memes," organized as part of the LT-EDI@EACL 2024 conference. The task was to classify misogynistic content and troll memes on online platforms, focusing specifically on memes in the Tamil and Malayalam languages. A total of 52 teams registered for the competition, with four submitting systems for the Tamil meme classification task and three for the Malayalam task. The outcomes of this shared task are significant, providing insights into the current state of misogynistic content in digital memes and highlighting the effectiveness of various computational approaches in identifying such detrimental content. The top-performing model achieved a macro F1 score of 0.73 in Tamil and 0.87 in Malayalam. © 2024 Association for Computational Linguistics.
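Several of the items above (LeDoFAN and the CheckThat! 2022 system) rely on splitting long documents into overlapping windows before feeding them to a transformer, and LeDoFAN additionally expels repeated n-grams. The sketch below illustrates both ideas in general terms; the function names, default window and stride sizes, and the token-level treatment are illustrative assumptions, not the authors' implementation.

```python
def window_chunks(tokens, window=512, stride=256):
    """Split a long token sequence into overlapping windows.

    Choosing stride < window makes adjacent windows overlap, so context
    at a chunk boundary is never seen only at a window edge.
    """
    if stride <= 0 or stride > window:
        raise ValueError("stride must be in (0, window]")
    chunks, start = [], 0
    while start < len(tokens):
        chunks.append(tokens[start:start + window])
        if start + window >= len(tokens):
            break  # last window already covers the tail of the document
        start += stride
    return chunks


def expel_repeated_ngrams(tokens, n=3):
    """Drop n-grams that have already appeared earlier in the sequence,
    thinning near-duplicate content before classification (an
    illustrative take on 'n-gram expulsion')."""
    seen, out, i = set(), [], 0
    while i < len(tokens):
        gram = tuple(tokens[i:i + n])
        if len(gram) == n and gram in seen:
            i += n  # skip the repeated n-gram entirely
        else:
            if len(gram) == n:
                seen.add(gram)
            out.append(tokens[i])
            i += 1
    return out
```

In practice the windows would hold token IDs capped at the transformer's maximum sequence length (commonly 512), with each window classified separately and the per-window predictions aggregated into a document-level label.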
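The first item above uses a Siamese model to score the similarity between a news text and the LLM-generated description of its image. The defining property of a Siamese setup is that both inputs pass through the same encoder before a similarity head compares the resulting representations. In the sketch below, a toy bag-of-words counter stands in for the paper's neural text encoder; all names and the cosine head are illustrative assumptions, not the authors' architecture.

```python
import math
from collections import Counter

def encode(text):
    """Toy shared encoder: bag-of-words counts. A real Siamese model
    would use one neural encoder with shared weights for both inputs."""
    return Counter(text.lower().split())

def cosine_similarity(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def siamese_score(news_text, image_description):
    # Both branches go through the same encoder, the defining property
    # of a Siamese setup, and a similarity head compares the outputs.
    return cosine_similarity(encode(news_text), encode(image_description))
```

A high score suggests the image description is consistent with the news text; a low score flags a possible mismatch between the claim and its accompanying image.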
