Faculty Publications
Permanent URI for this community: https://idr.nitk.ac.in/handle/123456789/18736
Publications by NITK Faculty
6 results
Item: Vaccine Hesitancy to Vaccine Hope: Comparison of MR Vaccine and COVID Vaccine Trends in India (Springer, 2022). Jayan, V.; Alathur, S.
Social media played a major role during crises in the era of Web 3.0 technologies. The use of social media for relief and rescue operations is now common, but Web 3.0 technologies have sometimes made situations worse, especially in the healthcare sector. The measles-rubella (MR) vaccine campaign in India suffered a major setback because of social media. The World Health Organization (WHO) has identified misinformation on social media as one of the ten reasons for vaccine hesitancy. During COVID-19, misinformation increased tremendously, but at the same time people were expecting a vaccine. Vaccine hesitancy depends on the perceived severity of the disease and the availability of a cure. The vaccine hesitancy seen on social media during the MR vaccination campaign changed to vaccine hope during COVID-19. © 2022, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

Item: Dynamics of multi-campaign propagation in online social networks (Association for Computing Machinery, 2015). Thejaswi, M.; Vijayaraghavan, S.; Das, A.; Santhi Thilagam, P.
Ever since the advent of online social networking, people have voluntarily posted and consumed information on the web. This new method of digital communication can spread information considerably far in a very short span of time with minimal resources. Social networks are increasingly used to spread misinformation online because of the low cost of organizing grassroots campaigns. Our goal in this paper is to determine the efficiency with which campaigns can succeed in an online social network, where efficiency represents the ease with which a campaign can triumph over competing campaigns in the network.
We model information diffusion using the Multi-Campaign Independent Cascade Model, and by applying node coercion and link cutting as campaign-limiting strategies we ascertain how efficiently a campaign can succeed. The efficiency measure addresses the problem of determining the survivability of campaigns, which is used to ensure the success or failure of a campaign through campaign-limiting strategies.

Item: Health Fear Mongering Make People More Sicker: Twitter Analysis in the Context of Corona Virus Infection (Springer Science and Business Media Deutschland GmbH, 2020). Jayan, J.; Alathur, S.
The purpose of this study is to assess the fear factor in social media data in the context of Coronavirus Disease 2019 (COVID-19) across the globe. The fear generated from social media content adversely affects the mental health of the public. Design/methodology/approach: The study begins with a survey of the literature on social media and Internet technologies since 2006, when the Internet came into common use across the world. Twitter data on COVID-19 were collected during the infection period and analysed. Findings: Social media content adversely affects the mental health of the general public and, to some extent, the healthcare programmes run by government organizations. The findings show that social media is a major source of fear-mongering information and that the people behind the fear-mongering exploit the disaster situation to set their agenda. Strict enforcement of the law and efforts by the social media platforms can reduce fake news and misinformation. Research limitations/implications: The research analyses only Twitter data from the COVID-19 crisis. A detailed study needs to be done on similar crisis situations across the globe. Data retrieval from other social media platforms was limited because of privacy issues.
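A minimal sketch of the kind of lexicon-based fear scoring such a Twitter analysis might apply; the fear lexicon and the example tweet texts are invented here for illustration and are not taken from the study:

```python
# Hypothetical fear scoring: fraction of a tweet's tokens drawn from a
# small, invented fear lexicon.
import re

FEAR_TERMS = {"death", "dying", "panic", "outbreak", "catastrophe", "fatal"}

def fear_score(tweet: str) -> float:
    """Fraction of tokens in a tweet that come from the fear lexicon."""
    tokens = re.findall(r"[a-z']+", tweet.lower())
    if not tokens:
        return 0.0
    hits = sum(1 for t in tokens if t in FEAR_TERMS)
    return hits / len(tokens)

tweets = [
    "Stay calm, wash your hands and follow official advice.",
    "Panic everywhere, the outbreak is fatal, death toll rising!",
]
scores = [fear_score(t) for t in tweets]
```

A real study would of course use a curated lexicon or a trained classifier rather than this toy word list, but the aggregate score per tweet is the same basic idea.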
© 2020, IFIP International Federation for Information Processing.

Item: Misinformation Detection Through Authentication of Content Creators (Springer, 2023). KSudhama, K.; Siddamsetti, S.G.; G, P.; Chandavarkar, B.R.
Recent technological advancements have made content modification and recreation easy and practically undetectable without suitable verification techniques. Users can alter social media data with photo, video, and text editing tools and share the updated content in a different context. As a result, online social media platforms are well suited to distributing fake news and misinformation. Misinformation can take several forms, combining one or more types of multimedia such as text, photos, and videos. Modified content presents fake evidence to the user, leading to various misconceptions. Fake news generally carries eye-catching headlines, called click-baits, that attract readers; the content behind these click-baits often differs from what the headlines suggest. There are also many fake websites whose addresses are slightly modified versions of those of popular news agencies, and users are easily fooled into opening them because the addresses seem legitimate. These issues indicate the importance of distinguishing legitimate content creators from non-legitimate ones. This chapter focuses on authenticating legitimate content creators, verified by a trusted entity, using certificates and blockchain technology, and on checking their content for fakeness using natural language processing and image processing techniques.
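One way to picture the certificate-plus-blockchain idea in that chapter is a hash-chained, append-only ledger in which a trusted entity records a fingerprint of each verified creator's certificate; content is then trusted only if its creator's certificate appears in the ledger. The sketch below is an invented illustration (the certificate bytes are placeholders, and real deployments would use proper X.509 certificates and signatures):

```python
# Toy hash-chained ledger of verified creator certificate fingerprints.
import hashlib

def fingerprint(cert_bytes: bytes) -> str:
    """SHA-256 fingerprint of a certificate's raw bytes."""
    return hashlib.sha256(cert_bytes).hexdigest()

class Ledger:
    """Append-only list of blocks, each chained to the previous block's hash."""
    def __init__(self):
        self.blocks = []  # list of (block_hash, certificate_fingerprint)

    def register(self, cert_bytes: bytes) -> None:
        prev = self.blocks[-1][0] if self.blocks else "0" * 64
        fp = fingerprint(cert_bytes)
        block_hash = hashlib.sha256((prev + fp).encode()).hexdigest()
        self.blocks.append((block_hash, fp))

    def is_verified(self, cert_bytes: bytes) -> bool:
        fp = fingerprint(cert_bytes)
        return any(stored_fp == fp for _, stored_fp in self.blocks)

ledger = Ledger()
ledger.register(b"cert-of-known-news-agency")
```

Chaining each block to its predecessor means any tampering with an earlier entry invalidates every later block hash, which is the property the blockchain component contributes.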
© 2023, The Author(s), under exclusive license to Springer Nature Switzerland AG.

Item: LeDoFAN: enhancing lengthy document fake news identification leveraging large language models and explainable window-based transformers with n-gram expulsion (Springer Science and Business Media Deutschland GmbH, 2025). LekshmiAmmal, H.R.; Anand Kumar, M.
Nowadays, people use social media to follow everything around them and consider it their primary source of information, relying heavily on information disseminated through social media and news channels. The alarming concern is that as the amount of information increases, so does the amount of fake news and misinformation spread through social media. Fake news items are generally only a few lines long, but documents and articles carry far more information, and a model must be trained appropriately to handle their size. In this work, we developed a transformer-based model that identifies and classifies fake news in articles collected from social media websites and news pages. We introduce a novel window method for handling lengthy documents and an n-gram expulsion method for managing similar words when classifying an article as fake or real news. We achieved a state-of-the-art F1-score of 0.3492 on test data with the window-based n-gram expulsion method, an improvement of 2.1% on long documents alone. We also explored large language models (LLMs): TinyLlama could only achieve an F1-score of 0.2098, while using Llama to summarize the document achieved an F1-score of 0.3402 with n-gram expulsion. We further examined the results using Explainable Artificial Intelligence (XAI) to understand the reasoning behind the proposed model's predictions.
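The two preprocessing ideas named in that abstract, splitting a long document into overlapping token windows and removing repeated n-grams, can be sketched as below. The window size, stride, and n are arbitrary choices for illustration, not the paper's actual hyperparameters:

```python
# Illustrative window chunking and repeated-n-gram removal for long documents.
def windows(tokens, size=8, stride=4):
    """Overlapping token windows; the last window is shifted to cover the tail."""
    if len(tokens) <= size:
        return [tokens]
    starts = list(range(0, len(tokens) - size, stride))
    starts.append(len(tokens) - size)  # final window always reaches the end
    return [tokens[s:s + size] for s in starts]

def expel_repeated_ngrams(tokens, n=2):
    """Keep the first occurrence of each n-gram; skip later repeats entirely."""
    seen, kept, i = set(), [], 0
    while i < len(tokens):
        gram = tuple(tokens[i:i + n])
        if len(gram) == n and gram in seen:
            i += n  # drop the whole repeated n-gram
        else:
            if len(gram) == n:
                seen.add(gram)
            kept.append(tokens[i])
            i += 1
    return kept

doc = "breaking news breaking news shocking claim goes viral".split()
deduped = expel_repeated_ngrams(doc, n=2)
chunks = windows(deduped, size=4, stride=2)
```

Each window would then be classified separately by the transformer, with the per-window outputs aggregated into a document-level fake/real decision.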
© The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature 2025.

Item: A reasoning based explainable multimodal fake news detection for low resource language using large language models and transformers (Springer Nature, 2025). LekshmiAmmal, H.R.; Anand Kumar, M.
Nowadays, individuals rely predominantly on online social media platforms, news feeds, websites, and news aggregator applications for recent news stories, a trend that has increased the number of such platforms and applications. These news platforms have been accused of spreading fake news to gain attention and recognition. Misinformation and fake news used to be propagated only in text form, but with the advent of technology they now spread in multimodal forms such as images with text, videos, and audio with textual content. Current automatic fake news detection models focus on high-resource languages and produce only superficial output. Social media users need context, reasoning, and explanation when identifying fake news, rather than a superficial classification of news as fake; these help users understand why certain news is misleading or false. Hence, a multimodal system has to be developed that both identifies and justifies fake news. In this work, we developed a multimodal fake news detection system with reasoning-based explainability for the low-resource language Tamil. The dataset was retrieved from fact-check websites and official news websites. We experimented with different combinations of models for the visual and text modalities, and integrating LLM-based image descriptions with the text and visual features resulted in an F1-score of 0.8736. We used a Siamese model to determine the similarity between the news and its image descriptions.
Additionally, we conducted error analysis and used explainable artificial intelligence to explore the reasoning behind our model's predictions, presenting textual reasoning for the predictions and matching it against the images. © The Author(s) 2025.
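The Siamese similarity step in that abstract compares a news text against an LLM-generated description of its image. As a rough stand-in for the learned twin-network embeddings, the sketch below uses a plain bag-of-words cosine similarity; both example strings are invented:

```python
# Toy text-vs-image-description similarity: bag-of-words cosine similarity
# standing in for a trained Siamese network's embedding distance.
import math
from collections import Counter

def cosine_similarity(a: str, b: str) -> float:
    """Cosine similarity of two strings' word-count vectors, in [0, 1]."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

news_text = "flood waters submerge the main bridge in the city"
image_description = "a flooded bridge in a city submerged by water"
score = cosine_similarity(news_text, image_description)
```

A low score flags a likely mismatch between the claim and its attached image, which is one common signal of multimodal fake news; the real system would compute this distance in a learned embedding space rather than over raw word counts.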
