Artificial Intelligence-Based Model for Detecting Inappropriate Content on the Fly

dc.contributor.author: Ranjan, A.
dc.contributor.author: Pintu
dc.contributor.author: Kumar, V.
dc.contributor.author: Singh, M.P.
dc.date.accessioned: 2026-02-06T06:34:57Z
dc.date.issued: 2023
dc.description.abstract: In recent years, social media has made it convenient for users to express, communicate, discuss, and exchange opinions on a wide range of issues. Platforms such as Twitter, YouTube, Facebook, and news portals allow users to express themselves through comments. However, these platforms are often misused in the name of freedom of speech: numerous improper messages directed at specific persons or communities use abusive, vulgar, hostile, or harsh language, and bots increasingly participate in spreading such messages. As a result, user experience on social media can be severely degraded, which makes the automatic identification and filtering of offensive messages an important problem. This paper proposes a heterogeneous ensemble-based machine learning (ML) model, powered by artificial intelligence (AI), that classifies messages into the categories Threat, Obscenity, Insult, Identity Hate, Toxic, and Severe Toxic. Experimental evaluation on a standard dataset demonstrates the accuracy and adaptability of the proposed model. © 2023, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
dc.identifier.citation: Smart Innovation, Systems and Technologies, 2023, Vol. 326 SIST, p. 299-313
dc.identifier.issn: 2190-3018
dc.identifier.uri: https://doi.org/10.1007/978-981-19-7513-4_27
dc.identifier.uri: https://idr.nitk.ac.in/handle/123456789/29551
dc.publisher: Springer Science and Business Media Deutschland GmbH
dc.title: Artificial Intelligence-Based Model for Detecting Inappropriate Content on the Fly

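The six labels named in the abstract match the categories of the public Jigsaw Toxic Comment dataset. The record does not say how the paper's heterogeneous ensemble is composed, so the following is only a minimal sketch of such a model, not the authors' method: a soft-voting ensemble of three different base learners (logistic regression, naive Bayes, decision tree) over TF-IDF features, wrapped in a one-vs-rest layer for the multi-label setting. The file name, column names, and all model choices here are assumptions for illustration.

```python
# Hypothetical sketch of a heterogeneous ensemble for the six toxicity
# labels named in the abstract. The train.csv layout and column names
# follow the public Jigsaw Toxic Comment dataset and are assumptions,
# not details taken from the paper itself.
import pandas as pd
from sklearn.ensemble import VotingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

LABELS = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]

df = pd.read_csv("train.csv")  # assumed layout: comment_text + six 0/1 label columns
X, y = df["comment_text"], df[LABELS]

# Three different base learners make the ensemble "heterogeneous";
# soft voting averages their predicted probabilities.
ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("nb", MultinomialNB()),
        ("dt", DecisionTreeClassifier(max_depth=20)),
    ],
    voting="soft",
)

# One binary voting ensemble per label handles the multi-label setting.
model = make_pipeline(
    TfidfVectorizer(max_features=50_000, stop_words="english"),
    OneVsRestClassifier(ensemble),
)
model.fit(X, y)
print(model.predict(["you are an idiot"]))  # -> one 0/1 value per label
```

In this sketch each label gets its own binary voting ensemble, so a single comment can receive several labels at once, mirroring the overlapping categories listed in the abstract.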