Artificial Intelligence-Based Model for Detecting Inappropriate Content on the Fly
| dc.contributor.author | Ranjan, A. | |
| dc.contributor.author | Pintu | |
| dc.contributor.author | Kumar, V. | |
| dc.contributor.author | Singh, M.P. | |
| dc.date.accessioned | 2026-02-06T06:34:57Z | |
| dc.date.issued | 2023 | |
| dc.description.abstract | In recent years, social media has made it convenient for users to express, communicate, discuss, and exchange opinions on various issues. For example, Twitter, YouTube, Facebook, and news portals allow users to express themselves through comments. However, such platforms are misused in the name of freedom of speech: they carry numerous improper messages directed at specific persons or communities that use abusive, vulgar, hostile, or harsh words. Moreover, bots are now also involved in exchanging such messages. As a result, user experiences on social media are sometimes ruined. Automatic identification and filtering of such offensive messages is therefore a significant problem in improving user experience. This paper proposes a heterogeneous ensemble-based machine learning (ML) model, powered by artificial intelligence (AI), that classifies messages into the Threat, Obscenity, Insult, Identity Hate, Toxic, and Severe Toxic categories. Experimental evaluation on a standard dataset demonstrates the accuracy and adaptability of the proposed model. © 2023, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. | |
| dc.identifier.citation | Smart Innovation, Systems and Technologies, 2023, Vol. 326 SIST, p. 299-313 | |
| dc.identifier.issn | 2190-3018 | |
| dc.identifier.uri | https://doi.org/10.1007/978-981-19-7513-4_27 | |
| dc.identifier.uri | https://idr.nitk.ac.in/handle/123456789/29551 | |
| dc.publisher | Springer Science and Business Media Deutschland GmbH | |
| dc.title | Artificial Intelligence-Based Model for Detecting Inappropriate Content on the Fly |
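The abstract describes a heterogeneous ensemble that assigns comments to six possibly overlapping toxicity labels. A minimal sketch of that idea, assuming scikit-learn and an illustrative toy corpus (the label names come from the abstract; the specific classifiers, features, and data below are assumptions, not the authors' actual pipeline):

```python
# Hedged sketch: a heterogeneous soft-voting ensemble for multi-label
# toxic-comment classification. Two different learner families vote,
# and one-vs-rest gives each of the six labels its own binary decision.
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# The six categories named in the abstract.
LABELS = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]

# Illustrative toy corpus; the paper evaluates on a standard labelled dataset.
texts = [
    "you are a wonderful person",
    "have a great day everyone",
    "I will hurt you badly",          # toxic, threat
    "you are a stupid idiot",         # toxic, insult
    "that is obscene filth",          # toxic, obscene
    "I hate your kind of people",     # toxic, identity hate
    "you disgusting vile scum",       # toxic, severe toxic, insult
    "thanks for the helpful answer",
]
# Binary indicator matrix: one column per label, aligned with LABELS.
y = np.array([
    [0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0],
    [1, 0, 0, 1, 0, 0],
    [1, 0, 0, 0, 1, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 0, 0, 0, 0, 1],
    [1, 1, 0, 0, 1, 0],
    [0, 0, 0, 0, 0, 0],
])

# Heterogeneous ensemble: logistic regression + naive Bayes, combined by
# soft voting (averaged class probabilities), one binary voter per label.
ensemble = OneVsRestClassifier(
    VotingClassifier(
        estimators=[
            ("lr", LogisticRegression(max_iter=1000)),
            ("nb", MultinomialNB()),
        ],
        voting="soft",
    )
)
model = make_pipeline(TfidfVectorizer(), ensemble)
model.fit(texts, y)

# Hypothetical input: one probability per label for each message.
proba = model.predict_proba(["you stupid idiot"])
```

Soft voting requires every base learner to expose `predict_proba`; swapping in other probabilistic classifiers (or replacing TF-IDF with learned embeddings) keeps the same structure.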
