Artificial Intelligence-Based Model for Detecting Inappropriate Content on the Fly

Date

2023

Publisher

Springer Science and Business Media Deutschland GmbH

Abstract

In recent years, social media has made it convenient for users to express, communicate, discuss, and exchange opinions on various issues. Platforms such as Twitter, YouTube, Facebook, and news portals allow users to express themselves through comments. However, these platforms are often misused in the name of freedom of speech: numerous improper messages directed at specific persons or communities can be found on them, written in abusive, vulgar, hostile, or harsh language. Moreover, bots are now also involved in spreading such messages. As a result, the user experience on social media is sometimes ruined. Automatic identification and filtering of such offensive messages is therefore an important problem for improving the user experience. This paper proposes a heterogeneous ensemble-based machine learning (ML) model, powered by artificial intelligence (AI), that classifies messages into Threat, Obscenity, Insult, Identity Hate, Toxic, and Severe Toxic categories. Experimental evaluation on a standard dataset demonstrates the accuracy and adaptability of the proposed model. © 2023, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
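The abstract describes a heterogeneous ensemble that assigns each message zero or more of six toxicity labels, i.e. a multi-label classification task. The paper's exact architecture is not given here, so the following is only a minimal sketch of the general idea using scikit-learn: a soft-voting ensemble of two different base learners (logistic regression and naive Bayes) over TF-IDF features, applied per label. The toy corpus and all class/parameter choices are illustrative assumptions, not the paper's dataset or configuration.

```python
# Hedged sketch of a heterogeneous ensemble for multi-label toxicity
# classification. Base learners, features, and data are illustrative only.
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import MultiOutputClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# The six categories named in the abstract.
LABELS = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]

# Heterogeneous ensemble: two different model families combined by
# soft voting (averaged class probabilities). MultiOutputClassifier
# fits one such ensemble independently per label.
ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("nb", MultinomialNB()),
    ],
    voting="soft",
)
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=1),
    MultiOutputClassifier(ensemble),
)

# Tiny placeholder corpus (each column of y must contain both classes
# so every per-label classifier can be fitted).
texts = [
    "you are awful and stupid",
    "have a great day friend",
    "i will hurt you",
    "thanks for the helpful answer",
]
y = np.array([
    [1, 1, 1, 0, 1, 1],
    [0, 0, 0, 0, 0, 0],
    [1, 0, 0, 1, 0, 0],
    [0, 0, 0, 0, 0, 0],
])

model.fit(texts, y)
pred = model.predict(["you are terrible"])  # binary vector, one flag per label
```

In practice such a model would be trained on a standard toxic-comment dataset and evaluated per label; the sketch only shows how a heterogeneous ensemble and multi-label wrapping fit together.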

Citation

Smart Innovation, Systems and Technologies, 2023, Vol. 326 SIST, pp. 299-313
