Explainable hate speech detection using LIME

dc.contributor.author: Imbwaga, J.L.
dc.contributor.author: Chittaragi, N.B.
dc.contributor.author: Koolagudi, S.G.
dc.date.accessioned: 2026-02-04T12:24:20Z
dc.date.issued: 2024
dc.description.abstract: Free speech is essential, but it can conflict with protecting marginalized groups from the harm caused by hate speech. Social media platforms have become breeding grounds for this harmful content. Although studies exist to detect hate speech, significant research gaps remain. First, most studies use text data rather than other modalities such as video or audio. Second, most studies explore traditional machine learning algorithms; however, as computational tasks grow more complex, there is a need to employ more sophisticated techniques and methodologies. Third, the majority of research studies have either been evaluated using very few evaluation metrics or not statistically evaluated at all. Lastly, because complex classifiers are opaque black boxes, there is a need to use explainability techniques. This research aims to address these gaps by detecting hate speech in English and Kiswahili using videos manually collected from YouTube. The videos were converted to text and used to train various classifiers, whose performance was evaluated using a range of evaluation and statistical measures. The experimental results suggest that the random forest classifier achieved the highest results for both languages across all evaluation measures, compared to all classifiers used. The results for English were: accuracy 98%, AUC 96%, precision 99%, recall 97%, F1 98%, specificity 98% and MCC 96%, while the results for Kiswahili were: accuracy 90%, AUC 94%, precision 93%, recall 92%, F1 94%, specificity 87% and MCC 75%. These results suggest that the random forest classifier is robust, effective and efficient in detecting hate speech in any language, and implies that the classifier is reliable in detecting hate speech and related problems on social media.
Finally, to understand the classifier's decision-making process, we used the Local Interpretable Model-agnostic Explanations (LIME) technique to explain the predictions of the random forest classifier. © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2024.
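The abstract's pipeline (text features, a random forest classifier, and LIME explanations of individual predictions) can be illustrated with a minimal, from-scratch sketch of the LIME idea: perturb the input by dropping words, query the black-box model on each perturbation, and fit a locally weighted linear model whose coefficients rank each word's contribution. The toy corpus, labels, and all parameter values below are illustrative assumptions, not the paper's data or exact setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

# Black-box model: TF-IDF features + random forest (toy stand-in for the paper's classifier).
texts = [
    "those people are worthless and should leave",
    "what a lovely sunny morning",
    "they are all criminals and animals",
    "looking forward to the weekend",
]
labels = [1, 0, 1, 0]  # 1 = hate, 0 = neutral (illustrative labels)
model = make_pipeline(TfidfVectorizer(), RandomForestClassifier(n_estimators=200, random_state=0))
model.fit(texts, labels)

def lime_explain(text, predict_proba, n_samples=500, seed=0):
    """Return (word, weight) pairs ranking each word's local contribution."""
    rng = np.random.default_rng(seed)
    words = text.split()
    # Binary masks: 1 keeps a word, 0 drops it.
    masks = rng.integers(0, 2, size=(n_samples, len(words)))
    masks[0] = 1  # keep the unperturbed instance in the sample
    perturbed = [" ".join(w for w, m in zip(words, row) if m) or " " for row in masks]
    probs = predict_proba(perturbed)[:, 1]  # probability of the "hate" class
    # Weight samples by similarity to the original (fraction of words kept).
    weights = masks.mean(axis=1)
    local = Ridge(alpha=1.0)
    local.fit(masks, probs, sample_weight=weights)
    return sorted(zip(words, local.coef_), key=lambda p: -abs(p[1]))

explanation = lime_explain("those criminals are animals", model.predict_proba)
for word, weight in explanation:
    print(f"{word:>10s}  {weight:+.3f}")
```

In practice the authors would use the `lime` package's `LimeTextExplainer` rather than this hand-rolled version; the sketch only shows the mechanism that makes the random forest's per-prediction reasoning inspectable.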
dc.identifier.citation: International Journal of Speech Technology, 2024, 27(3), pp. 793-815
dc.identifier.issn: 1381-2416
dc.identifier.uri: https://doi.org/10.1007/s10772-024-10135-3
dc.identifier.uri: https://idr.nitk.ac.in/handle/123456789/20935
dc.publisher: Springer
dc.subject: Decision trees
dc.subject: Economic and social effects
dc.subject: Machine learning
dc.subject: Speech recognition
dc.subject: BERT
dc.subject: Explainable AI (XAI)
dc.subject: Free speech
dc.subject: GPT-J-6b
dc.subject: Hate speech
dc.subject: Kiswahili
dc.subject: Local interpretable model-agnostic explanation
dc.subject: Random forest classifier
dc.subject: Speech detection
dc.subject: Whisper AI
dc.subject: Random forests
dc.title: Explainable hate speech detection using LIME
