Stratification of Depressed and Non-Depressed Texts from Social Media using LSTM and its Variants

dc.contributor.author: Keerthan Kumar, T.G.K.
dc.contributor.author: Anoop, R.
dc.contributor.author: Koolagudi, S.G.
dc.contributor.author: Rao, T.
dc.contributor.author: Kodipalli, A.
dc.date.accessioned: 2026-02-06T06:34:06Z
dc.date.issued: 2024
dc.description.abstract: This work examines the performance of various LSTM (long short-term memory) variants on social media text data. The study evaluates LSTM models with different architectures, namely classic LSTM, bidirectional LSTM, stacked LSTM, gated recurrent unit (GRU), and bidirectional GRU, on a social network dataset comprising texts extracted from multiple social media platforms. We aim to identify the most effective of the five considered LSTM variants for text analysis through a comparative study of the models' precision, recall, F1-score, and accuracy. The findings show that the classic LSTM and the GRU achieve higher accuracy than the other models, whereas the bidirectional models (bidirectional LSTM and bidirectional GRU) achieve better precision than their respective unidirectional counterparts. This research has significant implications for developing more efficient models for natural language processing applications and offers useful insights into screening for depression on social media platforms through text data analysis. © 2024 Elsevier B.V. All rights reserved.
dc.identifier.citation: Procedia Computer Science, 2024, Vol. 235, p. 1353-1363
dc.identifier.issn: 1877-0509
dc.identifier.uri: https://doi.org/10.1016/j.procs.2024.04.127
dc.identifier.uri: https://idr.nitk.ac.in/handle/123456789/29051
dc.publisher: Elsevier B.V.
dc.subject: Deep Learning
dc.subject: Depression
dc.subject: Gated Recurrent Unit
dc.subject: Long Short-Term Memory
dc.subject: Performance Analysis
dc.title: Stratification of Depressed and Non-Depressed Texts from Social Media using LSTM and its Variants
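The abstract compares the five LSTM variants by precision, recall, F1-score, and accuracy. As an illustrative sketch only (not the paper's code), these four metrics can be computed from binary model predictions (1 = depressed, 0 = non-depressed) as follows:

```python
def classification_metrics(y_true, y_pred):
    """Compute accuracy, precision, recall and F1 for binary labels
    (1 = depressed, 0 = non-depressed). Illustrative sketch; the
    paper's own evaluation code and thresholds are not given here."""
    # Tally the confusion-matrix cells.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)

    accuracy = (tp + tn) / len(y_true)
    # Guard against division by zero when a class is never predicted.
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}
```

In this framing, the abstract's observation that the bidirectional models score higher on precision while classic LSTM and GRU score higher on accuracy corresponds to the bidirectional variants producing fewer false positives (fp) at the cost of overall correctness.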