Title: Stratification of Depressed and Non-Depressed Texts from Social Media using LSTM and its Variants
Authors: Keerthan Kumar, T.G.; Anoop, K.R.; Koolagudi, S.G.; Rao, T.; Kodipalli, A.
Date accessioned: 2026-02-06
Date issued: 2024
Source: Procedia Computer Science, 2024, Vol. 235, pp. 1353-1363
ISSN: 1877-0509
DOI: https://doi.org/10.1016/j.procs.2024.04.127
URI: https://idr.nitk.ac.in/handle/123456789/29051

Abstract: This work examines the performance of several LSTM (long short-term memory) variants on social media text data. It evaluates five architectures, namely classic LSTM, bidirectional LSTM, stacked LSTM, gated recurrent unit (GRU), and bidirectional GRU, on a dataset of texts drawn from multiple social media platforms. We aim to identify the most effective of the five variants for text analysis through a comparative study of the models' precision, recall, F1-score, and accuracy. The findings show that the classic LSTM and GRU models outperform the other models in accuracy, while the bidirectional models (bidirectional LSTM and bidirectional GRU) achieve better precision than their unidirectional counterparts. This research has significant implications for developing more efficient models for natural language processing applications, and it offers useful insights for detecting depression on social media platforms through text data analysis. © 2024 Elsevier B.V. All rights reserved.

Keywords: Deep Learning; Depression; Gated Recurrent Unit; Long Short-Term Memory; Performance analysis
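The abstract compares the five models by precision, recall, F1-score, and accuracy. As a minimal sketch, not taken from the paper, the snippet below shows how these four metrics are computed from the binary predictions of a depressed (1) vs. non-depressed (0) text classifier; the example labels are illustrative only.

```python
# Minimal sketch (not from the paper): computing the four reported
# metrics for a binary depressed (1) vs. non-depressed (0) classifier.

def classification_metrics(y_true, y_pred):
    """Return (precision, recall, f1, accuracy) for binary labels."""
    # Tally the confusion-matrix cells from paired true/predicted labels.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    accuracy = (tp + tn) / len(y_true)
    return precision, recall, f1, accuracy

if __name__ == "__main__":
    # Hypothetical labels for eight texts, purely for illustration.
    y_true = [1, 0, 1, 1, 0, 0, 1, 0]
    y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
    p, r, f, a = classification_metrics(y_true, y_pred)
    print(f"precision={p:.2f} recall={r:.2f} f1={f:.2f} accuracy={a:.2f}")
```

A bidirectional model trading some accuracy for higher precision, as the abstract reports, would shift these numbers by lowering false positives at the possible cost of more false negatives.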