Unsupervised Abstractive Text Summarization with Length Controlled Autoencoder

dc.contributor.authorDugar, A.
dc.contributor.authorSingh, G.
dc.contributor.authorBalamuralidhar, B.
dc.contributor.authorAnand Kumar, A.M.
dc.date.accessioned2026-02-06T06:35:19Z
dc.date.issued2022
dc.description.abstractThis work takes an unsupervised approach to abstractive text summarization, in which a large set of sentences is converted into a concise summary highlighting the essential details. This is achieved with an adversarial autoencoder (AAE) model. The encoder maps the input to a smaller latent vector, and the decoder reconstructs the higher-dimensional output from this latent code with some loss. Unlike variational autoencoders, AAEs use a discriminator trained with an adversarial loss. K-Means clustering and language models are then used to produce the final summary. The model is tested on several datasets, including the Amazon, Rotten Tomatoes, and Yelp reviews datasets, essentially performing an opinion summarization task, and is evaluated using ROUGE-1, ROUGE-2, ROUGE-L, and BLEU scores. The same task is also conducted on a dataset in Hindi. We obtain a ROUGE-1 score of around 24% for the Amazon, Yelp, and CNN/Daily Mail datasets and a score of 12% for Rotten Tomatoes, while the score obtained for the Hindi news articles dataset is only 8%. © 2022 IEEE.
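The abstract describes encoding the input to a smaller latent vector and decoding that code back to a higher-dimensional output with some loss. A minimal sketch of that encode/decode contract, using illustrative dimensions and untrained weights rather than the paper's actual architecture (in the full AAE, a discriminator would additionally shape the latent codes via an adversarial loss):

```python
# Sketch only: random linear encoder/decoder to illustrate how an
# autoencoder compresses inputs to a latent code and reconstructs
# them lossily. Dimensions and weights are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)

input_dim, latent_dim = 64, 8          # illustrative sizes, not from the paper
W_enc = rng.normal(scale=0.1, size=(input_dim, latent_dim))
W_dec = rng.normal(scale=0.1, size=(latent_dim, input_dim))

def encode(x):
    """Map a batch of input vectors to smaller latent vectors."""
    return np.tanh(x @ W_enc)

def decode(z):
    """Reconstruct higher-dimensional outputs from latent codes."""
    return z @ W_dec

x = rng.normal(size=(4, input_dim))    # a batch of 4 "sentence" vectors
z = encode(x)                          # latent codes, shape (4, 8)
x_hat = decode(z)                      # lossy reconstruction, shape (4, 64)
reconstruction_loss = float(np.mean((x - x_hat) ** 2))
```

Because the latent dimension is much smaller than the input dimension, the reconstruction is necessarily lossy; training would minimize this reconstruction loss while the adversarial term regularizes the latent space.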
dc.identifier.citationINDICON 2022 - 2022 IEEE 19th India Council International Conference, 2022
dc.identifier.urihttps://doi.org/10.1109/INDICON56171.2022.10040048
dc.identifier.urihttps://idr.nitk.ac.in/handle/123456789/29788
dc.publisherInstitute of Electrical and Electronics Engineers Inc.
dc.subjectAbstractive Text Summarization
dc.subjectAdversarial Autoencoder
dc.subjectUnsupervised Learning
dc.titleUnsupervised Abstractive Text Summarization with Length Controlled Autoencoder