A transformer-based architecture for fake news classification

dc.contributor.author: Mehta, D.
dc.contributor.author: Dwivedi, A.
dc.contributor.author: Patra, A.
dc.contributor.author: Anand Kumar, M.
dc.date.accessioned: 2026-02-05T09:26:29Z
dc.date.issued: 2021
dc.description.abstract: In today’s post-truth world, the proliferation of propaganda and falsified news poses a serious risk of misinforming the public on a variety of issues, through both traditional and social media. The information people acquire from these articles and posts tends to shape their world view and informs the choices they make in their day-to-day lives. Fake news can therefore be a malicious force with massive real-world consequences. In this paper, we focus on classifying fake news using models based on a natural language processing framework, Bidirectional Encoder Representations from Transformers, also known as BERT. We fine-tune BERT on domain-specific datasets and also make use of human justification and metadata to further improve performance. We find that the deep-contextualizing nature of BERT is effective for this task, obtaining a significant improvement on binary classification and a small but meaningful improvement on six-label classification over previously explored models. © 2021, The Author(s), under exclusive licence to Springer-Verlag GmbH Austria, part of Springer Nature.
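The record does not include the authors' code. As a rough illustration of the fine-tuning step the abstract describes, here is a minimal sketch using the Hugging Face transformers library; the model checkpoint, toy data, label set, and hyperparameters are assumptions for illustration, not the paper's actual setup. For the six-label setting mentioned in the abstract, `num_labels` would be set to 6 instead of 2.

```python
# Minimal sketch of BERT fine-tuning for fake news classification.
# Assumes the Hugging Face `transformers` library; data and
# hyperparameters are illustrative, not the authors' configuration.
import torch
from torch.utils.data import DataLoader, Dataset
from transformers import BertForSequenceClassification, BertTokenizerFast

class NewsDataset(Dataset):
    """Pairs each article text with an integer label (0 = real, 1 = fake)."""
    def __init__(self, texts, labels, tokenizer, max_len=256):
        self.enc = tokenizer(texts, truncation=True, padding="max_length",
                             max_length=max_len, return_tensors="pt")
        self.labels = torch.tensor(labels)

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, i):
        item = {k: v[i] for k, v in self.enc.items()}
        item["labels"] = self.labels[i]
        return item

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
# num_labels=2 for binary classification; use num_labels=6 for the six-label task.
model = BertForSequenceClassification.from_pretrained("bert-base-uncased",
                                                      num_labels=2)

# Toy examples stand in for a domain-specific corpus.
texts = ["The moon landing was staged.", "NASA launched a new satellite today."]
labels = [1, 0]
loader = DataLoader(NewsDataset(texts, labels, tokenizer),
                    batch_size=2, shuffle=True)

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for epoch in range(3):
    for batch in loader:
        optimizer.zero_grad()
        loss = model(**batch).loss  # cross-entropy over the [CLS] head
        loss.backward()
        optimizer.step()
```

Incorporating metadata or human-justification text, as the abstract mentions, could be done by concatenating it to the article text before tokenization, though the paper's exact input construction is not specified in this record.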
dc.identifier.citation: Social Network Analysis and Mining, 2021, 11(1)
dc.identifier.issn: 1869-5450
dc.identifier.uri: https://doi.org/10.1007/s13278-021-00738-y
dc.identifier.uri: https://idr.nitk.ac.in/handle/123456789/22980
dc.publisher: Springer
dc.subject: Binary classification
dc.subject: Natural language processing
dc.subject: Real-world
dc.subject: Social media
dc.subject: World views
dc.subject: Natural language processing systems
dc.title: A transformer-based architecture for fake news classification
