A transformer-based architecture for fake news classification

Date

2021

Publisher

Springer

Abstract

In today’s post-truth world, the proliferation of propaganda and falsified news poses a serious risk of misinforming the public on a variety of issues, whether through traditional or social media. The information people acquire from such articles and posts shapes their world view and informs the choices they make in their day-to-day lives. Fake news can therefore be a malicious force with massive real-world consequences. In this paper, we focus on classifying fake news using models based on the natural language processing framework Bidirectional Encoder Representations from Transformers (BERT). We fine-tune BERT on domain-specific datasets and additionally incorporate human justifications and metadata to improve model performance. We find that the deep-contextualizing nature of BERT is effective for this task, obtaining a significant improvement in binary classification, and a small but meaningful improvement in six-label classification, over previously explored models. © 2021, The Author(s), under exclusive licence to Springer-Verlag GmbH Austria, part of Springer Nature.
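The record does not include implementation details, but the abstract describes combining BERT's contextual representation with metadata features for classification. As a rough, hypothetical sketch (all names, dimensions, and the metadata feature count are assumptions, not taken from the paper), such a classification head over BERT's pooled [CLS] output might look like:

```python
import torch
import torch.nn as nn

class FakeNewsHead(nn.Module):
    """Hypothetical classification head: concatenates BERT's pooled
    [CLS] representation with extra metadata features, then projects
    to the label space (2 labels for binary, 6 for six-way)."""

    def __init__(self, hidden_size=768, meta_dim=0, num_labels=6):
        super().__init__()
        self.dropout = nn.Dropout(0.1)
        self.classifier = nn.Linear(hidden_size + meta_dim, num_labels)

    def forward(self, pooled_output, metadata=None):
        # Optionally append metadata (e.g. speaker/context features)
        if metadata is not None:
            pooled_output = torch.cat([pooled_output, metadata], dim=-1)
        return self.classifier(self.dropout(pooled_output))

head = FakeNewsHead(meta_dim=4, num_labels=6)
pooled = torch.randn(2, 768)  # stand-in for BERT pooled embeddings
meta = torch.randn(2, 4)      # stand-in for metadata features
logits = head(pooled, meta)
print(logits.shape)  # torch.Size([2, 6])
```

In practice this head would sit on top of a fine-tuned BERT encoder; the six-output variant corresponds to the six-label setting mentioned in the abstract.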

Keywords

Binary classification, Natural language processing, Real-world, Social media, World views, Natural language processing systems

Citation

Social Network Analysis and Mining, 2021, 11, 1, pp. -
