Framework for Quantum-Based Deepfake Video Detection (Without Audio)
Date
2025
Publisher
John Wiley and Sons Inc
Abstract
Artificial intelligence (AI) has made human tasks easier than ever before, revolutionizing domains ranging from paper drafting to video editing. However, some individuals exploit AI to create deceptive content, such as fake videos, audio, and images, to mislead others. To address this, researchers and large corporations have proposed solutions for detecting fake content using classical deep learning models. However, these models often suffer from a large number of trainable parameters, which leads to large model sizes and, consequently, high computational cost. To overcome these limitations, we propose several hybrid classical–quantum models that use a classical pre-trained model as a front-end feature extractor, followed by a quantum long short-term memory (QLSTM) network. The pre-trained models are based on the ResNet architecture, namely ResNet34, ResNet50, and ResNet101. We compare the performance of the proposed models with that of their classical counterparts. The proposed models combine the strengths of classical and quantum systems for the detection of deepfake video (without audio). Our results indicate that the proposed models significantly reduce the number of trainable parameters, including the QLSTM parameters, which leads to a smaller model size than the classical models. Despite the reduced parameter count, the performance of the proposed models is either superior to or comparable with that of their classical equivalents. The proposed hybrid quantum models, that is, ResNet34-QLSTM, ResNet50-QLSTM, and ResNet101-QLSTM, achieve reductions of approximately 1.50%, 4.59%, and 5.24% in total trainable parameters compared to their equivalent classical models, respectively. Additionally, the QLSTM component of each proposed model reduces its trainable parameters by 99.02%, 99.16%, and 99.55%, respectively, compared to the equivalent classical LSTM. This significant reduction highlights the efficiency of the quantum-based network in terms of resource usage. The trained model sizes of the proposed models are 81.35, 88.06, and 162.79 MB, compared with 82.59, 92.28, and 171.76 MB for their classical equivalents, respectively. © 2025 Atul Pandey et al. International Journal of Intelligent Systems published by John Wiley & Sons Ltd.
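The article's own implementation is not included in this record. The sketch below is a minimal illustration, assuming a PennyLane/PyTorch stack, of the kind of hybrid pipeline the abstract describes: a pre-trained ResNet backbone extracts per-frame features and a QLSTM cell, whose gates are realised by small variational quantum circuits, models the temporal sequence. The qubit count, circuit depth, hidden size, and helper names (QLSTMCell, ResNetQLSTM) are illustrative assumptions, not the authors' actual design.

# Illustrative sketch only, not the authors' code.
import torch
import torch.nn as nn
import torchvision.models as models
import pennylane as qml

N_QUBITS = 4          # assumed number of qubits per variational gate circuit
N_LAYERS = 2          # assumed depth of each variational circuit

dev = qml.device("default.qubit", wires=N_QUBITS)

@qml.qnode(dev, interface="torch")
def variational_circuit(inputs, weights):
    # Angle-encode the compressed classical features, then apply a trainable
    # entangling ansatz; one such circuit replaces each classical LSTM gate.
    qml.AngleEmbedding(inputs, wires=range(N_QUBITS))
    qml.BasicEntanglerLayers(weights, wires=range(N_QUBITS))
    return [qml.expval(qml.PauliZ(w)) for w in range(N_QUBITS)]

weight_shapes = {"weights": (N_LAYERS, N_QUBITS)}

class QLSTMCell(nn.Module):
    """LSTM cell whose four gates are realised by small quantum circuits."""
    def __init__(self, input_dim, hidden_dim):
        super().__init__()
        self.hidden_dim = hidden_dim
        # Classical projections squeeze [x_t, h_{t-1}] down to the qubit count
        # and expand each circuit's output back to the hidden size.
        self.in_proj = nn.Linear(input_dim + hidden_dim, N_QUBITS)
        self.gates = nn.ModuleDict({
            g: qml.qnn.TorchLayer(variational_circuit, weight_shapes)
            for g in ["forget", "input", "update", "output"]
        })
        self.out_proj = nn.ModuleDict({
            g: nn.Linear(N_QUBITS, hidden_dim)
            for g in ["forget", "input", "update", "output"]
        })

    def forward(self, x_t, h_prev, c_prev):
        z = torch.tanh(self.in_proj(torch.cat([x_t, h_prev], dim=-1)))
        f = torch.sigmoid(self.out_proj["forget"](self.gates["forget"](z)))
        i = torch.sigmoid(self.out_proj["input"](self.gates["input"](z)))
        g = torch.tanh(self.out_proj["update"](self.gates["update"](z)))
        o = torch.sigmoid(self.out_proj["output"](self.gates["output"](z)))
        c = f * c_prev + i * g
        h = o * torch.tanh(c)
        return h, c

class ResNetQLSTM(nn.Module):
    """ResNet front-end feature extractor followed by a QLSTM classifier."""
    def __init__(self, hidden_dim=16):
        super().__init__()
        backbone = models.resnet50(weights="IMAGENET1K_V1")
        backbone.fc = nn.Identity()          # keep the 2048-d pooled features
        self.backbone = backbone
        self.cell = QLSTMCell(2048, hidden_dim)
        self.classifier = nn.Linear(hidden_dim, 1)   # real vs. deepfake score

    def forward(self, frames):               # frames: (batch, time, 3, H, W)
        b, t = frames.shape[:2]
        feats = self.backbone(frames.flatten(0, 1)).view(b, t, -1)
        h = feats.new_zeros(b, self.cell.hidden_dim)
        c = feats.new_zeros(b, self.cell.hidden_dim)
        for step in range(t):
            h, c = self.cell(feats[:, step], h, c)
        return torch.sigmoid(self.classifier(h))

In a structure like this, the trainable quantum weights per gate number only N_LAYERS x N_QUBITS, which is how a QLSTM can use far fewer parameters than a classical LSTM gate of comparable hidden size; the actual parameter counts and reductions reported above come from the paper itself.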
Keywords
Deep neural networks, Intelligent systems, Learning systems, Long short-term memory, Memory architecture, Quantum computers, Quantum theory, Video signal processing, Classical-quantum, Deepfake video (without audio), Hybrid classical–quantum neural network, Machine-learning, Quantum Computing, Quantum deepfake video, Quantum machine learning, Quantum machines, Quantum neural networks, Short term memory, Copyrights
Citation
International Journal of Intelligent Systems, 2025, 2025, 1, pp. -
