Blended-emotional speech for Speaker Recognition by using the fusion of Mel-CQT spectrograms feature extraction
Date
2025
Publisher
Elsevier Ltd
Abstract
Emotions are integral to human speech, adding depth and influencing the effectiveness of interactions. In single-emotion speech, the emotional state remains the same throughout the utterance. Blended emotion, by contrast, involves a mix of emotions, such as happiness tinged with sadness or a shift from neutral to sadness within the same utterance. In real-life scenarios, people often experience and express mixed emotions. Most existing work on Speaker Recognition (SR), which identifies a person from their voice, has focused on either neutral speech or a few primary emotions. This study aims to develop Blended-Emotional Speaker Recognition (BESR). In the proposed work, we investigate emotional information in speech signals by simulating a blended-emotional speech dataset for Speaker Recognition. A fusion of Mel spectrograms and Constant-Q Transform spectrograms (Mel-CQT spectrograms) is developed for feature extraction. Three datasets are considered: the National Institute of Technology Karnataka Kannada Language Emotional Speech Corpus (NITK-KLESC), the Crowd-sourced Emotional Multimodal Actors Dataset (CREMA-D), and the Indian Institute of Technology Kharagpur Simulated Emotion Hindi Speech Corpus (IITKGP-SEHSC). The experimental outcomes demonstrate that using blended emotional speech improves the fairness of the BESR system for Speaker Recognition. © 2025 Elsevier Ltd
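The abstract describes fusing Mel spectrograms with Constant-Q Transform spectrograms before feature extraction, but does not specify how the two representations are combined. The sketch below is a minimal, hedged illustration of one plausible fusion scheme: the CQT is interpolated to the Mel bin count and the two spectrograms are stacked as channels, as a ResNet-style network might expect. The function name, the channel-stacking choice, and the log-plus-normalise preprocessing are assumptions for illustration only; in practice the spectrograms themselves would come from an audio library such as librosa (`librosa.feature.melspectrogram`, `librosa.cqt`).

```python
import numpy as np

def fuse_mel_cqt(mel_spec, cqt_spec):
    """Illustrative Mel-CQT fusion (hypothetical helper, not the paper's method).

    Both inputs are (bins, frames) magnitude arrays. The CQT is linearly
    interpolated along its frequency axis to match the Mel bin count, then
    the two spectrograms are stacked as a 2-channel feature map.
    """
    n_mels, n_frames = mel_spec.shape
    # Interpolate each CQT frame onto n_mels points along the frequency axis.
    src = np.linspace(0.0, 1.0, cqt_spec.shape[0])
    dst = np.linspace(0.0, 1.0, n_mels)
    cqt_resized = np.stack(
        [np.interp(dst, src, cqt_spec[:, t]) for t in range(n_frames)], axis=1
    )

    def log_norm(s):
        # Log-compress magnitudes and standardise before fusing.
        s = np.log1p(s)
        return (s - s.mean()) / (s.std() + 1e-8)

    # Channel-stacked fusion: shape (2, n_mels, n_frames).
    return np.stack([log_norm(mel_spec), log_norm(cqt_resized)], axis=0)

# Toy example with random "spectrograms": 80 Mel bins, 84 CQT bins, 100 frames.
fused = fuse_mel_cqt(np.abs(np.random.randn(80, 100)), np.abs(np.random.randn(84, 100)))
print(fused.shape)  # (2, 80, 100)
```

Channel stacking keeps both views at full resolution; concatenating along the frequency axis would be an equally simple alternative if a single-channel input is required.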
Keywords
Speech recognition, Blended emotion, Blended-emotion speaker recognition, Emotional speech, Feature extraction, Mel-CQT spectrogram, Residual network, Speaker recognition, Spectrograms, Speech corpora, Spectrographs
Citation
Expert Systems with Applications, 2025, 276.
