Blended-emotional speech for Speaker Recognition by using the fusion of Mel-CQT spectrograms feature extraction

dc.contributor.authorTomar, S.
dc.contributor.authorKoolagudi, S.G.
dc.date.accessioned2026-02-03T13:19:50Z
dc.date.issued2025
dc.description.abstractEmotions are integral to human speech, adding depth and shaping the effectiveness of interactions. Single-emotion speech is speech in which the emotional state remains the same throughout the utterance. Blended emotion, by contrast, involves a mix of emotions, such as happiness tinged with sadness or a shift from neutral to sadness within the same utterance. In real-life scenarios, people often experience and express mixed emotions. Most existing work on Speaker Recognition (SR), which identifies a person from their voice, has focused on either neutral speech or a few primary emotions. This study aims to develop Blended-Emotional Speaker Recognition (BESR). In the proposed work, emotional information in speech signals is exploited by simulating a blended emotional speech dataset for Speaker Recognition. A fusion of Mel spectrograms and Constant-Q Transform spectrograms (Mel-CQT spectrograms) is developed for feature extraction. Three datasets are considered: the National Institute of Technology Karnataka Kannada Language Emotional Speech Corpus (NITK-KLESC), the Crowd-sourced Emotional Multimodal Actors Dataset (CREMA-D), and the Indian Institute of Technology Kharagpur Simulated Emotion Hindi Speech Corpus (IITKGP-SEHSC). The experimental outcomes demonstrate that the BESR system using blended emotional speech improves the fairness of Speaker Recognition. © 2025 Elsevier Ltd
dc.identifier.citationExpert Systems with Applications, 2025, 276
dc.identifier.issn0957-4174
dc.identifier.urihttps://doi.org/10.1016/j.eswa.2025.127184
dc.identifier.urihttps://idr.nitk.ac.in/handle/123456789/20261
dc.publisherElsevier Ltd
dc.subjectSpeech recognition
dc.subjectBlended emotion
dc.subjectBlended-emotion speaker recognition
dc.subjectEmotional speech
dc.subjectFeatures extraction
dc.subjectMel-CQT spectrogram
dc.subjectResidual network
dc.subjectSpeaker recognition
dc.subjectSpectrograms
dc.subjectSpeech corpora
dc.subjectSpectrographs
dc.titleBlended-emotional speech for Speaker Recognition by using the fusion of Mel-CQT spectrograms feature extraction
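The abstract describes extracting features by fusing Mel spectrograms with Constant-Q Transform spectrograms. The record does not specify the authors' exact fusion scheme; the sketch below illustrates one plausible reading, stacking the two representations along the frequency axis after per-feature normalisation. The function name, normalisation, and array shapes are illustrative assumptions, and the random arrays stand in for real log-spectrograms (e.g. as produced by `librosa.feature.melspectrogram` and `librosa.cqt`).

```python
import numpy as np

def fuse_spectrograms(mel_db, cqt_db):
    """Fuse a Mel spectrogram and a CQT spectrogram along the frequency
    axis (one plausible interpretation of Mel-CQT fusion; the authors'
    exact scheme is not given in the abstract)."""
    # Trim both representations to the same number of time frames.
    t = min(mel_db.shape[1], cqt_db.shape[1])
    mel_db, cqt_db = mel_db[:, :t], cqt_db[:, :t]

    # Min-max normalise each representation so both contribute on a
    # comparable scale before stacking.
    def norm(x):
        return (x - x.min()) / (x.max() - x.min() + 1e-8)

    # Stack along the frequency axis: (n_mels + n_cqt_bins, t).
    return np.vstack([norm(mel_db), norm(cqt_db)])

# Stand-ins for real log-spectrograms of one utterance:
mel = np.random.randn(128, 100)  # 128 Mel bands x 100 frames
cqt = np.random.randn(84, 98)    # 84 CQT bins x 98 frames
fused = fuse_spectrograms(mel, cqt)
print(fused.shape)  # (212, 98)
```

The fused array could then be fed to a spectrogram-based classifier such as the residual network listed among the record's subject keywords.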
