Identification of Speaker-Specific Features to Minimize the Mismatch Outcomes for Speaker Recognition Using Anger and Happy Emotional Speech
Date
2025
Authors
Journal Title
Journal ISSN
Volume Title
Publisher
Springer Science and Business Media Deutschland GmbH
Abstract
A vital component of digital speech processing is Speaker Recognition (SR). However, variation in speakers' emotional states, such as happiness, anger, sadness, or fear, poses a significant challenge that compromises the robustness of speaker recognition systems. Research on SR using emotive speech indicates that emotions such as "anger" and "happy" are particularly difficult to distinguish. This study examines prosody-related speech characteristics to determine how to distinguish between "anger" and "happy" emotional speech for SR tasks, with the goal of identifying speaker-specific features. The experimental outcomes demonstrate that Intensity, Pitch, and Brightness (IPB) variables, used as speaker-specific features for the SR task, can distinguish between angry and happy emotional speech. Combining IPB and MFCC (IPBCC) feature extraction with a hybrid CNN-LSTM model augmented with an attention mechanism achieves an SR accuracy of 95.45% for anger and 96.22% for happy emotional speech. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2025.
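As a rough illustration of the IPBCC feature set described in the abstract, the sketch below concatenates MFCCs with frame-wise intensity, pitch, and brightness estimates using librosa. The specific estimators (RMS energy as an intensity proxy, YIN for pitch, spectral centroid as a brightness proxy), the frame parameters, and the helper name extract_ipbcc are assumptions for illustration only, not the authors' implementation.

```python
# Hypothetical sketch of IPB + MFCC ("IPBCC") feature extraction with librosa.
# The paper's exact feature definitions are not reproduced here; RMS energy,
# YIN pitch, and spectral centroid are assumed stand-ins for the Intensity,
# Pitch, and Brightness variables.
import numpy as np
import librosa


def extract_ipbcc(path, sr=16000, n_mfcc=13):
    """Return a (frames, n_mfcc + 3) matrix of MFCC + intensity/pitch/brightness."""
    y, sr = librosa.load(path, sr=sr)

    # 13 MFCCs per frame.
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)

    # Intensity proxy: frame-wise RMS energy.
    intensity = librosa.feature.rms(y=y)

    # Pitch proxy: frame-wise fundamental frequency via the YIN estimator.
    pitch = librosa.yin(y, fmin=librosa.note_to_hz("C2"),
                        fmax=librosa.note_to_hz("C7"), sr=sr)

    # Brightness proxy: spectral centroid (higher centroid = "brighter" timbre).
    brightness = librosa.feature.spectral_centroid(y=y, sr=sr)

    # Align frame counts (estimators may differ by a frame or two) and stack.
    n = min(mfcc.shape[1], intensity.shape[1], len(pitch), brightness.shape[1])
    features = np.vstack([mfcc[:, :n], intensity[:, :n],
                          pitch[np.newaxis, :n], brightness[:, :n]])
    return features.T  # shape: (frames, n_mfcc + 3)


# Example usage: ipbcc = extract_ipbcc("speaker01_angry.wav")
```

Frame-level matrices of this shape could then be fed to a sequence model such as the CNN-LSTM-with-attention classifier mentioned in the abstract; that model's architecture and hyperparameters are not specified here.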
Description
Keywords
Brightness, Intensity, Pitch, Speaker Recognition using Emotional Speech
Citation
Communications in Computer and Information Science, 2025, Vol. 2389 CCIS, pp. 63-76
