Conference Papers
Permanent URI for this collection: https://idr.nitk.ac.in/handle/123456789/28506
Search Results
2 results
Item: Multiclass SVM-based language-independent emotion recognition using selective speech features (Institute of Electrical and Electronics Engineers Inc., 2014) Kokane Amol, T.; Guddeti, G.R.M.
In this paper, we focus on recognizing six basic emotions, viz. Anger, Disgust, Fear, Happiness, Neutral, and Sadness, using selective features of speech signals in different languages such as German and Telugu. The feature set includes thirteen Mel-Frequency Cepstral Coefficients (MFCCs) and four other speech-signal features: Energy, Short-Term Energy, Spectral Roll-Off, and Zero-Crossing Rate (ZCR). The Surrey Audio-Visual Expressed Emotion (SAVEE) database is used to train the multiclass Support Vector Machine (SVM) classifier, and the German corpus EMO-DB (Berlin Database of Emotional Speech) and the Telugu corpus IITKGP:SESC are used for emotion recognition. The results are analyzed for each speech emotion separately, with accuracies of 98.3071% and 95.8166% obtained for the EMO-DB and IITKGP:SESC databases, respectively. © 2014 IEEE.

Item: Video Affective Content Analysis based on multimodal features using a novel hybrid SVM-RBM classifier (Institute of Electrical and Electronics Engineers Inc., 2017) Ashwin, T.S.; Saran, S.; Guddeti, G.R.M.
Video Affective Content Analysis is an active research area in computer vision. Live streaming video has become one of the main modes of communication in the recent decade; hence, video affective content analysis plays a vital role. Existing works on video affective content analysis focus mainly on predicting the current state of users from either visual or acoustic features alone. In this paper, we propose a novel hybrid SVM-RBM classifier that recognizes emotion for both live streaming video and stored video data using audio-visual features, and thus recognizes the users' mood based on categorical emotion descriptors.
The proposed method is evaluated for human emotion recognition on live streaming data using devices such as the Microsoft Kinect and a web cam. Further, we tested and validated it on standard datasets such as HUMANE and SAVEE. Classification of emotion is performed on both acoustic and visual data using a Restricted Boltzmann Machine (RBM) and a Support Vector Machine (SVM). We observe that the SVM-RBM classifier outperforms both RBM and SVM on the annotated datasets. © 2016 IEEE.
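Two of the frame-level features named in the first abstract, Short-Term Energy and Zero-Crossing Rate (ZCR), can be computed directly from raw samples. The sketch below is a minimal pure-Python illustration; the frame length, hop size, and toy signal are illustrative assumptions (e.g. 25 ms frames with a 10 ms hop at 16 kHz), not values taken from the paper.

```python
# Frame-level speech features: short-term energy and zero-crossing rate (ZCR).
# Pure-Python sketch; parameters below are assumptions, not the paper's values.

def short_term_energy(frame):
    """Sum of squared samples within one frame."""
    return sum(s * s for s in frame)

def zero_crossing_rate(frame):
    """Fraction of adjacent sample pairs whose signs differ."""
    crossings = sum(1 for a, b in zip(frame, frame[1:]) if (a >= 0) != (b >= 0))
    return crossings / (len(frame) - 1)

def frame_signal(signal, frame_len=400, hop=160):
    """Split a 1-D signal into overlapping frames (25 ms / 10 ms at 16 kHz)."""
    return [signal[i:i + frame_len]
            for i in range(0, len(signal) - frame_len + 1, hop)]

# Toy example: a strictly alternating signal has the maximum ZCR of 1.0.
toy = [1.0, -1.0] * 200
frames = frame_signal(toy)
print(zero_crossing_rate(frames[0]))  # 1.0
print(short_term_energy(frames[0]))   # 400.0
```

In a full pipeline, per-frame values like these would be aggregated (e.g. averaged) and concatenated with the MFCCs to form the feature vector fed to the multiclass SVM.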
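The second abstract does not specify how the SVM and RBM outputs are combined, so the following is a generic, hypothetical decision-level (late) fusion sketch: a weighted average of per-class scores from the acoustic and visual branches, followed by an argmax over the six emotion categories. The weight, score values, and function names are all illustrative assumptions.

```python
# Hypothetical late fusion of two classifiers' per-class scores.
# The actual SVM-RBM combination in the paper may differ; this only
# illustrates the general idea of decision-level multimodal fusion.

EMOTIONS = ["anger", "disgust", "fear", "happiness", "neutral", "sadness"]

def fuse_scores(svm_scores, rbm_scores, w_svm=0.5):
    """Weighted average of two per-class score dicts; the weight is an assumption."""
    return {e: w_svm * svm_scores[e] + (1 - w_svm) * rbm_scores[e]
            for e in EMOTIONS}

def predict(fused):
    """Pick the emotion category with the highest fused score."""
    return max(fused, key=fused.get)

# Illustrative scores from the acoustic (SVM) and visual (RBM) branches.
svm = {"anger": 0.10, "disgust": 0.05, "fear": 0.05,
       "happiness": 0.60, "neutral": 0.15, "sadness": 0.05}
rbm = {"anger": 0.20, "disgust": 0.05, "fear": 0.05,
       "happiness": 0.50, "neutral": 0.15, "sadness": 0.05}
print(predict(fuse_scores(svm, rbm)))  # happiness
```

Late fusion of this kind keeps the two modality-specific classifiers independent, which matches the abstract's description of performing classification on acoustic and visual data separately before producing a single categorical emotion label.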
