Conference Papers
Permanent URI for this collection: https://idr.nitk.ac.in/handle/123456789/28506
Search Results (3 results)
Item: Contribution of Telugu vowels in identifying emotions (Institute of Electrical and Electronics Engineers Inc., 2015)
Shashidhar Koolagudi, G.; Shivakranthi, B.; Sreenivasa Rao, K.S.; Ramteke, P.B.

This work aims to identify the contribution of different vowels to emotion recognition in the Telugu language. Instead of processing the entire speech signal, we focus only on the vowel segments of each utterance (/a/, /i/, /u/, /e/ and /o/); analysing these vowels allows the emotions to be discriminated. Spectral and prosodic features are used to study the effect of emotion on different vowels. Although prosodic features are the best discriminators of emotion at the utterance level, spectral features are more useful at the phoneme level. The same vowel exhibits different spectral behaviour when expressed in different emotions, and shimmer and jitter play a crucial role in classifying emotions from vowels. A semi-natural database collected from Telugu movies is used in this work, and Gaussian Mixture Models (GMMs) serve as the classification models. The emotions considered are anger, fear, happiness, sadness and neutral. The average emotion recognition performance obtained by combining MFCCs, formants, intensity, shimmer and jitter is around 78%. © 2015 IEEE.

Item: Rhythm and timbre analysis for carnatic music processing (Springer Science and Business Media Deutschland GmbH, 2016)
Heshi, R.; Suma, S.M.; Koolagudi, S.G.; Bhandari, S.; Sreenivasa Rao, K.S.

In this work, an effort has been made to analyse rhythm- and timbre-related features to identify the raga and tala of a piece of Carnatic music. Raga and tala classification is performed using both rhythm and timbre features: rhythm patterns and the rhythm histogram serve as rhythm features, while zero-crossing rate (ZCR), spectral centroid, spectral roll-off, flux and entropy serve as timbre features. The music clips contain both instrumental and vocal parts. A T-test is used as the similarity measure between feature vectors, and classification is done using Gaussian Mixture Models (GMMs). The results show that rhythm patterns are able to distinguish different ragas and talas with average accuracies of 89.98% and 86.67%, respectively. © Springer India 2016.

Item: Note Transcription from Carnatic Music (Springer, 2020)
Suma, S.M.; Koolagudi, S.G.; Ramteke, P.B.; Sreenivasa Rao, K.S.

In this work, an effort has been made to identify the note sequences of different ragas in Carnatic music. The proposed heuristic method uses the standard just-intonation frequency ratios between notes for a basic transcription of a music piece into a written sequence of notes. The notes present in a given piece are obtained using pitch histograms. The normalized pitch contour of the piece is segmented by detecting note boundaries, and the resulting segments are labelled using the note information already available. Without prior knowledge of the raga, 30 out of 64 sequences are identified accurately and a further 18 sequences are identified with one note error. With prior raga knowledge, 76.56% accuracy is observed in note-sequence identification. © 2020, Springer Nature Singapore Pte Ltd.
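The classification scheme shared by the first two items — one Gaussian model per class, with the test vector assigned to the class of highest likelihood — can be sketched roughly as below. This is a minimal single-component, diagonal-covariance variant on synthetic features; the class names, feature dimensionality and data here are illustrative assumptions, not the authors' setup.

```python
import numpy as np

def fit_gaussian(X):
    """Fit a single diagonal-covariance Gaussian (a 1-component GMM) to feature rows X."""
    mu = X.mean(axis=0)
    var = X.var(axis=0) + 1e-6  # variance floor to avoid division by zero
    return mu, var

def log_likelihood(x, mu, var):
    """Diagonal-Gaussian log-density of a feature vector x."""
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)

def classify(x, models):
    """Assign x to the class whose model gives the highest log-likelihood."""
    return max(models, key=lambda label: log_likelihood(x, *models[label]))

# Synthetic stand-ins for per-class training features (e.g. MFCC/shimmer/jitter vectors).
rng = np.random.default_rng(0)
train = {
    "anger":   rng.normal(loc=2.0, scale=0.5, size=(50, 4)),
    "neutral": rng.normal(loc=0.0, scale=0.5, size=(50, 4)),
}
models = {label: fit_gaussian(X) for label, X in train.items()}
print(classify(np.array([1.9, 2.1, 2.0, 1.8]), models))  # near the "anger" cluster
```

A full GMM would fit several mixture components per class (e.g. via expectation-maximization), but the decision rule — maximum log-likelihood over per-class models — is the same.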
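The timbre features named in the second item are standard frame-level measures. A minimal sketch of two of them (zero-crossing rate and spectral centroid) on a synthetic tone, assuming nothing about the authors' frame sizes or windowing:

```python
import numpy as np

def zero_crossing_rate(frame):
    """Fraction of adjacent sample pairs whose signs differ."""
    signs = np.sign(frame)
    return float(np.mean(signs[:-1] != signs[1:]))

def spectral_centroid(frame, sr):
    """Magnitude-weighted mean frequency of the frame's spectrum, in Hz."""
    mag = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    return float(np.sum(freqs * mag) / (np.sum(mag) + 1e-12))

sr = 8000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)       # pure 440 Hz tone, 1 second
print(spectral_centroid(tone, sr))       # ≈ 440 Hz for a pure tone
print(zero_crossing_rate(tone))          # ≈ 2 * 440 / 8000 ≈ 0.11
```

Spectral roll-off, flux and entropy follow the same pattern: each is a scalar summary of the frame's magnitude spectrum.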
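The transcription idea in the third item — label pitch values against just-intonation ratios from the tonic, then merge frames at note boundaries — can be sketched as follows. The tonic, the svara subset and the pitch contour are invented for illustration and are not the paper's data or exact algorithm.

```python
import numpy as np

# Just-intonation ratios for a few svaras relative to the tonic Sa (illustrative subset).
JUST_RATIOS = {"Sa": 1.0, "Ri2": 9/8, "Ga3": 5/4, "Ma1": 4/3,
               "Pa": 3/2, "Da2": 5/3, "Ni3": 15/8}

def label_note(f0, tonic):
    """Label a pitch value (Hz) with the nearest just-intonation note,
    measuring distance in log-frequency (i.e. musical interval)."""
    return min(JUST_RATIOS,
               key=lambda n: abs(np.log2(f0 / (tonic * JUST_RATIOS[n]))))

def transcribe(pitch_contour, tonic):
    """Collapse a frame-level pitch contour into a note sequence:
    label each frame, then merge runs of identical labels (boundary detection)."""
    labels = [label_note(f, tonic) for f in pitch_contour]
    sequence = [labels[0]]
    for label in labels[1:]:
        if label != sequence[-1]:
            sequence.append(label)
    return sequence

# Synthetic contour around a 240 Hz tonic: Sa held, then Pa (360 Hz), then Sa again.
contour = [240, 241, 239, 360, 362, 358, 240, 240]
print(transcribe(contour, tonic=240))  # ['Sa', 'Pa', 'Sa']
```

In practice the tonic and the set of notes actually present would be estimated first from a pitch histogram, as the paper describes, rather than assumed.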
