Conference Papers
Permanent URI for this collection: https://idr.nitk.ac.in/handle/123456789/28506
Search Results (2 results)
Item
Identifying gamakas in Carnatic music (Institute of Electrical and Electronics Engineers Inc., 2015) Vyas, H.M.; Suma, S.M.; Koolagudi, S.G.; Guruprasad, K.R.
In this work, an effort has been made to identify the gamakas present in a given Carnatic music clip. Gamakas are the beautification elements used to enrich the melody, and gamaka identification is a very important stage in note transcription. In the proposed method, features that correspond to melodic variations, such as pitch and energy, are used to characterize the gamakas. The input pitch contour is modelled using a Hidden Markov Model with three states, namely Attack, Sustain and Decay; these states correspond to the rises and falls in the melody. The system is validated on a comprehensive data set consisting of 160 songs from 8 different ragas. An average accuracy of 75.86% is achieved with this method. © 2015 IEEE.

Item
Prediction of aesthetic elements in Karnatic music: A machine learning approach (International Speech Communication Association, 2018) Rajan, M.; Vijayakumar, A.; Vijayasenan, D.
Gamakas, the embellishments and ornamentations used to enhance the musical experience, are defining features of Karnatic Music (KM). The appropriateness of using a gamaka is determined by aesthetics and is usually developed by musicians through experience. Understanding and modelling gamakas is therefore a significant bottleneck for applications such as music synthesis and automatic accompaniment in the context of KM. To this end, we propose to learn both the presence and the type of gamaka in a data-driven manner from annotated symbolic music. In particular, we explore the efficacy of three classes of features (note-based, phonetic and structural) and train a Random Forest classifier to predict the existence and the type of gamaka. The observed accuracy is ∼70% for gamaka detection and ∼60% for gamaka classification. Finally, we present an analysis of the features and find that the frequency and duration of the neighbouring notes prove to be the most important features. © 2018 International Speech Communication Association. All rights reserved.
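The three-state pitch-contour model described in the first item can be sketched as a small Viterbi decode over a quantised pitch-slope sequence. Only the state names (Attack, Sustain, Decay) come from the abstract; the slope quantisation and all transition, emission and start probabilities below are illustrative placeholders, not values from the paper.

```python
import numpy as np

# Hedged sketch: a 3-state HMM (Attack, Sustain, Decay) decoded with Viterbi
# over pitch-slope observations. All probabilities are made-up illustrations.
STATES = ["Attack", "Sustain", "Decay"]
OBS = {"rising": 0, "steady": 1, "falling": 2}

# Hypothetical transition matrix: each state tends to persist for a few frames.
A = np.array([
    [0.6, 0.3, 0.1],   # Attack  -> Attack / Sustain / Decay
    [0.1, 0.8, 0.1],   # Sustain -> ...
    [0.1, 0.3, 0.6],   # Decay   -> ...
])

# Hypothetical emissions: Attack favours rising slopes, Decay falling ones.
B = np.array([
    [0.7,  0.2, 0.1],
    [0.15, 0.7, 0.15],
    [0.1,  0.2, 0.7],
])

pi = np.array([0.6, 0.3, 0.1])  # assume a note usually begins with an Attack

def viterbi(obs_seq):
    """Return the most likely state path for a sequence of slope labels."""
    obs = [OBS[o] for o in obs_seq]
    T, N = len(obs), len(STATES)
    delta = np.zeros((T, N))            # best log-probability ending in state j
    psi = np.zeros((T, N), dtype=int)   # back-pointers
    delta[0] = np.log(pi) + np.log(B[:, obs[0]])
    for t in range(1, T):
        scores = delta[t - 1][:, None] + np.log(A)   # N x N candidate scores
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + np.log(B[:, obs[t]])
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t][path[-1]]))
    return [STATES[s] for s in reversed(path)]

# A contour that rises, holds, then falls maps onto the three states:
print(viterbi(["rising", "rising", "steady", "steady", "falling", "falling"]))
# → ['Attack', 'Attack', 'Sustain', 'Sustain', 'Decay', 'Decay']
```

In the paper the observations are the actual pitch and energy features and the parameters are learned from the annotated data; this sketch fixes them by hand purely to show the decoding step.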
