Conference Papers
Permanent URI for this collection: https://idr.nitk.ac.in/handle/123456789/28506
3 results
Search Results
Item: Classification of vocal and non-vocal regions from audio songs using spectral features and pitch variations (Institute of Electrical and Electronics Engineers Inc., 2015) Vishnu Srinivasa Murthy, Y.V.S.; Koolagudi, S.G.
In this work, an effort has been made to identify vocal and non-vocal regions in a given song using signal processing techniques and machine learning algorithms. Initially, spectral features such as mel-frequency cepstral coefficients (MFCCs) are used to develop the baseline system. Statistical values of pitch, jitter, and shimmer are then added to improve the system's performance. Artificial neural networks (ANNs) are used to capture the characteristics of the vocal and non-vocal segments of the songs. The experiment is conducted on 60 vocal and 60 non-vocal clips extracted from Telugu albums. An 11-point moving window is used to ensure the continuity of vocal and non-vocal segments, thereby improving the accuracy of the system. With this approach, the system achieves 85.59% accuracy for vocal and 88.52% for non-vocal segment classification. © 2015 IEEE.

Item: Audio songs classification based on music patterns (Springer Verlag, 2016) Sharma, R.; Vishnu Srinivasa Murthy, Y.V.S.; Koolagudi, S.G.
In this work, an effort has been made to classify audio songs based on their music patterns, which helps retrieve music clips matching a listener's taste. This task is useful for indexing and accessing music clips based on the listener's state. Seven main categories are considered in this work: devotional, energetic, folk, happy, pleasant, sad, and sleepy.
Forty music clips of each category are used for the training phase and fifteen clips of each category for the testing phase. Vibrato-related features such as jitter and shimmer are computed along with the mel-frequency cepstral coefficients (MFCCs); statistical values of pitch (minimum, maximum, mean, and standard deviation) are then appended to the MFCCs, jitter, and shimmer, resulting in a 19-dimensional feature vector. A feedforward backpropagation neural network (BPNN) is used as the classifier due to its efficiency in mapping nonlinear relations. An average accuracy of 82% is achieved on the 105 testing clips. © Springer India 2016.

Item: Sound event detection in urban soundscape using two-level classification (Institute of Electrical and Electronics Engineers Inc., 2016) Luitel, B.; Vishnu Srinivasa Murthy, Y.V.S.; Koolagudi, S.G.
A huge increase in the automobile sector has led to the creation of a large volume of different sounds, especially in urban cities. An analysis of this increased number of automobiles gives information about traffic and vehicles; it also provides scope for understanding the scenario of a particular location from its soundscape. In this paper, a two-level classification is proposed to classify urban sound events such as bus engine (BE), bus horn (BH), car horn (CH), and whistle (W) sounds. These sounds are chosen because they play a major role in traffic scenarios. Real-time data are collected from live recordings at major locations of the city. Prior to the detection of events, the class of events is identified using signal processing techniques. Further, features such as mel-frequency cepstral coefficients (MFCCs) are extracted based on an analysis of the spectra of the above-mentioned events; these features remain discriminative even in complex scenarios. Classifiers such as artificial neural networks (ANN), naive Bayes (NB), decision tree (J48), and random forest (RF) are used at the two levels.
The proposed approach outperforms existing approaches that usually perform direct feature extraction without signal-level analysis. © 2016 IEEE.
