Faculty Publications

Permanent URI for this community: https://idr.nitk.ac.in/handle/123456789/18736

Publications by NITK Faculty

Search Results

Now showing 1 - 3 of 3
  • Item
    Sound event detection in urban soundscape using two-level classification
    (Institute of Electrical and Electronics Engineers Inc., 2016) Luitel, B.; Vishnu Srinivasa Murthy, Y.V.S.; Koolagudi, S.G.
    A huge increase in the automobile sector has led to the creation of a large volume of different sounds, especially in urban cities. An analysis of the increased number of automobiles gives information related to traffic and vehicles. It also provides scope to understand the scenario of a particular location using soundscape information. In this paper, a two-level classification is proposed to classify urban sound events such as bus engine (BE), bus horn (BH), car horn (CH), and whistle (W) sounds. These sounds are considered as they play a major role in traffic scenarios. Real-time data are collected from live recordings at major locations of an urban city. Prior to the detection of events, the class of events is identified using signal processing techniques. Further, features such as Mel-frequency cepstral coefficients (MFCCs) are extracted based on an analysis of the spectra of the above-mentioned events; these features remain discriminative even in complex scenarios. Classifiers such as artificial neural networks (ANN), naive Bayes (NB), decision tree (J48), and random forest (RF) are used at the two levels. The proposed approach outperforms existing approaches that perform direct feature extraction without signal-level analysis. © 2016 IEEE.
  • Item
    Classification of vocal and non-vocal segments in audio clips using genetic algorithm based feature selection (GAFS)
    (Elsevier Ltd, 2018) Vishnu Srinivasa Murthy, Y.V.S.; Koolagudi, S.G.
    The technology of music information retrieval (MIR) is an emerging field that helps in tagging each portion of an audio clip. A majority of the subtasks of MIR need an application that segments vocal and non-vocal portions. In this paper, an effort has been made to segment vocal and non-vocal regions using novel features based on formant structure, on top of standard features. Features such as Mel-frequency cepstral coefficients (MFCCs), linear prediction cepstral coefficients (LPCCs), frequency domain linear prediction (FDLP) values, statistical values of pitch, jitter, shimmer, formant attack slope (FAS), formant heights from base-to-peak (FH1) and peak-to-base (FH2), formant angle values at peak (FA1) and valley (FA2), and F5 have been considered. Classifiers such as artificial neural networks (ANN), support vector machines (SVM), and random forest (RF) have been considered for a comparative study, as they are powerful enough to discover highly non-linear patterns. Genetic algorithms, with the support of neural networks, have been used to select the relevant features rather than considering all dimensions; this approach is named genetic algorithm based feature selection (GAFS). An accuracy of 89.23% before windowing and 95.16% after windowing is obtained with the optimal feature vector of length 32 using artificial neural networks. The system developed is capable of detecting singing voice segments with an accuracy of 98%. © 2018 Elsevier Ltd
  • Item
    Singer identification for Indian singers using convolutional neural networks
    (Springer, 2021) Vishnu Srinivasa Murthy, Y.V.S.; Koolagudi, S.G.; Jeshventh Raja, T.K.
    Singer identification is one of the important aspects of music information retrieval (MIR). In this work, traditional feature-based and trending convolutional neural network (CNN) based approaches are considered and compared for identifying singers. Two different datasets, namely artist20 and an Indian popular singers’ database with 20 singers, are used to evaluate the proposed approaches. Cepstral features such as Mel-frequency cepstral coefficients (MFCCs) and linear prediction cepstral coefficients (LPCCs) are considered to represent timbre information. Shifted delta cepstral (SDC) features are also computed besides the cepstral coefficients to capture temporal information. In addition, chroma features are computed from the 12 semitones of a musical octave, forming a 46-dimensional feature vector overall. Experiments are conducted with different feature combinations, and suitable features are selected using the genetic algorithm based feature selection (GAFS) approach. Two different classification techniques, namely artificial neural networks (ANNs) and random forest (RF), are applied to the features mentioned above. Further, spectrograms and chromagrams of audio clips are directly fed to a CNN for classification. The singer identification results obtained using CNNs are better than those of the traditional isolated and ensemble classifiers. An average accuracy of around 75% is observed with the CNN on the Indian popular singers’ database, whereas, on the artist20 dataset, neither the proposed feature-based configuration nor the CNN achieved better than 60% accuracy. © 2021, The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature.