Faculty Publications
Permanent URI for this community: https://idr.nitk.ac.in/handle/123456789/18736
Publications by NITK Faculty
Search Results (3 items)
Item: Currency recognition system using image processing (Institute of Electrical and Electronics Engineers Inc., 2017). Abburu, V.; Gupta, S.; Rimitha, S.R.; Mulimani, M.; Koolagudi, S.G.

In this paper, we propose a system for automated currency recognition using image processing techniques. The proposed method recognizes both the country of origin and the denomination (value) of a given banknote; only paper currencies have been considered. The method first identifies the country of origin using certain predefined areas of interest, and then extracts the denomination using characteristics such as size, color, or text on the note, depending on how much the notes within the same country differ. We have considered 20 of the most traded currencies and their denominations. Our system identifies test notes accurately and quickly. © 2017 IEEE. (An illustrative sketch of this two-stage pipeline appears below the listing.)

Item: Classification of vocal and non-vocal segments in audio clips using genetic algorithm based feature selection (GAFS) (Elsevier Ltd, 2018). Vishnu Srinivasa Murthy, Y.V.S.; Koolagudi, S.G.

Music information retrieval (MIR) is an emerging field that helps in tagging each portion of an audio clip. A majority of MIR subtasks need an application that segments vocal and non-vocal portions. In this paper, an effort has been made to segment vocal and non-vocal regions using novel features based on formant structure on top of standard features. Features such as Mel-frequency cepstral coefficients (MFCCs), linear prediction cepstral coefficients (LPCCs), frequency domain linear prediction (FDLP) values, statistical values of pitch, jitter, shimmer, formant attack slope (FAS), formant heights from base-to-peak (FH1) and peak-to-base (FH2), formant angle values at peak (FA1) and valley (FA2), and F5 have been considered. Classifiers such as artificial neural networks (ANN), support vector machines (SVM), and random forest (RF) have been compared, as they are powerful enough to discover highly non-linear patterns. Genetic algorithms, supported by neural networks, have been used to select the relevant features rather than considering all dimensions; this approach is named genetic algorithm based feature selection (GAFS). An accuracy of 89.23% before windowing and 95.16% after windowing is obtained with the optimal feature vector of length 32 using artificial neural networks. The developed system detects singing-voice segments with an accuracy of 98%. © 2018 Elsevier Ltd. (A sketch of the GAFS loop appears below the listing.)

Item: Rare Sound Event Detection Using Multi-resolution Cochleagram Features and CRNN with Attention Mechanism (Birkhauser, 2025). Pandey, G.; Koolagudi, S.G.

Acoustic event detection (AED), or sound event detection (SED), focuses on automatically detecting acoustic events in an audio recording along with their onset and offset times. Rare AED, which aims to detect infrequent but significant sound events in an audio signal, is a challenging problem: traditional SED methods often struggle to detect rare sound events accurately because of their infrequent occurrence and diverse characteristics. This paper introduces novel features named multi-resolution cochleagrams (MRCGs) for rare SED tasks. Cochleagrams at different resolutions are extracted from the audio recording and stacked to form the MRCG feature vector; the equivalent rectangular bandwidth (ERB) scale used in the cochleagram simulates the human auditory filter. The classifier is a convolutional recurrent neural network (CRNN) embedded with an attention module. This work uses the DCASE 2017 Task 2 dataset for detecting rare sound events. Results show that the proposed combination of MRCG features and a CRNN with attention improves performance, achieving an average error rate of 0.11 and an average F1 score of 94.3%. © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2025. (Sketches of MRCG extraction and an attention-based CRNN appear below the listing.)
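The first item describes a two-stage pipeline: identify the country from predefined areas of interest, then read the denomination from cues such as size, color, or text. A minimal sketch of that flow in Python with OpenCV follows; the template files, region coordinates, and hue bands are hypothetical placeholders, not values from the paper.

```python
import cv2

# Hypothetical emblem templates, one per supported currency.
COUNTRY_TEMPLATES = {
    "INR": cv2.imread("templates/inr_emblem.png", cv2.IMREAD_GRAYSCALE),
    "USD": cv2.imread("templates/usd_emblem.png", cv2.IMREAD_GRAYSCALE),
}

def identify_country(note_gray):
    """Stage 1: match a fixed area of interest against emblem templates."""
    roi = note_gray[10:110, 10:160]          # hypothetical emblem location
    best, best_score = None, -1.0
    for country, tmpl in COUNTRY_TEMPLATES.items():
        if tmpl is None:                     # template image missing
            continue
        tmpl = cv2.resize(tmpl, (roi.shape[1], roi.shape[0]))
        score = cv2.matchTemplate(roi, tmpl, cv2.TM_CCOEFF_NORMED).max()
        if score > best_score:
            best, best_score = country, score
    return best

def identify_denomination(note_bgr):
    """Stage 2: guess the denomination from the note's mean hue."""
    hue = cv2.cvtColor(note_bgr, cv2.COLOR_BGR2HSV)[:, :, 0].mean()
    if hue < 30:                             # hypothetical hue bands; a real
        return "10"                          # system also uses size and text
    elif hue < 90:
        return "50"
    return "100"

note = cv2.imread("note.jpg")
gray = cv2.cvtColor(note, cv2.COLOR_BGR2GRAY)
print(identify_country(gray), identify_denomination(note))
```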
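The second item's GAFS wraps a genetic algorithm around a neural-network fitness function to pick a feature subset. A minimal sketch follows, assuming binary chromosomes that mask feature columns and cross-validated MLP accuracy as fitness; the population size, rates, and network size are illustrative guesses, not the paper's settings.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def fitness(mask, X, y):
    """Score one chromosome: ANN accuracy on the selected feature subset."""
    if not mask.any():
        return 0.0
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

def gafs(X, y, pop_size=20, generations=10, p_mut=0.05):
    n = X.shape[1]
    pop = rng.random((pop_size, n)) < 0.5            # random initial masks
    for _ in range(generations):
        scores = np.array([fitness(m, X, y) for m in pop])
        order = np.argsort(scores)[::-1]
        parents = pop[order[: pop_size // 2]]        # truncation selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n)                 # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child ^= rng.random(n) < p_mut           # bit-flip mutation
            children.append(child)
        pop = np.vstack([parents, np.array(children)])
    scores = np.array([fitness(m, X, y) for m in pop])
    return pop[scores.argmax()]                      # best feature mask

# Usage on toy data: 100 clips, 60-dimensional acoustic feature vectors.
X, y = rng.random((100, 60)), rng.integers(0, 2, 100)
best_mask = gafs(X, y)
print("selected", best_mask.sum(), "of 60 features")
```

Truncation selection with one-point crossover keeps the sketch short; the paper's exact GA operators and stopping criterion may differ.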
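The third item's MRCG feature stacks cochleagrams of different resolutions computed on an ERB-spaced gammatone filterbank. The sketch below approximates this with SciPy's gammatone filter design, a 20 ms fine-resolution cochleagram, and a smoothed copy as the coarse resolution; the channel count, frame length, and smoothing window are assumptions, and full MRCG recipes typically add further channels.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from scipy.signal import gammatone, lfilter

def erb_space(lo, hi, n):
    """n centre frequencies equally spaced on the ERB-number scale."""
    e = np.linspace(21.4 * np.log10(1 + 0.00437 * lo),
                    21.4 * np.log10(1 + 0.00437 * hi), n)
    return (10.0 ** (e / 21.4) - 1) / 0.00437

def cochleagram(x, fs, centres, frame_len):
    """Log frame energy of each gammatone channel (non-overlapping frames)."""
    n_frames = len(x) // frame_len
    out = np.empty((len(centres), n_frames))
    for i, fc in enumerate(centres):
        b, a = gammatone(fc, "iir", fs=fs)   # 4th-order gammatone filter
        band = lfilter(b, a, x)[: n_frames * frame_len]
        frames = band.reshape(n_frames, frame_len)
        out[i] = np.log((frames ** 2).mean(axis=1) + 1e-10)
    return out

fs = 16000
x = np.random.randn(fs * 2)                  # stand-in for a 2 s audio clip
centres = erb_space(50.0, 7500.0, 48)        # 48 ERB-spaced channels
fine = cochleagram(x, fs, centres, int(0.020 * fs))   # 20 ms resolution
coarse = uniform_filter(fine, size=(11, 11)) # smoothed, coarser context
mrcg = np.vstack([fine, coarse])             # stacked multi-resolution feature
print(mrcg.shape)                            # (96, n_frames)
```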
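For the classifier, a minimal PyTorch sketch of a CRNN with a temporal attention module: convolutional blocks pool the frequency axis, a bidirectional GRU models time, and attention weights pool frame-level probabilities into a clip-level decision. The layer sizes and the 96-channel input (matching the MRCG sketch above) are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class CRNNAttention(nn.Module):
    def __init__(self, n_freq=96, n_classes=1):
        super().__init__()
        self.conv = nn.Sequential(                 # CNN front end
            nn.Conv2d(1, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.MaxPool2d((4, 1)),                  # pool frequency, keep time
            nn.Conv2d(32, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.MaxPool2d((4, 1)),
        )
        self.gru = nn.GRU(64 * (n_freq // 16), 64,
                          batch_first=True, bidirectional=True)
        self.frame = nn.Linear(128, n_classes)     # frame-wise event score
        self.attn = nn.Linear(128, n_classes)      # attention weights

    def forward(self, x):                          # x: (batch, 1, freq, time)
        h = self.conv(x)                           # (batch, 64, freq/16, time)
        b, c, f, t = h.shape
        h = h.permute(0, 3, 1, 2).reshape(b, t, c * f)
        h, _ = self.gru(h)                         # (batch, time, 128)
        frame_prob = torch.sigmoid(self.frame(h))  # per-frame presence
        weights = torch.softmax(self.attn(h), dim=1)
        clip_prob = (frame_prob * weights).sum(dim=1)  # attention pooling
        return frame_prob, clip_prob

model = CRNNAttention()
mrcg = torch.randn(2, 1, 96, 500)                  # 2 clips, 500 frames
frame_prob, clip_prob = model(mrcg)
print(frame_prob.shape, clip_prob.shape)           # (2, 500, 1) and (2, 1)
```

The frame-level probabilities support onset and offset estimation, while the attention-pooled clip probability gives the rare-event presence decision; this weighted-pooling pattern is a common choice for such detectors.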
