Faculty Publications
Permanent URI for this community: https://idr.nitk.ac.in/handle/123456789/18736
Publications by NITK Faculty
Search Results (7 items)
Item: Automatic text-independent Kannada dialect identification system (Springer Verlag, 2019)
Chittaragi, N.B.; Limaye, A.; Chandana, N.T.; Annappa, B.; Koolagudi, S.G.
This paper proposes a dialect identification system for the Kannada language. A system that can automatically identify the dialect of the language being spoken has a wide variety of applications. However, little work on Automatic Speech Recognition (ASR) and dialect identification has been carried out for the majority of Indian languages, and only a few good-quality annotated audio datasets are available. In this paper, a new dataset covering 5 spoken dialects of the Kannada language is introduced. Spectral and prosodic features are used to capture the cues most prominent for recognition of Kannada dialects. Support Vector Machine (SVM) and neural network algorithms are used to model the text-independent recognition system. A neural network model that attempts to identify dialects from sentence-level cues has also been built. Hyper-parameters for the SVM and neural network models are chosen using grid search. Neural network models outperform SVMs when complete utterances are considered. © Springer Nature Singapore Pte Ltd. 2019.

Item: Estimation of Tyre Pressure from the Characteristics of the Wheel: An Image Processing Approach (Springer, 2020)
Vineeth Reddy, V.B.; Ananda Rao, H.; Yeshwanth, A.; Ramteke, P.B.; Koolagudi, S.G.
Improper tyre pressure is a safety issue that often goes unnoticed by users. A drop in tyre pressure can reduce mileage, tyre life, vehicle safety, and performance. In this paper, an approach is proposed to estimate tyre pressure from an image of the wheel. Tyre pressure is classified into under-pressure and normal-pressure classes using the load index, tyre type, tyre position, and the ratio of compressed to uncompressed tyre radius.
The efficiency of these features is evaluated using three classifiers, namely Random Forest, AdaBoost, and Artificial Neural Networks. It is observed that the ratio of radii plays a major role in classifying the tyres. The proposed system can be used to obtain a rough idea of whether a tyre should be refilled. © 2020, Springer Nature Singapore Pte Ltd.

Item: Dravidian language classification from speech signal using spectral and prosodic features (Springer New York LLC, 2017)
Koolagudi, S.G.; Bharadwaj, A.; Vishnu Srinivasa Murthy, Y.V.; Reddy, N.; Rao, P.
An interesting aspect of the Dravidian languages is their commonality through a shared script, similar vocabulary, and a common root language. In this work, an attempt has been made to classify four complex Dravidian languages using cepstral coefficients and prosodic features. Speech in the Dravidian languages has been recorded in various environments and used as a database. It is demonstrated that while cepstral coefficients alone can identify the language with a fair degree of accuracy, adding prosodic features to the cepstral coefficients improves language identification performance. Legendre polynomial fitting and principal component analysis (PCA) are applied to the feature vectors to reduce dimensionality, which further reduces time complexity. In the experiments conducted, it is found that using both cepstral coefficients and prosodic features, a language identification rate of around 87% is obtained, about 18% above the baseline system using Mel-frequency cepstral coefficients (MFCCs). The results show that temporal variations and prosody are important factors to consider for language identification.
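The cepstral-plus-PCA pipeline described in the abstract above can be sketched as follows. This is not the authors' implementation: it uses plain real-cepstrum coefficients in place of MFCCs (no mel warping), a synthetic harmonic signal in place of recorded speech, and illustrative frame/dimension parameters.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

def cepstral_features(signal, frame_len=512, hop=256, n_coeffs=20):
    """Frame the signal and keep the first real-cepstrum coefficients
    of each frame (a simplification of MFCCs: no mel filterbank)."""
    feats = []
    for i in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[i:i + frame_len] * np.hanning(frame_len)
        spectrum = np.abs(np.fft.rfft(frame)) + 1e-10
        cepstrum = np.fft.irfft(np.log(spectrum))
        feats.append(cepstrum[:n_coeffs])
    return np.array(feats)

# Synthetic "utterance": a noisy harmonic signal standing in for speech.
t = np.arange(16000) / 16000.0
utterance = np.sin(2 * np.pi * 180 * t) + 0.3 * rng.standard_normal(t.size)

X = cepstral_features(utterance)           # (n_frames, 20) cepstral vectors
pca = PCA(n_components=8).fit(X)           # reduce 20-dim cepstra to 8 dims
X_reduced = pca.transform(X)
print(X.shape, X_reduced.shape)
```

In the paper the reduced vectors would then feed a language classifier; here the sketch stops at the dimensionality-reduction step the abstract highlights.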
© 2017, Springer Science+Business Media, LLC.

Item: Choice of a classifier, based on properties of a dataset: case study-speech emotion recognition (Springer New York LLC, 2018)
Koolagudi, S.G.; Vishnu Srinivasa Murthy, Y.V.S.; Bhaskar, S.P.
In this paper, a process for selecting a classifier based on the properties of a dataset is designed, since it is impractical to experiment with the data on an arbitrary number of classifiers. Speech emotion recognition is considered as a case study. Different combinations of spectral and prosodic features relevant to emotions are explored. The best subset of the chosen features is recommended for each classifier based on the properties of the chosen dataset. Various statistical tests are used to estimate the properties of the dataset, and the nature of the dataset guides the selection of the relevant classifier. For comparison, three other clustering and classification techniques, namely K-means clustering, vector quantization, and artificial neural networks, are used for experimentation, and the results are compared with those of the selected classifier. Prosodic features such as pitch, intensity, jitter, and shimmer, and spectral features such as Mel-frequency cepstral coefficients (MFCCs) and formants are considered in this work. Statistical parameters of prosody such as minimum, maximum, mean (μ), and standard deviation (σ) are extracted from speech and combined with the basic spectral (MFCC) features for better performance. Five basic emotions, namely anger, fear, happiness, neutral, and sadness, are considered. To analyse the performance of different datasets on different classifiers, content- and speaker-independent emotional data collected from Telugu movies is used. Mean opinion scores from fifty users are collected to label the emotional data. To generalize the conclusions, the benchmark IIT-Kharagpur emotional database is also used.
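The feature construction described in the abstract above, statistical parameters of prosodic contours (minimum, maximum, μ, σ) concatenated with spectral features, can be sketched as below. The contours and MFCC matrix are random stand-ins, not extracted from real speech, and the dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def prosody_stats(track):
    """Minimum, maximum, mean (mu), and standard deviation (sigma)
    of a per-frame prosodic contour."""
    return np.array([track.min(), track.max(), track.mean(), track.std()])

# Illustrative stand-ins: a pitch contour (Hz) and an intensity contour
# (dB) over 200 frames, plus a per-frame 13-dimensional MFCC matrix.
pitch = rng.uniform(120, 260, size=200)
intensity = rng.uniform(55, 75, size=200)
mfcc = rng.standard_normal((200, 13))

# One utterance-level vector: prosodic statistics + averaged MFCCs.
feature_vector = np.concatenate([prosody_stats(pitch),
                                 prosody_stats(intensity),
                                 mfcc.mean(axis=0)])
print(feature_vector.shape)   # 4 + 4 + 13 = 21 dimensions
```

Such utterance-level vectors would then be handed to whichever classifier the dataset's properties recommend.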
© 2018, Springer Science+Business Media, LLC, part of Springer Nature.

Item: Classification of vocal and non-vocal segments in audio clips using genetic algorithm based feature selection (GAFS) (Elsevier Ltd, 2018)
Vishnu Srinivasa Murthy, Y.V.S.; Koolagudi, S.G.
Music information retrieval (MIR) is an emerging field that helps in tagging each portion of an audio clip. A majority of MIR subtasks need an application that segments vocal and non-vocal portions. In this paper, an effort has been made to segment vocal and non-vocal regions using novel features based on formant structure, on top of standard features. The features considered are Mel-frequency cepstral coefficients (MFCCs), linear prediction cepstral coefficients (LPCCs), frequency domain linear prediction (FDLP) values, statistical values of pitch, jitter, shimmer, formant attack slope (FAS), formant heights from base-to-peak (FH1) and peak-to-base (FH2), formant angle values at peak (FA1) and valley (FA2), and F5. The classifiers artificial neural networks (ANN), support vector machines (SVM), and random forest (RF) are considered for a comparative study, as they are powerful enough to discover highly non-linear patterns. Genetic algorithms, with the support of neural networks, are used to select the relevant features rather than considering all dimensions; this approach is termed genetic algorithm based feature selection (GAFS). An accuracy of 89.23% before windowing and 95.16% after windowing is obtained with an optimal feature vector of length 32 using artificial neural networks. The developed system is capable of detecting singing-voice segments with an accuracy of 98%.
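A minimal sketch of the GAFS idea described in the abstract above: a genetic algorithm evolves binary feature masks scored by a classifier's cross-validated accuracy. This is an illustration on synthetic data, with a lightweight logistic-regression fitness standing in for the paper's neural network, and all population/mutation parameters chosen arbitrarily.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)

# Toy data: 12 features, only some informative (a stand-in for the
# higher-dimensional audio feature vectors used in the paper).
X, y = make_classification(n_samples=300, n_features=12, n_informative=4,
                           n_redundant=2, random_state=42)

def fitness(mask):
    """Cross-validated accuracy of a classifier on the selected features."""
    if not mask.any():
        return 0.0
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

def gafs(pop_size=20, generations=10, p_mut=0.1):
    """Genetic algorithm over binary feature masks: truncation selection,
    one-point crossover, and bit-flip mutation."""
    pop = rng.random((pop_size, X.shape[1])) < 0.5
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]   # keep best half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, X.shape[1])                # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child ^= rng.random(child.size) < p_mut          # bit-flip mutation
            children.append(child)
        pop = np.vstack([parents] + children)
    scores = np.array([fitness(ind) for ind in pop])
    return pop[scores.argmax()], scores.max()

mask, acc = gafs()
print(f"selected {mask.sum()} of {mask.size} features, CV accuracy {acc:.3f}")
```

The best mask keeps only the dimensions that help the classifier, mirroring how GAFS reduces the full audio feature set to a compact optimal vector.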
© 2018 Elsevier Ltd.

Item: A Deep Ensemble Learning-Based CNN Architecture for Multiclass Retinal Fluid Segmentation in OCT Images (Institute of Electrical and Electronics Engineers Inc., 2023)
Rahil, M.; Anoop, B.N.; Girish, G.N.; Kothari, A.R.; Koolagudi, S.G.; Rajan, J.
Retinal fluids (fluid collections) develop because of the accumulation of fluid in the retina, which may be caused by several retinal disorders and can lead to loss of vision. Optical coherence tomography (OCT) provides non-invasive cross-sectional images of the retina and enables the visualization of different retinal abnormalities. The identification and segmentation of retinal cysts from OCT scans is gaining immense attention, since manual analysis of OCT data is time-consuming and requires an experienced ophthalmologist. Identification and categorization of retinal cysts aids in establishing the pathophysiology of various retinal diseases, such as macular edema, diabetic macular edema, and age-related macular degeneration. Hence, an automatic algorithm for the segmentation and detection of retinal cysts would be of great value to ophthalmologists. In this study, a convolutional neural network-based deep ensemble architecture is proposed that can segment three different types of retinal fluid from retinal OCT images. The quantitative and qualitative performance of the model was evaluated using the publicly available RETOUCH challenge dataset. The proposed model outperformed the state-of-the-art methods, with an overall improvement of 1.8%. © 2013 IEEE.

Item: Rare sound event detection using superlets and a convolutional TDPANet (Springer Science and Business Media Deutschland GmbH, 2025)
Pandey, G.; Koolagudi, S.G.
Rare Sound Event Detection (RSED) focuses on identifying infrequent but significant sound events in audio recordings, with precise onset and offset times. It is crucial for applications such as surveillance, healthcare, and environmental monitoring.
An essential component of RSED systems is the extraction of effective time-frequency representations as input features. These features must capture short, transient acoustic events in an audio recording, even in noisy and complex environments. Most existing approaches to the RSED problem rely on time-frequency representations such as the Mel spectrogram, Constant-Q Transform (CQT), and Continuous Wavelet Transform (CWT). However, these representations suffer from a resolution trade-off between time and frequency, which limits their ability to capture the fine-grained details needed to detect such events in complex acoustic environments. To overcome these limitations, we introduce superlets, a time-frequency representation that offers super-resolution in both the time and frequency domains. To process the high-resolution superlet features, we also propose a Convolutional Temporal Dilated Pyramid Attention Network (TDPANet). This neural network architecture incorporates convolutional feature extraction, dilated temporal modeling, multi-scale temporal pooling, and temporal attention mechanisms to enhance event detection accuracy. We evaluate the method on the DCASE 2017 Task 2 rare sound event dataset, which includes isolated sound events and real-world acoustic scenes. Experimental results show that the proposed method significantly outperforms state-of-the-art techniques, achieving an Error Rate (ER) of 0.15 and an F1-score of 92.3%, demonstrating its effectiveness in detecting rare sound events. © The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature 2025.
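The superlet representation mentioned in the abstract above combines Morlet wavelets of increasing cycle counts via a geometric mean, so a single frequency channel gains both temporal and spectral sharpness. The sketch below is a minimal numpy illustration of that idea, not the authors' implementation or the DCASE pipeline; the burst signal, sample rate, and order are all illustrative.

```python
import numpy as np

def morlet_response(x, fs, freq, n_cycles):
    """Magnitude response of a single-frequency Morlet wavelet."""
    sd = n_cycles / (2 * np.pi * freq)              # Gaussian width in seconds
    t = np.arange(-3 * sd, 3 * sd, 1 / fs)
    wavelet = np.exp(2j * np.pi * freq * t) * np.exp(-t**2 / (2 * sd**2))
    wavelet /= np.abs(wavelet).sum()                # L1 normalisation
    return np.abs(np.convolve(x, wavelet, mode="same"))

def superlet(x, fs, freq, base_cycles=3, order=4):
    """Superlet channel at one frequency: the geometric mean of Morlet
    responses whose cycle counts grow with the order."""
    responses = np.array([morlet_response(x, fs, freq, base_cycles * (k + 1))
                          for k in range(order)])
    return np.exp(np.mean(np.log(responses + 1e-12), axis=0))

fs = 8000
t = np.arange(0, 0.5, 1 / fs)
# A short 1 kHz burst buried in noise, standing in for a rare sound event.
x = 0.05 * np.random.default_rng(3).standard_normal(t.size)
x[1600:2400] += np.sin(2 * np.pi * 1000 * t[1600:2400])

response = superlet(x, fs, freq=1000)
# The superlet channel should respond much more strongly inside the burst.
print(response[1600:2400].mean() > response[:1600].mean())
```

Stacking such channels over a grid of frequencies yields the high-resolution time-frequency image that a downstream detector (the TDPANet in the paper) would consume.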
