Faculty Publications

Permanent URI for this community: https://idr.nitk.ac.in/handle/123456789/18736

Publications by NITK Faculty

  • Item
    Polyphonic Sound Event Detection Using Mel-Pseudo Constant Q-Transform and Deep Neural Network
    (Taylor and Francis Ltd., 2024) Spoorthy, V.; Koolagudi, S.G.
    The task of identifying sound events in a given surrounding is known as Sound Event Detection (SED) or Acoustic Event Detection (AED). Sound events occur in an unstructured manner and vary widely in both temporal structure and frequency content. They may be non-overlapping (monophonic) or overlapping (polyphonic), and in real-world scenarios polyphonic SED is far more common than monophonic SED. In this paper, a Mel-Pseudo Constant Q-Transform (MP-CQT) technique is introduced for polyphonic SED that learns both monophonic and polyphonic sound events effectively. A pseudo-CQT is adapted to extract features from the audio files together with their Mel spectrograms; the Mel scale is believed to broadly approximate the human auditory perception system. The classifier used is a Convolutional Recurrent Neural Network (CRNN). The performance of the proposed MP-CQT features with the CRNN is compared against existing approaches, and a considerable improvement is observed: the method achieves an average error rate of 0.684 and an average F1 score of 52.3%. Robustness is also analyzed by adding noise to the audio files at different Signal-to-Noise Ratios (SNRs). The proposed method outperforms state-of-the-art SED systems, and the new feature extraction technique shows promising improvement in polyphonic SED performance. © 2024 IETE. (A hedged sketch of an MP-CQT-style front end appears after this list.)
  • Item
    Rare Sound Event Detection Using Multi-resolution Cochleagram Features and CRNN with Attention Mechanism
    (Birkhauser, 2025) Pandey, G.; Koolagudi, S.G.
    Acoustic event detection (AED) or sound event detection (SED) focuses on automatically detecting acoustic events in an audio recording along with their onset and offset times. Rare AED, which aims to detect infrequent but significant sound events in an audio signal, is a particularly challenging problem: traditional SED methods often struggle to detect rare events accurately because of their infrequent occurrence and diverse characteristics. This paper introduces novel multi-resolution cochleagram (MRCG) features for rare SED tasks. Cochleagrams at different resolutions are extracted from the audio recording and stacked to form the MRCG feature vector; the equivalent rectangular bandwidth (ERB) scale used in the cochleagram models the human auditory filter. The classifier is a convolutional recurrent neural network (CRNN) embedded with an attention module. This work uses the DCASE 2017 Task 2 dataset for detecting rare sound events. Results show that the combination of MRCG features and the attention-based CRNN improves performance, achieving an average error rate of 0.11 and an average F1 score of 94.3%. © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2025. (A hedged sketch of MRCG extraction appears after this list.)
  • Item
    Rare sound event detection using superlets and a convolutional TDPANet
    (Springer Science and Business Media Deutschland GmbH, 2025) Pandey, G.; Koolagudi, S.G.
    Rare Sound Event Detection (RSED) focuses on identifying infrequent but significant sound events in audio recordings with precise onset and offset times, and is crucial for applications such as surveillance, healthcare, and environmental monitoring. An essential component of an RSED system is an effective time-frequency representation as the input feature, one that captures short, transient acoustic events even in noisy and complex environments. Most existing approaches rely on time-frequency representations such as the Mel spectrogram, Constant-Q Transform (CQT), and Continuous Wavelet Transform (CWT). However, these representations suffer from a resolution trade-off between time and frequency, which limits their ability to capture the fine-grained details needed to detect rare events in complex acoustic environments. To overcome this limitation, we introduce superlets, a novel time-frequency representation that offers super-resolution in both the time and frequency domains. To process the high-resolution superlet features, we also propose a Convolutional Temporal Dilated Pyramid Attention Network (TDPANet), a novel neural architecture that combines convolutional feature extraction, dilated temporal modeling, multi-scale temporal pooling, and temporal attention to improve detection accuracy. We evaluate the method on the DCASE 2017 Task 2 rare sound event dataset, which includes isolated sound events and real-world acoustic scenes. Experimental results show that the proposed method significantly outperforms state-of-the-art techniques, achieving an Error Rate (ER) of 0.15 and an F1-score of 92.3%. © The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature 2025. (A hedged sketch of a superlet transform appears after this list.)
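
For the first item, below is a minimal, hedged sketch of an MP-CQT-style front end. The abstract does not spell out how the pseudo-CQT and Mel representations are fused, so this sketch simply stacks log-scaled pseudo-CQT bins with a log-Mel spectrogram computed on the same hop grid; the hop length, bin counts, and fmin are illustrative choices, not the authors' settings.

```python
# Hedged MP-CQT-style sketch: stack log pseudo-CQT with log-Mel features.
# The fusion strategy here is an assumption, not the paper's exact method.
import numpy as np
import librosa

def mp_cqt_features(path, sr=44100, hop=512, n_mels=40,
                    n_bins=84, bins_per_octave=12, fmin=32.7):
    """Return a (frames, features) matrix of stacked pseudo-CQT + log-Mel."""
    y, sr = librosa.load(path, sr=sr, mono=True)

    # Pseudo-CQT: CQT filter bank applied to a single STFT, cheaper than a full CQT.
    pcqt = librosa.pseudo_cqt(y=y, sr=sr, hop_length=hop, fmin=fmin,
                              n_bins=n_bins, bins_per_octave=bins_per_octave)
    log_pcqt = librosa.amplitude_to_db(pcqt, ref=np.max)

    # Log-Mel spectrogram on the same hop grid so the frame counts line up.
    mel = librosa.feature.melspectrogram(y=y, sr=sr, hop_length=hop, n_mels=n_mels)
    log_mel = librosa.power_to_db(mel, ref=np.max)

    n = min(log_pcqt.shape[1], log_mel.shape[1])   # guard against off-by-one frames
    return np.vstack([log_pcqt[:, :n], log_mel[:, :n]]).T
```

The resulting (frames, features) matrix would then feed a CRNN frame classifier; the stacking along the feature axis is one plausible reading of combining the two representations.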
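For the second item, the sketch below computes a simplified multi-resolution cochleagram: an ERB-spaced gammatone filter bank, per-channel energies framed at two window lengths (20 ms and 200 ms) on a shared 10 ms hop, log-compressed and stacked. The channel count, frame sizes, and omission of the smoothed channels follow common MRCG recipes and are assumptions, not necessarily the paper's configuration.

```python
# Hedged MRCG sketch: ERB-spaced gammatone cochleagrams at two time
# resolutions, stacked along the channel axis. Parameters are illustrative.
import numpy as np
from scipy.signal import gammatone, lfilter

def erb_space(fmin, fmax, n):
    """n center frequencies equally spaced on the ERB-rate scale."""
    def erb_rate(f):
        return 21.4 * np.log10(1.0 + 0.00437 * f)
    rates = np.linspace(erb_rate(fmin), erb_rate(fmax), n)
    return (10.0 ** (rates / 21.4) - 1.0) / 0.00437

def cochleagram(y, sr, centers, win, hop):
    frames = 1 + (len(y) - win) // hop
    cg = np.empty((len(centers), frames))
    for i, fc in enumerate(centers):
        b, a = gammatone(fc, 'iir', fs=sr)        # 4th-order IIR gammatone filter
        energy = lfilter(b, a, y) ** 2            # per-channel energy envelope
        for t in range(frames):
            cg[i, t] = energy[t * hop: t * hop + win].mean()
    return np.log(cg + 1e-10)                     # log compression

def mrcg(y, sr=16000, n_channels=64):
    hop = sr // 100                               # 10 ms hop shared by both resolutions
    centers = erb_space(50.0, 0.9 * sr / 2, n_channels)
    cg1 = cochleagram(y, sr, centers, win=int(0.020 * sr), hop=hop)   # 20 ms frames
    cg2 = cochleagram(y, sr, centers, win=int(0.200 * sr), hop=hop)   # 200 ms frames
    n = min(cg1.shape[1], cg2.shape[1])
    return np.vstack([cg1[:, :n], cg2[:, :n]])    # (2 * n_channels, frames)
```

Full MRCG recipes additionally append locally smoothed copies of the fine-resolution cochleagram; those channels are omitted here for brevity.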
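For the third item, here is a minimal superlet transform following the multiplicative formulation of Moca et al. (2021): at each frequency, the magnitude responses of Morlet wavelets with increasing cycle counts are combined by a geometric mean, sharpening time and frequency resolution together. The base cycle count, order range, and envelope constant k_sd below are illustrative assumptions, not the paper's values.

```python
# Hedged superlet sketch: geometric mean of Morlet responses across orders.
import numpy as np

def morlet(fs, f, cycles, k_sd=5.0):
    """Complex Morlet wavelet with `cycles` cycles at frequency f (unit energy)."""
    sigma = cycles / (k_sd * f)                    # Gaussian envelope width in seconds
    t = np.arange(-3 * sigma, 3 * sigma, 1.0 / fs)
    w = np.exp(-t**2 / (2 * sigma**2)) * np.exp(2j * np.pi * f * t)
    return w / np.sqrt(np.sum(np.abs(w) ** 2))

def superlet(y, fs, freqs, base_cycles=3, max_order=8):
    """Return a (freqs, time) super-resolution scalogram of signal y."""
    out = np.empty((len(freqs), len(y)))
    for i, f in enumerate(freqs):
        log_acc = np.zeros(len(y))
        for o in range(1, max_order + 1):          # multiplicative superlet: c_o = o * c_1
            resp = np.abs(np.convolve(y, morlet(fs, f, o * base_cycles), mode='same'))
            log_acc += np.log(resp + 1e-12)        # accumulate logs for the geometric mean
        out[i] = np.exp(log_acc / max_order)       # geometric mean across orders
    return out

# Example usage (assumed parameters):
# S = superlet(y, fs=16000, freqs=np.linspace(60, 7800, 128))
```

A scalogram like S would then be the input feature map for a detector such as the proposed TDPANet, whose architecture is not reproduced here.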