Faculty Publications

Permanent URI for this community: https://idr.nitk.ac.in/handle/123456789/18736

Publications by NITK Faculty

Search Results

Now showing 1 - 2 of 2
  • Item
    Phoneme boundary detection from speech: A rule based approach
    (Elsevier B.V., 2019) Ramteke, P.B.; Koolagudi, S.G.
    In this paper, a novel approach is proposed for the automatic segmentation of a speech signal into phonemes. In a clearly spoken word, phoneme boundaries can be characterized by changes observed in the speech waveform. To obtain phoneme boundaries, signal-level properties of the waveform, i.e., the changes that occur during the transition from one phoneme to the next, are explored. The problem of phoneme-level segmentation is addressed from two aspects: (1) segmentation of phonemes between voiced and unvoiced portions, and (2) segmentation of phonemes within voiced and unvoiced regions. Pitch and the zero-frequency filtered signal are used to locate the regions of change from voiced to unvoiced and vice versa. Phoneme boundaries within voiced and unvoiced regions are approximated using properties of the power spectrum of the correlation of adjacent frames of the signal. A finite set of rules is proposed based on the variations observed in the power spectrum during phoneme transitions. The segmentation results of both approaches are combined to obtain the final phoneme boundaries. Three databases, namely the TIMIT Corpus, the IIIT Hyderabad Marathi database, and the IIIT Hyderabad Hindi database (IIIT-H Indic Speech Databases), are used to test the proposed approach; accuracies of 95.40%, 96.87%, and 96.12%, respectively, are achieved within a tolerance of 10 ms. The proposed approach is observed to give precise phoneme boundaries. © 2019 Elsevier B.V.
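    The first aspect above — separating voiced from unvoiced portions — can be illustrated with a minimal sketch. The paper uses pitch and the zero-frequency filtered signal for this step; the sketch below instead uses the zero-crossing rate (a simpler, commonly used voicing cue) on a synthetic signal, so the function names, frame sizes, and threshold are illustrative assumptions, not the authors' method.

```python
import numpy as np

def frame_signal(x, frame_len, hop):
    """Split a 1-D signal into overlapping frames."""
    n_frames = 1 + (len(x) - frame_len) // hop
    return np.stack([x[i * hop : i * hop + frame_len] for i in range(n_frames)])

def voiced_unvoiced_boundaries(x, fs, frame_ms=25, hop_ms=10, zcr_thresh=0.15):
    """Label frames voiced/unvoiced by zero-crossing rate (an
    illustrative stand-in for the paper's pitch/ZFF cues) and
    return the sample positions where the label changes."""
    frame_len = int(fs * frame_ms / 1000)
    hop = int(fs * hop_ms / 1000)
    frames = frame_signal(x, frame_len, hop)
    # Zero-crossing rate per frame: fraction of adjacent sign changes.
    zcr = np.mean(np.abs(np.diff(np.sign(frames), axis=1)) > 0, axis=1)
    voiced = zcr < zcr_thresh          # voiced speech has few zero crossings
    change = np.flatnonzero(np.diff(voiced.astype(int))) + 1
    return change * hop                # boundary positions in samples

# Synthetic test: 0.5 s of a 120 Hz "voiced" tone followed by 0.5 s of noise.
fs = 16000
rng = np.random.default_rng(0)
t = np.arange(fs // 2) / fs
signal = np.concatenate([np.sin(2 * np.pi * 120 * t),
                         0.3 * rng.standard_normal(fs // 2)])
bounds = voiced_unvoiced_boundaries(signal, fs)
print(bounds)  # a single boundary near sample 8000 (the 0.5 s mark)
```

    In the paper's full pipeline, boundaries found this way are then refined within each voiced or unvoiced region using the power-spectrum rules described above.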
  • Item
    Segmentation and characterization of acoustic event spectrograms using singular value decomposition
    (Elsevier Ltd, 2019) Mulimani, M.; Koolagudi, S.G.
    Traditional frame-based speech features such as Mel-frequency cepstral coefficients (MFCCs) were developed specifically for speech/speaker recognition tasks. Speech differs from general acoustic events in its phonetic structure, so frame-based speech features may not be suitable for Acoustic Event Classification (AEC). In this paper, a novel method is proposed for extracting robust, event-specific features from the spectrogram using a left singular vector for AEC. It consists of two main stages: segmentation and characterization of acoustic event spectrograms. In the first stage, the symmetric Laplacian matrix of an acoustic event spectrogram is decomposed into singular values and vectors. The reliable region (spectral shape) of the acoustic event is then segmented from the spectrogram using a left singular vector: the prominent values of the left singular vector, selected with the proposed threshold, automatically delimit the reliable region of the event. In the second stage, the segmented region of the spectrogram is used as a feature vector for AEC. The characteristics of the singular-vector values belonging to the reliable (event) and unreliable (non-event) regions of the spectrogram are determined. To evaluate the proposed approach, different categories of 'home' acoustic events from the Freiburg-106 dataset are considered. The results show significantly improved performance in both acoustic event segmentation and classification. The singular vector effectively segments the reliable region of the acoustic event from the spectrogram for a Support Vector Machine (SVM) based AEC system. The proposed AEC system is robust to noise and achieves higher recognition rates in both clean and noisy conditions than traditional speech-feature-based systems. © 2018 Elsevier Ltd
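    The core idea — thresholding a left singular vector to pick out the event rows of a spectrogram — can be sketched as follows. This simplified version applies the SVD directly to a synthetic magnitude spectrogram (skipping the paper's symmetric Laplacian construction and its specific threshold), so the mean-based threshold and all names here are illustrative assumptions.

```python
import numpy as np

def segment_event_region(spec, thresh_scale=1.0):
    """Select the dominant (event) frequency rows of a magnitude
    spectrogram via the first left singular vector of its SVD."""
    U, s, Vt = np.linalg.svd(spec, full_matrices=False)
    u1 = np.abs(U[:, 0])               # first left singular vector (sign-normalized)
    thresh = thresh_scale * u1.mean()  # illustrative mean-based threshold
    mask = u1 > thresh                 # rows with prominent energy = "reliable" region
    return spec[mask], mask

# Synthetic spectrogram: 64 frequency bins x 100 frames, with an "event"
# concentrated in bins 20-29 on top of weak background noise.
rng = np.random.default_rng(1)
spec = 0.05 * rng.random((64, 100))
spec[20:30] += 1.0
event, mask = segment_event_region(spec)
print(mask.sum(), event.shape)  # 10 event rows segmented from the spectrogram
```

    The segmented rows (`event`) would then serve as the feature vector fed to the SVM classifier in the second stage.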