Faculty Publications
Permanent URI for this community: https://idr.nitk.ac.in/handle/123456789/18736
Publications by NITK Faculty
5 results
Search Results
Item Voice activity detection from the breathing pattern of the speaker (Institute of Electrical and Electronics Engineers Inc., 2018) Ramakrishnan, A.G.; Krishnan, G.; Srivathsan, S.
In this paper, we propose a method to perform voice activity detection using only the breathing signal of a speaker. Human breathing and speech production go hand in hand, and normal respiration and respiration during speech have different profiles: the former is generally symmetric, whereas respiration during speech is asymmetric. Impedance pneumography provides a mechanism to capture the chest expansions and contractions due to breathing. We recorded the breathing signal along with the speech audio of 44 subjects while they were speaking and while they were quiet. We classified breathing cycles into two classes, speech and normal, by applying different classifiers to the cycle-synchronous discrete cosine transform coefficients of the breathing signal. The best accuracy, 96.4%, is obtained with the k-nearest neighbor classifier. From the classified breathing cycles, we determine the intervals during which a subject is quiet and those during which the subject is speaking; applying the corresponding time frames to the simultaneously recorded audio achieves good accuracy in voice activity detection. Compared to the earlier reported time resolution of 30 s, we obtain a decision for every breathing cycle, which works out to an average resolution of about 3 s. © 2017 IEEE.

Item Phoneme boundary detection from speech: A rule based approach (Elsevier B.V., 2019) Ramteke, P.B.; Koolagudi, S.G.
In this paper, a novel approach is proposed for the automatic segmentation of a speech signal into phonemes. In a well-spoken word, phonemes can be characterized by the changes observed in the speech waveform. To obtain phoneme boundaries, the signal-level properties of the waveform, i.e., the changes it undergoes during the transition from one phoneme to the next, are explored.
The problem of phoneme-level segmentation is addressed from two aspects: (1) segmentation of phoneme boundaries between voiced and unvoiced portions, and (2) segmentation of phoneme boundaries within voiced and unvoiced regions. Pitch and the zero-frequency filtered signal are used to locate the regions of change from voiced to unvoiced and vice versa. Phoneme boundaries within voiced and unvoiced regions are approximated using properties of the power spectrum of the correlation of adjacent frames of the signal, and a finite set of rules is proposed based on the variations observed in the power spectrum during phoneme transitions. The results of the two approaches are combined to obtain the final phoneme boundaries. Three databases, namely the TIMIT corpus, the IIIT Hyderabad Marathi database, and the IIIT Hyderabad Hindi database (IIIT-H Indic Speech Databases), are used to test the proposed approach; accuracies of 95.40%, 96.87%, and 96.12%, respectively, are achieved within a tolerance of 10 ms. The proposed approach is observed to give precise phoneme boundaries. © 2019 Elsevier B.V.

Item Segmentation and characterization of acoustic event spectrograms using singular value decomposition (Elsevier Ltd, 2019) Mulimani, M.; Koolagudi, S.G.
Traditional frame-based speech features such as Mel-frequency cepstral coefficients (MFCCs) were developed specifically for speech and speaker recognition tasks. Speech differs from general acoustic events in its phonetic structure, so frame-based speech features may not be suitable for Acoustic Event Classification (AEC). In this paper, a novel method is proposed for extracting robust, event-specific features from the spectrogram using a left singular vector. It consists of two main stages: segmentation and characterization of acoustic event spectrograms. In the first stage, the symmetric Laplacian matrix of an acoustic event spectrogram is decomposed into singular values and vectors.
Then, the reliable region (spectral shape) of the acoustic event is segmented from the spectrogram using a left singular vector: the prominent values of the vector, selected with the proposed threshold, automatically delimit the reliable region. In the second stage, the segmented region of the spectrogram is used as a feature vector for AEC, and the characteristics of the singular-vector values belonging to reliable (event) and unreliable (non-event) regions of the spectrogram are determined. To evaluate the proposed approach, different categories of ‘home’ acoustic events are considered from the Freiburg-106 dataset. The results show significantly improved performance in both acoustic event segmentation and classification: the singular vector effectively segments the reliable region of the acoustic event from the spectrogram for a Support Vector Machine (SVM) based AEC system. The proposed AEC system is robust to noise and achieves higher recognition rates in clean and noisy conditions than traditional speech-feature-based systems. © 2018 Elsevier Ltd

Item Automatic speaker profiling from short duration speech data (Elsevier B.V., 2020) Kalluri, S.B.; Vijayasenan, D.; Ganapathy, S.
Many paralinguistic applications of speech demand the extraction of information about speaker characteristics from as little speech data as possible. In this work, we estimate multiple physical parameters of a speaker from short durations of speech in a multilingual setting. We explore different feature streams for age and body-build estimation, derived from the speech spectrum at different resolutions, namely the short-term log-mel spectrogram, formant features, and harmonic features of the speech. The statistics of these features over a speech recording are used to learn a support vector regression model for speaker age and body-build estimation.
Experiments on the TIMIT dataset show that each of the individual feature streams achieves results that outperform previously published results in height and age estimation. Furthermore, the estimation errors from the different feature streams are complementary, so combining their estimates further improves the results. From short audio snippets, the combined system achieves a Mean Absolute Error (MAE) of 5.2 cm and 4.8 cm in height estimation for male and female speakers, respectively; in age estimation, the MAE is 5.2 years and 5.6 years for male and female speakers, respectively. We also extend the estimation to other body-build parameters, namely shoulder width, waist size, and weight, along with height, on a dataset we collected for speaker profiling. A duration analysis of the proposed scheme shows that state-of-the-art results can be achieved using only around 1–2 s of speech data. To the best of our knowledge, this is the first attempt to use a common set of features for estimating different physical traits of a speaker. © 2020 Elsevier B.V.

Item Contribution of frequency compressed temporal fine structure cues to the speech recognition in noise: An implication in cochlear implant signal processing (Elsevier Ltd, 2022) Poluboina, V.; Pulikala, A.; Pitchai Muthu, A.N.
The study investigated the effect of proportionally frequency-compressed encoding of temporal fine structure (TFS) information on speech perception in noise, using vocoder simulations of cochlear implant signal processing. A pitch-synchronous overlap-add (PSOLA) algorithm was proposed for downward frequency shifting of the TFS.
Speech recognition scores (SRS) were measured at −10 dB, 0 dB, and +10 dB SNR for eight signal-processing conditions: a sinewave vocoder without TFS (NO-TFS); four unshifted TFS conditions (full-band TFS, and TFS up to 2000, 1000, and 600 Hz); and three PSOLA conditions in which TFS up to 2000, 1000, and 600 Hz was shifted down to 1000, 500, and 300 Hz, respectively. The original envelope was unchanged across conditions. Because SRS at +10 dB and −10 dB SNR reached ceiling and floor, respectively, in most conditions, SRS at 0 dB SNR was compared across conditions. The results showed that SRS was highest with full-band TFS and lowest in the NO-TFS condition, and that SRS for 600 Hz TFS shifted to 300 Hz through PSOLA was higher than in the NO-TFS condition. The findings suggest that encoding TFS with proportional frequency compression yields better speech perception in noise than NO-TFS. An important observation of the current study is that speech recognition was better than with the sinewave vocoder for all TFS conditions, including the frequency-compressed 600 Hz TFS. © 2021 Elsevier Ltd
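The first item above classifies individual breathing cycles by feeding their cycle-synchronous DCT coefficients to a k-nearest-neighbor classifier. The sketch below illustrates that pipeline on synthetic data only; the symmetric/asymmetric cycle model, the 10-coefficient feature size, and k = 5 are illustrative assumptions, not the authors' actual recordings or settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def dct_ii(x: np.ndarray, n_coeffs: int) -> np.ndarray:
    """First n_coeffs terms of the type-II DCT, computed from its definition."""
    N = len(x)
    k = np.arange(n_coeffs)[:, None]
    n = np.arange(N)[None, :]
    return np.cos(np.pi * (n + 0.5) * k / N) @ x

def make_cycle(speech: bool, length: int = 100) -> np.ndarray:
    """Synthetic breathing cycle: roughly symmetric for quiet breathing,
    skewed (earlier peak, longer exhale) during speech."""
    t = np.linspace(0.0, 1.0, length)
    peak = 0.7 if speech else 0.5          # where the cycle peaks (asymmetry)
    rise = np.sin(0.5 * np.pi * t / peak)
    fall = np.sin(0.5 * np.pi * (1 - t) / (1 - peak))
    return np.where(t < peak, rise, fall) + 0.02 * rng.standard_normal(length)

def knn_predict(X_train, y_train, x, k=5):
    """Plain k-nearest-neighbor vote using Euclidean distance."""
    dists = np.linalg.norm(X_train - x, axis=1)
    nearest = y_train[np.argsort(dists)[:k]]
    return np.bincount(nearest).argmax()

# Labelled set of cycle-synchronous DCT feature vectors (0 = normal, 1 = speech).
labels = np.array([0] * 50 + [1] * 50)
X = np.vstack([dct_ii(make_cycle(speech=bool(lab)), 10) for lab in labels])

# Hold out 30 cycles and score the classifier on them.
idx = rng.permutation(len(labels))
train_idx, test_idx = idx[:70], idx[70:]
preds = np.array([knn_predict(X[train_idx], labels[train_idx], X[i])
                  for i in test_idx])
acc = (preds == labels[test_idx]).mean()
print(f"cycle-classification accuracy: {acc:.2f}")
```

On this toy data the symmetric quiet cycles and skewed speech cycles separate cleanly in the low-order DCT coefficients, which is why even a plain distance-based vote suffices; real impedance-pneumography cycles would first need detection and length normalization.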

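The third item segments the event region of a spectrogram by thresholding a left singular vector. A minimal sketch of the principle on a toy spectrogram follows; note that it applies the SVD to the spectrogram directly and uses a simple mean threshold, whereas the paper works with a symmetric Laplacian matrix and its own proposed threshold, so this is an illustration of the idea rather than the published method.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy spectrogram: 64 frequency bins x 80 frames of low-level noise,
# with an "acoustic event" occupying bins 20-35 during frames 10-59.
spec = 0.05 * rng.random((64, 80))
spec[20:36, 10:60] += 1.0

# Leading left singular vector of the spectrogram (rank-1 analysis).
U, s, Vt = np.linalg.svd(spec, full_matrices=False)
u1 = np.abs(U[:, 0])            # the sign of a singular vector is arbitrary

# Simple threshold (an assumption): keep frequency rows whose weight in u1
# exceeds the mean weight; these rows form the "reliable" event region.
mask = u1 > u1.mean()
event_rows = np.flatnonzero(mask)
print(event_rows.min(), event_rows.max())
```

Because the event block dominates the spectrogram's energy, the leading left singular vector concentrates its large-magnitude entries exactly on the event's frequency rows, so the thresholded mask recovers the bin range of the event.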