Full metadata record
DC Field	Value	Language
dc.contributor.author	Sharma, R.	
dc.contributor.author	Srinivasa Murthy, Y.V.	
dc.contributor.author	Koolagudi, S.G.	
dc.identifier.citation	Advances in Intelligent Systems and Computing, 2016, Vol. 381, pp. 157-166	en_US
dc.description.abstract	In this work, an effort has been made to classify audio songs based on their music patterns, which helps to retrieve music clips according to a listener's taste. This task is useful for indexing and accessing music clips based on the listener's state. Seven categories are considered: devotional, energetic, folk, happy, pleasant, sad, and sleepy. Forty music clips per category are used for the training phase and fifteen clips per category for the testing phase. Vibrato-related features such as jitter and shimmer are computed along with the mel-frequency cepstral coefficients (MFCCs); statistical values of pitch (min, max, mean, and standard deviation) are added to the MFCCs, jitter, and shimmer, resulting in a 19-dimensional feature vector. A feedforward backpropagation neural network (BPNN) is used as the classifier due to its efficiency in mapping nonlinear relations. An average accuracy of 82% is achieved over the 105 testing clips. © Springer India 2016.	en_US
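The abstract describes a 19-dimensional feature vector built from MFCCs, jitter, shimmer, and four pitch statistics (min, max, mean, standard deviation). A minimal sketch of how such a vector could be assembled is shown below, assuming 13 MFCCs (13 + jitter + shimmer + 4 pitch statistics = 19); the exact MFCC count is not stated in the abstract, and the function name and inputs are hypothetical.

```python
import numpy as np

def build_feature_vector(mfcc_means, jitter, shimmer, pitch_track):
    """Assemble a 19-dim feature vector as described in the abstract.

    Assumes 13 MFCC coefficients, so 13 + jitter + shimmer
    + 4 pitch statistics = 19 dimensions (an inference, not
    confirmed by the abstract).
    """
    mfcc_means = np.asarray(mfcc_means, dtype=float)
    assert mfcc_means.shape == (13,), "expected 13 MFCC values"
    pitch = np.asarray(pitch_track, dtype=float)
    # Pitch statistics in the order listed in the abstract:
    # min, max, mean, standard deviation.
    pitch_stats = [pitch.min(), pitch.max(), pitch.mean(), pitch.std()]
    return np.concatenate([mfcc_means, [jitter, shimmer], pitch_stats])

# Toy usage with made-up numbers (real values would come from a
# pitch tracker and an MFCC extractor applied to each music clip).
vec = build_feature_vector(np.zeros(13), 0.01, 0.05, [200.0, 210.0, 190.0])
print(vec.shape)  # (19,)
```

In a full pipeline, one such vector per clip would be fed to the BPNN classifier mentioned in the abstract.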
dc.title	Audio songs classification based on music patterns	en_US
dc.type	Book chapter	en_US
Appears in Collections:2. Conference Papers

Files in This Item:
There are no files associated with this item.
