Please use this identifier to cite or link to this item: https://idr.nitk.ac.in/jspui/handle/123456789/11309
Full metadata record
DC Field | Value | Language
dc.contributor.author | Rao, K.S. | -
dc.contributor.author | Nandi, D. | -
dc.contributor.author | Koolagudi, S.G. | -
dc.date.accessioned | 2020-03-31T08:31:06Z | -
dc.date.available | 2020-03-31T08:31:06Z | -
dc.date.issued | 2014 | -
dc.identifier.citation | International Journal of Speech Technology, 2014, Vol. 17, No. 1, pp. 65-74 | en_US
dc.identifier.uri | http://idr.nitk.ac.in/jspui/handle/123456789/11309 | -
dc.description.abstract | In this paper, autoassociative neural network (AANN) models are explored for segmenting and indexing films (movies) using audio features. A two-stage method is proposed for segmenting a film into a sequence of scenes and then indexing those scenes appropriately. In the first stage, the music and speech-plus-music segments of the film are separated, and the music segments are labelled as title or fighting scenes based on their position. In the second stage, the speech-plus-music segments are classified into normal, emotional, comedy and song scenes. Mel-frequency cepstral coefficients (MFCCs), zero-crossing rate and intensity are used as the audio features for segmentation and indexing. The proposed method is evaluated on manually segmented Hindi films. From the evaluation results, it is observed that title, fighting and song scenes are segmented and indexed without any errors; most of the errors arise in discriminating comedy scenes from normal scenes. The performance of the proposed AANN models is also compared with that of hidden Markov models, Gaussian mixture models and support vector machines. © 2013 Springer Science+Business Media New York. | en_US
dc.title | Film segmentation and indexing using autoassociative neural networks | en_US
dc.type | Article | en_US
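
The abstract outlines the standard AANN classification recipe: train one autoassociative (identity-mapping) network per scene class on frame-level audio features (MFCCs, zero-crossing rate, intensity), then label a segment by the class whose network reconstructs its frames with the lowest error. The sketch below illustrates that idea only and is not the authors' implementation; the feature settings, the network sizes (15-38-4-38-15), and the use of librosa and PyTorch are all assumptions.

    import numpy as np
    import librosa
    import torch
    import torch.nn as nn

    def audio_features(wav_path, sr=16000, n_mfcc=13):
        """Per-frame MFCCs + zero-crossing rate + RMS energy (a proxy for
        intensity), the three feature types named in the abstract."""
        y, sr = librosa.load(wav_path, sr=sr)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # (13, T)
        zcr = librosa.feature.zero_crossing_rate(y)             # (1, T)
        rms = librosa.feature.rms(y=y)                          # (1, T)
        T = min(mfcc.shape[1], zcr.shape[1], rms.shape[1])
        return np.vstack([mfcc[:, :T], zcr[:, :T], rms[:, :T]]).T  # (T, 15)

    class AANN(nn.Module):
        """Autoassociative net: input and output layers share one size,
        with a narrow bottleneck that forces a compressed representation."""
        def __init__(self, dim=15, hidden=38, bottleneck=4):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(dim, hidden), nn.Tanh(),
                nn.Linear(hidden, bottleneck), nn.Tanh(),
                nn.Linear(bottleneck, hidden), nn.Tanh(),
                nn.Linear(hidden, dim),
            )

        def forward(self, x):
            return self.net(x)

    def train_aann(model, frames, epochs=50, lr=1e-3):
        """Train the network to reproduce its own input (identity mapping)."""
        x = torch.tensor(frames, dtype=torch.float32)
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        loss_fn = nn.MSELoss()
        for _ in range(epochs):
            opt.zero_grad()
            loss = loss_fn(model(x), x)
            loss.backward()
            opt.step()
        return model

    def classify(frames, class_models):
        """Pick the class whose AANN gives the lowest mean reconstruction
        error on the segment's frames."""
        x = torch.tensor(frames, dtype=torch.float32)
        with torch.no_grad():
            errs = {label: float(((m(x) - x) ** 2).mean())
                    for label, m in class_models.items()}
        return min(errs, key=errs.get)

    # Hypothetical usage, one AANN per second-stage scene category:
    # models = {lbl: train_aann(AANN(), audio_features(f"train_{lbl}.wav"))
    #           for lbl in ["normal", "emotional", "comedy", "song"]}
    # print(classify(audio_features("segment.wav"), models))

Training one model per class and comparing reconstruction errors is what makes the network discriminative despite being trained only autoassociatively; no joint classifier is needed.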
Appears in Collections: 1. Journal Articles

Files in This Item:
There are no files associated with this item.

