Sentiment extraction from naturalistic video

Date

2018

Authors

Radhakrishnan, V.
Joseph, C.
Chandrasekaran, K.

Abstract

Sentiment analysis on video is a relatively unexplored field of research in which the emotion and sentiment of the speaker are extracted by processing the frames, audio and text obtained from the video. In recent times, sentiment analysis from naturalistic audio has become an emerging field of research. This is typically done by performing automatic speech recognition on the audio, followed by extracting the sentiment exhibited by the speaker. Techniques for extracting sentiment from text, on the other hand, are well developed, and tech giants have already optimized these methods to process large volumes of customer reviews, feedback and reactions. In this paper, a new model for sentiment analysis from audio is proposed which is a hybrid of a Keyword Spotting System (KWS) and a Maximum Entropy (ME) classifier. This model is developed with the aim of outperforming conventional classifiers and providing a single integrated system for audio and text processing. In addition, a web application for dynamic processing of YouTube videos is described. The WebApp provides an index-based result for each phrase detected in the video. Often, the emotion of the viewer of a video corresponds to its content. In this regard, it is useful to map these emotions to the text transcript of the video and assign them a suitable weight while predicting the sentiment that the speaker exhibits. This paper describes such an application, developed to analyze facial expressions using the Affdex API. Thus, using the combined statistics from all three aforementioned components, a robust and portable system for emotion detection is obtained that provides accurate predictions and can be deployed on any modern system with minimal configuration changes. © 2018 The Authors. Published by Elsevier B.V.
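The paper does not expose implementation details on this page, but the KWS/ME hybrid it names can be sketched as follows: a keyword-spotting score over a sentiment lexicon, a maximum-entropy (logistic-regression) classifier over bag-of-words features, and a weighted blend of the two. The keyword lists, learning rate, and blending weight `alpha` below are illustrative assumptions, not the authors' values.

```python
import math
from collections import Counter

# Assumed toy sentiment lexicon for the Keyword Spotting System (KWS);
# the paper's actual keyword inventory is not given on this page.
POS_KEYWORDS = {"great", "excellent", "love", "good"}
NEG_KEYWORDS = {"terrible", "bad", "hate", "awful"}

def keyword_score(tokens):
    """KWS component: +1 per positive keyword hit, -1 per negative hit."""
    return sum((t in POS_KEYWORDS) - (t in NEG_KEYWORDS) for t in tokens)

class MaxEnt:
    """Binary maximum-entropy classifier (equivalently, logistic regression)
    over bag-of-words features, trained by simple gradient ascent."""

    def __init__(self, lr=0.5, epochs=200):
        self.w = Counter()   # per-token weights (0.0 for unseen tokens)
        self.b = 0.0         # bias term
        self.lr, self.epochs = lr, epochs

    def _score(self, tokens):
        return self.b + sum(self.w[t] for t in tokens)

    def prob_pos(self, tokens):
        """P(positive | tokens) under the logistic model."""
        return 1.0 / (1.0 + math.exp(-self._score(tokens)))

    def fit(self, data):
        """data: iterable of (tokens, label) with label 1=positive, 0=negative."""
        for _ in range(self.epochs):
            for tokens, label in data:
                err = label - self.prob_pos(tokens)
                self.b += self.lr * err
                for t in tokens:
                    self.w[t] += self.lr * err

def hybrid_sentiment(tokens, model, alpha=0.5):
    """Hybrid score in [0, 1]: blend the squashed KWS score with the
    MaxEnt probability; alpha is an assumed mixing weight."""
    kws = 1.0 / (1.0 + math.exp(-keyword_score(tokens)))
    return alpha * kws + (1 - alpha) * model.prob_pos(tokens)
```

For instance, after training the MaxEnt model on a handful of labeled phrases, `hybrid_sentiment("love this great video".split(), model)` should exceed 0.5, while a phrase dominated by negative keywords should fall below it. Blending the lexicon and the learned classifier lets the keyword channel cover tokens the classifier has never seen, which is the usual motivation for this kind of hybrid.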

Citation

Procedia Computer Science, 2018, Vol. 143, pp. 626-634
