Conference Papers
Permanent URI for this collection: https://idr.nitk.ac.in/handle/123456789/28506
Search Results: 2 items
Item: Product review based on optimized facial expression detection (Institute of Electrical and Electronics Engineers Inc., 2017)
Chaugule, V.; Abhishek, D.; Vijayakumar, A.; Ramteke, P.B.; Koolagudi, S.G.
This paper proposes a method for reviewing public acceptance of products, by brand, through analysis of the facial expressions of customers intending to buy the product in a supermarket or hypermarket. In such a setting, facial expression recognition plays a significant role in product review. Facial expression detection is performed by extracting feature points using a modified Harris algorithm, which reduces the time complexity of the original Harris feature-extraction algorithm. The time complexity of the proposed algorithm is compared with that of existing algorithms; by reducing the cost of corner-point detection, it proves significantly faster while remaining sufficiently accurate for the intended application. © 2016 IEEE.

Item: Sentiment extraction from naturalistic video (Elsevier B.V., 2018)
Radhakrishnan, V.; Joseph, C.; Chandrasekaran, K.
Sentiment analysis on video is a largely unexplored field of research in which the emotion and sentiment of the speaker are extracted by processing the frames, audio, and text obtained from the video. In recent times, sentiment analysis from naturalistic audio has become an emerging field of research. This is typically done by performing automatic speech recognition on the audio, followed by extracting the sentiment exhibited by the speaker. Techniques for extracting sentiment from text, on the other hand, are well developed, and tech giants have already optimized these methods to process large volumes of customer reviews, feedback, and reactions. In this paper, a new model for sentiment analysis from audio is proposed: a hybrid of a Keyword Spotting System (KWS) and a Maximum Entropy (ME) classifier.
The model aims to outperform conventional classifiers and to provide a single integrated system for audio and text processing. In addition, a web application for dynamic processing of YouTube videos is described. The WebApp provides an index-based result for each phrase detected in the video. Often, the emotion of a video's viewer corresponds to its content; it is therefore useful to map these emotions onto the video's text transcript and assign them a suitable weight when predicting the sentiment the speaker exhibits. This paper describes such an application, developed to analyze facial expressions using the Affdex API. Using the combined statistics from the three aforementioned components, a robust and portable system for emotion detection is obtained that provides accurate predictions and can be deployed on any modern system with minimal configuration changes. © 2018 The Authors. Published by Elsevier B.V.
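The first record builds on Harris corner detection for feature-point extraction. The paper's modified algorithm is not reproduced in the abstract, but the baseline Harris response it optimizes can be sketched in plain NumPy (the image, kernel sizes, and `k` value below are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def harris_response(img, k=0.04):
    # Image gradients via central differences (a Sobel filter is the usual choice)
    Iy, Ix = np.gradient(img.astype(float))

    def box(a):
        # 3x3 box smoothing of the structure-tensor products
        out = np.zeros_like(a)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                out += np.roll(np.roll(a, dy, axis=0), dx, axis=1)
        return out / 9.0

    Sxx, Syy, Sxy = box(Ix * Ix), box(Iy * Iy), box(Ix * Iy)
    det = Sxx * Syy - Sxy ** 2          # determinant of the structure tensor
    trace = Sxx + Syy                   # trace of the structure tensor
    return det - k * trace ** 2         # Harris corner response R

# Synthetic test image: a bright square on a dark background has four corners
img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0
R = harris_response(img)
y, x = np.unravel_index(np.argmax(R), R.shape)
# The peak response lands on one of the square's corners, not its flat interior or edges
print((y, x))
```

The response `R` is large and positive only where the structure tensor has two strong eigenvalues, i.e. at corners; edges give a negative response and flat regions give zero, which is what makes thresholding `R` a usable feature-point detector.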
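The second record's Maximum Entropy classifier is, in its binary form, equivalent to logistic regression over text features. A minimal sketch of that component, assuming a toy bag-of-words vocabulary and made-up training phrases (none of this data comes from the paper):

```python
import numpy as np

# Hypothetical vocabulary and tiny labeled corpus (1 = positive, 0 = negative)
vocab = ["good", "great", "bad", "awful", "movie"]
docs = [("good great movie", 1), ("great good", 1),
        ("bad movie", 0), ("awful bad", 0)]

def featurize(text):
    # Bag-of-words count vector over the fixed vocabulary
    counts = np.zeros(len(vocab))
    for word in text.split():
        if word in vocab:
            counts[vocab.index(word)] += 1
    return counts

X = np.array([featurize(t) for t, _ in docs])
y = np.array([label for _, label in docs])

# Binary MaxEnt model trained by gradient ascent on the log-likelihood
w, b = np.zeros(len(vocab)), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # P(positive | doc)
    w += 0.1 * (X.T @ (y - p))               # log-likelihood gradient wrt weights
    b += 0.1 * np.sum(y - p)                 # ... and wrt the bias

def predict(text):
    p = 1.0 / (1.0 + np.exp(-(featurize(text) @ w + b)))
    return "positive" if p > 0.5 else "negative"

print(predict("great movie"))
```

In the paper's hybrid, the keyword-spotting stage would supply the detected phrases that feed a classifier of this kind; here the design choice worth noting is that MaxEnt makes no independence assumption between features, unlike Naive Bayes.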
