Faculty Publications
Permanent URI for this community: https://idr.nitk.ac.in/handle/123456789/18736
Publications by NITK Faculty
Item: Role of intensity of emotions for effective personalized video recommendation: A reinforcement learning approach (Springer Verlag, 2018)
Authors: Tripathi, A.; Manasa, D.G.; Rakshitha, K.; Ashwin, T.S.; Reddy, G.
Abstract: The development of artificially intelligent agents for video recommendation systems has been an active research area over the past decade. In this paper, we present a novel hybrid approach (combining collaborative and content-based filtering) to create an agent that targets the intensity of the emotional content present in a video for recommendation. Since the cognitive preferences of a user in the real world are always in a dynamic state, tracking user behavior in real time, together with users' general cognitive preferences toward different emotions, is a key parameter for recommendation. The proposed system monitors the user's interactions with the recommended video through its user interface and a web camera, learning the criteria for decision-making in real time through reinforcement learning. To evaluate the proposed system, we created our own UI, collected videos from YouTube, and applied Q-learning to train our system to adapt effectively to user preferences. © Springer Nature Singapore Pte Ltd. 2018

Item: A reinforcement learning and recurrent neural network based dynamic user modeling system (Institute of Electrical and Electronics Engineers Inc., 2018)
Authors: Tripathi, A.; Ashwin, T.S.; Guddeti, R.M.
Abstract: With the exponential growth in areas of machine intelligence, the world has witnessed promising solutions to personalized content recommendation. The ability of interactive learning agents to take optimal decisions in dynamic environments has been well conceptualized and proven by Reinforcement Learning (RL). The learning characteristics of Deep Bidirectional Recurrent Neural Networks (DBRNN) in both positive and negative time directions have shown exceptional performance in generative models for sequential data in supervised learning tasks. In this paper, we harness the potential of these two techniques to create personalized video recommendations through emotional intelligence, presenting a novel context-aware collaborative filtering approach in which the intensity of a user's spontaneous non-verbal emotional response to a recommended video is captured through system interactions and facial expression analysis, driving decision-making and video corpus evolution with real-time data streams. We account for a user's dynamic nature in the formulation of optimal policies by framing an RL scenario with an off-policy (Q-learning) algorithm for temporal-difference learning, which is used to train the DBRNN to learn contextual patterns and generate new video sequences for recommendation. Evaluation of our system with real users over one month shows that our approach outperforms state-of-the-art methods and models a user's emotional preferences well, with stable convergence. © 2018 IEEE.
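Both abstracts rest on the same core mechanism: an off-policy temporal-difference (Q-learning) agent whose reward is the intensity of the viewer's emotional response. The sketch below illustrates that formulation only in outline; the emotion states, video categories, and simulated reward are hypothetical stand-ins, not the authors' actual state space, corpus, or facial-expression pipeline.

```python
import random
from collections import defaultdict

# Hypothetical sketch of tabular Q-learning for emotion-aware video
# recommendation. States are coarse emotion labels (as might be inferred
# from facial-expression analysis); actions are candidate video
# categories; the reward stands in for measured emotional intensity.
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2
EMOTIONS = ["happy", "sad", "surprised", "neutral"]
CATEGORIES = ["comedy", "drama", "thriller", "documentary"]

Q = defaultdict(float)  # Q[(state, action)] -> estimated action value

def choose_category(state):
    """Epsilon-greedy selection over candidate video categories."""
    if random.random() < EPSILON:
        return random.choice(CATEGORIES)
    return max(CATEGORIES, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """Off-policy temporal-difference (Q-learning) update."""
    best_next = max(Q[(next_state, a)] for a in CATEGORIES)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

# Simulated interaction loop; in the real system the reward would come
# from UI interactions and web-camera facial-expression analysis.
random.seed(0)  # for reproducibility of the sketch
state = "neutral"
for _ in range(1000):
    action = choose_category(state)
    # Toy reward: this simulated user responds most intensely to comedy.
    reward = random.uniform(0.8, 1.0) if action == "comedy" else random.uniform(0.0, 0.3)
    next_state = random.choice(EMOTIONS)
    update(state, action, reward, next_state)
    state = next_state
```

After enough interactions, the learned Q-values rank the consistently rewarding category highest, which is the adaptation-to-preference behavior both papers evaluate at a much larger scale.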
