Title: Detection and analysis model for grammatical facial expressions in sign language
Authors: Bhuvan, M.S.
Rao, D.V.
Jain, S.
Ashwin, T.S.
Ram Mohana Reddy, Guddeti
Kulgod, S.P.
Issue Date: 2016
Citation: Proceedings - 2016 IEEE Region 10 Symposium, TENSYMP 2016, 2016, pp. 155-160
Abstract: The proposed research explores a relatively new area of expression detection through facial points in a sign language, with the aim of enhancing computer interaction with the deaf and hard of hearing. The research focuses on facial points collected from Kinect as the basis for expression detection, in contrast to the numerous gesture-based studies on sign language. This facilitates deployment in smartphone applications, as capturing facial points is more feasible than capturing hand gestures. Exhaustive experimentation is carried out with ten different machine learning algorithms for detecting nine different types of expression, each modeled as a separate binary classification problem. This is done for both user-dependent and user-independent model scenarios. The optimal classifier for each expression is found to outperform current state-of-the-art techniques, with an ROC area greater than 0.95 for each expression. The user-independent model's performance is found to be comparable to that of the user-dependent model, and is therefore recommended, as it is easier and more efficient to deploy in practical applications. Finally, the importance of each facial point in detecting each type of expression has been mined, which can be instrumental for future research and for various applications using facial points as the basis for decision making. © 2016 IEEE.
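The pipeline the abstract describes can be sketched roughly as follows: facial-point coordinates serve as features, one binary classifier is trained per expression, performance is evaluated by ROC area, and per-feature importances are mined afterwards. The dataset below is synthetic, and the feature count, model choice, and variable names are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch: one binary classifier per expression, trained on
# facial-point coordinates, scored by ROC AUC, with feature importances
# mined afterwards. All data here is synthetic for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_frames, n_points = 600, 100                   # assumed: 100 facial points per frame
X = rng.normal(size=(n_frames, n_points * 3))   # x, y, z coordinate per point
# Make the label depend on a few coordinates so the toy task is learnable;
# 1 = expression present in this frame, 0 = absent.
y = (X[:, 0] + X[:, 3] > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# ROC area for this expression's detector (the paper reports > 0.95).
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
# Mine the most informative facial-point coordinates for this expression.
top = np.argsort(clf.feature_importances_)[::-1][:5]
print(f"ROC AUC: {auc:.3f}", "top coordinate indices:", top.tolist())
```

In practice this loop would be repeated for each of the nine expressions and each of the ten candidate algorithms, keeping the classifier with the best ROC area per expression.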
Appears in Collections: 2. Conference Papers

Files in This Item:
There are no files associated with this item.

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.