Conference Papers
Permanent URI for this collection: https://idr.nitk.ac.in/handle/123456789/28506
21 results
Search Results
Item: Ember: A smartphone web browser interface for the blind (Association for Computing Machinery, 2014)
Jassi, I.S.; Ruchika, S.; Pulakhandam, S.; Mukherjee, S.; Ashwin, T.S.; Guddeti, G.R.M.
Ember is a smartphone web browser interface designed exclusively for blind users. The Ember keypad enables blind users to type using their knowledge of Braille. The interface is intuitive to the blind user because the layout consists of a few large targets and remains consistent throughout the application. The verbal command option provides another dimension of user-interface interaction. Twelve of thirteen users found Ember's verbal command navigation easier than using a traditional web browser, and ten of thirteen found Ember's tactile method of navigation faster than a traditional web browser. The learning rate for both the tactile and verbal command methods was faster than that associated with a traditional web browser layout. Finally, all five users tested found the Ember keypad significantly faster to use than the QWERTY keypad. © 2014 ACM.

Item: Vision-based laser-controlled keyboard system for the disabled (Association for Computing Machinery, 2014)
Ahsan, H.; Prabhu, A.; Deeksha, S.D.; Domanal, S.G.; Ashwin, T.S.; Guddeti, G.R.M.
In this paper, we propose a novel design for a vision-based unistroke keyboard system for the disabled. The keyboard layout takes commonly used character patterns into account, which makes typing convenient for the user. In addition, Shift functionality is provided to accommodate a larger set of characters. A webcam is positioned to monitor the keyboard, and characters are identified from the laser pointer, which the user controls with minor head movements. Experimental results demonstrate that the design achieves very promising results, establishing a baseline for such models in this domain.
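As a rough illustration of the detection step described above, the brightest pixel in a (grayscale) webcam frame can stand in for the laser dot and be mapped to a key region. This is a minimal sketch, not the paper's method: the 3x4 grid and its key labels are invented for illustration, and a real system would use color thresholding on the camera feed rather than a single bright pixel.

```python
import numpy as np

# Hypothetical 3x4 key grid; the labels are illustrative only,
# not the unistroke layout described in the paper.
KEYS = [["A", "B", "C", "D"],
        ["E", "F", "G", "H"],
        ["I", "J", "K", "L"]]

def locate_laser(frame):
    """Return (row, col) of the brightest pixel, a stand-in for the laser dot."""
    return np.unravel_index(np.argmax(frame), frame.shape)

def key_at(frame, rows=3, cols=4):
    """Map the detected dot position to the key region it falls in."""
    y, x = locate_laser(frame)
    h, w = frame.shape
    r = min(y * rows // h, rows - 1)
    c = min(x * cols // w, cols - 1)
    return KEYS[r][c]

# Simulated 120x160 frame with one bright spot over the middle-right region.
frame = np.zeros((120, 160), dtype=np.uint8)
frame[60, 100] = 255
print(key_at(frame))  # -> G
```

A real deployment would run this per frame and debounce, registering a keypress only when the dot dwells in one region for several consecutive frames.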
© 2014 ACM.

Item: An Android GPS-based navigation application for the blind (Association for Computing Machinery, 2014)
Nisha, K.K.; Pruthvi, H.R.; Hadimani, S.N.; Guddeti, G.R.M.; Ashwin, T.S.; Domanal, S.G.
Visual impairment makes a person dependent on others for daily tasks and chores. Through the application proposed in this paper, we aim to eliminate this dependency when a visually impaired person travels from one place to another. The main goal is to provide information about the current location and the distance and time required to reach the destination, as well as the directions and turns to be taken en route, through continuous audio feedback in a language the user understands. © is held by the author/owner(s).

Item: A novel bio-inspired load balancing of virtual machines in cloud environment (Institute of Electrical and Electronics Engineers Inc., 2015)
Ashwin, T.S.; Domanal, S.G.; Guddeti, G.R.M.
Load balancing plays an important role in managing the software and hardware components of a cloud. A load-balancing algorithm should be efficient both in allocating requested resources and in using them, so that over- or under-utilization of resources does not occur in the cloud environment. In the present work, all available virtual machines are allocated in an efficient manner by a Particle Swarm Optimization (PSO) load-balancing algorithm. Further, we used the CloudSim simulator to compare and analyze the performance of our algorithm. Simulation results demonstrate that the proposed algorithm distributes the load uniformly across all available virtual machines, i.e., without under- or over-utilization, and that the average response time is better than that of existing scheduling algorithms.
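The abstract does not specify how the continuous PSO update is discretized for VM allocation, so the following is only a sketch of one common discretization: each particle is a task-to-VM assignment vector, fitness is the makespan of the most loaded VM, and the velocity update is replaced by probabilistic pulls toward the personal and global bests. The task lengths and VM speeds are invented for illustration.

```python
import random

def makespan(assign, task_len, vm_speed):
    """Completion time of the most loaded VM under a task->VM assignment."""
    load = [0.0] * len(vm_speed)
    for task, vm in enumerate(assign):
        load[vm] += task_len[task] / vm_speed[vm]
    return max(load)

def pso_schedule(task_len, vm_speed, n_particles=20, iters=100, seed=42):
    """Discrete PSO sketch: minimize makespan over assignment vectors."""
    rng = random.Random(seed)
    n_tasks, n_vms = len(task_len), len(vm_speed)
    fitness = lambda p: makespan(p, task_len, vm_speed)
    swarm = [[rng.randrange(n_vms) for _ in range(n_tasks)]
             for _ in range(n_particles)]
    pbest = [p[:] for p in swarm]            # personal bests
    gbest = min(pbest, key=fitness)[:]       # global best
    for _ in range(iters):
        for i, p in enumerate(swarm):
            for t in range(n_tasks):
                r = rng.random()
                if r < 0.4:
                    p[t] = pbest[i][t]           # pull toward personal best
                elif r < 0.7:
                    p[t] = gbest[t]              # pull toward global best
                elif r < 0.8:
                    p[t] = rng.randrange(n_vms)  # random exploration
            if fitness(p) < fitness(pbest[i]):
                pbest[i] = p[:]
                if fitness(p) < fitness(gbest):
                    gbest = p[:]
    return gbest

tasks = [4.0, 2.0, 7.0, 1.0, 5.0, 3.0]   # task lengths (arbitrary units)
vms = [1.0, 1.0, 2.0]                    # relative VM speeds
best = pso_schedule(tasks, vms)
print(best, makespan(best, tasks, vms))
```

The paper's evaluation ran inside CloudSim; here the simulator is reduced to the `makespan` function so the search itself is visible.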
© 2014 IEEE.

Item: Semantic sentiment analysis using context-specific grammar (Institute of Electrical and Electronics Engineers Inc., 2015)
Bhuvan, B.M.; Rao, V.D.; Jain, S.; Ashwin, T.S.; Guddeti, G.
The growing number of e-commerce and social networking sites is producing large amounts of data on reviews of products, restaurants, and so on. A keen observation reveals that text gathered from any social review site is specific to a context and subjective in nature, promoting varied perceptions of sentiment. The novel idea is to define a context-specific grammar as the semantics for a particular domain. Our research aims to develop a scalable model in which features obtained from matching semantic patterns are used to predict the sentiment polarity of movie reviews and to provide a sentiment score for each review. The proposed model is intended to be flexible, so that it can be applied to any domain by redefining the semantics specific to that domain. Many other models achieve accuracies greater than 80% using various methods, but a study suggests that a 70%-accurate program is as good as humans, who themselves perceive the sentiment of a movie review differently, since a review is a subjective summary of a movie. Our model may be less accurate, but it takes a cognitive approach, trying to capture these varied perceptions by learning from a combination of positive and negative grammars. Analyzing results from various experiments, we find that Logistic Regression with SGD on Apache Spark performs best, with an accuracy of 64.12%, while being highly scalable. Heavy dependence on the grammars is a limitation of the model; it can be improved by defining grammars of different quality and quantity.
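The pipeline described, counts of semantic-pattern matches fed to logistic regression trained with SGD, can be sketched in miniature. This is an assumption-laden toy: the paper's actual grammars are not given, so simple regexes stand in for them, the training reviews are invented, and plain Python replaces Spark's distributed SGD.

```python
import math
import random
import re

# Illustrative stand-ins for the context-specific grammars; the paper's
# real semantic patterns are not published in the abstract.
POS_PATTERNS = [r"\bgreat acting\b", r"\bmust watch\b", r"\bbrilliant\b"]
NEG_PATTERNS = [r"\bpoor plot\b", r"\bwaste of time\b", r"\bdull\b"]

def features(review):
    """Feature vector: [positive-pattern hits, negative-pattern hits, bias]."""
    text = review.lower()
    return [sum(len(re.findall(p, text)) for p in POS_PATTERNS),
            sum(len(re.findall(p, text)) for p in NEG_PATTERNS),
            1.0]

def sgd_train(data, epochs=200, lr=0.1, seed=0):
    """Logistic regression trained by stochastic gradient descent."""
    rng = random.Random(seed)
    w = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        rng.shuffle(data)
        for review, label in data:
            x = features(review)
            p = 1.0 / (1.0 + math.exp(-sum(wi * xi for wi, xi in zip(w, x))))
            for i in range(len(w)):
                w[i] += lr * (label - p) * x[i]  # gradient step per example
    return w

def sentiment_score(review, w):
    """Probability that the review is positive."""
    z = sum(wi * xi for wi, xi in zip(w, features(review)))
    return 1.0 / (1.0 + math.exp(-z))

train = [("great acting and a must watch", 1), ("brilliant film", 1),
         ("poor plot, a waste of time", 0), ("dull and a poor plot", 0)]
w = sgd_train(train)
print(sentiment_score("brilliant, a must watch", w))
```

At the scale the paper targets, the same idea would be expressed with Spark MLlib so that pattern matching and SGD run over a distributed review corpus.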
© 2015 IEEE.

Item: A Novel Method for Disease Recognition and Cure Time Prediction Based on Symptoms (Institute of Electrical and Electronics Engineers Inc., 2015)
Shankar, M.; Pahadia, M.; Srivastava, D.; Ashwin, T.S.; Guddeti, G.
Healthcare is a sector where decisions usually carry very high risk and high cost; one bad choice can cost a person's life. With diseases such as swine flu on the rise, whose symptoms are quite similar to those of the common cold, it is very difficult for people to differentiate between medical conditions. We propose a novel method for recognizing diseases and predicting their cure time based on symptoms. We do this by assigning a different coefficient to each symptom of a disease and filtering the dataset using the severity score the user assigns to each symptom. Diseases are identified from a numerical value calculated in this fashion. For predicting the cure time of a disease, we use reinforcement learning. Our algorithm takes into account the similarity between the condition of the current user and that of other users who have suffered from the same disease, and uses the similarity scores as weights when predicting cure time. We also predict the current medical condition of the user relative to people who have suffered from the same disease. © 2015 IEEE.

Item: Virtual Slate: Microsoft Kinect-based text input tool to improve handwriting of people (Asia-Pacific Society for Computers in Education, 2016)
Ashwin, T.S.; Sreenivasan, K.; Rameez, M.A.; Varma, A.; Mohandoss, V.; Guddeti, G.
Text input is a mundane activity closely associated with human-computer interaction. In this paper, using the object-tracking facility of the Microsoft Kinect sensor and Tesseract for optical character recognition (OCR), we make it possible to write text by moving a finger in the air as though writing on a virtual slate.
One of the main purposes of this work is to help children improve their handwriting without somebody having to check and monitor their writing activity continuously. © 2016 Asia-Pacific Society for Computers in Education. All rights reserved.

Item: An E-Learning System with Multifacial Emotion Recognition Using Supervised Machine Learning (Institute of Electrical and Electronics Engineers Inc., 2016)
Ashwin, T.S.; Jose, J.; Raghu, G.; Guddeti, G.R.
E-learning systems based on affective computing are popularly used for emotional and behavioral analysis of users. The emotions a user expresses are recognized by detecting the user's facial expressions, and the teaching strategy is changed accordingly. Present e-learning systems mainly focus on single-user face detection. Hence, in this paper, we propose a multi-user face detection based e-learning system using a support vector machine (SVM) based supervised machine learning technique. Experimental results demonstrate that the proposed system achieves an accuracy of 89% to 100% on different datasets (LFW, FDDB, and YFD). Further, to improve the speed of emotional feature processing, we used a GPU along with the CPU, achieving a speedup factor of 2. © 2015 IEEE.

Item: Detection and analysis model for grammatical facial expressions in sign language (Institute of Electrical and Electronics Engineers Inc., 2016)
Bhuvan, M.S.; Rao, D.V.; Jain, S.; Ashwin, T.S.; Guddeti, G.R.; Kulgod, S.P.
The proposed research explores a relatively new area: detecting expressions through facial points in a sign language, to enhance computer interaction for the deaf and hard of hearing. The research focuses on facial points collected from a Kinect as the basis for expression detection, as opposed to the numerous gesture-based studies of sign language. This facilitates deployment on smartphones, since facial points are easier to capture than hand gestures.
Exhaustive experimentation is carried out with ten different machine learning algorithms for detecting nine types of expression, each modeled as a separate binary classification problem. This is done for both user-dependent and user-independent model scenarios. The optimal classifier for each expression is found to outperform current state-of-the-art techniques and has an ROC area greater than 0.95 for every expression. The user-independent model's performance is comparable to that of the user-dependent model; it is therefore recommended, as it is easier and more efficient to deploy in practical applications. Finally, the importance of each facial point in detecting each type of expression has been mined, which can be instrumental for future research and for various applications that use facial points as a basis for decision making. © 2016 IEEE.

Item: Kinect Based Real Time Gesture Recognition Tool for Air Marshallers and Traffic Policemen (Institute of Electrical and Electronics Engineers Inc., 2017)
Prakash, A.; Swathi, R.; Kumar, S.; Ashwin, T.S.; Guddeti, G.R.M.
The Microsoft Kinect, a motion-sensing input device, presents a straightforward and affordable approach to real-time user interaction. Although much research has been conducted on applying the Kinect to gaming and virtual reality environments, its relevance to real-world scenarios has not been explored much. The features provided by driver platforms such as OpenNI and the Microsoft Kinect Software Development Kit (SDK), coupled with the Kinect's motion-sensing ability, present a unique opportunity to extend the scope of the sensor. This paper proposes a system for automatically recognizing the road traffic control gestures of police officers and the marshalling commands of airport ground personnel. The system is aimed at self-learning, training, and testing for these officers, equipping them with the skills to tackle real-world situations.
Since these applications are crucial and performing accurate gestures is of utmost importance, such a system proves essential. Experimental results also demonstrate that our system is robust, effective, and suitable for real-time application. © 2016 IEEE.
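The abstract does not say how skeleton data is turned into gesture labels, so the following is a minimal rule-based sketch of one plausible step: classify a pose from the angle of the shoulder-to-hand vector, using joint coordinates of the kind the Kinect SDK's skeletal tracking provides. The coordinate frame (y up), the angle thresholds, and the gesture names are all assumptions for illustration; a real recognizer would track sequences of poses over time.

```python
import math

def angle_from_vertical(shoulder, hand):
    """Angle in degrees between the shoulder->hand vector and straight up.

    Joints are (x, y) pairs in an illustrative frame where y grows upward;
    Kinect skeletal tracking would supply 3-D joint positions instead.
    """
    dx, dy = hand[0] - shoulder[0], hand[1] - shoulder[1]
    return math.degrees(math.atan2(abs(dx), dy))

def classify_gesture(shoulder, hand):
    """Map one arm pose to a hypothetical command label via angle thresholds."""
    a = angle_from_vertical(shoulder, hand)
    if a < 30:
        return "STOP"      # arm raised overhead
    if 60 < a < 120:
        return "TURN"      # arm extended sideways
    return "UNKNOWN"

print(classify_gesture((0.0, 0.0), (0.05, 0.6)))  # arm up -> STOP
print(classify_gesture((0.0, 0.0), (0.6, 0.05)))  # arm sideways -> TURN
```

For a training tool of the kind the paper describes, the per-frame label stream would be compared against the expected command to score a trainee's gesture accuracy.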
