Browsing by Author "Bhuvan, M.S."
Now showing 1 - 4 of 4
Item: Algorithmic approach for strategic cell tower placement (2015)
Kashyap, R.; Bhuvan, M.S.; Chamarti, S.; Bhat, P.; Jothish, M.; Annappa, K.
The increasing number of cell phone users and the use of cell phones in remote areas have pushed network service providers to extend their coverage to all places. The cost of placing a cell tower depends on its height and location, and since towers are expensive, they must be placed strategically to minimize cost. This research aims to find a simple, implementable algorithm that effectively determines strategic positions for cell towers. Given a satellite image and population density, and topographical information obtained from GIS (Geographic Information Systems), potential tower locations can be determined. Applying the proposed three-stage algorithm, only the indispensable and optimal locations are chosen from the many potential tower locations. The algorithm also finds the optimal tower height at each chosen location. The proposal thus provides cost-effective tower placement, specifying optimal positions and heights to cover any area and population. © 2014 IEEE.

Item: Detection and analysis model for grammatical facial expressions in sign language (2016)
Bhuvan, M.S.; Rao, D.V.; Jain, S.; Ashwin, T.S.; Ram Mohana Reddy, Guddeti; Kulgod, S.P.
The proposed research explores a relatively new area, detecting expressions through facial points in sign language, to enhance computer interaction with the deaf and hard of hearing. The research focuses on facial points collected from Kinect as the basis for expression detection, as opposed to the numerous gesture-based studies of sign language. This makes the approach deployable on smartphones, where facial points are easier to capture than hand gestures. Exhaustive experiments are carried out with ten different machine learning algorithms for detecting nine types of expression, each modeled as a separate binary classification problem, under both user-dependent and user-independent scenarios. The optimal classifier for each expression outperforms current state-of-the-art techniques, with an ROC area greater than 0.95 for every expression. The user-independent model's performance is comparable to that of the user-dependent model and is therefore recommended, as it is easier and more efficient to deploy in practical applications. Finally, the importance of each facial point in detecting each type of expression has been mined, which can be instrumental for future research and for applications that use facial points as a basis for decision making. © 2016 IEEE.

Item: Detection and analysis model for grammatical facial expressions in sign language (Institute of Electrical and Electronics Engineers Inc., 2016)
Bhuvan, M.S.; Rao, D.V.; Jain, S.; Ashwin, T.S.; Guddeti, G.R.; Kulgod, S.P.
The proposed research explores a relatively new area, detecting expressions through facial points in sign language, to enhance computer interaction with the deaf and hard of hearing. The research focuses on facial points collected from Kinect as the basis for expression detection, as opposed to the numerous gesture-based studies of sign language. This makes the approach deployable on smartphones, where facial points are easier to capture than hand gestures. Exhaustive experiments are carried out with ten different machine learning algorithms for detecting nine types of expression, each modeled as a separate binary classification problem, under both user-dependent and user-independent scenarios. The optimal classifier for each expression outperforms current state-of-the-art techniques, with an ROC area greater than 0.95 for every expression. The user-independent model's performance is comparable to that of the user-dependent model and is therefore recommended, as it is easier and more efficient to deploy in practical applications. Finally, the importance of each facial point in detecting each type of expression has been mined, which can be instrumental for future research and for applications that use facial points as a basis for decision making. © 2016 IEEE.

Item: Semantic sentiment analysis using context specific grammar (2015)
Bhuvan, M.S.; Rao, V.D.; Jain, S.; Ashwin, T.S.; Ram Mohana Reddy, Guddeti
The growing number of e-commerce and social networking sites produces large amounts of data, such as reviews of products and restaurants. A keen observation reveals that text gathered from any social review site is specific to a context and subjective in nature, admitting varied perceptions of sentiment. The novel idea is to define a context-specific grammar as the semantics for a particular domain. Our research aims to develop a scalable model in which features obtained from matching semantic patterns predict the sentiment polarity of movie reviews and provide a sentiment score for each review. The proposed model is intended to be flexible, so it can be applied to any domain by redefining the semantics for that domain. Many other models achieve accuracies greater than 80% using various methods, but one study suggests that a 70%-accurate program is as good as human raters, since humans themselves perceive the sentiment of a movie review differently: it is a subjective summary of a movie. Our model may give lower accuracy, but it takes a cognitive approach, trying to capture these varied perceptions by learning from a combination of positive and negative grammars. Analyzing results from various experiments, we find that Logistic Regression with SGD on Apache Spark performs best, with an accuracy of 64.12%, while remaining highly scalable. High dependency on the grammars is a limitation of the model; accuracy can be improved by varying the quality and quantity of the grammars. © 2015 IEEE.
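The first abstract above does not describe its three-stage algorithm in detail, but the stated goal — choosing only indispensable (site, height) pairs that cover all demand at minimum cost — can be framed as a weighted set-cover problem. The sketch below is purely illustrative and is not the paper's algorithm: the candidate sites, height options, coverage sets, and linear cost model are all hypothetical.

```python
# Illustrative greedy sketch (not the paper's three-stage algorithm):
# pick (site, height) pairs that cover all demand points at minimum total
# cost, where cost grows with height and taller towers cover more points.

def tower_cost(base_cost, height):
    """Hypothetical cost model: cost rises linearly with tower height."""
    return base_cost + 2 * height

def greedy_placement(demand_points, candidates):
    """candidates: list of (name, base_cost, {height: covered_point_set}).
    Greedily pick the (site, height) with the best cost-per-new-point ratio."""
    uncovered = set(demand_points)
    chosen, total = [], 0
    while uncovered:
        best = None
        for name, base, options in candidates:
            for height, covered in options.items():
                gain = len(uncovered & covered)
                if gain == 0:
                    continue
                cost = tower_cost(base, height)
                if best is None or cost / gain < best[0]:
                    best = (cost / gain, name, height, covered, cost)
        if best is None:          # some points cannot be covered at all
            break
        _, name, height, covered, cost = best
        chosen.append((name, height))
        total += cost
        uncovered -= covered
    return chosen, total, uncovered

# Toy instance: 5 demand points, 3 candidate sites with two height options each.
points = {1, 2, 3, 4, 5}
cands = [
    ("A", 10, {20: {1, 2}, 40: {1, 2, 3}}),
    ("B", 12, {20: {3, 4}, 40: {3, 4, 5}}),
    ("C", 8,  {20: {5},    40: {4, 5}}),
]
towers, cost, missed = greedy_placement(points, cands)
print(towers, cost, missed)  # [('A', 20), ('B', 20), ('C', 20)] 150 set()
```

The greedy ratio rule is a standard approximation for weighted set cover; an exact stage could replace it where candidate sets are small.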
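The sign-language papers evaluate one binary classifier per expression by ROC area. As a self-contained illustration of that metric, the function below computes ROC AUC via the rank (Mann-Whitney) formulation — the probability that a random positive frame scores above a random negative one. The labels and scores are toy values standing in for a hypothetical "wh-question" detector, not the papers' Kinect data.

```python
# ROC AUC via the Mann-Whitney rank formulation: fraction of
# (positive, negative) pairs the classifier ranks correctly.

def roc_auc(labels, scores):
    """labels: 1 = expression present, 0 = absent; scores: classifier outputs."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one example of each class")
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5      # ties count as half a correct ranking
    return wins / (len(pos) * len(neg))

# Toy scores for a hypothetical "wh-question" expression on eight frames.
y_true  = [1, 1, 1, 0, 0, 0, 1, 0]
y_score = [0.9, 0.8, 0.7, 0.65, 0.3, 0.2, 0.6, 0.5]
auc = roc_auc(y_true, y_score)
print(auc)  # 15 of 16 positive/negative pairs ranked correctly -> 0.9375
```

An AUC above 0.95, as reported for each expression, would mean almost every such pair is ranked correctly regardless of the decision threshold.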
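The sentiment paper feeds features from matched context-specific grammar patterns into logistic regression trained with SGD (on Apache Spark in the paper). The sketch below shows that pipeline shape in plain Python on toy data: the regex "grammars", feature layout, and learning-rate settings are all invented for illustration and are far simpler than the authors' grammars.

```python
import math
import re

# Hypothetical movie-domain "grammar" patterns: counts of positive and
# negative matches become the feature vector (the paper's grammars are richer).
POSITIVE = [r"\bbrilliant\b", r"\bmust see\b", r"\bloved\b"]
NEGATIVE = [r"\bboring\b", r"\bwaste of time\b", r"\bflat\b"]

def features(text):
    t = text.lower()
    pos = sum(len(re.findall(p, t)) for p in POSITIVE)
    neg = sum(len(re.findall(p, t)) for p in NEGATIVE)
    return [1.0, float(pos), float(neg)]  # bias + pattern-match counts

def train_sgd(data, lr=0.5, epochs=200):
    """Logistic regression fitted by stochastic gradient descent."""
    w = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for text, y in data:
            x = features(text)
            z = sum(wi * xi for wi, xi in zip(w, x))
            p = 1.0 / (1.0 + math.exp(-z))       # predicted P(positive)
            for i in range(len(w)):
                w[i] += lr * (y - p) * x[i]      # gradient step per example
    return w

def predict(w, text):
    z = sum(wi * xi for wi, xi in zip(w, features(text)))
    return 1 if z >= 0 else 0                    # 1 = positive sentiment

reviews = [
    ("a brilliant film, loved every minute", 1),
    ("a must see, simply brilliant", 1),
    ("boring and flat throughout", 0),
    ("a waste of time, boring plot", 0),
]
w = train_sgd(reviews)
print(predict(w, "loved it, brilliant"))            # 1
print(predict(w, "boring, flat, a waste of time"))  # 0
```

The model's stated limitation is visible here: a review matching none of the grammars yields an all-zero pattern vector, so prediction falls back to the bias term alone.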
