Conference Papers
Permanent URI for this collection: https://idr.nitk.ac.in/handle/123456789/28506
5 results
Search Results
Item Projection and interaction with ad-hoc interfaces on non-planar surfaces (IEEE Computer Society, 2013) Dere, K.S.; Guddeti, G.

Projector-based display systems have recently been used as ad-hoc interfaces for human-computer interaction, and mobile handheld projectors are becoming increasingly popular. Many human-centric user interfaces built around wearable computers are being developed. Most such systems use everyday objects for projection and interaction, but ignore the fact that these object surfaces are not planar, so the interfaces suffer from geometric distortion due to the non-planar projection surface; projection quality also suffers from radiometric distortion. Furthermore, the interaction proposed for such interfaces is bound to planar surfaces only. This paper therefore addresses geometric-distortion-free projection onto, and interaction with, such interfaces on non-planar surfaces. A Kinect is used as a depth sensor for 3D scene acquisition, the projected image is warped to the mesh obtained from the Kinect, and colored fingertip gloves are used for interaction. The system targets distortion-free projection onto everyday surfaces such as the human body, curved walls, room corners, curtains, and many other objects. © 2013 IEEE.

Item Kinect based real-time gesture spotting using HCRF (2013) Chikkanna, M.; Guddeti, G.

Sign language is an effective means of communication for deaf and mute people. This paper proposes a gesture-spotting algorithm for Indian Sign Language that acquires sensory information from the Microsoft Kinect sensor. Our framework consists of three main stages: hand tracking, feature extraction, and classification. In the first stage, hand tracking is carried out using the frames captured by the Kinect. In the second stage, Cartesian features (velocity, angle, location) and the position of the hand with respect to the body are extracted, and K-means clustering is used to extract the feature codewords for the HCRF.
In the third stage, a Hidden Conditional Random Field (HCRF) is used for classification. Experimental results show that the HCRF algorithm achieves a 95.20% recognition rate on the test data; in real time, the recognition rate reaches 93.20%. © 2013 IEEE.

Item Depth Data based Chroma Keying using Grab-cut Segmentation (Institute of Electrical and Electronics Engineers Inc., 2018) Lestari, P.; Niyas, S.; Krisnandi, D.

This research presents depth-image-based automatic object segmentation for chroma-key editing in multimedia applications. Depth data captured by devices such as the Microsoft Kinect plays a key role in this work. The proposed approach uses both color and depth data, and this hybrid segmentation produces results with clear foreground object boundaries. The system is designed exclusively for segmenting human subjects from a chroma-keyed scene, and Aggregate Channel Feature (ACF) based human detection is employed to eliminate false detections caused by other foreground objects. Because depth data in dark regions may introduce small errors in edge-pixel segmentation, the whole process is carried out as a sequence of image-processing steps. Pixels near the head region are first restored using K-means clustering, and a coarse-level segmentation of the human subjects is then obtained using Fuzzy C-means segmentation; a color-characteristic-based segmentation eliminates most background pixels from the foreground subjects. After this coarse-level segmentation, an adaptive tri-map is generated, and the final fine-level segmentation is achieved using Grab-cut segmentation, yielding foreground human subjects with accurate edge boundaries for matting chroma-keyed images or frames. Experimental results validate the segmentation quality and its suitability for error-free automated segmentation of chroma-keyed images and video.
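Both pipelines above use K-means clustering (codeword extraction for the HCRF gesture classifier, and head-region pixel restoration in the chroma-keying work). A minimal NumPy-only sketch of the codeword step follows; the feature layout, cluster count, and synthetic values are illustrative assumptions, not values from the papers:

```python
import numpy as np

def kmeans_codewords(features, k=8, iters=20, seed=0):
    """Quantize per-frame feature vectors into k discrete codewords,
    as a sequence classifier such as an HCRF expects."""
    rng = np.random.default_rng(seed)
    # initialize centers from k distinct data points
    centers = features[rng.choice(len(features), size=k, replace=False)]
    for _ in range(iters):
        # assign every feature vector to its nearest center
        dists = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # move each center to the mean of its assigned vectors
        for j in range(k):
            if np.any(labels == j):
                centers[j] = features[labels == j].mean(axis=0)
    return labels, centers

# synthetic hand-trajectory features per frame: [vx, vy, angle, x, y]
rng = np.random.default_rng(1)
feats = np.vstack([rng.normal(c, 0.1, size=(40, 5)) for c in (0.0, 1.0, 2.0)])
codes, centers = kmeans_codewords(feats, k=3)
print(codes.shape)  # one codeword per frame: (120,)
```

The resulting integer codeword sequence, rather than the raw continuous features, is what would be fed to the HCRF for training and spotting.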
© 2018 IEEE.

Item Kinect Based Suspicious Posture Recognition for Real-Time Home Security Applications (Institute of Electrical and Electronics Engineers Inc., 2018) Vikram, M.; Anantharaman, A.; Suhas, B.S.; Ashwin, T.S.; Guddeti, R.M.R.

Suspicious posture recognition is an important task with numerous applications in everyday life. We explore one such application, real-time home security, using the Microsoft Kinect depth camera. We propose a novel method in which the remote device itself detects suspicious activity; on detection, the remote device alerts the server, which in turn immediately alerts the homeowners. We show that our method works in real time, is robust to changing lighting conditions, and performs all computation on the remote device itself, making it well suited to real-time home security. © 2018 IEEE.

Item Optimized Object Detection Technique in Video Surveillance System Using Depth Images (Springer, 2020) Shahzad Alam, M.; Ashwin, T.S.; Guddeti, R.M.R.

In real-time surveillance and intrusion detection, it is difficult to rely on RGB video alone: object-detection accuracy is low under poor lighting, and if the surveillance area is completely dark, objects are not detected at all. Hence, in this paper we propose a method that increases object-detection accuracy even in low-light conditions, and we show how light intensity affects the probability of object detection in RGB, depth, and infrared images. Depth information is obtained from a Kinect sensor, and the YOLO architecture is used to detect objects in real time. We evaluated the proposed method on a real-time surveillance system and obtained very promising results on depth images captured in low light. Further, real-time object detection leaves no time for image preprocessing before detection.
So we investigated depth images as a way to improve object-detection accuracy without any image preprocessing. Experimental results demonstrate that depth images (96%) outperform RGB images (48%) and infrared images (54%) in extremely low-light conditions. © 2020, Springer Nature Singapore Pte Ltd.
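The depth-based detection described above hinges on handing Kinect depth frames to an RGB-trained detector with no extra preprocessing. A minimal sketch of that conversion step follows; the function name, maximum range, and synthetic frame values are assumptions, and the actual YOLO inference call is omitted:

```python
import numpy as np

def depth_to_detector_input(depth_mm, max_range_mm=8000):
    """Normalize a raw 16-bit Kinect depth frame into an 8-bit,
    3-channel image that an RGB-trained detector (e.g. YOLO) can
    consume directly. Pixels with no depth reading (value 0) stay black."""
    d = np.clip(depth_mm.astype(np.float32), 0, max_range_mm)
    d8 = (255.0 * d / max_range_mm).astype(np.uint8)
    return np.stack([d8, d8, d8], axis=-1)  # H x W x 3

# synthetic 480x640 depth frame: background at 4 m, an object at 1.5 m
depth = np.full((480, 640), 4000, dtype=np.uint16)
depth[200:300, 250:400] = 1500
img = depth_to_detector_input(depth)
print(img.shape, img.dtype)  # (480, 640, 3) uint8
# img would then be passed to the YOLO network in place of an RGB frame
```

Because depth values are independent of scene illumination, this detector input looks the same in daylight and in complete darkness, which is what gives depth images their advantage in the reported low-light experiments.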
