Faculty Publications

Permanent URI for this community: https://idr.nitk.ac.in/handle/123456789/18736

Publications by NITK Faculty

Search Results

Now showing 1 - 3 of 3
  • Item
    Opportunities and Challenges in Development of Support System for Visually Impaired: A Survey
    (Institute of Electrical and Electronics Engineers Inc., 2023) Vijetha, U.; Geetha, V.
    Over the past few years, the use of assistive technology by the visually impaired (VI) has increased significantly worldwide. These devices help the VI carry out daily tasks efficiently and boost their independence, thereby enhancing their quality of life. However, most of these technologies are expensive and unaffordable for people living in low- and middle-income countries. Advances in computer vision and deep learning have opened the door to low-cost solutions for the visually impaired. This paper investigates the potential of a smartphone app to serve as an affordable yet effective VI assistive device. We highlight recent developments in computer vision and deep learning techniques that have the potential to provide innovative solutions for the benefit of the VI community. We outline the strengths and weaknesses of different techniques and report on unresolved issues and potential future directions in the context of a support system for the visually impaired. © 2023 IEEE.
  • Item
    Optimizing Reinforcement Learning-Based Visual Navigation for Resource-Constrained Devices
    (Institute of Electrical and Electronics Engineers Inc., 2023) Vijetha, U.; Geetha, V.
    Existing work on deep reinforcement learning-based visual navigation mainly focuses on autonomous agents with ample power and compute resources. Reinforcement learning for visual navigation on resource-constrained devices, however, remains an under-explored area of research, primarily because of the challenges of processing high-dimensional visual inputs and making prompt decisions in real-time scenarios. To address these hurdles, we propose a State Abstraction Technique (SAT) that transforms high-dimensional visual inputs into a compact representation, enabling simpler reinforcement learning agents to process the information and learn effective navigation policies. The abstract representation generated by SAT also serves as a versatile intermediary that bridges the gap between simulation and reality, enhancing the transferability of learned policies across scenarios. Additionally, our reward shaping strategy uses the data provided by SAT to maintain a safe distance from obstacles, further improving the performance of navigation policies on resource-constrained devices. Our work opens up opportunities for navigation assistance and other applications in a variety of resource-constrained domains where computational efficiency is crucial for practical deployment, such as guiding miniature agents on embedded devices or aiding visually impaired individuals through smartphone-integrated solutions. Evaluation of the proposed approach in the AI2-THOR simulated environment demonstrates significant performance improvements over traditional state representations: 84.18% fewer collisions, 28.96% fewer movement instructions, and 11.3% higher rewards than the best available alternatives. Furthermore, we account for real-world challenges by incorporating noise and motion blur during training, ensuring robust performance when deployed on resource-constrained devices. © 2013 IEEE.
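The abstract above does not specify the internals of SAT or the reward shaping term, but the two ideas it names can be illustrated with a minimal sketch: compress a high-dimensional depth image into a per-sector nearest-obstacle distance, and penalize the agent's reward when that distance falls below a safety margin. The sector layout, thresholds, and penalty weight here are illustrative assumptions, not the authors' design.

```python
import numpy as np

def abstract_state(depth_image, n_sectors=5):
    # Hypothetical state abstraction: split the depth image into vertical
    # angular sectors and keep only the nearest distance in each sector,
    # turning an H x W image into an n_sectors-length vector.
    sectors = np.array_split(depth_image, n_sectors, axis=1)
    return np.array([s.min() for s in sectors])

def shaped_reward(base_reward, compact_state, safe_dist=0.5, penalty=1.0):
    # Hypothetical reward shaping: subtract a penalty proportional to how
    # far inside the safety margin the nearest obstacle is.
    nearest = compact_state.min()
    if nearest < safe_dist:
        return base_reward - penalty * (safe_dist - nearest)
    return base_reward
```

A simple RL agent would then learn from the 5-element vector rather than the raw image, which is what makes policies cheap enough to run on embedded hardware.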
  • Item
    Obs-tackle: an obstacle detection system to assist navigation of visually impaired using smartphones
    (Springer Science and Business Media Deutschland GmbH, 2024) Vijetha, U.; Geetha, V.
    As the prevalence of vision impairment continues to rise worldwide, there is an increasing need for affordable and accessible solutions that improve the daily experiences of individuals with vision impairment. The visually impaired (VI) are prone to falls and injuries because they cannot recognize dangers on the path while navigating; it is therefore crucial that they are aware of potential hazards in both familiar and unfamiliar environments. Obstacle detection plays a key role in navigation assistance solutions for VI users, and experimentation on it has surged since the introduction of autonomous navigation in automobiles, robots, and drones. Previously, auditory, laser, and depth sensors dominated obstacle detection; advances in computer vision and deep learning, however, have enabled it with simpler tools such as smartphone cameras. While previous approaches to obstacle detection using estimated depth data have been effective, they suffer from limitations such as compromised accuracy when adapted for edge devices and an inability to identify the objects in the scene. To address these limitations, we propose an indoor and outdoor obstacle detection and identification technique that combines semantic segmentation with depth estimation data. We hypothesize that this combination enhances obstacle detection and identification compared to using depth data alone. To evaluate the effectiveness of the proposed obstacle detection method, we validated it against ground-truth obstacle data derived from the DIODE and NYU Depth v2 datasets. Our experimental results demonstrate that the proposed method achieves nearly 85% accuracy in detecting nearby obstacles, with low false-positive and false-negative rates. A demonstration of the proposed system deployed as an Android app, 'Obs-tackle', is available at https://youtu.be/PSn-FEc5EQg?si=qPGB13tkYkD1kSOf.
    © 2024, The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature.
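The fusion idea in the Obs-tackle abstract, combining a depth map with per-pixel semantic labels so that nearby obstacles can also be identified by class, can be sketched as follows. The paper does not publish its fusion rule; this sketch assumes the simplest version, flagging a pixel when it is both close to the camera and not labeled as a traversable surface. The label id for "floor" and the distance threshold are illustrative assumptions.

```python
import numpy as np

def detect_obstacles(depth_map, seg_labels, floor_label=0, near_thresh=1.5):
    # Hypothetical fusion rule: a pixel is an obstacle if its estimated
    # depth (in meters) is below near_thresh AND its semantic class is
    # not the walkable floor. Returns the class ids of nearby obstacles
    # (enabling identification, not just detection) and the pixel mask.
    near = depth_map < near_thresh
    not_floor = seg_labels != floor_label
    mask = near & not_floor
    classes = set(np.unique(seg_labels[mask]).tolist())
    return classes, mask
```

In a real deployment, `depth_map` would come from a monocular depth estimator and `seg_labels` from a segmentation network running on the smartphone, with the returned class ids mapped to spoken object names for the user.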