Optimizing Reinforcement Learning-Based Visual Navigation for Resource-Constrained Devices

dc.contributor.author: Vijetha, U.
dc.contributor.author: Geetha, V.
dc.date.accessioned: 2026-02-04T12:27:04Z
dc.date.issued: 2023
dc.description.abstract: Existing work on deep reinforcement learning-based visual navigation mainly focuses on autonomous agents with ample power and compute resources. However, reinforcement learning for visual navigation on resource-constrained devices remains an under-explored area of research, primarily due to the challenges posed by processing high-dimensional visual inputs and making prompt decisions in real-time scenarios. To address these hurdles, we propose a State Abstraction Technique (SAT) designed to transform high-dimensional visual inputs into a compact representation, enabling simpler reinforcement learning agents to process the information and learn effective navigation policies. The abstract representation generated by SAT serves as a versatile intermediary that bridges the gap between simulation and reality, enhancing the transferability of learned policies across various scenarios. Additionally, our reward-shaping strategy uses the data provided by SAT to maintain a safe distance from obstacles, further improving the performance of navigation policies on resource-constrained devices. Our work opens up opportunities for navigation assistance and other applications in a variety of resource-constrained domains where computational efficiency is crucial for practical deployment, such as guiding miniature agents on embedded devices or aiding visually impaired individuals through smartphone-integrated solutions. Evaluation of the proposed approach in the AI2-THOR simulated environment demonstrates significant performance improvements over traditional state representations: the proposed method yields 84.18% fewer collisions, 28.96% fewer movement instructions, and 11.3% higher rewards compared to the best available alternatives. Furthermore, we account for real-world challenges by considering noise and motion blur during training, ensuring optimal performance during deployment on resource-constrained devices. © 2013 IEEE.
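The reward-shaping strategy described in the abstract penalizes proximity to obstacles using distance information derived from the SAT representation. A minimal sketch of that idea follows; the function name, the linear penalty form, and the threshold values are illustrative assumptions, not details taken from the paper:

```python
def shaped_reward(base_reward: float, obstacle_distance: float,
                  safe_distance: float = 0.5, penalty_scale: float = 1.0) -> float:
    """Penalize the agent when the nearest obstacle (as estimated from the
    compact SAT state) is closer than a safe margin.

    All parameter values here are hypothetical placeholders.
    """
    if obstacle_distance < safe_distance:
        # Linear penalty that grows as the agent approaches the obstacle.
        return base_reward - penalty_scale * (safe_distance - obstacle_distance)
    return base_reward

# Example: small per-step cost, obstacle 0.2 m away -> extra penalty applied
r = shaped_reward(-0.01, 0.2)
```

With the obstacle inside the safe margin, the shaped reward drops below the base step reward, steering the learned policy away from collisions.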
dc.identifier.citation: IEEE Access, 2023, 11, pp. 125648-125663
dc.identifier.uri: https://doi.org/10.1109/ACCESS.2023.3323801
dc.identifier.uri: https://idr.nitk.ac.in/handle/123456789/22097
dc.publisher: Institute of Electrical and Electronics Engineers Inc.
dc.subject: Abstracting
dc.subject: Autonomous agents
dc.subject: Computational efficiency
dc.subject: Constrained optimization
dc.subject: Deep learning
dc.subject: Job analysis
dc.subject: Navigation
dc.subject: Reinforcement learning
dc.subject: Adaptation models
dc.subject: Collision detection and avoidance
dc.subject: Collisions avoidance
dc.subject: Reinforcement learnings
dc.subject: Resource management
dc.subject: Resource-constrained setting
dc.subject: Reward shaping
dc.subject: Sim2real transferability
dc.subject: State abstraction
dc.subject: Task analysis
dc.subject: Visual Navigation
dc.subject: Semantics
dc.title: Optimizing Reinforcement Learning-Based Visual Navigation for Resource-Constrained Devices
