Faculty Publications
Permanent URI for this community: https://idr.nitk.ac.in/handle/123456789/18736
Publications by NITK Faculty
Search Results
4 results
Item: Smart Irrigation System Using Cloud and Internet of Things (Springer, 2019). Koduru, K.; Padala, V.G.D.P.R.; Padala, P.
Intensive utilization of water resources in industry and agriculture, together with groundwater consumption by humans for various purposes, has degraded water levels. With the advancement of technology, a focus on effective utilization of water resources and simplified irrigation across different agricultural farms is required. This paper presents a framework based on cloud and the Internet of Things for implementing a smart irrigation system. Based on the defined framework, a use case for an automated smart irrigation system is developed, and a competent mechanism is defined for effective utilization of the excess water generated from showers to increase groundwater levels. The use case provides flexibility to farmers for monitoring the farms in real time using the farmer's cockpit. Here, heterogeneous devices are firmly integrated to enable smart irrigation and to monitor the system in real time. The use case actuation and automation are carried out based on certain imposed constraints, so that the system responds to the inputs and outputs generated by the various devices installed in it. (An illustrative actuation sketch appears after this listing.) © 2019, Springer Nature Singapore Pte Ltd.

Item: Performance evaluation of deep learning frameworks on computer vision problems (Institute of Electrical and Electronics Engineers Inc., 2019). Nara, M.; Mukesh, B.R.; Padala, P.; Kinnal, B.
Deep Learning (DL) applications have skyrocketed in recent years and are being applied in various domains. There has been a tremendous surge in the development of DL frameworks to make implementation easier. In this paper, we make a comparative study of GPU-accelerated deep learning software frameworks such as Torch and TensorFlow (with the Keras API). We benchmark the performance of these frameworks by implementing three different neural networks, each designed for a popular computer vision problem (MNIST, CIFAR-10, Fashion-MNIST). We performed this experiment in both CPU and GPU (Nvidia GeForce GTX 960M) settings. The performance metrics used are training time, evaluation time, and accuracy. This paper aims to act as a guide to selecting the most suitable framework for a particular problem. Of special interest is the performance lost due to the use of an API such as Keras, along with a comparison of performance on a user-defined neural network versus a standard network. Our interest also lies in how the frameworks perform when subjected to networks of different sizes. (A minimal timing sketch appears after this listing.) © 2019 IEEE.

Item: AMMDAS: Multi-modular generative masks processing architecture with adaptive wide field-of-view modeling strategy (Institute of Electrical and Electronics Engineers Inc., 2020). Desanamukula, V.S.; Chilukuri, P.K.; Padala, P.; Padala, P.; Pvgd, P.R.
The use of transportation systems is inevitable; any assistance module that can catalyze the flow involved in transportation systems while improving the reliability of the processes involved is a boon for day-to-day human life. This paper introduces a novel, cost-effective, and highly responsive post-active driving assistance system, the "Adaptive-Mask-Modelling Driving Assistance System", with an intuitive wide field-of-view modeling architecture.
The proposed system is a vision-based approach that processes a panoramic front view (stitched from temporally synchronous left and right stereo camera feeds) and a simple monocular rear view to generate robust and reliable proximity triggers along with co-relative navigation suggestions. The system generates robust object and adaptive field-of-view masks using FRCNN+ResNet-101_FPN and DSED neural networks, which are later processed and mutually analyzed at the respective stages to trigger proximity alerts and frame reliable navigation suggestions. The proposed DSED network is an encoder-decoder convolutional neural network that estimates the lane-offset parameters responsible for adaptive modeling of the field-of-view range (157°-210°) during live inference. The proposed stages, deep neural networks, and implemented algorithms and modules are state-of-the-art and achieved outstanding performance with minimal loss values (L_{p,t}, L_?, L_Total) during benchmarking analysis on our custom-built, KITTI, MS-COCO, Pascal-VOC, and Make3D datasets. The proposed assistance system is tested on our custom-built dataset and multiple public datasets to generalize its reliability and robustness under multiple wild conditions, input traffic scenarios, and locations. (An illustrative encoder-decoder sketch appears after this listing.) © 2020 Institute of Electrical and Electronics Engineers Inc. All rights reserved.

Item: l,r-Stitch Unit: Encoder-Decoder-CNN Based Image-Mosaicing Mechanism for Stitching Non-Homogeneous Image Sequences (Institute of Electrical and Electronics Engineers Inc., 2021). Chilukuri, P.K.; Padala, P.; Padala, P.; Desanamukula, V.S.; Pvgd, P.R.
Image stitching (or mosaicing) is an active research topic with numerous use cases in the computer vision, AR/VR, and computer graphics domains, but maintaining homogeneity among the input image sequences during the stitching/mosaicing process is a primary limitation and major disadvantage. To tackle these limitations, this article introduces a robust and reliable image-stitching methodology (l,r-Stitch Unit), which takes multiple non-homogeneous image sequences as input to generate a reliable, panoramically stitched wide view as the final output. The l,r-Stitch Unit consists of pre-processing and post-processing sub-modules and an l,r-PanoED network, where each sub-module is a robust ensemble of several deep-learning and computer-vision image-handling techniques. The article also introduces a novel convolutional encoder-decoder deep neural network (l,r-PanoED network) with a unique split-encoding-network methodology to stitch non-coherent left and right stereo input image pairs. The encoder network of the proposed l,r-PanoED extracts semantically rich deep feature maps from the input to stitch/map them into a wide panoramic domain; the feature-extraction and feature-mapping operations are performed simultaneously in the l,r-PanoED's encoder network based on the split-encoding-network methodology. The decoder network of l,r-PanoED adaptively reconstructs the output panoramic view from the encoder network's bottleneck feature maps. The proposed l,r-Stitch Unit has been rigorously benchmarked against alternative image-stitching methodologies on our custom-built traffic dataset and several other public datasets.
Multiple evaluation metrics (SSIM, PSNR, MSE, L_{α,β,γ}, FM-rate, average latency time) and wild conditions (rotational/color/intensity variances, noise, etc.) were considered during the benchmarking analysis; based on the results, the proposed method outperformed the other image-stitching methodologies and proved effective even on wild, non-homogeneous inputs. (An illustrative metric-evaluation sketch appears after this listing.) © 2013 IEEE.
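
The following minimal sketch (Python) relates to the first item above, Smart Irrigation System Using Cloud and Internet of Things. It only illustrates the kind of constraint-based actuation the abstract describes; the sensor names, thresholds, and the shower-diversion rule are assumptions made for illustration, not the authors' implementation.

import json

# Assumed actuation constraints; the abstract does not give concrete values.
SOIL_MOISTURE_THRESHOLD = 30.0  # percent
TANK_LEVEL_THRESHOLD = 20.0     # percent

def read_sensors():
    # Placeholder for the readings field devices would push to the cloud.
    return {"soil_moisture": 25.4, "tank_level": 78.0, "raining": False}

def decide_irrigation(readings):
    # Open the valve only when the soil is dry and the tank has water;
    # during showers, divert the excess water toward groundwater recharge
    # instead of irrigating, as the abstract's mechanism suggests.
    if readings["raining"]:
        return {"valve": "closed", "divert_to_recharge": True}
    dry = readings["soil_moisture"] < SOIL_MOISTURE_THRESHOLD
    has_water = readings["tank_level"] > TANK_LEVEL_THRESHOLD
    return {"valve": "open" if dry and has_water else "closed",
            "divert_to_recharge": False}

if __name__ == "__main__":
    action = decide_irrigation(read_sensors())
    print(json.dumps(action))  # in the real system, this would be an IoT/cloud message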
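
The following minimal sketch (Python) relates to the second item above, Performance evaluation of deep learning frameworks on computer vision problems. It shows one way to record the training time, evaluation time, and accuracy metrics named in the abstract, assuming TensorFlow with the Keras API on MNIST; the model, epoch count, and batch size are illustrative, not the authors' configuration.

import time
import tensorflow as tf

def benchmark_mnist():
    # Load and normalize MNIST (one of the three problems named in the abstract).
    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
    x_train = x_train[..., None].astype("float32") / 255.0
    x_test = x_test[..., None].astype("float32") / 255.0

    # A small illustrative CNN, not the paper's network.
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

    # Training time.
    t0 = time.perf_counter()
    model.fit(x_train, y_train, epochs=3, batch_size=128, verbose=0)
    train_time = time.perf_counter() - t0

    # Evaluation time and accuracy.
    t0 = time.perf_counter()
    _, accuracy = model.evaluate(x_test, y_test, verbose=0)
    eval_time = time.perf_counter() - t0

    print(f"training time: {train_time:.1f}s, "
          f"evaluation time: {eval_time:.1f}s, test accuracy: {accuracy:.4f}")

if __name__ == "__main__":
    benchmark_mnist()

Running the same script once with the GPU hidden (for example via the CUDA_VISIBLE_DEVICES environment variable) and once with it visible gives the kind of CPU-versus-GPU comparison the paper reports.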
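
The following minimal sketch (Python) relates to the third item above, AMMDAS. The abstract describes the DSED network only as an encoder-decoder CNN that estimates lane-offset parameters, so the input resolution, layer sizes, and two-parameter output below are assumptions for illustration, not the published architecture.

import tensorflow as tf

def build_lane_offset_net(input_shape=(128, 256, 3), n_offsets=2):
    inputs = tf.keras.Input(shape=input_shape)

    # Encoder: progressively downsample the road-scene image.
    x = tf.keras.layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(inputs)
    x = tf.keras.layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(x)
    bottleneck = tf.keras.layers.Conv2D(128, 3, strides=2, padding="same", activation="relu")(x)

    # Decoder: upsample the bottleneck features back toward input resolution.
    x = tf.keras.layers.Conv2DTranspose(64, 3, strides=2, padding="same", activation="relu")(bottleneck)
    x = tf.keras.layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(x)

    # Regression head: pool the decoded features and predict lane-offset values,
    # which a surrounding system could use to adapt its field-of-view mask.
    pooled = tf.keras.layers.GlobalAveragePooling2D()(x)
    offsets = tf.keras.layers.Dense(n_offsets, name="lane_offsets")(pooled)

    model = tf.keras.Model(inputs, offsets)
    model.compile(optimizer="adam", loss="mse")
    return model

if __name__ == "__main__":
    build_lane_offset_net().summary()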
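
The following minimal sketch (Python) relates to the fourth item above, l,r-Stitch Unit. It computes the SSIM, PSNR, and MSE metrics listed in the abstract using scikit-image; the file names are placeholders, the paper-specific metrics (L_{α,β,γ}, FM-rate, latency) are omitted, and this is not the authors' benchmarking harness.

from skimage import io
from skimage.metrics import (mean_squared_error,
                             peak_signal_noise_ratio,
                             structural_similarity)

def evaluate_stitch(stitched_path, reference_path):
    # Compare a stitched panorama against a reference view of the same size.
    stitched = io.imread(stitched_path)
    reference = io.imread(reference_path)
    assert stitched.shape == reference.shape, "images must have identical shapes"

    return {
        "SSIM": structural_similarity(reference, stitched, channel_axis=-1),  # color images assumed
        "PSNR": peak_signal_noise_ratio(reference, stitched),
        "MSE": mean_squared_error(reference, stitched),
    }

if __name__ == "__main__":
    print(evaluate_stitch("panorama_pred.png", "panorama_ref.png"))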
