Browsing by Author "Jamadagni, H."
Now showing 1 - 2 of 2
Item: HARQ Soft Combining Using Bidirectional LSTMs (Springer Science and Business Media Deutschland GmbH, 2025)
Ubaradka, A.S.; Jamadagni, H.; Sunil, R.; Chandavarkar, B.R.

Hybrid Automatic Repeat reQuest (HARQ) is used in modern wireless data communication to integrate Automatic Repeat reQuest (ARQ) with high-rate Forward Error Correction, enhancing the reliability of data transmission. Unlike traditional ARQ, where an error-ridden frame is discarded upon reception, HARQ temporarily stores the erroneous frame in a buffer. When the same frame is re-transmitted, the two frames are combined to generate a new frame with the aim of minimizing errors. This is HARQ with Soft Combining. Existing methods such as Chase Combining (Type-I) and Incremental Redundancy (Type-II and Type-III) use Log-Likelihood Ratios and Maximum Ratio Combining to combine two erroneous frames. This paper instead uses a Bidirectional Long Short-Term Memory (BiLSTM) model to combine two frames corrupted by high channel noise, aiming to reduce the Bit Error Rate, and provides an approach for integrating this model into the existing HARQ structure. © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2025.

Item: UnDIVE: Generalized Underwater Video Enhancement Using Generative Priors (Institute of Electrical and Electronics Engineers Inc., 2025)
Srinath, S.; Chandrasekar, A.; Jamadagni, H.; Soundararajan, R.; Prathosh, A.P.

With the rise of marine exploration, underwater imaging has gained significant attention as a research topic. Underwater video enhancement has become crucial for real-time computer vision tasks in marine exploration. However, most existing methods focus on enhancing individual frames and neglect video temporal dynamics, leading to visually poor enhancements. Furthermore, the lack of ground-truth references limits the use of the abundant available underwater video data in many applications.

To address these issues, we propose a two-stage framework for enhancing underwater videos. The first stage uses a denoising diffusion probabilistic model to learn a generative prior from unlabeled data, capturing robust and descriptive feature representations. In the second stage, this prior is incorporated into a physics-based image formulation for spatial enhancement, while also enforcing temporal consistency between video frames. Our method enables real-time, computationally efficient processing of high-resolution underwater videos at lower resolutions, and offers effective enhancement in the presence of diverse water types. Extensive experiments on four datasets show that our approach generalizes well and outperforms existing enhancement methods. Our code is available at github.com/suhas-srinath/undive. © 2025 IEEE.
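The first abstract above contrasts the BiLSTM approach with the classical Chase Combining baseline, which sums per-bit Log-Likelihood Ratios from the two received copies of a frame. The following is a minimal illustrative sketch of that baseline only (not of the paper's BiLSTM method), assuming BPSK modulation over an AWGN channel with an arbitrary noise level chosen for demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)

# BPSK over AWGN: bit 0 -> +1, bit 1 -> -1; two noisy receptions of one frame
bits = rng.integers(0, 2, 10000)
tx = 1 - 2 * bits
sigma = 1.0  # assumed noise standard deviation (illustrative, not from the paper)
rx1 = tx + sigma * rng.normal(size=bits.size)
rx2 = tx + sigma * rng.normal(size=bits.size)

# Per-bit LLRs for BPSK/AWGN: L = 2 * y / sigma^2
llr1 = 2 * rx1 / sigma**2
llr2 = 2 * rx2 / sigma**2

# Chase combining: summing LLRs of re-transmissions (maximum ratio combining
# reduces to a plain sum when both copies see the same channel gain)
combined = llr1 + llr2

# Hard decision: negative LLR -> bit 1
ber_single = np.mean((llr1 < 0).astype(int) != bits)
ber_combined = np.mean((combined < 0).astype(int) != bits)
```

Combining the two noisy copies doubles the effective SNR, so `ber_combined` comes out well below `ber_single`; this is the error-rate gap the paper's BiLSTM combiner aims to improve on further.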
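The second abstract highlights temporal consistency between video frames as a key ingredient of the enhancement framework. The paper's actual loss is not given in the abstract; as a purely illustrative sketch, one common way to express such a constraint is to penalize flicker that the enhancer introduces beyond the motion already present in the raw input (all function and parameter names here are hypothetical):

```python
import numpy as np

def temporal_consistency_loss(enh_prev, enh_curr, raw_prev, raw_curr):
    """Illustrative temporal loss: penalize frame-to-frame changes in the
    enhanced video that exceed the changes present in the raw video."""
    raw_change = np.abs(raw_curr - raw_prev)   # motion/content change in input
    enh_change = np.abs(enh_curr - enh_prev)   # change introduced by enhancement
    # Only enhancement-induced flicker beyond the raw motion is penalized
    return float(np.mean(np.maximum(enh_change - raw_change, 0.0)))
```

Under this sketch, an enhancer that transforms both frames identically on a static scene incurs zero loss, while one that brightens consecutive frames inconsistently is penalized.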
